Mobile data traffic is doubling every nine months, according to Cisco Systems. By 2013, mobile traffic will hit 2 exabytes--2 million terabytes--per month.
For some carriers, the growth rate is even higher. AT&T says its network load has been growing by 4.5x per year for the last two years, in large part (I assume) because of iPhone sales. You may have read about AT&T's pledge to spend over $12 billion this year to expand its wireless and broadband networks, including new 3G spectrum with better coverage and trials of 4G service.
At the Linley Group's Tech Processor Conference this week in San Jose, Calif., we learned what effect this growth is having on equipment makers, especially the companies making the microprocessors that go into network gear.
According to that same Cisco study, the problem goes well beyond iPhones. A 3G-equipped laptop "can generate as much traffic as 450 basic-feature phones" and 15 times the traffic of an iPhone or BlackBerry.
Networks have also gotten smarter, so network processors have much more work to do. Instead of spending just hundreds or thousands of clock cycles on each packet, new functions like firewalls, intrusion detection, and antivirus scanning to keep smartphones and laptops safe can require 100,000 cycles of processing per packet.
Factoring in the growth of the network itself along with that heavier per-packet load, Michael Coward of Continuous Computing, a company that sells equipment, software, and services to the telecom market, said that network operators need a 1,200x boost in processing performance between the systems deployed in 2008 and those that will be needed in 2013.
That's a really staggering figure, but not unachievable. It happens to be pretty close to the growth rate of 3D graphics in the late 1990s, where performance per chip doubled every six months for several years in a row.
Moore's Law alone would give us about a 6x performance improvement over those five years. Coward says that mobile broadband providers will have to buy 10x as much equipment to keep up with network growth. What's left--a further 20:1 performance gain--is going to have to come from new kinds of processors, not just faster versions of the same old chips.
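The arithmetic behind that decomposition is easy to check. (The variable names below are mine, not Coward's; the 6x figure lines up roughly with Moore's Law doubling performance every two years across the five-year span.)

```python
# Decompose the 1,200x processing gain needed between 2008 and 2013
# into the three factors cited above. Figures are from the talk as
# reported; the labels are mine.

total_gain = 1200        # overall 2008 -> 2013 requirement
moores_law = 6           # ~2^(5/2): doubling every ~2 years, for 5 years
extra_equipment = 10     # operators simply buying 10x as much gear

# Whatever is left over must come from new processor architectures.
new_architectures = total_gain / (moores_law * extra_equipment)
print(new_architectures)  # -> 20.0
```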
Much of this gain has to come from shifting the processing load from simple CPU cores to custom-designed hardware function units. This is an ongoing process that started long ago. Even 10 years ago, instead of running a software program to route packets, some routers were using ASICs or FPGAs to do the same work at much higher data rates.
In the next few years, even fairly high-level functions will have to be moved to hardware accelerators. Another conference speaker mentioned a challenging task: to look for a virus in a ZIP file in an e-mail while the e-mail is traveling through the network at full speed. Such files use Base64 MIME encoding, so the virus is essentially hidden behind three levels of obfuscation: the ZIP file, the MIME encoding, and the e-mail headers. Today, we don't expect that level of virus protection. But network operators may have to provide it eventually, and it'll take some very smart accelerator logic in network processors to achieve it.
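To make the three layers concrete, here's a rough software sketch of the unwrapping involved (Python; `scan_email` and `SIGNATURE` are illustrative names of my own, not anything from the conference). The point is that an accelerator would have to do all of this at line rate:

```python
# Sketch of layered payload inspection: parse the e-mail headers,
# decode the Base64 MIME body, open the ZIP, then scan the contents.
import email
import io
import zipfile

SIGNATURE = b"EICAR-TEST"  # stand-in for a real virus signature


def scan_email(raw_message: bytes) -> bool:
    """Return True if any ZIP attachment contains the signature."""
    msg = email.message_from_bytes(raw_message)        # layer 1: e-mail headers
    for part in msg.walk():
        if part.get_content_type() != "application/zip":
            continue
        payload = part.get_payload(decode=True)        # layer 2: Base64 MIME
        with zipfile.ZipFile(io.BytesIO(payload)) as zf:  # layer 3: ZIP
            for name in zf.namelist():
                if SIGNATURE in zf.read(name):
                    return True
    return False
```

In software, each layer costs many cycles per byte (and ZIP decompression can't even start until the Base64 decode finishes), which is why doing this at multi-gigabit line rates is an accelerator problem rather than a CPU problem.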
Most of Wednesday at the conference was spent discussing individual new chips from companies such as Freescale, RMI, LSI, and Cavium. Linley Gwennap, founder of conference host Linley Group (and formerly my boss at Microprocessor Report), said that Freescale now holds more than half of the market for general-purpose processors used in networking and communication. Intel has less than 25 percent, and the rest is split among several other companies.
In 2008, these sales added up to $1.28 billion in revenue. So with all this future growth coming, it seems like a pretty healthy market.
It had better be healthy--it has a lot of work to do.