First of all, don't panic.
If Moore's Law came to an end and computers stopped getting steadily faster, plenty of companies would suffer. But an end likely would come with lots of warning, lots of measures to cushion the blow, and lots of continued development even if transistors stopped shrinking.
The hardest hit would be companies dependent on consumers replacing their electronics every few years, and tech companies such as Google, whose long-term plans hinge on faster computers, cheaper storage, and better bandwidth. And the continuing miniaturization of computers -- mainframes to minicomputers to PCs to smartphones -- might not make the leap to even smaller devices such as tiny networked sensors.
For the rest of us, there would be ripple effects. Corporate productivity gains might slow as the spread of computerization into new domains stops. You might not get your Dick Tracy watch, your mom might not get her cancer-attacking nanobots, and impoverished children might never get that supercheap mobile phone.
But even if chip progress stopped, that wouldn't mean computing progress would screech to a halt. Instead, attention would focus on new ways of getting more work out of existing computing technology.
There are several prominent examples of what happens when explosions of innovation settle down. Perhaps the best is the auto industry.
There, an early flurry of activity and experimentation eventually stabilized. Occasional ideas such as rotary engines, automatic transmissions, or fuel injection cropped up, but many of the basics remain unchanged. Even today's dramatic technology departures -- electric vehicles and self-driving vehicles -- reuse many of the same mechanical workings.
"I drive a 1964 car. I also have a 2010. There's not that much difference -- gross performance indicators like top speed and miles per gallon aren't that different. It's safer, and there are a lot of creature comforts in the interior," said Nvidia Chief Scientist Bill Dally. If Moore's Law fizzles, "We'll start to look like the auto industry."
That's not to say nothing would change -- the aforementioned e-vehicles and robot cars are now becoming reality, for example. But it would mean a more sedate pace of innovation. Technophiles could lose that sense of perpetual excitement, but everybody would get a chance to figure out how to use their electronic gizmos before they become obsolete.
"Moore's Law means that every two years you're throwing away your laptop to get a better laptop and throwing away your smartphone to get a better smartphone," said William Tunstall-Pedoe, an artificial-intelligence researcher and founder of semantic search company Evi. "If your smartphone ended up being good for another 10 years, or your laptop wouldn't be replaced for another 10 years, the amount spent on hardware and on new phones would be dramatically less."
That of course would be disastrous for electronics makers and their suppliers, who'd have to get accustomed to lower revenues and therefore lower investments in future technology. But hardware isn't the only factor in computing technology.
Software picks up the slack
Programmers would be the first in the hot seat to pick up where hardware improvements left off.
"If Moore's Law were to come to an end tomorrow, you'd still see performance improvements, but that would come from improvements to the software," Tunstall-Pedoe said. "There would be less resources spent on features and more on squeezing extra performance and capabilities."
Jon Bennett, chief technology officer of flash-storage company Violin Memory, agrees that a lot of performance is squandered today. His company helps customers open up bottlenecks in their software that become evident with today's storage speeds, he said.
"Even if [chipmakers] start to slow down, we have plenty of time catching up to what we can consume today," he said.
"You could see a 10-year software wave when that becomes the best way to get the economics moving," said Kevin Brown, chief executive of Coraid, another storage system maker. "Right now, doing just the performance work just in software is somewhat wasteful because it's pretty easy to ride that curve," where hardware improvements deliver the new computing speed.
Here's one example of how the software industry has worked: the addition of new layers of abstraction that make life easier for programmers.
The earliest computers were programmed at a very low level -- for example, instructions for the chip to put a particular number in a storage register, to add the value of another to it, to compare the result with what's in another register. Higher-level languages like C came along that were much easier for humans to understand but that had to be compiled into native instructions for the chip.
Software wouldn't be the only vein to mine for speed boosts. Chips can be designed more cleverly, too -- for example, by sacrificing backward compatibility with existing software in order to move to fresh-start designs.
"There's plenty of room left in architectural innovation," said Bob Doud, director of marketing at chip designer Tilera.
Another refinement: multidie packaging, in which several chips are sandwiched atop one another, perhaps linking a processor on one layer with memory on another. High-speed links called through-silicon vias, or TSVs, connect the layers.
The processor power panic
We've already tangled with the end of Moore's Law in one sense. Last decade, the processor industry ran into a wall: excess power consumption.
Intel's NetBurst chip architecture was supposed to carry its Pentium processors to 4GHz, but instead it carried them to inordinately high electrical power usage. That's crippling: excess power leads directly to overheating that crashes -- and can even damage -- a computer. And nowadays, with laptops reigning supreme, it means batteries don't last long.
The result of this problem has been an industry focus not just on transistor counts, but on performance per watt of power used. In the good old days, processors ran faster with each shrink, but that's not the case anymore.
"Since six years ago or so, the clock rates of microprocessors have not increased much above several gigahertz, and the power has not gone much above 100 watts," said Sam Fuller, CTO of Analog Devices.
The clock in a 2.5GHz Intel Core processor ticks 2.5 billion times each second, fetching new instructions and executing them step by step with each tick. And a hundred watts is enough to power a bright incandescent lightbulb -- which, until a few years ago, was more than enough to power a chip.
The party ended with the end of a phenomenon called Dennard scaling, named after IBM researcher Robert Dennard, who in 1974 observed that the increases in the number of transistors enabled by each next-generation manufacturing process were counterbalanced exactly by reductions in each transistor's power usage.
"It went on for more than three decades. It was really great. You shrank the size of the circuits, scaled down the voltage, and adjusted the doping," which means adding carefully chosen chemical extras to the chip's silicon substrate, Fuller said. "What you got with each generation was twice the transistors and an increase in speed and performance, with no increase in power consumed and no increase in cost."
With the end of Dennard scaling, processors have been getting more transistors, but typically not faster ones. Instead, chips have pushed in the multicore direction. Where there once was a single processing engine, dual-core chips share the work between two engines on a single slice of silicon. Mainstream personal computer chips now are quad-core models, and server chips have eight cores.
The plight of parallelism
Multicore systems can juggle multiple tasks better, and many computing chores such as displaying high-resolution graphics or encoding video get faster on multicore machines. Unfortunately, though, many tasks don't.
One persistent computer industry challenge is parallel programming -- the creation of software split into multiple pieces that execute simultaneously. It's a thorny problem. People naturally think of algorithms as a single thread of instructions executed in order. And parallel programming gets profoundly complicated when it's time to manage how different threads try to change the same data at the same time. Or when one thread stalls because it has to wait for another to finish. Or, worse, when two threads deadlock because each is waiting for the other.
Tilera has aggressively embraced the multicore philosophy by designing chips now used for network gear, media processing, and cloud computing. Doud thinks software developers have to wake up and smell the multicore coffee.
"It's virtually impossible to buy anything with fewer than two cores these days. Multicore is here to stay," Doud said. Programmers who can't handle multicore have stale skills. "That might have played in 2005, but now anybody who's not on board is going to be a dinosaur," he said.
Moving to parallel programming is tough, despite the apparent mutability of software.
"Anything that requires a software change is always harder than a hardware change," said Patrick Moorhead, analyst at Moor Insights & Strategy.
Chipmakers are working to make parallel programming less painful, he added.
"I actually think both Intel and Nvidia are preparing for that already," with huge numbers of employees focused on software, Moorhead said. Some of that work involves programming tools that hide the complexities of parallel programming. "You see a lot more resources going in to keep the utility curve of what you can do with the silicon moving up to the right."
Lean on the cloud
But a 1,000-core processor in a smartphone? It's not going to happen, even in Doud's view. Instead, the sensible approach is to offload work to servers on the cloud, the way Apple uses servers to handle Siri voice commands, he said.
The utility of the cloud will improve as networks get faster and more ubiquitous. And higher-end Internet companies have already figured out how to build massive data centers: They've partly cracked the nut of parallel programming.
The upshot is that people won't focus on chip transistor density, because the cloud will offer a more relevant measurement: "It's compute power per dollar," said Coraid's Brown.
A lot of companies haven't matched the state of the art, though, he added. If the steady hardware progress embodied by Moore's Law slowed down, ordinary companies would rush to achieve the high computing efficiencies that only elite companies today have achieved, Brown said.
"Many IT companies have no idea. They have nothing that looks like Amazon and Google. There's a lot of change left to happen there," Brown said. "One way or another evolution will drive the cost down."
Moore's Law won't be easy to maintain, but a persistent optimism pervades the industry that computing hardware will steadily improve, even after today's silicon transistor technology meets its limits.
"I'm going to bet," Brown said, "on human ingenuity."