December 9, 2005 4:00 AM PST

Power could cost more than servers, Google warns

A Google engineer has warned that if the performance per watt of today's computers doesn't improve, the electrical costs of running them could end up far greater than the initial hardware price tag.

That situation wouldn't bode well for Google, which relies on thousands of its own servers.

"If performance per watt is to remain constant over the next few years, power costs could easily overtake hardware costs, possibly by a large margin," Luiz Andre Barroso, who previously designed processors for Digital Equipment Corp., said in a September paper published in the Association for Computing Machinery's Queue. "The possibility of computer equipment power consumption spiraling out of control could have serious consequences for the overall affordability of computing, not to mention the overall health of the planet."

Barroso's view is likely to go over well at Sun Microsystems, which on Tuesday launched its Sun Fire T2000 server, whose 72-watt UltraSparc T1 "Niagara" processor performs more work per watt than rivals. Indeed, the "Piranha" processor Barroso helped design at DEC, which never made it to market, is similar in some ways to Niagara, including its use of eight processing cores on the chip.


To address the power problem, Barroso suggests the very approach Sun has taken with Niagara: processors that can simultaneously execute many instruction sequences, called threads. Typical server chips today can execute one, two or sometimes four threads, but Niagara's eight cores can execute 32 threads.
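From the software side, the idea looks like this minimal Java sketch (a hypothetical illustration, not code from Barroso's paper or from Sun): ask the JVM how many hardware threads the machine exposes, then start one worker per thread. A fully threaded Niagara box would report 32.

    // Hypothetical illustration: count the hardware threads the OS exposes
    // and start one worker per thread.
    public class HardwareThreads {
        public static void main(String[] args) throws InterruptedException {
            int hwThreads = Runtime.getRuntime().availableProcessors();
            System.out.println("Hardware threads visible to the JVM: " + hwThreads);

            Thread[] workers = new Thread[hwThreads];
            for (int i = 0; i < hwThreads; i++) {
                final int id = i;
                workers[i] = new Thread(() -> System.out.println("worker " + id + " running"));
                workers[i].start();
            }
            for (Thread w : workers) {
                w.join(); // wait for every worker to finish
            }
        }
    }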

Power has also become an issue in the years-old rivalry between Intel and Advanced Micro Devices. AMD's Opteron server processor consumes a maximum of 95 watts, while Intel's Xeon consumes between 110 watts and 165 watts. Other components also draw power, but Barroso observes that in low-end servers, the processor typically accounts for 50 percent to 60 percent of the total consumption.

Energy consumption and heat dissipation first became a common worry among chipmakers around 1999, when Transmeta burst onto the scene. Intel and others immediately latched onto the problem, but coming up with solutions while still providing customers with higher performance has proved difficult. While the rate at which power consumption increases has declined a bit, overall energy requirements still grow. As a result, a "mini-boom" has occurred for companies that specialize in heat sinks and other cooling components.

Sun loudly trumpets Niagara's relatively low power consumption, but it's not the only one to get religion. At the Intel Developer Forum in August, Intel detailed plans to rework its processor lines to focus on performance per watt.

Over the last three generations of Google's computing infrastructure, performance has nearly doubled, Barroso said. But because performance per watt remained nearly unchanged, electricity consumption has also nearly doubled.
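Spelled out, the arithmetic behind that claim (my restatement, not a line from the paper) is:

    power drawn = performance / (performance per watt)
    2x the performance at 1x the efficiency = 2x the power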

If server power consumption grows 20 percent per year, the four-year cost of a server's electricity bill will be larger than the $3,000 initial price of a typical low-end x86 server, the kind of machine that chiefly populates Google's data centers. But if power consumption grows at 50 percent per year, "power costs by the end of the decade would dwarf server prices," even if electricity stays at its current 9 cents per kilowatt-hour, Barroso said.
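The compounding is easy to sketch. In the rough Java model below, the 9-cent rate, the growth scenarios and the four-year horizon come from the article; the 300-watt baseline draw is my assumption (Barroso's figure isn't given), and cooling overhead, which can roughly double the bill, is ignored, so the dollar amounts are illustrative only.

    // Rough sketch of the compounding in Barroso's scenarios. The 300-watt
    // baseline is an assumption, and cooling overhead is ignored.
    public class PowerCost {
        static double fourYearCost(double baselineWatts, double annualGrowth) {
            final double ratePerKwh = 0.09;       // 9 cents/kWh, per the article
            final double hoursPerYear = 24 * 365; // servers run around the clock
            double watts = baselineWatts;
            double total = 0.0;
            for (int year = 0; year < 4; year++) {
                total += (watts / 1000.0) * hoursPerYear * ratePerKwh;
                watts *= 1.0 + annualGrowth;      // consumption compounds yearly
            }
            return total;
        }

        public static void main(String[] args) {
            System.out.printf("20%%/yr growth: $%.0f over four years%n", fourYearCost(300, 0.20));
            System.out.printf("50%%/yr growth: $%.0f over four years%n", fourYearCost(300, 0.50));
        }
    }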

Barroso's suggested solution is heavily multithreaded processors. His term for the approach, "chip multiprocessor" technology, or CMP, is close to the "chip multithreading" term Sun employs.

"The computing industry is ready to embrace chip multiprocessing as the mainstream solution for the desktop and server markets," Barroso argues, but acknowledges that there have been significant barriers.

For one thing, CMP requires a significantly different programming approach, in which tasks are subdivided so they can run concurrently.
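As a sketch of that subdivision (my own example, not one from the article), the Java program below splits a large sum into chunks, runs one task per chunk on a thread pool, and combines the partial results; this is the shape of workload a heavily threaded chip rewards.

    import java.util.*;
    import java.util.concurrent.*;

    // Illustrative sketch: subdivide one task (summing an array) into chunks
    // that run concurrently, then combine the partial results.
    public class ParallelSum {
        public static void main(String[] args) throws Exception {
            long[] data = new long[8_000_000];
            for (int i = 0; i < data.length; i++) data[i] = i;

            int chunks = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(chunks);
            List<Future<Long>> partials = new ArrayList<>();

            int chunkSize = data.length / chunks;
            for (int c = 0; c < chunks; c++) {
                final int from = c * chunkSize;
                final int to = (c == chunks - 1) ? data.length : from + chunkSize;
                partials.add(pool.submit(() -> {   // one Callable per chunk
                    long sum = 0;
                    for (int i = from; i < to; i++) sum += data[i];
                    return sum;
                }));
            }

            long total = 0;
            for (Future<Long> f : partials) total += f.get(); // combine results
            pool.shutdown();
            System.out.println("total = " + total);
        }
    }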

Indeed, in a separate article in the same issue of ACM Queue, Microsoft researchers Herb Sutter and James Larus wrote: "Concurrency is hard. Not only are today's languages and tools inadequate to transform applications into parallel programs, but also it is difficult to find parallelism in mainstream applications, and--worst of all--concurrency requires programmers to think in a way humans find difficult."

But the software situation is improving as programming tools gradually adapt to the technology and multithreaded processors start to catch on, Barroso said.

Another hurdle has been that much of the industry has been focused on processors designed for the high-volume personal computer market. PCs, unlike servers, haven't needed multithreading.

But CMP is only a temporary solution, he said.

"CMPs cannot solve the power-efficiency challenge alone, but can simply mitigate it for the next two or three CPU generations," Barroso said. "Fundamental circuit and architectural innovations are still needed to address the longer-term trends."

CNET News.com's Michael Kanellos contributed to this report.


35 comments

Apple switch
The performance-per-watt issue is one of the primary motivations for Apple's switch from the PowerPC chip to Intel. It's extremely important for laptops and servers, where heat management is a concern.
Posted by vchmielewski (59 comments )
The solution is simple, Google
Really it is. And maybe you could throw your googles of dollars at it:

Invent fusion.

:)
Posted by Christopher Hall (1205 comments )
Why bother with fusion when fission is easy and cheap
The smiley in your post indicates that you were, of course, joking, but I take such comments pretty seriously.

Nuclear fission is readily available, works like a champ and has a long way to go before its technical limits have even been explored, much less achieved.

Fusion is a pipe dream that consumes massive quantities of money, talent and time.

I wrote an article on the subject several months ago - Fusion versus Fission: Difficult versus Easy. If you are interested, you can find it at:

http://www.atomicinsights.com/AI_03-04-05.html
Posted by Rod Adams (74 comments )
yikes! the sky is falling ...
:)
Posted by Lolo Gecko (131 comments )
I'm shocked - a company copies the Apple line *gasp*
Gee, that never happens ;)
Posted by drhamad (117 comments )
One solution could be the 80plus.org
This year we have been offering a way to save energy simply by replacing your computer's power supply, and you get paid to do it. Visit jameco.com or 80plus.org for more information.
Posted by mendozamanny (3 comments )
definite no
If the best they can do is promise you "up to $30 in savings for the life of a PC" then it's not worth the cost/hassle of replacing a power supply. Try again.
Posted by sanenazok (3449 comments )
hmmmmmmm!
I kind of understood the passage: basically, Google is screwed unless it can reduce the overall power usage of its servers, or it converts to servers that get better gas mileage? So does anyone else see trends or importance shifting? I'm asking because I still want to have a job in tech 20 years from now...:0
Posted by pworth (1 comment )
concurrency isn't that hard
It is not true that concurrency forces people to think in ways that are unnatural.

Conventional programming languages approach concurrency in a way that is unintuitive, but that's a completely different story.

Erlang (http://www.erlang.org) has been used for over a decade in commercial products with lots of concurrency, and we have lots of evidence that the programming model is both intuitive and safe, in fact more so than object-oriented design.

We are eagerly awaiting multi-core chips, as they offer us a perfectly natural way to scale up the capacity of our products.

Ulf Wiger
Senior Software Architect
Ericsson AB
Posted by uwiger (5 comments )
Re: concurrency isn't that hard
concurrency at the software level is emulated and therefore easier

however, at the hardware level it might be a little pain in the...

again, security and concurrency don't like each other (problems with serialization, synchronization, memory cloning, etc.)
Posted by (11 comments )
You are ignoring half the issue
Yes, servers/processors have to become more energy efficient in order to keep the cost of performance reasonable, but energy itself is the other half of the equation that must be addressed. There is too little investment in renewable energy sources (especially solar power) that have the potential to be cheaper than what is currently used (not to mention safer, environmentally cleaner, and more stable than relying on sources that use fossil fuels). If Google and other companies were so worried about this issue, they'd also be looking into where their energy comes from and how to improve that side of the equation.
Posted by grant999 (7 comments )
reduction is better than reuse
For the environment, less consumption is far superior to improving energy supply. This is true in all situations.
Posted by Sonicsands (43 comments )
POWER!
GEEZ! POWER PLANTS ARE BUILT, MAINTAINED, USED, ALL OVER THE WORLD. WHAT IS WRONG WITH GOOGLE MAKING THEIR OWN POWER SUPPLY? THEY SEEM TO BE ABLE TO DO EVERYTHING ELSE. IT IS NOT LIKE THEY CANNOT AFFORD IT. THEY DON'T HAVE TO RELY ON THE POWER PLANT THAT SUPPLIES THEM. THEY CAN BUILD THEIR OWN. AND THEY CAN DO IT WISER, BETTER, MORE COST EFFICIENT, QUICKER, AND SO FAR FROM EVERYONE ELSES TECHNOLOGY, THEY WONT HAVE TO WORRY EVER ABOUT SOMEONE ELSES POWER.

ESK
Posted by Eskiegirl302 (82 comments )
HAHAHHA .. LAMO!
YOU ARE FUNNY SERIOUSLY ... THE WHOLE THING ABOUT A SEARCH COMPANY THAT DOES MAPS, BLOGS, MAIL AMONG OTHER THINGS TO BUILD THEIR OWN POWER PLANTS! THATS JUST funny (Me so stupid! Me think double caps displays bigger text)
Posted by yehweh247 (7 comments )
Upgrading Hardware
Is this what they are talking about when they say the government needs to pass laws to upgrade hardware? Are cell phones the only thing involved, or do computers benefit by the government getting the lead out?
Posted by popcornut (5 comments )
Power as money?
Years ago I speculated that any future wars would be fought over oil, because oil delivers power and power runs the world, particularly in computing. No electricity, no calculations, no "computing."
While I still believe that, I wonder if in parallel we don't develop a "new money," one based on BTUs (British Thermal Units, a method of determining the heat value of combustion sources).
If that happened, might we then start pricing computers on BTUs consumed/expended, say, per million calculations per second? Would "computing economics" then force us all to have supercomputers for personal use, to justify the power expenditure? Or would "shared computing," a rapid-growth area now called "outsourcing" or "co-location," computing on more efficient, much larger systems serving many customers at one time, become much more prevalent?
I just wonder.
Posted by bdennis410 (175 comments )
different investment vector?
Maybe it's time for google to put their money where their mouth is, and start investing in alternative power generation, non-centralized and uncontrolled by megacorps/western governments.
Posted by dyoger (2 comments )
and about those processors...
I'd also suggest Google buy the first half-million Cell processors from Sony, a la Apple purchasing large quantities of the critical components for their MP3 products, which helps guarantee high prices.

God bless capitalism!
Posted by dyoger (2 comments )
Why is this news?
Over the life of a product, the initial investment is always a diminishing percentage of TCO; the longer the life, the higher the percentage that goes to keeping the equipment running: energy and support. It is a question of value, not cost. Reducing the cost of any part of the equation is a good thing, but this article is the equivalent of the line from "Casablanca": "I'm shocked there is gambling going on here."
Posted by jmmejzz (107 comments )
The outlook is grim
If power consumption grows at 50 percent per year, in 25 years the average computer would require well over 1 megawatt of electricity. Perhaps we should start putting more R&D money into developing personal nuclear generators?

With compound growth, you can make all kinds of silly predictions sound plausible.
Posted by Chung Leong (111 comments )
Personal Nuclear Generators
Chung:

I recognize that you may have been trying to be funny, but Adams Atomic Engines, Inc. (www.atomicengines.com) has spent the past ten years working on designing atomic engines and generators that come far closer to the "personal nuclear generator" than you might imagine.

We certainly believe that our machines will be well suited for powering server farms and other moderately sized, important loads.

Our first machines will probably be 10 MW (electric) generators. A couple of machines that size could provide reliable power to a small city, a college campus, or a technology park.
Posted by Rod Adams (74 comments )
The cost of competition.
This is one of the costs of competition. When companies work hard to put out the top performer, they sacrifice power consumption.

I suppose the ideal solution is to stop buying by performance and start buying by power consumption. My only thought there is that nobody is going to do that.
Posted by System Tyrant (1453 comments )
Double that Cost Estimate
Let's not forget that effectively all of the power into a system is converted into heat, which must be removed. At home, this may reduce your winter heating bill. However, much of the actual cost of any data center is the electricity used to cool it, which is effectively equal to the electricity used to otherwise power it. So, unless these estimates already factor cooling into the cost (which they may, if they are good TCO estimates), double them.
Posted by brendlerjg (7 comments )
Uh, is that like PRINTERS?
How does it go again? Buy 1 printer for $100 and spend $400 on ink for the lifetime of the printer?

The cost of power is one of the operating costs, but if you can't include that cost in your revenue model, you shouldn't be in business.

I've done power calculations for a 1000 PC array, and the cost was little more than having one extra specialist employee.

We'd all like to run things cheaply, so maybe now's the time to be looking at energy recovery, recycling and sources.
Posted by dr_no (7 comments )
Google just cash in some stock and bingo!
Come on Google, who needs to worry about power when you guys can cash in a few million shares and have money for decades.

Duh!
Posted by (43 comments )
So? Just move Google to Idaho
or some other place with cheap hydro-electricity.
Posted by (139 comments )
Super Scalar = Multiple Threads? No
Which is better: a superscalar processor that executes up to 8 instructions per cycle, or Niagara with 8 cores that processes up to 8 instructions per CPU cycle? Sounds the same?

Not if Niagara's instructions come from 8 unique programs (threads), each in some state of waiting on memory in thread queues.

On a 2-core superscalar chip that's 16 instructions per cycle, and on 4 cores, 32 instructions per cycle. Or the 8-core Sun chip with its 8 instructions?

It's amazing how quickly we forget about superscalar architectures that can leverage 8 processing units per CPU cycle: load, branch, integer operation, floating-point operation, all on one chip. With Niagara, Sun has developed a non-superscalar chip in which each core does only one instruction per cycle, but there are 8 of them, rather than 1 processor that can do 8 instructions per cycle. Which would you rather manage to get added throughput: 8 unique instances, or 1?
To get worthwhile jbb results, Niagara runs 4 JVMs, versus one on superscalar chips. Do you want to run 4 instances of your Java application to get the scaling you could have with one?

And at what cost? Is Niagara actually cheaper to buy and own than other 8-core solutions?

And since when is doing less work with less power a novel idea? We could all run on 286s built with today's fabs and use a fraction of the power... or simplified 1994 US2 technology to develop a new programming model.
Posted by kahalb (1 comment )
That's a really thought-provoking article. http://crossaffairs.blogspot.com
Posted by amanjain59 (1 comment )