June 27, 2007 1:00 AM PDT

Which supercomputers rule?

It was November 2000 when a supercomputer first passed 4 teraflops, or 4 trillion calculations per second. Now that's the minimum requirement just to show up on the latest version of a list of the world's 500 fastest machines.

Supercomputing, which pits the highest-end machines against challenges such as forecasting the global climate in coming decades or finding oil reservoirs underground, is a fast-changing field. The Top500 list, released twice a year at supercomputing conferences, shows more turnover from the preceding list than any earlier edition, according to the researchers who compile it.

Many systems on the newest Top500 ranking, set to be released Wednesday at the International Supercomputing Conference in Dresden, Germany, weren't on the list at all when the last one was released in November 2006.

But one familiar supercomputer, IBM's BlueGene/L at Lawrence Livermore National Laboratory, again topped the Top500 List of Supercomputers. With 131,072 processors, it stayed far ahead of its closest competitors at 280.6 teraflops. The big change came just behind it: the only other systems to surpass 100 teraflops were both made by Cray. Oak Ridge National Laboratory's Jaguar leapfrogged from the No. 10 to the No. 2 position at 101.7 teraflops, and Sandia National Laboratories' Red Storm took No. 3 at 101.4 teraflops.

The total performance of all 500 systems reached 4.92 petaflops, or 4,920 teraflops. The systems on the November list added up to 3.54 petaflops, and the June 2006 compilation totaled 2.79 petaflops.

IBM dominated the list with 6 of the top 10 systems and 192 of the total 500, though Hewlett-Packard is actually the overall leader in number of systems: just over 40 percent, or 203 of 500, are powered by HP. IBM's machines, however, account for 2,060 teraflops in all, well ahead of Hewlett-Packard's total of 1,202.

Coming in at No. 5, IBM's New York Blue at Stony Brook University is new to the list, as is IBM's similar Blue Gene system at Rensselaer Polytechnic Institute at No. 7. Dell's Abe PowerEdge 1955 server, at the University of Illinois' National Center for Supercomputing Applications, made its debut at the No. 8 spot.

The main measurement used in compiling the list is the Linpack benchmark, which puts each system through its paces by having it solve a dense system of linear equations. The Top500's organizers acknowledge it isn't a complete test of system performance, but it is a way to compare every system on the same problem. The need for a more complete benchmarking suite has been under discussion for several years.
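In rough terms, Linpack times the solution of a dense n-by-n system Ax = b and converts the elapsed time into floating-point operations per second. Here is a minimal sketch of that idea in Python with NumPy, purely for illustration; the real HPL benchmark is tuned C and Fortran code run across thousands of processors on vastly larger problems, and the problem size below is an arbitrary example.

    # Illustrative only: time a dense linear solve and report a flop rate,
    # the same shape of measurement the Linpack benchmark makes.
    import time
    import numpy as np

    n = 2000                                  # arbitrary problem size for this sketch
    A = np.random.rand(n, n)                  # dense coefficient matrix
    b = np.random.rand(n)

    start = time.perf_counter()
    x = np.linalg.solve(A, b)                 # LU factorization plus triangular solves
    elapsed = time.perf_counter() - start

    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2   # standard Linpack operation count
    print(f"~{flops / elapsed / 1e9:.2f} gigaflops on this run")

A real Top500 run tunes the problem and block sizes to the machine; the point here is only how a solve time becomes a teraflop figure.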

Intel managed to increase its lead over Advanced Micro Devices in the number of systems using its x86 processors. Just over half--52 percent--of the systems use Intel's x86 chips, up from 45.6 percent, while AMD's share shrank from 22.6 percent to 21.2 percent. Intel's Itanium processors are beginning to move off the list, appearing in 28 of the 500 systems, down from last year's count of 35.

The most noticeable change in the applications of supercomputing is in geophysics, where the number of systems grew from 23 in November to 37 on the newest list.


15 comments

new days are coming
Finally, the terminal-and-supercomputer combination might possibly replace all the PCs. A dream 30 years in the making... (with the elimination of much local network support, the workforce might be changing slowly...). Big savings for business (and improved efficiency, security and network utilization... just think how much smaller the Y2K bug would have been if it only needed to be fixed on the supercomputer... mind you, another C bug is coming soon).
Posted by 1st (104 comments )
No supercomputer needed
We moved about half of our users from traditional desktop machines to thin clients some time ago. IMO, the day of the fat-client machine on most business desktops has come and gone. We run dozens of users on machines with the same horsepower and RAM some people have in their home computers.

I wouldn't be surprised to see ISPs like Comcast start offering thin clients in the next few years.

It's common knowledge that your typical home user cannot maintain a Windows PC. A thin client that would allow Joe Sixpack to surf, e-mail and so on, free of viruses and spyware, would be just the thing.
Posted by rcrusoe (1305 comments )
Keep dreaming; remember the Network PC?
It worked as a dumb terminal running JavaOS against an IBM mainframe that provided the storage and processing over the Internet.

The Y2K bug was nothing. I was able to fix all the software I wrote at the time well in advance to handle the Y2K issue, and nobody noticed anything was wrong.

Anyway, the changes I wrote handle four-digit years in date processing, so it won't be a problem until 9999 AD.

Windows already fixed that 2K57 bug, because I am in Thailand now, where the year is 2550, and everything works fine. They measure their years from the birth of Buddha instead of Jesus, and Windows XP is handling the year 2550 without any problems.

Actually, when you think about it, would a business, big or small, really trust its security and intellectual property to another company's mainframe or supercomputer that billions of other people can access? How sure are you that at least a few of them won't learn how to bypass security and steal IP from various companies? Sometimes it is best when IP is stored on a hard drive the Internet cannot reach, accessible only on a local intranet. Even then, the supercomputer company might have a disgruntled employee with admin access who goes corrupt and sells your IP to the highest bidder. Remember when AOL had its user and credit card list sold by a disgruntled employee?
Posted by Orion Blastar (590 comments )
Supercomputer does not equal mainframe
The sort of computers discussed in this article are totally unrelated to what you have in mind. A supercomputer is designed to handle a very small number of extremely large problems.

What you're looking for is more of a mainframe, which is designed to handle a LOT of small tasks. There may be a certain amount of overlap between the two, but it's pretty limited. There are very good reasons why IBM sells BlueGene/L supercomputers, x-series, p-series and i-series servers and also their z-series mainframes. Different solutions for different problems.

Also, it's important to note that what you're proposing has been suggested time and again since basically the 1950s. It isn't going to happen. If anything, we're moving AWAY from what you suggest and toward laptops, which need even more independence from the corporate network.
Posted by Hoser McMoose (182 comments )
Both will take place.....hybrid look...
It will be a thin client with brains: dual-core brains connected to the network, able to run with or without the network (its own independence), and with multitasking, parallel-processing functionality so those multiple cores fly. And security will be handled at the foundation, and it will be handled.

A wireless, towerless society, at the office, at home and out in the world; and all portable hardware devices will be connected to your thin clients and the network. Look for Cisco and the Intels & MS to be totally involved, and in control, in this evolution.

And you are correct, supercomputers handle a limited number of tasks (applications), and they still have the Amdahl's law threshold to overcome (see the sketch just below).
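For readers who haven't run into Amdahl's law: it caps a program's speedup by its serial fraction, no matter how many processors are added. A quick sketch in Python, with made-up fractions rather than measurements of any real workload:

    # Amdahl's law: speedup = 1 / ((1 - p) + p / n) for parallel fraction p on n processors.
    def amdahl_speedup(parallel_fraction, processors):
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / processors)

    # A job that is only 50% parallelizable tops out near 2x even on 131,072 processors,
    # while a 99% parallel workload keeps scaling much further.
    for p in (0.50, 0.99):
        print(p, [round(amdahl_speedup(p, n), 1) for n in (2, 16, 131072)])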

Supercomputers would come to a screeching halt if, say, they were to run a business payroll application. Supercomputers execute massively parallel applications, such as predicting the weather or financial forecasting. They are expensive to own and operate. They get hype because it sounds fun discussing their largeness.

They execute floating-point instructions, which is very different from a mainframe's general-purpose utility computing.

But supercomputers do have their niche, and look for smaller Cray models with parallel-processing software to potentially take off in all markets. We really could solve some interesting problems once everything gets in order. Four to five years is my projection for acceptance and commercialization of both models.
Posted by thecatch (49 comments )
Floating Point
You say: "They execute floating point instructions, which is very much different then possible mainframe general purpose utility
computing."

Burroughs had a floating-point, general-purpose mainframe computer in the 1960s. It was used by DOD, some banks, and others. Worked fine. It was all a matter of programming.
Posted by kakodes_too (20 comments )
 
