June 14, 2007 4:00 AM PDT

Intel readies massive multicore processors

Ants and beetles have exoskeletons--and chips with 60 and 80 cores are going to need them as well.

Researchers at Intel are working on ways to mask the intricate functionality of massive multicore chips to make it easier for computer makers and software developers to adapt to them, said Jerry Bautista, co-director of Intel's Tera-scale Computing Research Program.

These multicore chips, he added, will also likely contain both x86 processing cores, similar to the brains inside the vast majority of Intel's server and PC chips today, as well as other types of cores. A 64-core chip, for instance, might contain 42 x86 cores, 18 accelerators and four embedded graphics cores.

Some labs and companies, such as ClearSpeed Technology, Azul Systems and Riken, have developed chips with large numbers of cores--ClearSpeed has one with 96 cores--but those cores can perform only certain types of operations.

The 80-core mystery

Ever since Intel showed off its 80-core prototype processor, people have asked, "Why 80 cores?"

There's actually nothing magical about the number, Bautista and others have said. Intel wanted to make a chip that could perform 1 trillion floating-point operations per second, known as a teraflop, and 80 cores did the trick. The chip does not contain x86 cores, the kind of cores inside Intel's PC chips, but cores optimized for floating-point math.
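The arithmetic behind the target is simple division. As a quick check (assuming a hypothetical even split; the real cores need not contribute equally):

```python
# Quick check of the teraflop arithmetic: split the 1-teraflop goal
# evenly across the 80 floating-point cores.
TARGET_FLOPS = 1e12   # 1 trillion floating-point operations per second
CORES = 80

per_core_gflops = TARGET_FLOPS / CORES / 1e9
print(f"{per_core_gflops:.1f} GFLOPS per core")  # -> 12.5 GFLOPS per core
```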

Other sources at Intel pointed out that 80 cores also allowed the company to maximize the room inside the reticle, the mask used to direct light from a lithography machine to a photo-resistant silicon wafer. Light shining through the reticle creates a pattern on the wafer, and the pattern then serves as a blueprint for the circuits of a chip. More cores, and Intel would have needed a larger reticle.

Last year, Intel showed off a prototype chip with 80 computing cores. While the semiconductor world took note of the achievement, the practical questions immediately arose: Will the company come out with a multicore chip with x86 cores? (The prototype doesn't have them.) Will these chips run existing software and operating systems? How do you solve data traffic, heat and latency problems?

Intel's answer essentially is, yes, and we're working on it.

One idea, proposed in a paper released this month at the Programming Language Design and Implementation Conference in San Diego, involves cloaking all of the cores in a heterogeneous multicore chip in a metaphorical exoskeleton so that all of the cores look like a series of conventional x86 cores, or even just one big core.

"It will look like a pool of resources that the run time will use as it sees fit," Bautista said. "It is for ease of programming."
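No details of Intel's runtime are public, but the "pool of resources" idea can be sketched in miniature: a runtime hides which kind of core a task lands on behind a single submit() call. The core kinds, the dispatch policy and all names below are invented for illustration and do not reflect Intel's actual design.

```python
# Hypothetical sketch: a runtime that presents a heterogeneous set of
# cores as one uniform pool, so callers never pick a core type.

class Core:
    def __init__(self, kind):
        self.kind = kind  # e.g. "x86", "accelerator", "graphics"

    def run(self, task):
        return (task, self.kind)

class Runtime:
    """Presents a heterogeneous pool as a single uniform resource."""
    def __init__(self, cores):
        self.cores = cores

    def submit(self, task, hint=None):
        # Prefer a core matching the workload hint; otherwise the
        # runtime uses the pool "as it sees fit" -- here, the first core.
        for core in self.cores:
            if hint is None or core.kind == hint:
                return core.run(task)
        return self.cores[0].run(task)

pool = Runtime([Core("x86"), Core("x86"), Core("accelerator"), Core("graphics")])
print(pool.submit("transcode-video", hint="accelerator"))
# -> ('transcode-video', 'accelerator')
```

The caller's view is the same whether the task lands on an x86 core or an accelerator, which is the "ease of programming" point Bautista makes.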

A paper at the International Symposium on Computer Architecture, also in San Diego, details a hardware scheduler that will split up computing jobs among various cores on a chip. With the scheduler, certain computing tasks can be completed in less time, Bautista noted. It also can prevent the emergence of "hot spots"--if a single processor core starts to get warm because it's been performing nonstop, the scheduler can shift computing jobs to a neighbor.
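The hot-spot idea can be modeled in a few lines: track an estimated temperature per core and steer each new job to the coolest one. The constants and temperature model below are invented for illustration; the paper describes a hardware scheduler, not this software toy.

```python
# Hypothetical model of thermal-aware scheduling: each job goes to the
# coolest core, so no single core heats up from running nonstop.

AMBIENT = 40.0       # invented baseline temperature
HEAT_PER_JOB = 5.0   # invented per-job temperature rise

class Chip:
    def __init__(self, n_cores):
        self.temps = [AMBIENT] * n_cores

    def schedule(self, job):
        # Shift work to the coolest core instead of reusing core 0.
        coolest = min(range(len(self.temps)), key=lambda i: self.temps[i])
        self.temps[coolest] += HEAT_PER_JOB
        return coolest

chip = Chip(4)
placements = [chip.schedule(f"job{i}") for i in range(8)]
print(placements)                  # -> [0, 1, 2, 3, 0, 1, 2, 3]
print(max(chip.temps) - AMBIENT)   # peak rise is 10.0, not 40.0 on one core
```

Eight consecutive jobs spread across all four cores rather than piling onto the first, which is exactly the hot-spot avoidance Bautista describes.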

Intel is also tinkering with ways to let multicore chips share caches, the pools of memory embedded in processors for rapid data access. Cores on many dual- and quad-core chips on the market today share caches, but with so few cores it's still a manageable problem.

"When you get to eight and 16 cores, it can get pretty complicated," Bautista said.

The technology would prioritize operations. Early indications show that improved cache management could improve overall chip performance by 10 percent to 20 percent, according to Intel.
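Intel gives no details of the prioritization scheme, but the shape of the idea can be sketched: when the shared cache is full, evict a low-priority line before a high-priority one. The capacity, priorities and eviction policy below are purely illustrative assumptions.

```python
# Hypothetical sketch of priority-aware management of a shared cache:
# when cores contend for space, low-priority entries are evicted first.

class SharedCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = {}  # address -> priority

    def insert(self, addr, priority):
        if len(self.lines) >= self.capacity:
            # Evict the lowest-priority resident line.
            victim = min(self.lines, key=self.lines.get)
            del self.lines[victim]
        self.lines[addr] = priority

cache = SharedCache(capacity=2)
cache.insert("core0:frame", priority=9)    # latency-critical work
cache.insert("core1:scratch", priority=1)  # background work
cache.insert("core2:audio", priority=5)    # evicts the scratch line
print(sorted(cache.lines))  # -> ['core0:frame', 'core2:audio']
```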

As with the exoskeleton technology for heterogeneous chips, programmers ideally won't have to understand or deliberately accommodate the cache-sharing or hardware-scheduling technologies. These operations will largely be handled by the chip itself and hidden from view.


Heat is another issue that will need to be contained. Right now, I/O (input-output) systems need about 10 watts of power to shuttle data at 1 terabit per second. An Intel lab has developed a low-power I/O system that can transfer 5 gigabits per second at 14 milliwatts--less than 14 percent of the power used by current 5Gbps systems--and 15Gbps at 75 milliwatts, according to Intel. A paper outlining the issue was released at the VLSI Circuits Symposium in Japan this month.
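The figures above boil down to an energy-per-bit comparison (power divided by bit rate); a quick computation using only the numbers quoted in the text:

```python
# Energy per bit = power / bit rate, using the figures quoted above.
def pj_per_bit(power_watts, gbps):
    return power_watts / (gbps * 1e9) * 1e12  # picojoules per bit

print(pj_per_bit(10, 1000))   # today's I/O: 10 W at 1 Tbps  -> ~10.0 pJ/bit
print(pj_per_bit(0.014, 5))   # Intel lab:   14 mW at 5 Gbps -> ~2.8 pJ/bit
print(pj_per_bit(0.075, 15))  # Intel lab:   75 mW at 15 Gbps -> ~5.0 pJ/bit
```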

Low-power I/O systems will be needed for core-to-core communication as well as chip-to-chip contacts.

"Without better power efficiency, this just won't happen," said Randy Mooney, an Intel fellow and director of I/O research.

Intel executives have said they would like to see massive multicore chips coming out in about five years. But a lot of work remains. Right now, for instance, Intel doesn't even have a massive multicore chip based around x86 cores, a company spokeswoman said.

The massive multicore chips from the company will likely rely on technology called Through Silicon Vias (TSVs), other executives have said. TSVs connect external memory chips to processors through thousands of microscopic wires rather than one large connection on the side. This increases bandwidth.



Join the conversation!
Add your comment
Did they say it is RISC or CISC? LOL. When can I have it in my Mac Pro towers and blades?
Posted by benjiernmd (123 comments )
Their performance target is a teraflop? Count me as impressed!
Posted by ambigous (58 comments )
Military or meteorological use only.
Also for other scientific endeavours, like gene mapping. But for home use? This is overkill.
Posted by benjiernmd (123 comments )
How Now Brown Cow?
Overkill is what we want. Wouldn't it be nice to move as fast as those who lie to us? It would be nice to run our own climate models and such. Global warming my ass. More CO2 means faster plant growth. I don't see that happening. Stretch out that heating and cooling timeline and you see the CO2 timeline falls 800 years after the warming, when the world was actually cooling. HMMMMM
Posted by nuckelhedd (70 comments )
There are more things in heaven and earth
than are dreamt of in your philosophy.
Hamlet to Horatio.

Sure, Office (today) doesn't need 16 cores, but how about video editing, compression and transcoding, interactive photo editing, voice recognition, photo-realistic rendering, or synthetic vision (interesting home applications like photo-to-3D-model conversion, or home security)?

Every time the processing power ratchets up, we find ways to use it. When it goes up by factors of 100, we find entirely new usage models (cf. the overused word "paradigm").
Posted by SooperGenius (9 comments )
Yeah, global warming is just a big conspiracy. I saw a documentary on it. Stupid lying Nobel Prize winners and other PhDs. I bet they are also responsible for hiding information about aliens and stuff.
Posted by woollyyams (3 comments )
You can never have too much speed. The only limiting factor is cost.
Posted by nb2000nb (26 comments )
Good idea - creates more opportunities
Intel's multi-core idea is good as long as it allows one to partition core usage to be user defined. It is not far-fetched to imagine uses of supercomputing coming to normal day-to-day usage. I see the requirement popping up within 3-5 years.

Intel's R&D has done the job, but I don't expect them to imagine the applications.
Posted by akvish (19 comments )
It's rather ironic that Intel is crowing about 1 TFLOP when IBM already has two chips capable of 1 TFLOP with far fewer cores. Both the Cell and the Power5 chip are capable of 1 TFLOP.
Posted by rshimizu12 (98 comments )
supercomputing is more than teraflops
You also need:
huge amounts of memory
error correcting memory
memory access that can keep up with the CPU
really huge amounts of data storage (e.g., hard drive)
fast access to that data storage
operating system that can control all this
software that can make use of all this
Posted by dmm (336 comments )
When can I expect to see the laptop version?
Posted by Disco-Mike (2 comments )
With the arrival of these CPUs, multicore programming will be a competitive imperative for software developers within 12-18 months.

Here's an e-Book on multicore programming. Covers programming approaches including OpenMP, Intel's TBB, Pthreads, Cilk++, MPI.

http://www.cilk.com/multicore-e-book
Posted by ilya_cilk (6 comments )
