September 8, 2006 4:00 AM PDT
Intel server revamp to follow AMD
If the newly competitive chips Intel has recently brought to market act as the brains of a server, then the Common System Interface (CSI) is its nervous system. The technology, set for release in 2008, provides a new way for processors to communicate with each other and with the rest of a computer.
And alongside CSI, Intel plans to release an integrated memory controller, which is housed on the main processor rather than on a separate supporting chip. This will speed memory performance and so dovetail with the new communications system, the company expects.
Together, they could help Intel provide a much-needed counterpunch to AMD, which in 2003 introduced an integrated memory controller and a high-speed interconnect called HyperTransport in its Opteron and Athlon 64 processors. The two communication technologies, marketed together as "Direct Connect Architecture," deliver lower processor costs and chip performance advantages, which AMD has used to win a place in the designs of all of the big four server makers.
"Intel is hoping CSI will do for them in servers what 'CSI' did for CBS in ratings," said Insight 64 analyst Nathan Brookwood, referring to the hit TV series "CSI: Crime Scene Investigation."
Intel has been tight-lipped about CSI. However, Tom Kilroy, general manager of the company's Digital Enterprise Group, did confirm some details in a recent CNET News.com interview. Further glimpses have come from server makers, who are eager for CSI's debut in the "Tukwila" Itanium chip, due in 2008.
CSI brings two major changes. First, it will boost processor performance compared with Intel's current chip communication technology, the front-side bus.
"From a pure performance perspective, when we get to Tukwila and CSI, and we actually get some of the benefits of that protocol introduced into our systems, I think it's going to be really a big deal," said Rich Marcello, general manager of HP's Business Critical Server group.
CSI will be instrumental in helping double the performance of the Tukwila generation of servers, he noted.
Second, CSI will help Itanium server designers take advantage of mainstream Xeon server technology. Both chip families will use the interface, Kilroy said. That's particularly useful for companies such as Unisys, whose servers can use both processor types. It will make it possible for elements of a design to be used in both kinds of machine, reducing development costs and speeding development times.
"CSI allows us to continue to consolidate and standardize on fewer technologies," said Mark Feverston, Unisys' director of enterprise servers. "We can now go to a more common platform that allows us to build the same solutions in a more economical fashion."
CSI hasn't been easy to bring to market, though. In 2005, Intel dramatically altered the schedule for its introduction. Initially, the plan was for it to debut in 2007 with the Tukwila Itanium processor and the high-end "Whitefield" Xeon. But in October, Intel delayed Tukwila to 2008 and canceled Whitefield.
Whitefield's replacement, "Tigerton," and a sequel called "Dunnington" both use the front-side bus for communications. That means CSI won't arrive in high-end Xeons until 2009.
In the meantime, Intel has used other methods to compete with AMD--speeding up the front-side bus and building in large amounts of cache memory, for example.
"We've taken a different road, but down the road we'll end up getting an integrated memory controller and CSI in our platform," Kilroy said. "It's just a matter of priority for us."
Why add CSI?
Memory communication speed is a major factor in computer design today. In particular, memory's increasingly sluggish performance relative to processors is causing problems. To compensate, computer designers have put special high-speed memory, called "cache," directly on the processor.
But in multiprocessor systems, cache poses a problem. If one processor changes a cache memory entry, but that change isn't reflected in the main memory, there's a risk that another processor might retrieve out-of-date information from that main memory. To keep caches synchronized--a requirement called "cache coherency"--processors must keep abreast of changes other processors make.
With Intel's current designs, an extra chip called the chipset coordinates such communications between processors via the front-side bus. In contrast, with HyperTransport and CSI, the processors communicate directly with each other.
Intel also relies on the chipset to help with the communication between chips and the main memory. But technology such as CSI makes it easier for processors to communicate directly with memory. That's because one processor can quickly retrieve data stored in memory connected to another chip.
"The biggest advantage CSI offers is performance and the fact that you basically get a direct connection between the processors. That results in reduced latency between the processors," said Craig Church, Unisys' director of server development. The integrated memory controllers, too, will reduce latency, or communication delays, when a chip is fetching data from its own memory, he added.