Supercomputers bulk up on power while shedding price pounds
High-performance systems are getting larger and larger. But lower costs are broadening their appeal within IT.
By Patrick Thibodeau, Computerworld | Computerworld UK | Published: 01:00, 03 December 2007
Nine years ago, the most powerful supercomputer in the world was the ASCI Red system, built by Intel and installed at Sandia National Laboratories in Albuquerque.
That system included 9,152 Pentium processors, took up 2,500 square feet of space and cost $55 million. On benchmark tests, it reached a performance level of 1.3 trillion floating-point operations per second, or teraFLOPS.
Now you can get nearly 1 teraFLOPS of throughput from the supercomputer-in-a-box system that HP announced at SC07. The machine, a version of HP’s Blade System c3000 designed for midsize users, includes eight server blades, each with two of Intel’s new Xeon 5400 quad-core chips.
HP said the system takes up just two square feet of space, can run off a standard wall socket and doesn’t need to be located in a datacentre. Typically, it will cost between $25,000 and $50,000.
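Set against the ASCI Red figures above, the shift in price per unit of performance can be sketched in a few lines (a rough comparison using only the article's numbers; the HP price here is the upper end of the quoted range):

```python
# Figures from the article; treat the result as an order-of-magnitude estimate.
asci_red_cost = 55_000_000   # USD, installed 1997-98 at Sandia
asci_red_tflops = 1.3        # sustained benchmark performance
hp_c3000_cost = 50_000       # USD, upper end of the $25,000-$50,000 range
hp_c3000_tflops = 1.0        # "nearly 1 teraFLOPS"

asci_per_tflop = asci_red_cost / asci_red_tflops
hp_per_tflop = hp_c3000_cost / hp_c3000_tflops

print(f"ASCI Red:  ${asci_per_tflop:,.0f} per teraFLOPS")
print(f"HP c3000:  ${hp_per_tflop:,.0f} per teraFLOPS")
print(f"Roughly {asci_per_tflop / hp_per_tflop:.0f}x cheaper per teraFLOPS")
```

On these figures, cost per teraFLOPS has fallen by a factor of several hundred in under a decade.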
Thanks to such systems, IDC forecasts that worldwide HPC revenues will rise from about $11 billion this year to more than $15 billion in 2011 — an average annual growth rate of nine percent.
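IDC's projection can be sanity-checked by compounding the cited growth rate over the four years from 2007 to 2011 (a sketch using the article's round figures; IDC's own model will differ):

```python
revenue = 11.0      # billions USD, IDC's estimate for this year (2007)
growth_rate = 0.09  # average annual growth rate cited in the article

# Compound the revenue forward one year at a time through 2011.
for year in range(2007, 2011):
    revenue *= 1 + growth_rate

print(f"Projected 2011 HPC revenue: ${revenue:.1f} billion")
```

Compounding $11 billion at nine percent for four years lands a little above $15 billion, consistent with the forecast.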
But in some respects, the HPC market is still geared more toward scientific researchers than toward commercial users like John Picklo, HPC manager at car maker Chrysler.
Picklo, who oversees clustered Linux and Unix systems with a total of 1,650 processor cores, said that the vendors of HPC applications aren’t keeping up with the shift to multicore chips. According to Picklo, many software vendors still base their pricing on the number of processor cores in a system. The problem, he said, is that quad-core processors don’t necessarily deliver performance equal to that of four single-core chips.
“If I was buying four single cores, I wouldn’t mind buying four licences,” Picklo said. “But if a quad-core [processor] requires four licences, I’m not going to get the same benefit out of that.”
He added that he wants application vendors to consider alternative licensing models, such as ones based on processor performance.
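Picklo's complaint can be illustrated with a toy calculation (all figures here are hypothetical, chosen purely to show why per-core pricing penalizes quad-core chips whose performance scaling is sub-linear):

```python
# Hypothetical figures for illustration; not actual vendor pricing.
licence_per_core = 1000.0   # assumed cost of one per-core licence, USD
single_core_perf = 1.0      # normalized throughput of one single-core chip
quad_core_perf = 3.0        # assume a quad-core delivers only ~3x, not 4x

# Per-core licensing: a quad-core chip requires four licences
# even though it does not deliver four chips' worth of work.
single_cost_per_perf = licence_per_core / single_core_perf
quad_cost_per_perf = (4 * licence_per_core) / quad_core_perf

print(f"Licence cost per unit of performance, single-core: ${single_cost_per_perf:.0f}")
print(f"Licence cost per unit of performance, quad-core:   ${quad_cost_per_perf:.0f}")
```

Under these assumptions the quad-core buyer pays about a third more per unit of delivered performance, which is the gap a performance-based licensing model would close.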
Software licensing isn’t as big an issue for academic and government researchers, who typically run custom application code. For those users, vendors are packing quad-core chips into HPC systems in increasingly dense configurations. For instance, there are just under 213,000 processor cores in the BlueGene/L system that IBM built for the US Department of Energy’s National Nuclear Security Administration.
The BlueGene/L at Lawrence Livermore National Laboratory has been No. 1 on the Top500 list since November 2004. Following an upgrade earlier this year, its sustained benchmark throughput is 478.2 teraFLOPS.
But IBM vows that next year, it will build multiple systems that can reach the petaFLOPS level — more than twice what is possible now. “You’ll probably see several petaFLOPS machines,” said Leo Suarez, IBM’s vice president of deep computing.
The growth rates are such that by 2015, all of the Top500 systems will at least be in the petaFLOPS category, predicted Erich Strohmaier, a researcher at Lawrence Berkeley National Laboratory who helps compile the supercomputer list.