IT on a chip
Embedded functions are helping do everything from improving security to virtualising servers
Hardware performance is about much more than clock speed and raw processing power these days, thanks to embedded functions that are helping do everything from improving security to virtualising servers.
Chip makers including Intel and AMD are ushering in a new era in processor design by adding hardware-enabled features to their wares. The goal is to either replace functions that have traditionally been done via software or, more often, significantly improve the operation of the software.
As an added bonus, those hardware-assisted processor functions improve overall system performance without increasing the heat generated, the vendors claim, allowing corporations to keep a lid on utility costs and reduce the need for exotic cooling strategies.
"This is something that has been coming for a long time," says Rick Sturm, president at Enterprise Management Associates. "It’s the natural course of evolution, and an affordable and rational thing to do to put some of this functionality down on the chip level."
As computer platforms and overall system management increase in complexity, IT professionals are demanding that systems have 100 per cent availability, sub-second response times and instant problem resolution, Sturm says. Those goals are no longer strictly the purview of any one area -- silicon, software or human intervention -- but are now being addressed by taking advantage of advances on all fronts.
"IT is strangling" from the costs of operations, Sturm says. "We’re spending so much money on management that it is preventing us from innovating and addressing the needs of business."
The Charlotte Observer, the largest daily newspaper in the US state of North Carolina, began last December migrating some of the publication’s most important applications to a virtualised environment.
The paper is moving its Oracle-based circulation system database to servers that have Intel's new quad-core Xeon processors with baked-in, hardware-enabled virtualisation technology. Also being placed on these same virtualised servers is the paper’s editorial content workflow system.
Geoff Shorter, IT infrastructure manager at The Observer, says he found during his testing phase that the new servers can run virtualised workloads at near-native speeds. The database used for the test prepares subscription renewal notices and determines which accounts need to be billed, how much to bill and for what period of time.
Mike Grandinetti, chief marketing officer for virtualisation software provider Virtual Iron, says virtualisation often results in overall hardware performance penalties, ranging from 10 per cent to 50 per cent. But when using chip-enabled virtualisation, that penalty can drop to four per cent or less.
That's indeed what Shorter's group found. "Virtual Iron will tell you their overhead is between one per cent and three per cent, but a three per cent difference on a 10-minute [database run] is not noticeable," Shorter says. "It’s just like native. The driving force for going to a virtualisation strategy was cost, but we’ve tested it, and performance is also a driving factor."
Shorter estimates he can run seven to 12 virtual servers per single-core processor node on existing systems. As the newspaper transitions to quad-core systems over the next year, he expects to be able to support around 30 virtual servers per physical node.
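The overhead figures quoted above are easy to put in concrete terms. The sketch below is illustrative only; the helper function is mine, but the percentages are the ones Grandinetti and Shorter cite.

```python
def run_time_with_overhead(native_seconds, overhead_pct):
    """Wall-clock time for a job once virtualisation overhead is added.

    Computed as native_seconds * (100 + overhead_pct) / 100 so that
    whole-number percentages give exact results.
    """
    return native_seconds * (100 + overhead_pct) / 100

# A 10-minute (600-second) database run at the 3 per cent worst case
# Virtual Iron quotes adds only 18 seconds -- effectively native.
print(run_time_with_overhead(600, 3))   # 618.0

# Compare the 50 per cent worst case of software-only virtualisation:
print(run_time_with_overhead(600, 50))  # 900.0
```

At three per cent, the difference on a 10-minute run is below the threshold anyone watching a batch job would notice, which is the point Shorter is making.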
Jason Lochhead, principal architect at managed hosting provider Data Return, says the company is already seeing benefits from hardware-assisted virtualisation within the server infrastructure it offers its customers.
A year ago, Data Return introduced its Infinistructure utility computing platform intended to allow customers to maximise server utilisation and more economically create on-demand compute resources through the use of server virtualisation. Using HP servers based on AMD Opteron processors, Data Return has been able to create hundreds of virtual server instances for customers within its data centres in Dallas and Pleasanton in California.
"We don’t have as much wasted hardware capacity and have lowered power and cooling bills by consolidating these physical servers with the use of virtualised machines," Lochhead says. "It’s much cheaper, particularly when you’re talking about adding servers for redundancy rather than performance."
The hardware-assisted virtualisation capability within the AMD Opteron processors allows Data Return to run many more varieties of operating systems in both 32-bit and 64-bit versions on the same base hardware, he says. In the future, additional hardware-assisted abilities within Opteron are expected to include memory translation and virtualised access to I/O devices, he says.
"We’re enthusiastic about it," Lochhead says. "When we were first going down this road, virtualisation was pretty new, and customers were a little leery of accepting it. But when someone like AMD comes out and says they are putting these technologies into hardware, it’s a vote of confidence."
What the future holds
RedMonk analyst Michael Cote says adding hardware-assisted functions to replace or augment software capabilities will continue to increase this year and next, as mainstream microprocessor manufacturers attempt to differentiate their product lines.
In addition, Cote says, "These capabilities will continue to increase as more IT professionals gain a greater understanding of what is available and the potential benefits."
In most cases, rather than fully replacing traditional systems management software applications within a corporation, the new hardware-assisted capabilities will make that software operate more efficiently. Kevin Unbedacht, senior platforms strategist at Altiris, a provider of IT asset management software and services, says that Intel's new Active Management Technology (AMT) is a good example.
Altiris' software has traditionally been able to analyse only those systems that are on and running an operating system. If a system is off, or not operating properly, the Altiris software can't collect a full inventory analysis.
By using the AMT capability embedded within the chip set of VPro systems, however, the Altiris tracking and inventory software can detect systems even when they are off or not operating properly.
In addition, flash memory inside the VPro chip set stores system information each time the PC is booted, providing up-to-date information on the system status. The out-of-band alerts enabled by AMT can allow an IT department to make a single dispatch call, instead of the two that have been traditionally required for analysis and repair, he says.
The end result, Unbedacht says, is a hardware/software combo that can proactively monitor IT infrastructure instead of reacting only when something is wrong.
In addition, having basic management capabilities hardwired into silicon will make it simpler for new entrepreneurial systems management companies to add product offerings that can rapidly be adopted by IT professionals and integrated in enterprise-level applications, Cote says.
For its part, Intel calls its effort Embedded IT, and is attacking the problem with a variety of new or planned capabilities. Competitor AMD has similar efforts within its Trinity and Torrenza programs. (See the section "Management by hardware" below for more information about what the vendors are doing.)
Measuring success: not so fast
The biggest boost to processor performance in the past two years has been the move to multi-core processors. The migration from single-core to dual-core processors within the x86 market provided direct performance gains of 80 per cent or more, and the first quad-core processors from Intel are providing another 50 per cent improvement, says Nathan Brookwood, an analyst at Insight 64. How much hardware-assisted features or embedded IT will add to performance is debatable; their real worth will be determined by how much they improve such things as manageability.
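Treating Brookwood's figures as multiplicative gains, a quick back-of-the-envelope calculation (illustrative arithmetic only, not a benchmark) shows how they compound:

```python
# Relative x86 throughput, treating each generational gain as multiplicative.
single_core = 1.0
dual_core = single_core * 1.8   # ~80 per cent gain over single-core
quad_core = dual_core * 1.5     # a further ~50 per cent gain

print(f"dual-core: {dual_core:.2f}x, quad-core: {quad_core:.2f}x")
# dual-core: 1.80x, quad-core: 2.70x
```

By that rough arithmetic, a first-generation quad-core part delivers around 2.7 times single-core throughput, which is why any contribution from embedded-IT features is harder to isolate and measure.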
"The ultimate test is whether it works for the IT professional for their specific application," Brookwood says. "Things like embedded IT are really designed to increase functionality rather than performance."
Markus Levy, an analyst who serves as president of The Multicore Association and the Embedded Microprocessor Benchmark Consortium, says the move to embed more hardware-assisted features will undoubtedly bring performance gains. But measuring any specific gain is a new challenge that industry groups are only beginning to address.
Increasing the clock speed of microprocessors has provided only minimal performance gains in the past few years as processor manufacturers have hit the wall in the trade-off between speed and the heat generated by the chips. Even the addition of multiple cores within processors running at lower clock speeds to reduce heat is expected to see diminishing returns as those chips move to eight or more cores, Levy explains.
In traditional architectures the use of additional cores will not necessarily help applications that require specific optimisation, he says, adding to the need for hardware-enabled assists. "When you are trying to do a specific function like security acceleration, adding another processor core can be an expensive piece of hardware, compared to enabling that capability by using only 100,000 or so gates inside the existing chip," Levy says.
Determining the level of performance enhancement that is associated with those hardware-assisted hooks and accelerators is a task the industry is just beginning to tackle.
"We’re going to have to have benchmarks that are specifically tailored toward the use of those features," Levy says. "It is also going to require that we think of performance in a different way. It is going to be pretty challenging to develop a benchmark suite that will work on everybody’s platform as they become increasingly custom."
Management by hardware
The past year has seen the advent of hardware-assisted features within mainstream x86-based microprocessors from Intel and AMD. Even as those chip vendors have turned to multi-core implementations as the primary source for boosting performance, they are adding hardwired features into their processors and associated chip sets.
These features were previously left solely to software or were not addressed at all.
"We are looking hard at what technologies are right to be moved into silicon and placed within our platforms as opposed to technologies that need to stay in software," says Margaret Lewis, director of commercial solutions at AMD. "As a result, we are on the brink of a lot of interesting new concepts in performance. It’s no longer simple. In many cases, it won’t necessarily be how fast you complete a task, but how satisfied you are with the result."
AMD’s Trinity platform is intended to allow processors to handle virtualisation, security and management. One of the first commercialised efforts has been technology originally developed under the code name Pacifica, to allow hardware to more easily run multiple operating systems.
Also introduced in the past year was AMD’s Torrenza platform. Torrenza uses AMD’s existing interconnect technology to allow third parties to create application-specific coprocessors that can work alongside AMD processors in multi-socket systems.
For its part, Intel’s embedded IT capabilities include its already released Virtualization Technology, which like AMD’s Pacifica provides a hardware-enabled ability to more effectively create virtualised infrastructure installations. Also introduced by Intel is Active Management Technology (AMT), embedded in client-side processors. AMT allows IT managers to remotely access networked computing equipment -- even those that lack a working operating system or hard drive, or those that have been turned off.
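On Linux hosts, the presence of these hardware extensions is visible as CPU feature flags: `vmx` for Intel's Virtualization Technology and `svm` for AMD's Pacifica-derived AMD-V. A minimal sketch, assuming input in the format of a `/proc/cpuinfo` dump (the helper function is mine, not a vendor tool):

```python
def virtualisation_support(cpuinfo_text):
    """Report which hardware virtualisation extension, if any, the
    'flags' line of a /proc/cpuinfo-style dump advertises."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "Intel VT (vmx)"
            if "svm" in flags:
                return "AMD-V (svm)"
            return None  # no hardware-assisted virtualisation advertised
    return None

# Typical usage on a Linux host:
# with open("/proc/cpuinfo") as f:
#     print(virtualisation_support(f.read()))
```

Tools such as hypervisor installers perform essentially this check before enabling hardware-assisted mode, falling back to slower software-only techniques when neither flag is present.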
Also in the works from Intel is I/O Acceleration Technology, a network accelerator that can break up the data-handling job among all the components in a server, including the processor, chip set, network controller and software. The distributed approach reduces the workload on the processors while accelerating the flow of data, Intel says.
Intel’s Trusted Execution Technology, originally code-named LaGrande Technology, is a set of hardware extensions to processors and chip sets that enhances security. The technology is meant to prevent software-based attacks and to protect the confidentiality and integrity of data stored or created on a client PC.
Darrell Dunn is a freelance reporter based in Fort Worth, Texas, USA, with 20 years of experience covering business technology and enterprise IT. Contact him at email@example.com.