
Four tips to avoid server failure in virtualisation projects

Common pitfalls of using virtual machines for mission-critical apps

As virtualisation stretches deeper into the enterprise to include mission-critical and resource-intensive applications, IT executives are learning that double-digit physical-to-virtual server ratios are things of the past.

Virtualisation vendors may still be touting the potential of putting 20, 50 or even 100 virtual machines (VMs) on a single physical machine. But IT managers and industry experts say those ratios are dangerous in production environments and can cause performance problems or, worse, outages.

"In test and development environments, companies could put upwards of 50 virtual machines on a single physical host. But when it comes to mission-critical and resource-intensive applications, that number tends to plummet to less than 15," says Andi Mann, vice president of research at Enterprise Management Associates in Boulder, Colorado.

In a 2009 study of 153 organisations with more than 500 end users, EMA found that, on average, enterprises were achieving 6:1 consolidation rates for applications such as ERP, CRM, email and databases.

The variance between reality and expectations, whether it's due to vendor hype or internal ROI issues, could spell trouble for IT teams. That's because the consolidation rate affects just about every aspect of a virtualisation project - budget, capacity and executive buy-in. "If you go into these virtualisation projects with a false expectation, you're going to get in trouble," Mann says.

Indeed, overestimating physical-to-virtual ratios can result in the need for more server hardware, rack space, cooling capacity and power consumption - all of which cost money. Worse yet, users could be affected by poorly performing applications. "If a company thinks they're only going to need 10 servers at the end of a virtualisation project and they actually need 15, it could have a significant impact on the overall cost of the consolidation and put them in the hole financially. Not a good thing, especially in this economy," says Charles King, president and principal analyst at consultancy Pund-IT.
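To see how quickly that gap compounds, consider a rough, illustrative calculation; every unit cost below is a hypothetical placeholder rather than a figure from the article.

```python
# Rough illustration of the budget gap King describes. All unit costs
# are hypothetical placeholders; substitute your own hardware, rack,
# power and cooling figures.
cost_per_server = 5_000   # hardware cost per physical server (assumed)
annual_run_cost = 1_200   # rack space, power and cooling per server (assumed)

planned, actual = 10, 15  # the scenario from King's quote
overrun = (actual - planned) * (cost_per_server + annual_run_cost)
print(f"First-year overrun: ${overrun:,}")  # $31,000 on these assumptions
```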

Why is there a disconnect between virtualisation expectations and reality? King says that up to this point, many companies have focused on virtualising low-end, low-use, low-I/O applications such as test, development, log, file and print servers. "When it comes to edge-of-network, non-mission-critical applications that don't require high availability, you can stack dozens on a single machine," he says.

Bob Gill, an analyst at TheInfoPro, agrees. "Early on, people were virtualising systems that had a less-than-5% utilisation rate. These were the applications that, if they went down for an hour, no one got upset," he says.

That's not the case when applying virtualisation to mission-critical, resource-intensive applications - and virtualisation vendors, on the whole, have been slow to explain this reality to customers, according to some analysts.

Once you consider applications with higher utilisation rates, greater security risks, and increased performance and availability demands, consolidation ratios drop off considerably. "These applications will compete for bandwidth, memory, CPU and storage," King says. Even on machines with two quad-core processors, highly transactional applications that have been virtualised will experience network bottlenecks and performance hits as they vie for the same server's pool of resources.
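A back-of-the-envelope bound makes the drop-off concrete: whichever resource a host exhausts first caps the number of VMs it can hold. The sketch below assumes a hypothetical two-socket, quad-core host and invented per-VM peak demands; none of these figures come from the article.

```python
# Back-of-the-envelope consolidation estimate (illustrative figures only).
# The host specification and per-VM peak demands are hypothetical
# assumptions, not measurements from the article.
host = {"cpu_ghz": 2 * 4 * 2.4, "ram_gb": 64, "net_gbps": 2.0}  # two quad-cores
headroom = 0.75  # keep ~25% spare for spikes and failover

def max_vms(per_vm_peak):
    """Upper bound on VMs per host: the scarcest resource sets the ratio."""
    return int(min(host[k] * headroom / per_vm_peak[k] for k in host))

light = {"cpu_ghz": 0.2, "ram_gb": 1, "net_gbps": 0.02}  # file/print-style load
heavy = {"cpu_ghz": 3.0, "ram_gb": 8, "net_gbps": 0.25}  # database/ERP-style load

print(max_vms(light))  # 48 - dozens of light VMs fit
print(max_vms(heavy))  # 4 - heavy workloads cap out in single digits
```

On these assumed numbers, the host comfortably stacks dozens of light, edge-of-network workloads but only four or five heavy transactional ones - the same single-digit ratios the analysts describe.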

Here are four tips for avoiding server overload.

1. Start With Capacity Analysis

To combat the problem, IT teams have to rejigger their thinking and dial back everyone's expectations. The best place to start: a capacity analysis, says Kris Jmaeff, information security systems specialist at the Interior Health Authority, a British Columbia government agency.

Four years ago, the data centre at Interior Health was growing at a rapid clip. There was a lot of pressure to virtualise the 500-server production environment to support a host of services, including DNS, Active Directory, web servers, FTP, and many production application and database servers.

Before starting down that path, Jmaeff first used VMware tools to conduct an in-depth capacity analysis that monitored server hardware utilisation. (Similar tools are also available from Cirba, Hewlett-Packard, Microsoft, PlateSpin and Vizioncore, among others.) Rather than looking at his hardware environment piece by piece, he instead considered everything as a pool of resources. "Capacity planning should focus on the resources that a server can contribute to the virtual pool," Jmaeff says.
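To illustrate that pool-based view, here is a minimal sketch. The host inventory and per-VM demand figures are invented for illustration; in a real exercise they would come from a capacity-analysis tool such as the VMware tooling Jmaeff used.

```python
# Minimal sketch of pool-based capacity planning: treat all hosts as one
# pool of resources rather than sizing machine by machine. Inventory and
# demand figures below are hypothetical.
hosts = [
    {"cores": 8, "ram_gb": 64},
    {"cores": 8, "ram_gb": 64},
    {"cores": 16, "ram_gb": 128},
]

# Measured peak demand per candidate VM (assumed numbers).
vms = [{"cores": 0.5, "ram_gb": 2}] * 30 + [{"cores": 2, "ram_gb": 12}] * 6

pool = {k: sum(h[k] for h in hosts) for k in ("cores", "ram_gb")}
demand = {k: sum(v[k] for v in vms) for k in ("cores", "ram_gb")}

for k in pool:
    pct = 100 * demand[k] / pool[k]
    print(f"{k}: {demand[k]:g} of {pool[k]} ({pct:.0f}% of pool)")
```

Whichever resource shows the highest percentage is the one that will constrain the consolidation ratio, which is why a piece-by-piece view of individual servers can be misleading.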

Already, the team has been able to consolidate 250 servers - 50% of the server farm - onto 12 physical hosts. And while Jmaeff's overall average data centre ratio is 20:1, hosts that hold more-demanding applications either require much lower ratios or require that he balance out resource-intensive applications.

Jmaeff uses a combination of VMware vCenter and IBM Director to monitor each VM for telltale signs of ratio imbalances such as spikes in RAM and CPU usage, or performance degradation. "We've definitely had to bump applications around and adjust our conversion rates according to server resource demand to create a more balanced workload," he says. If necessary, it's easy to clone servers and quickly spread the application load, he adds.
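The kind of threshold check behind such rebalancing decisions can be sketched simply. The sample utilisation data and the 85% threshold below are assumptions for illustration; real figures would come from vCenter or IBM Director, whose APIs are not modelled here.

```python
# Hedged sketch of the threshold check Jmaeff describes: flag VMs whose
# sustained CPU or RAM usage suggests a host is over-consolidated.
# Metric collection is stubbed out with a hypothetical dict; in practice
# the samples would come from vCenter or IBM Director.
samples = {  # recent utilisation samples per VM, as fractions (assumed data)
    "erp-01": {"cpu": [0.92, 0.95, 0.90], "ram": [0.80, 0.82, 0.81]},
    "web-03": {"cpu": [0.30, 0.35, 0.28], "ram": [0.40, 0.41, 0.39]},
}

THRESHOLD = 0.85  # sustained usage above this suggests rebalancing

def needs_rebalance(history, threshold=THRESHOLD):
    """True if every recent sample of any one resource exceeds the threshold."""
    return any(min(vals) > threshold for vals in history.values())

for vm, history in samples.items():
    if needs_rebalance(history):
        print(f"{vm}: consider migrating to a less loaded host")
```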


Comments

Andy Bailey said: I would like to add a fifth tip. For critical applications, it is essential to deploy on fault-tolerant hardware. These servers have fully duplexed components, so if something fails they carry on working; no failover required, they just keep on ticking. In the interest of full disclosure, I should point out that I work for Stratus Technologies.

Phil Maynard said: For those of you who would like to extend VMware HA, FT and vMotion to detect and automatically repair system and application failures, you might want to check out vAppHA - http://www.neverfailgroup.com



