Five lessons from a data centre's crisis of capacity
Hard lessons in disaster recovery
By Robert Lemos | CIO US | Published: 14:43, 22 October 2009
1. Plan, don't react
The first problem Wescott needed to solve was the data centre group's habit of reacting to each small problem as it arose, rather than addressing the systemic issues and creating a plan for a sustainable service. In addition to the 500 servers, the data centre had some 33,000 cables connecting those servers to power, networking and security systems.
"We decided what the data center should look like and what its capacity should be," he says.
The group concluded that the current trajectory would result in 3,000 applications, each running on its own server, within 10 years. Now, the data centre has 81 percent of applications virtualised - an average of 17 per server - and Wescott plans to reach the 90 percent mark.
Companies should focus on three areas to increase capacity, says IDC's Pucciarelli: reducing the number of physical servers by running applications on virtual systems, which cuts power requirements; installing more efficient cooling; and improving electrical distribution.
"That's typically the one-two-three that you go to when updating the data centre," he says.
Pucciarelli has encountered many companies that have replaced up to 50 servers with just two or three larger-capacity systems, using virtualisation to run their applications.
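The arithmetic behind those ratios is easy to check. The sketch below uses the 500-server count, the 81 percent virtualised share and the 17-applications-per-host average reported in the article; the assumption that each application starts on its own physical server is ours, taken from the group's own one-app-per-server trajectory estimate.

```python
import math

# Back-of-the-envelope consolidation estimate using the article's figures.
# The one-app-per-physical-server starting point is an assumption.
physical_servers = 500      # servers before consolidation
virtualised_share = 0.81    # share of applications now virtualised
apps_per_host = 17          # average virtualised applications per host

virtual_apps = round(physical_servers * virtualised_share)   # 405 apps
hosts_needed = math.ceil(virtual_apps / apps_per_host)       # 24 hosts
still_physical = physical_servers - virtual_apps             # 95 servers

print(f"Virtualisation hosts: {hosts_needed}")
print(f"Still on physical servers: {still_physical}")
print(f"Total after consolidation: {hosts_needed + still_physical} "
      f"(was {physical_servers})")
```

Even with roughly a fifth of applications left on dedicated hardware, the footprint under these assumptions shrinks to about a quarter of its original size, in line with the consolidations Pucciarelli describes.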
2. Measure to manage
Data centre managers need ways to monitor the state of the data centre, but all too frequently they don't have the right tools, PNNL's Wescott says. Prior to the changes, Pacific Northwest National Laboratory had no way to measure the efficiency of its data centre. Power problems were discovered when the room went dark, or through a more seat-of-the-pants method.
"If there was too much amperage through our power supplies, the way I found out was to put my hand on the circuit breaker and if it was warm, then I knew we had a problem," he says. "That's proof that you need tools."
Now, PNNL has sensors on every fourth cabinet, placed at low, medium and high points on the rack, to create a 3-D heat map of the server room. The data allowed Wescott to change the way he cools the data centre, raising the overall temperature and applying cooling only where it is needed.
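As an illustration of what such a heat map involves, here is a small sketch in the same spirit: probes at three heights on every fourth cabinet, with the gaps filled in by interpolation. The room layout and temperature figures are invented.

```python
import numpy as np

# Sketch of turning per-cabinet probe readings into a coarse 3-D heat map.
# PNNL instruments every fourth cabinet at low, medium and high points;
# the layout and readings here are made up for illustration.
ROWS, CABINETS, HEIGHTS = 4, 16, 3           # hypothetical room layout
SAMPLED = range(0, CABINETS, 4)              # every fourth cabinet

heat = np.full((ROWS, CABINETS, HEIGHTS), np.nan)
rng = np.random.default_rng(0)
for r in range(ROWS):
    for c in SAMPLED:
        # low / medium / high probes; warmer toward the top of the rack
        heat[r, c] = 20 + rng.normal(0, 1.5, HEIGHTS) + np.array([0, 3, 6])

# Fill unsampled cabinets by interpolating along each row at each height
for r in range(ROWS):
    for h in range(HEIGHTS):
        known = ~np.isnan(heat[r, :, h])
        heat[r, :, h] = np.interp(np.arange(CABINETS),
                                  np.flatnonzero(known), heat[r, known, h])

hot = np.unravel_index(np.nanargmax(heat), heat.shape)
print(f"Hottest point: row {hot[0]}, cabinet {hot[1]}, height {hot[2]}: "
      f"{heat[hot]:.1f} degrees C")
```

A map like this is what lets a manager raise the room's set point while directing cooling at the few genuinely hot spots.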
"I think that is going to save me a lot of money, and wear and tear, on my air conditioners," he says, adding that current estimates are that the data centre will be 40 percent more efficient with cooling.
3. Take small steps
Radically reconfiguring the data centre without disrupting operations is a major challenge, says Wescott. He advocated taking small steps to minimise outages, but left the final decision to his managers.
"I presented two choices to the management," Wescott says. "We take the entire campus for seven days and we go from scratch; the other is that we take an outage over a weekend every quarter."
By taking small steps, the group prepared to replace the data centre a row at a time. On the first three-day weekend, the 30-person team spent 14 hours a day in the data centre, replacing a row of server racks and testing the new configuration. Immediately, the data centre became more reliable and stable, Wescott says.
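The quarterly-weekend option amounts to a simple calendar. Purely as an illustration, the sketch below lays one out; the row names, row count and start date are invented.

```python
from datetime import date, timedelta

# Hypothetical row-at-a-time migration calendar in the spirit of the
# quarterly-weekend option described in the article.
rows = [f"Row {chr(65 + i)}" for i in range(8)]   # eight invented rows
first_window = date(2009, 11, 7)                  # a Saturday

for quarter, row in enumerate(rows):
    window = first_window + timedelta(weeks=13 * quarter)  # roughly quarterly
    print(f"{window:%d %b %Y}: replace {row}, then test before Monday")
```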
If management cannot agree to allowing a data centre outage, remind them that it's better to have a planned outage than a sudden, unplanned failure, he says.
"You can't paint the bottom of a boat as it is sailing across the ocean, but if you don't paint it, it's going to sink," says Wescott.