
3PAR's view of HP's so-called thin provisioning

It isn't thin but it might be chubby


I talked to Craig Nunes, 3PAR's marketing VP, and asked him how he viewed HP's take on thin provisioning or, more properly, adaptive provisioning. Here is what he said. It is quite detailed and worth a careful read. He starts by setting a context, a sort of thin provisioning primer.

Thin provisioning primer

First of all, our desire is for our entire industry to go 'Thin.' Nothing would be better for the storage data centre and for the organisations that invest in them. That being said, we are very careful in applying the 'Thin' label. Some thin provisioning implementations are 'thin' in name only.

To be truly thin means to maximise utilisation efficiency while minimising administrative overhead, minimising risk and avoiding performance penalties. The way you maximise utilisation efficiency is to keep capacity in the single free space buffer as long as possible, so that it does not get “pre-dedicated” either to the volumes themselves or to the underlying constructs that support these thin provisioned volumes – whether they are called 3PAR Common Provisioning Groups (CPG), Hitachi Dynamic Pools (HDP) or NetApp Aggregates. If you can’t achieve this, your utilisation efficiency may be slightly better than with traditional fat provisioning, but it will never come anywhere close to the levels that can be achieved with true thin provisioning, for reasons I will describe.
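
To put rough numbers on that utilisation claim, here is a minimal sketch in Python. It is my illustration, not 3PAR's; the pool sizes, the 6TB of written data and the 256GB allocation-unit overhead are purely assumed figures.

# Illustrative comparison of "chubby" versus thin utilisation efficiency.
# All figures are assumptions for the sake of the example, not vendor data.

def utilisation(written_gb, dedicated_gb):
    """Fraction of the physically dedicated capacity that actually holds data."""
    return written_gb / dedicated_gb

# Chubby: three pools each pre-dedicated with 10TB to avoid manual top-ups,
# while the applications have only written 6TB in total.
chubby = utilisation(written_gb=6_000, dedicated_gb=3 * 10_000)

# Thin: capacity stays in one shared free-space buffer and is drawn only as
# data is written, so dedicated capacity tracks written data (plus a small
# assumed allocation-unit overhead of 256GB).
thin = utilisation(written_gb=6_000, dedicated_gb=6_000 + 256)

print(f"chubby: {chubby:.0%} utilised, thin: {thin:.0%} utilised")

On these assumed numbers the pre-dedicated pools sit at 20 percent utilisation, while the shared thin buffer runs at roughly 96 percent.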

It should be autonomic

Now the only way you can make this process of capacity allocation to volumes as close to “just-in-time” as possible, and therefore achieve high utilisation efficiency, is for the whole process to be autonomic – i.e. it happens within the system automatically, without any ongoing manual administrative work. If it is a manual process, the administrative overhead can be horrendous: creating RAID groups, creating logical disks, assigning logical disks to provisioning pools on an ongoing basis (not just once up front), and so on.

This can take days or weeks to carefully plan and complete on each occasion that replenishment is necessary. If there is any level of manual work involved (beyond adding physical disks into the system), then not only are you increasing management complexity, you are also adding, as we shall see, significant risk. And to counter this risk, administrators will pre-dedicate larger amounts of storage capacity to the underlying pools, losing the very improvements in utilisation efficiency the customer was hoping to achieve.

When the process is not autonomic, administrators not only have to concern themselves with whether there is enough capacity left in the single free space buffer, they also have to worry about whether any of the provisioning pools/aggregates is going to run out of space. If the customer needs differing pools servicing different groups of applications, then an administrator has to be permanently watching the assigned capacity levels in all these individual pools/aggregates. It is important to have different pools/aggregates on a storage consolidation platform so that different service levels can be met (i.e. different RAID levels and disk types), and to ensure good performance for specific applications.

But with more provisioning pools/aggregates to watch, there is a greater danger that the administrator will miss the fact that one of the pools is running out of capacity and needs to be manually replenished. The resulting risk is that the set of applications using that specific provisioning pool/aggregate runs out of space and suffers a series of application write failures. Not a pretty result for the customer.

Likelihood of over-provisioning

So what is the storage administrator’s likely response if the replenishment of the different provisioning pools/aggregates is a manual, not autonomic, process? The answer is that they will want to minimise the likelihood that the provisioning pools/aggregates can run out of space. They will pre-configure a lot more capacity into these structures than the applications actually need! This results in much poorer storage capacity utilisation. Whereas with fat provisioning the allocated-but-unused capacity was locked up in storage volumes, administrators will now simply lock up unused capacity in a number of provisioning pools/aggregates instead.

So this type of implementation is not Thin Provisioning; it is really Chubby Provisioning. And Chubby Provisioning provides relatively small advantages in utilisation over traditional Fat Provisioning, but at the cost of significantly higher administrative workload. Furthermore, it still holds the danger that a storage administrator might forget to carry out a manual replenishment in the future, leading to the risk of application failures. I believe we spoke of this briefly in London a couple of weeks ago.

So with that said, the example you used in your article (below) really describes a chubby implementation.

"Traditionally an application is allocated a LUN with the full amount of storage capacity needed, say 20GB. With thin provisioning it is actually given a fraction of this, say 5GB, but spoofed to believe it has the full 20GB. Then, when the 5GB is nearly filled up with data another 5GB will be given to it."

With thin provisioning, there is no provisioning of 5GB, then another 5GB, then another, then another. Certain vendors may try to pass this off as 'thin,' but it is not. If there is a 20GB LUN with 2GB of written data, 2GB of physical capacity is used. And as the app writes to new blocks, physical capacity is provisioned accordingly without any admin involvement.

Thin provisioning allocates autonomically in direct response to application writes. There is no planning, no administrative action, nothing. In fact, if the volume is expected to grow to 60GB over a three-year period, there is no penalty whatsoever in provisioning 60GB once, up-front – no real capacity is used beyond that needed to support the written data, and the need to manage the ongoing growth of the volume is eliminated.
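
For readers who prefer code to prose, here is a minimal allocate-on-write sketch of the behaviour Nunes describes. It is my own illustration, not 3PAR's implementation; the 256MB extent size, the class names and the free-pool interface are all assumptions.

# Minimal allocate-on-write model: a thin LUN presents a large virtual size
# but maps physical extents only when a block range is first written.
EXTENT = 256 * 1024 * 1024  # assumed 256MB allocation unit

class FreePool:
    def __init__(self, physical_bytes):
        self.free = physical_bytes

    def take_extent(self):
        if self.free < EXTENT:
            raise RuntimeError("free-space buffer exhausted: add physical disks")
        self.free -= EXTENT
        return EXTENT

class ThinLUN:
    def __init__(self, virtual_bytes, pool):
        self.virtual_bytes = virtual_bytes   # size the host sees, e.g. 60GB
        self.pool = pool
        self.mapped = set()                  # indexes of extents already backed

    def write(self, offset, length):
        """Back extents with physical capacity only when they are first written."""
        first, last = offset // EXTENT, (offset + length - 1) // EXTENT
        for idx in range(first, last + 1):
            if idx not in self.mapped:       # first write to this extent
                self.pool.take_extent()      # autonomic: no admin involvement
                self.mapped.add(idx)

    def physical_used(self):
        return len(self.mapped) * EXTENT

pool = FreePool(physical_bytes=2 * 1024**4)            # 2TB shared buffer
lun = ThinLUN(virtual_bytes=60 * 1024**3, pool=pool)   # host sees a 60GB LUN
lun.write(offset=0, length=2 * 1024**3)                # application writes 2GB
print(lun.physical_used() // 1024**3, "GB physically allocated of",
      lun.virtual_bytes // 1024**3, "GB presented")

In this toy model, provisioning the full 60GB up-front costs nothing: physical extents are consumed from the shared buffer only as blocks are actually written.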

On the subject of scale

One final point about thin provisioning, which has to do with scale. As you can imagine, thin provisioning a disk array that supports 56 disk drives has much less value than thin provisioning one with thousands of disk drives. You can over-provision the volumes to the application server, but as the application data grows it will frequently run out of the performance and capacity it needs. New arrays will have to be purchased and installed, and volumes will then need to be migrated from one array to another for additional performance and capacity.

The addition of new arrays in a SAN and the associated data migrations require planning and application downtime, and are therefore not something to be undertaken on a regular basis. I make this point because an array that is not capable of substantial scalability will provide only limited benefits from thin provisioning, which, again, is meant to maximise utilisation efficiency while minimising administrative overhead, minimising risk and avoiding performance penalties.

Back to HP

So back to the story on HP… What it appears HP has done is not 'thin' and not even 'chubby.' It is simply a Windows-only enhancement to manual, 'fat' provisioning with which HP is trying to claim the benefits of thin provisioning. Thin provisioning allows thin Windows Server volumes to consume physical capacity autonomically and online, only as application writes occur.

With HP's new feature, fat Windows Server volumes can be manually grown online by the storage administration staff. Fat and manual provisioning on a platform with as few as 56 disk drives versus thin and autonomic provisioning on a massively scalable array – the differences are stark. Unfortunately, HP has completely missed the distinction. That, coupled with the fact that growing a volume online is really nothing new, tells me that HP customers will have to look elsewhere for a cure to their storage utilisation issues.

