Top 5 ways to cut your storage needs
Reduce your need for expensive data centre space and media
By Robert L. Scheier | Computerworld US | Published: 09:30, 28 September 2010
With the economy still shaky and the need for storage exploding, almost every storage vendor claims it can reduce the amount of data you must store. Trimming your data footprint not only cuts costs for hardware, software, power and data centre space, but also eases the strain on networks and backup windows.
But how do you know which technique to use? First you have to understand how your business uses data and determine when the cost savings of data reduction are worth the resulting drop in performance.
The technique that's best for you depends not so much on the industry you're in as on the type of data you store. For example, deduplication often doesn't deliver significant savings for X-rays, engineering test data, video or music, but it can significantly reduce the cost of backing up virtual machines used as servers. Here are five techniques to help reduce your stored data volume.
Deduplication, the process of finding and eliminating duplicate pieces of data stored in different data sets, can reduce storage needs up to 90%. For example, through deduplication, you could ensure that you store only one copy of an attachment that was sent to hundreds of employees. Deduplication has become almost a requirement for backup, archiving and just about any form of secondary storage where speed of access is less important than reducing the data footprint.
Chris Watkis, IT director at health care advertising and marketing firm Grey Healthcare Group, is seeing reduction ratios as high as 72:1 for backup data, thanks to a deduplication process that uses FalconStor Software's Virtual Tape Library storage appliance. And cloud storage services vendor i365 is achieving 30:1 to 50:1 reductions in data on a mixed workload of Microsoft Exchange, SharePoint, SQL Server and VMware virtual machine files, says Chief Technology Officer David Allen.
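To put ratios like these in perspective: a reduction ratio of R:1 means only 1/R of the raw data is actually stored, so the space saved is (1 - 1/R). A quick sketch of the arithmetic:

```python
def percent_saved(ratio: float) -> float:
    """Space saved by a reduction ratio of `ratio`:1, as a percentage."""
    return (1 - 1 / ratio) * 100

print(f"72:1 saves {percent_saved(72):.1f}% of raw capacity")   # ~98.6%
print(f"30:1 saves {percent_saved(30):.1f}% of raw capacity")   # ~96.7%
```

Note how quickly the curve flattens: going from 30:1 to 72:1 recovers only about two more percentage points of capacity.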
Data can be deduped at the file or block level, with different products able to examine blocks of varying sizes. In most cases, the more fine-grained the assessment a system can do, the greater the space savings. But fine-grained deduplication might take longer and therefore slow data access speeds.
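A minimal sketch of how block-level deduplication works, using fixed-size blocks keyed by SHA-256 hashes; real products typically use variable-size chunking and far more compact indexes, so the function names and structure here are illustrative only:

```python
import hashlib

def dedupe_blocks(data: bytes, block_size: int) -> tuple[dict, list]:
    """Store each unique fixed-size block once, keyed by its SHA-256 hash."""
    store: dict[str, bytes] = {}   # hash -> unique block (what actually gets stored)
    recipe: list[str] = []         # ordered hashes needed to rebuild the file
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # duplicates cost nothing extra
        recipe.append(digest)
    return store, recipe

def rehydrate(store: dict, recipe: list) -> bytes:
    """Return deduplicated data to its original form."""
    return b"".join(store[d] for d in recipe)

# Repetitive data deduplicates well; smaller blocks find more duplicates.
data = b"ABCD" * 1024                     # 4,096 bytes of raw data
store, recipe = dedupe_blocks(data, 4)    # fine-grained 4-byte blocks
assert rehydrate(store, recipe) == data
print(len(store), "unique block(s) stored for", len(recipe), "blocks written")
```

The granularity trade-off is visible here: smaller blocks mean more hash computations and a longer recipe to track, which is why fine-grained deduplication can slow access.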
Deduplication can be done preprocessing, also called inline (as the data is being written to its target), or postprocessing (after the data has been stored on its target). Postprocessing is best if it's critical to meet backup windows with fast data movement, says Greg Schulz, senior analyst at The Server and StorageIO Group. But consider preprocessing if you have "time to burn" and need to reduce costs, he says.
While inline deduplication can cut the amount of data stored by a ratio of about 20:1, it isn't scalable, and it can hurt performance and force users to buy more servers to perform the deduplication, critics say. On the other hand, Schulz says that postprocessing deduplication requires more storage as a buffer, making that space unavailable for other uses.
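The trade-off can be sketched with two toy stores: an inline store that hashes on every write, and a post-processing store that accepts raw writes into a staging buffer (the extra space Schulz describes) and deduplicates later. The class names and structure are illustrative assumptions, not any vendor's design:

```python
import hashlib

class InlineDedupeStore:
    """Inline: each write is hashed, and duplicates are dropped
    before they ever reach the backing store."""
    def __init__(self):
        self.blocks = {}            # hash -> block (the backing store)

    def write(self, block: bytes) -> None:
        digest = hashlib.sha256(block).hexdigest()
        self.blocks.setdefault(digest, block)   # hashing cost paid at write time

class PostProcessStore:
    """Post-processing: writes land in a staging buffer at full speed;
    a later pass deduplicates them into the backing store."""
    def __init__(self):
        self.staging = []           # buffer space unavailable for other uses
        self.blocks = {}

    def write(self, block: bytes) -> None:
        self.staging.append(block)  # fast path: no hashing during the backup window

    def dedupe_pass(self) -> None:
        for block in self.staging:
            self.blocks.setdefault(hashlib.sha256(block).hexdigest(), block)
        self.staging.clear()        # reclaim the buffer

blocks = [b"same block"] * 3 + [b"unique"]
inline = InlineDedupeStore()
post = PostProcessStore()
for b in blocks:
    inline.write(b)
    post.write(b)
# Before the pass, staging holds all four blocks (the buffer cost)...
post.dedupe_pass()
# ...after it, both stores hold just the two unique blocks.
```

The inline store pays CPU cost on every write; the post-processing store pays in temporary capacity, which mirrors the trade-off described above.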
For customers with multiple servers or storage platforms, enterprisewide deduplication saves money by eliminating duplicate copies of data stored on the various platforms. This is critical because most organisations create as many as 15 copies of the same data for use by applications such as data mining, ERP and customer relationship management systems, says Randy Chalfant, vice president of strategy at disk-based storage vendor Nexsan. Users might also want to consider a single deduplication system to make it easier for any application or user to "rehydrate" data (return it to its original form) as needed and avoid incompatibilities among multiple systems.
Schulz says primary deduplication products could perform in preprocessing mode until a certain performance threshold is hit, then switch to postprocessing.
Another option, policy-based deduplication, allows storage managers to choose which files should undergo deduplication, based on their size, importance or other criteria.
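A policy filter of this kind might look like the following sketch; the size threshold and skip list are made-up examples of the criteria described, not recommendations:

```python
from pathlib import Path

# Hypothetical policy values (illustrative, not a real product's defaults).
SKIP_SUFFIXES = {".jpg", ".mp3", ".mp4", ".zip"}   # compressed media dedupes poorly
MIN_SIZE = 64 * 1024                               # small files: little savings, extra overhead

def should_dedupe(path: Path, size_bytes: int) -> bool:
    """Apply the policy: deduplicate only large files of promising types."""
    return size_bytes >= MIN_SIZE and path.suffix.lower() not in SKIP_SUFFIXES

print(should_dedupe(Path("backup.vmdk"), 20 * 2**30))   # True: large VM image
print(should_dedupe(Path("song.mp3"), 8 * 2**20))       # False: compressed media
```

In practice such rules let administrators spend deduplication's CPU and latency cost only where the space savings justify it.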
SFL Data, which gathers, stores, indexes, searches and provides data for companies and law firms involved in litigation, has found a balance between performance and data reduction. It's deploying Ocarina Networks' 2400 Storage Optimiser for "near online" storage of compressed and deduplicated files on a BlueArc Mercury 50 cluster that scales up to 2 petabytes of usable capacity, rehydrating those files as users require them.
"Rehydrating the files slows access time a bit, but it's far better than telling customers they have to wait two days" to access those files, says SFL's technical director, Ruth Townsend, noting that the company gets as much as 50% space savings through deduplication and file compression.