Getting the most out of flash storage

The pain of disk storage and the pitfalls of applying conventional architectures to flash

Over the past few years mainstream enterprises have been turning to NAND flash storage to boost speed and decrease latency, but some vendors still produce products that inhibit customers from achieving flash's full potential.

Solid-state storage offerings that integrate NAND flash as they would traditional disk systems put data far away from the CPU, often behind an outdated storage controller. No matter how fast the NAND is, this setup creates latency, so the application sees only small improvements in actual throughput.

Let's take a step back and look at the pain of disk storage, the pitfalls of applying conventional architectures to flash, and how to achieve the full potential of NAND flash.

The pain

The speed limitations of disk drives compared to CPUs are well known. Less well known are the disk acrobatics administrators have to endure to configure drives for performance. This means buying expensive Fibre Channel disk drives and configuring them in complex schemes that use only a portion of the drive platter to boost performance (a practice known as short-stroking). It also means adding stacks of disks with largely unused capacity that administrators must monitor for failure, not to mention the costs of power, cooling and space to house the systems.

But even with these acrobatics, disks often struggle to meet required performance levels because external disk storage systems sit so far from the CPU. While CPUs and memory operate in microseconds, access to external disk-based systems happens in milliseconds - a thousandfold difference. Even when disk systems can pull data quickly, moving the data to and from the CPU adds a long latency delay, leaving CPUs to spend much of their time waiting for data. This hurts application and database performance.
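
To put that thousandfold gap in concrete terms, the sketch below works through the arithmetic with assumed, order-of-magnitude figures - a 5 ms disk access, a 100 microsecond flash access and a 3 GHz core - rather than measured values:

```python
# Back-of-the-envelope illustration of the latency gap described above.
# All figures are assumed, order-of-magnitude values, not measurements.
DISK_ACCESS_S = 5e-3     # ~5 ms for a random read from an external disk array
FLASH_ACCESS_S = 100e-6  # ~100 us for a read from NAND flash on PCIe
CPU_HZ = 3e9             # a 3 GHz core

for name, latency in (("disk", DISK_ACCESS_S), ("flash", FLASH_ACCESS_S)):
    wasted_cycles = latency * CPU_HZ
    print(f"{name:>5}: {latency * 1e6:8.0f} us -> ~{wasted_cycles:,.0f} CPU cycles spent waiting")

# Output:
#  disk:     5000 us -> ~15,000,000 CPU cycles spent waiting
# flash:      100 us -> ~300,000 CPU cycles spent waiting
```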

The pitfall

If you consider flash as a new form of media, like tape and disk drives are media, then implementing it the same way you implemented previous media technologies is only a small part of the way forward.

By itself, flash removes the part of the latency bottleneck caused by slow spinning disk drives, but it does nothing to resolve the delay in getting process-critical data to and from the CPU.

Storing data in a flash array puts process-critical data on the wrong side of the storage channel, far away from the server CPU that is processing application and database requests.

The result is a minimal performance gain. And in addition to adding more hardware, organisations must also implement complex and costly storage area network infrastructure, including host bus adapters, switches and monolithic arrays.

But most importantly, these architectures retain traditional storage implementations - RAID and SATA/SAS controllers - all optimised for spinning drives, not NAND flash silicon.

The potential

Increasingly, solid-state vendors have recognised that the key to realising improved performance is putting flash close to the CPU, and they are creating devices that use PCIe natively, without the inhibitors of outdated translation layers.

However, some of these devices hamstring performance by placing the flash under the control of legacy SATA or SAS controllers that were originally designed for disks. These protocols and data-handling mechanisms were never intended to operate with NAND flash and do not do justice to its capabilities. It's like putting a performance automobile engine into the body of a 25-year-old clunker.

The same goes for RAID controllers. Conventional RAID mechanisms, originally designed to aggregate the performance of multiple disks and protect against individual disk failures, work well for spinning media. They do not work well for NAND flash, because they inject too much latency.

The best way to place flash in a server is native PCIe access, in which legacy storage technologies are set aside and a cut-through architecture provides the most direct, lowest-latency path between the NAND flash and host memory.

Keep in mind that CPUs never read information from storage; everything must pass through system memory first. To assist in this process, native PCIe NAND flash devices present storage to the application or database like a disk drive, but actually deliver the data to system memory via Direct Memory Access (DMA). This provides the lowest-latency transfers between data storage and CPU processing.
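
As a rough illustration of what "presented like a disk drive" means on the application side, the sketch below reads one block from a native PCIe flash card on Linux using O_DIRECT, which bypasses the page cache so the driver can DMA data straight into the application's buffer. The device path and block size are assumptions for the example, not specifics of any particular product:

```python
import mmap
import os

# The device path is an assumption -- a native PCIe flash card typically
# appears to the OS as an ordinary block device; adjust for your hardware.
DEVICE = "/dev/fioa"
BLOCK = 4096  # O_DIRECT requires block-aligned, block-sized buffers

# O_DIRECT bypasses the page cache, so the driver can DMA the data straight
# into our buffer; the application still just sees "a disk".
fd = os.open(DEVICE, os.O_RDONLY | os.O_DIRECT)
try:
    buf = mmap.mmap(-1, BLOCK)  # anonymous mmap gives a page-aligned buffer
    with open(fd, "rb", buffering=0, closefd=False) as raw:
        nread = raw.readinto(buf)
    print(f"read {nread} bytes from {DEVICE}")
finally:
    os.close(fd)
```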

By offering server CPUs unrestricted access to flash, native PCIe implementations can increase application and database performance by as much as 10x. The difference between this cut-through approach and other solid-state offerings is that it improves application throughput, not just raw media performance. Placing the data in the server without legacy storage protocols lets applications fully utilise server CPUs, because they no longer wait on slow access to data.

Using flash as a disk or a cache

A native PCIe NAND flash device can be used as a disk drive or as a caching device. Both provide significant advantages over conventional disk-based systems.

In disk drive mode, a NAND flash PCIe device can store data as if it were a disk drive itself. This is ideal for databases where the entire data set can be placed on one or more PCIe devices. NAND flash PCIe devices can be aggregated with host OS software or built-in volume management functionality, such as Oracle Automatic Storage Management (ASM). Using high-capacity native NAND flash PCIe devices, it is possible to get well over 10 terabytes in a single server - plenty of capacity to cover a broad market for this approach. Even if the entire data set cannot fit in flash, most databases allow active files, such as index files or "hot" tables, to be manually placed on a specific data store.
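
As a minimal sketch of that manual placement (not any particular database's tooling), the following moves a couple of hypothetical "hot" files onto an assumed flash-backed mount point and leaves symlinks behind; all paths are made up, and whether a given database supports symlinked data files must be checked against its documentation:

```python
import os
import shutil

FLASH_MOUNT = "/mnt/pcie_flash"          # assumed mount point for the PCIe flash device
HOT_FILES = [
    "/var/lib/exampledb/orders_idx.db",  # hypothetical index file
    "/var/lib/exampledb/sessions.db",    # hypothetical "hot" table
]

for src in HOT_FILES:
    dst = os.path.join(FLASH_MOUNT, os.path.basename(src))
    shutil.move(src, dst)  # copy the file onto flash and remove the original
    os.symlink(dst, src)   # original path now points at the flash copy
```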

In caching mode, a NAND flash PCIe device can cache frequently accessed data without changing the existing external storage infrastructure. This is ideal where existing subsystem-based data protection and recovery mechanisms are in place.

Caching frequently accessed data locally within each server delivers maximum performance for active data while retaining existing data stores. This combination is ideal for I/O-intensive applications on bare metal or in virtual environments. Virtual environments often suffer from inadequate I/O, or from I/O that can only be achieved at high cost. Caching frequently accessed virtual machine data locally on PCIe flash devices alleviates this pain.
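
Conceptually, caching mode behaves like a write-through cache sitting in front of a slower backing store. The toy sketch below illustrates only that behaviour - hot blocks served locally, every write persisted to the existing data store - and is not how a real product implements it; actual PCIe flash caching lives in the driver or a caching layer, not in application code:

```python
from collections import OrderedDict

class WriteThroughCache:
    """Toy write-through LRU cache standing in for a local flash tier."""

    def __init__(self, backing_store, capacity=1024):
        self.backing = backing_store   # e.g. a dict standing in for the external array
        self.capacity = capacity       # number of blocks the "flash" tier holds
        self.cache = OrderedDict()     # block_id -> data, kept in LRU order

    def read(self, block_id):
        if block_id in self.cache:             # hot block: served from local flash
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        data = self.backing[block_id]          # cold block: fetched from external storage
        self._insert(block_id, data)
        return data

    def write(self, block_id, data):
        self.backing[block_id] = data          # write-through keeps the array authoritative
        self._insert(block_id, data)

    def _insert(self, block_id, data):
        self.cache[block_id] = data
        self.cache.move_to_end(block_id)
        if len(self.cache) > self.capacity:    # evict the least recently used block
            self.cache.popitem(last=False)
```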

Flash technology brings a lot to the table for speeding up enterprise applications and databases. But when flash is treated as just a new kind of disk drive, businesses miss out on its full potential. Native PCIe approaches that forgo legacy disk protocols and place process-critical data near the CPU to minimise latency deliver on flash's promise to the enterprise.

Fusion-io has pioneered a next-generation storage memory platform that significantly improves processing capabilities within a data centre by relocating process-critical, or "active," data from centralised storage to the server where it is being processed, a methodology referred to as data decentralisation. Fusion-io's platform enables enterprises to increase the utilisation, performance and efficiency of their data centre resources and extract greater value from their information assets.

