Flash runs past read cache

An IT industry analyst article published by SearchDataCenter.

Just because you can add a cache doesn’t mean you should. It is possible to have the wrong kind, so weigh your options before implementing memory-based cache for a storage boost.


Can you ever have too much cache?

Cache is the new black. As a performance optimizer, cache has never gone out of style, and today's affordable flash and cheap memory mean it is worn by every data center device.

Fundamentally, a classic read cache helps avoid long repetitive trips through a tough algorithm or down a relatively long input/output (I/O) channel. If a system does something tedious once, it temporarily stores the result in a read cache in case it is requested again.

Duplicate requests don’t need to come from the same client. For example, in a large virtual desktop infrastructure (VDI) scenario, hundreds of virtual desktops might boot from the same master image of an operating system. With a cache, every user gets a performance boost, and the downstream system is spared a lot of duplicate I/O work.
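A minimal sketch of that read-cache behavior (class and names are illustrative, not from any particular product): the first request does the slow trip, and every repeat is served from memory.

```python
class ReadCache:
    """Minimal read cache: do the tedious work once, serve repeats from memory."""

    def __init__(self, backend_read):
        self.backend_read = backend_read  # the slow path (long I/O channel, tough algorithm)
        self.store = {}                   # volatile in-memory cache
        self.misses = 0

    def read(self, key):
        if key in self.store:             # hit: skip the long trip entirely
            return self.store[key]
        self.misses += 1                  # miss: fetch once, remember for everyone
        value = self.backend_read(key)
        self.store[key] = value
        return value

# Hundreds of VDI desktops booting from one master image: only the
# first read of each block reaches the downstream storage.
cache = ReadCache(backend_read=lambda block: f"data-{block}")
for _ in range(100):
    cache.read("master-image-block-0")
print(cache.misses)  # 1 -- the other ninety-nine reads never hit the back end
```

Note the cache is shared across requesters, which is exactly why the duplicate requests need not come from the same client.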

The problem with using old-school, memory-based cache for writes is that if you lose power, you lose the cache. Thus, unless it has battery backup, memory is used only as a read cache. Writes are set up to “write through” — new data must persist somewhere safe on the back end before the application continues.
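The write-through rule can be sketched the same way (again with illustrative names): the write returns only after the back end has persisted the data, so losing the volatile cache loses nothing.

```python
class WriteThroughCache:
    """Write-through: new data persists on the safe back end before the write completes."""

    def __init__(self, backend):
        self.backend = backend  # durable storage, e.g. disk or battery-backed memory
        self.store = {}         # volatile read cache -- safe to lose

    def write(self, key, value):
        self.backend[key] = value  # persist first; if power fails now, data is safe
        self.store[key] = value    # then populate the cache for later reads

    def read(self, key):
        if key not in self.store:
            self.store[key] = self.backend[key]
        return self.store[key]

durable = {}  # stand-in for the back-end array
cache = WriteThroughCache(durable)
cache.write("lun0/block7", b"payload")
cache.store.clear()               # simulate power loss wiping the volatile cache
print(cache.read("lun0/block7"))  # b'payload' -- recovered intact from the back end
```

The cost of this safety is that every write pays the full back-end latency, which is why writes don't get the same boost reads do.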

Flash is nonvolatile random access memory (NVRAM) and is used as cache or as a tier of storage directly…

…(read the complete as-published article there)

Commodity storage has its place, but an all-flash architecture thrills

An IT industry analyst article published by SearchSolidStateStorage.

Some IT folks are trying to leverage commodity servers and disks with software-implemented storage services. But others want an all-flash architecture.


Every day we hear of budget-savvy IT folks attempting to leverage commodity servers and disks by layering on software-implemented storage services. But at the same time, and at some of the same datacenters, highly optimized flash-fueled acceleration technologies are racing in with competitive performance and compelling price comparisons. Architecting IT infrastructure to balance cost vs. capability has never been easy, but the potential differences and tradeoffs in these storage approaches are approaching extremes. It’s easy to wonder: Is storage going commodity or custom?

One of the drivers for these trends has been with us since the beginning of computing: Moore’s famous law is still delivering ever-increasing CPU power. Today, we see the current glut of CPU muscle being recovered and applied to power up increasingly virtualized and software-implemented capabilities. Last year, for example, the venerable EMC VNX line touted a multi-year effort toward making its controllers operate “multi-core,” which is to say they’re now able to take advantage of plentiful CPU power with new software-based features. This trend also shows up in the current vendor race to roll out deduplication. Even if software-based dedupe requires significant processing, cheap extra compute is enabling wider adoption.

In cloud and object storage, economics trump absolute performance with capacity-oriented and software-implemented architectures popping up everywhere. Still, competitive latency matters for many workloads. When performance is one of the top requirements, optimized solutions that leverage specialized firmware and hardware have an engineered advantage.

For maximum performance, storage architects are shifting rapidly toward enterprise-featured solid-state solutions. Among vendors, the race is on to build and offer the best all-flash solution…

…(read the complete as-published article there)

Flash Gets Faster: Avalanche Technology Intends To Dominate

(Excerpt from original post on the Taneja Group News Blog)

Flash Summit 2014 runs this week, and we expect to hear a lot about how flash is poised to take over the IT storage world. One of the key new entrants to keep an eye on is Avalanche Technology. While they are ultimately aiming to develop STT-MRAM solutions that could totally change the way IT infrastructure works, right now they are busy rolling out their first new solid-state array, built from the ground up to intimately leverage available NAND flash.

…(read the full post)

PernixData FVP Now Cache Mashing RAM and Flash

(Excerpt from original post on the Taneja Group News Blog)

Performance acceleration solutions tend to either replace key infrastructure or augment what you have. PernixData FVP for VMware clusters is firmly in the second camp, today with a new release that makes even better use of total cluster resources to provide I/O performance acceleration to “any VM, on any host, with any shared storage”.

…(read the full post)

New Team At Violin Memory Playing Flashier Microsoft Music

(Excerpt from original post on the Taneja Group News Blog)

Recently we caught up with Violin Memory, and they are full of energetic plans to capitalize on their high-performance flash arrays, elevating their game from a focus on bringing fast technology to market to one of addressing big market problems head-on. Today they are announcing a very interesting new solution that creates a whole new segment of storage: a Microsoft Windows-specific file-serving flash array.

…(read the full post)

Stop counting megabytes; it’s all about application-aware storage now

An IT industry analyst article published by SearchStorage.

Raw capacity numbers are becoming less useful as deduplication, compression and application-aware storage provide more value than sheer capacity.


Whether clay pots, wooden barrels or storage arrays, vendors have always touted how much their wares can reliably store. And invariably, the bigger the vessel, the more impressive and costly it is, both to acquire and manage. The preoccupation with size as a measure of success implies that we should judge and compare offerings on sheer volume. But today, the relationship between physical storage media capacity and the effective value of the data “services” it delivers has become much more virtual and cloudy. No longer does a megabyte of effective storage mean a megabyte of real storage.

Most array vendors now incorporate capacity-optimizing features such as thin provisioning, compression and data deduplication. But now it looks like those vendors might just be selling you megabytes of data that aren’t really there. I agree that it’s the effective storage and resulting cost efficiency that counts, not what goes on under the hood or whether the actual on-media bits are virtual, compacted or shared. The type of engine and the gallons in the tank are interesting, but it’s the speed and distance you can go that matter.

Corporate data that includes such varied things as customer behavior logs, virtual machine images and corporate email that’s been globally deduped and compressed might deflate to a twentieth or less of its former glory. So when a newfangled flash array has only 10 TB of actual solid-state drives, but based on an expected minimum dedupe ratio is sold as a much larger effective 100+ TB, are we still impressed with the bigger number? We know our raw data is inherently inflated, with too many copies and too little sharing. It should always have been stored more optimally.
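The arithmetic behind that sales pitch is simple (a sketch; the 10:1 ratio is the example above, not a guarantee for any data set):

```python
def effective_capacity_tb(raw_tb, reduction_ratio):
    """Marketed capacity: raw media multiplied by an assumed data-reduction ratio."""
    return raw_tb * reduction_ratio

def achieved_ratio(logical_tb, physical_tb):
    """What your particular data set actually delivered."""
    return logical_tb / physical_tb

# 10 TB of actual SSDs sold as 100+ TB effective implies an assumed
# reduction ratio of at least 10:1.
print(effective_capacity_tb(10, 10))  # 100

# But if your data only reduces 4:1, that "100 TB" array really holds 40 TB.
print(effective_capacity_tb(10, 4))   # 40
```

The gap between the assumed ratio and the achieved one is exactly the risk the next paragraph raises.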

But can we believe that bigger number? What’s hard to know, although perhaps it’s what we should be focusing on, is the reduction ratio we’ll get with our particular data set, as deflation depends highly on both the dedupe algorithm and the content…

…(read the complete as-published article there)

5 Ways Storage Is Evolving

An IT industry analyst article published by Virtualization Review.

Be sure to take advantage of these storage industry trends.

With its acquisition of Virsto, VMware certainly understands storage as usual doesn’t cut it when it comes to dense, high-powered virtual environments. This technology addresses the so-called “I/O blender” effect that comes from mixing the I/O from many VMs into one stream on its way to external shared storage. It does this by journaling what looks like highly random I/O to flash, then asynchronously sorting it out to hard disk. This is more an optimization, though, than a game-changing storage strategy.

Here are five broad trends in the storage industry that you can take advantage of today.

  • TAKE 1 Flash
    Flash has certainly changed the storage game. There are many ways it’s applied — at the server (such as PCIe cards from Fusion-io and EMC XtremIO), in the network (such as Astute), or in the array (such as pure flash and hybrid storage from just about everybody). To make the most of your flash investment, consider where high performance will have the most impact and which applications are best suited to it.
  • TAKE 2 Hyperconvergence
    We’ve all seen pre-packaged “converged” racks of servers, storage, networking, and hypervisor platforms from vendors such as VCE, Dell, and HP. These can be great deals if you want a single source and low risk when building a virtual environment. However, the storage isn’t necessarily different from what you’d get if you built it yourself. In some ways, running a virtual storage appliance is a type of convergence that architecturally shifts the burden of hosting storage directly onto your hypervisors. Taking things a step further are hyperconvergence vendors like SimpliVity, Nutanix and Scale Computing. These collapse compute, storage and hypervisor into modular building blocks that make scaling out a datacenter as easy as stacking Legos. Purpose-built storage services are tightly integrated and support optimized and highly cost-efficient VM operations.
  • TAKE 3 VM Centricity


…(read the complete as-published article there)

MRAM technology likely choice as post-flash solid-state storage

An IT industry analyst article published by SearchSolidStateStorage.

NAND flash-based storage is becoming a common alternative, but NAND flash could soon be replaced by newer forms of non-volatile memory like MRAM technology.

Flash storage is everywhere these days. It’s hard to have a discussion about IT infrastructure without someone talking about how flash storage can be leveraged to make server and storage architectures faster. It’s not necessarily cheaper, although a large increase in workload hosting density can provide cost justification. But it will certainly deliver higher performance at key points in the I/O stack in terms of outright latency; and with clever approaches to auto-tiering, write journaling and caching, higher throughputs are within easy reach.

But flash as a non-volatile random-access memory (NVRAM) technology has its problems. For one, it wears out. The most common type of flash is built from NAND transistor cells that, unlike those in static RAM (SRAM), carry an internal “insulation” layer that can hold an electric charge without external power. This is what makes it non-volatile; but writing to NAND flash requires a relatively large “charge pump” of voltage, which makes it slower than RAM and eventually wears it out. Perversely, wear-leveling algorithms designed to spread the damage evenly tend to increase overall write amplification, which in turn causes more total wear. And looking forward, because of the physics involved, flash is inherently constrained in how much it can eventually shrink and how dense it can get.
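Write amplification, mentioned above, is just the ratio of what the media actually writes to what the host asked it to write. A sketch with illustrative numbers (the 3:1 figure is a made-up example, not a measured value):

```python
def write_amplification(physical_bytes, host_bytes):
    """Total bytes the flash media writes per byte the host writes."""
    return physical_bytes / host_bytes

# Wear leveling and garbage collection move data around internally,
# so (for example) 1 GB of host writes might cost 3 GB of media writes --
# a write amplification factor of 3, meaning cells wear three times faster
# than the host workload alone would suggest.
print(write_amplification(3_000_000_000, 1_000_000_000))  # 3.0
```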

Flash is also constrained in terms of density, power and performance compared to active DRAM. This currently isn’t a problem, but as we continue to discover ways to creatively leverage flash to accelerate I/O, flash will ultimately give way to newer types of non-volatile memory that aren’t as limited. Perhaps the most promising technology today is a type of NVRAM based on magnetoresistance. Magnetoresistive random-access memory (MRAM) stores information as a magnetic orientation rather than as an electrical charge. This immediately provides much higher read and write performance, much closer to DRAM speeds than flash, because bits are read by testing with voltage, not current, and written with a small current boost, not a huge charge. (Current DRAM latency is less than 10 nanoseconds [ns]; MRAM is currently around 50 ns, and flash is much slower at 20 to 200 microseconds depending on read or write.)
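Using those ballpark latencies, the gap is easy to quantify (representative figures from the paragraph above, not benchmarks):

```python
DRAM_NS = 10            # DRAM: under 10 ns
MRAM_NS = 50            # MRAM: around 50 ns
FLASH_READ_NS = 20_000  # flash read: roughly 20 microseconds

print(FLASH_READ_NS / MRAM_NS)  # 400.0 -- an MRAM read is ~400x faster than a flash read
print(MRAM_NS / DRAM_NS)        # 5.0   -- but still ~5x behind DRAM
```

That two-orders-of-magnitude jump over flash, while sitting within a small factor of DRAM, is what makes MRAM such a promising post-flash candidate.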

…(read the complete as-published article there)

Excuse me, but I think your cache is showing…

(Excerpt from original post on the Taneja Group News Blog)

Everybody these days is adding flash-based SSD to their storage arrays.  Some are offering all flash storage for ultra-high performance.  And a few are popping flash storage right into the server as a very large, persistent cache.  But taking advantage of flash in these ways requires either hardware refresh or significant service disruption – or both.

GridIron offers a drop-in, non-disruptive way to immediately supercharge existing infrastructure. Their TurboCharger appliances logically plug into the middle of the SAN fabric, where they can be installed (and removed) non-disruptively by taking advantage of I/O multi-pathing. Once installed, they jump into the data path as a virtual LUN fronting the real LUN on the back end, providing a massive amount of SSD write-through cache that automatically adjusts to multiple workloads. Because it’s in the SAN, TurboCharger can virtually “front” any underlying storage — even storage that is in turn further virtualized.

GridIron customers have generally faced serious data access challenges with large databases and in consolidated and virtualized environments that benefit from read-intensive I/O acceleration. GridIron is now expanding its product line to help accelerate structured and unstructured “big data” access. The OneAppliance all-flash product line includes the FlashCube for offloading temp, log, and scratch space write-intensive workloads, and an iNode that combines massive flash and compute together for building high-performance compute clusters.

GridIron is clearly differentiating itself from other flash solutions with its direct and practical approach to bringing the power of flash to bear on the extreme data access and movement problems of big data.

…(read the full post)