Flash storage market remains a tsunami

An IT industry analyst article published by SearchSolidStateStorage.


A few months ago, Taneja Group surveyed 694 enterprise IT folks (about half in management, half in architecture/operations) about their storage acceleration and performance needs, perceptions and plans. Of course, examining the role and future of flash storage was a big part of our analysis of the flash storage market.

One of the key questions we asked was whether they thought all-flash arrays would be used for all tier 1 workloads in the enterprise data center by the end of 2017, less than two years out. We found that 18% agreed without qualification, while another 35% agreed but thought they might need more time to accommodate natural storage refresh cycles. Together, that puts a majority of 53% firmly in the all-flash future camp, while only 10% outright disagreed that all-flash would become the dominant future storage platform.

Of course, “tier 1” can mean different things to different folks, but people generally agree that tier 1 is their primary application storage powering key business processes. We followed up with several vendors about their visions of an all-flash future footprint and, unsurprisingly, found broader, more inclusive descriptions. In general, all-flash array vendors think that all tier 1 and tier 2 data center storage could be on all-flash, while vendors with wider portfolios that include traditional storage and hybrids have naturally hedged their bets on the flash storage market, preferring to “let” clients obtain whatever they see as best fitting their needs.

…(read the complete as-published article there)

Figuring out the real price of flash technology

An IT industry analyst article published by SearchSolidStateStorage.

Sometimes comparing the costs of flash arrays is an apples-to-oranges affair — interesting, but not very helpful.


We’re often told by hybrid and all-flash array vendors that their particular total cost of ownership (TCO) is effectively lower than the other guy’s. We’ve even heard vendors claim that by taking certain particulars into account, the per-gigabyte price of their flash solution is lower than that of spinning disk. Individually, the arguments sound compelling; but stack them side by side and you quickly run into apples-and-oranges issues.
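
To make those “certain particulars” concrete, here is a minimal sketch of how an effective per-gigabyte price can be computed once usable capacity and data reduction are factored in. All of the numbers and names are hypothetical placeholders of mine, not vendor figures.

```python
def effective_price_per_gb(list_price, raw_tb, usable_fraction, data_reduction):
    """Effective $/GB once overheads and data reduction are applied.

    list_price      -- purchase price of the array (USD)
    raw_tb          -- raw capacity shipped (TB)
    usable_fraction -- fraction left after RAID, sparing and over-provisioning
    data_reduction  -- assumed dedupe/compression ratio (4.0 means 4:1)
    """
    usable_gb = raw_tb * 1000 * usable_fraction
    effective_gb = usable_gb * data_reduction
    return list_price / effective_gb

# Hypothetical numbers, for illustration only.
flash = effective_price_per_gb(list_price=150_000, raw_tb=50,
                               usable_fraction=0.7, data_reduction=4.0)
disk = effective_price_per_gb(list_price=60_000, raw_tb=200,
                              usable_fraction=0.8, data_reduction=1.0)
print(f"all-flash: ${flash:.2f}/GB effective, disk: ${disk:.2f}/GB effective")
```

Swap in a different assumed data-reduction ratio and the ordering of the two results can flip, which is exactly why undisclosed assumptions make vendor price claims so hard to line up.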

Storage has many factors that should be profiled and evaluated, such as IOPS, latency, bandwidth, protection, reliability, consistency and so on, and these must be matched to client workloads with their unique read/write mixes, burstiness, data sizes, metadata overhead and quality of service/service-level agreement requirements. Standard benchmarks may be interesting, but the best way to evaluate storage is to test it under your particular production workloads; a sophisticated load-generation and modeling tool like the one from Load DynamiX can help with that process.
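
Load DynamiX is a commercial appliance, but for a rough do-it-yourself approximation of “test it under your own workload,” something like the open-source fio tool can replay a profiled I/O mix. The sketch below is illustrative only; the mix, block size, queue depth and device path are placeholders to be replaced with values measured from your own applications.

```python
# Sketch: turn a profiled workload into a simple fio job file.
# fio is an open-source I/O load generator; all numbers below are
# illustrative placeholders, not recommendations.
workload = {
    "read_pct": 70,      # read/write mix observed in production
    "block_size": "8k",  # dominant I/O size
    "queue_depth": 32,   # concurrency seen at the array
    "runtime_sec": 300,
}

job = f"""[production-like]
ioengine=libaio
direct=1
rw=randrw
rwmixread={workload['read_pct']}
bs={workload['block_size']}
iodepth={workload['queue_depth']}
runtime={workload['runtime_sec']}
time_based=1
filename=/dev/sdX
"""
# /dev/sdX is a placeholder; point it at the LUN or device under test.

with open("candidate_array.fio", "w") as f:
    f.write(job)
print("Run with: fio candidate_array.fio")
```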

But as analysts, when we try to make industry-level evaluations hoping to compare apples to apples, we run into a host of half-hidden factors we’d like to see made explicitly transparent if not standardized across the industry. Let’s take a closer look…

…(read the complete as-published article there)

Commodity storage has its place, but an all-flash architecture thrills

An IT industry analyst article published by SearchSolidStateStorage.

Some IT folks are trying to leverage commodity servers and disks with software-implemented storage services. But others want an all-flash architecture.


Every day we hear of budget-savvy IT folks attempting to leverage commodity servers and disks by layering on software-implemented storage services. But at the same time, and in some of the same data centers, highly optimized flash-fueled acceleration technologies are racing in with competitive performance and compelling price comparisons. Architecting IT infrastructure to balance cost vs. capability has never been easy, but the potential differences and tradeoffs between these storage approaches are reaching extremes. It’s easy to wonder: Is storage going commodity or custom?

One of the drivers for these trends has been with us since the beginning of computing: Moore’s famous law is still delivering ever-increasing CPU power. Today, we see the current glut of CPU muscle being recovered and applied to power up increasingly virtualized and software-implemented capabilities. Last year, for example, the venerable EMC VNX line touted a multi-year effort toward making its controllers operate “multi-core,” which is to say they’re now able to take advantage of plentiful CPU power with new software-based features. This trend also shows up in the current vendor race to roll out deduplication. Even if software-based dedupe requires significant processing, cheap extra compute is enabling wider adoption.
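
As a rough illustration of why dedupe consumes CPU, the sketch below shows the core of inline block deduplication: fingerprint each incoming block with a cryptographic hash and store only blocks not seen before. The hashing is the compute cost that today’s surplus cores absorb; this toy version (my own naming, in-memory only) ignores the variable chunking, collision verification and metadata persistence that real arrays must handle.

```python
import hashlib

class DedupeStore:
    """Toy inline dedupe: store each unique block once, keyed by its SHA-256."""

    def __init__(self):
        self.blocks = {}      # fingerprint -> block data
        self.refcount = {}    # fingerprint -> number of logical references

    def write_block(self, data: bytes) -> str:
        fp = hashlib.sha256(data).hexdigest()   # the CPU-heavy step
        if fp not in self.blocks:
            self.blocks[fp] = data              # new data, store it physically
        self.refcount[fp] = self.refcount.get(fp, 0) + 1
        return fp                               # logical address maps to fingerprint

store = DedupeStore()
for block in (b"A" * 4096, b"B" * 4096, b"A" * 4096):
    store.write_block(block)
print(f"logical blocks: 3, physical blocks stored: {len(store.blocks)}")
```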

In cloud and object storage, economics trump absolute performance with capacity-oriented and software-implemented architectures popping up everywhere. Still, competitive latency matters for many workloads. When performance is one of the top requirements, optimized solutions that leverage specialized firmware and hardware have an engineered advantage.

For maximum performance, storage architects are shifting rapidly toward enterprise-featured solid-state solutions. Among vendors, the race is on to build and offer the best all-flash solution…

…(read the complete as-published article there)

MRAM technology likely choice as post-flash solid-state storage

An IT industry analyst article published by SearchSolidStateStorage.

NAND flash-based storage is becoming a common alternative, but NAND flash could soon be replaced by newer forms of non-volatile memory like MRAM technology.

Flash storage is everywhere these days. It’s hard to have a discussion about IT infrastructure without someone talking about how flash storage can be leveraged to make server and storage architectures faster. It’s not necessarily cheaper, although a large increase in workload hosting density can provide cost justification. But it will certainly deliver higher performance at key points in the I/O stack in terms of outright latency; and with clever approaches to auto-tiering, write journaling and caching, higher throughputs are within easy reach.

But flash as a non-volatile random-access memory (nvRAM) technology has its problems. For one, it wears out. The most common type of flash is built from NAND transistor cells which, unlike static RAM (SRAM), include an internal “insulation” layer that can hold an electric charge without external power. This is what makes it non-volatile; but writing to NAND flash requires a relatively large “charge pump” of voltage, which makes it slower than RAM and eventually wears it out. Perversely, wear-leveling algorithms designed to spread the damage evenly tend to increase overall write amplification, which in turn causes more total wear. And looking forward, because of the physics involved, flash is inherently constrained in how far it can eventually shrink and how dense it can get.
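
A rough back-of-the-envelope calculation shows how write amplification eats into endurance. Assuming purely hypothetical figures for capacity, rated program/erase cycles and daily host writes, estimated drive lifetime scales inversely with the write amplification factor:

```python
def drive_life_years(capacity_tb, pe_cycles, host_writes_tb_per_day, waf):
    """Estimated years until the rated program/erase cycles are exhausted.

    waf -- write amplification factor: NAND writes per host write
    """
    total_nand_writes_tb = capacity_tb * pe_cycles        # total endurance budget
    nand_writes_per_day = host_writes_tb_per_day * waf    # actual wear per day
    return total_nand_writes_tb / nand_writes_per_day / 365

# Hypothetical 4 TB drive, 3,000 P/E cycles, 2 TB of host writes per day.
for waf in (1.0, 2.0, 4.0):
    print(f"WAF {waf}: ~{drive_life_years(4, 3000, 2, waf):.1f} years")
```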

Flash is constrained in terms of density, power and performance compared to active DRAM. This isn’t a problem today, but as we continue to discover ways to creatively leverage flash to accelerate I/O, flash will ultimately give way to newer types of non-volatile memory that aren’t as limited. Perhaps the most promising technology today is a type of nvRAM based on magnetoresistance. Magnetoresistive random-access memory (MRAM) stores information as a magnetic orientation rather than as an electrical charge. That immediately provides read and write performance much closer to DRAM speeds than flash, because bits are read by testing with voltage, not current, and written with a small current boost, not a huge charge. (Current DRAM latency is less than 10 nanoseconds [nsec]; MRAM is currently around 50 nsec; flash is much slower, at 20 microseconds to 200 microseconds depending on whether it’s a read or a write.)
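
Plugging the latency figures quoted above into a quick calculation puts the gap in perspective (taking 20 microseconds as a best-case flash access and ignoring queuing and controller overhead):

```python
# Latencies quoted in the text; flash varies widely between reads and writes.
latencies_ns = {"DRAM": 10, "MRAM": 50, "NAND flash (best case)": 20_000}

for name, ns in latencies_ns.items():
    ops_per_sec = 1e9 / ns                 # serialized accesses per second
    slowdown = ns / latencies_ns["DRAM"]   # how many times slower than DRAM
    print(f"{name:>22}: {ns:>7} ns  ~{ops_per_sec:,.0f} ops/s  {slowdown:.0f}x DRAM")
```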

…(read the complete as-published article there)