Enterprise SSDs: The Case for All-Flash Data Centers

An IT industry analyst article published by Enterprise Storage Forum.


A new study found that some enterprises are experiencing significant benefits by converting their entire data centers to all-flash arrays.

by Mike Matchett, Sr. Analyst

Adding small amounts of flash as cache or dedicated storage is certainly a good way to accelerate a key application or two, but enterprises are increasingly adopting shared all-flash arrays to increase performance for every primary workload in the data center.

Flash is now competitively priced. All-flash array operations are simpler than managing mixed storage, and across-the-board performance acceleration produces visible business impact.

However, recent Taneja Group field research on all-flash data center adoption shows that successfully replacing traditional primary storage architectures with all-flash in the enterprise data center boils down to ensuring two key things: flash-specific storage engineering and mature enterprise-class storage features.

When looking for the best storage performance return on investment (ROI), it simply doesn’t work to replace HDDs with SSDs in existing traditional legacy storage arrays. Even though older-generation arrays can be made faster in spots by inserting large amounts of underlying flash storage, controllers and software designed around disk-era latencies expose too many new performance bottlenecks to make the swap a worthwhile investment. After all, consistent IO performance (latency, IOPS, bandwidth) for all workloads is what makes all-flash a winning data center solution. It’s clear that to leverage a flash storage investment, IT requires flash-engineered designs that support flash IO speeds and volumes.

Even if all-flash performance is more than sufficient for some datacenter workloads, the cost per effective GB in a new flash-engineered array can now handily beat sticking flash SSDs into older arrays, and readily undercuts large HDD spindle-count solutions. A big part of this cost calculation stems from built-in wire-speed (i.e., inline) capacity optimization features like deduplication and compression found in almost all flash-engineered solutions. We also see increasing flash densities continuing to come to market (e.g., HPE and NetApp have already announced 16TB SSDs), with prices inevitably driving downward. These new generations of flash are really bending flash “capacity” cost curves for the better.
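
As a rough sketch of how that effective-capacity math works (all prices, capacities, and reduction ratios below are illustrative assumptions, not vendor pricing):

```python
# Illustrative cost-per-effective-GB comparison. All figures below
# are hypothetical assumptions, not vendor pricing.

def cost_per_effective_gb(price_usd: float, raw_gb: float, reduction_ratio: float) -> float:
    """Price divided by effective capacity after inline dedupe/compression."""
    return price_usd / (raw_gb * reduction_ratio)

# Flash-engineered array with 4:1 inline data reduction.
engineered = cost_per_effective_gb(100_000, 50_000, 4.0)

# SSDs retrofitted into a legacy array with no inline reduction.
retrofit = cost_per_effective_gb(60_000, 50_000, 1.0)

print(f"flash-engineered: ${engineered:.2f}/GB effective")  # $0.50/GB
print(f"legacy retrofit:  ${retrofit:.2f}/GB effective")    # $1.20/GB
```

Even at a higher sticker price, the inline data reduction ratio dominates the effective cost per GB.
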
All-Flash Field Research Results

Recently we had the opportunity to interview storage managers adopting all-flash across a variety of datacenter workloads and business requirements. We found that it was well understood that flash offered better performance. Once an all-flash solution was chosen architecturally, other factors like cost, resiliency, migration path and ultimately storage efficiency tended to drive vendor comparisons and acquisition decision-making. Here are a few interesting highlights from our findings:

Simplification – The deployment of all-flash represented an opportunity to consolidate and simplify heterogeneous storage infrastructure and operations, with major savings just from environment simplification (e.g., reduction in the number of arrays/spindles).
Consistency – The consistent IO at scale offered by an all-flash solution deployed across all tier 1 workloads greatly reduced IT storage management activities. In addition…(read the complete as-published article at Enterprise Storage Forum)

We Can No Longer Contain Containers!

(Excerpt from original post on the Taneja Group News Blog)

Despite naysayers (you know who you are!) I’ve been saying this is the year for containers, and halfway into 2016 it’s looking like I’m right. The container community is maturing enterprise-grade functionality extremely rapidly, perhaps modeled on its virtualization predecessors. ContainerX is one of those interesting solutions that fills in a lot of gaps for enterprises looking to stand up containers in production. In fact, they claim to be the “vSphere for Containers”.

…(read the full post)

Hyperconverged Storage Evolves – Or Is It Pivoting When It Comes to Pivot3?

(Excerpt from original post on the Taneja Group News Blog)

Pivot3 recently acquired NexGen (Mar 2016), and many folks have been wondering what they are doing. Pivot3 has made a name in the surveillance/video vertical with bulletproof hyperconvergence, based on highly reliable data protection (native erasure coding) and large scalability (no additional east/west traffic as it scales). So what does NexGen IP bring? For starters, multi-tier flash performance and enterprise storage features (like snapshots).
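
For readers new to the term, here is a toy illustration of the erasure coding idea (a minimal single-parity code; this is only a conceptual sketch, not Pivot3’s actual scheme, which tolerates more simultaneous failures):

```python
# Toy single-parity erasure code: any one lost block can be rebuilt
# by XORing the survivors. A conceptual sketch, not Pivot3's scheme.

def xor_blocks(blocks: list[bytes]) -> bytes:
    """XOR equal-sized blocks together into one result block."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # data striped across three nodes
parity = xor_blocks(data)            # parity stored on a fourth node

# Simulate losing block 1, then rebuild it from the survivors + parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```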

…(read the full post)

Server Side Is Where It’s At – Leveraging Server Resources For Performance

(Excerpt from original post on the Taneja Group News Blog)

If you want performance, especially in IO, you have to bring it to where the compute is happening. We’ve recently seen Datrium launch a smart “split” array solution in which the speedy (and compute-intensive) bits of the logical array are hosted server-side, with persisted data served from a shared, simplified controller and (almost-JBOD) disk shelf. This week Infinio announced version 3.0 of its caching solution, adding tiered-cache support for server-side SSDs and other flash to its historically memory-focused IO acceleration.
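
The general read path behind that kind of tiering looks roughly like the sketch below (a conceptual two-tier LRU cache, RAM first and then server-side SSD; this is an assumption about the pattern, not Infinio’s implementation):

```python
# Conceptual two-tier read cache: RAM first, then server-side SSD.
# A generic sketch of the pattern, not Infinio's implementation.
from collections import OrderedDict

class TieredCache:
    def __init__(self, ram_items: int, ssd_items: int):
        self.ram = OrderedDict()   # tier 1: small, fastest
        self.ssd = OrderedDict()   # tier 2: larger, still far faster than the array
        self.ram_items, self.ssd_items = ram_items, ssd_items

    def get(self, key, fetch_from_array):
        if key in self.ram:                    # RAM hit
            self.ram.move_to_end(key)
            return self.ram[key]
        if key in self.ssd:                    # SSD hit: promote to RAM
            value = self.ssd.pop(key)
        else:                                  # miss: go to backing storage
            value = fetch_from_array(key)
        self._put_ram(key, value)
        return value

    def _put_ram(self, key, value):
        self.ram[key] = value
        if len(self.ram) > self.ram_items:     # demote the LRU block to SSD
            old_key, old_val = self.ram.popitem(last=False)
            self.ssd[old_key] = old_val
            if len(self.ssd) > self.ssd_items:
                self.ssd.popitem(last=False)

# Example: the array is only touched when both tiers miss.
cache = TieredCache(ram_items=2, ssd_items=4)
block = cache.get("lba-42", fetch_from_array=lambda k: f"<data for {k}>")
```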

…(read the full post)

Unifying Big Data Through Virtualized Data Services – Iguaz.io Rewrites the Storage Stack

(Excerpt from original post on the Taneja Group News Blog)

One of the more interesting new companies to arrive on the big data storage scene is iguaz.io. The iguaz.io team has designed a whole new, purpose-built storage stack that can store and serve the same master data in multiple formats, at high performance and at parallel streaming speeds, to multiple different kinds of big data applications. This promises to obliterate the spaghetti data flows with many moving parts, numerous transformation and copy steps, and Frankenstein architectures currently required to stitch together increasingly complex big data workflows. We’ve seen enterprises need to build environments that commonly span from streaming ingest and real-time processing through interactive query and into larger data lake and historical archive analysis, ending up with multiple data copies in multiple storage formats across multiple storage services.
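
Conceptually, the single-copy idea looks something like the sketch below (a stand-in illustration of one master record store exposed through both key-value and streaming views; the names here are hypothetical and this is not iguaz.io’s actual API):

```python
# One master record store, exposed through two access models without
# copying the data. Purely conceptual; not iguaz.io's actual stack.
from typing import Any, Dict, Iterator

_master: Dict[str, Dict[str, Any]] = {}  # the single authoritative copy

def put(key: str, record: Dict[str, Any]) -> None:
    _master[key] = record

def get_kv(key: str) -> Dict[str, Any]:
    """Key-value view for interactive, low-latency lookups."""
    return _master[key]

def stream() -> Iterator[Dict[str, Any]]:
    """Streaming view for analytics consumers; iterates in place, no copy."""
    yield from _master.values()

put("sensor-1", {"ts": 1, "temp": 21.5})
assert get_kv("sensor-1")["temp"] == 21.5
for rec in stream():
    print(rec)
```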

…(read the full post)