Survey Spotlights Top 5 Data Storage Pain Points

An IT industry analyst article published by Enterprise Storage Forum.


by Mike Matchett

The Enterprise Storage Forum survey uncovered the biggest challenges storage professionals have with their existing storage infrastructure: aging gear, lack of capacity, high operations cost, security, and maintenance burden. We’ll discuss which storage technologies, available now or coming soon, might ease those pain points.

Data storage has been around as long as computing, but based on the Enterprise Storage Forum survey, we have yet to solve all of its problems. Entitled Data Storage Trends 2018, the survey reveals that storage professionals face no shortage of serious concerns.

One of the charts that jumped out at me covers the biggest challenge in operating current storage infrastructure. In essence, this is the “select your biggest pain” question. Let’s dive in.

Top Five Data Storage Challenges
Why are these data storage challenges ever-present? Why haven’t storage vendors researched technologies and nailed down solutions for them? The chart below illustrates the leading pain points; we’ll look at the top five:

[Chart: biggest challenge in operating current storage infrastructure – http://www.enterprisestorageforum.com/imagesvr_ce/9011/biggest%20challenge%20chart.png]

1. Aging gear: Of course, no matter when you invest in new equipment, it starts aging immediately. And once deployed, storage, and the data stored on it, tends to sit in the data center until it reaches some arbitrary vendor end-of-life (EOL) stage. With working storage, the motto tends to be: “If it ain’t broke, don’t fix it!”

Still, once something like storage is deployed, the capex is a sunk cost. Aging storage should probably be replaced long before full obsolescence; by the “half-life” of any large storage system, significantly improved alternatives are likely available on the market. These include better performance and agility, cheaper operating costs and upgrades, increased capacity and new features.

Here, I can’t blame storage vendors for any lack of improved storage offerings. From flash-engineered designs to software-defined agility, the storage landscape is full of opportune (and high-ROI) “refresh” solutions. Proactive storage managers might replace their storage “ahead of time,” as the scales tip in favor of new solutions, rather than sit back and wait for the traditional five-year, accounting-based storage refresh cycle.

2. Lack of Storage Capacity: Yes, data is still growing. In fact, data growth can be non-linear, which makes it hard to plan ahead. Unable to keep up with capacity demand, many organizations now rely on that elastic storage provider, the cloud – public, hybrid or even multi-cloud storage services – which can get pricey!
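To see why non-linear growth breaks straight-line capacity planning, here is a minimal back-of-the-envelope sketch in Python. All of the starting figures (500 TB installed, a 100 TB/year linear plan, 35% compounding demand) are hypothetical illustrations, not survey data:

```python
# Back-of-the-envelope capacity projection: a linear purchase plan vs.
# compounding demand. All figures are hypothetical, for illustration only.

start_tb = 500        # capacity consumed today (TB)
linear_add = 100      # planner budgets +100 TB/year
growth_rate = 0.35    # actual demand compounds at 35%/year

for year in range(1, 6):
    planned = start_tb + linear_add * year
    actual = start_tb * (1 + growth_rate) ** year
    print(f"Year {year}: planned {planned:6.0f} TB, "
          f"actual demand {actual:6.0f} TB, shortfall {actual - planned:6.0f} TB")
```

Even a modest compounding rate leaves the linear plan hundreds of terabytes short within a few years, which is exactly the gap many organizations end up filling with elastic (and pricey) cloud capacity.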

We may be doomed to suffer this pain point forever, but some newer storage technologies are being designed to scale out “for a long time” with linear performance…(read the complete as-published article there)

Survey Results: Cloud Storage Takes Off, Flash Cools Off

An IT industry analyst article published by Enterprise Storage Forum.


By Mike Matchett

The Enterprise Storage Forum survey results show that the biggest storage budget line item is now cloud storage, although HDDs still hold more data. We explore why cloud is inevitably winning, and when the actual tipping point might arrive.

Is on-premises storage dead? Is all storage inevitably moving to the cloud? If you work in IT these days, you are no doubt keeping a close eye on the massive changes afoot in storage infrastructure. Flash acceleration, hyperconvergence, cloud transformation – where is it all going, and how soon will it get there?

We explored the past, present and future of enterprise storage technologies as part of our recent Data Storage Trends 2018 survey.

The Dominance of Cloud Storage
The short story is that cloud storage has now edged out the ubiquitous hard drive as the top budget line item in IT storage spending (see below). We are not sure if this is good news or bad news for IT, but it is clear that cloud-heavy IT shops have to get on top of, and actively manage, their cloud storage spending.

[Survey chart: top budget line items in IT storage spending]

Despite cloud moving into the lead at slightly more than 21% of companies, the game is not over yet for on-premises storage solutions. Flash has still not run its full course, and HDDs remain the top budget item today for almost as many companies (21%) as cloud.

Innovations in solid-state technology like NVMe are providing even greater acceleration for data center workloads, even as SSD prices continue to drop. And as silicon prices drop, total spending inherently skews toward the more expensive technologies – the flash footprint will grow even if its relative spend doesn’t keep pace…(read the complete as-published article there)

Enterprise SSDs: The Case for All-Flash Data Centers

An IT industry analyst article published by Enterprise Storage Forum.


A new study found that some enterprises are experiencing significant benefits by converting their entire data centers to all-flash arrays.

by Mike Matchett, Sr. Analyst

Adding small amounts of flash as cache or dedicated storage is certainly a good way to accelerate a key application or two, but enterprises are increasingly adopting shared all-flash arrays to increase performance for every primary workload in the data center.

Flash is now competitively priced. All-flash array operations are simpler than managing mixed storage, and the across-the-board performance acceleration produces visible business impact.

However, recent Taneja Group field research on all-flash data center adoption shows that successfully replacing traditional primary storage architectures with all-flash in the enterprise data center boils down to ensuring two key things: flash-specific storage engineering and mature enterprise-class storage features.

When looking for the best storage performance return on investment (ROI), it simply doesn’t work to replace HDDs with SSDs in existing legacy storage arrays. Even though older-generation arrays can be made faster in spots by inserting large amounts of underlying flash storage, too many newly exposed performance bottlenecks remain elsewhere to make it a worthwhile investment. After all, consistent IO performance (latency, IOPS, bandwidth) for all workloads is what makes all-flash a winning data center solution. It’s clear that to leverage a flash storage investment, IT requires flash-engineered designs that support flash IO speeds and volumes.

Even if all-flash performance is more than sufficient for some data center workloads, the cost per effective GB in a new flash-engineered array can now handily beat sticking flash SSDs into older arrays, as well as readily undercutting large HDD spindle-count solutions. A big part of this cost calculation stems from the built-in, wire-speed (i.e., inline) capacity optimization features like deduplication and compression found in almost all flash-engineered solutions. We also see increasing flash densities continuing to come to market (e.g., HPE and NetApp have already announced 16TB SSDs), with prices inevitably driving downward. These new generations of flash are really bending the flash “capacity” cost curve for the better.
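As a rough, hedged illustration of that calculation: effective cost per GB is simply raw cost per GB divided by the data reduction ratio. The prices and ratios in this sketch are hypothetical placeholders, not vendor quotes:

```python
# Effective $/GB = raw $/GB divided by the inline data reduction ratio.
# All prices and reduction ratios are hypothetical, for illustration only.

def effective_cost_per_gb(raw_cost_per_gb: float, reduction_ratio: float) -> float:
    """Cost per GB of data actually stored, after inline dedupe/compression."""
    return raw_cost_per_gb / reduction_ratio

flash_raw = 0.45  # $/GB raw in a flash-engineered array (hypothetical)
hdd_raw = 0.10    # $/GB raw in a large HDD spindle-count array (hypothetical)

# Flash-engineered arrays often claim roughly 4-5:1 inline reduction; legacy
# HDD arrays typically run inline reduction poorly or not at all (~1:1).
print(f"Flash effective cost: ${effective_cost_per_gb(flash_raw, 5.0):.3f}/GB")
print(f"HDD effective cost:   ${effective_cost_per_gb(hdd_raw, 1.0):.3f}/GB")
```

Under these assumed numbers the flash-engineered array lands at $0.09 per effective GB versus $0.10 for raw HDD capacity; the point is the mechanism, not the specific prices, which shift constantly.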
All-Flash Field Research Results

Recently we had the opportunity to interview storage managers adopting all-flash across a variety of data center workloads and business requirements. We found that flash’s performance advantage was well understood. Once an all-flash solution was chosen architecturally, other factors like cost, resiliency, migration path and ultimately storage efficiency tended to drive vendor comparisons and acquisition decisions. Here are a few interesting highlights from our findings:

Simplification – The deployment of all-flash represented an opportunity to consolidate and simplify heterogeneous storage infrastructure and operations, with major savings coming just from environment simplification (e.g., reduction in the number of arrays/spindles).
Consistency – The consistent IO at scale offered by an all-flash solution deployed across all tier 1 workloads greatly reduced IT storage management activities. In addition…(read the complete as-published article there)

Virtualizing Hadoop Impacts Big Data Storage

An IT industry analyst article published by Enterprise Storage Forum.

by Mike Matchett, Sr. Analyst, Taneja Group
Hadoop is soon coming to enterprise IT in a big way. VMware’s new vSphere Big Data Extensions (BDE) commercializes its open source Project Serengeti to make it dead easy for enterprise admins to spin up and down virtual Hadoop clusters at will.

Now that VMware has made it clear that Hadoop is going to be fully supported as a virtualized workload in enterprise vSphere environments, here at Taneja Group we expect a rapid pickup in Hadoop adoption across organizations of all sizes.

However, Hadoop is all about mapping parallel compute jobs intelligently over massive amounts of distributed data. Cluster deployment and operation are becoming very easy for the virtual admin, but in a virtual environment, where storage can be effectively abstracted from compute clients, there are some important complexities and opportunities to consider when designing the underlying storage architecture. Specific concerns with running Hadoop in a virtual environment include how to configure virtual data nodes, how best to utilize local hypervisor server DAS, and when to leverage external SAN/NAS.

The main idea behind virtualizing Hadoop is to take advantage of deploying Hadoop scale-out nodes as virtual machines instead of as racked commodity physical servers. Clusters can be provisioned on demand and elastically expanded or shrunk. Multiple Hadoop virtual nodes can be hosted on each hypervisor physical server, and as virtual machines they can easily be allocated more or fewer resources as a given application requires. Hypervisor-level HA/FT capabilities can be brought to bear on production Hadoop apps. VMware’s BDE even includes QoS algorithms that help prioritize clusters dynamically, shrinking lower-priority clusters as necessary to ensure high-priority cluster service.
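To make the node-group idea concrete, here is an illustrative sketch, written as a Python dict, of the kind of declarative cluster definition that Serengeti-style tooling works from. The field names paraphrase the concept and are not the exact BDE/Serengeti schema; consult the actual project documentation for the real format:

```python
# Illustrative virtual Hadoop cluster definition in the spirit of a
# Serengeti/BDE cluster spec. Field names are hypothetical paraphrases,
# not the exact schema.

virtual_hadoop_cluster = {
    "name": "analytics-dev",
    "nodeGroups": [
        {   # Master: NameNode/JobTracker VM, placed on shared, protected storage
            "name": "master",
            "roles": ["hadoop_namenode", "hadoop_jobtracker"],
            "instanceNum": 1,
            "storage": {"type": "SHARED", "sizeGB": 100},
        },
        {   # Workers: DataNode/TaskTracker VMs, several packed per hypervisor
            # host, typically backed by local DAS for HDFS throughput
            "name": "worker",
            "roles": ["hadoop_datanode", "hadoop_tasktracker"],
            "instanceNum": 8,
            "storage": {"type": "LOCAL", "sizeGB": 250},
        },
    ],
}

# Elastic expansion or shrinking then amounts to changing instanceNum on the
# worker group and letting the tooling reconcile the running VMs.
virtual_hadoop_cluster["nodeGroups"][1]["instanceNum"] = 12
```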

…(read the complete as-published article there)