Survey Spotlights Top 5 Data Storage Pain Points

An IT industry analyst article published by Enterprise Storage Forum.


By Mike Matchett

The Enterprise Storage Forum survey uncovered the biggest challenges storage professionals face with their existing storage infrastructure: aging gear, lack of capacity, high operational costs, security and maintenance burden. We'll discuss which storage technologies, available now or coming soon, might serve to ease those pain points.

Data storage has been around as long as computing, but based on the Enterprise Storage Forum survey, we have yet to solve all the problems. Entitled Data Storage Trends 2018, the survey reveals that storage professionals face no lack of serious concerns.

One of the interesting charts that jumped out at me is about the biggest challenge in operating current storage infrastructure. In essence, this is the “select your biggest pain” question. Let’s dive in.

Top Five Data Storage Challenges
Why are these data storage challenges ever-present? Why haven't storage vendors researched technologies and nailed down solutions to them? This chart illustrates the leading pain points; we'll look at the top five:

http://www.enterprisestorageforum.com/imagesvr_ce/9011/biggest%20challenge%20chart.png

1. Aging gear: Of course, no matter when you invest in new equipment, it starts aging immediately. And once deployed, storage, and the data stored on it, tends to sit in the data center until it reaches some arbitrary vendor end-of-life (EOL) stage. With working storage, the motto tends to be: "If it's not broke, don't fix it!"

Still, once something like storage is deployed, the capex is a sunk cost. Aging storage should probably be replaced long before full obsolescence; significant improvements are likely available on the market by the "half-life" of any large storage deployment. These include better performance and agility, cheaper operating costs and upgrades, increased capacity and new features.

Here, I can’t blame storage vendors for lack of improved storage offerings. From flash engineered designs to software-defined agility, the storage landscape is full of opportunistic (and large ROI) “refresh” solutions. Proactive storage managers might think to replace their storage “ahead of time” as the scales tip in favor of new solutions, rather than sit back and wait for the traditional “five year” accounting-based storage refresh cycle.

2. Lack of Storage Capacity: Yes, data is still growing. In fact, data growth can be non-linear, which makes it hard to plan ahead. Unable to keep up with capacity demand, many organizations now rely on elastic cloud, hybrid cloud or even multi-cloud storage services, which can get pricey!

We may be doomed to suffer this pain point forever, but some newer storage technologies are being designed to scale-out “for a long time” with linear performance…(read the complete as-published article there)

Survey Results: Cloud Storage Takes Off, Flash Cools Off

An IT industry analyst article published by Enterprise Storage Forum.


By Mike Matchett

The Enterprise Storage Survey results show that the biggest storage budget line item is cloud storage, although HDDs still hold more data. We explore why cloud is inevitably winning, and when the actual tipping point might come about.

Is on-premises storage dead? Is all storage inevitably moving to the cloud? If you work in IT, you are no doubt keeping a close eye on the massive changes afoot in storage infrastructure these days. Flash acceleration, hyperconvergence, cloud transformation: where is it all going, and how soon will it get there?

We explored the past, present and future of enterprise storage technologies as part of our recent Storage Trends 2018 survey.

The Dominance of Cloud Storage
The short story is that cloud storage has now edged out the ubiquitous hard drive as the top budget line item in IT storage spending (see below). We are not sure if this is good news or bad news for IT, but it is clear that those cloud-heavy IT shops have to get on top of and actively manage their cloud storage spending.

[Storage survey chart: top storage budget line items]

Although cloud has moved into the lead at slightly more than 21% of companies, the game is not over yet for on-premises storage solutions. Flash has still not run its full course, and HDDs remain the top budget item today for almost as many companies (21%) as cloud.

Innovations in solid-state technology like NVMe are providing even greater acceleration to data center workloads, even as SSD prices continue to drop. As silicon prices drop, total spending inherently skews toward more expensive technologies: the footprint will grow even if the relative spend doesn't keep pace…(read the complete as-published article there)

Learn storage techniques for managing unstructured data use

An IT industry analyst article published by SearchStorage.


Rearchitect storage to maximize unstructured data use at the global scale for larger data sets coming from big data analytics and other applications.

Mike Matchett
Small World Big Data

Back in the good old days, we mostly dealt with two storage tiers. We had online, high-performance primary storage directly used by applications and colder secondary storage used to tier less-valuable data out of primary storage. It wasn’t that most data lost value on a hard expiration date, but primary storage was pricey enough to constrain capacity, and we needed to make room for newer, more immediately valuable data.

We spent a lot of time trying to intelligently summarize and aggregate aging data to keep some kind of historical information trail online. Still, masses of detailed data were sent off to bed, out of sight and relatively offline. That’s all changing as managing unstructured data becomes a bigger concern. New services provide storage for big data analysis of detailed unstructured and machine data, as well as to support web-speed DevOps agility, deliver storage self-service and control IT costs. Fundamentally, these services help storage pros provide and maintain more valuable online access to ever-larger data sets.

Products for managing unstructured data may include copy data management (CDM), global file systems, hybrid cloud architectures, global data protection and big data analytics. These features help keep much, if not all, data available and productive.

Handling the data explosion


We’re seeing a lot of high-variety, high-volume and unstructured data. That’s pretty much everything other than highly structured database records. The new data explosion includes growing files and file systems, machine-generated data streams, web-scale application exhaust, endless file versioning, finer-grained backups and rollback snapshots to meet lower tolerances for data integrity and business continuity, and vast image and media repositories.

The public cloud is one way to deal with this data explosion, but it’s not always the best answer by itself. Elastic cloud storage services are easy to use to deploy large amounts of storage capacity. However, unless you want to create a growing and increasingly expensive cloud data dump, advanced storage management is required for managing unstructured data as well. The underlying theme of many new storage offerings is to extend enterprise-quality IT management and governance across multiple tiers of global storage, including hybrid and public cloud configurations.

If you’re architecting a new approach to storage, especially unstructured data storage at a global enterprise scale, here are seven advanced storage capabilities to consider:

Automated storage tiering. Storage tiering isn’t a new concept, but today it works across disparate storage arrays and vendors, often virtualizing in-place storage first. Advanced storage tiering products subsume yesterday’s simpler cloud gateways. They learn workload-specific performance needs and implement key quality of service, security and business cost control policies.

Much of what used to make up individual products, such as storage virtualizers, global distributed file systems, bulk data replicators and migrators, and cloud gateways, is converging into single-console unifying storage services. Enmotus and Veritas offer these simple-to-use services. This type of storage tiering enables unified storage infrastructure and provides a core service for many different types of storage management products.
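The core idea behind automated tiering can be sketched in a few lines: classify each object by how recently it was accessed and plan moves toward the tier the policy prescribes. This is an illustrative toy only; the tier names and age thresholds are assumptions, not any vendor's actual policy, and real products also weigh performance, QoS and cost signals.

```python
# Minimal sketch of an access-age tiering policy. Tier names and
# thresholds below are hypothetical, chosen only for illustration.
TIERS = [
    ("flash", 7),     # accessed within 7 days  -> primary flash
    ("hdd",   90),    # accessed within 90 days -> capacity HDD tier
    ("cloud", None),  # anything older          -> cold cloud tier
]

def pick_tier(days_since_access: int) -> str:
    """Return the tier an object should live on, given its access age."""
    for tier, max_age in TIERS:
        if max_age is None or days_since_access <= max_age:
            return tier
    return TIERS[-1][0]

def plan_migrations(objects: dict, placement: dict) -> list:
    """List (object, from_tier, to_tier) moves needed to match the policy."""
    moves = []
    for name, age in objects.items():
        target = pick_tier(age)
        if placement.get(name) != target:
            moves.append((name, placement.get(name, "none"), target))
    return moves
```

For example, `plan_migrations({"report.csv": 2, "logs.tgz": 400}, {"report.csv": "hdd", "logs.tgz": "hdd"})` would promote the recently touched file to flash and demote the stale one to cloud.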

Metadata at scale. There’s a growing focus on collecting and using storage metadata — data about stored data — when managing unstructured data. By properly aggregating and exploiting metadata at scale, storage vendors can better virtualize storage, optimize services, enforce governance policies and augment end-user analytical efforts.

Metadata concepts are most familiar in an object or file storage context. However, advanced block and virtual machine-level storage services are increasingly using metadata detail to help with tiering for performance. We also see metadata in data protection features. Reduxio’s infinite snapshots and immediate recovery based on timestamping changed blocks take advantage of metadata, as do change data capture techniques and N-way replication. When looking at heavily metadata-driven storage, it’s important to examine metadata protection schemes and potential bottlenecks. Interestingly, metadata-heavy approaches can improve storage performance because they usually allow for high metadata performance and scalability out of band from data delivery.
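To make the "timestamp every changed block" idea concrete, here is a minimal sketch of an out-of-band metadata index: each write to a block is logged with its timestamp, and recovering the state "as of" any moment is just a lookup for the latest write at or before that time. This is a toy loosely inspired by the concept described above, not a description of Reduxio's or anyone else's actual design.

```python
import bisect

class BlockHistory:
    """Toy timestamp-indexed block metadata for point-in-time reads."""

    def __init__(self):
        # block_id -> parallel sorted lists of (timestamps, values)
        self._log = {}

    def write(self, block_id, ts, data):
        """Record that block_id held `data` starting at time ts."""
        times, vals = self._log.setdefault(block_id, ([], []))
        i = bisect.bisect_right(times, ts)
        times.insert(i, ts)
        vals.insert(i, data)

    def read_as_of(self, block_id, ts):
        """Return the last value written at or before ts, or None."""
        if block_id not in self._log:
            return None
        times, vals = self._log[block_id]
        i = bisect.bisect_right(times, ts)
        return vals[i - 1] if i else None
```

Because the index lives alongside (not in the path of) the data itself, lookups like this can scale out of band, which is the performance point made above.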

Storage analytics. Metadata and other introspective analytics about storage use, gathered across enterprise storage, can be applied both offline and, increasingly, in dynamic optimizations. Call-home management is one example of how these analytics are used to better manage storage…(read the complete as-published article there)

Is demand for data storage or supply driving increased storage?

An IT industry analyst article published by SearchStorage.


Figuring out whether we’re storing more data than ever because we’re producing more data or because constantly evolving storage technology lets us store more of it isn’t easy.

Mike Matchett
Small World Big Data

Whether you're growing on-premises storage or your cloud storage footprint this year, it's likely you're increasing total storage faster than ever. Where we used to see capacity upgrade requests for proposals (RFPs) specifying tens of terabytes of growth, we now regularly see RFPs for half a petabyte or more. When it comes to storage size, huge is in.

Do we really need that much more data to stay competitive? Yes, probably. Can we afford extremely deep storage repositories? It seems that we can. However, these questions raise a more basic chicken-and-egg question: Are we storing more data because we’re making more data or because constantly evolving storage technology lets us?

Data storage economics
Looked at from a pricing perspective, the question becomes what’s driving price — more demand for data storage or more storage supply? I’ve heard economics professors say they can tell who really understands basic supply and demand price curve lessons when students ask this kind of question and consider a supply-side answer first. People tend to focus on demand-side explanations as the most straightforward way of explaining why prices fluctuate. I guess it’s easier to assume supply is a remote constant while envisioning all the possible changes in demand for data storage.

As we learn to wring more value out of our data, we want to both make and store more data.

But if storage supply is constant, given our massive data growth, then it should be really expensive. The massive squirreling away of data would instead be constrained by that high storage price (low availability). This was how it was years ago. Remember when traditional IT application environments struggled to fit into limited storage infrastructure that was already stretched thin to meet ever-growing demand?

Today, data capacities are growing larger, faster, and yet the price of storage keeps dropping (per unit of storage capacity). There's no doubt supply is rising faster than demand for data storage. Technologies that bring tremendous supply-side benefits, such as the inherent efficiencies of shared cloud storage, Moore's law and clustered open source file systems like the Hadoop Distributed File System, have made bulk storage capacity so affordable that, despite massive growth in demand, the price of storage continues to drop.

Endless data storage
When we think of hot new storage technologies, we tend to focus on primary storage advances such as flash and nonvolatile memory express. All so-called secondary storage comes, well, second. It’s true the relative value of a gigabyte of primary storage has greatly increased. Just compare the ROI of buying a whole bunch of dedicated, short-stroked HDDs as we did in the past to investing in a modicum of today’s fully deduped, automatically tiered and workload-shared flash.

It’s also worth thinking about flash storage in terms of impact on capacity, not just performance. If flash storage can serve a workload in one-tenth the time, it can also serve 10 similar workloads in the same time, providing an effective 10-times capacity boost.
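The back-of-the-envelope reasoning above is worth making explicit: if one tier serves a workload in a fraction of the time another takes, the same device-hours can absorb proportionally more workloads, which behaves like an effective capacity multiplier. The numbers below are illustrative, not benchmarks.

```python
def effective_capacity_multiplier(slow_service_time_ms: float,
                                  fast_service_time_ms: float) -> float:
    """How many equivalent workloads the faster tier can serve in the
    time the slower tier needs for one (illustrative arithmetic only)."""
    return slow_service_time_ms / fast_service_time_ms

# Flash serving a workload in one-tenth the HDD time acts like a
# 10x effective capacity boost for that workload mix.
print(effective_capacity_multiplier(10.0, 1.0))  # 10.0
```

The same arithmetic explains why deduped, workload-shared flash can beat banks of short-stroked HDDs on effective gigabytes delivered, not just on latency.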

But don’t discount the major changes that have happened in secondary storage…(read the complete as-published article there)

What’s the future of data storage technology and the IT pro?

An IT industry analyst article published by SearchConvergedInfrastructure.


Several enterprise data storage trends are all about getting rid of storage as an IT silo. That will have consequences for both the industry and IT pros who work in it.

Mike Matchett
Small World Big Data

Have you sensed a shift in storage these days? Maybe you’ve noticed a certain resignation among storage industry veterans contemplating the future of data storage technology. Or maybe when it comes time to refresh aging storage arrays, there’s less differentiation among competing products or no exciting new storage technologies — everyone has flash by now, right? — or flaming vendor wars to get excited about. Maybe many of your important storage needs are now met using a cloud service, a relatively no-name vendor product or even open source.

For many years, it's been fun to watch the big storage vendors fight the good fight. They used to line up elbow-to-elbow in the front row at big shows like VMworld, vying for the biggest booth to show off their hottest products. This past year, storage seemed to have moved back a few rows. Market forces and trends such as software-defined and hyper-converged have changed large parts of the storage game, sure. But when the game shifted in the past, competitive storage vendors shifted with it. Maybe this is harder to do now that storage is getting embedded, integrated, converged and "cheapened" through cloud competition.

Many recent storage trends involve getting rid of storage as an IT silo, raising questions about the future of data storage technology. How can you sell Storage with a capital S if no one is buying stand-alone storage anymore? Are we coming to the end of storage as an important, first-class industry? The short answer is no.

But data? Lots of data
Accounting-focused industry reports show legacy storage-centric companies continue to suffer from thinning margins for their high-end hardware arrays. But the collective storage footprint in general is growing. With data volumes exploding from globalized applications, web-scale databases, big data analytics, online archiving and that little internet of things opportunity, all those new bits will have to go somewhere.

Maybe the job won’t even be called storage administrator in a few years, but rather something like chief data enabler.

All this new data simply can’t go into cheap and deep cold cloud storage. If data is worth having, as much business value as possible must be wrung out of it. And if it’s important data, it has to be governed, protected, secured and ultimately actively managed…(read the complete as-published article there)