Out on a data storage market limb: Six predictions for 2015

An IT industry analyst article published by SearchStorage.

Our crystal ball tells us this will be a year of change for the data storage market.


With another year just getting underway, we here at Taneja Group felt we needed a few analyst predictions to get things off on the right foot. The easiest predictions, and often the most likely ones, are that things will continue mostly as they are. But what fun is that? So, like any good fortune teller, we held hands around a crystal ball, gathered our prescient thoughts and with the help of the storage spirits came up with these six predictions for change in the data storage market for 2015.

  1. The overall traditional storage market will stay relatively flat despite huge growth in big data and the onrushing Internet of Things. Most new big data will be unstructured, and big data architectures like Hadoop will still tend to leverage DAS for storage. In addition, many big data players are pushing the data lake or hub concept to land even bigger chunks of other enterprise data on big data clusters. While we do see some salvation in this space from vendors […] that enable big data analysis to leverage traditional enterprise storage, it won’t be enough to make a big dent in 2015. We’ve also noticed that many storage shops have yet to take advantage of the capacity optimizations now available in recent versions of competitive arrays (e.g., thin provisioning, linked clones, global deduplication, inline compression and so on); these features are fast becoming table stakes in new acquisition decisions (a rough sketch of how they compound follows after this list). Hybrid arrays, in particular, are bringing flash-enabling space efficiencies across their full complement of storage tiers, and most arrays these days are at least hybrid.
  2. Speaking of flash, there are too many all-flash array (AFA) vendors and not enough differentiation. During 2012 and 2013, the first AFA vendors had the market to themselves, but with all the big players rolling out full-fledged flash offerings, opportunities are declining. With [many vendors] all pushing their own solutions (both AFA and hybrid), the remaining independent vendors will have a harder time finding a niche where they can survive. We also expect to see a new round of very high-end performance storage architectures in 2015. […] As a related trend, we anticipate that hybrid-based Tier-1 arrays will lose ground to AFAs in general, as the cost of flash drops and flash performance proves valuable to most, if not all, Tier-1 I/O. In virtualization environments, this trend will be somewhat lessened by the rise in popularity of server-side flash and/or memory caching/tiering solutions.
  3. Data protection and other add-on storage capabilities will become more directly baked into storage solutions. We expect to see more traditional arrays follow the examples of

…(To read the complete six-item prediction list, see the as-published article)
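
As a rough illustration of the capacity optimizations called out in item 1, here is a back-of-the-envelope sketch. All numbers are hypothetical, and real arrays rarely compound savings this cleanly; the point is only that thin provisioning, deduplication and compression multiply together into “effective” capacity.

```python
# Hypothetical illustration: how thin provisioning, dedupe and compression
# compound raw capacity into a much larger "effective" capacity figure.

def effective_capacity(raw_tb: float,
                       thin_savings: float = 0.30,
                       dedupe_ratio: float = 4.0,
                       compression_ratio: float = 2.0) -> float:
    """Estimate effective TB presented to applications from raw TB.

    thin_savings: fraction of allocated-but-never-written space reclaimed
    by thin provisioning (0.30 means 30% of nominal allocations stay empty).
    dedupe_ratio / compression_ratio: logical-to-physical reduction factors;
    real-world ratios vary widely with the data set.
    """
    data_reduction = dedupe_ratio * compression_ratio
    return raw_tb * data_reduction / (1.0 - thin_savings)

# 10 TB raw with 4:1 dedupe, 2:1 compression and 30% thin savings presents
# roughly 114 TB of effective capacity.
print(f"{effective_capacity(10.0):.0f} TB effective")
```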

Figuring out the real price of flash technology

An IT industry analyst article published by SearchSolidStateStorage.

Sometimes comparing the costs of flash arrays is an apples-to-oranges affair — interesting, but not very helpful.


We’re often told by hybrid and all-flash array vendors that their particular total cost of ownership (TCO) is effectively lower than the other guy’s. We’ve even heard vendors claim that by taking certain particulars into account, the per-gigabyte price of their flash solution is lower than that of spinning disk. Individually, the arguments sound compelling; but stack them side by side and you quickly run into apples-and-oranges issues.
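
To make the apples-to-apples problem concrete, here is a minimal sketch of normalizing vendor quotes to a single figure: dollars per effective, usable gigabyte. The arrays, prices, overheads and reduction ratios below are all made up for illustration, and the claimed data reduction ratio in particular should be verified against your own data.

```python
# All figures hypothetical: normalize two vendor quotes to dollars per
# effective, usable gigabyte so they can be compared apples to apples.

def effective_cost_per_gb(list_price: float,
                          raw_tb: float,
                          usable_fraction: float,
                          reduction_ratio: float) -> float:
    """$/GB after RAID/spare/metadata overhead and claimed data reduction.

    usable_fraction: share of raw capacity left after RAID, spares, metadata.
    reduction_ratio: vendor-claimed dedupe + compression factor; verify it
    against your own data before trusting it.
    """
    effective_gb = raw_tb * 1000 * usable_fraction * reduction_ratio
    return list_price / effective_gb

# "All-flash A": 20 TB raw, 75% usable, claimed 5:1 reduction, $300K list.
# "Hybrid B":    60 TB raw, 80% usable, claimed 2:1 reduction, $250K list.
print(f"A: ${effective_cost_per_gb(300_000, 20, 0.75, 5.0):.2f}/GB")  # $4.00
print(f"B: ${effective_cost_per_gb(250_000, 60, 0.80, 2.0):.2f}/GB")  # $2.60
```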

Storage has many factors that should be profiled and evaluated, such as IOPS, latency, bandwidth, protection, reliability and consistency, and these must be matched against client workloads with their unique read/write mixes, burstiness, data sizes, metadata overhead, and quality of service/service-level agreement requirements. Standard benchmarks may be interesting, but the best way to evaluate storage is to test it under your particular production workloads; a sophisticated load generation and modeling tool like that from Load DynamiX can help with that process.
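
To illustrate that matching process, here is a tiny, purely hypothetical workload descriptor and pass/fail check. It is not the Load DynamiX API or anyone else's; it just shows the shape of comparing a workload profile's requirements against measured results.

```python
# Hypothetical names throughout (not any vendor's API): a tiny workload
# descriptor and a pass/fail check against measured results.
from dataclasses import dataclass

@dataclass
class WorkloadProfile:
    name: str
    read_pct: int          # read share of the I/O mix, in percent
    io_size_kb: int        # typical transfer size
    target_iops: int       # required sustained IOPS
    max_latency_ms: float  # latency ceiling from the SLA

@dataclass
class MeasuredResult:
    iops: int
    p99_latency_ms: float

def meets_sla(profile: WorkloadProfile, result: MeasuredResult) -> bool:
    """True if the measured run satisfies the profile's requirements."""
    return (result.iops >= profile.target_iops
            and result.p99_latency_ms <= profile.max_latency_ms)

oltp = WorkloadProfile("OLTP", read_pct=70, io_size_kb=8,
                       target_iops=50_000, max_latency_ms=2.0)
print(meets_sla(oltp, MeasuredResult(iops=62_000, p99_latency_ms=1.4)))  # True
```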

But as analysts, when we try to make industry-level evaluations hoping to compare apples to apples, we run into a host of half-hidden factors we’d like to see made explicit, if not standardized, across the industry. Let’s take a closer look…

…(read the complete as-published article there)

Commodity storage has its place, but an all-flash architecture thrills

An IT industry analyst article published by SearchSolidStateStorage.

Some IT folks are trying to leverage commodity servers and disks with software-implemented storage services. But others want an all-flash architecture.


Every day we hear of budget-savvy IT folks attempting to leverage commodity servers and disks by layering on software-implemented storage services. But at the same time, and in some of the same data centers, highly optimized flash-fueled acceleration technologies are racing in with competitive performance and compelling price comparisons. Architecting IT infrastructure to balance cost vs. capability has never been easy, but the differences and tradeoffs between these storage approaches are becoming extreme. It’s easy to wonder: Is storage going commodity or custom?

One of the drivers for these trends has been with us since the beginning of computing: Moore’s famous law is still delivering ever-increasing CPU power. Today, that glut of CPU muscle is being recovered and applied to power up increasingly virtualized and software-implemented capabilities. Last year, for example, the venerable EMC VNX line touted a multi-year effort toward making its controllers operate “multi-core,” which is to say they’re now able to take advantage of plentiful CPU power with new software-based features. This trend also shows up in the current vendor race to roll out deduplication. Even though software-based dedupe requires significant processing, cheap extra compute is enabling wider adoption.
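
For a feel of why dedupe eats CPU, here is a minimal fixed-block deduplication sketch, assuming SHA-256 fingerprints over 4 KB blocks. Production arrays use far more sophisticated (often variable-block) schemes; this only shows that every written block must be hashed and looked up, which is exactly the work that cheap extra compute now absorbs.

```python
# Minimal fixed-block dedupe sketch: hash every 4 KB block and count
# unique fingerprints. The per-block hashing is the CPU-heavy step.
import hashlib

BLOCK_SIZE = 4096

def dedupe_stats(data: bytes):
    """Return (logical_blocks, unique_blocks) for an in-memory buffer."""
    seen = set()
    logical = 0
    for off in range(0, len(data), BLOCK_SIZE):
        block = data[off:off + BLOCK_SIZE]
        logical += 1
        seen.add(hashlib.sha256(block).digest())  # the CPU-heavy step
    return logical, len(seen)

# Highly redundant sample: 1000 copies of the same 4 KB block.
logical, unique = dedupe_stats(b"\x42" * BLOCK_SIZE * 1000)
print(f"{logical} logical blocks -> {unique} unique ({logical // unique}:1)")
```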

In cloud and object storage, economics trumps absolute performance, with capacity-oriented, software-implemented architectures popping up everywhere. Still, competitive latency matters for many workloads. When performance is one of the top requirements, optimized solutions that leverage specialized firmware and hardware have an engineered advantage.

For maximum performance, storage architects are shifting rapidly toward enterprise-featured solid-state solutions. Among vendors, the race is on to build and offer the best all-flash solution…

…(read the complete as-published article there)

Stop counting megabytes; it’s all about application-aware storage now

An IT industry analyst article published by SearchStorage.

Raw capacity numbers are becoming less useful as deduplication, compression and application-aware storage provide more value than sheer capacity.


Whether clay pots, wooden barrels or storage arrays, vendors have always touted how much their wares can reliably store. And invariably, the bigger the vessel, the more impressive and costly it is, both to acquire and manage. The preoccupation with size as a measure of success implies that we should judge and compare offerings on sheer volume. But today, the relationship between physical storage media capacity and the effective value of the data “services” it delivers has become much more virtual and cloudy. No longer does a megabyte of effective storage mean a megabyte of real storage.

Most array vendors now incorporate capacity-optimizing features such as thin provisioning, compression and data deduplication. But now it looks like those vendors might just be selling you megabytes of capacity that aren’t physically there. I agree that it’s the effective storage and resulting cost efficiency that count, not what goes on under the hood or whether the actual on-media bits are virtual, compacted or shared. The type of engine and the gallons in the tank are interesting, but it’s the speed and distance you can go that matter.

Corporate data as varied as customer behavior logs, virtual machine images and corporate email, once globally deduped and compressed, might deflate to a twentieth or less of its former glory. So when a newfangled flash array has only 10 TB of actual solid-state drives, but is sold as a much larger effective 100+ TB based on an expected minimum dedupe ratio, are we still impressed with the bigger number? We know our raw data is inherently “inflated” with too many copies and too little sharing. It should have been stored more optimally all along.

But can we believe that bigger number? What’s hard to know, although perhaps it’s what we should be focusing on, is the reduction ratio we’ll get with our particular data set, as deflation depends heavily on both the dedupe algorithm and the content…
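
One rough way to sanity-check your own data set, offered as a sketch rather than a sizing tool: compress sampled chunks and report the logical-to-compressed ratio. This estimates compression only (dedupe gains are workload-specific and not modeled here), so treat the result as a floor. The example path in the comment is hypothetical.

```python
# Rough estimate of the compression ratio a data set might achieve, by
# zlib-compressing sampled chunks of a file and comparing sizes.
import zlib

def sampled_compression_ratio(path: str, chunk_size: int = 1 << 20,
                              max_chunks: int = 64) -> float:
    """Return logical/compressed over up to max_chunks 1 MB samples."""
    logical = compressed = 0
    with open(path, "rb") as f:
        for _ in range(max_chunks):
            chunk = f.read(chunk_size)
            if not chunk:
                break
            logical += len(chunk)
            compressed += len(zlib.compress(chunk, 6))
    return logical / compressed if compressed else 0.0

# Example (hypothetical path): a result of 3.0 suggests roughly 3:1 from
# compression alone on that data set.
# ratio = sampled_compression_ratio("/var/lib/mysql/ibdata1")
```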

…(read the complete as-published article there)

How the future of data storage will stack up

An IT industry analyst article published by SearchStorage.

In the near future, data storage won’t be a passive player as it integrates more closely with applications and workloads.


In a previous column, [we] wrote that the concept of LUNs is dead, or at least dying, as the primary way storage will be managed in the future. This has become evident with the emergence and increased adoption of products offering advanced virtual machine (VM)-centric storage. Shifting the focus from LUNs to VMs changes the storage game for VM administrators, who can keep working directly with constructs they understand; for storage folks, who have to elevate their service offerings; and even for those pesky end users, who might benefit from increased performance and availability (and hopefully lower costs).

You could view the end of the LUN as a consequence of industry commoditization of low-level array functionality as storage vendors compete to offer better, higher-level products; or you might chalk it up to a highly competitive marketplace where the most efficient and effective IT can help win the day. Either way, we think it’s inevitable that storage solutions will keep evolving up the stack. The big question is what comes next. What are the next valuable levels as one climbs the storage stack? Let’s start with familiar storage types and work up to some possible future storage solutions…

…(read the complete as-published article there)