Out on a data storage market limb: Six predictions for 2015

An IT industry analyst article published by SearchStorage.

Our crystal ball tells us this will be a year of change for the data storage market.

With another year just getting underway, we here at Taneja Group felt we needed a few analyst predictions to get things off on the right foot. The easiest predictions, and often the most likely ones, are that things will continue mostly as they are. But what fun is that? So, like any good fortune teller, we held hands around a crystal ball, gathered our prescient thoughts and with the help of the storage spirits came up with these six predictions for change in the data storage market for 2015.

  1. The overall traditional storage market will stay relatively flat despite huge growth in big data and the onrushing Internet of Things. Most new big data will be unstructured, and big data architectures like Hadoop will still tend to leverage DAS for storage. In addition, many big data players are pushing the data lake or hub concept to land even bigger chunks of other enterprise data on big data clusters. While we do see some salvation in this space from vendors […] that enable big data analysis to leverage traditional enterprise storage, it won’t be enough to make a big dent in 2015. We’ve also noticed that many storage shops have yet to take advantage of the emerging capacity optimizations (e.g., thin provisioning, linked clones, global deduplication, inline compression and so on) available in recent versions of competitive arrays, features that are becoming table stakes in new acquisition decisions. Hybrid arrays, in particular, are bringing flash-enabling space efficiencies across their full complement of storage tiers, and most arrays these days are at least hybrid.
  2. Speaking of flash, there are too many all-flash array (AFA) vendors and not enough differentiation. During 2012 and 2013, the first AFA vendors had the market to themselves, but with all the big players rolling out full-fledged flash offerings, opportunities are declining. With [many vendors] all pushing their own solutions (both AFA and hybrid), the remaining independent vendors will have a harder time finding a niche where they can survive. We also expect to see a new round of very high-end performance storage architectures in 2015[…] As a related trend, we anticipate that hybrid-based Tier-1 arrays will lose ground to AFAs in general, as the cost of flash drops and flash performance proves valuable to most, if not all, Tier-1 I/O. In virtualization environments, this trend will be somewhat lessened by the rising popularity of server-side flash and/or memory caching/tiering solutions.
  3. Data protection and other add-on storage capabilities will become more directly baked into storage solutions. We expect to see more traditional arrays follow the examples of

…(To read the complete six-item prediction list, see the as-published article)
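To make the capacity optimizations named in the first prediction concrete, here is a minimal, purely illustrative sketch of block-level global deduplication, the idea behind one of those space efficiencies. The class, names, and fixed 4 KB block size are assumptions for illustration; real arrays implement this inline in the data path with far more engineering.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed fixed block size for this sketch


class DedupStore:
    """Toy content-addressed store: each unique block is kept once."""

    def __init__(self):
        self.blocks = {}  # content hash -> block bytes (stored once)
        self.files = {}   # logical name -> ordered list of block hashes

    def write(self, name, data):
        hashes = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            # Store the block only if its content has never been seen.
            self.blocks.setdefault(digest, block)
            hashes.append(digest)
        self.files[name] = hashes

    def read(self, name):
        # Reassemble the logical data from shared physical blocks.
        return b"".join(self.blocks[h] for h in self.files[name])

    def physical_bytes(self):
        # Physical capacity consumed, after deduplication.
        return sum(len(b) for b in self.blocks.values())
```

Writing two logically identical 8 KB files to this store consumes only 4 KB of physical capacity, since all four blocks share one unique content hash; that gap between logical and physical capacity is exactly what the newer array features reclaim.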

‘Software-defined’ to define data center of the future

An IT industry analyst article published by SearchDataCenter.

Software-defined means many — sometimes conflicting — things to many people. At the core, it means letting the application control its resources.

Underneath the software-defined hype, is there a real answer for how “software” can define a “data center”?

Vendors bombard IT pros with the claim that whatever they are selling is a “software-defined solution.” Each of these solutions claims to actually define what “software-defined” means in whatever category that vendor serves. It’s all very clever individually, but doesn’t make much sense collectively.

We suffered through something similar with cloud washing, in which every bit of IT magically became critical for every cloud adoption journey. But at least in all that cloudiness, there was some truth. We all at least think we know what cloud means. The future cloud is likely a hybrid in which most IT solutions still play a role. But this rush to claim the software-defined high ground is turning increasingly bizarre. Even though VMware seems to be leading the pack with their Software-Defined Data Center (SDDC) concept, no one seems to agree on what software-defined actually means. The term is in danger of becoming meaningless.

Before the phrase gets discredited completely, let’s look at what it could mean, as with software-defined networking (SDN). In the networking space, the fundamental shift that SDN brought was to enable IT to dynamically and programmatically define and shape not only logical network layers, but also to manipulate the underlying physical network by remote controlling switches (and other components).

Once infrastructure becomes remotely programmable — essentially definable through software — it creates a new dynamic agility. No longer do networking changes bring whole systems to a grinding halt while staff manually move cables and reconfigure switches and host bus adapters one by one. Instead of an all-hands-on-deck-over-the-weekend effort to migrate from static state A to static state B, SDN enables networks to be effectively redefined remotely, on the fly.
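The “remotely programmable” idea above can be sketched in a few lines: a script builds a flow rule and pushes it to switches through an SDN controller’s northbound REST API, redefining traffic paths in software rather than by moving cables. The controller URL, endpoint path, and rule schema here are hypothetical, not any specific vendor’s API.

```python
import json
import urllib.request

CONTROLLER = "http://sdn-controller.example:8080"  # hypothetical controller


def build_flow_rule(switch_id, vlan, out_port):
    """Construct a (hypothetical) flow rule steering a VLAN's traffic
    out a new port on the given switch."""
    return {
        "switch": switch_id,
        "match": {"vlan": vlan},
        "action": {"output": out_port},
        "priority": 100,
    }


def push_rule(rule):
    """POST the rule to the controller, which reprograms the physical
    switch remotely -- the software-defined equivalent of recabling."""
    req = urllib.request.Request(
        CONTROLLER + "/flows",
        data=json.dumps(rule).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status == 200
```

The point is not the particular schema but the shape of the workflow: a migration from state A to state B becomes a loop that pushes rules to every affected switch, scriptable and reversible, rather than a weekend of manual changes.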

This remote programmability brings third-party intelligence and optimization into the picture (a potential use for all that machine-generated big data you’re piling up)…

…(read the complete as-published article there)

What is Software Defined Storage? EMC ViPR announced at EMC World 2013

(Excerpt from original post on the Taneja Group News Blog)

Here at EMC World 2013, one of the biggest themes is “software defined” storage. Much like the vague overuse of “cloud” as a marketing description, the term “software defined” is being abused by many. But after getting more details, we think EMC has got it right with the new ViPR storage architecture.

…(read the full post)