How do trends in primary storage affect secondary storage?

I caught up with Steve Pao at Igneous recently to chat again about the rise of secondary storage. Primary storage is great and all, but flash is everywhere; it’s old news. In secondary storage, we’re seeing a lot happening with scale, functionality, hybridization and built-in data protection services.

How do trends in primary storage affect secondary storage? (here with full transcript)

Is demand for data storage or supply driving increased storage?

An IT industry analyst article published by SearchStorage.


Figuring out whether we’re storing more data than ever because we’re producing more data or because constantly evolving storage technology lets us store more of it isn’t easy.

Mike Matchett
Small World Big Data

Whether you’re growing on-premises storage or your cloud storage footprint this year, it’s likely you’re increasing total storage faster than ever. Where we used to see capacity upgrade requests for proposals (RFPs) citing tens of terabytes of growth, we now regularly see RFPs for half a petabyte or more. When it comes to storage size, huge is in.

Do we really need that much more data to stay competitive? Yes, probably. Can we afford extremely deep storage repositories? It seems that we can. However, these questions raise a more basic chicken-and-egg question: Are we storing more data because we’re making more data or because constantly evolving storage technology lets us?

Data storage economics
Looked at from a pricing perspective, the question becomes what’s driving price — more demand for data storage or more storage supply? I’ve heard economics professors say they can tell who really understands basic supply and demand price curve lessons when students ask this kind of question and consider a supply-side answer first. People tend to focus on demand-side explanations as the most straightforward way of explaining why prices fluctuate. I guess it’s easier to assume supply is a remote constant while envisioning all the possible changes in demand for data storage.

As we learn to wring more value out of our data, we want to both make and store more data.

But if storage supply were constant, given our massive data growth, storage should be really expensive. The massive squirreling away of data would instead be constrained by that high price, a signal of scarce supply. That’s how it was years ago. Remember when traditional IT application environments struggled to fit into limited storage infrastructure that was already stretched thin to meet ever-growing demand?

Today, data capacities are growing fast, and yet the price per unit of storage capacity keeps dropping. There’s no doubt supply is rising faster than demand for data storage. Technologies that bring tremendous supply-side benefits, such as the inherent efficiencies of shared cloud storage, Moore’s law and clustered open source file systems like the Hadoop Distributed File System, have made bulk capacity so affordable that prices continue to fall despite massive growth in demand.
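
Here’s a toy, unit-elastic sketch of that argument (all growth rates made up for illustration): let a price index simply track the ratio of demand to supply, and watch what happens when supply compounds faster.

# Toy model with assumed growth rates, not market data:
# the price index simply tracks the ratio of demand to supply.
demand, supply = 1.0, 1.0
for year in range(1, 6):
    demand *= 1.40  # assume demand for capacity compounds at 40%/year
    supply *= 1.60  # assume supply (density, cloud scale) compounds at 60%/year
    print(f"year {year}: $/GB index {demand / supply:.2f}")

Even with demand growing 40% a year, the price index falls by roughly half over five years in this sketch, which is the supply-side story in miniature.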

Endless data storage
When we think of hot new storage technologies, we tend to focus on primary storage advances such as flash and nonvolatile memory express. All so-called secondary storage comes, well, second. It’s true the relative value of a gigabyte of primary storage has greatly increased. Just compare the ROI of buying a whole bunch of dedicated, short-stroked HDDs as we did in the past to investing in a modicum of today’s fully deduped, automatically tiered and workload-shared flash.

It’s also worth thinking about flash storage in terms of impact on capacity, not just performance. If flash storage can serve a workload in one-tenth the time, it can also serve 10 similar workloads in the same time, providing an effective 10-times capacity boost.
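
A quick back-of-the-envelope version of that math, using hypothetical service times (queuing effects aside, throughput scales roughly as the inverse of service time):

# Hypothetical per-I/O service times: 10 ms on disk vs. 1 ms on flash.
hdd_ms, flash_ms = 10.0, 1.0
workload_multiplier = hdd_ms / flash_ms  # throughput is proportional to 1 / service time
print(f"one flash array can absorb ~{workload_multiplier:.0f}x similar workloads")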

But don’t discount the major changes that have happened in secondary storage…(read the complete as-published article there)

Secondary Storage in a Primary Role!?

Hey all! – This is the first of what I hope will be many little topical quick video segments, working with Dave Littman over at Truth In IT to get them recorded, produced and published.

In this one, we discuss what’s going on with secondary storage these days, and how it’s perhaps more interesting than all that commodity-ish “all-flash” primary storage out there.

Can secondary storage play a primary role in the datacenter?

(gotta love the YouTube freeze frame that catches us both with eyes closed!)

Cloud-based environment: The new normal for IT shops

An IT industry analyst article published by SearchServerVirtualization.


The sky is the limit as new cloud management tools and evolutions in storage help make hybrid and multicloud IT a viable option for organizations with on-prem data centers.

Mike Matchett
Small World Big Data

Doubts about a cloud-based environment being little more than a passing fancy are vanishing. Plenty of real enterprises are not only comfortable releasing key workloads to public clouds, but are finding that hybrid operations at scale offer significant economic, productivity and competitive advantages over traditional on-premises data centers.

In fact, many of the big announcements at VMworld 2017 highlighted how mainstream businesses are now building and consuming hybrid and multicloud IT.

NSX all around
VMware has accelerated its transition from hypervisor vendor to cloud-management tool provider. Its virtual networking product, NSX, is not only a big source of revenue for VMware, but it also underpins many newer offerings, such as AppDefense, VMware Cloud on AWS and Network Insight. Basically, NSX has become the glue, the ether, that holds VMware’s multicloud management business together.

By shifting the center of its universe from hypervisor to the network between and underneath everything, VMware can now provide command and control over infrastructure and applications running in data centers, clouds, mobile devices and even out to the brave new internet of things (IoT) edge.

More MaaS, please
VMware rolled out seven management as a service (MaaS) offerings. MaaS describes a sales model in which a vendor delivers systems management functionality as a remote, subscription utility service. MaaS is ideal for systems management tasks across multiple clouds and complex hybrid infrastructures.

One of the motivations for MaaS is that the IT administrator doesn’t need to install or maintain on-premises IT management tools. Another is that the MaaS vendor gains the opportunity to mine big data aggregated across its entire customer pool, which should enable it to build deeply intelligent services.

Four of these new services are based on existing vRealize Operations technologies that VMware has repackaged for SaaS-style delivery. We’ve also heard that there are more MaaS products on the way.

It’s important for vendors to offer MaaS services — such as call home and remote monitoring — as the inevitable future consumption model for all systems management. There isn’t a single organization that benefits from employing an expert to maintain its internal, complex systems management tool. And with mobile, distributed and hybrid operations, most existing on-premises management products fall short of covering the whole enterprise IT architecture. I have no doubt the future is MaaS, a model that is bound to quickly attract IT shops that want to focus less on maintaining management tools and more on efficiently operating hybrid, multicloud architectures.

Storage evolves
The VMworld show floor has been a real storage showcase in recent years, with vendors fighting for attention and setting up bigger, flashier booths. But it seemed this year that the mainline storage vendors pulled back a bit. This could be because software-defined storage products such as VMware vSAN are growing so fast, or because the not-so-subtle presence of Dell EMC storage has discouraged others from pushing as hard at this show. Or it could be that, in this virtual hypervisor market, hyper-convergence (and open convergence, too) is where it’s at these days.

If cloud-based environments and hybrid management are finally becoming just part of normal IT operations, then what’s the next big thing?

Maybe it’s that all the past storage hoopla stemmed from flash storage crashing its way through the market. Competition on the flash angle is smoothing out now that everyone has flash-focused storage products. This year, nonvolatile memory express, or NVMe, was on everyone’s roadmap, but there was very little NVMe out there ready to roll. I’d look to next year as the big year for NVMe vendor positioning. Who will get it first? Who will be fastest? Who will be most cost-efficient? While there is some argument that NVMe isn’t going to disrupt the storage market as flash did, I expect similar first-to-market vendor competitions.

Data protection, on the other hand, seems to be gaining momentum. Cohesity and other relatively new vendors have lots to offer organizations with large virtual and cloud-based environments. While secondary storage hasn’t always seemed sexy, scalable and performant secondary storage can make all the difference in how well the whole enterprise IT effort works. Newer scale-out designs can keep masses of secondary data online and easily available for recall, archive, restore, analytics and testing. Every day, we hear of new machine learning efforts that use bigger and deeper data histories.

These storage directions — hyper-convergence, faster media and scale-out secondary storage — all support a more distributed and hybrid approach to data center architectures…(read the complete as-published article there)

Secondary data storage: A massively scalable transformation

An IT industry analyst article published by SearchStorage.


Capitalize on flash with interactive, online secondary data storage architectures that make a lot more data available for business while maximizing flash investment.

Mike Matchett
Small World Big Data

We all know flash storage is fast, increasingly affordable and quickly beating out traditional spinning disk for primary storage needs. It’s like all our key business applications have been magically upgraded to perform 10 times faster!

In the data center, modern primary storage arrays now come with massive flash caching, large flash tiers or are all flash through and through. Old worries about flash wearing out have been largely forgotten. And there are some new takes on storage designs, such as Datrium’s, that make great use of less-expensive server-side flash. Clearly, spending money on some kind of flash, if not all flash, can be a great IT investment.

Yet, as everyone builds primary storage with flash, there is less differentiation among those flashy designs. At some point, “really fast” is fast enough for now, assuming you aren’t in financial trading.

Rather than argue whose flash is faster, more reliable, more scalable or even cheaper, the major enterprise IT storage concern is shifting toward getting the most out of whatever high-performance primary storage investment gets made. Chasing ever-greater performance can be competitively lucrative, but universally, we see business demand for larger operational data sets growing quickly. Flash or not, primary storage still presents an ever-present capacity-planning challenge.
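
To see why that challenge never goes away, here’s a minimal compound-growth sketch; the capacity, utilization and growth figures are all assumed for illustration:

import math

usable_tb = 500.0     # assumed usable capacity of the new primary tier
current_tb = 300.0    # assumed data already landed on it
annual_growth = 0.35  # assumed 35%/year operational data growth

# Solve current_tb * (1 + g)^t = usable_tb for t (years until full).
years_until_full = math.log(usable_tb / current_tb) / math.log(1 + annual_growth)
print(f"full in ~{years_until_full:.1f} years at {annual_growth:.0%} growth")

With those numbers, a freshly provisioned tier fills in well under two years, which is exactly the pressure that pushes colder data toward secondary storage.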

A new ‘big data’ opportunity
The drive to optimize shiny new primary storage pushes IT folks to use it as much as possible with suitable supporting secondary data storage. As this is literally a new “big data” opportunity, there is a correspondingly big change happening in the secondary storage market. Old-school backup storage designed solely as an offline data protection target doesn’t provide the scale, speed and interactive storage services increasingly demanded by today’s self-service-oriented users.

We’re seeing a massive trend toward interactive, online secondary storage architectures. Instead of dumping backups, snapshots and archives into slow, nearline or essentially offline deep storage tiers, organizations are finding it worthwhile to keep large volumes of second-tier data in active use. With this shift to online secondary data storage, end users can quickly find and recover their own data, much as they do with Apple’s Time Machine on their Macs. And organizations can profitably mine older, colder, larger data sets for valuable insights through big data analytics, machine learning and deep historical search.

If that sounds like a handy convergence of backup and archive, you’re right. There’s increasingly less difference between data protection backup and recovery and retention archiving…(read the complete as-published article there)