Future of data storage technology: Transformational trends for 2018

An IT industry analyst article published by SearchStorage.


Risk-averse enterprises finally accepted the cloud in 2017, and we didn’t even notice. Expect the same for these data storage technology trends in the new year.

Mike Matchett
Small World Big Data

Sometimes big changes sneak up on you, especially when you’re talking about the future of data storage technology. For example, when exactly did full-on cloud adoption win acceptance among all those risk-averse organizations, understaffed IT shops and disbelieving business executives? I’m not complaining, but the needle of cloud acceptance tilted over sometime in the recent past without much ado. It seems everyone has let go of their fear of cloud and hybrid operations as risky propositions. Instead, we’ve all come to accept the cloud as something that’s just done.

Sure, cloud was inevitable, but I’d still like to know why it finally happened now. Maybe it’s because IT consumers expect information technology to provide whatever they want on demand. Or maybe it’s because everything IT implements on premises now comes labeled as private cloud. Influential companies, such as IBM, Microsoft and Oracle, are happy to help ease folks formerly committed to private infrastructure toward hybrid architectures that happen to use their respective cloud services.

In any case, I’m disappointed I didn’t get my invitation to the “cloud finally happened” party. But having missed cloud’s big moment, I’m not going to let other obvious yet possibly transformative trends sneak past as they go mainstream with enterprises in 2018. So when it comes to the future of data storage technology, I’ll be watching the following:

Containers arose out of a long-standing desire for a better way to package applications. This year, we should see enterprise-class container management reach maturity parity with virtual machine management without giving up any of the advantages containers hold over VMs. Expect modern software-defined resources, such as storage, to be delivered mostly in containerized form. Combined with dynamic operational APIs, these resources will deliver highly flexible, programmable infrastructures. This approach should let vendors package applications and their required infrastructure as redeployable units, blueprinted or specified in editable and versionable manifest files, enabling full-environment and even data center-level cloud provisioning. Being able to deploy a data center on demand could completely transform disaster recovery, to name one use case.
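To make the blueprint-and-manifest idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical for illustration (the environment name, image references and field layout are invented, not any particular vendor’s or orchestrator’s format); the point is simply that an environment can be declared as data, committed to version control and redeployed on demand.

```python
# A hypothetical environment "blueprint" declared as data, then written out as
# an editable, versionable manifest file. Names, images and fields are invented
# for illustration; real orchestrators each have their own manifest formats.
import json

blueprint = {
    "environment": "dr-site",                          # hypothetical DR environment
    "services": [
        {"name": "orders-api",
         "image": "registry.example.com/orders:2.4",   # hypothetical image reference
         "replicas": 3},
        {"name": "orders-db",
         "image": "registry.example.com/postgres:10",
         "replicas": 1,
         "storage": {"class": "fast-ssd", "size_gb": 500}},
    ],
}

# Commit this file to version control: it can be diffed, reviewed, rolled back
# and redeployed on demand.
with open("dr-site.manifest.json", "w") as f:
    json.dump(blueprint, f, indent=2)
```

A disaster recovery workflow could then hand a committed manifest like this to whatever orchestration tooling a shop already runs in order to stand the whole environment back up elsewhere.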

Everyone is talking about AI, but it’s machine learning that’s slowly permeating just about every facet of IT management. Although there’s a lot of hype, it’s worth figuring out how and where carefully applied machine learning could add significant value. Most machine learning conceptually amounts to advanced forms of pattern recognition, so think about where using the technology to automatically identify complex patterns would reduce time and effort. We expect the increasing availability of machine learning algorithms to give rise to storage management processes that can learn and adjust operations and settings to optimize workload services, quickly identify and fix the root causes of abnormalities, and broker storage infrastructure and manage large-scale data to minimize cost.
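As a deliberately simplified illustration of that pattern-recognition idea, the Python sketch below flags storage latency samples that deviate sharply from their recent history. Real products use far richer models; the metric values, window and threshold here are invented.

```python
# Toy anomaly detector: flag latency samples that deviate sharply from the
# recent norm. Metric values and the 3-sigma threshold are illustrative only.
import statistics

def find_anomalies(samples, window=20, threshold=3.0):
    """Return indexes of samples that sit more than `threshold` standard
    deviations away from the mean of the preceding `window` samples."""
    anomalies = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history) or 1e-9  # avoid divide-by-zero
        if abs(samples[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

# Example: steady ~5 ms latency with one obvious spike at the end.
latency_ms = [5.1, 4.9, 5.0, 5.2, 4.8] * 5 + [42.0]
print(find_anomalies(latency_ms))  # -> [25]
```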

Management as a service (MaaS) is also gaining traction when it comes to the future of data storage technology. Already, every storage array seemingly comes with built-in call home support replete with management analytics and performance optimization. I predict the reporting interval for most remote vendor management services will quickly drop from today’s daily batch to five-minute streaming. I also expect cloud-hosted MaaS offerings to become the way most shops manage their increasingly hybrid architectures, and many will start to shift away from the burdens of on-premises management software…(read the complete as-published article there)

Cloud-based environment: The new normal for IT shops

An IT industry analyst article published by SearchServerVirtualization.


The sky is the limit as new cloud management tools and evolutions in storage help make hybrid and multicloud IT a viable option for organizations with on-prem data centers.

Mike Matchett
Small World Big Data

Doubts about a cloud-based environment being little more than a passing fancy are vanishing. Plenty of real enterprises are not only comfortable releasing key workloads to public clouds, but are finding that hybrid operations at scale offer significant economic, productivity and competitive advantages over traditional on-premises data centers.

In fact, many of the big announcements at VMworld 2017 highlighted how mainstream businesses are now building and consuming hybrid and multicloud IT.

NSX all around

VMware has accelerated its transition from hypervisor vendor to cloud-management tool provider. Its virtual networking product, NSX, is not only a big source of revenue for VMware, but it also underpins many newer offerings, such as AppDefense, VMware Cloud on AWS and Network Insight. Basically, NSX has become the glue, the ether, that holds VMware’s multicloud management business together.

By shifting the center of its universe from hypervisor to the network between and underneath everything, VMware can now provide command and control over infrastructure and applications running in data centers, clouds, mobile devices and even out to the brave new internet of things (IoT) edge.

More MaaS, please

VMware rolled out seven management as a service (MaaS) offerings. MaaS describes a sales model in which a vendor delivers systems management functionality as a remote, subscription utility service. MaaS is ideal for systems management tasks across multiple clouds and complex hybrid infrastructures.

One of the motivations for MaaS is that the IT administrator doesn’t need to install or maintain on-premises IT management tools. Another is that the MaaS vendor gains an opportunity to mine big data aggregated across its entire customer pool, which should enable it to build deeply intelligent services.

Four of these new services are based on existing vRealize Operations technologies that VMware has repackaged for SaaS-style delivery. We’ve also heard that there are more MaaS products on the way.

It’s important for vendors to offer MaaS services, such as call home and remote monitoring, because that is the inevitable future consumption model for all systems management. Hardly any organization benefits from employing an expert just to maintain a complex, internal systems management tool. And with mobile, distributed and hybrid operations, most existing on-premises management products fall short of covering the whole enterprise IT architecture. I have no doubt the future is MaaS, a model bound to quickly attract IT shops that want to focus less on maintaining management tools and more on efficiently operating hybrid, multicloud architectures.

Storage evolves

The VMworld show floor has been a real storage showcase in recent years, with vendors fighting for more attention and setting up bigger, flashier booths. But it seemed this year that the mainline storage vendors pulled back a bit. This could be because software-defined storage products such as VMware vSAN are growing so fast, or because the not-so-subtle presence of Dell EMC storage has discouraged others from pushing as hard at this show. Or it could be that in this virtual hypervisor market, hyper-convergence (and open convergence too) is where it’s at these days.

If cloud-based environments and hybrid management are finally becoming just part of normal IT operations, then what’s the next big thing?

Maybe it’s that all the past storage hoopla stemmed from flash storage crashing its way through the market. Competition on the flash angle is smoothing out now that everyone has flash-focused storage products. This year, nonvolatile memory express, or NVMe, was on everyone’s roadmap, but there was very little NVMe out there ready to roll. I’d look to next year as the big year for NVMe vendor positioning. Who will get it first? Who will be fastest? Who will be most cost-efficient? While there is some argument that NVMe isn’t going to disrupt the storage market as flash did, I expect similar first-to-market vendor competitions.

Data protection, on the other hand, seems to be gaining ground. Cohesity and other relatively new vendors have lots to offer organizations with a large virtual and cloud-based environment. While secondary storage hasn’t always seemed sexy, scalable and performant secondary storage can make all the difference in how well the whole enterprise IT effort works. Newer scale-out designs can keep masses of secondary data online and easily available for recall, archive, restore, analytics and testing. Every day, we hear of new machine learning efforts to use bigger and deeper data histories.

These storage directions — hyper-convergence, faster media and scale-out secondary storage — all support a more distributed and hybrid approach to data center architectures…(read the complete as-published article there)

Smarter storage starts with analytics

An IT industry analyst article published by SearchStorage.


Storage smartens up to keep pace with data-intensive business applications embedding operational analytics capabilities.

Mike Matchett

The amount of data available to today’s enterprise is staggering. Yet the race to collect and mine even more data to gain competitive insight, deeply optimize business processes and better inform strategic decision-making is accelerating. Fueled by these new data-intensive capabilities, traditional enterprise business applications primarily focused on operational transactions are now quickly converging with advanced big data analytics to help organizations grow increasingly (albeit artificially) intelligent.

To help IT keep pace with data-intensive business applications that are now embedding operational analytics, data center infrastructure is also evolving rapidly. In-memory computing, massive server-side flash, software-defined resources and scale-out platforms are a few of the recent growth areas reshaping today’s data centers. In particular, we are seeing storage infrastructure, long considered the slow-changing anchor of the data center, transforming faster than ever. You might say that we’re seeing smarter storage.

Modern storage products take full advantage of newer silicon technologies, growing smarter with new inherent analytics, embedding hybrid cloud tiering and (often) converging with or hosting core data processing directly. Perhaps the biggest recent change in storage isn’t with hardware or algorithms at all, but with how storage can now best be managed.

For a long time, IT shops had no option but to manage storage by deploying and learning a unique storage management tool for each type of vendor product in use. That wastes significant time implementing, integrating and supporting one-off instances of complex, vendor-specific management tools. But as the volume of data about business data grows (usage, performance, security and so on; see “Benefits of analytical supercharging”), simply managing the resulting metrics database becomes a huge challenge as well. And with trends like the internet of things baking streaming sensors into everything, key systems metadata is itself becoming far more prolific and real-time.

It can take a significant data science investment to harvest the desired value from all that metadata.

Storage analytics ‘call home’

So while I’m all for DIY when it comes to unique integration of analytics with business processes and leveraging APIs to create custom widgets or reports, I’ve seen too many enterprises develop their own custom in-house storage management tools, only to see them become as expensive and onerous to support and keep current as if they had just licensed one of the old-school “Big 4” enterprise management platforms (i.e., BMC, CA, Hewlett Packard Enterprise [HPE] and IBM). In these days of cloud-hosted software as a service (SaaS) business applications, it makes sense to subscribe to such onerous IT management tasks from a remote expert service provider instead.

Remote storage management on a big scale really started with the augmented vendor support “call home” capability pioneered by NetApp years ago. Log and event files from on-premises arrays are bundled up and sent daily back to the vendor’s big data database “in the cloud.” Experts then analyze incoming data from all participating customers with big data analysis tools (e.g., Cassandra, HBase and Spark) to learn from their whole pool of end-user deployments.
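For a sense of what that fleet-wide analysis can look like, here is a minimal, hypothetical PySpark sketch that counts which error signatures affect the most customers across all uploaded call-home logs. The storage path, schema and column names are invented for illustration and don’t reflect any particular vendor’s pipeline.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("call-home-fleet-analysis").getOrCreate()

# Hypothetical layout: one JSON record per array event, uploaded daily by the
# call-home agent. The path and columns (customer_id, model, error_code,
# severity) are invented for this sketch.
events = spark.read.json("s3://vendor-call-home/daily/*/events.json")

# Which error signatures are most widespread across the whole customer pool?
top_errors = (events
              .filter(F.col("severity") == "error")
              .groupBy("model", "error_code")
              .agg(F.countDistinct("customer_id").alias("customers_affected"),
                   F.count("*").alias("occurrences"))
              .orderBy(F.col("customers_affected").desc()))

top_errors.show(20)
```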

Benefits of analytical supercharging

Smarter infrastructure with embedded analytical intelligence can help IT do many things better, and in some cases even continue to improve with automated machine learning. Some IT processes already benefitting from analytical supercharging include the following:

  • Troubleshooting. Advanced analytics can provide predictive alerting to warn of potential danger in time to avoid it, conduct root cause analyses when something does go wrong to identify the real problem that needs to be addressed, and make intelligent recommendations for remediation. (A minimal predictive-alerting sketch follows this list.)
  • Resource optimization. By learning what workloads require for good service and how resources are used over time, analytics can help tune and manage resource allocations to both ensure application performance and optimize infrastructure utilization.
  • Operations automation. Smarter storage systems can learn (in a number of ways) how to best automate key processes and workflows, and then optimally manage operational tasks at large scale — effectively taking over many of today’s manual DevOps functions.
  • Brokerage. Cost control and optimization will become increasingly important and complex as truly agile hybrid computing goes mainstream. Intelligent algorithms will be able to make the best cross-cloud brokering and dynamic deployment decisions.
  • Security. Analytical approaches to securing enterprise networks and data are key to processing the massive scale and nonstop stream of global event and log data required today to find and stop malicious intrusion, denial of service and theft of corporate assets.
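Here is the minimal predictive-alerting sketch referenced above: it fits a simple linear trend to daily capacity-consumption samples and estimates when a storage pool will cross a threshold. The history, threshold and planning horizon are invented, and production analytics would use far more sophisticated forecasting.

```python
# Toy predictive alert: project capacity growth forward with a least-squares
# line and warn if the pool looks likely to fill within the planning horizon.
# The sample history and 90% threshold are invented for illustration.

def days_until_threshold(used_pct_history, threshold=90.0):
    """Fit used% = slope*day + intercept to the history (one sample per day)
    and return the estimated days until `threshold` is crossed, or None if
    usage is flat or shrinking."""
    n = len(used_pct_history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(used_pct_history) / n
    denom = sum((x - mean_x) ** 2 for x in xs)
    slope = sum((x - mean_x) * (y - mean_y)
                for x, y in zip(xs, used_pct_history)) / denom
    if slope <= 0:
        return None
    intercept = mean_y - slope * mean_x
    crossing_day = (threshold - intercept) / slope
    return max(0.0, crossing_day - (n - 1))

history = [62.0, 62.4, 63.1, 63.5, 64.2, 64.8, 65.5]   # % used, one sample/day
days = days_until_threshold(history)
if days is not None and days < 60:                      # 60-day planning horizon
    print(f"Capacity alert: pool projected to reach 90% in ~{days:.0f} days")
```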

That way, the array vendor can deliver valuable proactive advice and recommendations based on data any one organization simply couldn’t generate on its own. With this SaaS model, IT doesn’t have to manage its own historical database, operate a big data analysis platform or find the data science resources to analyze it. And the provider can gain insight into general end-user behavior, study actual feature usage and identify sales and marketing opportunities.

Although it seems every storage vendor today offers call home support, you can still differentiate among them. Some look at customer usage data at finer-grained intervals, even approaching real-time, stream-based monitoring. Some work hard on improving visualization and reporting. And others intelligently mine collected data to train machine learning models and feed smarter operational advice back to users…(read the complete as-published article there)

What options exist for IT infrastructure management services?

An IT industry analyst article published by SearchITOperations.


What kinds of as-a-service IT management options are available? Are IT management services only coming from startups, or do established management software vendors have options?

Mike Matchett

Various companies offer IT infrastructure management services hosted and operated in a remote, multi-tenant cloud. This as-a-service model provides core IT management services to private, on-premises data centers, remote offices, rented infrastructure in colocation or other infrastructure as a service hosting, or some hybrid combination of these deployments.

As an early example, when Exablox launched, it targeted IT shops generally seeking to squeeze the most out of constrained storage budgets — organizations that would gladly give up the pain and cost of installing and operating on-premises storage management in favor of just using a cloud-hosted storage management service. This approach radically evolved call-home support based on daily data dumps into online operational IT management as a service.

At that time, some businesses dismissed the idea of IT infrastructure management services that didn’t require operational software hosted directly in their own data centers. Some forward-thinking startups, such as VM management provider CloudPhysics and the deeper infrastructure-focused Galileo Performance Explorer, noted that large companies would consider remote performance management tooling, as it’s based on machine data and log files with little risk of exposing corporate secrets. And performance management activities don’t sit in the direct operational workflow…(read the complete as-published article there)

IT management as a service is coming to a data center near you

An IT industry analyst article published by SearchITOperations.


IT management as a service uses big data analytics and vendors’ expertise to ease the IT administration and optimization process. IT orgs must trust the flow of log and related data into an offsite, multi-tenant cloud.

Mike Matchett

IT management as a service finally breaks through. Where does it go from here?

Perhaps the über IT trend isn’t about hailing a ride from within the data center, but about adopting and migrating to newer generations of tools that ease the cost and pain of managing infrastructure.

It’s not efficient for each IT shop to individually develop and maintain siloed expertise in managing every vendor-specific component. The physical — and financial — limits of IT shops are by and large why cloud service providers continue to gain ground.

Today, there is an inexorable transition toward commoditized physical equipment with differentiating software-defined capabilities floated in on top. Using commodity hardware offers direct CapEx benefits. However, by taking advantage of software resources — and virtualization — to pre-integrate multiple infrastructure layers, converged and hyper-converged platforms also eliminate significant IT time and labor required by traditional, siloed architectures. In freeing up IT, the converged and hyper-converged options also improve overall agility and help IT groups transition from equipment caretakers to business enhancers.

In a similar bid to lower management OpEx pain, IT operations and management solutions are slowly and inexorably increasing inherent automation. Policy-based approaches help an IT organization address scale and focus on building the right services for its users instead of remaining stuck in low-level, tedious and often reactive “per-thing” configuration and management. And much of the appeal of cloud computing is based on offloading IT by offering end-user self-service capabilities.

But even in running a hyper-converged or hybrid cloud data center, there are still plenty of IT hours spent thanklessly on internally facing operations and management tasks. Operating a cloud, a cluster, a hybrid operation — even just maintaining the actual management tools that run the operations and automation — can still be a huge chore. Similar to how many businesses now use the cloud as a source of easy, catalog-driven, self-service, elastic, utility-priced application computing, IT is starting to look to the cloud for IT management as a service.

The broadening acceptance of public cloud services is inverting the traditional IT management paradigm, moving management services into the cloud while preserving on-premises — or hybrid — computing and infrastructure. This has been a long, slow battle due to ingrained IT tradition, security fears and worries about losing control; there’s a reluctance to let go of the private management stack. But the drive to make IT more efficient and productive is now taking priority.

We are seeing the inevitable acceptance and widespread adoption of remote, cloud-hosted IT management services, from remote performance management to hybrid cloud provisioning and brokering and even on-premises “cluster” operations. These services can be referred to collectively as IT management as a service, or IT MaaS…(read the complete as-published article there)