Cloud-based environment: The new normal for IT shops

An IT industry analyst article published by SearchServerVirtualization.


The sky is the limit as new cloud management tools and evolutions in storage help make hybrid and multicloud IT a viable option for organizations with on-prem data centers.

Mike Matchett
Small World Big Data

Doubts about a cloud-based environment being little more than a passing fancy are vanishing. Plenty of real enterprises are not only comfortable releasing key workloads to public clouds, but are finding that hybrid operations at scale offer significant economic, productivity and competitive advantages over traditional on-premises data centers.

In fact, many of the big announcements at VMworld 2017 highlighted how mainstream businesses are now building and consuming hybrid and multicloud IT.

NSX all around

VMware has accelerated its transition from hypervisor vendor to cloud-management tool provider. Its virtual networking product, NSX, is not only a big source of revenue for VMware, but it also underpins many newer offerings, such as AppDefense, VMware Cloud on AWS and Network Insight. Basically, NSX has become the glue, the ether that fills VMware’s multicloud management business.

By shifting the center of its universe from hypervisor to the network between and underneath everything, VMware can now provide command and control over infrastructure and applications running in data centers, clouds, mobile devices and even out to the brave new internet of things (IoT) edge.

More MaaS, please
VMware rolled out seven management as a service (MaaS) offerings. MaaS describes a sales model in which a vendor delivers systems management functionality as a remote, subscription utility service. MaaS is ideal for systems management tasks across multiple clouds and complex hybrid infrastructures.

One of the motivations for MaaS is that the IT administrator doesn’t need to install or maintain on-premises IT management tools. Another is that the MaaS vendor gains an opportunity to mine big data aggregated across their entire customer pool, which should enable it to build deeply intelligent services.

Four of these new services are based on existing vRealize Operations technologies that VMware has repackaged for SaaS-style delivery. We’ve also heard that there are more MaaS products on the way.

It’s important for vendors to offer MaaS services — such as call home and remote monitoring — as the inevitable future consumption model for all systems management. There isn’t a single organization that benefits from employing an expert to maintain its internal, complex systems management tool. And with mobile, distributed and hybrid operations, most existing on-premises management products fall short of covering the whole enterprise IT architecture. I have no doubt the future is MaaS, a model that is bound to quickly attract IT shops that want to focus less on maintaining management tools and more on efficiently operating hybrid, multicloud architectures.

Storage evolves
The VMworld show floor has been a real storage showcase in recent years, with vendors fighting for more attention and setting up bigger, flashier booths. But it seemed this year that the mainline storage vendors pulled back a bit. This could be because software-defined storage products such as VMware vSAN are growing so fast or that the not-so-subtle presence of Dell EMC storage has discouraged others from pushing as hard at this show. Or it could be that in this virtual hypervisor market, hyper-convergence (and open convergence too) is where it’s at these days.

If cloud-based environments and hybrid management are finally becoming just part of normal IT operations, then what’s the next big thing?

Maybe it’s that all the past storage hoopla stemmed from flash storage crashing its way through the market. Competition on the flash angle is smoothing out now that everyone has flash-focused storage products. This year, nonvolatile memory express, or NVMe, was on everyone’s roadmap, but there was very little NVMe out there ready to roll. I’d look to next year as the big year for NVMe vendor positioning. Who will get it first? Who will be fastest? Who will be most cost-efficient? While there is some argument that NVMe isn’t going to disrupt the storage market as flash did, I expect similar first-to-market vendor competitions.

Data protection, on the other hand, seems to be gaining momentum. Cohesity and other relatively new vendors have lots to offer organizations with large virtual and cloud-based environments. While secondary storage hasn’t always seemed sexy, scalable and performant secondary storage can make all the difference in how well the whole enterprise IT effort works. Newer scale-out designs can keep masses of secondary data online and easily available for recall or archive, restore, analytics and testing. Every day, we hear of new machine learning efforts to use bigger and deeper data histories.

These storage directions — hyper-convergence, faster media and scale-out secondary storage — all support a more distributed and hybrid approach to data center architectures…(read the complete as-published article there)

Secondary data storage: A massively scalable transformation

An IT industry analyst article published by SearchStorage.


Capitalize on flash with interactive, online secondary data storage architectures that make a lot more data available for business while maximizing flash investment.

Mike Matchett
Small World Big Data

We all know flash storage is fast, increasingly affordable and quickly beating out traditional spinning disk for primary storage needs. It’s like all our key business applications have been magically upgraded to perform 10 times faster!

In the data center, modern primary storage arrays now come with massive flash caching, large flash tiers or are all flash through and through. Old worries about flash wearing out have been largely forgotten. And there are some new takes on storage designs, such as Datrium’s, that make great use of less-expensive server-side flash. Clearly, spending money on some kind of flash, if not all flash, can be a great IT investment.

Yet, as everyone builds primary storage with flash, there is less differentiation among those flashy designs. At some point, “really fast” is fast enough for now, assuming you aren’t in financial trading.

Rather than argue whose flash is faster, more reliable, more scalable or even cheaper, the major enterprise IT storage concern is shifting toward getting the most out of whatever high-performance primary storage investment gets made. Chasing ever-greater performance can be competitively lucrative, but universally, we see business demand for larger operational data sets growing quickly. Flash or not, primary storage still presents an ever-present capacity-planning challenge.

A new ‘big data’ opportunity
The drive to optimize shiny new primary storage pushes IT folks to use it as much as possible with suitable supporting secondary data storage. As this is literally a new “big data” opportunity, there is a correspondingly big change happening in the secondary storage market. Old-school backup storage designed solely as an offline data protection target doesn’t provide the scale, speed and interactive storage services increasingly demanded by today’s self-service-oriented users.

We’re seeing a massive trend toward interactive, online, secondary storage architectures. Instead of dumping backups, snapshots and archives into slow, near-online or essentially offline deep storage tiers, organizations are finding it’s worthwhile to keep large volumes of second-tier data in active use. With this shift to online secondary data storage, end users can quickly find and recover their own data like they do with Apple’s Time Machine on their Macs. And organizations can profitably mine and derive valuable insights from older, colder, larger data sets, such as big data analytics, machine learning and deep historical search.

If that sounds like a handy convergence of backup and archive, you’re right. There’s increasingly less difference between data protection backup and recovery and retention archiving…(read the complete as-published article there)

What’s a Software Defined Data Center? – Pensa Aims Really High

This week Pensa came out of stealth to announce the launch of its company and its Pensa Maestro cloud-based (SaaS) platform, accessible today through an initial service offering called Pensa Lab. The technology here holds real promise, and importantly, the team at Pensa is filling out with top-notch people (I used to work for Tom Joyce).

I’m not sure we analysts have firmed up all the words to easily describe what they do yet, but basically Pensa provides a way to define the whole data center in code, validate it as a model, and then pull a trigger and aim it at some infrastructure to automatically deploy it. Data centers on demand! Of course, doing all the background transfigurations to validate and actually deploy this über level of complexity and scale requires big smarts – a large part of the magic here is some cleverly applied ML algorithms to drive the required transformations, enforce policies and set up SDN configurations.

What is Software Defined?

So let’s back up a bit and explore some of the technologies involved. One of the big benefits of software and software-defined resources is that they can be spun up dynamically (and readily converged within compute hosts alongside applications and other software-defined resources). These software-side “resources” are usually provisioned and configured through editable model/manifest files and templates – so-called “infrastructure as code.” Because they are implemented in software, they are often also dynamically reconfigurable and remotely programmable through APIs.
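
To make that concrete, here is a minimal, purely illustrative Python sketch of the infrastructure-as-code pattern: a declarative manifest of desired resources, plus a tiny reconcile loop that pushes that state through a mocked, remotely programmable endpoint. None of the names or calls below come from any real product; they are stand-ins for the kind of APIs described above.

    # Minimal illustrative sketch (no vendor's real API): a declarative
    # "infrastructure as code" manifest, plus a tiny reconcile loop that
    # pushes the desired state through a mocked provisioning endpoint.

    desired_state = {
        "networks": [
            {"name": "app-net", "cidr": "10.1.0.0/24"},
        ],
        "volumes": [
            {"name": "db-vol", "size_gb": 500, "tier": "flash"},
        ],
        "vms": [
            {"name": "web-01", "cpus": 4, "ram_gb": 16, "network": "app-net"},
            {"name": "db-01", "cpus": 8, "ram_gb": 64, "network": "app-net"},
        ],
    }

    class MockProvisioningAPI:
        """Stand-in for a remote, programmable infrastructure endpoint."""

        def __init__(self):
            self.current = {"networks": [], "volumes": [], "vms": []}

        def create(self, kind, spec):
            print(f"provisioning {kind}: {spec['name']}")
            self.current[kind].append(spec)

    def apply(manifest, api):
        """Reconcile desired state against what the endpoint reports as deployed."""
        for kind, resources in manifest.items():
            existing = {r["name"] for r in api.current[kind]}
            for spec in resources:
                if spec["name"] not in existing:
                    api.create(kind, spec)

    if __name__ == "__main__":
        api = MockProvisioningAPI()
        apply(desired_state, api)   # first run provisions everything
        apply(desired_state, api)   # second run is a no-op

Toy code aside, the point is that the manifest – not a sequence of hand-run steps – becomes the editable, versionable source of truth, and applying it a second time is harmless.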

Application Blueprinting for DevOps

On the other side of the IT fence, applications are increasingly provisioned and deployed dynamically via recipes or catalog-style automation, which in turn rely on internal application “blueprint” or container manifest files that can drive automated configuration and deployment of application code and needed resources, like private network connections, storage volumes and specific data sets. This idea is most visible in new containerized environments, but we also see application blueprinting coming on strong for legacy hypervisor environments and bare metal provisioning solutions.

Truly Software Defined Data Centers

If you put these two ideas together – software-defined resources and application blueprinting – you might envision a truly software defined data center describable fully in code. With some clever discovery solutions, you can imagine an existing data center being explored and captured/documented into a model file describing a complete blueprint for both infrastructure and applications (and the enterprise services that wrap around them). Versions of that data center “file” could be edited as desired (e.g., to make a test or dev version), with the resulting data center models deployable at will on some other actual infrastructure – like “another” public cloud.

Automation of this scenario requires an intelligent translation of high-level blueprint service and resource requirements into practical provisioning and operational configurations on the specific target infrastructure. But imagine being able to effectively snapshot your current data center top to bottom, and then be able to deploy a full, complete copy on demand for testing, replication or even live DR (we might call this a “live re-inflation DR,” or LR-DR, scenario).
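
To sketch how that capture-edit-redeploy flow might look, here is a small, hypothetical Python example: it wraps a discovered inventory into an editable data center model, derives a scaled-down test variant, and “re-inflates” it onto some other target. Every function and field here is invented for illustration and is not taken from Pensa Maestro or any other product.

    # Hypothetical sketch of capture -> edit -> redeploy for a whole
    # data center model; all names are made up for illustration only.
    import copy
    import json

    def capture_datacenter(inventory):
        """Pretend 'discovery': wrap a live inventory into an editable model."""
        return {"name": "prod-dc", "resources": copy.deepcopy(inventory)}

    def make_test_copy(model):
        """Derive a scaled-down test/dev variant of the captured model."""
        test = copy.deepcopy(model)
        test["name"] = "test-dc"
        for vm in test["resources"]["vms"]:
            vm["cpus"] = max(1, vm["cpus"] // 2)      # shrink for test use
            vm["ram_gb"] = max(2, vm["ram_gb"] // 2)
        return test

    def deploy(model, provision):
        """Re-inflate the model onto some other infrastructure via a provisioner."""
        for kind, items in model["resources"].items():
            for spec in items:
                provision(kind, spec)

    if __name__ == "__main__":
        live_inventory = {
            "networks": [{"name": "app-net", "cidr": "10.1.0.0/24"}],
            "vms": [{"name": "web-01", "cpus": 4, "ram_gb": 16},
                    {"name": "db-01", "cpus": 8, "ram_gb": 64}],
        }
        model = capture_datacenter(live_inventory)
        print(json.dumps(model, indent=2))            # the data center "file"
        deploy(make_test_copy(model),
               lambda kind, spec: print(f"deploying {kind} {spec['name']} to target"))

Point the same mechanics at an actual public cloud target instead of a print statement and you have, in miniature, the live re-inflation DR scenario described above.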

Of course, today’s data center is increasingly hybrid and multi-cloud, consisting of a mix of physical servers, virtual machines, containerized apps and corporate data. But through emerging cutting-edge IT capabilities like hybrid-supporting software-defined networking and storage, composable bare metal provisioning, virtualizing hypervisors and cloud-orchestration stacks, container systems, PaaS, and hybrid cloud storage services (e.g., HPE’s Cloud Volumes), it’s becoming possible to blueprint and dynamically deploy not just applications, but soon the whole data center around them.

There is no way that VMware, whose tagline has been SDDC for some time, will roll over and cede the territory here completely to Pensa (or any other startup). But Pensa now has a live service out there today – and that could prove disruptive to the whole enterprise IT market.