Survey Results: Cloud Storage Takes Off, Flash Cools Off

An IT industry analyst article published by Enterprise Storage Forum.


By Mike Matchett

The Enterprise Storage Survey results show that the biggest storage budget line item is cloud storage, although HDDs still hold more data. We explore why cloud is inevitably winning, and when the actual tipping point might come about.

Is on-premises storage dead? Is all storage inevitably moving to the cloud? If you work in IT these days, you are no doubt keeping a close eye on the massive changes afoot in storage infrastructure. Flash acceleration, hyperconvergence, cloud transformation – where is it all going, and how soon will it get there?

We explored the past, present and future of enterprise storage technologies as part of our recent Storage Trends 2018 survey.

The Dominance of Cloud Storage
The short story is that cloud storage has now edged out the ubiquitous hard drive as the top budget line item in IT storage spending (see below). We are not sure if this is good news or bad news for IT, but it is clear that cloud-heavy IT shops now have to get on top of and actively manage their cloud storage spending.

[Figure: storage survey results]

Despite cloud moving into the lead at slightly more than 21% of companies, the game is not over yet for on-premises storage solutions. Flash has still not run its full course, and HDDs are still the top budget item today for almost as many companies (21%) as cloud.

Innovations in solid-state technology like NVMe are providing even greater acceleration to data center workloads even as SSD prices continue to drop. As silicon prices drop, total spending inherently skews toward more expensive technologies – the flash footprint will grow even if the relative spend doesn’t keep pace…(read the complete as-published article there)

Nutanix hyper-converged Flows into multi-cloud and more

An IT industry analyst article published by Search Converged Infrastructure.


With the introduction of Flow, Beam and Era, Nutanix has made a clear statement that it’s aiming for a comprehensive Nutanix enterprise cloud OS.

Mike Matchett
Small World Big Data

What do you do after you’ve successfully hyper-converged multiple stacks of complex physical infrastructure to simplify IT and then built out a full enterprise production-quality hypervisor offering a cost-effective alternative to VMware? If you are Nutanix, you take on the next big challenge and help IT get into the cloud. Nutanix is no longer just a leading hyper-converged infrastructure vendor with a suite of hyper-converged products. It has now set its sights higher: helping IT hyper-converge horizontally across both hybrid and multi-cloud locations.

At the Nutanix .NEXT 2018 conference, Nutanix unveiled a raft of cloud-related offerings designed to simplify IT at the next level up and out from the physical data center. Stepping outside the Nutanix hyper-converged environment, these new products and services include Nutanix Flow for policy-based network segmentation and security, Nutanix Era for automated database cloning and migration operations, and Nutanix Beam for multi-cloud cost modeling and compliance.

Nutanix Flow
The first step toward successful enterprise multi-cloud IT operations requires tackling network concerns. Setting up production networks and then assuring data center network security are hard enough, but the complexity and risk multiply exponentially when you add in both hybrid cloud and multiple cloud segments.

Flow is Nutanix’s answer to complex networking, offering distributed microsegmentation, provisioning, security and an ecosystem of third-party network services in a cloudlike manner. Microsegmentation isn’t exactly a new idea…(read the complete as-published article there)
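To make the microsegmentation idea concrete, here is a minimal, generic sketch of a default-deny, tag-based policy check. It is purely illustrative and assumes hypothetical tags, ports and rules; it is not Nutanix Flow’s actual API or policy model.

```typescript
// Illustrative microsegmentation check (not the Nutanix Flow API).
// Workloads carry tags; traffic is denied by default unless an explicit
// rule allows that source tag -> destination tag on that port.
interface Workload { name: string; tags: string[]; }
interface Rule { fromTag: string; toTag: string; port: number; }

// Hypothetical three-tier policy: web may reach app, app may reach db.
const rules: Rule[] = [
  { fromTag: "web", toTag: "app", port: 8080 },
  { fromTag: "app", toTag: "db", port: 5432 },
];

function isAllowed(src: Workload, dst: Workload, port: number): boolean {
  return rules.some(r =>
    src.tags.includes(r.fromTag) &&
    dst.tags.includes(r.toTag) &&
    r.port === port
  );
}

const web: Workload = { name: "web-01", tags: ["web"] };
const db: Workload = { name: "db-01", tags: ["db"] };
console.log(isAllowed(web, db, 5432)); // false: web may not talk to the database directly
```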

Emerging PaaS model puts cloud app dev into the fast lane

An IT industry analyst article published by SearchCloudComputing.


As they grapple with application backlogs and a shortage of seasoned, business-savvy developers, enterprises will increasingly look to drag-and-drop programming options.

Mike Matchett
Small World Big Data

Any complex organization wants to move faster, increase efficiency and lower its costs. That’s never been easy. There are too many moving parts — spread across layers of heterogeneous, hybrid IT — and not enough of the right expertise to accomplish everything.

I’ve never heard anyone say they ran out of applications to build before they ran out of good application developers to build them.

It’s no wonder that the get-to-the-cloud message, with its push-button, pay-someone-else-to-manage-it vision, has finally penetrated almost every organization. With the cloud-first mantra these days, the CIO might as well be thought of as the cloud information officer. However, in today’s highly internetworked, hybrid world, IaaS is no longer the big cloud opportunity.

Where IT can help the business gain real competitive advantage is now up the stack with some form of PaaS model, such as high-productivity application PaaS. To be competitive, companies want to build and deploy new applications quickly. PaaS promises to enable developers to build better apps and deploy them faster without IT infrastructure friction, thereby unleashing pent-up productivity.

Switching to PaaS, however, can be hard, much like the move to Agile development methods. Using PaaS assumes you have a bevy of highly experienced and web-savvy developers willing to work on relatively plebeian business processes — and PaaS alone won’t solve all your problems.

Backlogged application development
Great business-savvy application developers are rare. In fact, I’ve never heard anyone say they ran out of applications to build before they ran out of good application developers to build them. And it’s not just developers. An organization’s application backlog problem could get worse for a number of reasons:

  • App dev bottleneck. How many people really grasp top-notch, web-scale coding practices and know the business? Among them, how many also know about scalable databases and machine learning algorithms, and have the patience to provide internal customer support?
  • Data swampiness. Some of today’s most valuable data is big, bulky, barely structured, increasingly real-time and growing. Put it all in a data lake and maybe you can make some use of it, but only if you can sort out what’s relevant, what’s compliant and what’s true. Even harder, most new apps will want to naturally span and combine both structured and unstructured data sources.
  • Creativity cost. It takes a good idea and dedicated resources to make a great new business app work well. And it requires a culture that approves of investing in projects that might not always produce results. The biggest returns come from taking the biggest risks, which usually means more money on the line.
  • Ticking time. Ask everyone within your organization for application ideas, and you’ll be sure to compile a huge backlog. Many of those applications are impractical for the simple reason that, by the time developers finish, the app’s window of competitive value will have disappeared. Who needs another outdated application? It’s hard enough to maintain the ones already in use.

PaaS adoption can be a very good thing, helping enable and accelerate development on a number of fronts. But for many of the above reasons, the PaaS model itself won’t help everyone take advantage of all the potential new application opportunities…(read the complete as-published article there)

Serverless technology obfuscates workflows, performance data

An IT industry analyst article published by SearchITOperations.


Serverless and microservices reshape the application stack into something that looks like a swath of stars in the sky. How do you find a slow, misconfigured component in this interconnected galaxy?

Mike Matchett
Small World Big Data

I’m hearing that IT infrastructure is dead. And who needs it anymore, really? The future is about moving up the stack to microservices and serverless technology, as we continue to abstract, embed and automate away all the complexities of explicit infrastructure layers, such as storage arrays and physical servers.

On-premises, Capex-style IT is shrinking, while rented and remotely managed hardware and cloud transformation set new standards for modern IT. All the cool kids use end-to-end orchestration, advanced machine learning, real-time management data streams, microservices architecture and insanely scalable container environments. And now we even have serverless computing, sometimes called function as a service (FaaS).

But can we have computing without the server? And where did the server go?

Serving more with serverless technology
There is a certain hazard in my life that comes from telling non-IT people that, as an IT industry analyst, I explore and explain technology. I’m asked all the time, even by my mom, questions like, “I suppose you can explain what the cloud is?”

I tend to bravely charge in, and, after a lot of at-bats with this question, I’ve got the first 25 seconds down: “It’s like running all your favorite applications and storing all your data on somebody else’s servers that run somewhere else — you just rent it while you use it.” Then I lose them with whatever I say next, usually something about the internet and virtualization.

The same is mostly true with serverless computing. We are just moving one more level up the IT stack. Of course, there is always a server down in the stack somewhere, but you don’t need to care about it anymore. With serverless technology in the stack, you pay for someone else to provide and operate the servers for you.

We submit our code (functions) to the service, which executes it for us according to whatever event triggers we set. As clients, we don’t have to deal with machine instances, storage, execution management, scalability or any other lower-level infrastructure concerns.

The event-driven part is a bit like how stored procedures acted in old databases, or the way modern webpages call in JavaScript functions, hooked to and fired off in response to various clicks and other web events. In fact, AWS Lambda, a popular serverless computing service, executes client JavaScript functions, likely running Node.js in the background in some vastly scalable way.
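To make the event-driven model concrete, here is a minimal sketch of a function-as-a-service handler in the AWS Lambda style, written in TypeScript for Node.js. The event shape and the work it does are hypothetical illustrations; the takeaway is that we hand the platform a function and a trigger, and everything below that line is someone else’s server.

```typescript
// Minimal event-triggered function in the AWS Lambda style (TypeScript on
// Node.js). The event fields below are a hypothetical storage notification;
// the platform invokes the handler only when the trigger fires, so there is
// no server instance for us to provision, scale or manage.
interface StorageEvent {
  bucket: string;     // where the triggering object lives
  key: string;        // which object changed
  sizeBytes: number;  // how big it is
}

export const handler = async (
  event: StorageEvent
): Promise<{ statusCode: number; body: string }> => {
  // Our only job is the business logic; machine instances, storage and
  // execution management are the provider's problem.
  console.log(`object ${event.key} (${event.sizeBytes} bytes) arrived in ${event.bucket}`);
  return { statusCode: 200, body: `processed ${event.key}` };
};
```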

Look ma, no server!
We need to tackle several issues to ready serverless technology for primetime enterprise use. The first is controlling complexity…(read the complete as-published article there)

Learn storage techniques for managing unstructured data use

An IT industry analyst article published by SearchStorage.


Rearchitect storage to maximize unstructured data use at the global scale for larger data sets coming from big data analytics and other applications.

Mike Matchett
Small World Big Data

Back in the good old days, we mostly dealt with two storage tiers. We had online, high-performance primary storage directly used by applications and colder secondary storage used to tier less-valuable data out of primary storage. It wasn’t that most data lost value on a hard expiration date, but primary storage was pricey enough to constrain capacity, and we needed to make room for newer, more immediately valuable data.

We spent a lot of time trying to intelligently summarize and aggregate aging data to keep some kind of historical information trail online. Still, masses of detailed data were sent off to bed, out of sight and relatively offline. That’s all changing as managing unstructured data becomes a bigger concern. New services provide storage for big data analysis of detailed unstructured and machine data, as well as to support web-speed DevOps agility, deliver storage self-service and control IT costs. Fundamentally, these services help storage pros provide and maintain more valuable online access to ever-larger data sets.

Products for managing unstructured data may include copy data management (CDM), global file systems, hybrid cloud architectures, global data protection and big data analytics. These features help keep much, if not all, data available and productive.

Handling the data explosion

The underlying theme of many new storage offerings is to extend enterprise-quality IT management and governance across multiple tiers of global storage.

We’re seeing a lot of high-variety, high-volume and unstructured data. That’s pretty much everything other than highly structured database records. The new data explosion includes growing files and file systems, machine-generated data streams, web-scale application exhaust, endless file versioning, finer-grained backups and rollback snapshots to meet lower tolerances for data integrity and business continuity, and vast image and media repositories.

The public cloud is one way to deal with this data explosion, but it’s not always the best answer by itself. Elastic cloud storage services are easy to use to deploy large amounts of storage capacity. However, unless you want to create a growing and increasingly expensive cloud data dump, advanced storage management is required for managing unstructured data as well. The underlying theme of many new storage offerings is to extend enterprise-quality IT management and governance across multiple tiers of global storage, including hybrid and public cloud configurations.

If you’re architecting a new approach to storage, especially unstructured data storage at a global enterprise scale, here are seven advanced storage capabilities to consider:

Automated storage tiering. Storage tiering isn’t a new concept, but today it works across disparate storage arrays and vendors, often virtualizing in-place storage first. Advanced storage tiering products subsume yesterday’s simpler cloud gateways. They learn workload-specific performance needs and implement key quality of service, security and business cost control policies.

Much of what used to make up individual products, such as storage virtualizers, global distributed file systems, bulk data replicators and migrators, and cloud gateways, is converging into single-console unifying storage services. Enmotus and Veritas offer these simple-to-use services. This type of storage tiering enables unified storage infrastructure and provides a core service for many different types of storage management products.
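As a rough illustration of what an automated tiering policy decides, here is a toy sketch in TypeScript. The tier names and thresholds are hypothetical assumptions, not any vendor’s actual rules; a real engine would learn them per workload and fold in quality of service, security and cost policies rather than hard-code them.

```typescript
// Toy automated-tiering decision. Tier names and thresholds are hypothetical.
type Tier = "flash" | "capacity-hdd" | "cloud";

interface FileStats {
  path: string;
  daysSinceLastAccess: number;
  readsPerDay: number;
}

function chooseTier(f: FileStats): Tier {
  if (f.readsPerDay > 100) return "flash";               // hot data stays on fast media
  if (f.daysSinceLastAccess < 30) return "capacity-hdd"; // warm data on cheaper local disk
  return "cloud";                                        // cold data goes to elastic capacity
}

console.log(chooseTier({ path: "/vm/app.vmdk", daysSinceLastAccess: 1, readsPerDay: 500 })); // "flash"
```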

Metadata at scale. There’s a growing focus on collecting and using storage metadata — data about stored data — when managing unstructured data. By properly aggregating and exploiting metadata at scale, storage vendors can better virtualize storage, optimize services, enforce governance policies and augment end-user analytical efforts.

Metadata concepts are most familiar in an object or file storage context. However, advanced block and virtual machine-level storage services are increasingly using metadata detail to help with tiering for performance. We also see metadata in data protection features. Reduxio’s infinite snapshots and immediate recovery based on timestamping changed blocks take advantage of metadata, as do change data capture techniques and N-way replication. When looking at heavily metadata-driven storage, it’s important to examine metadata protection schemes and potential bottlenecks. Interestingly, metadata-heavy approaches can improve storage performance because they usually allow for high metadata performance and scalability out of band from data delivery.
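As a rough illustration of that out-of-band idea, here is a small sketch of a metadata catalog kept separate from the data path. The field names and queries are hypothetical; the point is that governance and analytics questions can be answered from the lightweight catalog without rereading the bulk data.

```typescript
// Sketch of an out-of-band metadata catalog: the catalog is tiny relative
// to the data it describes, so it can be queried and scaled separately
// from data delivery. Field names are hypothetical.
import { createHash } from "crypto";

interface ObjectMeta {
  key: string;
  sizeBytes: number;
  modified: Date;
  owner: string;
  contentHash: string; // supports change detection and dedupe without rereading data
}

const catalog = new Map<string, ObjectMeta>();

function record(key: string, data: Buffer, owner: string): void {
  catalog.set(key, {
    key,
    sizeBytes: data.length,
    modified: new Date(),
    owner,
    contentHash: createHash("sha256").update(data).digest("hex"),
  });
}

// Example governance query: objects untouched for more than a year.
function staleObjects(olderThanDays: number): ObjectMeta[] {
  const cutoff = Date.now() - olderThanDays * 24 * 60 * 60 * 1000;
  return [...catalog.values()].filter(m => m.modified.getTime() < cutoff);
}

record("reports/q1.pdf", Buffer.from("example contents"), "finance");
console.log(staleObjects(365).length); // 0: everything here was just written
```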

Storage analytics. Metadata and other introspective analytics about storage use can be gathered across enterprise storage and applied both offline and, increasingly, in dynamic optimizations. Call-home management is one example of how these analytics are used to better manage storage…(read the complete as-published article there)