Server Powered Storage: Intelligent Storage Arrays Gain Server Superpowers

An IT industry analyst article published by Infostor.

At Taneja Group we are seeing a major trend within IT to leverage server and server-side resources to the maximum extent possible. Servers themselves have become commodities, and dense memory, server-side flash and even raw compute keep getting more capable and more affordable. Many datacenters already have a glut of CPU that will only increase with newer generations of faster, larger-cored chips, denser packaging and decreasing power requirements. Disparate solutions from in-memory databases (e.g., SAP HANA) to VMware’s NSX are taking advantage of this rich excess by separating out functionality that used to reside in external devices (e.g., SANs and switches) and moving it up onto the server.

Within storage we see two hot trends – hyperconvergence and software-defined storage – getting most of the attention lately. But when we peel back the hype, we find that both are really enabled by this vastly increasing server power: CPU, memory and flash have become dense, cheap and powerful enough to host sophisticated storage processing directly on the server. Where traditional arrays built on fully centralized, fully shared hardware might struggle with advanced storage functions at scale, server-side storage tends to scale functionality naturally with co-hosted application workloads. The move towards “server-siding” everything is so talked about that it seems inevitable that traditional physical array architectures are doomed.

…(read the complete as-published article there)

Get the most from cloud-based storage services

An IT industry analyst article published by SearchStorage.

We have been hearing about the inevitable transition to the cloud for IT infrastructure since before the turn of the century. But, year after year, storage shops quickly become focused on only that year’s prioritized initiatives, which tend to be mostly about keeping the lights on and costs low. A true vision-led shift to cloud-based storage services requires explicit executive sponsorship from the business side of an organization. But unless you cynically count the creeping use of shadow IT as an actual strategic directive to do better as an internal service provider, what gets asked of you is likely — and unfortunately — limited to low-risk tactical deployments or incremental upgrades.

Not exactly the stuff of business transformations.

Cloud adoption at a level for maximum business impact requires big executive commitment. That amount of commitment is, quite frankly, not easy to generate.

…(read the complete as-published article there)

CI and disaggregated server tech can converge after all

An IT industry analyst article published by SearchDataCenter.

I’ve talked about the inevitability of infrastructure convergence, so it might seem like I’m doing a complete 180-degree turn by introducing the opposite trend: infrastructure disaggregation. Despite appearances, disaggregated server technology isn’t really the opposite of convergence. In fact, disaggregated and converged servers work together.

In this new trend, physical IT components come in larger and denser pools for maximum cost efficiency. At the same time, compute-intensive functionality, such as data protection, that was once tightly integrated with the hardware is pulled out and hosted separately to optimize performance and use cheaper components.

Consider today’s cloud architects building hyper-scale infrastructures; instead of buying monolithic building blocks, they choose to pool massive amounts of dense commodity resources.

…(read the complete as-published article there)

Evaluating hyper-converged architectures: Five key CIO considerations

An IT industry analyst article published by SearchCio.

Plain IT convergence offers IT organizations a major convenience — integrated and pre-assembled stacks of heterogeneous vendor infrastructure, including servers, storage and networking gear, that help accelerate new deployments and quickly support fast-growing applications.

But IT hyper-convergence goes further, integrating IT infrastructure into simple modular appliances. Where pre-converged racks of infrastructure can provide good value to enterprises that would otherwise buy and assemble component vendor equipment themselves, hyper-converged architectures present a larger opportunity: not only to simplify IT infrastructure and save on capital expenditures (CAPEX), but also to help transform IT staff from internally focused legacy data center operators into increasingly agile, business-facing service providers.

With hyper-converged architectures, IT organizations can shift focus towards helping accelerate and enable business operations and applications, because they don’t spend as much time on, for example, silo troubleshooting, stack integration and testing, and traditional data protection tasks. The decision to adopt hyper-converged architectures is therefore something that business folks will see and appreciate directly through increased IT agility, cloud-like IT services, realistic BC/DR, and a greatly improved IT cost basis.

…(read the complete as-published article there)

Assimilate converged IT infrastructure into the data center

An IT industry analyst article published by SearchDataCenter.

I feel like the Borg from Star Trek when I proclaim that “IT convergence is inevitable.”

Converged IT infrastructure, the tight vendor integration of multiple IT resources like servers and storage, is a good thing, a mark of forward progress, and resistance to it is futile. Convergence is a great way to simplify and automate the complexities between two (or more) maturing domains and drive cost efficiencies, reliability improvements and agility. As the operations and management issues for any set of resources become well understood, new solutions will naturally evolve that internally converge them into a more unified, integrated single resource. Converged solutions are faster to deploy, simpler to manage, and easier for vendors to support.

Some resistance to convergence does happen within IT organizations. Siloed staff might suffer — convergence threatens domain subject matter experts by embedding their fiefdoms inside larger realms. But that’s not the first time that has happened, and there is always room for experts to dive deep under the covers and work through levels of complexity when things inevitably go wrong. That makes for more impactful and satisfying jobs. And let’s be honest — converged IT is far less threatening than the public cloud.

…(read the complete as-published article there)

Scale-out architecture and new data protection capabilities in 2016

An IT industry analyst article published by SearchDataCenter.

January was a time to make obvious predictions and short-lived resolutions. Now is the time for intelligent analysis of the shark-infested waters of high tech. The new year is an auspicious time for new startups to come out of the shadows. But what is just shiny and new, and what will really impact data centers?

From application-focused resource management to scale-out architecture, here are a few emerging trends that will surely impact the data center.

…(read the complete as-published article there)

Will container virtualization be the biggest data center trend of 2016?

An IT industry analyst article published by SearchServerVirtualization.

It’s hard to predict what the biggest thing to hit the data center will be in 2016. Big data? Hyper-convergence? Hybrid cloud? I’ve decided that this is the year that containers will arrive in a big way — much earlier and faster than many expect, catching unprepared IT shops by surprise.

Unlike other technologies like big data that require vision and forward investment, containers are a natural next step for application packaging, deployment and hosting that don’t require massive shifts in mindset or vision. It’s just quicker and easier to develop and deploy an application in a container than it is to build a virtual appliance. Containerized architectures also have the compelling operational and financial benefits of cheaper or free licensing, more efficient use of physical resources, better scalability and ultimately service reliability. Looking ahead, container virtualization will help organizations take better advantage of hybrid or cross-cloud environments.
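To make that packaging-speed claim concrete, here is a hypothetical sketch: containerizing a small Python web app can be a handful of declarative lines (the app file and dependency list are assumed to exist), versus installing and hardening an entire OS image for a virtual appliance.

```dockerfile
# Hypothetical example: package a small Python web app as a container.
# app.py and requirements.txt are assumed application files.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .
CMD ["python", "app.py"]
```

From there, building and running is two commands (`docker build -t myapp .`, then `docker run myapp`), compared with provisioning a full VM template with its own OS, patching and runtime stack.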

Server virtualization was also a great idea when it first came out, with significant advantages over physical hosting, but it still took many years to mature (remember how long it was before anyone hosted an important database in a VM?). The same has been true for private and hybrid clouds, new storage technologies and even big data. But even though container virtualization is just out of the gate, it has gotten farther down the maturity road by leveraging the roadmap laid out by server virtualization. And you can get a jumpstart by using trusted hypervisors like VMware vSphere Integrated Containers to shepherd in containers while the native container world polishes up its rougher edges. Because containers are sleeker and slimmer than VMs (they are essentially just processes), they will slip into the data center even if IT isn’t looking or paying attention (and even if IT doesn’t want them yet).

…(read the complete as-published article there)

What’s the future of data storage in 2016?

An IT industry analyst article published by SearchStorage.

It’s hard to make stunning predictions on the future of data storage that are certain to come true, but it’s that time of year and I’m going to step out on that limb again. I’ll review my predictions from last year as I go — after all, how much can you trust me if I’m not on target year after year? (Yikes!)

Last year, I said the total data storage market would stay flat despite big growth in unstructured data. I’d have to say that seems to be true, if not actually dropping. Despite lots of new entrants in the market, the average vendor margin in storage is narrowing with software-defined variants showing up everywhere, open-source alternatives nibbling at the edges, commodity-based appliances becoming the rule, and ever-cheaper “usable” flash products improving performance and density at the same time.

…(read the complete as-published article there)

Hyperconvergence for ROBOs and the Datacenter — Virtualization Review

An IT industry analyst article published by Virtualization Review.

Convergence is a happy word to a lot of busy IT folks working long hours still standing up large complex stacks of infrastructure (despite having virtualized their legacy server sprawl), much less trying to deploy and manage mini-data centers out in tens, hundreds, or even thousands of remote or branch offices (ROBOs).

Most virtualized IT shops need to run lean and mean, and many find it challenging to integrate and operate all the real equipment that goes into the main datacenter: hypervisors, compute clusters, SANs, storage arrays, IP networks, load balancers, WAN optimizers, cloud gateways, backup devices and more. From a logical perspective, when you multiply the number of heterogeneous components by the number of remote locations, the “scale” of IT to manage climbs very fast. Factor in the possible interactions among those components at every site, and the management challenge grows non-linearly: the count of pairwise interactions alone grows quadratically with the component count.
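The arithmetic behind that "scale climbs very fast" claim can be sketched in a few lines; the component and site counts here are purely hypothetical.

```python
# Back-of-the-envelope arithmetic for ROBO management scale.
# All counts below are illustrative, not taken from any real shop.

def managed_elements(components_per_site, sites):
    """Raw count of things to manage: linear in both factors."""
    return components_per_site * sites

def pairwise_interactions(components_per_site, sites):
    """Possible component-to-component interactions per site, summed
    across sites: quadratic in the per-site component count."""
    n = components_per_site
    return sites * n * (n - 1) // 2

# A single datacenter stack of 10 heterogeneous components already has
# 45 possible pairwise interactions; replicate it across 100 ROBOs and
# the totals climb fast.
print(managed_elements(10, 1), pairwise_interactions(10, 1))      # 10 45
print(managed_elements(10, 101), pairwise_interactions(10, 101))  # 1010 4545
```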

…(read the complete as-published article there)

Can your cluster management tools pass muster?

An IT industry analyst article published by SearchDataCenter.

A big challenge for IT is managing big clusters effectively, especially with bigger data, larger mashed-up workflows, and the need for more agile operations.

Cluster designs are everywhere these days. Popular examples include software-defined storage, virtual infrastructure, hyper-convergence, public and private clouds, and, of course, big data. Clustering is the scale-out way to architect infrastructure to use commodity resources like servers and JBODs. Scale-out designs can gain capacity and performance incrementally, reaching huge sizes cost-effectively compared to most scale-up infrastructure.

Big clusters are appealing because they support large-scale convergence and consolidation initiatives that help optimize overall CapEx. So why haven’t we always used cluster designs for everyday IT infrastructure? Large cluster management and operations are quite complex, especially when you start mixing workloads and tenants. If you build a big cluster, you’ll want to make sure it gets used effectively, and that usually means hosting multiple workloads. As soon as that happens, IT has trouble figuring out how to prioritize or share resources fairly. This has never been easy — the total OpEx in implementing, provisioning, and optimally managing shared clustered architectures is often higher than just deploying fully contained and individually assigned scale-up products.
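Even the "share resources fairly" part is nontrivial on its own. A minimal sketch of weighted max-min fair sharing (the classic fair-queuing idea applied to cluster capacity) looks like the following; the tenant names, demands and weights are made up for illustration.

```python
# Sketch of weighted max-min fair sharing across cluster tenants.
# Tenant names, demands and weights below are hypothetical.

def weighted_max_min(capacity, demands, weights):
    """Allocate capacity so no tenant gets more than it demands, with
    leftover capacity redistributed in proportion to tenant weights."""
    alloc = {t: 0.0 for t in demands}
    active = set(demands)
    remaining = capacity
    while active and remaining > 1e-9:
        total_w = sum(weights[t] for t in active)
        used = 0.0
        for t in list(active):
            share = remaining * weights[t] / total_w
            give = min(share, demands[t] - alloc[t])
            alloc[t] += give
            used += give
            if demands[t] - alloc[t] <= 1e-9:
                active.discard(t)  # tenant satisfied; stop allocating to it
        remaining -= used
        if used <= 1e-9:
            break
    return alloc

# 100 "units" of cluster capacity, one small and two greedy tenants:
# the small tenant is fully satisfied, the remainder splits evenly.
print(weighted_max_min(100, {"small": 20, "big1": 100, "big2": 100},
                       {"small": 1, "big1": 1, "big2": 1}))
```

Real cluster schedulers layer preemption, reservations and multi-resource fairness on top of this, which is exactly where the OpEx the article describes comes from.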

…(read the complete as-published article there)