Server Powered Storage: Intelligent Storage Arrays Gain Server Superpowers

An IT industry analyst article published by Infostor.


At Taneja Group we are seeing a major trend within IT to leverage servers and server-side resources to the maximum extent possible. Servers themselves have become commodities, while dense memory, server-side flash and even compute power keep getting more capable and more cost-friendly. Many datacenters already have a glut of CPU capacity that will only grow with newer generations of faster, higher core-count chips, denser packaging and decreasing power requirements. Disparate solutions from in-memory databases (e.g. SAP HANA) to VMware’s NSX are taking advantage of this rich excess by separating out functionality that used to reside in external devices (e.g. SANs and switches) and moving it up onto the server.

Within storage we see two hot trends – hyperconvergence and software-defined storage – getting most of the attention lately. But when we peel back the hype, we find that both are really enabled by this vastly increasing server power: server-side CPU, memory and flash have grown dense, cheap and powerful enough to host sophisticated storage processing directly. Where traditional arrays built on fully centralized, fully shared hardware might struggle with advanced storage functions at scale, server-side storage tends to scale functionality naturally with co-hosted application workloads. The move toward “server-siding” everything is talked about so much that the demise of traditional physical array architectures can seem inevitable.

…(read the complete as-published article there)

Accelerate the Edge: The New Distributed Enterprise

An IT industry analyst article published by Infostor.


by Mike Matchett

Here at Taneja Group, we think enterprises ought to be agile and flexible, quickly leveraging their forward-deployed remote and branch offices as a competitive weapon to gain market opportunity with the presence that only a physical office brings. Unfortunately, we’ve seen that as a business expands its footprint into new regions, it can also suffer major growing pains from slow, unwieldy deployments and an increasingly costly IT burden.

The traditional approach to remote and branch office IT relies on locally deployed infrastructure and admin skills. Unfortunately, this often makes these “edge of network” offices cost-inefficient, inflexible and painful to support, leaving them at high risk of downtime and data loss, if not complete operational failure. In many ways, field-deployed servers and storage act as big anchors holding back potential business velocity.

A big part of this problem isn’t just managing physical resources – it’s also about controlling data. For governance, integrity and business continuity reasons, getting a complete handle on data that lives primarily in remote offices far from a data center presents IT with a significant challenge. Data tends to sprawl, and distributing suitable IT controls and management along with it quickly gets expensive.

What the new agile distributed enterprise needs are solutions that consolidate data under centralized IT control while supporting – and even accelerating – productivity at the business “edges”.

At the CIO level, the biggest challenges stem from having critical corporate data scattered across multiple remote locations. Maintaining current data at branch offices is a key business enabler helping drive local business decisions, while corporate analysis of centralized data can provide strategic business intelligence. This data spread also raises obvious security and capacity cost concerns, but perhaps the biggest worry is providing for appropriate data protection and disaster recovery.

There are many solutions in the market aimed at backing up remote systems, but ask anyone who has had to restore a remote office with them and you will likely hear that it’s a long, suspenseful process, far from guaranteed. And the point of remote offices is that they aren’t located in secure raised-floor facilities, but are found in places where they can readily suffer power and connectivity problems, not to mention common environmental and location hazards (e.g. broken pipes, spilled coffee, or untrained staff “help”).

This challenge is only growing for businesses where the main action is in their branch offices. There is constant pressure to further distribute IT processing for local “point of the sword” productivity, agility and innovation. Unfortunately, physical IT resources and staff are hard to parcel out effectively or efficiently – there can be a serious mismatch between distributed IT capabilities and what the business needs, what IT folks might call branch office “impedance”.

We see four evolving practices that can not only alleviate this misalignment at scale, but can actually magnify IT capabilities at the remote branch…

…(read the complete as-published article there)

Extreme Enterprise Applications Drive Parallel File System Adoption

An IT industry analyst article published by Infostor.

By Mike Matchett, Sr. Analyst and Consultant
With the advent of big data and cloud-scale delivery, companies are racing to deploy cutting-edge services built on “extreme” applications – massive voice and image processing, or complex financial analysis modeling – that can push storage systems to their limits. High-visibility examples include large-scale image pattern recognition applications and financial risk management based on high-speed decision-making.

These ground-breaking solutions involve very different activities but share similar data storage challenges, and they create compelling new lines of business with significant revenue potential.

Every day we see more and more mainstream enterprises exploring similar “extreme service” opportunities. But when enterprise IT data centers take stock of what is required to host and deliver these new services, it quickly becomes apparent that traditional clustered and even scale-out file systems—the kind that most enterprise data centers (or cloud providers) have racks and racks of—simply can’t handle the performance requirements.

There are already great enterprise storage solutions for applications that need raw throughput, high capacity, parallel access, low latency or high availability—maybe even two or three of those at a time. But when an “extreme” application needs all of them at once, only supercomputing-type storage in the form of parallel file systems provides a functional solution.
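To make “parallel access” concrete, here is a minimal sketch (in Python, purely for illustration) of the striping idea underneath parallel file systems: a single logical file is cut into fixed-size stripes placed round-robin across multiple storage targets, and a client fetches those stripes concurrently instead of queuing on one controller. The stripe size, target paths and helper functions here are hypothetical, not any particular product’s on-disk layout.

```python
# Illustrative sketch of round-robin striping, the core idea behind
# parallel file systems. The target paths below are hypothetical
# stand-ins for independent storage servers.
from concurrent.futures import ThreadPoolExecutor

STRIPE_SIZE = 4 * 1024 * 1024  # 4 MiB per stripe (illustrative choice)
TARGETS = ["/mnt/ost0/file.part", "/mnt/ost1/file.part", "/mnt/ost2/file.part"]

def read_stripe(path: str, offset: int) -> bytes:
    """Read one stripe from one storage target."""
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(STRIPE_SIZE)

def parallel_read(stripe_count: int) -> bytes:
    """Reassemble the logical file by fetching stripes from all targets at once."""
    with ThreadPoolExecutor(max_workers=len(TARGETS)) as pool:
        futures = [
            pool.submit(
                read_stripe,
                TARGETS[i % len(TARGETS)],           # round-robin placement
                (i // len(TARGETS)) * STRIPE_SIZE,   # offset within that target
            )
            for i in range(stripe_count)
        ]
        return b"".join(f.result() for f in futures)  # stitch stripes in order
```

Because each stripe read lands on a different target, aggregate throughput scales with the number of storage servers – which is exactly why this architecture can deliver high throughput, parallel access and low latency simultaneously where a single shared controller cannot.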

The problem is that most commercial enterprises simply can’t afford or risk basing a line of business on an expensive research project.

The good news is that some storage vendors have been industrializing former supercomputing storage technologies, hardening massively parallel file systems into commercially viable solutions. This opens the door for revolutionary services creation, enabling mainstream enterprise data centers to support the exploitation of new extreme applications.

…(read the complete as-published article there)

Are You Making Money With Your Object Storage?

An IT industry analyst article published by Infostor.

by Mike Matchett, Sr. Analyst and Consultant
Object storage has long been pigeon-holed as a necessary overhead expense for long-term archive storage—a data purgatory one step before tape or deletion. We have seen many IT shops view object storage as something exotic they have to implement to meet government regulations rather than as a competitive strategic asset that can help their businesses make money.

Normally, when companies invest in high-end IT assets like enterprise-class storage, they hope to recoup those investments in big ways. For example, they might accelerate the performance of market competitive applications or efficiently consolidate data centers. Maybe they are even starting to analyze big data to find better ways to run the business.

These kinds of “money-making” initiatives have been mainly associated with file and block types of storage—the primary storage commonly used to power databases, host office productivity applications, and build pools of shared resources for virtualization projects.

But that’s about to change.

If you’ve intentionally dismissed or just overlooked object storage, it is time to take a deeper look. Today’s object storage provides brilliant capabilities for enhancing productivity, creating global platforms and developing new revenue streams.
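As a taste of why that matters, consider how little code it takes to treat an object store as an application platform rather than an archive. The sketch below (Python with the boto3 library, which speaks the S3-style API most modern object stores expose) writes an object with business metadata attached and reads that metadata back from anywhere; the endpoint, bucket, credentials and keys are all hypothetical placeholders.

```python
# Hypothetical example of using an S3-compatible object store as a
# global, metadata-rich application platform (all names are placeholders).
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.com",  # hypothetical S3-compatible endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Store an object with business metadata riding along with the data itself.
with open("spot-01.mp4", "rb") as video:
    s3.put_object(
        Bucket="media-assets",                   # hypothetical bucket
        Key="campaigns/2016/spot-01.mp4",
        Body=video,
        Metadata={"campaign": "spring", "region": "emea", "approved": "true"},
    )

# Any application in any office can look up the object - and its metadata -
# by key over plain HTTP, with no file-server mount in sight.
resp = s3.head_object(Bucket="media-assets", Key="campaigns/2016/spot-01.mp4")
print(resp["Metadata"])
```

That combination – a flat global namespace, simple HTTP access and per-object metadata – is what lets object storage move from compliance archive to revenue-bearing platform.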

…(read the complete as-published article there)