Do ROI Calculators Produce Real ROI?

(Excerpt from original post on the Taneja Group News Blog)

As both a vendor product marketer and now an analyst, I’ve often been asked to help produce an “official” ROI (or the full TCO) calculator for some product. I used to love pulling out Excel and chaining together pages of cascading formulas. But I’m getting older and wiser. Now I see that ROI calculators are by and large just big rat holes. In fact, I was asked again this week and, instead of quickly replying “yes, if you have enough money” and spinning out some rehashed spreadsheet (like some other IT analyst firms), I spent some time thinking about why the time and money spent producing detailed ROI calculators is usually a wasted investment, if not a wasted opportunity to do better.
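
For the record, the formulas themselves are never the hard part. Here’s a minimal sketch of the arithmetic at the heart of any such calculator, with completely made-up numbers; the rat hole is defending the input assumptions, not the math:

```python
# Minimal sketch of the arithmetic at the core of a typical ROI calculator.
# All figures are hypothetical placeholders, not from any real product.

annual_benefit = 250_000   # estimated yearly savings/gains ($)
upfront_cost   = 400_000   # acquisition + deployment ($)
annual_cost    = 50_000    # ongoing support/ops ($)
years          = 3

total_benefit = annual_benefit * years
total_cost    = upfront_cost + annual_cost * years

roi = (total_benefit - total_cost) / total_cost                 # simple ROI ratio
payback_years = upfront_cost / (annual_benefit - annual_cost)   # payback period

print(f"ROI over {years} years: {roi:.1%}")       # 36.4%
print(f"Payback period: {payback_years:.1f} years")  # 2.0 years
```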

…(read the full post)

Hyperconverged Storage Evolves – Or is it Pivoting When it Comes to Pivot3?

(Excerpt from original post on the Taneja Group News Blog)

Pivot3 recently acquired NexGen (March 2016), and many folks have been wondering what they are up to. Pivot3 has made a name in the surveillance/video vertical with bulletproof hyperconvergence, built on a specialty of highly reliable data protection (native erasure coding) and large scalability (no additional east/west traffic as it scales). So what does the NexGen IP bring? For starters, multi-tier flash performance and enterprise storage features (like snapshots).
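
For those unfamiliar with erasure coding, here’s a deliberately minimal sketch of the core idea using simple XOR parity. Pivot3’s native erasure coding uses stronger codes that tolerate multiple simultaneous failures, but the rebuild-from-survivors principle is the same:

```python
# Minimal sketch of erasure coding's core idea using single XOR parity.
# Production codes (e.g. Reed-Solomon variants) survive multiple failures;
# this toy version survives the loss of any one chunk.

from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(chunks: list[bytes]) -> bytes:
    """Compute one parity chunk across k equal-sized data chunks."""
    return reduce(xor_bytes, chunks)

def recover(surviving: list[bytes], parity: bytes) -> bytes:
    """Rebuild the single missing data chunk from survivors plus parity."""
    return reduce(xor_bytes, surviving, parity)

data = [b"AAAA", b"BBBB", b"CCCC"]   # striped across three nodes
parity = encode(data)                # stored on a fourth node

# Simulate losing the middle chunk, then rebuild it:
rebuilt = recover([data[0], data[2]], parity)
assert rebuilt == b"BBBB"
```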

…(read the full post)

We Can No Longer Contain Containers!

(Excerpt from original post on the Taneja Group News Blog)

Despite naysayers (you know who you are!), I’ve been saying this is the year for containers, and halfway into 2016 it’s looking like I’m right. The container community is maturing enterprise-grade functionality extremely rapidly, perhaps modeled on its virtualization predecessors. ContainerX is one of those interesting solutions that fills in a lot of gaps for enterprises looking to stand up containers in production. In fact, they claim to be the “vSphere for Containers”.
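
For readers who haven’t touched containers programmatically, here’s a taste of basic container lifecycle management using the Docker SDK for Python. ContainerX layers its enterprise management on top of engines like this; its own APIs will differ:

```python
# Basic container lifecycle via the Docker SDK for Python (pip install docker).
# Illustrative only -- a management platform would do this at fleet scale.

import docker

client = docker.from_env()  # talks to the local Docker daemon

# Launch a container in the background, as a scheduler/platform would:
container = client.containers.run(
    "nginx:latest",
    detach=True,
    name="demo-web",
    ports={"80/tcp": 8080},
)

print(container.status)               # e.g. "created" / "running"
for c in client.containers.list():    # enumerate running containers
    print(c.name, c.image.tags)

container.stop()
container.remove()
```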

…(read the full post)

Unifying Big Data Through Virtualized Data Services – Iguaz.io Rewrites the Storage Stack

(Excerpt from original post on the Taneja Group News Blog)

One of the more interesting new companies to arrive on the big data storage scene is iguaz.io. The iguaz.io team has designed a whole new, purpose-built storage stack that can store and serve the same master data in multiple formats, at high performance and parallel streaming speeds, to multiple different kinds of big data applications. This promises to obliterate today’s spaghetti data flows, with their many moving parts, numerous transformation and copy steps, and the Frankenstein architectures currently required to stitch together increasingly complex big data workflows. We’ve seen that enterprises commonly need to build environments spanning streaming ingest and real-time processing, through interactive query, into larger data lake and historical archive analysis, and they end up making multiple data copies in multiple storage formats across multiple storage services.
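
To make the unified-access idea concrete, here’s a toy sketch (emphatically not iguaz.io’s actual API) of one master record store serving both key-value lookups and an ordered stream over the very same data, with no copy or transformation steps:

```python
# Hypothetical sketch of multi-format access to a single master copy.
# Real systems do this against durable, parallel storage; this is in-memory.

from collections import OrderedDict

class UnifiedStore:
    def __init__(self):
        self._records = OrderedDict()  # single master copy, insert-ordered

    def put(self, key, value):
        self._records[key] = value

    # Key-value "view" for interactive/transactional apps:
    def get(self, key):
        return self._records[key]

    # Streaming "view" for ingest/real-time consumers -- same data,
    # different access pattern, no transformation or copy step:
    def stream(self):
        yield from self._records.items()

store = UnifiedStore()
store.put("sensor-42", {"temp": 21.5})
print(store.get("sensor-42"))      # key-value access
print(list(store.stream()))        # streaming access over the same data
```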

…(read the full post)

Server Side Is Where It’s At – Leveraging Server Resources For Performance

(Excerpt from original post on the Taneja Group News Blog)

If you want performance, especially in IO, you have to bring it to where the compute is happening. We recently saw Datrium launch a smart “split” array solution in which the speedy (and compute-intensive) bits of the logical array are hosted server-side, with persisted data served from a shared, simplified controller and (almost-JBOD) disk shelf. Now Infinio has announced version 3.0 of their caching solution this week, adding tiered cache support for server-side SSDs and other flash to their historically memory-focused IO acceleration.
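
To illustrate the tiering concept (our simplified sketch, not Infinio’s implementation), consider a two-tier cache where a small RAM tier demotes cold blocks to a larger flash tier instead of dropping them:

```python
# Simplified two-tier (RAM + server-side flash) cache sketch.
# Names, sizes, and policy are illustrative, not any vendor's design.

from collections import OrderedDict

class TieredCache:
    def __init__(self, ram_slots=2, flash_slots=4):
        self.ram = OrderedDict()     # tier 1: server DRAM (fastest)
        self.flash = OrderedDict()   # tier 2: server-side SSD (larger)
        self.ram_slots, self.flash_slots = ram_slots, flash_slots

    def get(self, key):
        if key in self.ram:                 # RAM hit
            self.ram.move_to_end(key)
            return self.ram[key]
        if key in self.flash:               # flash hit: promote to RAM
            return self.put(key, self.flash.pop(key))
        return None                         # miss: caller reads the array

    def put(self, key, value):
        self.ram[key] = value
        self.ram.move_to_end(key)
        if len(self.ram) > self.ram_slots:  # demote coldest RAM block
            old_key, old_val = self.ram.popitem(last=False)
            self.flash[old_key] = old_val
            if len(self.flash) > self.flash_slots:
                self.flash.popitem(last=False)  # finally evict for real
        return value
```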

…(read the full post)

Agile Big Data Clusters: DriveScale Enables Bare Metal Cloud

(Excerpt from original post on the Taneja Group News Blog)

We’ve been writing recently about the hot, potentially inevitable trend towards a dense IT infrastructure in which components like CPU cores and disks are not only commoditized, but deployed in massive stacks or pools (with fast matrixing switches between them). A layered provisioning solution can then dynamically compose any desired “physical” server or cluster out of those components. Conceptually this becomes the foundation for a bare-metal cloud. Today DriveScale announces its agile architecture built on this approach, aimed first at solving big data multi-cluster operational challenges.
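
Conceptually, that provisioning layer works something like this toy sketch (ours, not DriveScale’s API): carve logical clusters out of free pools of commodity nodes and drives, then return the parts for re-composition:

```python
# Hypothetical sketch of composable infrastructure: bind pooled CPU nodes
# and pooled drives into a logical cluster, then release them back.

from dataclasses import dataclass, field

@dataclass
class ResourcePool:
    cpu_nodes: list = field(default_factory=lambda: [f"node{i}" for i in range(8)])
    drives: list = field(default_factory=lambda: [f"jbod-disk{i}" for i in range(48)])

    def compose(self, nodes_needed, drives_per_node):
        """Carve a logical cluster out of the free pools."""
        nodes = [self.cpu_nodes.pop() for _ in range(nodes_needed)]
        return {n: [self.drives.pop() for _ in range(drives_per_node)]
                for n in nodes}

    def release(self, cluster):
        """Return components to the pools for the next workload."""
        for node, disks in cluster.items():
            self.cpu_nodes.append(node)
            self.drives.extend(disks)

pool = ResourcePool()
hadoop = pool.compose(nodes_needed=4, drives_per_node=6)  # "physical" cluster on demand
pool.release(hadoop)                                      # re-compose later as needed
```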

…(read the full post)

Data in Space: SANs Now Include Satellite Array Networks

(Excerpt from original post on the Taneja Group News Blog)

All you storage geeks and science fiction fans, rejoice! If Cloud Constellation gets its way, you’ll soon be able to directly hybridize your dreary earthbound data center storage with actual above-the-clouds storage. Yep, protect your sensitive data by replicating it to true satellite storage. Only James Bond with a spare Shuttle would be able to hack those things. Just how far-fetched is this idea?
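
One quick physics check we can run ourselves: even at light speed, orbital distance adds real latency. A worked example for geostationary orbit as a reference point (a low-earth-orbit constellation would be far lower, single-digit to tens of milliseconds):

```python
# Back-of-envelope latency check: light-speed round trip to a
# geostationary satellite, before any protocol or processing overhead.

GEO_ALTITUDE_KM = 35_786    # geostationary orbit altitude
C_KM_PER_S = 299_792        # speed of light in vacuum

one_way_s = GEO_ALTITUDE_KM / C_KM_PER_S
round_trip_ms = 2 * one_way_s * 1000

print(f"Round-trip light time to GEO: {round_trip_ms:.0f} ms")
# ~239 ms minimum -- fine for replication targets and cold archives,
# hopeless for primary IO.
```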

…(read the full post)

Scaling All Flash to New Heights – DDN Flashscale All Flash Array Brings HPC to the Data Center

(Excerpt from original post on the Taneja Group News Blog)

It’s time to start thinking about massive amounts of flash in the enterprise data center. I mean PBs of flash for the biggest, baddest, fastest data-driven applications out there. This amount of flash requires an HPC-capable storage solution brought down and packaged for enterprise IT management, which is where DataDirect Networks (aka DDN) is stepping up. Perhaps too quietly, they have been hard at work pivoting their high-end HPC portfolio into the enterprise space. Today they are rolling out the massively scalable, flash-centric Flashscale 14KXi storage array, which will help them offer complete, single-vendor big data workflow solutions – from the fastest scratch storage, through the biggest-throughput parallel file systems, into the largest distributed object storage archives.

…(read the full post)

Hyperconverged Supercomputers For the Enterprise Data Center

(Excerpt from original post on the Taneja Group News Blog)

Last month NVIDIA, our favorite GPU vendor, dived into the converged appliance space. In fact, we might call their new NVIDIA DGX-1 a hyperconverged supercomputer in a 3U box. Designed to support the application of GPUs to deep learning (i.e. compute-intensive, deeply layered neural networks that need to train and run in operational timeframes over big data), this beast has 8 new Tesla P100 GPUs inside on an embedded NVLink mesh, pre-integrated with flash SSDs, decent memory, and an optimized container-hosting deep learning software stack. The best part? The price is surprisingly affordable, and it can replace the 250+ server cluster you might otherwise need for effective deep learning.
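
For a flavor of the workload the DGX-1 targets, here’s a minimal illustrative sketch (generic PyTorch, not NVIDIA’s own optimized stack) of fanning one training step out across all visible GPUs:

```python
# Minimal multi-GPU deep learning sketch: replicate a model across all
# visible GPUs (8 on a DGX-1) and split each batch among them.

import torch
import torch.nn as nn

model = nn.Sequential(            # stand-in for a deep network
    nn.Linear(1024, 4096), nn.ReLU(),
    nn.Linear(4096, 10),
)

if torch.cuda.is_available():
    # Data-parallel training: same weights on every GPU, batch sharded.
    model = nn.DataParallel(model).cuda()

loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(256, 1024)        # one training batch
y = torch.randint(0, 10, (256,))
if torch.cuda.is_available():
    x, y = x.cuda(), y.cuda()

opt.zero_grad()
loss = loss_fn(model(x), y)       # forward pass fans out across GPUs
loss.backward()                   # gradients gathered back and averaged
opt.step()
print(loss.item())
```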

…(read the full post)

Server Powered Storage: Intelligent Storage Arrays Gain Server Superpowers

An IT industry analyst article published by Infostor.


At Taneja Group we are seeing a major trend within IT to leverage server and server-side resources to the maximum extent possible. Servers themselves have become commodities, and dense memory, server-side flash, even compute power continue to become increasingly powerful and cost-friendly. Many datacenters already have a glut of CPU that will only increase with newer generations of faster, larger-cored chips, denser packaging and decreasing power requirements. Disparate solutions from in-memory databases (e.g. SAP HANA) to VMware’s NSX are taking advantage of this rich excess by separating out and moving functionality that used to reside in external devices (i.e. SANs and switches) up onto the server.

Within storage we see two hot trends – hyperconvergence and software-defined storage – getting most of the attention lately. But when we peel back the hype, we find that both are really enabled by this vastly increasing server power: server resources like CPU, memory and flash are getting denser, cheaper and more powerful, to the point where they can host sophisticated storage processing directly. Where traditional arrays built on fully centralized, fully shared hardware might struggle with advanced storage functions at scale, server-side storage tends to scale functionality naturally with co-hosted application workloads. The move towards “server-siding” everything is so talked about that it seems inevitable that traditional physical array architectures are doomed.
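
The simple arithmetic behind that scaling claim is worth a moment (our illustrative numbers, not a benchmark): a centralized controller is a fixed ceiling divided among hosts, while server-side processing is additive with every host you deploy:

```python
# Toy arithmetic behind the scaling argument (all numbers hypothetical):
# a fixed central controller vs. storage processing that grows with
# every server added.

array_controller_iops = 500_000   # fixed ceiling, shared by all hosts
per_server_iops = 60_000          # server-side flash/CPU contribution

for servers in (4, 8, 16, 32):
    server_side_total = servers * per_server_iops
    per_host_from_array = array_controller_iops / servers
    print(f"{servers:>2} servers: server-side aggregate {server_side_total:>9,} IOPS; "
          f"central array per host {per_host_from_array:>9,.0f} IOPS")
```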

…(read the complete as-published article there)