Scaling All Flash to New Heights – DDN Flashscale All Flash Array Brings HPC to the Data Center

(Excerpt from original post on the Taneja Group News Blog)

It’s time to start thinking about massive amounts of flash in the enterprise data center. I mean PBs of flash for the biggest, baddest, fastest data-driven applications out there. This amount of flash requires an HPC-capable storage solution brought down and packaged for enterprise IT management. That’s where DataDirect Networks (aka DDN) is stepping up. Perhaps too quietly, they have been hard at work pivoting their high-end HPC portfolio into the enterprise space. Today they are rolling out a massively scalable new flash-centric Flashscale 14KXi storage array that will help them offer complete single-vendor big data workflow solutions – from the fastest scratch through the biggest-throughput parallel file systems into the largest distributed object storage archives.

…(read the full post)

Extreme Enterprise Applications Drive Parallel File System Adoption

An IT industry analyst article published by Infostor.

By Mike Matchett, Sr. Analyst and Consultant

With the advent of big data and cloud-scale delivery, companies are racing to deploy cutting-edge services. “Extreme” applications, such as massive voice and image processing or complex financial analysis modeling, can push storage systems to their limits. High-visibility examples include large-scale image pattern recognition applications and financial risk management based on high-speed decision-making.

These ground-breaking solutions, built on very different activities but facing similar data storage challenges, create incredible new lines of business with significant revenue potential.

Every day we see more and more mainstream enterprises exploring similar “extreme service” opportunities. But when enterprise IT data centers take stock of what is required to host and deliver these new services, it quickly becomes apparent that traditional clustered and even scale-out file systems—the kind that most enterprise data centers (or cloud providers) have racks and racks of—simply can’t handle the performance requirements.

There are already great enterprise storage solutions for applications that need any one of raw throughput, high capacity, parallel access, low latency or high availability—maybe even two or three of those at a time. But when an “extreme” application needs all of those at once, only supercomputing-class storage in the form of a parallel file system provides a functional solution.
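To make that combination of requirements concrete, here is a minimal sketch (not from the article) of the access pattern such applications generate: many processes writing disjoint slices of one shared file simultaneously through MPI-IO, via mpi4py. The file path, slice size and process count are illustrative assumptions, and the pattern presumes the file lives on a parallel file system mount (e.g., Lustre or GPFS); a traditional filer would serialize these writes at a single server.

```python
# Minimal sketch: N processes write disjoint slices of one shared file
# concurrently via MPI-IO (mpi4py). Assumes an MPI launcher and a
# parallel file system mounted at the hypothetical path below.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank produces a 64 MiB slice of the overall dataset.
chunk = np.full(64 * 1024 * 1024 // 8, rank, dtype=np.float64)

# Hypothetical path on a parallel file system (e.g., Lustre, GPFS).
path = "/pfs/scratch/extreme_app/results.dat"

fh = MPI.File.Open(comm, path, MPI.MODE_CREATE | MPI.MODE_WRONLY)
# Collective write: every rank lands its slice at a disjoint offset,
# so all ranks stream to storage in parallel with no lock contention.
fh.Write_at_all(rank * chunk.nbytes, chunk)
fh.Close()
```

Launched with something like `mpiexec -n 64 python write_shared.py`, aggregate bandwidth on a parallel file system scales with the number of storage targets the file is striped across, which is exactly the throughput-plus-concurrency combination described above.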

The problem is that most commercial enterprises simply can’t afford or risk basing a line of business on an expensive research project.

The good news is that some storage vendors have been industrializing former supercomputing storage technologies, hardening massively parallel file systems into commercially viable solutions. This opens the door to revolutionary service creation, enabling mainstream enterprise data centers to exploit these new extreme applications.

…(read the complete as-published article there)