Massive IOPS with Drops in Price-Performance – Early 2015 SPC-2 Standings

(Excerpt from original post on the Taneja Group News Blog)

There is something quite interesting happening on the SPC-2 top 10 results page. Right now, HP (with XP7) and Kaminario, a hot all-flash solution, are leading the performance list. But the next three entries belong to Oracle’s ZFS Storage ZS3 and ZS4 lineup. And when you scroll down a bit, the biggest surprise is that the top 10 price/performance storage leader is now Oracle!

…(read the full post)

InfiniBand Updates Specs, Preparing for 10,000-Node Exascale Clusters

(Excerpt from original post on the Taneja Group News Blog)

We’ve long been fans of InfiniBand, watching as new generations of enterprise-class scale-out clusters and storage solutions learn from the HPC world how to achieve really high-speed interconnection. InfiniBand itself may never win the popular market race against Ethernet, but newer generations of Ethernet are looking more and more like InfiniBand. And parts of the IB world, namely RDMA and RoCE, have swept into datacenters almost unnoticed (e.g., look under the hood of SMB 3.0).

…(read the full post)

Are all software-defined storage vendors the same?

An IT industry analyst article published by SearchDataCenter.

What does software-defined storage mean? How is it implemented? And why does it matter for data center managers?


Software-defined storage is an overused and under-defined term, but the general idea is to implement storage services such that they can be dynamically reconfigured as needed, usually through the use of software-based virtual nodes.

Generally, software-defined storage offerings have extremely competitive price points and deploy on virtual machines and/or commodity hardware.
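To make the "virtual node" idea concrete, here is a minimal sketch in Python. The class and field names are entirely hypothetical, not any vendor's API; the point is simply that capacity and layout become software operations.

```python
# Minimal sketch of the software-defined "virtual node" pattern.
# All names here are hypothetical, not any vendor's actual API.
from dataclasses import dataclass, field

@dataclass
class VirtualNode:
    """A storage service instance running as software (VM or container)."""
    name: str
    capacity_gb: int
    used_gb: int = 0

@dataclass
class StoragePool:
    """A pool whose capacity and layout are reconfigured in software."""
    nodes: list = field(default_factory=list)

    def add_node(self, node: VirtualNode) -> None:
        # Growing the pool is a provisioning call, not a hardware change.
        self.nodes.append(node)

    def total_capacity_gb(self) -> int:
        return sum(n.capacity_gb for n in self.nodes)

pool = StoragePool()
pool.add_node(VirtualNode("vnode-1", capacity_gb=2048))
pool.add_node(VirtualNode("vnode-2", capacity_gb=2048))
print(pool.total_capacity_gb())  # 4096
```

Because the pool is just software state, reconfiguring it dynamically is exactly the kind of operation the term "software-defined" is meant to capture.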

Some software-defined storage vendors take aim only at the virtualization environment. Many offerings available today […] are cross-platform, global and more grid-like solutions that can be provisioned and grown across an organization as needed. The best of these freely scaling offerings provide built-in global replication, a single namespace and advanced analytics (for a storage system) such as content indexing, distributed big-data-style processing and sophisticated usage and auditing management.

Several storage trends — scale-out, object-based, cloud-ready and software-defined — deserve your attention…

…(read the complete as-published article there)

New approaches to scalable storage

An IT industry analyst article published by SearchDataCenter.

With all these scalable storage approaches, IT organizations must evaluate the options against their data storage and analytics needs, as well as future architectures.


Unrelenting data growth has spawned new scalable storage designs.

We’ve all read the storage reports about overwhelming data growth. It’s certainly a big and growing challenge that deserves attention, but I’ll skip the part where I scare you into thinking we’re about to be buried under a deluge of data. We tend to store about as much data as we can, no matter how much data there might be. There has always been more data than we could keep. That’s why even the earliest data center storage systems implemented quotas, archives and data summarization.

The new challenge today is effectively mining business value out of the huge amount of newly useful data, with even more coming fast in all areas of IT storage: block, file, object, and big data. If you want to stay competitive, you’ll likely have to tackle some data storage scaling projects soon. Newer approaches to large-scale storage can help.

Scaling storage out into space

The first thing to consider is the difference between scale-up and scale-out approaches. Traditional storage systems are based on the scale-up principle, in which you incrementally grow storage capacity by simply adding more disks under a relatively fixed number of storage controllers (or a small cluster of storage controllers, with one to four high-availability pairs being common). If you exceed the system’s capacity (or performance drops off), you add another system alongside it.

Scale-up storage approaches are still relevant, especially in flash-first and high-end hybrid platforms, where latency and IOPS performance are paramount. A large amount of dense flash can serve millions of IOPS from a small footprint. Still, larger-capacity scale-up deployments can create difficult challenges: rolling out multiple scale-up systems tends to fragment the storage space, creates a management burden and requires uneven CapEx investment.
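As a toy illustration of that fragmentation effect, consider the Python sketch below. The disk counts and controller ceiling are invented for the example; real limits vary widely by product.

```python
# Illustrative only: why growing past a scale-up ceiling fragments storage.
class ScaleUpArray:
    MAX_DISKS = 240  # controller pair tops out; number is invented

    def __init__(self, name: str):
        self.name = name
        self.disks = 0

    def add_disks(self, n: int) -> bool:
        if self.disks + n > self.MAX_DISKS:
            return False  # the controllers are the ceiling
        self.disks += n
        return True

arrays = [ScaleUpArray("array-1")]
for shelf in range(20):            # keep adding 24-disk shelves
    if not arrays[-1].add_disks(24):
        # Past the ceiling, you stand up another system alongside,
        # splitting the storage space into separately managed islands.
        arrays.append(ScaleUpArray(f"array-{len(arrays) + 1}"))
        arrays[-1].add_disks(24)
print([(a.name, a.disks) for a in arrays])  # [('array-1', 240), ('array-2', 240)]
```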

In response, many scalable storage designs have taken a scale-out approach, in which capacity and throughput grow incrementally by adding more storage nodes to a networked system cluster. Scale-up designs are often interpreted as having limited vertical growth, whereas scale-out designs imply relatively unconstrained horizontal growth. Each node can usually service client I/O requests, and, depending on how data is spread and replicated internally, each node may access any data in the cluster. Because a single cluster can grow to very large scale, system management remains unified (as does the namespace, in most cases). This gives scale-out designs a smoother CapEx growth path and a more linear overall performance curve.
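One common way scale-out designs spread data so that any node can locate it is consistent hashing. The sketch below is a generic illustration, not any particular product's placement algorithm: adding a node grows the cluster while remapping only a slice of the keys.

```python
# Toy consistent-hash ring for scale-out data placement.
# Real systems add virtual nodes and replicas; this is a bare sketch.
import hashlib
from bisect import bisect

def _hash(key: str) -> int:
    # Stable hash so every node computes identical placement.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes):
        self._ring = sorted((_hash(n), n) for n in nodes)

    def add_node(self, node: str) -> None:
        # Only keys falling in the new node's arc move to it,
        # so capacity grows without a full data reshuffle.
        self._ring = sorted(self._ring + [(_hash(node), node)])

    def node_for(self, key: str) -> str:
        positions = [h for h, _ in self._ring]
        i = bisect(positions, _hash(key)) % len(self._ring)
        return self._ring[i][1]

ring = HashRing(["node-a", "node-b", "node-c"])
print(ring.node_for("volume-42"))  # deterministic placement
ring.add_node("node-d")            # scale out; most keys stay put
print(ring.node_for("volume-42"))
```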

Another trend that helps address storage scalability is a shift from hierarchical file systems towards object storage…
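As a preview of that shift: a file system addresses data by hierarchical path, while an object store addresses it by bucket and key over an HTTP API. Here is a minimal sketch against the widely implemented S3 interface; the bucket name, key and local path are placeholders, and boto3 needs credentials and an S3-compatible endpoint configured.

```python
# Contrast: path-based file access vs. key-based object access.
# Bucket name and key are placeholders for illustration.
import boto3

payload = b"quarterly,report,data\n"

# Hierarchical file system: the path is the namespace.
with open("/tmp/report.csv", "wb") as f:
    f.write(payload)

# Object store: a flat namespace of keys reached over HTTP;
# "folders" are just a naming convention inside the key.
s3 = boto3.client("s3")
s3.put_object(Bucket="example-archive", Key="projects/q1/report.csv", Body=payload)
obj = s3.get_object(Bucket="example-archive", Key="projects/q1/report.csv")
print(obj["Body"].read())
```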

…(read the complete as-published article there)