Massive IOPS with Drops In Price-Performance – Early 2015 SPC-2 Standings

(Excerpt from original post on the Taneja Group News Blog)

There is something quite interesting happening on the SPC-2 top 10 results page. Right now, HP (with the XP7) and Kaminario (with its hot all-flash array) lead the performance list, but the next three entries belong to Oracle’s ZFS Storage ZS3 and ZS4 lineup. And when you scroll down a bit, the biggest surprise is that the price/performance leader in the top 10 is now Oracle!

…(read the full post)

InfiniBand Updates Specs, Preparing for 10,000-Node Exascale Clusters

(Excerpt from original post on the Taneja Group News Blog)

We’ve long been fans of InfiniBand, watching as new generations of enterprise-class scale-out clusters and storage solutions learn from the HPC world how to achieve really high-speed interconnects. InfiniBand itself may never win the popular market race against Ethernet, but newer generations of Ethernet are looking more and more like InfiniBand. And parts of the IB world, namely RDMA and RoCE, have swept into data centers almost unnoticed (e.g., look under the hood of SMB 3.0).

…(read the full post)

Are all software-defined storage vendors the same?

An IT industry analyst article published by SearchDataCenter.

What does software-defined storage mean? How is it implemented? And why does it matter for data center managers?


Software-defined storage is an overused and under-defined term, but the general idea is to implement storage services such that they can be dynamically reconfigured as needed, usually through the use of software-based virtual nodes.

Generally, software-defined versions of storage have extremely competitive price points, and deploy on virtual machines and/or commodity hardware.

Some software-defined storage vendors take aim only at the virtualization environment. Many offerings available today […] are cross-platform, global, more grid-like solutions that can be provisioned and grown across an organization as needed. The best of these freely scaling solutions offer built-in global replication, a single namespace and advanced analytics (for a storage system) such as content indexing, distributed big-data-style processing, and sophisticated usage and auditing management.

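To make the idea of software-based virtual nodes that can be dynamically reconfigured a little more concrete, here is a minimal Python sketch of a hypothetical software-defined storage control plane. Everything in it (the VirtualNode and StoragePool names, the placement rule) is an illustrative assumption rather than any vendor's actual API; the point is simply that capacity and placement decisions live in software, so the pool can be regrown or reshaped without being tied to particular hardware.

    # Minimal sketch of a hypothetical software-defined storage control plane.
    # Names and behavior are illustrative assumptions, not a vendor API.
    from dataclasses import dataclass, field

    @dataclass
    class VirtualNode:
        name: str
        capacity_gb: int
        used_gb: int = 0

        @property
        def free_gb(self) -> int:
            return self.capacity_gb - self.used_gb

    @dataclass
    class StoragePool:
        nodes: list = field(default_factory=list)

        def add_node(self, node):
            # Grow the pool by spinning up another software-based virtual node.
            self.nodes.append(node)

        def provision(self, volume_gb):
            # Place a new volume on whichever virtual node has the most headroom.
            target = max(self.nodes, key=lambda n: n.free_gb)
            if target.free_gb < volume_gb:
                raise RuntimeError("pool exhausted; add another virtual node")
            target.used_gb += volume_gb
            return target.name

    pool = StoragePool()
    pool.add_node(VirtualNode("vnode-1", capacity_gb=1024))
    pool.add_node(VirtualNode("vnode-2", capacity_gb=1024))
    print(pool.provision(200))   # the volume lands wherever there is headroom
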
Several storage trends — scale-out, object-based, cloud-ready and software-defined — deserve your attention…

…(read the complete as-published article there)

New approaches to scalable storage

An IT industry analyst article published by SearchDataCenter.

With all these scalable storage approaches, IT organizations must evaluate the options against their data storage and analytics needs, as well as future architectures.


Unrelenting data growth has spawned new scalable storage designs.

We’ve all read the storage reports about overwhelming data growth. It’s certainly a big and growing challenge that deserves attention, but I’ll skip the part where I scare you into thinking we’re about to be buried under a deluge of data. We tend to store about as much data as we can, no matter how much data there might be. There has always been more data than we could keep. That’s why even the earliest data center storage systems implemented quotas, archives and data summarization.

The new challenge today is effectively mining business value out of the huge amount of newly useful data, with even more coming fast in all areas of IT storage: block, file, object, and big data. If you want to stay competitive, you’ll likely have to tackle some data storage scaling projects soon. Newer approaches to large-scale storage can help.
Scaling storage out into space

The first thing to consider is the difference between scale-up and scale-out approaches. Traditional storage systems are based on the scale-up principle, in which you incrementally grow storage capacity by simply adding more disks under a relatively fixed number of storage controllers (or a small cluster of storage controllers, with one to four high-availability pairs being common). If you exceed the system’s capacity (or performance drops off), you add another system alongside it.

Scale-up storage approaches are still relevant, especially in flash-first and high-end hybrid platforms, where latency and IOPS performance are important. A large amount of dense flash can serve millions of IOPS from a small footprint. Still, larger-capacity scale-up deployments can create difficult challenges: rolling out multiple scale-up systems tends to fragment the storage space, create a management burden and require uneven CapEx investment.

In response, many scalable storage designs have taken a scale-out approach. In scale-out designs, capacity and throughput grow incrementally by adding more storage nodes to a networked system cluster. Scale-up designs are often interpreted as having limited vertical growth, whereas scale-out designs imply relatively unconstrained horizontal growth. Each node can usually service client I/O requests, and depending on how data is spread and replicated internally, each node may be able to access any data in the cluster. Because a single cluster can grow to very large scale, system management remains unified (as does the namespace in most cases). This gives scale-out designs a smoother CapEx growth path and a more linear overall performance curve.

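As a back-of-the-envelope illustration of that difference, the short Python sketch below models how aggregate capacity and throughput grow under the two approaches. All of the numbers are invented purely for illustration.

    # Toy comparison of scale-up vs. scale-out growth (illustrative numbers only).

    def scale_up(shelves, controller_mbps=4000, shelf_tb=100):
        # Adding shelves grows capacity, but throughput stays capped
        # by the fixed controller pair.
        return {"capacity_tb": shelves * shelf_tb, "throughput_mbps": controller_mbps}

    def scale_out(nodes, node_mbps=1000, node_tb=50):
        # Adding nodes grows capacity AND aggregate throughput roughly linearly.
        return {"capacity_tb": nodes * node_tb, "throughput_mbps": nodes * node_mbps}

    for n in (4, 8, 16):
        print(n, scale_up(n), scale_out(n))
    # The scale-up throughput column stays flat while the scale-out column keeps
    # climbing -- the smoother growth path described above.
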
Another trend that helps address storage scalability is a shift from hierarchical file systems towards object storage…

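For a feel of what that shift means in practice, here is a tiny Python sketch contrasting a deep directory path with flat, metadata-tagged object addressing. The put_object() and find_objects() helpers are hypothetical stand-ins invented for this example, not any real object store's SDK.

    # Object storage replaces deep hierarchical paths like
    #   /projects/2015/q1/reports/sales.csv
    # with a flat namespace of object keys plus searchable metadata.
    # put_object() and find_objects() are hypothetical stand-ins, not a real SDK.

    bucket = {}

    def put_object(key, body, metadata):
        bucket[key] = {"body": body, "metadata": metadata}

    def find_objects(**tags):
        # Locate data by what it is, not by where it sits in a directory tree.
        return [key for key, obj in bucket.items()
                if all(obj["metadata"].get(t) == v for t, v in tags.items())]

    put_object("a3f9c2e1", b"...csv bytes...",
               {"project": "2015-q1", "type": "report", "owner": "finance"})
    print(find_objects(type="report", project="2015-q1"))   # -> ['a3f9c2e1']
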
…(read the complete as-published article there)

Signs it may be time to adopt a hybrid cloud strategy

An IT industry analyst article published by SearchCloudStorage.

The cloud is gaining traction, but the public cloud raises security concerns. Learn why a hybrid cloud strategy can offer businesses more benefits.


If you’re like most data storage professionals, you’re likely faced with the prospect of phasing cloud storage into your traditional storage environment. Many companies are reluctant to move into public cloud storage for obvious reasons — loss of control, oversight, security and concerns about how the cloud impacts compliance requirements, to name just a few. But the public cloud also offers compelling economics and elastic computing opportunities that have some businesses wanting to seize the potential benefits.

A sign that it may be time to adopt a hybrid cloud strategy is when business folks start contracting directly with public cloud providers for shadow IT services. Some of the reasons public clouds are attractive to business folks, assuming it’s not simply the friction of having to work with an underfunded internal IT group, include:

  • Economic elasticity. Cloud services are available under a number of on-demand agreements, all of which shift the IT budget from periodic large CapEx investments to smoother OpEx payments. Consuming large amounts of public cloud services may prove more expensive over time from a TCO perspective, but the ability to continually adjust the volume of services while paying only for what you use makes a lot of sense in the face of unpredictable business demand (a simple cost sketch follows this list).
  • Agility and quickness. Massive amounts of resources can be spun up in minutes when needed, as opposed to the days, weeks or months required for IT to procure, stage and deliver new infrastructure. At the same time, these resources can be shifted, almost on-demand, as needs change.
  • Broad functionality. Today’s public clouds offer any range or level of cloud outsourcing desired, including low-level infrastructure, container-like development platforms, fully functional applications and complete subscription business services.

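To make that CapEx-versus-OpEx trade-off concrete, here is a quick Python cost sketch comparing a lump-sum on-premises purchase sized for peak demand against pay-as-you-go cloud spending for a spiky year. Every figure in it is an invented assumption used only to show the shape of the comparison, not real pricing.

    # Illustrative CapEx vs. OpEx comparison -- all figures are invented assumptions.

    monthly_demand_tb = [20, 25, 22, 60, 30, 28, 26, 90, 35, 30, 28, 27]  # spiky year

    # On-premises: buy up front for the worst-case month (sized for peak).
    capex_per_tb = 600                     # assumed purchase cost per TB
    on_prem_cost = max(monthly_demand_tb) * capex_per_tb

    # Public cloud: pay each month only for what was actually used.
    opex_per_tb_month = 30                 # assumed per-TB monthly rate
    cloud_cost = sum(tb * opex_per_tb_month for tb in monthly_demand_tb)

    print(f"on-prem (sized for peak): ${on_prem_cost:,}")
    print(f"cloud (pay per use):      ${cloud_cost:,}")
    # With these made-up numbers the elastic option wins for a spiky year, but a
    # steadier (or multi-year) demand profile can flip the TCO answer the other way.
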
The sensible cloud storage strategy is a hybrid approach in which IT retains control of cloud consumption and integrates it with on-premises resources as appropriate.

But there’s another side to the story. When business essentially goes outside the IT department to contract with public cloud services, problems can arise. That’s when issues of governance and control surface, including lack of compliance oversight, loss of data management control and potential security risks…

…(read the complete as-published article there)

How to become an internal hybrid cloud service provider

An IT industry analyst article published by SearchCloudStorage.

Working on a hybrid cloud project? Mike Matchett explains the steps an organization should take to become an internal hybrid cloud service provider.


One major key to success with a hybrid cloud project is to ensure that IT fundamentally transitions into an internal hybrid cloud service provider. This means understanding that business users are now your clients. You must now proactively create and offer services instead of the traditional reactive stance of working down an endless queue of end-user requests. It becomes critical to track and report on the service quality delivered and the business service utilization. Without those metrics, it’s impossible to optimize the services for price, performance, surge capacity, availability or whatever factor might be important to the overall business.

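As a sketch of what that tracking and reporting might look like in practice, the short Python snippet below rolls per-client consumption records up into a simple showback report. The record layout and the per-unit rates are assumptions invented for illustration, not any billing system's actual API.

    # Minimal showback sketch: roll up per-client usage into a monthly report.
    # The record layout and the rates are illustrative assumptions only.
    from collections import defaultdict

    usage_records = [
        {"client": "marketing", "service": "object-storage", "units": 120_000},
        {"client": "marketing", "service": "vm-compute",     "units": 40_000},
        {"client": "finance",   "service": "object-storage", "units": 300_000},
    ]
    rate_per_unit = {"object-storage": 0.0002, "vm-compute": 0.0010}  # assumed $

    showback = defaultdict(float)
    for rec in usage_records:
        showback[rec["client"]] += rec["units"] * rate_per_unit[rec["service"]]

    for client, cost in sorted(showback.items()):
        # Even if no one actually pays the bill, surfacing the number drives awareness.
        print(f"{client:<12} ${cost:,.2f}")
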
Hallmarks of a successful hybrid organization include:

  • A renewed focus on implementing higher levels of automation, spurred by the need to provide clients ways to provision and scale services in an agile manner. This automation usually extends to other parts of IT, like helping to build non-disruptive maintenance processes.
  • An effective monitoring and management process that works at cloud scale to help ensure service-level agreements are met.
  • Clients aware of what they are consuming and using, even if they’re not actually seeing a bill for the services.

Perhaps the first step is to evaluate the involved workloads and their data sets to look for good hybrid opportunities. If you find that workloads are currently fine or require specialized support, it might be best to leave them alone for now and focus instead on workloads that are based on common platforms.

Next, it’s imperative to address the following implementation concerns before letting real data travel across hybrid boundaries…

…(read the complete as-published article there)