Hyperconverged Storage Evolves – Or Is It Pivoting When It Comes to Pivot3?

(Excerpt from original post on the Taneja Group News Blog)

Pivot3 recently acquired NexGen (March 2016), and many folks have been wondering what they are up to. Pivot3 has made a name in the surveillance/video vertical with bulletproof hyperconvergence, built on highly reliable data protection (native erasure coding) and large scalability (no additional east/west traffic with scale). So what does NexGen IP bring? For starters, multi-tier flash performance and enterprise storage features (like snapshots).

…(read the full post)

Server Side Is Where It’s At – Leveraging Server Resources For Performance

(Excerpt from original post on the Taneja Group News Blog)

If you want performance, especially in I/O, you have to bring it to where the compute is happening. We’ve recently seen Datrium launch a smart “split” array solution in which the speedy (and compute-intensive) bits of the logical array are hosted server-side, with persisted data served from a shared, simplified controller and (almost-JBOD) disk shelf. Now Infinio has announced version 3.0 of their caching solution this week, adding tiered cache support for server-side SSDs and other flash to their historically memory-focused I/O acceleration.

…(read the full post)

Scaling All Flash to New Heights – DDN Flashscale All Flash Array Brings HPC to the Data Center

(Excerpt from original post on the Taneja Group News Blog)

It’s time to start thinking about massive amounts of flash in the enterprise data center. I mean PBs of flash for the biggest, baddest, fastest data-driven applications out there. This amount of flash requires an HPC-capable storage solution brought down and packaged for enterprise IT management, which is where DataDirect Networks (aka DDN) is stepping up. Perhaps too quietly, they have been hard at work pivoting their high-end HPC portfolio into the enterprise space. Today they are rolling out the Flashscale 14KXi, a massively scalable, flash-centric storage array that will help them offer comprehensive single-vendor big data workflow solutions – from the fastest scratch space, through the biggest-throughput parallel file systems, into the largest distributed object storage archives.

…(read the full post)

Filling In With Flash – Tintri Offers Smaller All Flash For Hungry VMs

(Excerpt from original post on the Taneja Group News Blog)

In 2015 we finally saw VVOLs start to roll out, yet VVOL support varies widely and so far hasn’t been as impressive as we’d have thought. Perhaps VMware’s own Virtual SAN stole some of the VVOL show, but more likely spotty VVOL enhancements just haven’t leveled the playing field with enterprise-grade, VM-aware storage like that from Tintri. In fact, Tintri is still running away with the ball, having rolled out fast all-flash solutions earlier this year (at 72 and 36 TB effective capacity).

…(read the full post)

Moving to all-flash? Think about your data storage infrastructure

An IT industry analyst article published by SearchStorage.


Everyone is now on board with flash. All the key storage vendors have at least announced entry into the all-flash storage array market, and most have offered hybrids (traditional arrays boosted with solid-state drives) for years. As silicon storage gets cheaper and denser, it seems inevitable that data centers will migrate from spinning disks to “faster, better and cheaper” options, with non-volatile memory poised to be the long-term winner.

But the storage skirmish today seems to be heading toward the total cost of ownership end of things, where two key questions must be answered:

  • How much performance is needed, and how many workloads in the data center have data with varying quality of service (QoS) requirements or data that ages out?
  • Are hybrid arrays a better choice to handle mixed workloads through advanced QoS and auto-tiering features?

All-flash proponents argue that cost and capacity will continue to drop for flash compared to hard disk drives (HDDs), and that no workload is left wanting with the ability of all-flash to service all I/Os at top performance. Yet we see a new category of hybrids on the market that are designed for flash-level performance and then fold in multiple tiers of colder storage. The argument there is that data isn’t all the same and its value changes over its lifetime. Why store older, un-accessed data on a top tier when there are cheaper, capacity-oriented tiers available?
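
The hybrid pitch boils down to a placement policy that follows data temperature. As a minimal sketch of the idea in Python (the tier names and age thresholds below are purely hypothetical, not any vendor’s actual policy), an auto-tiering engine might demote data as its last access recedes:

```python
from datetime import datetime, timedelta

# Hypothetical tiers and thresholds, for illustration only.
HOT_WINDOW = timedelta(days=7)     # recently touched data stays on flash
WARM_WINDOW = timedelta(days=90)   # aging data drops to capacity disk

def place_block(last_access, now=None):
    """Pick a storage tier for a block based on how recently it was accessed."""
    now = now or datetime.utcnow()
    age = now - last_access
    if age < HOT_WINDOW:
        return "flash"            # performance tier: service hot I/O at flash speed
    if age < WARM_WINDOW:
        return "capacity_disk"    # cheaper media for data that has cooled off
    return "cold_archive"         # oldest, un-accessed data ages out entirely
```

Real arrays make this decision per sub-LUN extent and weigh access frequency and QoS targets, not just age, but the economics are the same: only the data that is actually hot has to sit on the most expensive media.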

It’s misleading to lump together hybrids that are traditional arrays with solid-state drives (SSDs) added and the new hybrids that might be one step evolved past all-flash arrays. And it can get even more confusing when the old arrays get stuffed with nothing but flash and are positioned as all-flash products. To differentiate, some industry wags like to use the term “flash-first” to describe newer-generation products purpose-built for flash speeds. That still could cause some confusion when considering both hybrids and all-flash designs, so it may be more accurate to call the flash-first hybrids “flash-converged.” With a flash-converged design, you can expect to buy one of these new hybrids with nothing but flash inside and get all-flash performance.

We aren’t totally convinced that the future data center will have just a two-tier system with flash on top backed by tape (or a remote cold cloud), but a “hot-cold storage” future is entirely possible as intermediate tiers of storage get, well, dis-intermediated. We’ve all predicted the demise of 15K HDDs for a while; can all the other HDDs be far behind as QoS controls get more sophisticated in handling the automatic mixing of hot and cold to create any temperature storage you might need? …

…(read the complete as-published article there)

Compressing Data for Performance: Pernix’s Latest Release Squeezes Into RAM

(Excerpt from original post on the Taneja Group News Blog)

As machines ship with ever more memory in them, we’ve been seeing that memory put to a lot of good uses lately. Today PernixData released FVP 2.5, which updates their big 2.0 release that brought memory into their server-side storage acceleration solution alongside flash. Imagine if you could pool server memory across the virtual cluster and use it as a very fast, yet protected, persistent storage tier. That’s pretty much what PernixData FVP does with its Distributed Fault Tolerant Memory (DFTM) design.

…(read the full post)

A new gen for NexGen?

(Excerpt from original post on the Taneja Group News Blog)

NexGen was one of the first real hybrid flash storage solutions with QoS, and it leveraged PCIe flash (i.e., Fusion-IO cards) to great effect, which we suppose had something to do with why Fusion-IO bought them up a couple of years ago. But whatever plans were in the works were likely derailed when SanDisk in turn bought Fusion-IO, because we haven’t heard from the NexGen folks in a while – not a good sign for a storage solution. Well, SanDisk has now spun NexGen back out on its own. While it may be sink or swim time for the NexGen team, we think it’s a good opportunity for all involved.

…(read the full post)

Out on a data storage market limb: Six predictions for 2015

An IT industry analyst article published by SearchStorage.

Our crystal ball tells us this will be a year of change for the data storage market.


With another year just getting underway, we here at Taneja Group felt we needed a few analyst predictions to get things off on the right foot. The easiest predictions, and often the most likely ones, are that things will continue mostly as they are. But what fun is that? So, like any good fortune teller, we held hands around a crystal ball, gathered our prescient thoughts and with the help of the storage spirits came up with these six predictions for change in the data storage market for 2015.

  1. The overall traditional storage market will stay relatively flat despite huge growth in big data and the onrushing Internet of Things. Most new big data will be unstructured, and big data architectures like Hadoop will still tend to leverage DAS for storage. In addition, many big data players are pushing the data lake or hub concept to land even bigger chunks of other enterprise data on big data clusters. While we do see some salvation in this space from vendors […] that enable big data analysis to leverage traditional enterprise storage, it won’t be enough to make a big dent in 2015. We’ve also noticed that many storage shops have yet to take advantage of the emerging capacity optimizations now available (e.g., thin provisioning, linked clones, global deduplication, inline compression and so on) in recent versions of competitive arrays, optimizations that are becoming table stakes for new acquisition decisions. Hybrid arrays, in particular, are bringing flash-enabling space efficiencies across their full complement of storage tiers, and most arrays these days are at least turning hybrid.
  2. Speaking of flash, there are too many all-flash array (AFA) vendors and not enough differentiation. During 2012/2013 the first AFA vendors had the market to themselves, but with all the big players rolling out full-fledged flash offerings, opportunities are declining. With [many vendors] all pushing their own solutions (both AFA and hybrid), the remaining independent vendors will have a harder time finding a niche where they can survive. We also expect to see a new round of very high-end performance storage architectures in 2015[…] As a related trend, we anticipate that hybrid-based Tier-1 arrays will lose ground to AFAs in general, as the cost of flash drops and flash performance proves valuable to most if not all Tier-1 I/O. In virtualization environments, this trend will be somewhat lessened by the rise in popularity of server-side flash and/or memory caching/tiering solutions.
  3. Data protection and other add-on storage capabilities will become more directly baked into storage solutions. We expect to see more traditional arrays follow the examples of

…(To read the complete set of six predictions, see the as-published article there)

Figuring out the real price of flash technology

An IT industry analyst article published by SearchSolidStateStorage.

Sometimes comparing the costs of flash arrays is an apples-to-oranges affair — interesting, but not very helpful.


We’re often told by hybrid and all-flash array vendors that their particular total cost of ownership (TCO) is effectively lower than the other guy’s. We’ve even heard vendors claim that by taking certain particulars into account, the per-gigabyte price of their flash solution is lower than that of spinning disk. Individually, the arguments sound compelling; but stack them side by side and you quickly run into apples-and-oranges issues.
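
Much of the apples-to-oranges trouble starts with which capacity number sits under the dollar figure: raw, usable after RAID and spares, or “effective” after an assumed deduplication/compression ratio. A back-of-the-envelope sketch (all prices, capacities and ratios below are hypothetical) shows how the same style of math lets a flash array undercut disk on $/GB:

```python
def cost_per_effective_gb(list_price, raw_tb, usable_fraction, data_reduction):
    """Price per 'effective' GB after overhead and an assumed data reduction ratio.

    usable_fraction models RAID/spares/metadata overhead; data_reduction is the
    dedupe/compression ratio a vendor chooses to assume. Both knobs are where
    quoted $/GB figures quietly diverge.
    """
    effective_gb = raw_tb * 1000 * usable_fraction * data_reduction
    return list_price / effective_gb

# Hypothetical quotes: flash counted at a 5:1 reduction ratio, disk counted with none.
flash_array = cost_per_effective_gb(list_price=250_000, raw_tb=50, usable_fraction=0.8, data_reduction=5.0)
disk_array = cost_per_effective_gb(list_price=150_000, raw_tb=150, usable_fraction=0.7, data_reduction=1.0)
print(f"flash: ${flash_array:.2f}/GB   disk: ${disk_array:.2f}/GB")  # flash: $1.25/GB   disk: $1.43/GB
```

Change the assumed reduction ratio to 2:1 and the ranking flips, which is exactly why we’d like these factors made explicit rather than buried in vendor TCO decks.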

Storage has a lot of factors that should be profiled and evaluated (IOPS, latency, bandwidth, protection, reliability, consistency and so on), and these must match up with client workloads that have unique read/write mixes, burstiness, data sizes, metadata overhead and quality of service/service-level agreement requirements. Standard benchmarks may be interesting, but the best way to evaluate storage is to test it under your particular production workloads; a sophisticated load generation and modeling tool like that from Load DynamiX can help with that process.

But as analysts, when we try to make industry-level evaluations hoping to compare apples to apples, we run into a host of half-hidden factors we’d like to see made explicitly transparent if not standardized across the industry. Let’s take a closer look…

…(read the complete as-published article there)

Flash runs past read cache

An IT industry analyst article published by SearchDataCenter.

Just because you can add a cache doesn’t mean you should. It is possible to have the wrong kind, so weigh your options before implementing memory-based cache for a storage boost.


Can you ever have too much cache?

[Cache is the new black…] As a performance optimizer, cache has never gone out of style, but with today’s affordable flash and cheap memory, it is now worn by every data center device.

Fundamentally, a classic read cache helps avoid long repetitive trips through a tough algorithm or down a relatively long input/output (I/O) channel. If a system does something tedious once, it temporarily stores the result in a read cache in case it is requested again.

Duplicate requests don’t need to come from the same client. For example, in a large virtual desktop infrastructure (VDI) scenario, hundreds of virtual desktops might want to boot from the same master image of an operating system. With a cache, every user gets a performance boost, and the downstream system is saved from a lot of duplicate I/O work.

The problem with using old-school, memory-based cache for writes is that if you lose power, you lose the cache. Thus, [unless with battery backup] it is used only as a read cache. Writes are set up to “write through”: new data must persist somewhere safe on the back end before the application continues.
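
To make the read-cache and write-through behavior concrete, here is a minimal sketch in Python (the class and its backend interface are our own invention, not any product’s API; real caches live in array firmware or hypervisor kernels and are far more sophisticated):

```python
from collections import OrderedDict

class WriteThroughCache:
    """Minimal read cache with write-through semantics (illustrative only).

    Reads are served from memory when possible; writes always go to the
    backing store first, so a power loss never costs acknowledged data.
    """

    def __init__(self, backend, capacity=1024):
        self.backend = backend          # any object with read(key) / write(key, value)
        self.capacity = capacity
        self.cache = OrderedDict()      # key -> value, kept in LRU order

    def read(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)     # mark as recently used
            return self.cache[key]          # cache hit: no backend I/O
        value = self.backend.read(key)      # cache miss: pay the slow trip once
        self._insert(key, value)
        return value

    def write(self, key, value):
        self.backend.write(key, value)      # persist first ("write through")
        self._insert(key, value)            # then keep a copy for future reads

    def _insert(self, key, value):
        self.cache[key] = value
        self.cache.move_to_end(key)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used entry
```

In the VDI example above, hundreds of desktops reading the same boot blocks hit read() with the same keys, so only the first request pays the trip to the back end; every write is acknowledged only after the backend persists it, so losing the cache loses nothing.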

Flash, by contrast, is nonvolatile memory (unlike DRAM, it keeps data without power) and is used as a cache or directly as a tier of storage…

…(read the complete as-published article there)