What’s a Multi-cloud Really?  Some Insider Notes from VMworld 2017

(Excerpt from original post on the Taneja Group News Blog)

As comfortable 65-70 degree weather blankets New England near the end of summer, flying into Las Vegas for VMworld at 110 degrees seemed like dropping into hell. The last time I was in that kind of heat, I was stepping off a C-130 into the Desert Shield/Desert Storm theater of operations. At least here, as everyone still able to breathe immediately says, “at least it’s a dry heat.”

…(read the full post)

Actual Hybrid of Enterprise Storage and Public Cloud? Oracle creates a Cloud Converged System

(Excerpt from original post on the Taneja Group News Blog)

What’s a Cloud Converged system? It is really what we naive people thought hybrid storage was all about all along. Yet until now, no high-performance, enterprise-class storage ever actually delivered it. But now Oracle’s latest ZFS Storage Appliance, the ZS5, comes natively integrated with Oracle Cloud storage. What does that mean? On-premises ZS5 storage object pools now extend organically into Oracle Cloud storage (which is also built from ZS storage) – no gateway or third-party software required.
 
Oracle has essentially brought enterprise hybrid cloud storage to market, no integration required. I’m not really surprised that Oracle has been able to roll this out, but I am a little surprised that they are leading the market in this area.
 
Why hasn’t Dell EMC come up with a straightforward hybrid cloud leveraging their enterprise storage and cloud solutions? Despite having all the parts, they failed to actually produce the long desired converged solution – maybe due to internal competition between infrastructure and cloud divisions? Well, guess what. Customers want to buy hybrid storage, not bundles or bunches of parts and disparate services that could be integrated (not to mention wondering who supports the resulting stack of stuff).
 
Some companies are so married to their legacy solutions that, like NetApp for example, they don’t even offer their own cloud services – maybe they were hoping this cloud thing would just blow over? Maybe all those public cloud providers would stick with Web 2.0 apps and wouldn’t compete for enterprise GB dollars?
 
(Microsoft does have StorSimple, which may have pioneered on-premises storage integrated with cloud tiering to Azure. However, StorSimple is not a high-performance, enterprise-class solution capable of handling PBs with massive memory-accelerated performance. And it appears that Microsoft is no longer driving direct sales of StorSimple, apparently positioning it now only as one of many on-ramps to herd SMEs fully into Azure.)
 
We’ve reported on the Oracle ZFS Storage Appliance itself before. It has been highly augmented over the years. The Oracle ZFS Storage Appliance is a great filer on its own, competing favorably on price and performance with all the major NAS vendors. And it provides extra value with all the Oracle Database co-engineering poured into it. Now that it’s inherently cloud enabled, we think for some folks it’s likely the last NAS they will ever need to invest in (if you want more performance, you will likely move to in-memory solutions, and if you want more capacity – well, that’s what the cloud is for!).
 
Oracle’s Public Cloud is made up of – actually built out of – Oracle ZFS Storage Appliances. That means the same storage is running on the customer’s premises as in the public cloud they are connected with. Not only does this eliminate a whole raft of potential issues, but solving any problems that might arise is going to be much simpler (and problems are less likely to happen in the first place, given the scale of Oracle’s own deployment of its own hardware).
 
Compare this to NetApp’s offering to run a virtual image of NetApp storage in a public cloud that only layers up complexity and potential failure points. We don’t see many taking the risk of running or migrating production data into that kind of storage. Their NPS co-located private cloud storage is perhaps a better offering, but the customer still owns and operates all the storage – there is really no public cloud storage benefit like elasticity or utility pricing.
 
Other public clouds and on-prem storage can certainly be linked with products like Attunity CloudBeam, or additional cloud gateways or replication solutions.  But these complications are exactly what Oracle’s new offering does away with.
 
There is certainly a core vendor alignment of on-premises Oracle storage with an Oracle Cloud subscription, and no room for cross-cloud brokering at this point. But a ZFS Storage Appliance presents no more technical lock-in than any other NAS (other than the claim that it is more performant at less cost, especially for key workloads that run Oracle Database), nor does Oracle Cloud restrict the client to just Oracle on-premises storage.
 
And if you are buying into the Oracle ZFS family, you will probably find that the co-engineering benefits with Oracle Database (and Oracle Cloud) make the whole set that much more attractive, technically and financially. I haven’t done recent pricing in this area, but I suspect that while there may be cheaper cloud storage prices per vanilla GB out there, looking at the full TCO for an enterprise GB, including hybrid features and agility, could bring Oracle Cloud Converged Storage to the top of the list.

…(read the full post)

Kudu Might Be Invasive: Cloudera Breaks Out Of HDFS

(Excerpt from original post on the Taneja Group News Blog)

For the IT crowd just now getting used to the idea of big data’s HDFS (Hadoop’s Distributed File System) and its peculiarities, there is an alternative open source big data storage system coming from Cloudera called Kudu. Like HDFS, Kudu is designed to be hosted across a scale-out cluster of commodity systems, but it is specifically intended to support lower-latency analytics.

…(read the full post)

Integrate cloud tiering with on-premises storage

An IT industry analyst article published by SearchCloudStorage.


Cloud and on-premises storage are increasingly becoming integrated. This means cloud tiering is just another option available to storage administrators. Organizations aren’t likely to move 100% of their data into cloud services, but most will want to take advantage of cloud storage benefits for at least some data. The best approaches to using cloud storage in a hybrid fashion create a seamless integration between on-premises storage resources and the cloud. The cloud tiering integration can be accomplished with purpose-built software, cloud-enabled applications or the capabilities built into storage systems or cloud gateway products.
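To make the tiering mechanics concrete, here is a minimal sketch of the kind of age-based policy that purpose-built tiering software or a cloud gateway applies when deciding which data stays on premises and which moves to the cloud. The tier names and day thresholds are hypothetical illustrations, not any specific product’s defaults.

```python
from datetime import datetime, timedelta

# Hypothetical policy thresholds -- real tiering products expose these as settings.
HOT_DAYS = 30     # data accessed in the last 30 days stays on primary storage
WARM_DAYS = 180   # data idle 30-180 days becomes a cloud-tier candidate

def choose_tier(last_access, now=None):
    """Classify a data set by access age into an on-premises or cloud tier."""
    now = now or datetime.utcnow()
    age = now - last_access
    if age <= timedelta(days=HOT_DAYS):
        return "on-prem-primary"     # hot: keep local for performance
    if age <= timedelta(days=WARM_DAYS):
        return "cloud-standard"      # warm: cheap, elastic cloud capacity
    return "cloud-archive"           # cold: deep archive or off-site backup tier
```

The point of “seamless” integration is that a policy like this runs continuously behind the scenes, so administrators set the thresholds once rather than migrating data by hand.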

This may be the year that public cloud adoption finally moves beyond development projects and Web 2.0 companies and enters squarely into the mainstream of IT. Cloud service providers can offer tremendous advantages in terms of elasticity, agility, scalable capacity and utility pricing. Of course, there remain some unavoidable concerns about security, competitiveness, long-term costs and performance. Also, not all applications or workloads are cloud-ready and most organizations are not able to operate fully in a public cloud. However, these concerns lead to what we are seeing in practice as a hybrid cloud approach, attempting to combine the best of both worlds.

Taneja Group research supports that view, determining that only about 10% of enterprise IT organizations are even considering moving wholesale into public clouds. The vast majority of IT shops continue to envision future architectures with cloud and on-premises infrastructure augmented by hyperconverged products, at least within the next 3-5 years. Yet, in those same shops, increasing storage consolidation, virtualization and building out cloud services are the top IT initiatives planned out for the next 18 months. These initiatives lean toward using available public cloud capabilities where it makes sense — supporting Web apps and mobile users, collaboration and sharing, deep archives, off-site backups, DRaaS and even, in some cases, as a primary storage tier.

The amount of data that many IT shops will have to store, manage, protect and help process is, by many estimates, predicted to double every year for the foreseeable future. Given very real limits on data centers, staffing and budget, it will become increasingly hard to deal with this data growth completely in-house.

…(read the complete as-published article there)

Moving to all-flash? Think about your data storage infrastructure

An IT industry analyst article published by SearchStorage.


Everyone is now onboard with flash. All the key storage vendors have at least announced entry into the all-flash storage array market, with most having offered hybrids — solid-state drive-pumped traditional arrays — for years. As silicon storage gets cheaper and denser, it seems inevitable that data centers will migrate from spinning disks to “faster, better and cheaper” options, with non-volatile memory poised to be the long-term winner.

But the storage skirmish today seems to be heading toward the total cost of ownership end of things, where two key questions must be answered:

  • How much performance is needed, and how many workloads in the data center have data with varying quality of service (QoS) requirements or data that ages out?
  • Are hybrid arrays a better choice to handle mixed workloads through advanced QoS and auto-tiering features?

All-flash proponents argue that cost and capacity will continue to drop for flash compared to hard disk drives (HDDs), and that no workload is left wanting with the ability of all-flash to service all I/Os at top performance. Yet we see a new category of hybrids on the market that are designed for flash-level performance and then fold in multiple tiers of colder storage. The argument there is that data isn’t all the same and its value changes over its lifetime. Why store older, un-accessed data on a top tier when there are cheaper, capacity-oriented tiers available?

It’s misleading to lump together hybrids that are traditional arrays with solid-state drives (SSDs) added and the new hybrids that might be one step evolved past all-flash arrays. And it can get even more confusing when the old arrays get stuffed with nothing but flash and are positioned as all-flash products. To differentiate, some industry wags like to use the term “flash-first” to describe newer-generation products purpose-built for flash speeds. That still could cause some confusion when considering both hybrids and all-flash designs. It may be more accurate to call the flash-first hybrids “flash-converged.” By being flash-converged, you can expect to buy one of these new hybrids with nothing but flash inside and get all-flash performance.

We aren’t totally convinced that the future data center will have just a two-tier system with flash on top backed by tape (or a remote cold cloud), but a “hot-cold storage” future is entirely possible as intermediate tiers of storage get, well, dis-intermediated. We’ve all predicted the demise of 15K HDDs for a while; can all the other HDDs be far behind as QoS controls get more sophisticated in handling the automatic mixing of hot and cold to create any temperature storage you might need? …

…(read the complete as-published article there)

New approaches to scalable storage

An IT industry analyst article published by SearchDataCenter.

With all these scalable storage approaches, IT organizations must evaluate the options against their data storage and analytics needs, as well as future architectures.


Unrelenting data growth has spawned new scalable storage designs.

We’ve all read the storage reports about overwhelming data growth. It’s certainly a big and growing challenge that deserves attention, but I’ll skip the part where I scare you into thinking we’re about to be buried under a deluge of data. We tend to store about as much data as we can, no matter how much data there might be. There has always been more data than we could keep. That’s why even the earliest data center storage systems implemented quotas, archives and data summarization.

The new challenge today is effectively mining business value out of the huge amount of newly useful data, with even more coming fast in all areas of IT storage: block, file, object, and big data. If you want to stay competitive, you’ll likely have to tackle some data storage scaling projects soon. Newer approaches to large-scale storage can help.
Scaling storage out into space

The first thing to consider is the difference between scale-up and scale-out approaches. Traditional storage systems are based on the scale-up principle, in which you incrementally grow storage capacity by simply adding more disks under a relatively fixed number of storage controllers (or small cluster of storage controllers, with one to four high availability pairs being common). If you exceed the system capacity (or performance drops off), you add another system alongside it.

Scale-up storage approaches are still relevant, especially in flash-first and high-end hybrid platforms, where latency and IOPS performance are important. A large amount of dense flash can serve millions of IOPS from a small footprint. Still, larger capacity scale-up deployments can create difficult challenges — rolling out multiple scale-up systems tends to fragment the storage space, creates a management burden and requires uneven CapEx investment.

In response, many scalable storage designs have taken a scale-out approach. In scale-out designs, capacity and performance throughput grow incrementally by adding more storage nodes to a networked system cluster. Scale-up designs are often interpreted as having limited vertical growth, whereas scale-out designs imply a relatively unconstrained horizontal growth. Each node can usually service client I/O requests, and depending on how data is spread and replicated internally, each node may access any data in the cluster. As a single cluster can grow to very large scale, system management remains unified (as does the namespace in most cases). This gives scale-out designs a smoother CapEx growth path and a more overall linear performance curve.
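One common way scale-out designs spread data across nodes, so that any node can locate any object and adding a node moves only a fraction of the data, is consistent hashing. The toy sketch below illustrates the idea only; it is not any particular vendor’s placement algorithm, and the node names are made up.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Toy consistent-hash ring: spreads object keys across storage nodes so
    that adding a node relocates only a fraction of the keys (a common data
    placement scheme in scale-out storage, not a specific vendor's design)."""

    def __init__(self, nodes, vnodes=100):
        # Each physical node gets `vnodes` points on the ring for an even spread.
        self._ring = sorted(
            (self._hash(f"{node}:{i}"), node)
            for node in nodes
            for i in range(vnodes)
        )
        self._hashes = [h for h, _ in self._ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key):
        # Walk clockwise to the first ring point at or after the key's hash.
        idx = bisect.bisect(self._hashes, self._hash(key)) % len(self._ring)
        return self._ring[idx][1]
```

Because placement is computed rather than looked up in a central table, every node can answer for every key, which is part of why scale-out clusters keep a unified namespace and a smooth growth path as nodes are added.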

Another trend that helps address storage scalability is a shift from hierarchical file systems towards object storage…

…(read the complete as-published article there)

Violin Bares New Gen of All-Flash Teeth

(Excerpt from original post on the Taneja Group News Blog)

Recently I hosted a second all-flash vendor panel on our Taneja Group BrightTalk channel. Violin Memory and IBM squared off (and agreed on many points) on the current good use cases, applications, and futures of the high-performance end of the flash storage market, with NetApp providing some balanced commentary in preparation for their upcoming all-flash solution (FlashRay). Violin’s perspective was that efficiently designed all-flash systems are starting to be so cost-competitive that they are taking over tier 1 AND tier 2 workloads in the data center, and by the way, a competitive data center has already completely transitioned to their all-flash storage.

…(read the full post)

A new gen for NexGen?

(Excerpt from original post on the Taneja Group News Blog)

NexGen was one of the first real flash/hybrid storage solutions with QoS, and it leveraged PCIe flash (i.e., Fusion-IO cards) to great effect, which we suppose had something to do with why Fusion-IO bought them up a couple of years ago. But whatever plans were in the works were likely disrupted when SanDisk in turn bought Fusion-IO, because we haven’t heard from the NexGen folks in a while – not a good sign for a storage solution. Well, SanDisk has now spun NexGen back out on its own. While it may be sink-or-swim time for the NexGen team, we think it’s a good opportunity for all involved.

…(read the full post)