Scale-out architecture and new data protection capabilities in 2016

An IT industry analyst article published by SearchDataCenter.


January was a time to make obvious predictions and short-lived resolutions. Now is the time for intelligent analysis of the shark-infested waters of high tech. The new year is an auspicious time for new startups to come out of the shadows. But what is just shiny and new, and what will really impact data centers?

From application-focused resource management to scale-out architecture, here are a few emerging trends that will surely impact the data center.

…(read the complete as-published article there)

Kudu Might Be Invasive: Cloudera Breaks Out Of HDFS

(Excerpt from original post on the Taneja Group News Blog)

For the IT crowd just now getting used to the idea of big data's HDFS (Hadoop's Distributed File System) and its peculiarities, there is an alternative open source big data storage engine coming from Cloudera called Kudu. Like HDFS, Kudu is designed to be hosted across a scale-out cluster of commodity systems, but it is specifically intended to support low-latency analytics.

…(read the full post)

Will container virtualization be the biggest data center trend of 2016?

An IT industry analyst article published by SearchServerVirtualization.


It’s hard to predict what the biggest thing to hit the data center will be in 2016. Big data? Hyper-convergence? Hybrid cloud? I’ve decided that this is the year that containers will arrive in a big way — much earlier and faster than many expect, catching unprepared IT shops by surprise.

Unlike other technologies like big data that require vision and forward investment, containers are a natural next step for application packaging, deployment and hosting that don’t require massive shifts in mindset or vision. It’s just quicker and easier to develop and deploy an application in a container than it is to build a virtual appliance. Containerized architectures also have the compelling operational and financial benefits of cheaper or free licensing, more efficient use of physical resources, better scalability and ultimately service reliability. Looking ahead, container virtualization will help organizations take better advantage of hybrid or cross-cloud environments.

Server virtualization was also a great idea when it first came out, with significant advantages over physical hosting, but it still took many years to mature (remember how long it was before anyone hosted an important database in a VM?). The same has been true for private or hybrid clouds, new storage technologies and even big data. But even though container virtualization is just out of the gate, it has gotten farther down the maturity road by leveraging the roadmap laid out by server virtualization. And you can get a jumpstart by using trusted hypervisors like VMware vSphere Integrated Containers to shepherd in containers while the native container world polishes up its rougher edges. Because containers are sleeker and slimmer than VMs (they are essentially just processes), they will slip into the data center even if IT isn't looking or paying attention (and even if IT doesn't want them yet).

…(read the complete as-published article there)

What’s the future of data storage in 2016?

An IT industry analyst article published by SearchStorage.


It’s hard to make stunning predictions on the future of data storage that are certain to come true, but it’s that time of year and I’m going to step out on that limb again. I’ll review my predictions from last year as I go — after all, how much can you trust me if I’m not on target year after year? (Yikes!)

Last year, I said the total data storage market would stay flat despite big growth in unstructured data. I’d have to say that seems to be true, if not actually dropping. Despite lots of new entrants in the market, the average vendor margin in storage is narrowing with software-defined variants showing up everywhere, open-source alternatives nibbling at the edges, commodity-based appliances becoming the rule, and ever-cheaper “usable” flash products improving performance and density at the same time.

…(read the complete as-published article there)

Hyperconvergence for ROBOs and the Datacenter — Virtualization Review

An IT industry analyst article published by Virtualization Review.


Convergence is a happy word to a lot of busy IT folks who are still working long hours standing up large, complex stacks of infrastructure (despite having virtualized their legacy server sprawl), much less trying to deploy and manage mini data centers out in tens, hundreds, or even thousands of remote or branch offices (ROBOs).

Most virtualized IT shops need to run lean and mean, and many find it challenging to integrate and operate all the real equipment that goes into the main datacenter: hypervisors, compute clusters, SANs, storage arrays, IP networks, load balancers, WAN optimizers, cloud gateways, backup devices and more. From a logical perspective, when you multiply the number of heterogeneous components by the number of remote locations, the "scale" of IT to manage climbs very fast. And if you factor in the possible interactions between all those pieces, the challenges of managing at scale grow non-linearly — pairwise interactions alone grow quadratically with the number of managed components.
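A quick back-of-the-envelope sketch makes the non-linear growth concrete. The component and office counts below are hypothetical, chosen only to illustrate how the raw object count grows linearly with locations while possible pairwise interactions grow quadratically:

```python
# Sketch (hypothetical numbers): how management "scale" grows when
# heterogeneous components are multiplied across remote locations.

def managed_objects(component_types: int, locations: int) -> int:
    """Raw count of infrastructure pieces to manage (grows linearly)."""
    return component_types * locations

def pairwise_interactions(n: int) -> int:
    """Possible pairwise interactions among n pieces: n choose 2 (quadratic)."""
    return n * (n - 1) // 2

types = 8     # e.g., hypervisor, SAN, array, network, load balancer, ...
robos = 100   # hypothetical number of remote/branch offices

objects = managed_objects(types, robos)        # 800 pieces to manage
interactions = pairwise_interactions(objects)  # 319,600 possible pairings

print(objects, interactions)
```

Doubling the office count doubles the object count but roughly quadruples the interaction count, which is why per-site heterogeneity hurts so much more than raw size.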

…(read the complete as-published article there)

Filling In With Flash – Tintri Offers Smaller All Flash For Hungry VMs

(Excerpt from original post on the Taneja Group News Blog)

In 2015 we finally saw VVOLs start to roll out, yet VVOL support varies widely and so far hasn't been as impressive as we'd have thought. Perhaps VMware's own Virtual SAN stole some of their own show, but more likely spotty VVOL enhancements just haven't leveled the playing field with enterprise-grade VM-aware storage like that from Tintri. And in fact Tintri is still running away with the ball, having rolled out fast all-flash solutions earlier this year (at 72 and 36 TB effective capacity).

…(read the full post)

Time To Use The Force, IT! – OpsDataStore Unifies Systems Management Data

(Excerpt from original post on the Taneja Group News Blog)

We are more than a bit excited by the impending Star Wars release. How old were we when the first one came out? I'm not saying. We are all very excited here to see this new continuation – of the story, the characters, and the universe of the Force. Especially compared to our day-to-day IT management reality, which often seems stuck in the '70s. Systems management has been around even longer than the Star Wars franchise, but it seems to have stagnated along the way. Where is the rebellion? The good Jedi warriors to save us all from the dark side?

…(read the full post)

Can your cluster management tools pass muster?

An IT industry analyst article published by SearchDataCenter.


A big challenge for IT is managing big clusters effectively, especially with bigger data, larger mashed-up workflows, and the need for more agile operations.

Cluster designs are everywhere these days. Popular examples include software-defined storage, virtual infrastructure, hyper-convergence, public and private clouds, and, of course, big data. Clustering is the scale-out way to architect infrastructure to use commodity resources like servers and JBODs. Scale-out designs can gain capacity and performance incrementally, reaching huge sizes cost-effectively compared to most scale-up infrastructure.
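The incremental-growth economics of scale-out versus scale-up can be sketched as a toy model. All prices and capacities below are hypothetical, purely to show that scale-out pays in small commodity increments while scale-up grows in large forklift steps:

```python
# Toy model (hypothetical prices/capacities): scale-out adds capacity in
# small commodity-node increments; scale-up buys large monolithic steps.
import math

NODE_TB, NODE_COST = 50, 20_000      # hypothetical commodity node
ARRAY_TB, ARRAY_COST = 500, 350_000  # hypothetical monolithic array

def scale_out_cost(needed_tb: int) -> int:
    """Cost to reach needed_tb by adding commodity nodes."""
    return math.ceil(needed_tb / NODE_TB) * NODE_COST

def scale_up_cost(needed_tb: int) -> int:
    """Cost to reach needed_tb by buying monolithic arrays."""
    return math.ceil(needed_tb / ARRAY_TB) * ARRAY_COST

for tb in (60, 300, 600):
    print(tb, scale_out_cost(tb), scale_up_cost(tb))
```

The crossover depends entirely on the (assumed) unit economics, but the shape of the curves is the point: scale-out tracks demand closely, while scale-up front-loads large capacity purchases.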

Big clusters are appealing because they support large-scale convergence and consolidation initiatives that help optimize overall CapEx. So why haven’t we always used cluster designs for everyday IT infrastructure? Large cluster management and operations are quite complex, especially when you start mixing workloads and tenants. If you build a big cluster, you’ll want to make sure it gets used effectively, and that usually means hosting multiple workloads. As soon as that happens, IT has trouble figuring out how to prioritize or share resources fairly. This has never been easy — the total OpEx in implementing, provisioning, and optimally managing shared clustered architectures is often higher than just deploying fully contained and individually assigned scale-up products.

…(read the complete as-published article there)

Container technology’s role in storage

An IT industry analyst article published by SearchServerVirtualization.


Could containers dethrone virtual machines as the next-generation compute architecture? I've heard many industry folks say that containers are moving into real deployments faster than almost any previous technology, driven as much by application developers, DevOps and business-side folks looking for agility as by IT's need for efficiency and scale.

Containers were one of the hottest topics at VMworld 2015. VMware clearly sees a near-term mash-up of virtual machines and containers coming quickly to corporate data centers. And IT organizations still need to uphold security and data management requirements — even with containerized applications. VMware has done a bang-up job of delivering that on the VM side, and now it’s weighed in with designs that extend its virtualization and cloud management solutions to support (and, we think, ultimately assimilate) enterprise containerization projects.

VMware’s new vSphere Integrated Containers (VICs) make managing and securing containers, which in this case run nested in virtual machines (called “virtual container hosts”), pretty much the same as managing and securing traditional VMs. The VICs show up in VMware management tools as first-class IT-managed objects equivalent to VMs, and inherit much of what vSphere offers for virtual machine management, including robust security. This makes container adoption something every VMware customer can simply slide into.

However, here at Taneja Group we think the real turning point for container adoption will be when containers move beyond being simply stateless compute engines and deal directly with persistent data.

…(read the complete as-published article there)