Hyperconverged Supercomputers for the Enterprise Data Center

(Excerpt from original post on the Taneja Group News Blog)

Last month NVIDIA, our favorite GPU vendor, dived into the converged appliance space. In fact, we might call their new NVIDIA DGX-1 a hyperconverged supercomputer in a 4U box. Designed to support the application of GPUs to Deep Learning (i.e., compute-intensive, deeply layered neural networks that need to train and run in operational timeframes over big data), this beast has 8 new Tesla P100 GPUs inside on an embedded NVLink mesh, pre-integrated with flash SSDs, decent memory, and an optimized container-hosting deep learning software stack. The best part? The price is surprisingly affordable, and a single box can replace the 250+ server cluster you might otherwise need for effective Deep Learning.
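To make the workload concrete, here is a minimal PyTorch sketch (my own illustration, not from the original post) of the kind of data-parallel, multi-GPU training a box like the DGX-1 is built for; the model, data, and sizes are placeholders.

```python
# Minimal multi-GPU training sketch (illustrative placeholders throughout).
import torch
import torch.nn as nn

model = nn.Sequential(                 # stand-in for a deep, layered network
    nn.Linear(1024, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 10),
)

device = "cuda" if torch.cuda.is_available() else "cpu"
if torch.cuda.device_count() > 1:
    # Split each batch across all visible GPUs (e.g., the 8 P100s in a DGX-1).
    model = nn.DataParallel(model)
model = model.to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):                        # placeholder training loop
    x = torch.randn(512, 1024, device=device)  # synthetic batch
    y = torch.randint(0, 10, (512,), device=device)
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```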

…(read the full post)

Diamanti Reveals Hyperconverged Scale-out Appliances for Containers

(Excerpt from original post on the Taneja Group News Blog)

Diamanti (known pre-launch as Datawise.io) has recently rolled out its brand new hyperconverged “container” appliances. Why would containers, supposedly able to be fluidly hosted just about anywhere, need a specially built host? Kubernetes et al. might take care of CPU allotment, but there are still big obstacles for naked containers in a production data center, especially as containers are now being lined up to host far more than simple stateless microservices. Their real-world storage and networking needs have to be matched, aligned, and managed, or the whole efficiency opportunity can easily be lost.
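As a rough illustration of what matching those needs looks like, here is a hedged sketch (hypothetical names and sizes, not Diamanti-specific) of a Kubernetes pod manifest that declares the compute and persistent storage a stateful container expects, generated from Python:

```python
# Hypothetical example: a stateful container must declare its real resource
# and storage needs; "naked" containers leave these unmanaged.
import yaml  # PyYAML

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "orders-db"},
    "spec": {
        "containers": [{
            "name": "postgres",
            "image": "postgres:9.6",
            "resources": {  # CPU and memory must be matched to the host
                "requests": {"cpu": "2", "memory": "4Gi"},
                "limits": {"cpu": "4", "memory": "8Gi"},
            },
            "volumeMounts": [{"name": "data",
                              "mountPath": "/var/lib/postgresql/data"}],
        }],
        "volumes": [{  # stateful workloads need real, managed storage
            "name": "data",
            "persistentVolumeClaim": {"claimName": "orders-db-pvc"},
        }],
    },
}

with open("pod.yaml", "w") as f:
    yaml.safe_dump(pod, f, default_flow_style=False)
# Apply with: kubectl apply -f pod.yaml
```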

…(read the full post)

CI and disaggregated server tech can converge after all

An IT industry analyst article published by SearchDataCenter.


I’ve talked about the inevitability of infrastructure convergence, so it might seem like I’m doing a complete 180-degree turn by introducing the opposite trend: infrastructure disaggregation. Despite appearances, disaggregated server technology isn’t really the opposite of convergence. In fact, disaggregated and converged servers work together.

In this new trend, physical IT components come in larger and denser pools for maximum cost efficiency. At the same time, compute-intensive functionality, such as data protection, that was once tightly integrated with the hardware is pulled out and hosted separately to optimize performance and use cheaper components.

Consider today’s cloud architects building hyper-scale infrastructures; instead of buying monolithic building blocks, they choose to pool massive amounts of dense commodity resources.

…(read the complete as-published article there)

Evaluating hyper-converged architectures: Five key CIO considerations

An IT industry analyst article published by SearchCio.


Plain IT convergence offers IT organizations a major convenience — integrated and pre-assembled stacks of heterogeneous vendor infrastructure, including servers, storage and networking gear, that help accelerate new deployments and quickly support fast-growing applications.

But IT hyper-convergence goes further, integrating IT infrastructure into simple modular appliances. Where pre-converged racks of infrastructure can provide good value to enterprises that would otherwise buy and assemble component vendor equipment themselves, hyper-converged architectures present a larger opportunity: they not only simplify IT infrastructure and save on capital expenditures (CAPEX), but also help transform IT staff from internally focused legacy data center operators into increasingly agile, business-facing service providers.

With hyper-converged architectures, IT organizations can shift focus towards helping accelerate and enable business operations and applications, because they don’t spend as much time on, for example, silo troubleshooting, stack integration and testing, and traditional data protection tasks. The decision to adopt hyper-converged architectures is therefore something that business folks will see and appreciate directly through increased IT agility, cloud-like IT services, realistic BC/DR, and a greatly improved IT cost basis.

…(read the complete as-published article there)

Assimilate converged IT infrastructure into the data center

An IT industry analyst article published by SearchDataCenter.


I feel like the Borg from Star Trek when I proclaim that “IT convergence is inevitable.”

Converged IT infrastructure, the tight vendor integration of multiple IT resources like servers and storage, is a good thing and a mark of forward progress; resistance to convergence is futile. Convergence is a great way to simplify and automate the complexities between two (or more) maturing domains and to drive cost efficiencies, reliability improvements, and agility. As the operations and management issues for any set of resources become well understood, new solutions naturally evolve that internally converge them into a more unified, integrated single resource. Converged solutions are faster to deploy, simpler to manage, and easier for vendors to support.

Some resistance to convergence does happen within some IT organizations. Siloed staff might suffer, since convergence threatens domain subject matter experts by embedding their fiefdoms inside larger realms. That’s not the first time that has happened, and there is always room for experts to dive deep under the covers and work through layers of complexity when things inevitably go wrong. That makes for more impactful and satisfying jobs. And let’s be honest: converged IT is far less threatening than the public cloud.

…(read the complete as-published article there)

Hyperconvergence for ROBOs and the Datacenter

An IT industry analyst article published by Virtualization Review.


Convergence is a happy word to a lot of busy IT folks working long hours still standing up large, complex stacks of infrastructure (despite having virtualized their legacy server sprawl), to say nothing of deploying and managing mini data centers across tens, hundreds, or even thousands of remote or branch offices (ROBOs).

Most virtualized IT shops need to run lean and mean, and many find it challenging to integrate and operate all the real equipment that goes into the main data center: hypervisors, compute clusters, SANs, storage arrays, IP networks, load balancers, WAN optimizers, cloud gateways, backup devices and more. From a logical perspective, when you multiply the number of heterogeneous components by the number of remote locations, the “scale” of IT to manage climbs very fast. And if you factor in the possible interactions among all of those pieces across all of those locations, the challenge of managing at scale grows non-linearly.
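Here is a rough back-of-the-envelope sketch of that math, using made-up component and site counts rather than figures from the article:

```python
# Back-of-the-envelope illustration (made-up numbers): managed items grow as
# components x sites, while the pairwise interactions among all of those
# items grow roughly with the square of that count.
from math import comb

component_types = 9  # hypervisors, SANs, arrays, WAN optimizers, etc.
for sites in (1, 10, 100, 1000):
    managed_items = component_types * sites
    interaction_paths = comb(managed_items, 2)  # possible pairwise interactions
    print(f"{sites:>5} sites: {managed_items:>6} managed items, "
          f"{interaction_paths:>12,} potential interaction paths")
```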

…(read the complete as-published article there)

Make this your most modern IT year yet

An IT industry analyst article published by SearchDataCenter.

No one sticks with their New Year’s resolution — where is that gym card anyway? — but you can pick up seven modern habits and feel good about fit, healthy IT ops.


Industry analysts like to make outlandish predictions at the start of every new year. But since I already focus on predicting the next big thing, I’ve decided to adopt another New Year’s tradition: making resolutions. I have a problem with that too, though: I only have a few bad habits, and I’m intent on keeping them.

Instead, I’m going to suggest some resolutions organizations can make to adopt modern IT:

  1. Do something predictive with big data. Don’t even worry about the big data part if “big” doesn’t come naturally to the top of your pile of opportunities; you can always grow into bigger data efforts. But do look for a starter project that leverages the power of prediction. Commit to a project that embeds predictive algorithms or machine learning so you get accustomed to what it is, what it can and can’t do, and how to approach it profitably. Some areas to consider: exploring inherent clusters in your customer or client base (see the starter sketch after this list), estimating which clients or transactions will succeed or fail, or identifying the most likely root causes of support issues.
  2. Reduce opex through hyperconvergence. Converged infrastructure is clearly a natural evolution under the ever-present pressure to reduce total cost of ownership. Hyperconvergence, which bakes server, storage and networking resources into a single scale-out building block of data center infrastructure, takes this process to the extreme. While it might not solve every problem, there is no doubt that a large portion of do-it-yourself data center architecture could profitably migrate onto hyperconverged platforms. If you aren’t ready to completely convert, at least resolve to evaluate hyperconverged solutions for new projects. And if that is too big a leap, at least deploy some software-defined storage this year to get comfortable with this potential “new order” for modern IT.
  3. Accelerate your infrastructure. Several acceleration technologies, like caching and in-memory processing, can easily drop into an IT environment with little effort, cost or risk. Although downstream users may complain about poor performance, they rarely ask whether acceptable service could be accelerated 10x or 100x or more. Resolve to improve on the “satisfactory,” not just oil the squeaky wheels, because acceleration technologies can spur noticeable improvements in quality of service. They can also lead to huge productivity gains for many, if not all, applications, to the point of creating competitive differentiation or even new sources of revenue.
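For resolution 1, here is a minimal, hypothetical starter sketch using scikit-learn’s k-means to look for inherent clusters in a customer base; the features and data are placeholders, not anything from the article.

```python
# Hypothetical starter sketch: segment customers with k-means clustering.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Placeholder customer features: [annual_spend, support_tickets, tenure_years]
rng = np.random.default_rng(0)
customers = rng.normal(loc=[5000, 3, 4], scale=[2000, 2, 2], size=(500, 3))

X = StandardScaler().fit_transform(customers)  # put features on a common scale
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

for label in range(4):
    segment = customers[kmeans.labels_ == label]
    print(f"segment {label}: {len(segment):>3} customers, "
          f"avg spend ${segment[:, 0].mean():,.0f}")
```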

…(for four more resolutions read the complete as-published article there)

5 Ways Storage Is Evolving

An IT industry analyst article published by Virtualization Review.

Be sure to take advantage of these storage industry trends.

With its acquisition of Virsto, VMware certainly understands that storage as usual doesn’t cut it when it comes to dense, high-powered virtual environments. This technology addresses the so-called “I/O blender” effect that comes from mixing the I/O of many VMs into one stream on its way to external shared storage. It does this by journaling what looks like highly random I/O to flash, then asynchronously sorting it out to hard disk. This is more an optimization, though, than a game-changing storage strategy.
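For intuition, here is a deliberately simplified sketch of the journaling idea (my own illustration, not Virsto’s actual design): random VM writes are appended sequentially to a fast log, then destaged in sorted order to the slower backing store.

```python
# Simplified write-journaling illustration (not Virsto's implementation).
import random

journal = []        # sequential append-only log, standing in for flash
backing_store = {}  # offset -> data, standing in for the disk array

def write(vm_id, offset, data):
    # The "I/O blender": many VMs issue writes that interleave randomly,
    # but the journal only ever sees fast sequential appends.
    journal.append((offset, vm_id, data))

def destage():
    # Asynchronously sort journaled writes by offset so the disk sees
    # mostly sequential I/O, then clear the log.
    for offset, vm_id, data in sorted(journal, key=lambda w: w[0]):
        backing_store[offset] = data
    journal.clear()

# Simulate a burst of random writes from several VMs, then destage.
for i in range(20):
    write(vm_id=random.randint(1, 8),
          offset=random.randrange(10_000),
          data=f"blk{i}")
destage()
print(f"{len(backing_store)} blocks laid out in offset order on disk")
```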

Here are five broad trends in the storage industry that you can take advantage of today.

  • TAKE 1 Flash
    Flash has certainly changed the storage game. There are many ways it’s applied: at the server (such as PCIe cards from Fusion-io and EMC), in the network (such as Astute), or in the array (such as pure flash and hybrid storage from just about everybody). To make the most of your flash investment, keep an eye on where higher performance will have the biggest impact and which applications are best suited to it.
  • TAKE 2 Hyperconvergence
    We’ve all seen pre-packaged “converged” racks of servers, storage, networking, and hypervisor platforms from vendors such as VCE, Dell, and HP. These can be great deals if you want a single source and low risk when building a virtual environment. However, the storage isn’t necessarily different from what you’d get if you built it yourself. In some ways, running a virtual storage appliance is a type of convergence that architecturally shifts the burden of hosting storage directly onto your hypervisors. Taking things a step further are hyperconvergence vendors like SimpliVity, Nutanix and Scale Computing. These collapse compute, storage and hypervisor into modular building blocks that make scaling out a data center as easy as stacking Legos. Purpose-built storage services are tightly integrated and support optimized, highly cost-efficient VM operations.
  • TAKE 3 VM Centricity


…(read the complete as-published article there)

Converged Infrastructure, or ‘Where Did All the Silos Go?’

An IT industry analyst article published by Virtualization Review.

Quite a few companies are mashing up server, storage, networking and even the hypervisor into turnkey solutions that can scale up or down as data center needs dictate.

Once upon a time as IT shops grew and matured, infrastructure subgroups would form to focus on complex domain-specific technologies. Servers, storage and networking all required deep subject matter expertise and a single-minded focus to keep up with the varying intricacies of implementation, operations and management. In large enterprises, fully staffed silos working in concert could leverage mountains of technology to great effect. But inevitably, turf battles, budget tightening and the fact that smaller organizations might not reach critical mass can make the silo approach costly and inefficient.

Virtualization solutions at first helped the silo model by abstracting the idealized IT presented to users away from its physical implementation. Increasingly independent silos of underlying infrastructure could then be designed and managed very differently from what the end client sees, and hopefully optimally. And in fact, virtualization became its own new IT domain, adding yet another layer of IT silo complexity.

But here at Taneja Group, we’ve noted several trends that are coming together to break down the traditional IT silo model. The virtual admin originally in charge of the hypervisor and server cluster is now on the verge of subsuming storage and networking too. New generations of “software defined” and cloud-provisionable technologies enable virtual admins to dynamically allocate increasingly enterprise-class resources to clients. And on the infrastructure side, converged infrastructure solutions make the physical implementation as simple as snapping Lego-like building blocks together.

For a while now, there have been bundled solutions that pre-package infrastructure into nice racks. Buying IT by the pallet can be attractive in many growth or transformation scenarios, but at the end of the day these bundles are still composed of racks of traditional enterprise infrastructure. In most cases, these solutions are adopted by customers that probably have the silo expertise to build their own, but are looking for a cost-effective shortcut.

What we are really excited about are the new “hyper-converged” infrastructure solutions that are designed from the ground up as scale-out units of IT. Server, storage, networking, and even the hypervisor are integrated into a single rackable unit. Deployment and growth are handled simply by racking and stacking more identical (or similar, as needed) units. They plug together to re-pool storage, cluster the servers and share key resources like flash. IT no longer needs deep silo staffing to deploy and operate enterprise-quality solutions.

…(read the complete as-published article there)