What’s a Multi-cloud, Really? Some Insider Notes from VMworld 2017

(Excerpt from original post on the Taneja Group News Blog)

With comfortable 65-70 degree weather blanketing New England as we near the end of summer, flying into Las Vegas for VMworld at 110 degrees seemed like dropping into hell. The last time I was in that kind of heat I was stepping off a C-130 into the Desert Shield/Desert Storm theater of operations. At least here, as everyone still able to breathe immediately says, “at least it’s a dry heat.”

…(read the full post)

Five VM-Level Infrastructure Adaptations — Virtualization Review

An IT industry analyst article published by Virtualization Review.

Infrastructure is evolving for the better, making the job of the admin easier in the long run. Here are five ways it’s evolving to work at the VM level.

It used to be that IT struggled to intimately understand every app in order to provide the right supporting infrastructure. Today, server virtualization makes the job much easier, because IT can now just cater to VMs. By working and communicating at the VM level, both app owners and infrastructure admins stay focused, using a common API to help ensure apps are hosted effectively and IT runs efficiently.

But the virtual admin still has to translate what each VM requires, going beyond direct-server resources into the specialized domains of other IT infrastructure silos. While silos have traditionally pooled rare expertise to optimize expensive resources, in today’s virtualized world, silos seem to offer more friction than leverage. Here are five ways infrastructure is evolving to work at the VM level.

TAKE 1 VM-Centric Storage

…(read the complete as-published article there)

Virtualizing Hadoop Impacts Big Data Storage

An IT industry analyst article published by Enterprise Storage Forum.

by Mike Matchett, Sr. Analyst, Taneja Group

Hadoop is soon coming to enterprise IT in a big way. VMware’s new vSphere Big Data Extensions (BDE) commercializes its open source Project Serengeti to make it dead easy for enterprise admins to spin virtual Hadoop clusters up and down at will.

Now that VMware has made it clear that Hadoop is going to be fully supported as a virtualized workload in enterprise vSphere environments, here at Taneja Group we expect a rapid pickup in Hadoop adoption across organizations of all sizes.

However, Hadoop is all about mapping parallel compute jobs intelligently over massive amounts of distributed data. Cluster deployment and operation are becoming very easy for the virtual admin, but in a virtual environment where storage can be effectively abstracted from compute clients, there are important complexities and opportunities to consider when designing the underlying storage architecture. Specific concerns with running Hadoop in a virtual environment include how to configure virtual data nodes, how best to utilize local hypervisor server DAS, and when to leverage external SAN/NAS.

The main idea behind virtualizing Hadoop is to take advantage of deploying Hadoop scale-out nodes as virtual machines instead of as racked commodity physical servers. Clusters can be provisioned on demand and elastically expanded or shrunk. Multiple Hadoop virtual nodes can be hosted on each hypervisor physical server, and as virtual machines they can easily be allocated more or fewer resources for a given application. Hypervisor-level HA/FT capabilities can be brought to bear on production Hadoop apps. VMware’s BDE even includes QoS algorithms that help prioritize clusters dynamically, shrinking lower-priority cluster sizes as necessary to ensure high-priority cluster service.
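
To make that concrete, here is a rough sketch of what separating data and compute node groups could look like in a BDE/Serengeti-style cluster spec. The field names below are illustrative guesses at the schema rather than the documented format, but the design point is real: data nodes pin to local hypervisor DAS, while stateless compute nodes can sit on shared SAN/NAS datastores and be elastically grown or shrunk.

```python
import json

# Illustrative Serengeti/BDE-style cluster spec (field names approximate --
# check the shipping BDE docs for the exact schema). The key idea: separate
# node groups let data nodes hold HDFS blocks on local DAS, while compute
# nodes, which hold no HDFS state, can live on shared datastores and be
# resized elastically without losing data.
cluster_spec = {
    "nodeGroups": [
        {
            "name": "data",
            "roles": ["hadoop_datanode"],
            "instanceNum": 8,
            "storage": {"type": "LOCAL", "sizeGB": 500},   # hypervisor DAS
        },
        {
            "name": "compute",
            "roles": ["hadoop_tasktracker"],
            "instanceNum": 16,                             # elastic pool
            "storage": {"type": "SHARED", "sizeGB": 20},   # SAN/NAS datastore
        },
    ],
}

print(json.dumps(cluster_spec, indent=2))
```

That compute/data split is exactly what makes QoS-driven shrinking safe: dropping a compute node costs you some parallelism but loses no HDFS data.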

…(read the complete as-published article there)

Don’t Miss These VMworld 2013 Sessions

An IT industry analyst article published by Virtualization Review.

With 358 sessions, time is money. Here are five sessions where your time will be well spent.

TAKE 1 Directions in VMware EUC & the Multi-Device, Virtual Workspace (EUC4544)

Virtual desktop infrastructure (VDI) and related end-user computing capabilities are most definitely not dead. In fact, I think the technologies are finally starting to support practical and cost-effective implementations for every size business. VMware Horizon and Mirage likely have some hot things going, and it’s always interesting to see how PCoIP has evolved.

TAKE 2 Designing Your Next-Generation Datacenter for Network Virtualization (NET5184)

Can you spell VXLAN? If you come away from VMworld 2013 with a good understanding of software-defined networking (SDN) and network virtualization, you could be the geek hero of your IT shop.
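
If you want a head start on that spelling test, the entire VXLAN trick fits in an eight-byte header per RFC 7348. Here is a minimal sketch of mine (not from the session) that builds it; the 24-bit VNI is what buys you roughly 16 million virtual segments instead of VLAN’s 4,094.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header per RFC 7348: a flags byte with the
    I-bit set (0x08), 24 reserved bits, the 24-bit VNI, and one more
    reserved byte. The encapsulated Ethernet frame rides inside UDP
    (destination port 4789), which is how a layer-2 segment gets tunneled
    across a routed layer-3 underlay."""
    assert 0 <= vni < 2**24, "the VNI is only 24 bits (~16M segments)"
    return struct.pack("!II", 0x08 << 24, vni << 8)

# Quick sanity check: VNI 5001 lands in the middle three bytes.
print(vxlan_header(5001).hex())  # -> 0800000000138900
```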

…(read the complete as-published article there)

VMware Adds Hadoop into vSphere with Big Data Extensions

(Excerpt from original post on the Taneja Group News Blog)

VMware announced vSphere Big Data Extensions this week, which might at first seem to be just a productization of some open source Hadoop deployment software. But dig in a bit and you can see that the big future of Hadoop might just be virtual hosting, a big shift from its intentionally commodity-server roots. And this puts VMware on top of data center workload trends toward scale-out computing apps and offering everything “as a service”.

…(read the full post)

Hooked with a Non-Linear Curve – VMTurbo’s Economic Approach

(Excerpt from original post on the Taneja Group News Blog)

As a long-time capacity planner, if you show me a non-linear curve with a real model behind it I’ll tend to bite. Predictive analysis alone would have been enough to get my attention, but VMTurbo also talks about optimizing IT from an economics perspective. I spent a lot of years convincing and cajoling folks that capacity planning and infrastructure optimization are basically about investing your money effectively while ensuring the resulting system is efficiently utilized.

It is invigorating to see an experienced team (folks with a SMARTS heritage) approach virtualized IT environments as an economic system with calculable trade-offs and optimizable performance-cost curves. We are told this approach works both for real-time, optimizing operational control and for forward planning exercises.

It does leave me wondering whether virtualized applications and resources are fully rational economic agents. Lacking a true “view” of the physical world, they might obey a virtual kind of irrational “behavioral economics” (e.g. influenced by memory ballooning, virtual clock cycles, virtualized IO…).
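
To show the kind of non-linear curve I mean, here is a toy model of my own, emphatically not VMTurbo’s actual math: price each resource like a commodity whose cost blows up as utilization approaches saturation, so rational VM “buyers” shop for cheaper hosts before congestion bites.

```python
def resource_price(utilization: float, base_cost: float = 1.0) -> float:
    """Price a resource like a commodity: cheap when idle, ruinous near
    saturation. The 1/(1-u)^2 shape echoes classic queueing-delay curves;
    it is a stand-in for whatever model VMTurbo actually uses."""
    if not 0.0 <= utilization < 1.0:
        raise ValueError("utilization must be in [0, 1)")
    return base_cost / (1.0 - utilization) ** 2

# The knee of the curve is the whole story: the same utilization jump
# costs little at low load and a fortune near saturation.
for u in (0.10, 0.50, 0.80, 0.95):
    print(f"{u:.0%} busy -> price {resource_price(u):7.1f}")
```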

In any case, it’s not too early to begin thinking about VMworld 2012 coming up in August. There is so much going on that one needs a hit list of booths to seek out first; high on my list this year is VMTurbo.

…(read the full post)

VMware Expands to Heterogeneous Clouds with DynamicOps Acquisition

(Excerpt from original post on the Taneja Group News Blog)

We’ve been fans of DynamicOps for the simple reason that they can take whatever virtual and physical IT infrastructure organizations have already deployed and turn it all into a full-up private cloud. They seem to work with almost every existing systems management solution too, providing just the right higher-level bits needed to bring existing orchestration, configuration and systems management into a holistic cloud delivery architecture.

By bringing DynamicOps into its portfolio, VMware may be single-handedly causing a watershed moment for private cloud adoption — if not a full-blown IT cloud revolution. We may look back on this acquisition and count it no less significant than VMware’s server virtualization itself. At a minimum, by enabling private/hybrid clouds over existing infrastructure everywhere, VMware can expect associated vSphere adoption to ramp up while increasing VMware’s relevance to the complete end-to-end IT enterprise.

As a smaller company, DynamicOps may have faced an uphill battle with larger enterprises to convince them they could sit at the top of the stack “of everything” and really deliver. But as a VMware solution that obstacle is removed. Watch out world, IT is going to get cloudy!

…(read the full post)

Quest VKernel vOPS Adds Intelligent Remediation and Planned Provisioning

(Excerpt from original post on the Taneja Group News Blog)

A great recent trend in virtualization management is to intelligently integrate “analytical” functions like capacity planning with active operational processes like remediation and provisioning. Each individual management activity has had its challenges with virtualization – capacity planning has had to learn to pierce through layers of abstraction to piece together the actual infrastructure components in play, while doing anything operationally smart requires a thorough grasp of the dynamics built into the virtual management layers (e.g. VMware DRS and Storage vMotion). But as these individual management capabilities mature, the next level of value comes in leveraging them together to make smarter, more automated environments.

When Quest acquired VKernel to augment and extend its (v)Foglight solutions, it was probably thinking about this higher level of intelligent automation in the virtualization space. After all, virtual admins have quite a lot on their plate, and as more and more mission-critical apps virtualize, multi-tool management operations become onerous and error-prone. For example, the latest vOPS helps its admin users see historical configuration changes on a timeline against performance metrics, review a ranked list of changes by potential risk, and revert or roll back each change if desired. Compare this to the latest vCenter Enterprise edition, which also enables charting and rollback of configuration changes, but at a higher price and without the risk evaluation. VKernel’s vOPS also has an existing one-click feature that can add automatically identified critical resources to constrained VMs (e.g. an “add a CPU” button shows up when a VM is compute constrained) to accelerate remediation in support of tight SLAs.
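
Conceptually (and with hypothetical names throughout, not VKernel’s actual API), that change-tracking capability boils down to logging configuration changes with a risk score, ranking them for triage, and supporting rollback of any one:

```python
# A conceptual sketch of risk-ranked change tracking with rollback.
# All names here are invented for illustration, not VKernel's interface.
from dataclasses import dataclass, field

@dataclass
class ConfigChange:
    timestamp: str
    setting: str
    old_value: str
    new_value: str
    risk_score: float  # e.g. weighted by blast radius and perf deltas

@dataclass
class ChangeLog:
    changes: list = field(default_factory=list)

    def ranked_by_risk(self) -> list:
        """The triage view: most dangerous changes first."""
        return sorted(self.changes, key=lambda c: c.risk_score, reverse=True)

    def rollback(self, change: ConfigChange) -> ConfigChange:
        """Revert one change by recording its inverse."""
        undo = ConfigChange(change.timestamp + "+undo", change.setting,
                            change.new_value, change.old_value, 0.0)
        self.changes.append(undo)
        return undo

log = ChangeLog()
log.changes.append(ConfigChange("2012-03-01T09:00", "vm.memoryMB", "4096", "2048", 0.9))
log.changes.append(ConfigChange("2012-03-01T10:30", "vm.cpuCount", "2", "4", 0.3))
worst = log.ranked_by_risk()[0]
log.rollback(worst)  # restores memoryMB to 4096
```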

On the planning side, vOPS had previously enabled admins to set hard reservations of resources for future VM deployments based on an identified VM template. In large multi-administrator environments this helps ensure the right resources are going to be available on day 0 for new VMs. They’ve now enhanced their active provisioning so that it deploys the right VMs into their specific reservations in one “atomic” step, avoiding having to either first manually release the reservations or temporarily over-subscribe the system. Remember that virtual systems are dynamic, so releasing reservations manually ahead of deployment can cause other things to inefficiently “shift” around, and manually keeping track of reservations mapped to deployments is likely to lead to orphaned reservations floating around. You definitely don’t want reservation “leakage” to add to your VM sprawl problems!
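
Here is a minimal sketch of why that “atomic” step matters, again with hypothetical names rather than vOPS’s actual interface: if deployment consumes its reservation in a single operation, capacity is never double-counted and no reservation is left orphaned.

```python
# Toy model of reserve-then-deploy-atomically. Invented names throughout;
# this illustrates the concept, not any vendor's API.
class Cluster:
    def __init__(self, cpu: int, mem_gb: int):
        self.free = {"cpu": cpu, "mem_gb": mem_gb}
        self.reservations = {}

    def reserve(self, name: str, cpu: int, mem_gb: int) -> None:
        """Set aside capacity for a planned VM (the day-0 guarantee)."""
        if cpu > self.free["cpu"] or mem_gb > self.free["mem_gb"]:
            raise RuntimeError("not enough headroom to reserve")
        self.free["cpu"] -= cpu
        self.free["mem_gb"] -= mem_gb
        self.reservations[name] = {"cpu": cpu, "mem_gb": mem_gb}

    def deploy(self, name: str) -> dict:
        """Deploy INTO the reservation in one step: the reservation is
        consumed atomically -- never released first (no shifting), never
        left behind afterward (no leakage)."""
        return self.reservations.pop(name)  # KeyError if never reserved

cluster = Cluster(cpu=32, mem_gb=256)
cluster.reserve("web-tier", cpu=8, mem_gb=64)
vm_resources = cluster.deploy("web-tier")  # no manual release, no orphan
```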

Note that the virtual admin is still in the loop on these operations tasks, but the upfront analytical “expertise” is getting baked in. Fully automated remediation and performance-based provisioning are still in our future, but we suspect those capabilities are eventually going to become the ultimate definition and real value of “private cloud.”

…(read the full post)