Quest VKernel vOPS Adds Intelligent Remediation and Planned Provisioning

(Excerpt from original post on the Taneja Group News Blog)

A great recent trend in virtualization management is to intelligently integrate “analytical” functions like capacity planning with active operational processes like remediation and provisioning. Each individual management activity has had its challenges with virtualization – capacity planning has had to learn to pierce through layers of abstraction to piece together the actual infrastructure components in play, while doing anything smart operationally requires a thorough grasp of the dynamics built into the virtual management layers (e.g. VMware DRS and Storage vMotion). But as these individual management capabilities mature, the next level of value comes from leveraging them together to make smarter, more automated environments.

When Quest acquired VKernel to augment and extend its (v)Foglight solutions, it was probably thinking about this higher level of intelligent automation in the virtualization space. After all, virtual admins have quite a lot on their plate, and as more and more mission-critical apps virtualize, multi-tool management operations become onerous and error-prone. For example, the latest vOps helps its admin users see historical configuration changes on a timeline against performance metrics, review a ranked list of changes by potential risk, and revert or roll back each change if desired. Compare this to the latest vCenter Enterprise edition, which also enables charting and rollback of configuration changes, but at a higher price and without the risk evaluation. VKernel’s vOps also has an existing one-click feature that can add automatically identified critical resources to constrained VMs (e.g. an “add a CPU” button appears when a VM is compute-constrained) to accelerate remediation in support of tight SLAs.
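To make the change-timeline idea concrete, here is a minimal sketch of risk-ranked configuration tracking with per-change rollback. This is not vOps’s actual implementation – all class and field names here (`ConfigChange`, `ChangeTimeline`, the integer `risk` score) are hypothetical stand-ins for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class ConfigChange:
    """One recorded configuration change on a VM (hypothetical model)."""
    timestamp: float
    setting: str
    old_value: str
    new_value: str
    risk: int  # higher = riskier, as a risk-ranking engine might score it

@dataclass
class ChangeTimeline:
    """Keeps changes in time order so the riskiest can be reviewed and reverted."""
    changes: list = field(default_factory=list)

    def record(self, change: ConfigChange) -> None:
        self.changes.append(change)

    def ranked_by_risk(self) -> list:
        # A ranked list of changes by potential risk, riskiest first.
        return sorted(self.changes, key=lambda c: c.risk, reverse=True)

    def rollback(self, change: ConfigChange, config: dict) -> None:
        # Revert a single change by restoring the value it overwrote.
        config[change.setting] = change.old_value
        self.changes.remove(change)
```

The key design point the post highlights is that the rollback is per-change, so an admin can undo just the riskiest edit without unwinding the whole timeline.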

On the planning side, vOps had previously enabled admins to set hard reservations of resources for future VM deployments based on an identified VM template. In large, multiple-administrator environments this helps ensure the right resources will be available on day 0 for new VMs. They’ve now enhanced their active provisioning so that it deploys the right VMs into their specific reservations in one “atomic” step, avoiding having to either manually release the reservations first or temporarily over-subscribe the system. Remember that virtual systems are dynamic, so releasing reservations manually ahead of deployment can cause other things to inefficiently “shift” around, and manually keeping track of which reservations map to which deployments is likely to leave orphaned reservations floating about. You definitely don’t want reservation “leakage” to add to your VM sprawl problems!

Note that the virtual admin is still in the loop on these operational tasks, but the upfront analytical “expertise” is getting baked in. Fully automated remediation and performance-based provisioning are still in our future, but we suspect those capabilities will eventually become the ultimate definition and real value of “private cloud.”

…(read the full post)

Managing It All By Reflex – Virtualization Performance, Configuration, AND Security

(Excerpt from original post on the Taneja Group News Blog)

We all want to virtualize deeper into our application portfolios, but those darn mission-critical applications are tough to break loose from their rock-solid physical infrastructure. One big problem is that we have long-established, mature and trusted IT management in the physical realm that’s hard to simply replicate in the virtual world. Who knew that being so good at something would become such a problem?

As we journey down the virtualization management maturity path, it seems the common approach is to just continue layering on increments of management capability. It’s as if the virtual server farm beast is alive and growing organically. There are at least two things with many-layered beasts that we have a right to be suspicious about – increasing complexity and spiraling management cost.  

Reflex Systems is claiming that they have a better way with their fully integrated virtualization management approach. It may be an uphill battle to convince skeptics that they can sit in the middle of all management and replace a raft of proven best-of-breed solutions, but Reflex does have a refreshing architecture intentionally designed from the ground up to support integrated management. Their single “platform” solution is designed to support the total management lifecycle of performance monitoring, capacity planning, provisioning and configuration management, and policy-based security, all extensible with open APIs both in and out.

A question I often hear from IT virtualizers is “what is a private cloud compared to what I’m already doing, really?”  I think Reflex provides a big clue here in their integration and implicit optimization across multiple IT management disciplines.

…(read the full post)

When the Light’s Red, Whose Throat Do I Choke?

(Excerpt from original post on the Taneja Group News Blog)

Application performance management (APM) solutions get at the system-wide nature of application performance issues. They follow important parts of an application transaction across IT domains, stitch together a cohesive mapping and summation of the application’s journey across IT infrastructure, and generate an alerting drill-down when application performance slows.

Sounds great, but the problem is that these solutions are often really meant for developers who can implement and understand deep app instrumentation. Otherwise they require significant transaction profiling, both upfront and as applications and systems change. Neither approach is really suitable for dynamic production environments: production is hardly the place to install developer-level debugging tools, and given the size and scope of today’s data centers, few applications can justify the cost and effort of continually maintaining hardcoded transaction profiles (excepting perhaps trading applications in the exchanges).

BlueStripe’s FactFinder has broken away from those approaches and figured out that their real APM value is to IT operators. IT ops folks just aren’t up for instrumenting apps, modifying operating systems, or maintaining transaction profiles. They just want to know whose throat to choke when the light goes red on performance – i.e. which level 2 admin in which domain should they call? FactFinder v6 adds Java JVMs, WebSphere, and WebLogic to its automatic application transaction discovery, mapping, and analysis. With these additions BlueStripe provides fairly broad coverage of application components inside an enterprise data center, which might obviate the need for the other types of APM tools in production. IT operators should take a good look now that BlueStripe is aiming squarely at their production APM needs.

…(read the full post)

Excuse me, but I think your cache is showing…

(Excerpt from original post on the Taneja Group News Blog)

Everybody these days is adding flash-based SSD to their storage arrays.  Some are offering all flash storage for ultra-high performance.  And a few are popping flash storage right into the server as a very large, persistent cache.  But taking advantage of flash in these ways requires either hardware refresh or significant service disruption – or both.

GridIron offers a drop-in, non-disruptive way to immediately super-charge existing infrastructure. Their TurboCharger appliances logically plug into the middle of the SAN fabric, where they can be installed (and removed) non-disruptively by taking advantage of I/O multi-pathing. Once installed, they jump into the data path as a virtual LUN fronting the real LUN on the back end, providing a massive amount of SSD write-through cache that automatically adjusts to multiple workloads. Because it’s in the SAN, TurboCharger can virtually “front” any underlying storage – even storage that is in turn further virtualized.
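The write-through design is what makes non-disruptive removal possible, and the idea is worth spelling out. Here is a minimal sketch of generic write-through caching, with a Python dict standing in for the back-end LUN – purely an illustration of the caching semantics, not GridIron’s implementation:

```python
class WriteThroughCache:
    """Sketch of write-through caching: reads are served from cache when
    possible, but every write also lands on backing storage immediately.
    The backing store is therefore always current, so the cache layer can
    be pulled out at any time without losing data."""

    def __init__(self, backing: dict):
        self.backing = backing  # stands in for the real back-end LUN
        self.cache = {}         # stands in for the flash tier

    def read(self, block: int):
        if block in self.cache:       # cache hit: served from "flash"
            return self.cache[block]
        data = self.backing[block]    # cache miss: fetch and populate
        self.cache[block] = data
        return data

    def write(self, block: int, data) -> None:
        self.backing[block] = data    # write-through: back end stays current
        self.cache[block] = data
```

Contrast this with a write-back cache, where writes are acknowledged from the cache and flushed later: that design would make the appliance a stateful point of failure and preclude the casual install-and-remove behavior described above.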

GridIron customers have generally faced serious data-access challenges with large databases and in consolidated and virtualized environments that benefit from read-intensive I/O acceleration. GridIron is now expanding its product line to help accelerate structured and unstructured “big data” access. The OneAppliance all-flash product line includes the FlashCube, for offloading write-intensive temp, log, and scratch-space workloads, and the iNode, which combines massive flash and compute for building high-performance compute clusters.

GridIron is clearly differentiating itself from other flash solutions with its direct and practical approach to bringing the power of flash to bear on the extreme data access and movement problems of big data.

…(read the full post)