Accelerating Persistent VDI with RAM and Replication

(Excerpt from original post on the Taneja Group News Blog)

Atlantis Computing has won a bunch of awards for its ILIO Diskless VDI solution, in which all the IO for stateless (i.e., transient/temporary) desktops is serviced completely out of de-duped server memory (RAM). This eliminates the IO bottleneck, increases density, lowers storage costs, and gives quite a performance boost.

But until now it has left persistent VDI, estimated at 70% of the market, still struggling with IO bottlenecks and expensive storage solutions. Today Atlantis revealed its next-generation solution for persistent VDI, which involves a new fast replication capability. Atlantis has added a new “Replication Host” that keeps a consistent copy of what is in each cluster server’s ILIO RAM down in persistent SAN or NAS storage. Desktops are persisted, and cluster servers and even the replication hosts can fail over quickly to another host.

This means Atlantis ILIO is now an optimizing solution for persistent VDI. Desktops will still mainly be serviced out of RAM, but on the back end new data is permanently written and protected on disk. Note that the disk storage in this case may not need expensive SSD (although it certainly can use it), and it only needs one 10GbE connection to the replication host, while each server effectively “fast replicates” over cost-effective 1GbE links.
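As a rough illustration of that write path, here is a minimal Python sketch. This is a toy model under my own assumptions, with hypothetical names throughout, not Atlantis code: writes are acknowledged from deduplicated server RAM, then asynchronously shipped to a replication host that persists them on shared storage.

```python
import hashlib
import queue
import threading

class RamPrimaryTier:
    """Toy model of a RAM-primary write path with async replication.
    Hypothetical sketch only -- not Atlantis's actual implementation."""

    def __init__(self, replication_host):
        self.ram = {}                 # block address -> dedupe key
        self.dedupe_store = {}        # dedupe key -> data (one copy per unique block)
        self.replication_host = replication_host
        self._outbox = queue.Queue()  # stands in for the 1GbE replication link
        threading.Thread(target=self._replicate_forever, daemon=True).start()

    def write(self, block_addr, data):
        # Dedupe in memory: identical blocks share a single copy.
        key = hashlib.sha256(data).hexdigest()
        self.dedupe_store.setdefault(key, data)
        self.ram[block_addr] = key
        # Ack immediately from RAM; persistence happens asynchronously.
        self._outbox.put((block_addr, data))

    def read(self, block_addr):
        # Reads are served entirely from server memory.
        return self.dedupe_store[self.ram[block_addr]]

    def _replicate_forever(self):
        while True:
            block_addr, data = self._outbox.get()
            self.replication_host.persist(block_addr, data)

class ReplicationHost:
    """Keeps a consistent copy of each server's RAM tier on SAN/NAS."""
    def __init__(self):
        self.persistent_store = {}    # stands in for SAN/NAS backing storage

    def persist(self, block_addr, data):
        self.persistent_store[block_addr] = data
```

A real system would batch, order, and crash-protect the replication stream; the point of the sketch is simply that reads and write acknowledgments never wait on the disk tier.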

Virsto, VMware’s recent acquisition, also addresses the VM IO blender effect with under-the-hood intelligence, but it is more of a storage accelerator: IO gets journaled into flash and then drained behind the scenes onto slower spinning disk. Atlantis ILIO is itself primary storage (in RAM) that tiers back into cheaper, capacity-oriented storage. With Atlantis, VDI IO will for the most part be served completely within the server itself.
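For contrast, a journal-and-drain accelerator in the Virsto mold looks roughly like the sketch below. Again, this is a hypothetical, heavily simplified model and not Virsto’s code: random writes are absorbed sequentially by fast media, then a background process drains them to slower capacity disk off the latency path.

```python
import threading
import time

class JournalingAccelerator:
    """Toy journal-and-drain write accelerator (hypothetical sketch)."""

    def __init__(self, disk):
        self.journal = []   # append-only log on fast flash
        self.disk = disk    # dict standing in for slow spinning disk
        self.lock = threading.Lock()
        threading.Thread(target=self._drain_forever, daemon=True).start()

    def write(self, block_addr, data):
        # Turning the VMs' random-write "blender" into a sequential
        # append on flash is the whole trick.
        with self.lock:
            self.journal.append((block_addr, data))

    def _drain_forever(self):
        while True:
            with self.lock:
                batch, self.journal = self.journal, []
            for block_addr, data in batch:
                self.disk[block_addr] = data  # slow, but off the latency path
            time.sleep(0.1)
```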

Atlantis claims they can support 300+ IOPS per desktop, bursting to 5,000, by serving them out of memory. Persistent VDI won’t naturally be as dense as stateless VDI on each server, but Atlantis will no doubt provide a significant improvement over the status quo. And persistent SAN storage requirements will be reduced to an estimated 3 GB per desktop.
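Taken at face value, those figures make back-of-the-envelope sizing straightforward. A quick worked example using the numbers quoted above (the desktop count is an arbitrary assumption of mine):

```python
# Back-of-the-envelope sizing from the figures quoted above.
desktops = 500                   # arbitrary example count

steady_iops_per_desktop = 300    # claimed steady-state figure
burst_iops_per_desktop = 5000    # claimed burst ceiling
san_gb_per_desktop = 3           # estimated persistent SAN footprint

print(f"Steady-state IOPS served from RAM: {desktops * steady_iops_per_desktop:,}")
print(f"Burst ceiling for one desktop:     {burst_iops_per_desktop:,} IOPS")
print(f"Persistent SAN capacity needed:    {desktops * san_gb_per_desktop:,} GB")
# 500 desktops -> 150,000 steady IOPS out of memory,
# but only ~1.5 TB of back-end SAN capacity.
```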

Overall, Atlantis claims ILIO has broken the storage limitations on persistent VDI, changing the cost equation to the point where most enterprises that haven’t yet gone down that road should re-evaluate the benefits of VDI. High density coupled with impressive end-user performance is just what enterprises need out of VDI solutions.

…(read the full post)

Is Hadoop the New Data Center Platform for All Data?

(Excerpt from original post on the Taneja Group News Blog)

This morning we were able to attend EMC Greenplum’s launch of its new Hadoop distro, Pivotal HD. Core to this distro is HAWQ, a new massively parallel processing analytical database built with Hadoop at its heart. I’m not sure I can cover all the implications of this evolution in a short post, but consider that horizontal multi-PB scale-out, business-class interactive performance, and high-end, easily leveraged analytics are now available in one package from a trusted enterprise vendor.

…(read the full post)

Big Data Appliance Wrapped Up for the Enterprise

(Excerpt from original post on the Taneja Group News Blog)

Here we are in Santa Clara eagerly awaiting Strata tomorrow and a slew of new Big Data solutions. Hadoop’s R&D infant years are passing, and it is now of the age where vendors are truly adding value for the enterprise IT shop. Clearly the theme is to wrap up low-level complexities into higher-value solutions. One standout announcement this week is DDN’s hScaler appliance – a monster of a Hadoop machine. You might think that a high-end appliance built from supercomputer-class storage hardware runs completely counter to the point of doing analytics on the cheap commodity infrastructure Hadoop was originally designed for. Yet DDN claims it can get the job done with a much lower TCO than rolling your own from components – and they do all the hard infrastructure work for you.

…(read the full post)

Is Virtualization Stalled On Performance?

An IT industry analyst article published by Virtualization Review.

Virtualization and cloud architectures are driving great efficiency and agility gains across wide swaths of the data center, but they can also make it harder to deliver consistent performance to critical applications. Let’s look at some solutions.

One of the hardest challenges for an IT provider today is to guarantee a specific level of “response-time” performance to applications. Performance is absolutely mission-critical for many business applications, which has often led to expensively over-provisioned and dedicated infrastructures. Unfortunately, broad technology evolutions like virtualization and cloud architectures, which are driving great efficiency and agility gains across wide swaths of the data center, can actually make it harder to deliver consistent performance to critical applications.

For example, solutions like VMware vSphere and Microsoft Hyper-V have been a godsend to overflowing data centers full of under-utilized servers, enabling levels of consolidation so high that they have saved companies money, empowered new paradigms (i.e., cloud), and positively impacted our environment. Yet large virtualization projects tend to stall when it comes time to host performance-sensitive applications. Currently, the dynamic infrastructures of these x86 server virtualization technologies don’t provide a simple way for applications to allocate a “real performance level” in the same manner as they easily allocate a given capacity of virtual resources (e.g., CPU, disk space). In addition, virtualization solutions can introduce extra challenges by hiding resource contention, sharing resources dynamically, and optimizing for greatest utilization.
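The gap is easy to see in code: reservations are expressed in capacity units, while what the application owner actually wants to allocate is a response-time target. A minimal sketch of that mismatch follows; none of these names come from vSphere or Hyper-V, and the numbers are illustrative.

```python
from dataclasses import dataclass

@dataclass
class VmReservation:
    """What today's platforms let you allocate: capacity, not performance."""
    vcpus: int
    memory_gb: int
    disk_gb: int

@dataclass
class PerformanceSlo:
    """What the application owner actually wants to allocate."""
    p99_latency_ms: float   # the "response-time" target
    min_iops: int

def meets_slo(observed_p99_ms: float, observed_iops: int,
              slo: PerformanceSlo) -> bool:
    # A capacity reservation can be fully satisfied while this still
    # fails: contention hidden by the hypervisor only shows up in
    # observed latency and throughput.
    return observed_p99_ms <= slo.p99_latency_ms and observed_iops >= slo.min_iops

# A VM can hold its full reservation...
vm = VmReservation(vcpus=4, memory_gb=16, disk_gb=200)
slo = PerformanceSlo(p99_latency_ms=20.0, min_iops=1000)
# ...and still miss its SLO under a noisy neighbor:
print(meets_slo(observed_p99_ms=85.0, observed_iops=640, slo=slo))  # False
```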

The good news is that additional performance management can help IT virtualize applications that require guaranteed performance, identify when performance gets out of whack, and rapidly uncover where contention and bottlenecks might be hiding.
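As a concrete example of the “out of whack” step, such tools typically baseline normal latency and flag sustained deviation from it. Here is a minimal sketch of that idea; the window and threshold are my own illustrative assumptions, not any vendor’s actual algorithm.

```python
import statistics

def latency_anomalies(samples_ms, window=20, sigmas=3.0):
    """Flag latency samples that deviate sharply from a rolling baseline.
    Illustrative sketch only, not a vendor's detection algorithm."""
    flagged = []
    for i in range(window, len(samples_ms)):
        baseline = samples_ms[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline) or 1e-9  # guard against zero spread
        if samples_ms[i] > mean + sigmas * stdev:
            flagged.append((i, samples_ms[i]))
    return flagged

# Steady ~10 ms latency with one contention spike at the end:
history = [10.0 + 0.1 * (i % 5) for i in range(40)] + [48.0]
print(latency_anomalies(history))   # -> [(40, 48.0)]
```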

…(read the complete as-published article there)

Cloud has a silver lining for ROBO storage

An IT industry analyst article published by SearchDataBackup.

Providing and managing storage for remote and branch offices can be a challenge, but a hybrid approach using local and cloud-based storage may be the best solution.

Storage managers know that providing great data storage services to remote or branch offices (ROBOs) isn’t simply a matter of replicating a single small-office solution or extending data center storage to each ROBO with a WAN. But some vendors still insist that their traditional storage and data protection products can easily extend to cover ROBO needs, perhaps with just a few add-ons, a third-party product or two, and a bit of custom scripting. What they don’t mention is how quickly costs can climb, how tough management can be, and what to do with users who aren’t happy about compromising performance, accessibility or protection.

But there is hope. I’ve seen a couple of key trends that bode well for ROBO storage. First, cloud-based and cloud-enabled services are providing new opportunities to rethink and redesign storage services for distributed and mobile use cases. ROBOs are by definition distributed, and their users tend to be highly mobile. Second, some vendors are taking advantage of cloud services to build specific products to address ROBO storage challenges.
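The hybrid pattern most of these products share is simple at heart: serve hot data from a small local tier at the branch, and let the cloud hold the authoritative, centrally managed copy. A skeletal sketch of that data path follows; the names are hypothetical and real products layer on caching policy, dedupe, WAN optimization, and snapshots.

```python
import threading

class RoboHybridGateway:
    """Toy model of a ROBO hybrid storage gateway (hypothetical sketch).
    Branch writes ack at local-disk speed; the cloud object store holds
    the authoritative copy managed from the data center."""

    def __init__(self, cloud):
        self.local_cache = {}   # small on-premises tier at the branch
        self.cloud = cloud      # dict standing in for a cloud object store
        self._pending = []
        self._lock = threading.Lock()

    def write(self, name, data):
        # Branch users see local latency on writes...
        self.local_cache[name] = data
        with self._lock:
            self._pending.append(name)

    def read(self, name):
        # ...and reads cross the WAN only on a cache miss.
        if name not in self.local_cache:
            self.local_cache[name] = self.cloud[name]
        return self.local_cache[name]

    def sync(self):
        # Runs periodically over the WAN: protection without branch backup gear.
        with self._lock:
            pending, self._pending = self._pending, []
        for name in pending:
            self.cloud[name] = self.local_cache[name]

cloud_store = {}
gw = RoboHybridGateway(cloud_store)
gw.write("q3-forecast.xlsx", b"...")
gw.sync()
print("q3-forecast.xlsx" in cloud_store)   # True
```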

…(read the complete as-published article there)