Big data and IoT benefit from machine learning, AI apocalypse not imminent

An IT industry analyst article published by SearchITOperations.

Suddenly, everybody is talking about machine learning, AI bots and deep learning. It’s showing up in new products that analyze “call home” data, in cloud-hosted optimization services and even built into new storage arrays!

So what’s really going on? Is this something brand new or just the maturation of ideas spawned from decades-old artificial intelligence research? Does deep learning require conversion to some mystical new church to understand it, or did our computers suddenly get way smarter overnight? Should we sleep with a finger on the power-off button? But most importantly for IT folks, are advances in machine learning becoming accessible enough to readily apply to actual business problems, or is it just another decade of hype?

There are plenty of examples of highly visible machine learning applications in the press recently, both positive and negative. Microsoft’s Tay AI bot, designed to actively learn from 18- to 24-year-olds on Twitter, Kik and GroupMe, unsurprisingly achieved its goal. Within hours of going live, it became a badly behaved young adult, learning and repeating hateful, misogynistic, racist speech. Google’s AlphaGo beat a world champion at the game of Go by learning the best patterns of play from millions of past games, since the game can’t be solved through brute-force computation with all the CPU cycles in the universe. Meanwhile, Google’s self-driving car hit a bus, albeit at low speed. It clearly has more to learn about the way humans drive.

Before diving deeper, let me be clear, I have nothing but awe and respect for recent advances in machine learning. I’ve been directly and indirectly involved in applied AI and predictive modeling in various ways for most of my career. Although my current IT analyst work isn’t yet very computationally informed, there are many people working hard to use computers to automatically identify and predict trends for both fun and profit. Machine learning represents the brightest opportunity to improve life on this planet — today leveraging big data, tomorrow optimizing the Internet of Things (IoT).

Do machines really learn?

First, let’s demystify machine learning a bit. Machine learning is about finding useful patterns inherent in a given historical data set. These patterns usually capture correlations between input values that you can observe and output values that you’d eventually like to predict. Precise definitions vary by textbook, but a model can be a particular algorithm with specific tuned parameters, or whatever it is that comes to “learn” the useful patterns.
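As a rough illustration of that idea, here is a minimal sketch (my own, not code from the article; the data points and the straight-line model are hypothetical) of tuning a simple model’s parameters against historical input/output pairs in Python:

    # Minimal sketch: "learning" here just means tuning a model's parameters so
    # its predictions match a historical data set. Data and model are hypothetical.

    # Observed inputs paired with the outputs we'd eventually like to predict.
    history = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 8.8), (5.0, 11.1)]

    # The "model" is a straight line y = w*x + b with two tunable parameters.
    w, b = 0.0, 0.0

    # Tune the parameters by gradient descent on the squared prediction error.
    learning_rate = 0.01
    for _ in range(5000):
        grad_w = sum(2 * ((w * x + b) - y) * x for x, y in history) / len(history)
        grad_b = sum(2 * ((w * x + b) - y) for x, y in history) / len(history)
        w -= learning_rate * grad_w
        b -= learning_rate * grad_b

    # The tuned model has "learned" the pattern and can predict unseen inputs.
    print(f"learned parameters: w={w:.2f}, b={b:.2f}")
    print(f"prediction for x=6.0: {w * 6.0 + b:.2f}")

On this toy data the tuned line comes out near y = 2x + 1, which is the whole trick: the parameters, not the programmer, encode the pattern.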

There are two broad kinds of machine learning:

…(read the complete as-published article there)

Assimilate converged IT infrastructure into the data center

An IT industry analyst article published by SearchDataCenter.


I feel like the Borg from Star Trek when I proclaim that “IT convergence is inevitable.”

Converged IT infrastructure, the tight vendor integration of multiple IT resources like servers and storage, is a good thing and a mark of forward progress. And resistance to convergence is futile. Convergence is a great way to simplify and automate the complexities between two (or more) maturing domains and drive cost efficiencies, reliability improvements, and agility. As the operations and management issues for any set of resources become well understood, new solutions will naturally evolve that internally converge them into a more unified, integrated single resource. Converged solutions are faster to deploy, simpler to manage, and easier for vendors to support.

Some resistance to convergence does happen within some IT organizations. Siloed staff might suffer, as convergence threatens domain subject matter experts by embedding their fiefdoms inside larger realms. But that’s hardly the first time such a shift has happened, and there is always room for experts to dive deep under the covers and work through the layers of complexity when things inevitably go wrong. That makes for more impactful and satisfying jobs. And let’s be honest: converged IT is far less threatening than the public cloud.

…(read the complete as-published article there)

Scale-out architecture and new data protection capabilities in 2016

An IT industry analyst article published by SearchDataCenter.


January was a time to make obvious predictions and short-lived resolutions. Now is the time for intelligent analysis of the shark-infested waters of high tech. The new year is an auspicious time for new startups to come out of the shadows. But what is just shiny and new, and what will really impact data centers?

From application-focused resource management to scale-out architecture, here are a few emerging trends that will surely impact the data center.

…(read the complete as-published article there)

Will container virtualization be the biggest data center trend of 2016?

An IT industry analyst article published by SearchServerVirtualization.


It’s hard to predict what the biggest thing to hit the data center will be in 2016. Big data? Hyper-convergence? Hybrid cloud? I’ve decided that this is the year that containers will arrive in a big way — much earlier and faster than many expect, catching unprepared IT shops by surprise.

Unlike other technologies, such as big data, that require vision and forward investment, containers are a natural next step for application packaging, deployment and hosting, one that doesn’t require massive shifts in mindset or vision. It’s just quicker and easier to develop and deploy an application in a container than it is to build a virtual appliance. Containerized architectures also have compelling operational and financial benefits: cheaper or free licensing, more efficient use of physical resources, better scalability and, ultimately, better service reliability. Looking ahead, container virtualization will help organizations take better advantage of hybrid or cross-cloud environments.
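To make the “quicker and easier” point concrete, here is a minimal sketch (my own illustration, not from the article) of launching a containerized process programmatically. It assumes a local Docker Engine and the docker Python SDK; the image and command are just examples:

    # Hedged sketch: run an application as a container in a few lines.
    # Assumes a local Docker Engine and the "docker" Python SDK (pip install docker).
    import docker

    client = docker.from_env()  # connect to the local Docker daemon

    # Pull a stock image and run the app as an ordinary process in a container;
    # there is no virtual appliance to build and no guest OS to install.
    output = client.containers.run(
        "alpine:3",                          # example image; any packaged app works
        ["echo", "hello from a container"],  # the containerized process
        remove=True,                         # clean up the container when it exits
    )
    print(output.decode())

Compare that to standing up a virtual appliance: no guest OS image to assemble, patch or boot, just a process with its dependencies packaged alongside it.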

Server virtualization was also a great idea when it first came out, with significant advantages over physical hosting, but it still took many years to mature (remember how long it was before anyone hosted an important database in a VM?). The same has been true for private and hybrid clouds, new storage technologies and even big data. But even though container virtualization is just out of the gate, it has gotten farther down the maturity road by leveraging the roadmap laid out by server virtualization. And you can get a jumpstart by using trusted hypervisor-based offerings like VMware vSphere Integrated Containers to shepherd in containers while the native container world polishes up its rougher edges. Because containers are sleeker and slimmer than VMs (they are essentially just processes), they will slip into the data center even if IT isn’t looking or paying attention (and even if IT doesn’t want them yet).

…(read the complete as-published article there)

Can your cluster management tools pass muster?

An IT industry analyst article published by SearchDataCenter.


A big challenge for IT is managing big clusters effectively, especially with bigger data, larger mashed-up workflows, and the need for more agile operations.

Cluster designs are everywhere these days. Popular examples include software-defined storage, virtual infrastructure, hyper-convergence, public and private clouds, and, of course, big data. Clustering is the scale-out way to architect infrastructure to use commodity resources like servers and JBODs. Scale-out designs can gain capacity and performance incrementally, reaching huge sizes cost-effectively compared to most scale-up infrastructure.

Big clusters are appealing because they support large-scale convergence and consolidation initiatives that help optimize overall CapEx. So why haven’t we always used cluster designs for everyday IT infrastructure? Large cluster management and operations are quite complex, especially when you start mixing workloads and tenants. If you build a big cluster, you’ll want to make sure it gets used effectively, and that usually means hosting multiple workloads. As soon as that happens, IT has trouble figuring out how to prioritize or share resources fairly. This has never been easy: the total OpEx of implementing, provisioning, and optimally managing shared clustered architectures is often higher than that of just deploying fully contained, individually assigned scale-up products.

…(read the complete as-published article there)