Assimilate converged IT infrastructure into the data center

An IT industry analyst article published by SearchDataCenter.


I feel like the Borg from Star Trek when I proclaim that “IT convergence is inevitable.”

Converged IT infrastructure, the tight vendor integration of multiple IT resources like servers and storage, is a good thing, a mark of forward progress. And resistance to convergence is futile. Convergence is a great way to simplify and automate the complexities between two (or more) maturing domains and drive cost-efficiencies, reliability improvements, and agility. As the operations and management issues for any set of resources become well understood, new solutions will naturally evolve that internally converge them into a single, more unified and integrated resource. Converged solutions are faster to deploy, simpler to manage, and easier for vendors to support.

Some resistance to convergence does happen within some IT organizations. Siloed staff might suffer, since convergence threatens domain subject matter experts by embedding their fiefdoms inside larger realms. That’s not the first time that has happened, and there is always room for experts to dive deep under the covers to work through levels of complexity when things inevitably go wrong. That makes for more impactful and satisfying jobs. And let’s be honest: converged IT is far less threatening than the public cloud.

…(read the complete as-published article there)

Scale-out architecture and new data protection capabilities in 2016

An IT industry analyst article published by SearchDataCenter.


January was a time to make obvious predictions and short-lived resolutions. Now is the time for intelligent analysis of the shark-infested waters of high tech. The new year is an auspicious time for new startups to come out of the shadows. But what is just shiny and new, and what will really impact data centers?

From application-focused resource management to scale-out architecture, here are a few emerging trends that will surely impact the data center.

…(read the complete as-published article there)

Will container virtualization be the biggest data center trend of 2016?

An IT industry analyst article published by SearchServerVirtualization.


It’s hard to predict what the biggest thing to hit the data center will be in 2016. Big data? Hyper-convergence? Hybrid cloud? I’ve decided that this is the year that containers will arrive in a big way — much earlier and faster than many expect, catching unprepared IT shops by surprise.

Unlike other technologies like big data that require vision and forward investment, containers are a natural next step for application packaging, deployment and hosting that doesn’t require massive shifts in mindset or vision. It’s just quicker and easier to develop and deploy an application in a container than it is to build a virtual appliance. Containerized architectures also have the compelling operational and financial benefits of cheaper or free licensing, more efficient use of physical resources, better scalability and, ultimately, better service reliability. Looking ahead, container virtualization will help organizations take better advantage of hybrid or cross-cloud environments.
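To make that speed argument concrete, here’s a minimal sketch (mine, not the article’s) of programmatic container deployment using the Docker SDK for Python. It assumes a local Docker daemon is running; the image and command are illustrative:

```python
# A minimal sketch of container deployment speed, using the Docker SDK
# for Python (docker-py). Assumes a local Docker daemon; the image and
# command are illustrative, not from the article.
import docker

client = docker.from_env()

# Pull and run a containerized app in one call -- no VM template,
# no guest OS install, no virtual appliance build step.
output = client.containers.run(
    "python:3.11-slim",                       # stock image from a registry
    ["python", "-c", "print('app is live')"],
    remove=True,                              # clean up the container on exit
)
print(output.decode())
```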

Server virtualization was also a great idea when it first came out, with significant advantages over physical hosting, but it still took many years to mature (remember how long it was before anyone hosted an important database in a VM?). The same has been true for private or hybrid clouds, new storage technologies and even big data. But even though container virtualization is just out of the gate, it has gotten farther down the maturity road by leveraging the roadmap laid out by server virtualization. And you can get a jumpstart by using trusted hypervisor-based offerings like VMware vSphere Integrated Containers to shepherd in containers while the native container world polishes up its rougher edges. Because containers are sleeker and slimmer than VMs (they are essentially just processes), they will slip into the data center even if IT isn’t looking or paying attention (and even if IT doesn’t want them yet).

…(read the complete as-published article there)

Can your cluster management tools pass muster?

An IT industry analyst article published by SearchDataCenter.


A big challenge for IT is managing big clusters effectively, especially with bigger data, larger mashed-up workflows, and the need for more agile operations.

Cluster designs are everywhere these days. Popular examples include software-defined storage, virtual infrastructure, hyper-convergence, public and private clouds, and, of course, big data. Clustering is the scale-out way to architect infrastructure to use commodity resources like servers and JBODs. Scale-out designs can gain capacity and performance incrementally, reaching huge sizes cost-effectively compared to most scale-up infrastructure.

Big clusters are appealing because they support large-scale convergence and consolidation initiatives that help optimize overall CapEx. So why haven’t we always used cluster designs for everyday IT infrastructure? Large cluster management and operations are quite complex, especially when you start mixing workloads and tenants. If you build a big cluster, you’ll want to make sure it gets used effectively, and that usually means hosting multiple workloads. As soon as that happens, IT has trouble figuring out how to prioritize or share resources fairly. This has never been easy — the total OpEx in implementing, provisioning, and optimally managing shared clustered architectures is often higher than just deploying fully contained and individually assigned scale-up products.
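To illustrate why fair sharing is the hard part, here’s a hypothetical sketch of weighted max-min fair allocation, the kind of policy cluster schedulers apply when tenants contend for capacity. The tenant names, weights and demands are invented for this example:

```python
# Hypothetical illustration of weighted max-min fair sharing, the kind
# of policy cluster schedulers use to divide capacity among tenants.
# Tenant names, weights and demands are invented for this example.

def fair_share(capacity, demands, weights):
    """Progressive filling: give each active tenant its weighted share,
    cap anyone at their demand, and redistribute the leftover."""
    alloc = {t: 0.0 for t in demands}
    active = set(demands)
    remaining = capacity
    while active and remaining > 1e-9:
        total_w = sum(weights[t] for t in active)
        for t in list(active):
            share = remaining * weights[t] / total_w
            alloc[t] += min(share, demands[t] - alloc[t])
        remaining = capacity - sum(alloc.values())
        active = {t for t in active if demands[t] - alloc[t] > 1e-9}
    return alloc

demands = {"analytics": 600, "web": 200, "batch": 400}  # cores requested
weights = {"analytics": 2, "web": 1, "batch": 1}        # priority weights
print(fair_share(800, demands, weights))
# -> {'analytics': 400.0, 'web': 200.0, 'batch': 200.0}
```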

…(read the complete as-published article there)

Container technology’s role in storage

An IT industry analyst article published by SearchServerVirtualization.


Could containers dethrone virtual machines as the next-generation compute architecture? I’ve heard many industry folks say that containers are moving into real deployments faster than almost any previous technology, driven by application developers, DevOps and business-side folks looking for agility as much as by IT’s need for efficiency and scale.

Containers were one of the hottest topics at VMworld 2015. VMware clearly sees a near-term mash-up of virtual machines and containers coming quickly to corporate data centers. And IT organizations still need to uphold security and data management requirements — even with containerized applications. VMware has done a bang-up job of delivering that on the VM side, and now it’s weighed in with designs that extend its virtualization and cloud management solutions to support (and, we think, ultimately assimilate) enterprise containerization projects.

VMware’s new vSphere Integrated Containers (VICs) make managing and securing containers, which in this case run nested in virtual machines (called “virtual container hosts”), pretty much the same as managing and securing traditional VMs. The VICs show up in VMware management tools as first-class IT-managed objects equivalent to VMs, and inherit much of what vSphere offers for virtual machine management, including robust security. This makes container adoption something every VMware customer can simply slide into.

However, here at Taneja Group we think the real turning point for container adoption will be when containers move beyond being simply stateless compute engines and deal directly with persistent data.
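As a hedged sketch of what that stateless-to-stateful step looks like in practice (the article doesn’t prescribe tooling), here’s how persistent state is typically attached to a container, again with the Docker SDK for Python. The image and volume name are illustrative:

```python
# A sketch of the stateless-vs-stateful point: containers keep no data
# by default, so persistence means attaching external storage. Uses the
# Docker SDK for Python; image and volume name are illustrative.
import docker

client = docker.from_env()

# Without a volume, anything the container writes dies with it. With
# one, state survives restarts and can follow the workload.
client.containers.run(
    "postgres:16",
    detach=True,
    environment={"POSTGRES_PASSWORD": "example"},
    volumes={"pgdata": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
)
```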

…(read the complete as-published article there)

Memristor technology brings about an analog revolution

An IT industry analyst article published by SearchDataCenter.


We are always driven to try to do smarter things faster. It’s human nature. In our data centers, we layer machine learning algorithms over big and fast data streams to create that special competitive business edge (or greater social benefit!).

Yet for all its processing power, performance and capacity, today’s digital-based computing and storage can’t compare to what goes on inside each of our very own, very analog brains, which vastly outstrip digital architectures by six, seven or even eight orders of magnitude. If we want to compute at biological scales and speeds, we must take advantage of new forms of hardware that transcend the strictly digital.

Many applications of machine learning are based on examining data’s inherent patterns and behavior, and then using that intelligence to classify what we know, predict what comes next, and identify abnormalities. This isn’t terribly different from our own neurons and synapses, which learn from incoming streams of signals, store that learning, and allow it to be used “forward” to make more intelligent decisions (or take actions). In the last 30 years, AI practitioners have built practical neural nets and other types of machine learning algorithms for various applications, but they are all bound today by the limitations of digital scale (an exponentially growing Web of interconnections is but one facet of scale) and speed.

Today’s digital computing infrastructure, based on switching digital bits, faces some big hurdles to keep up with Moore’s Law. Even if there are a couple of magnitudes of improvement yet to be squeezed out of the traditional digital design paradigm, there are inherent limits in power consumption, scale and speed. Whether we’re evolving artificial intelligence into humanoid robots or more practically scaling machine learning to ever-larger big data sets to better target the advertising budget, there simply isn’t enough raw power available to reach biological scale and density with traditional computing infrastructure.

…(read the complete as-published article there)

IT pros get a handle on machine learning and big data

An IT industry analyst article published by SearchDataCenter.


Machine learning is the force behind many big data initiatives. But things can go wrong when implementing it, with significant effects on IT operations.

Unfortunately, predictive modeling can be fraught with peril if you don’t have a firm grasp of the quality and veracity of the input data, the actual business goal and the real world limits of prediction (e.g., you can’t avoid black swans).

It’s also easy for machine learning and big data beginners to either build overly complex models or “overtrain” on the given data (learning too many details of the specific training data that don’t apply generally). In fact, it’s quite hard to really know when you have achieved the smartest yet still “generalized” model to take into production.
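For illustration, here’s a minimal sketch (with synthetic data and a model of my choosing, not the article’s) of how a train/test gap exposes overtraining:

```python
# A minimal sketch of spotting overtraining: compare accuracy on the
# training data against accuracy on held-out data. Synthetic data and
# model choice are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An unconstrained tree memorizes the training set...
deep = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
# ...while a depth-limited tree is forced to generalize.
shallow = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

for name, model in [("unconstrained", deep), ("depth-limited", shallow)]:
    print(name,
          "train:", round(model.score(X_train, y_train), 2),
          "test:", round(model.score(X_test, y_test), 2))
# A large train/test gap is the classic symptom of overtraining.
```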

Another challenge is that the metrics of success vary widely depending on the use case. There are dozens of metrics used to describe the quality and accuracy of the model output on test data. Even as an IT generalist, it pays to at least get comfortable with the matrix of machine learning outcomes (commonly called a confusion matrix), expressed with quadrants for the counts of true positives, true negatives, false positives (items falsely identified as positive) and false negatives (positives that were missed).
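Here’s a short sketch of computing those quadrants with scikit-learn on made-up labels (the tooling is my assumption; the article names none):

```python
# Computing the four outcome quadrants with scikit-learn on made-up labels.
from sklearn.metrics import confusion_matrix

y_actual    = [1, 0, 1, 1, 0, 0, 1, 0]   # ground truth
y_predicted = [1, 0, 0, 1, 0, 1, 1, 0]   # model output

# For binary labels, ravel() unpacks the 2x2 matrix as TN, FP, FN, TP.
tn, fp, fn, tp = confusion_matrix(y_actual, y_predicted).ravel()
print(f"TP={tp} TN={tn} FP={fp} FN={fn}")   # TP=3 TN=3 FP=1 FN=1
```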

…(read the complete as-published article there)

Intro to machine learning algorithms for IT professionals

An IT industry analyst article published by SearchDataCenter.


Our data center machines, due to all the information we feed them, are getting smarter. How can you use machine learning to your advantage?

Machine learning is a key part of how big data brings operational intelligence into our organizations. But while machine learning algorithms are fascinating, the science gets complex very quickly. We can’t all be data scientists, but IT professionals need to learn how our machines are learning.

We are increasingly seeing practical and achievable goals for machine learning, such as finding usable patterns in our data and then making predictions. Often, these predictive models are used in operational processes to optimize an ongoing decision-making process, but they can also provide key insight and information to inform strategic decisions.

The basic premise of machine learning is to train an algorithm to predict an output value within some probabilistic bounds when it is given specific input data. Keep in mind that machine learning techniques today are inductive, not deductive: they lead to probabilistic correlations, not definitive conclusions.
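As a minimal illustration of that premise, here’s a sketch using scikit-learn (my choice, not the article’s) that trains on labeled examples and then predicts with probabilities attached:

```python
# A minimal sketch of the basic premise: fit a model on known
# input/output pairs, then predict with a probability attached.
# The data here is synthetic, purely for illustration.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=4, random_state=1)
model = LogisticRegression().fit(X, y)   # "training" on labeled examples

new_input = X[:1]
print("predicted class:", model.predict(new_input)[0])
print("class probabilities:", model.predict_proba(new_input)[0].round(2))
```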

…(read the complete as-published article there)

Data-aware storage yields insights into business info

An IT industry analyst article published by SearchDataCenter.


Many people think that IT infrastructure is critical, but not something that provides unique differentiation and competitive value. But that’s about to change, as IT starts implementing more “data-aware” storage in the data center.

When business staffers are asked what IT should and could do for them, they can list out confused, contrary and naïve desires that have little to do with infrastructure (assuming minimum service levels are met). As IT shops grow to become service providers to their businesses, they pay more attention to what is actually valuable to the systems they serve. The best IT shops are finding that a closer look at what infrastructure can do “autonomically” yields opportunities to add great value.

Today, IT storage infrastructure is smarter about the data it holds. Big data processing capabilities provide the motivation to investigate formerly disregarded data sets. Technological resources are getting denser and more powerful — converged is the new buzzword across infrastructure layers — and core storage is not only getting much faster with flash and in-memory approaches, but can take advantage of a glut of CPU power to locally perform additional tasks.

Storage-side processing isn’t just for accelerating latency-sensitive financial applications anymore. Thanks to new kinds of metadata analysis, it can help IT create valuable new data services…
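As a loose illustration of the metadata-analysis idea (real data-aware arrays do this inside the storage itself; the paths and thresholds here are invented), consider a sketch that builds a searchable metadata index and one simple data service on top of it:

```python
# A hypothetical sketch of the idea behind data-aware storage: scan the
# data in place and build a searchable metadata index. Paths and
# thresholds are invented for illustration.
import os
import time

def index_metadata(root):
    """Walk a file tree and collect a simple searchable metadata index."""
    index = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            index.append({
                "path": path,
                "ext": os.path.splitext(name)[1],
                "bytes": st.st_size,
                "mtime": st.st_mtime,
            })
    return index

# One possible "data service" built on the index: flag large files
# untouched for a year as candidates for archive tiering.
cutoff = time.time() - 365 * 24 * 3600
stale = [e for e in index_metadata("/data")
         if e["bytes"] > 10**8 and e["mtime"] < cutoff]
```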

…(read the complete as-published article there)

Data lakes swim with golden information for analytics

An IT industry analyst article published by SearchDataCenter.


One of the biggest themes in big data these days is the data lake.

Available data grows by the minute, and useful data comes in many different shapes and levels of structure. Big data (i.e., Hadoop) environments have proven good at batch processing of unstructured data at scale, and useful as an initial landing place to host all kinds of data in low-level or raw form in front of downstream data warehouse and business intelligence (BI) tools. On top of that, Hadoop environments are beginning to develop capabilities for analyzing structured data and for near real-time processing of streaming data.

The data lake concept captures all analytically useful data onto one single infrastructure. From there, we can apply a kind of “schema-on-read” approach using dynamic analytical applications, rather than prebuilt, static extract, transform and load (ETL) processes that feed only highly structured data warehouse views. With clever data lake strategies, we can combine SQL and NoSQL database approaches, and even meld online analytical processing (OLAP) and online transaction processing (OLTP) capabilities. Keeping data in a single, shared location means administrators can better provide and widely share not only the data, but also an optimized infrastructure with (at least theoretically) lower management overhead.
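To make “schema-on-read” concrete, here’s a small hypothetical sketch using pandas: raw, semi-structured events land in the lake untouched, and structure is imposed only when a question is asked. The file name and fields are invented:

```python
# A small sketch of schema-on-read: land raw, semi-structured records
# first, and impose structure only at query time. The file name and
# fields are hypothetical.
import pandas as pd

# Raw events landed in the lake as JSON lines -- no upfront ETL.
events = pd.read_json("landing/clickstream.jsonl", lines=True)

# The "schema" is applied at read time, per analysis: pick the fields
# this question needs, coerce types, and aggregate.
view = (events[["user_id", "ts", "amount"]]
        .assign(ts=lambda df: pd.to_datetime(df["ts"]))
        .groupby("user_id")["amount"].sum())
print(view.head())
```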

The smartest of new big data applications might combine different kinds of analysis over different kinds of data to produce new decision-making information based on operational intelligence. The Hadoop ecosystem isn’t content with just offering super-sized stores of unstructured data, but has evolved quickly to become an all-purpose data platform in the data center.

…(read the complete as-published article there)