Open source strategies bring benefits, but don’t rush in

An IT industry analyst article published by SearchDataCenter.


Before your organization can reap the benefits of open source, it’s important to understand your options and map out a plan that will guarantee success.

Mike Matchett
Small World Big Data

It’s ironic that we spend a lot of money on proprietary databases, business applications and structured business intelligence platforms for “little” data, but we turn to open source platforms for big data analytics. Why not just scale down free, big data open source systems to handle the little data too?

Of course, there are a number of real reasons, including minimizing risk and meeting enterprise-class data management requirements. Cost probably isn’t even the first criterion for most enterprises. Even when it comes to cost, open source doesn’t mean free in any real economic sense. Open source strategies require cutting-edge expertise, professional support and often a buy-up into proprietary enterprise-class feature sets. The truth is, open source platforms don’t necessarily maximize ROI.

Still, open source strategies create attractive opportunities for businesses that want to evolve their aging applications. Many IT investment strategies now include a core principle of preferring open source for new applications. In fact, we’d claim open source now represents the fastest-growing segment of enterprise IT initiatives. In theory, when it comes to developing new ways of doing business, new types of agile and web-scale applications, and new approaches to analyzing today’s ever-bigger data, open source presents innovative opportunities to compete and even disrupt the competition.

But this is much easier said than done. We’ve seen many enterprises fumble with aggressive open source strategies, eventually reverting to tried-and-true proprietary software stacks. So if enterprises aren’t adopting open source because it’s cheaper, and it often lacks enterprise-class features, then why has it become such a popular strategy?

Adopting open source strategies goes hand in hand with an ability to attract top technical talent, Rajnish Verma said at the DataWorks Summit in June, when he was president of big data software vendor Hortonworks. Smart people want to work in an open source environment so they can develop in-demand skills, establish broader relationships outside a single company and potentially contribute back to a larger community — all part of building a personal brand, I suppose.

In other words, organizations adopt open source because that’s what today’s prospective employees want to work on…(read the complete as-published article there)

Spark speeds up adoption of big data clusters and clouds

An IT industry analyst article published by SearchITOperations.


Infrastructure that supports big data comes from both the cloud and clusters. Enterprises can mix and match these seven infrastructure choices to meet their needs.

Mike Matchett

While enterprise IT was slow to support big data analytics in production during Hadoop’s first decade, the ramp-up has been much faster now that Spark is part of the overall package. After all, applying the same old business intelligence approach to broader, bigger data (with MapReduce) isn’t exciting, but producing operational-time predictive intelligence that guides and optimizes business with machine precision is a competitive must-have.

With traditional business intelligence (BI), an analyst studies a lot of data, forms some hypotheses and draws a conclusion to make a recommendation. With the many big data machine learning techniques supported by Spark’s MLlib, a company’s big data can instead dynamically drive operational-speed optimizations. Massive in-memory machine learning algorithms enable businesses to immediately recognize and act on inherent patterns in even big streaming data.
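For a concrete flavor of what that looks like in practice, here is a minimal PySpark sketch of the kind of MLlib-driven pattern detection described above. It is illustrative only; the input path and column names are hypothetical, and a real pipeline would add feature engineering, model validation and streaming hooks.

    from pyspark.sql import SparkSession
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.clustering import KMeans

    spark = SparkSession.builder.appName("mllib-pattern-sketch").getOrCreate()

    # Hypothetical table of recent operational events landed in the data lake.
    events = spark.read.parquet("hdfs:///data/events/recent")

    # MLlib expects a single vector column of numeric features.
    assembled = VectorAssembler(
        inputCols=["latency_ms", "basket_size", "session_length"],  # assumed columns
        outputCol="features").transform(events)

    # Cluster events in memory; the resulting cluster IDs can feed
    # operational decisions (routing, alerting, dynamic pricing) downstream.
    model = KMeans(k=8, seed=42, featuresCol="features").fit(assembled)
    scored = model.transform(assembled)   # adds a "prediction" column
    scored.groupBy("prediction").count().show()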

But the commoditization of machine learning itself isn’t the only new driver here. A decade ago, IT needed to either stand up a “baby” high-performance computing cluster for serious machine learning or learn to write low-level distributed parallel algorithms to run on the commodity-based Hadoop MapReduce platform. Either option required both data science expertise and exceptionally talented IT admins who could stand up and support massive physical scale-out clusters in production. Today there are many infrastructure options for big data clusters that can help IT deploy and support big data-driven applications.

Here are seven types of big data infrastructures for IT to consider, each with core strengths and differences:…(read the complete as-published article there)

Machine learning algorithms make life easier — until they don’t

An IT industry analyst article published by SearchITOperations.


Algorithms govern many facets of our lives. But imperfect logic and data sets can make results worse instead of better, so it behooves all of us to think like data scientists.

Mike Matchett

Algorithms control our lives in many and increasingly mysterious ways. While machine learning algorithms change IT, you might be surprised at the algorithms at work in your nondigital life as well.

When I pull a little numbered ticket at the local deli counter, I know with some certainty that I’ll eventually get served. That’s a queuing algorithm in action — it preserves the expected first-in, first-out ordering of the line. Although wait times vary, it delivers a predictable average latency to all shoppers.

Now compare that to when I buy a ticket for the lottery. I’m taking a big chance on a random-draw algorithm, which is quite unlikely to ever go my way. Winning is not only uncertain, but improbable. Still, for many folks, the purchase of a lottery ticket delivers a temporary emotional salve, so there is some economic utility — as you might have heard in Economics 101.

People can respond well to algorithms that have guaranteed certainty and those with arbitrary randomness in the appropriate situations. But imagine flipping those scenarios. What if your deli only randomly selected people to serve? With enough competing shoppers, you might never get your sliced bologna. What if the lottery just ended up paying everyone back their ticket price minus some administrative tax? Even though this would improve almost everyone’s actual lottery return on investment, that kind of game would be no fun at all.
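To make the contrast concrete, here is a toy Python simulation (my own illustration, not from the article) of a deli line served first-in, first-out versus one that picks the next shopper at random. One shopper is served per step while another arrives, so the line stays crowded throughout.

    import random

    def simulate(policy, crowd=20, steps=5000, seed=1):
        """Serve one shopper per step while a new shopper arrives each step,
        so roughly `crowd` shoppers are always competing for the counter."""
        rng = random.Random(seed)
        arrival = {i: 0 for i in range(crowd)}   # initial crowd, all present at t=0
        queue = list(range(crowd))
        next_id, waits = crowd, {}
        for t in range(steps):
            arrival[next_id] = t                  # one new shopper joins the line
            queue.append(next_id)
            next_id += 1
            i = 0 if policy == "fifo" else rng.randrange(len(queue))
            served = queue.pop(i)                 # serve according to the policy
            waits[served] = t - arrival[served]
        return waits

    for policy in ("fifo", "random"):
        w = simulate(policy)
        print(policy, "average wait:", round(sum(w.values()) / len(w), 1),
              "worst wait:", max(w.values()))

Both policies should show a similar average wait, but the FIFO counter’s worst case stays pinned near the line length, while the random-serve counter’s worst case typically comes out several times larger: the “never get your sliced bologna” effect.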

Without getting deep into psychology or behavioral economics, there are clearly appropriate and inappropriate uses of randomization. When we know we are taking a long-shot chance at a big upside, we might grumble if we lose. But our reactions are different when the department of motor vehicles closes after we’ve already spent four hours waiting.

Now imagine being subjected to opaque algorithms in various important facets of your life, as when applying for a mortgage, a car loan, a job or school admission. Many of the algorithms that govern your fate are seemingly arbitrary. Without transparency, it’s hard to know if any of them are actually fair, much less able to predict your individual prospects. (Consider the fairness concept the next time an airline randomly bumps you from a flight.)

Machine learning algorithms overview — machines learn what?

So let’s consider the supposedly smarter algorithms designed at some organizational level to be fair. Perhaps they’re based on some hard, rational logic leading to an unbiased and random draw, or more likely on some fancy but operationally opaque big data-based machine learning algorithm.

With machine learning, we hope things will be better, but they can also get much worse. In too many cases, poorly trained or designed machine learning algorithms end up making prejudicial decisions that can unfairly affect individuals.
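As a toy illustration (mine, not the article’s) of how that happens mechanically, consider a “model” that simply learns approval rates from past, human-made lending decisions of the kind mentioned above. Nothing in the code is malicious; the skew rides in on the historical labels, which is exactly how a poorly trained algorithm inherits prejudice.

    # Hypothetical historical lending decisions: (neighborhood, qualified, approved).
    historical = [
        ("north", True, True), ("north", True, True), ("north", False, True),
        ("south", True, False), ("south", True, True), ("south", False, False),
    ]

    def learned_score(group):
        """A trivially 'trained' model: the past approval rate for the group."""
        outcomes = [approved for g, _, approved in historical if g == group]
        return sum(outcomes) / len(outcomes)

    # Two equally qualified applicants now get very different scores
    # purely because of where they live.
    for group in ("north", "south"):
        print(group, "approval score:", round(learned_score(group), 2))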

I’m not exaggerating when I predict that machine learning will touch every facet of human existence.

This is a growing — and significant — problem for all of us. Machine learning is influencing a lot of the important decisions made about us and is steering more and more of our economy. It has crept in behind the scenes as so-called secret sauce or as proprietary algorithms applied to key operations.

But with easy-to-use big data machine learning tools like Apache Spark and the increasing streams of data from the internet of things wrapping all around us, I expect that every data-driven task will be optimized with machine learning in some important way…(read the complete as-published article there)

Storage technologies evolve toward a data-processing platform

An IT industry analyst article published by SearchDataCenter.


Emerging technologies such as containers, HCI and big data have blurred the lines between compute and storage platforms, breaking down traditional IT silos.

Mike Matchett

With the rise of software-defined storage, in which storage services are implemented as a software layer, the whole idea of data storage is being re-imagined. And with the resulting increase in the convergence of compute with storage, the difference between a storage platform and a data-processing platform is further eroding.

Storage takes new forms

Let’s look at a few of the ways that storage is driving into new territory:

  • Now in containers! Almost all new storage operating systems, at least under the hood, are being written as containerized applications. In fact, we’ve heard rumors that some traditional storage systems are being converted to containerized form. This has several important implications, including the ability to better handle massive scale-out, increased availability, cloud-deployment friendliness and easier support for converging computation within the storage.
  • Merged and converged. Hyper-convergence bakes software-defined storage into convenient, modular appliance units of infrastructure. Hyper-converged infrastructure products, such as those from Hewlett Packard Enterprise (SimpliVity) and Nutanix, can greatly reduce storage overhead and help build hybrid clouds. We also see innovative approaches merging storage and compute in new ways, using server-side flash (e.g., Datrium), rack-scale infrastructure pooling (e.g., Drivescale) or even integrating ARM processors on each disk drive (e.g., Igneous).
  • Bigger is better. If the rise of big data has taught us anything, it’s that keeping more data around is a prerequisite for having the opportunity to mine value from that data. Big data distributions today combine Hadoop and Spark ecosystems, various flavors of databases and scale-out system management into increasingly general-purpose data-processing platforms, all powered by underlying big data storage tools (e.g., Hadoop Distributed File System, Kudu, Alluxio).
  • Always faster. If big is good, big and fast are even better. We are seeing new kinds of automatically tiered and cached big data storage and data access layer products designed around creating integrated data pipelines. Many of these tools are really converged big data platforms built for analyzing big and streaming data at internet of things (IoT) scales.
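
As one small example of the “big and fast” pattern in that last item, here is a hedged PySpark sketch: the full history lives on scale-out storage such as HDFS, while the hot working set is pinned in Spark’s in-memory cache so repeated analytics passes don’t keep going back to disk. The path and column names are assumptions, not a reference implementation.

    from pyspark.sql import SparkSession
    from pyspark import StorageLevel

    spark = SparkSession.builder.appName("tiered-access-sketch").getOrCreate()

    # Cold tier: full sensor history on the scale-out file system (assumed path).
    history = spark.read.parquet("hdfs:///warehouse/sensor_readings")

    # Hot tier: keep only the recent, frequently queried slice in memory,
    # spilling to local disk if the executors run short of RAM.
    recent = (history
              .where("reading_date >= '2017-01-01'")   # assumed column
              .persist(StorageLevel.MEMORY_AND_DISK))

    # Repeated queries now hit the cached tier instead of rereading HDFS.
    recent.groupBy("device_id").avg("temperature").show()
    recent.groupBy("device_id").count().show()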

The changing fundamentals

Powering many of these examples are interesting shifts in underlying technical capabilities. New data processing platforms are handling more metadata per unit of data than ever before. More metadata leads to new, highly efficient ways to innovate …(read the complete as-published article there)