Big data and IoT benefit from machine learning, AI apocalypse not imminent
So what’s really going on? Is this something brand new, or just the maturation of ideas spawned by decades-old artificial intelligence research? Does understanding deep learning require conversion to some mystical new church, or did our computers suddenly get way smarter overnight? Should we sleep with a finger on the power-off button? Most importantly for IT folks, are advances in machine learning becoming accessible enough to readily apply to actual business problems — or is it just another decade of hype?
There are plenty of examples of highly visible machine learning applications in the press recently, both positive and negative. Microsoft’s Tay AI bot, designed to actively learn from 18- to 24-year-olds on Twitter, Kik, and GroupMe, unsurprisingly achieved its goal. Within hours of going live, it became a badly behaved young adult, both learning and repeating hateful, misogynistic, racist speech. Google’s AlphaGo beat a world champion at the game of Go by learning the best patterns of play from millions of past games — necessary because the game can’t be solved through brute-force computation with all the CPU cycles in the universe. Meanwhile, Google’s self-driving car hit a bus, albeit at slow speed. It clearly has more to learn about the way humans drive.
Before diving deeper, let me be clear: I have nothing but awe and respect for recent advances in machine learning. I’ve been directly and indirectly involved in applied AI and predictive modeling in various ways for most of my career. Although my current IT analyst work isn’t yet very computationally informed, many people are working hard to use computers to automatically identify and predict trends, for both fun and profit. Machine learning represents the brightest opportunity to improve life on this planet — today leveraging big data, tomorrow optimizing the Internet of Things (IoT).
Do machines really learn?
First, let’s demystify machine learning a bit. Machine learning is about finding useful patterns inherent in a given historical data set. These patterns usually identify correlations between input values that you can observe and output values that you’d eventually like to predict. Although precise definitions depend on the textbook, a model can mean a particular algorithm with specific tunable parameters, or the tuned result that has come to “learn” those useful patterns.
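To make that concrete, here is a minimal sketch of the idea in Python. The data set (ad spend versus sales) is entirely hypothetical, and ordinary least squares stands in for whatever algorithm a real project would use: the model is a line with two parameters, and “learning” is simply tuning those parameters to fit the historical data so we can predict the output for a new input.

```python
import numpy as np

# Hypothetical historical data set: ad spend (observable input) vs.
# sales (output we'd like to predict). The underlying pattern here is
# sales ~ 3 * spend + 5, with a little noise added.
rng = np.random.default_rng(0)
spend = np.linspace(0.0, 10.0, 50)
sales = 3.0 * spend + 5.0 + rng.normal(0.0, 0.5, size=spend.shape)

# The "model" is a line with two tunable parameters (slope, intercept).
# "Learning" = tuning them to fit the historical data, here by
# ordinary least squares.
X = np.column_stack([spend, np.ones_like(spend)])
slope, intercept = np.linalg.lstsq(X, sales, rcond=None)[0]

# Prediction: apply the learned correlation to an input value
# the model has never seen.
predicted = slope * 12.0 + intercept
print(f"learned: sales ~ {slope:.2f} * spend + {intercept:.2f}")
print(f"predicted sales at spend=12: {predicted:.1f}")
```

The learned slope and intercept should land close to the 3 and 5 baked into the synthetic data; the point is only that “learning” here is parameter tuning against history, nothing mystical.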
There are two broad kinds of machine learning:
…(read the complete as-published article there)