Memristor technology brings about an analog revolution

An IT industry analyst article published by SearchDataCenter.


We are always driven to try to do smarter things faster. It’s human nature. In our data centers, we layer machine learning algorithms over big and fast data streams to create that special competitive business edge (or greater social benefit!).

Yet for all its processing power, performance and capacity, today's digital computing and storage can't compare to what goes on inside our very own, very analog brains, which outstrip digital architectures by six, seven, or even eight orders of magnitude. If we want to compute at biological scale and speed, we must take advantage of new forms of hardware that transcend the strictly digital.

Many applications of machine learning examine data's inherent patterns and behavior, then use that intelligence to classify what we know, predict what comes next, and identify abnormalities. This isn't terribly different from our own neurons and synapses, which learn from incoming streams of signals, store that learning, and apply it "forward" to make more intelligent decisions (or take actions). In the last 30 years, AI practitioners have built practical neural nets and other machine learning algorithms for many applications, but all of them are bound today by the limitations of digital scale (an exponentially growing web of interconnections is but one facet of scale) and speed.

Today's digital computing infrastructure, based on switching digital bits, faces some big hurdles in keeping up with Moore's Law. Even if there are a couple of orders of magnitude of improvement yet to be squeezed out of the traditional digital design paradigm, there are inherent limits in power consumption, scale and speed. Whether we're evolving artificial intelligence into humanoid robots or, more practically, scaling machine learning to ever-larger big data sets to better target the advertising budget, there simply isn't enough raw power available to reach biological scale and density with traditional computing infrastructure.

…(read the complete as-published article there)

Is It Still Artificial Intelligence? Knowm Rolls Out Adaptive Machine Learning Stack

(Excerpt from original post on the Taneja Group News Blog)

When we want to start computing at biological scales and speeds – evolving today's practical machine learning toward long-deferred visions of true "artificial intelligence" – we'll need to take advantage of new forms of hardware that transcend the strictly digital.

Digital computing infrastructure, based on switching digital bits and separating the function of persisting data from that of processing it, is now facing some big hurdles with Moore's Law. Even if there are a couple of orders of magnitude of improvement yet to be squeezed out of the traditional digital design paradigm, it has inherent limits in power consumption, scale, and speed. For example, there simply isn't enough power available for those wishing to reach biological scale and density with traditional computing infrastructure, whether evolving artificial intelligence or, more practically, scaling machine learning to ever-larger big data sets.

Knowm Inc. is pioneering a brilliant new form of computing that leverages the adaptive "learning" properties of memristive technology not only to persist data in fast memory (as others in the industry, like HP, are researching), but also to perform, inherently and in a single operation, serious compute functions that would otherwise require the stored data to be offloaded into CPUs, processed, and written back (taking more time and power).
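To make the "compute in one operation" idea concrete, here is a minimal sketch of the general principle behind memristive in-memory computing (an illustration of the technique, not Knowm's actual design or API): a crossbar of memristive devices stores a weight matrix as conductances, and applying input voltages to the rows produces, by Ohm's law and Kirchhoff's current law, output currents on the columns that equal an entire vector-matrix multiply in a single analog step. All names below are hypothetical.

```python
import numpy as np

def crossbar_vmm(conductances, voltages):
    """Simulate the analog vector-matrix multiply of an idealized crossbar.

    conductances: (rows, cols) array of device conductances, the "weights"
    voltages:     (rows,) array of input voltages applied to the rows
    returns:      (cols,) array of column output currents

    In hardware this is one physical operation: each device contributes
    current I = G * V (Ohm's law), and each column wire sums its devices'
    currents (Kirchhoff's current law). No data is moved to a CPU.
    """
    return voltages @ conductances

# Toy example: a 3x2 "stored" weight matrix and a 3-element input vector.
G = np.array([[0.1, 0.2],
              [0.3, 0.1],
              [0.2, 0.4]])
V = np.array([1.0, 0.5, -1.0])
I = crossbar_vmm(G, V)
print(I)  # prints [ 0.05 -0.15]
```

A conventional architecture would read G out of memory, multiply on the CPU, and write results back; here the stored state itself does the arithmetic, which is where the time and power savings come from.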

The Knowm synapse, the company's circuit-level integrated unit of calculation and data persistence, was inspired by biological and natural-world precedent. It takes some deep thinking to fully appreciate the philosophical implications and opportunities, but this is no longer just a theory. Today, Knowm is announcing its "neuromemristive" solution to market, supported by a full stack of technologies: discrete chips, scalable simulators, defined low-level APIs and higher-level machine learning libraries, and a service that can layer large quantities of Knowm synapses directly onto existing CMOS (Back End of Line, or BEOL) designs.

Knowm is aiming squarely at the machine learning market, but here at Taneja Group we think the opportunity is much larger. An approach that intelligently harnesses analog hardware for extremely fast, cheap, dense, memory-inherent computing could represent a truly significant turning point for the whole computing industry.

I look forward to finding out who will take advantage of this solution first, and potentially cause a massively disruptive shift in not just machine learning, but in how all computing is done.

…(read the full post)