Is Neuromorphic Technology Ready to Take the Next Steps?

This market research report was originally published at Tractica’s website. It is reprinted here with the permission of Tractica.

Neuromorphic technology has been around in some form or another since the 1990s. Early neuromorphic chipsets included SpiNNaker from the University of Manchester and IBM’s TrueNorth. Both of these chipsets won accolades from the research community but remained primarily R&D projects. Intel’s recent Loihi chip is yet another attempt by a prominent chipmaker to revive interest in neuromorphic technology. With new developments in software tools, the technology has made some leaps and is slowly reaching the maturity needed for commercial deployment.

Neuromorphic computing differs from today’s mainstream von Neumann compute paradigm. The von Neumann architecture separates compute and storage: data is stored in RAM, shuffled to the compute unit to perform operations such as convolutions, and the results are written back to RAM. Moving data back and forth consumes substantial power and limits how far power consumption can be reduced. In contrast, neuromorphic computing tries to mimic the way the brain works by colocating compute and storage. Colocation eliminates much of the cost of shuttling data back and forth, resulting in significant power savings.
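To make the data-movement argument concrete, here is a rough back-of-the-envelope sketch comparing where energy goes under the two approaches. The function name and the energy-per-operation figures are illustrative assumptions for this example, not measurements of any particular chip.

```python
# Rough comparison of energy spent on compute vs. data movement in a
# von Neumann-style design (operands fetched from off-chip memory) and a
# colocated compute-and-storage design. All figures are placeholders.

MAC_ENERGY_PJ = 1.0            # assumed energy for one multiply-accumulate
DRAM_ACCESS_ENERGY_PJ = 100.0  # assumed energy to fetch one operand from DRAM
LOCAL_ACCESS_ENERGY_PJ = 2.0   # assumed energy when data sits next to compute

def layer_energy_pj(num_macs, operands_per_mac, fetch_energy_pj):
    """Total energy (pJ) = compute energy + data-movement energy."""
    compute = num_macs * MAC_ENERGY_PJ
    movement = num_macs * operands_per_mac * fetch_energy_pj
    return compute + movement

num_macs = 1_000_000  # e.g., one small convolutional layer

von_neumann = layer_energy_pj(num_macs, 2, DRAM_ACCESS_ENERGY_PJ)
colocated = layer_energy_pj(num_macs, 2, LOCAL_ACCESS_ENERGY_PJ)

print(f"separate memory : {von_neumann / 1e6:.1f} uJ")
print(f"colocated memory: {colocated / 1e6:.1f} uJ")
print(f"ratio           : {von_neumann / colocated:.0f}x")
```

Under these assumed figures the data movement, not the arithmetic, dominates the energy budget, which is the gap that colocation is meant to close.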

Neuromorphic technology also computes with what are called “spikes.” Today’s neural networks work on a frame-by-frame basis: an image is fed to a compute element, the output is the value of that compute operation, and the operations are repeated until meaningful results are produced. Neuromorphic technology instead looks at the difference between two frames (spikes). This event data is fed to a compute element that itself outputs spikes, and the operation is repeated to generate meaningful results. Neuromorphic technology fundamentally changes the way computing is done and requires a different kind of network, called a spiking neural network (SNN).
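The sketch below illustrates the basic idea of spike (event) encoding described above: only pixels that change between two frames generate events, so an unchanging scene produces almost no work. The helper name, threshold, and toy frame sizes are arbitrary choices for illustration, not part of any specific chip’s interface.

```python
import numpy as np

def frame_to_spikes(prev_frame, curr_frame, threshold=0.1):
    """Emit +1/-1 spike events only where a pixel changed by more than
    `threshold`; unchanged pixels produce no events (and no work)."""
    delta = curr_frame.astype(np.float32) - prev_frame.astype(np.float32)
    spikes = np.zeros_like(delta, dtype=np.int8)
    spikes[delta > threshold] = 1    # "ON" spike: brightness increased
    spikes[delta < -threshold] = -1  # "OFF" spike: brightness decreased
    return spikes

# Two toy 4x4 "frames"; only a couple of pixels change between them.
prev = np.zeros((4, 4), dtype=np.float32)
curr = prev.copy()
curr[1, 2] = 0.8
curr[3, 0] = 0.5

events = frame_to_spikes(prev, curr)
print(events)
print("active pixels:", np.count_nonzero(events), "of", events.size)
```

This is the same principle behind event-based sensors such as PROPHESEE’s: the sparser the changes, the less data there is to move and compute on.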

A New Paradigm

Thus, neuromorphic computing can be considered a new hardware and software paradigm. In the last couple of years there have been many new developments in this field, driven by AI. Several neuromorphic chipset companies that have successfully raised capital have entered the market, including Rain Neuromorphics (memristor-based approach), PROPHESEE (spiking sensor), and BrainChip (SNN). Intel’s Loihi is another endorsement of the technology from a large corporate entity.

Neuromorphic chips can be fabricated using existing CMOS processes, so they rely on traditional semiconductor technologies. Software, however, is where the technology has lagged on its path to commercial viability. Applied Brain Research, a Canada-based company specializing in neuromorphic software, has developed a software tool called Nengo and has published promising results in recent years. The company’s paper demonstrates how Loihi can be used to recognize wake words at very low power consumption, and provides a framework for mapping neural networks onto neuromorphic chipsets with trade-offs in accuracy. Applied Brain Research has made the tools freely available for download to anyone interested in trying out the technology.
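For readers who want a feel for the tooling, below is a minimal sketch using the freely downloadable Nengo Python package: a small population of spiking neurons that represents and tracks a sine-wave input, run on Nengo’s reference CPU simulator. It is not the wake-word network from Applied Brain Research’s paper; targeting neuromorphic hardware such as Loihi is done by swapping in a hardware backend (for example, the nengo-loihi package) in place of the reference simulator.

```python
import numpy as np
import nengo

# A trivial Nengo model: spiking LIF neurons representing a 1 Hz sine wave,
# with a filtered, decoded estimate of that signal probed for inspection.
with nengo.Network(label="sine_follower") as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))  # input signal
    ens = nengo.Ensemble(n_neurons=100, dimensions=1)   # spiking neurons
    nengo.Connection(stim, ens)
    readout = nengo.Probe(ens, synapse=0.01)            # decoded output

with nengo.Simulator(model) as sim:  # reference CPU backend
    sim.run(1.0)

print(sim.data[readout][-5:])  # decoded estimate of sin(2*pi*t) near t = 1 s
```

The appeal of this kind of framework is that the same model description can, in principle, be retargeted from a CPU simulation to a neuromorphic chipset, which is the mapping-with-accuracy-trade-offs workflow the paper describes.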

Neuromorphic technology has some prominent critics, though. Yann LeCun, one of the fathers of modern AI, has been vocal in criticizing neuromorphic computing, and Intel has openly responded to the criticism. According to LeCun, neuromorphic computing does not scale well, and accuracy drops as the size of the network increases. He believes the results are not yet practical for commercial deployment of larger networks.

A Large Opportunity

Nevertheless, neuromorphic computing remains an active area of research in academia, and efforts are ongoing to overcome its limitations. The startups that have raised capital plan to go after the edge market, which will be worth $55 billion by 2025, according to Tractica’s estimates. This represents a large opportunity for AI in a wide range of battery-powered consumer devices. Applications such as wake word recognition are on the rise, and Alexa-like devices are starting to dominate households. A small SNN running on a neuromorphic chip that delivers a 10x power benefit over traditional compute architectures might be just enough for neuromorphic companies to win designs and jump-start adoption of the technology.

Anand Joshi
Principal Analyst, Tractica
