The AI Revolution: Less Artificial, More Intelligent, and Beneficial to Society (Part One)

This blog post was originally published at BrainChip’s website. It is reprinted here with the permission of BrainChip.

Today’s neuromorphic computing architecture is capable of real-time learning and classification, opening a new path for Artificial Intelligence. As technology advances, new ways to support sequential memory, prediction, and eventually awareness are within our reach. Neuromorphic architecture is the most promising technology available, and it overcomes many limitations of current AI development efforts.

Current AI architectures are limited

Deep learning systems don’t learn, per se, the way humans learn; instead, they are trained. Training is important, and sufficient in many applications, but it’s ineffective at processing the amount of data that will meaningfully impact society. (Today’s neural networks are sometimes said to be at the intelligence level of a honeybee, and even that comparison is considered a massive exaggeration.)

The current conventional method of deep learning relies on Convolutional Neural Networks (CNNs) that are trained through many repetitions. The process is similar in some respects to successive approximation, but driven by a descending gradient, with errors propagated backwards from layer to layer. CNNs can perform relatively simple processes well, such as classifying objects in still or moving images, and have been found accurate at rates of up to 70 to 80 percent. This methodology uses basic math, and lots of it: millions of multiply-accumulate (MAC) operations and probability calculations per image.
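
As a rough intuition for that descending gradient, here is a toy sketch of gradient descent on a single weight; the one-parameter loss function is purely illustrative and stands in for a full network’s loss:

```python
# Minimal illustration of gradient descent: each training repetition
# nudges a weight downhill against the error gradient.
# The toy loss (w - 3)^2 is a hypothetical stand-in for a real loss.

w = 0.0
lr = 0.1                      # learning rate
for step in range(25):
    grad = 2 * (w - 3)        # d/dw of (w - 3)^2
    w -= lr * grad            # step downhill along the gradient
print(round(w, 3))            # converges toward the minimum at w = 3
```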

Even a small CNN requires as many as 530 million computations to classify a single image. This makes the processing power-hungry, even on high-end chips such as GPUs or dedicated mathematics chips such as the Google Coral Edge TPU. The sheer volume of calculations increases latency, and the memory needed to support them drives up the area and power of the system.
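
To see where counts of this magnitude come from, here is a back-of-envelope sketch; the layer shape is a hypothetical example, not taken from the post. Each output activation of a convolutional layer costs one MAC per input channel per kernel element:

```python
# Back-of-envelope MAC count for one convolutional layer.
# The layer shape below is an illustrative assumption.

def conv_macs(out_h, out_w, out_ch, in_ch, k_h, k_w):
    """Each output activation needs in_ch * k_h * k_w multiply-accumulates."""
    return out_h * out_w * out_ch * in_ch * k_h * k_w

# A hypothetical small layer: 224x224 output, 64 filters, 3 input
# channels, 3x3 kernels -- roughly 87 million MACs by itself.
print(conv_macs(224, 224, 64, 3, 3, 3))  # 86,704,128
```

Stack a handful of layers like this and the total quickly reaches the hundreds of millions of operations cited above.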

CNNs don’t learn in real-time

More importantly, CNNs don’t learn as they go, and don’t keep learning after being trained the way a biological brain does. This is significant because CNNs can be fooled: introducing a pattern into the pixel map of an image, such as a patch on an article of clothing, can skew the results enough to reduce classification accuracy.

Deep learning on CNNs gives us the ability to identify features, but not the context of what those features mean. For example, a face is classified as a face even if the nose, mouth, and eyes are out of position, and a plane about to crash into a building may be classified as a plane parked at the terminal.

The one way CNNs do mirror the biological brain is in the progression from identifying simple features to recognizing more complex ones as data is fed through a succession of layers. This is not unlike how we must learn to cook simple dishes before moving on to elaborate three-course meals. Still, deep learning’s shortcomings in structure, organization, learning methods, and efficiency prevent it from achieving truly intelligent systems, no matter how many layers, residual connections, and recursions are added.

Neuromorphic computing and SNNs

Neuromorphic computing architecture, on the other hand, is based on combining the best features of the last generation, whether trained by deep learning or extracted from a dataset, into an event-based Spiking Neural Network (SNN). It’s less artificial because it’s more closely derived from the way a biological brain operates. And it makes for more intelligent processing because it’s being used to create systems that solve previously unsolvable problems.

But let’s back up. What is a spike? Simply put, it is a short burst of electrical energy sent between neural cells. The biological brain relies on the precise timing of these bursts: the sequences of spikes, the intervals between them, their intensity, and the locations where they occur all carry information. The synapses store this information in a chemical signature, a memory. These memories stored in the synapses are what enable the neuron to learn: when a spike occurs, the memory is recalled and learning takes place.
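
The post doesn’t specify a neuron model, but a common simplification in neuromorphic work is the leaky integrate-and-fire (LIF) neuron, in which a membrane potential accumulates input, leaks over time, and fires a spike when it crosses a threshold. A minimal sketch, with illustrative parameters:

```python
# Minimal leaky integrate-and-fire (LIF) neuron -- a common simplification
# of the spiking behavior described above; parameters are illustrative.

def lif_step(v, input_current, leak=0.9, threshold=1.0):
    """Advance the membrane potential one time step; return (new_v, spiked)."""
    v = v * leak + input_current   # leak toward rest, integrate input
    if v >= threshold:             # enough charge accumulated: fire a spike
        return 0.0, True           # reset the potential after the spike
    return v, False

v = 0.0
for t, current in enumerate([0.3, 0.4, 0.5, 0.0, 0.6]):
    v, spiked = lif_step(v, current)
    print(f"t={t}  v={v:.2f}  spike={spiked}")
```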

Constant learning, shaped by an environment, is intelligence

Unlike CNNs, SNNs can keep learning after they’re trained, with each new burst of information. Precision timing, which is completely lost in CNNs, is maintained and leveraged in SNNs. Moreover, it’s in processing patterns and matching them to subsequent data streams that spiking neural models excel. This significantly reduces the multiplication and probability calculations of CNNs, with learning occurring autonomously from unlabeled data in as little as 3 to 5 repetitions, even in extremely “noisy,” disordered data.
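
The post doesn’t name the learning rule, but the best-known rule that exploits spike timing is spike-timing-dependent plasticity (STDP): a synapse strengthens when an input spike precedes the output spike and weakens when it follows. A minimal pair-based sketch, with illustrative constants that are not BrainChip’s actual rule:

```python
import math

# Pair-based STDP weight update; time constants and learning rates
# are illustrative assumptions.

def stdp_update(w, t_pre, t_post, a_plus=0.05, a_minus=0.055, tau=20.0):
    """Strengthen w if the pre-synaptic spike preceded the post-synaptic one."""
    dt = t_post - t_pre
    if dt > 0:      # pre before post: causal pairing, potentiate
        w += a_plus * math.exp(-dt / tau)
    elif dt < 0:    # post before pre: anti-causal pairing, depress
        w -= a_minus * math.exp(dt / tau)
    return max(0.0, min(1.0, w))   # keep the weight bounded

w = 0.5
w = stdp_update(w, t_pre=10.0, t_post=14.0)   # causal pair strengthens w
print(w)
```

Because updates like this depend only on locally observed spike times, no labels are required, which is how learning can proceed autonomously from unlabeled data.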

Another key differentiator of spike-based learning is significant power efficiency. In SNNs, only relevant information is represented as a spike. So if a document is ten pages long and nine are blank, the machine doesn’t bother performing mathematical calculations on the nine blank pages, only on the one page that matters. This lets SNNs reduce latency and memory, further lowering power, which in turn enables you to build a system that is more like a brain than a machine.
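
To make the blank-pages intuition concrete, here is a hypothetical sketch contrasting dense, CNN-style accumulation with event-driven, SNN-style accumulation that touches only the inputs that actually fired; the data and weights are made up for illustration:

```python
# Event-driven processing: only nonzero "events" are processed,
# so work scales with activity rather than with input size.
# Inputs and weights below are hypothetical.

inputs  = [0.0, 0.0, 0.9, 0.0, 0.0, 0.0, 0.4, 0.0, 0.0, 0.0]
weights = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]

# Dense (CNN-style): multiply every input, zeros included -- 10 MACs.
dense = sum(x * w for x, w in zip(inputs, weights))

# Event-driven (SNN-style): visit only the events that fired -- 2 MACs.
events = [(i, x) for i, x in enumerate(inputs) if x != 0.0]
sparse = sum(x * weights[i] for i, x in events)

print(dense, sparse, f"{len(events)} of {len(inputs)} inputs processed")
```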

The AI Revolution

The revolution is coming, and neuromorphic architecture will enable society to benefit from Artificial Intelligence in a variety of areas: medical diagnostics, energy harvesting, transportation efficiency, advanced processing at the Edge, and much more.

This is just the beginning. This is our Mission.

Rob Telson
Vice President of World Wide Sales, BrainChip
