
Brainchip Debuts Neuromorphic Chip

Akida Neuromorphic SoC Takes on CNNs

Convolutional Neural Networks (CNNs) have dominated the discussion of AI advancement for the past couple of years. But CNNs have one glaring weakness – a heavy reliance on massive amounts of multiplication. That arithmetic burden has spawned a plethora of initiatives to accelerate both the training and inference phases of deep learning with CNNs, and a wide variety of hardware and software architectures designed to improve CNN performance and efficiency – both in the data center and at the edge. FPGAs, GPUs, and a range of specialized hardware architectures are competing to capture what is expected to be an enormous market for AI computing over the coming decades.
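
To put that multiplication burden in perspective, here is a rough count of the multiply-accumulate (MAC) operations in just one convolutional layer. The layer dimensions are illustrative assumptions (loosely modeled on common image-classification networks), not figures from any chip or network discussed here:

```python
# Rough MAC count for a single 3x3 convolutional layer. The dimensions are
# illustrative assumptions (loosely ResNet-like), not tied to any specific chip.
kernel_h, kernel_w = 3, 3
in_channels, out_channels = 64, 128
out_h, out_w = 56, 56                          # output feature-map size

macs = kernel_h * kernel_w * in_channels * out_channels * out_h * out_w
print(f"{macs:,} multiply-accumulates for one layer")   # ~231 million
```

Stack a few dozen such layers and the multiply count climbs into the billions per frame, which is exactly the workload the accelerator crowd is chasing.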

Brainchip is taking a different approach.

Brainchip is a public company (listed on the Australian stock exchange) that has spent the past decade developing “neuromorphic” computing hardware and software based on spiking neural networks (SNNs). Structurally, SNNs are closer to biological neurons than their CNN cousins. According to Brainchip’s Bob Beachler, this allows SNNs to operate with a much smaller computational requirement, enabling more efficient inferencing at the edge. Where CNNs rely on linear algebra requiring matrix multiplication, rectified linear units for activation, pooling layers, “fully connected” layers, and very large datasets for training (typically run off-chip in data centers), SNNs use threshold logic and connection reinforcement via “spikes,” with feed-forward training that can be done on- or off-chip, with shorter training cycles and continuous learning.
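
To make the contrast concrete, here is a minimal, purely illustrative sketch (generic Python/NumPy, not anything resembling Brainchip’s implementation) comparing the arithmetic of a dense, MAC-heavy layer with the accumulate-and-threshold operation a spiking layer performs on sparse binary events. The layer sizes, spike rate, and firing threshold are arbitrary assumptions:

```python
import numpy as np

# Illustrative only: contrasts the arithmetic of a dense (CNN-style) layer with
# an event-driven, thresholded (SNN-style) layer. Sizes, spike rate, and the
# firing threshold are arbitrary assumptions, not Akida parameters.

rng = np.random.default_rng(0)
weights = rng.normal(size=(256, 784))            # 256 neurons, 784 inputs each

# CNN-style: every input contributes a multiply-accumulate, then ReLU.
dense_input = rng.random(784)
dense_output = np.maximum(weights @ dense_input, 0.0)
mac_ops = weights.size                           # 256 * 784 = ~200K multiplies

# SNN-style: inputs are sparse binary spikes; active synapses are simply added
# (no multiplies), and a neuron fires when its sum crosses a threshold.
spikes = rng.random(784) < 0.05                  # ~5% of inputs spike this step
membrane = weights[:, spikes].sum(axis=1)        # accumulate only active synapses
output_spikes = membrane > 1.0                   # threshold logic, no ReLU needed
add_ops = int(spikes.sum()) * 256                # additions actually performed

print(f"dense multiplies: {mac_ops:,}  |  spiking additions: {add_ops:,}")
```

Even in this toy example, the event-driven layer gets by with far fewer (and cheaper) operations, which is the core of the efficiency argument Beachler makes.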

Historically, SNNs have gotten less attention than CNNs – partly because they were thought to be more computationally demanding. So, what gives with Brainchip?

Brainchip’s software component, “Brainchip Studio,” is based on application software developed by SpikeNet Technology (acquired by Brainchip in 2017) and the Centre de Recherche Cerveau et Cognition. According to the company, Brainchip Studio is a supervised learning application that can be trained instantaneously, delivers high accuracy, requires very little power, and excels in particular where large training datasets are not available.

This week, Brainchip is announcing a new chip called Akida – designed to implement SNNs in edge computing applications – primarily for embedded vision and image recognition, but also for financial analysis and cybersecurity. Akida is expected to sample in late 2019, with a cost of about $10. This puts it squarely in competition with other devices such as Intel’s Movidius for the lucrative edge/inferencing market. Brainchip claims that Akida can deliver significantly better performance per watt than Intel’s Movidius Myriad 2 VPU, which should be a compelling advantage in power-stingy edge and embedded applications.

Beachler says Akida packs 1.2 million neurons and 10 billion synapses in an 11-layer SNN, along with a RISC processor, to work its magic – delivering up to 1,400 frames per second per watt. Those are impressive numbers, particularly for a $10 chip. That kind of cost- and power-performance puts Akida in a position where FPGAs (one of the hot contenders for inferencing applications) cannot go. Brainchip also claims excellent accuracy at those power and performance levels – comparable to what is achieved by CNN approaches.

Starting on the input side, Akida has both sensor interfaces (for embedded applications) and data interfaces (for co-processor applications). The sensor interfaces include pixel, audio, DVS (dynamic vision sensor), analog, and digital. Data interfaces include PCIe, USB 3.0, Ethernet, CAN, and UART. These interfaces feed a “conversion complex” whose job is to convert the sensor and data interface outputs into spikes. Those spikes are then passed to the Akida neuron fabric.
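
Brainchip hasn’t published the internals of the conversion complex, but a common generic way to turn conventional sensor values into spikes is rate coding, where larger input values produce more frequent spikes over a time window. A minimal sketch of that idea (an assumption for illustration, not Akida’s actual scheme):

```python
import numpy as np

def rate_code(pixels: np.ndarray, timesteps: int = 20, seed: int = 0) -> np.ndarray:
    """Convert pixel intensities in [0, 255] into binary spike trains.

    Brighter pixels spike more often across the time window. This is generic
    rate coding, not Brainchip's proprietary conversion complex.
    """
    rng = np.random.default_rng(seed)
    p = np.clip(pixels, 0, 255) / 255.0                  # spike probability per step
    return rng.random((timesteps, *pixels.shape)) < p    # (timesteps, ...) booleans

# Example: a four-pixel "image" -- dark pixels yield few spikes, bright ones many.
spike_train = rate_code(np.array([0, 64, 128, 255]))
print(spike_train.sum(axis=0))                           # spike count per pixel
```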

In an SNN, spikes at the inputs of a neuron are integrated over time and magnitude, and when that integral exceeds a certain threshold, a corresponding spike is generated at the neuron’s output. Because of this architecture, there is considerably less transistor toggling for each neuron event than one sees in a CNN implementation, and that orders-of-magnitude reduction in switching means dramatically lower power consumption – assuming other variables such as semiconductor process are equivalent.
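
That behavior is easy to see in code. Below is a textbook leaky integrate-and-fire neuron (the standard academic model, not Akida’s actual circuit); the leak factor, threshold, and weights are arbitrary choices:

```python
import numpy as np

def lif_neuron(spike_inputs, weights, threshold=1.0, leak=0.9):
    """Textbook leaky integrate-and-fire neuron (not Akida's actual circuit).

    spike_inputs: (timesteps, n_synapses) array of binary input spikes.
    Returns the membrane-potential trace and the output spike train.
    """
    potential = 0.0
    trace, out_spikes = [], []
    for incoming in spike_inputs:
        potential = leak * potential + weights @ incoming  # integrate weighted spikes
        if potential >= threshold:                         # threshold exceeded...
            out_spikes.append(1)
            potential = 0.0                                # ...fire and reset
        else:
            out_spikes.append(0)
        trace.append(potential)
    return np.array(trace), np.array(out_spikes)

# Three synapses, ten timesteps of random input spikes.
rng = np.random.default_rng(1)
inputs = (rng.random((10, 3)) < 0.4).astype(float)
_, fired = lif_neuron(inputs, weights=np.array([0.3, 0.5, 0.4]))
print(fired)                                              # 1 wherever the neuron spiked
```

Note that work is done only when input spikes arrive, and the neuron fires only when its accumulated potential crosses the threshold; in hardware, that sparsity is what keeps transistor toggling, and therefore power, low.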

One interesting element of the SNN-vs-CNN debate is the possibility of in-system, ongoing training, versus pre-training with large datasets in a data center/cloud environment. But it’s not yet clear how this will play out in real-world applications. Beachler says that in applications such as financial analysis, unsupervised learning can offer big benefits.
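
Brainchip hasn’t detailed Akida’s learning rule, but the classic unsupervised mechanism in spiking networks is spike-timing-dependent plasticity (STDP): a synapse is strengthened when its input spike shortly precedes the neuron’s output spike, and weakened when it follows. A toy sketch of that reinforcement idea (an illustrative stand-in, not Brainchip’s published method):

```python
import math

def stdp_update(weight, dt, lr=0.05, tau=20.0, w_max=1.0):
    """Toy spike-timing-dependent plasticity (STDP) rule -- illustrative only,
    not Brainchip's learning rule.

    dt = t_post - t_pre in milliseconds. Positive dt (the input spike arrived
    just before the neuron fired) strengthens the synapse; negative dt weakens it.
    """
    if dt > 0:
        weight += lr * math.exp(-dt / tau)   # causal pairing -> potentiation
    else:
        weight -= lr * math.exp(dt / tau)    # acausal pairing -> depression
    return min(max(weight, 0.0), w_max)      # clamp weight to a sane range

print(stdp_update(0.5, dt=5.0))    # input preceded output spike: weight grows
print(stdp_update(0.5, dt=-5.0))   # input followed output spike: weight shrinks
```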

Brainchip’s test-chip benchmarking results are impressive: the company claims 1,100 fps on CIFAR-10 at 82% accuracy from the $10 chip, using less than 0.2 watts – about 6K fps per watt. The company says this compares with 83% accuracy at 6K fps per watt from IBM’s “TrueNorth” (at a cost of around $1K) and 80% accuracy at 6K fps per watt from a Xilinx ZC709 (also around $1K).
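
The headline efficiency number is easy to sanity-check from those figures (a trivial calculation using the article’s quoted values; the exact power draw below 0.2 W isn’t stated):

```python
# Quick sanity check on the quoted efficiency figure, using the article's numbers.
fps = 1_100
power_watts = 0.2                                # "less than 0.2 Watts"
print(f"{fps / power_watts:,.0f} fps per watt")  # 5,500 -- roughly the ~6K claimed
```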

Akida’s efficiency is due to a number of factors. SNNs are “math-lite,” with no MACs or “weight swapping” required. Akida’s use of a fixed neuron model with right-sized synapses and minimal on-chip RAM (6MB, compared with 30-50MB for typical CNN implementations) helps power efficiency as well. A global “spike bus” connects all of the neural processors, and training and firing thresholds are programmable. Brainchip says its flexible neural processor cores are highly optimized to perform convolutions. Akida is multi-chip expandable to 1.2 billion neurons, so it should be easy to scale an Akida implementation to fit your application’s needs.
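
For a sense of what that expandability implies, here is a quick back-of-the-envelope calculation using the figures above (this assumes each device in a multi-chip system contributes the full 1.2 million neurons, which Brainchip hasn’t confirmed):

```python
# Back-of-the-envelope scaling math using the figures quoted in the article.
neurons_per_chip = 1_200_000
max_system_neurons = 1_200_000_000
print(f"{max_system_neurons // neurons_per_chip} chips at the top of the range")   # 1000

# On-chip RAM comparison quoted by Brainchip (Akida vs. typical CNN implementations).
akida_ram_mb = 6
cnn_ram_low_mb, cnn_ram_high_mb = 30, 50
print(f"{cnn_ram_low_mb / akida_ram_mb:.0f}x to {cnn_ram_high_mb / akida_ram_mb:.1f}x less RAM")
```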

Brainchip’s development environment is due in Q3 2018 and should be usable with an FPGA-based acceleration board in advance of the Akida chip’s availability in 2019. It will be interesting to watch how Akida’s SNN approach competes with CNN-based devices and approaches in the same edge/embedded markets. Certainly the low cost and low power consumption of Akida are compelling, and the SNN approach appears to have merit and (according to the company’s benchmarking) should be competitive in accuracy. However, it’s a long wait until these chips hit distribution in 2019, and this is a fast-evolving market and technology.

