
Bringing Innate Intelligence to Trillions of Devices

Did you ever wonder why they (whoever “they” are) chose the prefix “tera” to mean trillion (as in 10^12, or 1,000,000,000,000)? Well, it’s because this prefix comes from the Greek word teras, meaning “monster” or “marvel.” Thus, “tera” was chosen to reflect the vastness of a trillion, implying something extraordinary, much like the way a monster or marvel would stand proud in a crowd. You’re welcome.

The reason I mention this here is that I was just chatting with Sumeet Kumar, who is the CEO at Innatera. When I asked Sumeet about the origin of the company’s name, he replied that it’s a portmanteau of “innate” and “tera,” the idea being that their mission is to bring intelligence to trillions of devices.

As an aside, the term “portmanteau,” which refers to a word blending the sounds and combining the meanings of two other words, is itself a combination of two words (French, in this case): porter, meaning “to carry,” and manteau, meaning “cloak.” Once again, you’re welcome.

Founded in 2018, Innatera is a fabless spin-off from the Delft University of Technology in the Netherlands. Based on a decade of research into energy-efficient neuromorphic computing, the guys and gals at Innatera are pioneers in developing a new breed of microcontrollers that aim to bring biological brain-like intelligence to sensors.

Let’s start with the fact that an estimated 4 billion new sensor-driven devices currently come online each year, ranging from smart watches to smart phones to smart cars. All these little rascals require some sort of processor to make sense of all that sensor data, but many of them—especially wearable devices—have an extremely low power budget.

This is where the chaps and chapesses at Innatera score with their spiking neural processor, the T1, which is an ultra-low-power neuromorphic microcontroller intended for always-on sensing applications.

Meet the T1 (Source: Innatera)

Of course, this leads to all sorts of questions, starting with “what do we mean by ‘neuromorphic’?” Well, this refers to the design and development of systems, hardware, or circuits that are inspired by the structure and function of the human brain and nervous system.

Neuromorphic chips employ a special form of artificial neural network (ANN) called a spiking neural network (SNN). Unlike more commonly used ANNs like convolutional neural networks (CNNs) that use continuous values to represent activations between neurons, SNNs use spikes or discrete events to represent data, which is like the way biological neurons fire in response to stimuli.

CNNs are excellent for tasks involving image recognition, object detection, and other computer vision applications. However, they process data in a layer-by-layer fashion, using continuous activation functions with a fixed amount of computation per layer. By comparison, SNNs operate in a time-dependent manner, processing information only when spikes occur. This means that SNNs are significantly more energy-efficient thanks to their sparse, event-driven computation (neural spikes occur only when necessary), which makes them well-suited for tasks involving real-time processing and event-based data.
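To make the event-driven idea concrete, here’s a minimal sketch of a leaky integrate-and-fire (LIF) neuron — the textbook building block of SNNs. This is purely illustrative (the threshold, leak factor, and reset behavior are my own assumptions, not Innatera’s silicon): the neuron integrates its input, leaks toward rest, and emits a discrete spike only when its membrane potential crosses a threshold.

```python
# Illustrative leaky integrate-and-fire (LIF) neuron -- a textbook sketch,
# not Innatera's implementation. The neuron integrates incoming current,
# leaks between steps, and fires a discrete spike (a 1) only when its
# membrane potential crosses the threshold.

def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Simulate one LIF neuron over a sequence of input currents.
    Returns a list of 0/1 spike events, one per time step."""
    v = 0.0          # membrane potential
    spikes = []
    for i in input_current:
        v = leak * v + i          # leak, then integrate the input
        if v >= threshold:        # threshold crossing -> fire a spike
            spikes.append(1)
            v = 0.0               # reset after firing
        else:
            spikes.append(0)
    return spikes

# A steady weak input: the neuron fires only occasionally, so the
# output is sparse -- this sparsity is where the energy savings come from.
print(lif_neuron([0.3, 0.3, 0.3, 0.3, 0.3, 0.3]))  # -> [0, 0, 0, 1, 0, 0]
```

Note that downstream neurons only need to do work on the time steps where a spike actually arrives, which is the essence of the event-driven efficiency argument above.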

As Sumeet told me, “The T1 is basically a brain-inspired processor that brings turnkey intelligence into applications where power is limited. It is essentially a chip that allows you to analyze sensor data in real time to detect and identify patterns of interest. We tend to focus on applications that have an ‘always on’ flavor where you have a sensor that’s continuously capturing data and where you need to identify patterns inside of that data very quickly, generally within a few milliseconds or less, all while using a minuscule amount of power like a milliwatt or less.”

As an interesting aside, since the chaps and chapesses at Innatera drew their inspiration from the functioning of biological brains, Sumeet pointed me at a video created by the folks at the Howard Hughes Medical Institute. This video shows the neural activity in a zebrafish that has been modified to express its neural behavior as light.

As Sumeet explained, “What this video demonstrates is that the processing in a biological brain happens based on populations of neurons that respond to the same input. So, for every word that I’m speaking, there’s a unique combination of neurons in your brain that fires every time you hear that word, which is why the brain is so efficient and so quick at pattern recognition.”

A high-level view of the process flow associated with an SNN is depicted below. In a crunchy nutshell, the real-time sensor data is encoded into spikes, these spikes are fed into the SNN, then the spikes generated by the SNN are decoded and used as the basis for any actions that need to be performed.

Processing data with spiking neural networks (Source: Innatera)
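One simple, widely used way to perform the “encode into spikes” step of this flow is delta (threshold-crossing) encoding: emit a spike only when the signal changes by more than some step size. The sketch below is my own hedged illustration of that idea — Innatera hasn’t published its encoder details here, and the step size is an arbitrary choice.

```python
# Illustrative delta encoder for the "sensor data -> spikes" step.
# A spike (+1 or -1) is emitted only when the signal moves by more than
# `step` relative to a running reference; otherwise nothing happens.
# (A sketch of one common encoding scheme, not Innatera's actual encoder.)

def delta_encode(samples, step=0.5):
    """Return a list of +1/-1 spike events for significant changes, else 0."""
    spikes = []
    ref = samples[0]                 # running reference level
    for s in samples[1:]:
        if s - ref >= step:          # signal rose significantly -> ON spike
            spikes.append(+1)
            ref += step
        elif ref - s >= step:        # signal fell significantly -> OFF spike
            spikes.append(-1)
            ref -= step
        else:
            spikes.append(0)         # no event: nothing to process downstream
    return spikes

# A slow drift followed by a sharp jump and a drop: only the genuine
# changes produce spikes for the SNN to chew on.
signal = [0.0, 0.1, 0.2, 1.0, 1.1, 0.2]
print(delta_encode(signal))  # -> [0, 0, 1, 1, -1]
```

The payoff is visible in the output: a slowly drifting signal generates no events at all, so the downstream network sits idle until something actually happens.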

There’s a lot more to this than meets the eye, but Sumeet explains it well as follows: “In a conventional ANN like a CNN, all the wide data is continually abstracted and processed through the layers of the neural network. By comparison, in an SNN, you take that data, and you convert it into a very simple representation, a simple voltage spike, a 0 or a 1. And the key here is that the information about the input signal is encoded into the time when a spike occurs. So, what you do is you take data coming in from a sensor, you convert it into a set of representative spikes, and the SNN essentially manipulates the timing relationships between these spikes to uncover hidden patterns inside of that sensor data. What this means is that the SNN inherently understands the meaning of an early spike and a late spike. And because it’s able to leverage that timing relationship in its processing, it tends to be about 100 times smaller than conventional neural networks at a similar level of accuracy.”

As we see in the block diagram below, the T1 is a combination of three different computational fabrics, the most important of which is the area shown in bright blue. This is a large array containing the silicon versions of neurons and synapses.

Block diagram of the T1 spiking neural processor (Source: Innatera)

This neuron-synapse array is implemented in a programmable fabric that can be configured to implement SNNs in real time. This massively parallel array boasts processing elements formed from analog mixed-signal circuits, which allows them to execute spiking neural network operations with a minuscule amount of energy. This energy is typically around 100x lower than that of traditional ANNs, and 15x lower than that of competing SNNs implemented at the same process technology node (which is 28nm, on the off chance you were wondering).

Since the array is massively parallel, it runs the neural network in real time, performing its inferencing operations in less than a millisecond. O-M-Gosh, is all I can say!

There’s also a low-power RISC-V microcontroller with an associated floating-point unit (FPU), along with a traditional CNN accelerator, because application architects very often wish to combine traditional and spiking networks in some way.

The bottom line is that the T1 is a teeny, tiny, fully integrated chip—and it’s the only chip you need sitting next to your sensor to go from raw sensor data to actionable insights.

Well, color me impressed! What say you? Do you have any thoughts you’d care to share on any of this? As always, I await your captivating comments and insightful questions in dread anticipation.
