
An AI Storm is Coming as Analog AI Surfaces in Sensors

I worry that when writing these columns, I sometimes start by meandering my way off into the weeds, cogitating and ruminating on “this and that” before eventually bringing the story back home. So, on the basis that “a change is as good as a rest,” as the old English proverb goes, let’s do things a little differently this time.

Take a look at the image below. What do you see in addition to the penny piece? What I see is a Mantis AI-in-Sensor (AIS) System-on-Chip (SoC), where the “AI” portion of this moniker stands for “artificial intelligence.” This little beauty is brought to us from those clever little scamps at AIStorm.ai who simply cannot restrain themselves from dropping the phrase “a storm is coming” into every conversation.

Mantis AIS SoC next to an American penny piece (Image source: AIStorm)

Of course, we’ve all seen chips before, so what makes this one different? Well, what you are looking at here is essentially a self-contained “AI Smart Camera” because all that is needed externally is a lens and a capacitor. Using an analog AI implementation, this little beauty draws only 15 µW of power in “always-on” operation. Furthermore, it’s so fast that it can have finished its processing before its digital AI competitors have gathered enough data to even think about commencing their processing, but mayhap we are getting ahead of ourselves…

Let's start with the problem: CMOS imaging sensors and MEMS audio sensors generate analog data, but conventional AI can't use that data until it has been digitized. For example, in a traditional image-processing implementation, the pixel array is connected to a source follower (that is, a field-effect transistor (FET)-based common-drain amplifier), which drives an analog-to-digital converter (ADC), which feeds an image signal processor (ISP). The output from the ISP is then passed over a digital communications channel, such as a MIPI SerDes link, to a digital AI. This analyzes the image or video stream in the digital domain using a microcontroller unit (MCU), graphics processing unit (GPU), digital signal processor (DSP), or field-programmable gate array (FPGA), and generates appropriate events. The fact that the system must digitize the data to feed these discrete processing engines results in higher latency, higher power consumption, and higher cost.
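To make that chain concrete, here is a minimal conceptual sketch in Python. This is not AIStorm's code or any vendor's API; every function is a hypothetical stand-in for one of the stages described above, and the only point being illustrated is how many digital steps sit between the analog pixels and the eventual decision.

```python
import numpy as np

# Conceptual stand-ins for the conventional chain: pixel array -> ADC -> ISP -> digital AI.
# None of these functions are real AIStorm or vendor APIs; they exist only to show
# how much digital work happens before any AI decision can be made.

def adc(analog_frame, bits=10):
    """Quantize analog pixel voltages (0.0-1.0) into digital codes."""
    levels = 2 ** bits
    return np.clip((analog_frame * levels).astype(int), 0, levels - 1)

def isp(digital_frame):
    """Stand-in for image signal processing (here, simple normalization)."""
    return digital_frame / float(digital_frame.max())

def digital_ai(processed_frame):
    """Stand-in for MCU/GPU/DSP/FPGA inference on the digitized frame."""
    return processed_frame.mean() > 0.5   # dummy "event detected" decision

analog_frame = np.random.rand(96, 96)          # analog voltages from the pixel array
event = digital_ai(isp(adc(analog_frame)))     # each stage adds latency, power, and cost
```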

The problem (unintegrated digital AI) versus the solution (integrated analog AI) (Image source: AIStorm)

By comparison, AIStorm's solution transforms the sensor into the input layer of an analog charge domain AI. The AIS device accepts the sensor data directly without digitizing it, couples the sensor charge straight into the first layer of analog neurons, uses multiple layers of analog neurons to perform tasks like weight multiplication, summing, and biasing, and ultimately produces a decision output. The fact that the AIS feeds analog data from the sensor directly into the integrated analog AI results in lower latency, lower power consumption, and lower cost. In turn, this results in a significant increase in performance and longer life for battery-powered products.
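As a rough numerical analogy (and it is only an analogy: Mantis performs these operations in the analog charge domain using pulses, not in software), each layer of analog neurons can be thought of as a weighted sum of its inputs plus a bias, with the final layer yielding the decision. The weights and layer sizes below are invented purely for illustration.

```python
import numpy as np

# Numerical analogy only: in the real device, the charge from the sensor couples
# directly into analog neurons, and the multiply/sum/bias operations happen in
# the charge domain. The weights and layer sizes here are invented.

def analog_layer(charge_in, weights, bias):
    """Model one layer of analog neurons: weight multiplication, summing, and biasing."""
    return np.maximum(weights @ charge_in + bias, 0.0)   # simple activation as a stand-in

sensor_charge = np.random.rand(96 * 96)                  # charge coupled in from the pixel array
hidden = analog_layer(sensor_charge, 0.01 * np.random.randn(64, 96 * 96), np.zeros(64))
output = analog_layer(hidden, 0.1 * np.random.randn(8, 64), np.zeros(8))
decision = int(np.argmax(output))                        # the decision output (e.g., a "wake" class)
```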

Although AIStorm’s Mantis data flow employs charge domain processing and uses pulses to communicate between its analog neurons, it’s still based on a standard TensorFlow development methodology, including a bridge that allows the artificial neural network (ANN) generated by TensorFlow to be downloaded into the Mantis SoCs.

Mantis development flow (Image source: AIStorm)

In addition to being able to accept analog data directly from its image sensor or audio sensor, Mantis can also accept data via digital interfaces such as SPI, I2S, or PDM. Furthermore, sensor data can be output using the same digital interfaces for use in creating training datasets. The training itself is performed on a PC. The resulting weights and execution information are loaded into Mantis, which provides inference execution.  
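For a flavor of what that PC-side training step might look like, here is a hedged TensorFlow sketch. The model architecture, class labels, and dataset are illustrative assumptions, and the final download into Mantis goes through AIStorm's bridge rather than anything shown here.

```python
import tensorflow as tf

# Illustrative PC-side training flow: a small CNN over 96 x 96 single-channel frames.
# The architecture, labels, and dataset are assumptions; the resulting weights and
# execution information are what get loaded into Mantis via AIStorm's bridge.

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 1)),
    tf.keras.layers.Conv2D(8, 3, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(16, 3, strides=2, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),   # e.g., "person present" vs. "no person"
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_frames / train_labels would be built from sensor data captured over SPI, I2S, or PDM:
# model.fit(train_frames, train_labels, epochs=10, validation_split=0.2)

weights = model.get_weights()   # trained weights, ready to be packaged for the device
```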

Now, it has to be acknowledged that we aren't talking about high-definition image processing here. Mantis currently supports a resolution of 96 x 96 pixels, but that's more than enough to address a tremendous range of markets.

A selection of potential target Mantis markets (Image source: AIStorm)

Out of all these potential markets, the one that immediately caught my eye was “Occupancy” in the “Consumer IoT” column. Over the years, I have grown to hate occupancy systems that are based on passive infrared (PIR) detectors. On the one hand, it’s nice that the lights turn on automatically when you enter a room without your having to do anything, especially if your arms are full of books and papers and suchlike. On the other hand, it’s a pain in the nether regions when the lights go out while you are in the middle of reading or writing something, forcing you to leap to your feet and start gesticulating furiously (it’s also embarrassing if someone enters the room just after the lights have activated to find you jumping up and down waving your arms around while casting PIR-centric aspersions… or so I’ve been told).

Sensors are a system’s eyes, ears, nose, and fingers. Furthermore, sensors can focus on what’s important — a face, a sound, an intruder, a change — while ignoring anything outside their purview. You might say that AIStorm’s AIS technology is just a clever mix of analog and mixed-signal technology, but it’s much more than that. For the first time, a teeny-tiny sensor is smart enough to perform complex analysis, make decisions, and deal with events itself, often before its digital competitors have even been able to start processing.

AIS SoCs are the first and only sensor solutions capable of accepting pixel-charge data or audio-MEMS-charge data directly in its native charge form. The result is the world's only family of solutions capable of image- or audio-based smart AI wakeup on a person, face, object, behavior, sound, or word.

Mantis AIS devices include convolutional neural network (CNN) and fully connected (FC) capabilities with the flexibility to implement a variety of popular machine learning (ML) models. The first member of the Mantis family, the C100A, is a fully integrated AI-based Smart Camera supporting up to eight layers of programmable deep learning capability while drawing only 15 µW of power in “always-on” operation. This AI system is a powerful but lightweight CNN that’s capable of performing image analysis and waking up on detecting an object, person, or behavior using supplied MantisNet Models.
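To picture how such a wake-on-detect output might be used, here is a purely hypothetical sketch of host-side gating logic. The class names, threshold, and wake_host() hook are all invented for illustration and are not part of AIStorm's API.

```python
# Hypothetical host-side gating of the detection output; class names, threshold,
# and wake_host() are invented for illustration and are not AIStorm APIs.

WAKE_CLASSES = {"person", "face", "package"}
WAKE_THRESHOLD = 0.8

def wake_host() -> None:
    """Stand-in for asserting an interrupt/GPIO line to the sleeping host processor."""
    print("wake event raised")

def maybe_wake(class_name: str, score: float) -> bool:
    """Raise a wake event only for a class of interest detected with sufficient confidence."""
    if class_name in WAKE_CLASSES and score >= WAKE_THRESHOLD:
        wake_host()
        return True
    return False

maybe_wake("person", 0.93)   # example: a confident person detection wakes the host
```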

The great thing here is that there's room for everyone at the AI party. The folks at AIStorm aren't trying to replace “traditional” high-end AI vision and speech applications, such as surveillance systems that can identify persons of interest in a crowd, or voice assistants like Amazon Alexa that can detect and respond to complex commands and queries (I put “traditional” in quotes because commercially deployed AI is still so new that it seems strange to refer to it as traditional). Rather, they are taking AI where it's never been possible to deploy it before, at least not in a cost-effective fashion: to the extreme edge of the internet, in the sensors themselves.

I for one am very interested to see where this technology takes us. How about you? Do you have any thoughts you’d care to share?
