
Creating Tiny AI/ML-Equipped Systems to Run at the Extreme Edge

One of my favorite science fiction authors is/was Isaac Asimov (should we use the past tense since he is no longer with us, or the present tense because we still enjoy his writings?). In many ways Asimov was a futurist, but — like all who attempt to foretell what is to come — he occasionally managed to miss the mark.

Take his classic Foundation Trilogy, for example (before he added the two prequels and two sequels). On the one hand we have a Galactic Empire that spans the Milky Way with millions of inhabited worlds and quadrillions of people. Also, we have mighty space vessels equipped with hyperdrives that can convey people from one side of the galaxy to the other while they are still young enough to enjoy the experience.

On the other hand, in Foundation and Empire, when a message arrives at a spaceship via hyperwave for the attention of General Bel Riose, it’s transcribed onto a metal spool that’s placed in a message capsule that will open only to his thumbprint. Asimov simply never conceived of things like today’s wireless networks and tablet computers and suchlike.

As an aside, I was completely blown away when I heard that this classic tale will soon be gracing our television screens (see OMG! Asimov’s Foundation is Coming to TV!). I just took another look at the Official Teaser Trailer and I, for one, cannot wait!


Something else for which Asimov is famed is his Three Laws of Robotics, which were introduced in his 1942 short story Runaround. Don’t worry, I’m not going to recite them here — if you are reading this column, it’s close to a 99.9999% certainty that you, like me, can recite them by heart. Asimov used these laws to great effect in his various robot stories, along with the concept of the robots having “positronic brains” (when he wrote his first robot stories in 1939 and 1940, the positron was a newly discovered particle, so the buzzword “positronic” added a soupçon of science to the proceedings). Although there weren’t too many nitty-gritty details provided, there was talk about “pathways” being formed in the brains, resulting in what we might today recognize as being a super-sophisticated form of analog artificial neural network (AANN) (a supersized version of Aspinity’s Awesome AANNs, if you will, and don’t get me started talking about Analog AI Surfacing in Sensors).

The Dartmouth workshop of 1956 is now widely considered to be the founding event of artificial intelligence (AI) as a field, so prior to that time Asimov would probably not have been aware of many AI and machine learning (ML) concepts. Now that I come to think about it, however, I find it telling that the only time what we would now refer to as “artificial intelligence” ever raised its head in Asimov’s writings was in the form of his robots and their positronic brains. As we now know, AI and ML can appear all over the place, from small microcontrollers at the extreme edge of the internet to humongous server farms, a.k.a. the cloud. I would love to be able to get my time machine working and bring Asimov to the present day to show him all the stuff we have now, like Wi-Fi and smartphones and virtual reality (VR) and suchlike. I would also love to introduce him to the current state of play regarding AI and ML.

As I’ve mentioned before (and as I’ll doubtless mention again), following the Dartmouth workshop, AI/ML was largely an academic pursuit until circa 2015 when it exploded onto the scene. The past few years have seen amazing growth in AI/ML sophistication and deployment to the extent that it’s becoming increasingly difficult to find something that doesn’t boast some form of AI/ML. (Some people think that the recent news that Google is Using AI to Design its Next Generation of AI Chips More Quickly Than Humans Can signifies “the beginning of the end,” but I remain confident that it’s only “the end of the beginning.”)

One of the things that characterized AI/ML in the “early days” — which, for me, would be about six years ago as I pen these words — was how difficult it all used to be. When these technologies first stuck their metaphorical noses out of the lab, they required data scientists with size-16 brains (the ones with “go-faster” stripes on the sides) to train them to perform useful tasks. The problem is that data scientists are thin on the ground. Also, when you are an embedded systems designer, the last thing you want to do is to spend your life trying to explain to a data scientist what it is you want to do, if you see what I mean (it’s like the old software developer’s joke: “In order to understand recursion, you must first understand recursion”).

Happily, companies are now popping up like mushrooms with new technologies to make things much, much easier for the rest of us. For example, I was recently chatting with the folks at SensiML (pronounced “sense-ee-mel” to rhyme with “sensible”), whose mission it is to help embedded systems designers create AI/ML-equipped systems that run at the edge. SensiML’s role in all of this is to provide the developers with accurate AI/ML sensor algorithms that can run on the smallest IoT devices, along with the tools to make the magic happen.

Consider the SensiML Endpoint AI/ML Workflow as depicted below. The SensiML Analytics Toolkit suite automates each step of the process for creating and validating optimized AI/ML IoT sensor code. The overall workflow uses a growing library of advanced AI/ML algorithms to generate code that can learn from new data, either during the development phase or once deployed.

The SensiML Endpoint AI/ML Workflow (Image source: SensiML)
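Just to give a feel for what this generated sensor code does once it lands on the device, below is a minimal, purely illustrative Python sketch of a windowed endpoint inference loop. To be clear, what the toolchain actually emits is a compiled library (what SensiML calls a Knowledge Pack) for the target device; the window size, stride, features, and threshold classifier here are placeholder assumptions of mine, not SensiML’s.

```python
# Illustrative only: a stand-in for generated endpoint sensor code.
# Slide a window over the incoming samples, extract a couple of cheap
# features, and classify each window.
import math

WINDOW = 128  # samples per inference (assumed)
STRIDE = 64   # hop between successive windows (assumed)

def features(window):
    """Toy feature extractor: mean and peak-to-peak amplitude."""
    return (sum(window) / len(window), max(window) - min(window))

def classify(feats):
    """Toy classifier: a real Knowledge Pack embeds a trained model."""
    return "active" if feats[1] > 1.0 else "idle"

def run_endpoint(sample_stream):
    buf = []
    for sample in sample_stream:
        buf.append(sample)
        if len(buf) == WINDOW:
            yield classify(features(buf))
            buf = buf[STRIDE:]  # keep the overlap for the next window

# Quick demo with a synthetic signal that gets livelier halfway through.
demo = [math.sin(i / 5) * (2.0 if i >= 400 else 0.5) for i in range(800)]
print(list(run_endpoint(demo)))  # 'idle' windows followed by 'active' ones
```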

We start with the Data Capture Lab, which is a fully fledged, time-series sensor data collection and labeling tool. As the folks at SensiML say, “Collecting and labeling train/test data represents the single greatest development expense and source of differentiation in AI/ML model development. It is also one of the most overlooked and error-prone aspects of building ML models.” One thing I really like about this tool is the way you can combine the data being collected with a video of the events taking place. Suppose you are trying to train for gesture recognition, for example. Having a time-synchronized video makes it easy for you to show the system where each gesture starts and ends.
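Just to make the labeling idea concrete, here’s a toy sketch of what a video-synchronized label boils down to: a named region of samples whose boundaries are derived from timestamps you spot in the video. This is in no way Data Capture Lab’s actual file format, and the sample rate and gesture names are invented for illustration.

```python
# Toy sketch (not SensiML's actual format): a label is a named region of
# samples, and timestamps spotted in the synced video map to sample indices.
from dataclasses import dataclass

SAMPLE_RATE_HZ = 100  # assumed IMU sample rate for this example

@dataclass
class LabeledSegment:
    label: str         # e.g., "wave", "circle", "idle"
    start_sample: int  # first sample of the gesture
    end_sample: int    # last sample of the gesture

def segment_from_video(label: str, start_s: float, end_s: float) -> LabeledSegment:
    """Convert start/end times observed in the video into sample indices."""
    return LabeledSegment(label, int(start_s * SAMPLE_RATE_HZ), int(end_s * SAMPLE_RATE_HZ))

# You scrub the video, see a wave gesture running from 2.4s to 3.1s, and the
# label lands on the corresponding slice of accelerometer samples.
print(segment_from_video("wave", 2.4, 3.1))
# -> LabeledSegment(label='wave', start_sample=240, end_sample=310)
```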

Next, we have the Analytics Studio, which “uses your labeled datasets to rapidly generate efficient inference models using AutoML and an extensive library of edge-optimized features and classifiers. Using cloud-based model search, Analytics Studio can transform your labeled raw data into high performance edge algorithms in minutes or hours, not weeks or months as with hand-coding. Analytics Studio uses AutoML to tackle the complexities of machine learning algorithm pre-processing, selection, and tuning without reliance on an expert to define and configure these countless options manually.”
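For those who would like a peek behind the AutoML curtain, the following generic scikit-learn sketch (my own illustration, not Analytics Studio’s API) shows the core of what such a model search does: enumerate candidate pipelines, cross-validate each one on the labeled data, and keep the winner. An edge-focused search would presumably also weigh each candidate’s flash and RAM footprint, not just its accuracy.

```python
# Generic AutoML-style model search (illustrative; not SensiML's API).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))    # stand-in for windowed sensor features
y = rng.integers(0, 3, size=200)  # stand-in for gesture labels

candidates = {
    "tree_depth3": make_pipeline(StandardScaler(), DecisionTreeClassifier(max_depth=3)),
    "tree_depth5": make_pipeline(StandardScaler(), DecisionTreeClassifier(max_depth=5)),
    "knn_k5": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
}

# Score every candidate with 5-fold cross-validation and keep the best.
scores = {name: cross_val_score(model, X, y, cv=5).mean() for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(f"best pipeline: {best} (mean accuracy {scores[best]:.2f})")
```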

The final step is to use TestApp to validate the accuracy of the model in real-time on the intended IoT device. As the chaps and chapesses at SensiML say, “The time gap between model simulation and working IoT device can take weeks or months with traditional design methods. With SensiML’s AutoML workflow culminating with on-device testing using TestApp, developers can get to a working prototype in mere days or weeks.”
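At the risk of oversimplifying, the bookkeeping involved in this sort of validation boils down to something like the following back-of-the-envelope sketch: compare the classifications streamed back from the device against the ground-truth labels, then tally an accuracy figure and a confusion matrix. TestApp’s own mechanics are SensiML’s; the gesture labels and results below are, of course, invented.

```python
# Toy validation harness: tally accuracy and a confusion matrix from
# device predictions vs. ground-truth labels (all data invented).
from collections import Counter

LABELS = ["idle", "wave", "circle"]

def validate(predictions, ground_truth):
    confusion = Counter(zip(ground_truth, predictions))
    correct = sum(n for (truth, pred), n in confusion.items() if truth == pred)
    print(f"accuracy: {correct / len(ground_truth):.1%}")
    for truth in LABELS:  # one row per true label, columns in LABELS order
        print(f"{truth:>8}: {[confusion[(truth, pred)] for pred in LABELS]}")

# e.g., what the device reported vs. the gestures we actually performed
validate(predictions=["wave", "wave", "idle", "circle", "idle"],
         ground_truth=["wave", "circle", "idle", "circle", "idle"])
```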

There is a cornucopia of content on SensiML’s YouTube channel that will keep you busy for hours, including some that address strangely compelling topics like Cough Detection — Labeling Events. Another good all-rounder is the Predictive Maintenance Fan Demo.


Unfortunately, I fear there is far too much to all of this to cover here. If you are interested in learning more, I would strongly suggest that you take the time to peruse and ponder all there is to see on SensiML’s website, after which you can follow up with the YouTube videos. As usual, of course, I welcome your comments and questions.
