
Shhh! Aspinity’s AML100 Analog AI Voice & Vibration IC is Listening!

Although the vast majority of people in the embedded and IoT industries are bouncing back and forth and jumping up and down singing the praises of digital systems and digital signal processing (DSP), it’s the topics of analog systems and analog signal processing (ASP) that have been much on (what I laughingly call) my mind of late.

As I’ve mentioned on occasion, I’m a digital hardware design engineer by trade. My first job after staggering out of the university doors into the light of day was as a member of a team designing central processing units (CPUs) for mainframe computers. I don’t remember the term DSP being mentioned while I was slogging my way through my degree. Actually, I don’t recall the term ASP being mentioned either. The only thing we talked about in this regard was “signal processing,” which pretty much embraced everything.

As I’ve also mentioned in earlier columns, when I commenced my degree in Control Engineering deep in the mists of time circa the mid-1970s, the only computer that was physically located in the engineering department was analog in nature (we also had access to a digital mainframe located in another building that we programmed using perforated paper products in the form of paper tapes and punched cards).

The analog machine was formed from lots of small modules, each of which provided a single analog function, like being able to compare, add, subtract, multiply, and integrate analog signals. The beast also possessed a plethora of potentiometers (pots) that we used to specify coefficient values. We “programmed” the machine by setting values on the pots and connecting the various functions together using cables with jack plugs on each end.
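Just for grins and giggles, here’s a minimal sketch of the idea in Python (my own illustrative code, nothing to do with any real machine). Each function block becomes a little piece of code, a coefficient pot is just a multiplier, and “patching” is simply feeding one block’s output into another block’s input. A single integrator and a single pot are enough to solve dx/dt = -kx, which is good old exponential decay:

```python
# A purely illustrative "analog computer patch" in software: one
# coefficient pot (gain of -k) feeding one integrator block, which
# solves dx/dt = -k*x (exponential decay). All values are made up.

def simulate_decay(k=0.5, x0=1.0, dt=0.001, t_end=10.0):
    """Run the 'patch' with a simple Euler integrator block."""
    x, t = x0, 0.0
    trace = []
    while t < t_end:
        dxdt = -k * x      # the coefficient pot scales the feedback path
        x += dxdt * dt     # the integrator block accumulates its input
        t += dt
        trace.append((t, x))
    return trace

if __name__ == "__main__":
    t_final, x_final = simulate_decay()[-1]
    print(f"x({t_final:.1f}) = {x_final:.4f}")  # ~exp(-5) = 0.0067
```

On the real machine, of course, the “integrator” was an op-amp with a capacitor in its feedback loop, and twiddling the pot changed k without any recompiling.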

On the one hand, this sort of machine was a pain in the nether regions to set up, and it was even worse when it came to debugging a recalcitrant “program” (mayhap “model” would be a better moniker in this milieu). On the other hand, digital logic gates and memory elements — which are pretty much mandatory when it comes to performing DSP — were expensive in terms of transistors, which were themselves expensive in terms of cold hard cash. One of the great advantages of analog computers and ASP is that they can be energy efficient and functionally efficacious when it comes to modelling things like dynamic systems. Of particular interest these days is the fact that analog techniques are ideally suited for neuromorphic artificial intelligence (AI) and machine learning (ML) applications.

As an aside, if you are interested in learning more about the history of signal processing (both ASP and DSP), then I heartily recommend Fifty Years of Signal Processing: The IEEE Signal Processing Society and its Technologies 1948-1998. Another classic source is The Scientist & Engineer’s Guide to Digital Signal Processing by Steven W. Smith. As noted in Chapter 1: “The roots of DSP are in the 1960s and 1970s when digital computers first became available. Computers were expensive during this era, and DSP was limited to only a few critical applications. Pioneering efforts were made in four key areas: radar & sonar, where national security was at risk; oil exploration, where large amounts of money could be made; space exploration, where the data are irreplaceable; and medical imaging, where lives could be saved. The personal computer revolution of the 1980s and 1990s caused DSP to explode with new applications. Rather than being motivated by military and government needs, DSP was suddenly driven by the commercial marketplace […]” Also of interest are the recent columns by my friend Steve Leibson: A Brief History of the Single-Chip DSP Part 1 and Part 2.

As another aside, I was recently introduced to a forthcoming analog computer called The Analog Thing (THAT). I don’t know why, but it took me several readings before I realized that “THAT” was an abbreviation of “The Analog Thing” and not some esoteric part of its name.

The Analog Thing (THAT) (Image source: Anabrid)

In addition to 5 integrators, 4 summers, 2 comparators, 2 multipliers, and 8 coefficient potentiometers, there are also X, Y, Z, and U output ports that can be used to drive things like an oscilloscope as shown in the image above. Even better, each THAT boasts Master and Minion ports that allow multiple THATs to be daisy-chained together to implement arbitrarily large programs. There’s also a hybrid port that provides an interface for controlling the THAT digitally, thereby facilitating the development of analog-digital hybrid programs.
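To give a feel for what a THAT-style patch looks like, here’s a hedged little sketch (my own Python, nothing official from Anabrid) of a damped oscillator, x'' = -2*zeta*omega*x' - omega^2*x. This requires only two of the THAT’s integrators, one summer, and two coefficient pots; all of the parameter values are arbitrary:

```python
# An illustrative damped-oscillator "patch": two integrators, one
# summer, and two coefficient pots. The block counts fit within THAT's
# inventory; the parameter values themselves are made up.

def damped_oscillator(omega=2.0, zeta=0.1, x0=1.0, v0=0.0,
                      dt=0.0005, t_end=10.0):
    x, v, t = x0, v0, 0.0
    samples = []
    while t < t_end:
        # Summer block: combines the two pot-scaled feedback paths
        a = -(2.0 * zeta * omega) * v - (omega ** 2) * x
        v += a * dt   # integrator 1: acceleration -> velocity
        x += v * dt   # integrator 2: velocity -> position (the X output)
        t += dt
        samples.append((t, x))
    return samples

if __name__ == "__main__":
    # Stand-in for the oscilloscope hanging off the X output port
    for t, x in damped_oscillator()[::2000]:
        print(f"t={t:5.2f}  x={x:+.3f}")
```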

Last, but certainly not least (at least, for this aside) is the fact that there’s a brilliant Veritasium video on YouTube that shows all sorts of cool mechanical implementations of analog functions, like summing sine waves and even integration, for goodness’ sake.

But we digress… yet another reason analog has been on my mind is that I was just chatting with Tom Doyle (CEO) and David Graham (CSO, a.k.a. Chief Science Officer) at Aspinity. I’ve touched on Aspinity’s technology in previous columns: A Brave New World of Analog Artificial Neural Networks (AANNs) (A little bit of analog can go an awfully long way… if you know what you are doing) and Meet Aspinity’s Awesome Analog Artificial Neural Networks (AANNs) (This AnalogML Core can perform AI/ML-based inferencing in the analog domain while consuming only microamps (µA) of power).

The exciting news is that, a few weeks ago as I pen these words, the folks at Aspinity launched the first member of their AnalogML family, the AML100, which they describe as: “The industry’s first and only tiny machine learning solution operating completely within the analog domain.” Presented in a small 7 x 7 mm 48-pin QFN package, the AML100 uses near-zero power to perform inferencing and detect events (it consumes <20µA when always-sensing).

The AML100 uses near-zero power to perform inferencing and detect events (Image source: Aspinity)

At the heart of the AML100 is an array of independent, configurable analog blocks (CABs) that are fully programmable via software to support a wide range of functions, including sensor interfacing and ML. Furthermore, the AML100 can be reprogrammed in the field with software updates or with new algorithms targeting additional always-on applications. This versatility delivers a tremendous advantage over typical analog approaches, which are rigid and address only a single function.

There’s a lot to wrap our brains around here, but I think we can boil it down to something manageable. Let’s start with the fact that — according to the International Data Corporation (IDC) Worldwide Global DataSphere IoT Device and Data Forecast 2019–2023 (Document #US45066919) — there are expected to be 41.6 billion connected IoT devices by 2025, many of which will be battery powered and always-on. Furthermore, it’s expected that 79.4 zettabytes of new data will be captured from edge sensors in 2025, driving the demand for low-power processing.

Let’s take an acoustic glass (window) break example. If someone breaks a window in your home, you want the system to trigger an alarm. The thing is that there are a lot of sounds in a home throughout the day. What with kids and pets and television and air conditioning and… I venture to say that some form of sound is present most of the time.

Acoustic glass (window) break example (Image source: Aspinity)

In the illustration above, any sounds that may possibly correspond to breaking glass are presented as occurring less than 1% of the time. Suppose the acoustic sensor is a battery-operated wireless unit. If you are processing this data digitally, you will quickly drain your battery. Now suppose you use an AML100 consuming <20µA to monitor the sound, only waking a higher-level digital processor when it senses a possible glass-breaking event. You’ve just extended your battery life by a minimum of 100X!
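If you fancy a quick sanity check on that claim, here’s some back-of-the-envelope Python. Be aware that only the <20µA always-sensing figure comes from Aspinity; the digital processor’s 5mA draw, the 0.5% event-candidate duty cycle, and the 5,000mAh battery are my own illustrative assumptions:

```python
# Back-of-the-envelope battery math for the glass-break example.
# Only the <20 uA always-sensing figure comes from Aspinity; the DSP
# current, duty cycle, and battery capacity are illustrative guesses.

BATTERY_MAH = 5000.0    # assumed battery capacity
I_DSP_MA    = 5.0       # assumed draw of the digital processor
I_AML_MA    = 0.020     # AML100 always-sensing (<20 uA)
DUTY        = 0.005     # assumed fraction of time a candidate event occurs

def battery_life_hours(avg_current_ma):
    return BATTERY_MAH / avg_current_ma

digitize_first = battery_life_hours(I_DSP_MA)                    # DSP always on
analyze_first  = battery_life_hours(I_AML_MA + DUTY * I_DSP_MA)  # DSP gated

print(f"Digitize-first: {digitize_first:8.0f} hours")
print(f"Analyze-first : {analyze_first:8.0f} hours "
      f"({analyze_first / digitize_first:.0f}X longer)")
```

With these made-up numbers, the analyze-first approach comes out at around 111X longer, which is comfortably in line with the 100X-minimum claim.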

Another great example is voice. Take the Amazon Echo in my office. This contains a digital wake word processor that’s constantly using DSP to monitor any sounds, waiting for me to say its wake word, “Alexa.” When this processor thinks it’s heard the wake word, it activates the main digital processor. Although the wake word processor is designed to be low power, it’s digital in nature, so “low power” is a relative term.

Now consider the fact that I’m alone in my office. There are typically a lot of sounds taking place (fans, the beeping of my Geiger counter, the pounding of my keyboard, the banging of my head against the wall…) but there’s relatively little human speech. The bottom line is that around 95% of sound data is discarded after it’s undergone higher-power processing. Even though the Echo is powered from the wall, I hate the fact that it’s consuming power unnecessarily — now extrapolate this to the vast number of voice-activated devices that are scattered around the globe.

Once again, let’s assume that we introduce an AML100 into the soundscape picture, where this device is listening for human speech. In this case, we now have a hierarchy of processors, because the AML100 will activate the wake word processor only when it determines that a person is talking, at which point the wake word processor takes over to listen for its wake word. The bottom line here is that having the AML100 handle the machine learning (ML) workload used to determine if a human is speaking results in a >95% reduction in always-on system power.

Moving ML closer to the source by performing it in analog redefines always-on power efficiency (Image source: Aspinity)
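The same duty-cycle arithmetic applies here. In the following sketch, only the AML100’s <20µA figure comes from Aspinity; the wake word engine’s 2mA draw and the 5% speech-presence figure are my own assumptions, chosen to mirror the “95% of sound data is discarded” observation above:

```python
# Rough model of the hierarchical voice pipeline. Only the AML100's
# <20 uA figure is from Aspinity; the wake word engine's draw and the
# speech-presence fraction are my own assumptions.

I_AML_MA = 0.020   # AML100 always-sensing current (<20 uA)
I_WWE_MA = 2.0     # assumed wake word engine current when running
SPEECH   = 0.05    # assumed fraction of time someone is actually talking

baseline  = I_WWE_MA                       # wake word engine always on
hierarchy = I_AML_MA + SPEECH * I_WWE_MA   # engine runs only during speech

print(f"Always-on current: {baseline:.3f} mA -> {hierarchy:.3f} mA "
      f"({1 - hierarchy / baseline:.0%} reduction)")
```

That lands at a 94% reduction with my guesstimates; a hungrier wake word engine (or a quieter office) quickly pushes the number past the >95% quoted by Aspinity.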

One thing that really made me think is that all of the examples we’ve discussed thus far have been sound related, but sound is just one form of vibration. The sensor — a microphone in this case — converts the sound into an electrical signal, and the same applies to any other form of vibration; an accelerometer converts physical vibration into an electrical signal, for example. This means that the AML100 may be deployed in a wide variety of applications, such as monitoring vibration on machines for the purposes of problem detection and predictive maintenance.

The AML100 supports up to four analog sensors, provides field-programmable functionality, can be trained to intelligently reduce any form of analog data by 100X, and is easy to integrate into existing digital system architectures.

If you are eager to take a closer look, the AML100 is currently sampling to key customers with volume production planned for Q4 2022. Users can evaluate the AML100’s capabilities by purchasing one of Aspinity’s integrated hardware-software evaluation kits: the EVK1 for acoustic event detection (e.g., glass break and T3/T4 alarm tone detection) or the EVK2 for voice detection with pre-roll collection and delivery. Also, feel free to contact Aspinity (tell them “Max says Hi”) about evaluation kits with software packages for other applications, such as Industrial Vibration Monitoring.

So, what say you? Are you a hard-and-fast digital developer like your humble narrator, or do you favor the wibbly-wobbly analog side of things? Either way, do you think an AML100 may be of use for any of your projects? If you can think of any cunning applications for this device, I’d love to hear about them in the comments below.
