
Ambiq’s Low-Power AI Cancels Speech Noise Like Magic

The nice thing about magic is that you need not know how the magic works to apply it. For example, in Harry Potter’s world, Hogwarts students learned to use spells and incantations that invoked magic without knowing the underlying physics (metaphysics?) of that magic. Too science-fictiony for you? Then grok this. You don’t need to understand immersion or EUV lithography to design with the integrated circuits produced by these magical applications of real-world physics. For 99.99% of us, digital logic abstracts away almost everything happening in the real world and leaves us in the near-pristine Boolean universe without atoms, capacitance, inductance, and resistance. Although these real, physical quantities of matter continue to exist, in the worlds of digital ICs, we are simply able to ignore these physical quantities and still get on with our work because someone else abstracted them out of our immediate engineering universe.

Ambiq, the low-power microcontroller company, recently announced new capabilities by combining two forms of engineering magic. The first bit of magic, long mastered by the company, is FET-based logic circuits running at sub-threshold supply voltages. If you squint hard enough to recall your first classes in logic circuits, you’ll remember that we like to use transistors as switches. We like them fully turned on or fully turned off because those are the two states where the transistor normally dissipates the least amount of power when operating from power supplies of a few volts. However, there’s another way to achieve low-power operation: use a power supply below the threshold voltage of the FET.

In a simplistic view of FETs, they ought not to work when operated below the threshold voltage. But they do. Or at least, they can. The FETs switch, but more slowly. Much more slowly. That’s OK, because there are many applications for digital ICs that do not require multi-gigahertz operation but do need to draw minimal power, and Ambiq specializes in using sub-threshold circuitry for one such application: low-power microcontrollers.
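If you want a rough feel for why that is, the standard textbook approximation for a MOSFET operating below threshold (a simplification that ignores many second-order effects) says the drain current falls off exponentially as the gate voltage drops below the threshold voltage:

\[ I_D \;\approx\; I_0 \, e^{(V_{GS}-V_{TH})/(n V_T)} \left(1 - e^{-V_{DS}/V_T}\right) \]

Here, V_T = kT/q is the thermal voltage (roughly 26 mV at room temperature) and n is a process-dependent slope factor. Exponentially smaller drain currents mean exponentially longer times to charge and discharge each node’s capacitance, which is exactly why subthreshold logic still works – just much, much more slowly.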

Microcontrollers have always been slow creatures. Back in the 1970s, they ran at one or a few megahertz – five to ten times slower than the fastest microprocessors of the day. Even today, with all the nanometer lithographic legerdemain available to us, it’s unusual to see microcontrollers running faster than a few hundred megahertz, because they don’t need to be faster.

Ambiq’s latest subthreshold microcontrollers, the Apollo 3 and Apollo 4, made with TSMC’s 40ULP and 22ULL semiconductor processes, respectively, come close to – but can’t quite attain – 100 MHz and 200 MHz operation. Yet that’s plenty fast for these flea-power microcontrollers, which consume microwatts per megahertz, thanks to their subthreshold circuit design. Even at 100 to 200 MHz, you can still do very useful things with these microcontrollers.
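To put “microwatts per megahertz” into perspective, here’s a hedged back-of-envelope calculation. The current coefficient, clock rate, supply voltage, and battery capacity below are illustrative assumptions for a generic low-power MCU, not Ambiq datasheet figures:

# Back-of-envelope active-power estimate for a "microwatts per megahertz" MCU.
# Every number here is an illustrative assumption, not an Ambiq specification.
ua_per_mhz  = 5.0    # assumed active current coefficient (microamps per MHz)
freq_mhz    = 96.0   # assumed clock frequency (MHz)
vdd         = 1.9    # assumed supply voltage (volts)
battery_mah = 100.0  # assumed fitness-band-class battery capacity (mAh)

current_ma = ua_per_mhz * freq_mhz / 1000.0   # ~0.48 mA while the core is running
power_mw   = current_ma * vdd                 # ~0.9 mW of active power
hours      = battery_mah / current_ma         # ~200 h, ignoring everything else on the board

print(f"~{power_mw:.2f} mW active, ~{hours:.0f} hours from a {battery_mah:.0f} mAh cell")

Even with these made-up numbers, the point is clear: the active power lands around a milliwatt, which is how wearables built on parts like this manage to run for days on a coin-sized battery.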

Which brings us to Ambiq’s second bit of magic. The company has noticed that AI is all the rage these days, so it has been packing some AI application magic into the software it offers for its Apollo 4 low-power microcontroller, hoping to make this device even more attractive to the makers of battery-powered devices. What kind of devices? TechInsights recently extracted an Ambiq Apollo 4 microcontroller with 2 Mbytes of non-volatile, on-chip MRAM from a Fitbit Luxe fitness band and wrote a teardown report. The Fitbit is just the sort of end product that Ambiq has in mind for its low-power microcontrollers, and the company expects its latest magic trick – AI-powered noise cancellation – to attract developers of more such battery-powered devices that are worn or carried on an everyday basis.

Now this isn’t generative AI like ChatGPT that’s been snagging all the headlines. It’s functional AI, of the machine-learning (ML) sort. Ambiq has implemented a noise-cancelling neural network (NN) model for speech enhancement as part of its growing library of NN models that run on the company’s own TinyML implementation. TinyML aims to bring real-time ML applications to systems at the extreme edge where you’ll find battery-powered, microcontroller-based devices that are possibly not connected to the Internet. TinyML applications stand in stark contrast to other sorts of ML applications running on GPUs in data centers, where power consumption is measured in kilowatts instead of milliwatts.

Ambiq has developed an ML model zoo that works in conjunction with the company’s neuralSPOT SDK and TinyML inference engine for its Apollo 4 microcontroller. Currently, the Ambiq model zoo contains three ML models:

• NN Speech: A collection of three speech-focused models for voice activity detection, keyword spotting, and speech-to-intent inference
• Arrhythmia Classification: Detects several types of heart conditions based on single-lead ECG sensors
• Speech Enhancement: A TinyLSTM-based audio model that removes noise from speech

The speech-enhancement model is the latest addition to the Ambiq model zoo. It’s designed to remove noise from speech, which is remarkably useful for many speech applications, including video conferencing and speech recording. In fact, BabbleLabs developed a highly effective ML-based speech-enhancement application and demonstrated it back in 2019. That demonstration ran on Nvidia V100 Tensor Core GPUs, which consume far more than one milliwatt, but it was impressive enough that Cisco acquired BabbleLabs in 2020 to add speech enhancement to its Webex offering. Just a few years later, Ambiq can now perform the same ML-based magic trick with one of its micropower microcontrollers running in a battery-powered edge device, which is truly amazing.
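To make the “TinyLSTM-based audio model” idea a bit more concrete, here is a generic sketch of how recurrent spectral-mask denoisers of this kind typically work: compute a short-time spectrum for each audio frame, let an LSTM estimate a per-frequency gain mask from it, apply the mask, and resynthesize the waveform. This is an illustrative toy with randomly initialized weights and assumed layer sizes – it is not Ambiq’s model or its neuralSPOT code:

# Generic sketch of an LSTM spectral-mask denoiser (illustrative only, untrained).
import numpy as np

FFT_SIZE, HOP = 256, 128      # frame length and hop size in samples (assumed)
N_BINS = FFT_SIZE // 2 + 1    # one-sided spectrum: 129 frequency bins
HIDDEN = 64                   # LSTM state size (assumed)

rng = np.random.default_rng(0)
# Randomly initialized weights stand in for a trained model.
Wx = rng.standard_normal((4 * HIDDEN, N_BINS)) * 0.01   # input weights for the 4 LSTM gates
Wh = rng.standard_normal((4 * HIDDEN, HIDDEN)) * 0.01   # recurrent weights
b  = np.zeros(4 * HIDDEN)                               # gate biases
Wo = rng.standard_normal((N_BINS, HIDDEN)) * 0.01       # output layer: hidden state -> mask

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c):
    """One standard LSTM cell update for a single frame of features."""
    gates = Wx @ x + Wh @ h + b
    i, f, g, o = np.split(gates, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

def denoise(noisy):
    """Frame-by-frame spectral masking of a mono float signal."""
    window = np.hanning(FFT_SIZE)
    h, c = np.zeros(HIDDEN), np.zeros(HIDDEN)
    out = np.zeros(len(noisy))
    for start in range(0, len(noisy) - FFT_SIZE, HOP):
        frame = noisy[start:start + FFT_SIZE] * window
        spec = np.fft.rfft(frame)                        # short-time spectrum
        h, c = lstm_step(np.log1p(np.abs(spec)), h, c)   # log-magnitude features -> LSTM
        mask = sigmoid(Wo @ h)                           # per-bin gain in [0, 1]
        clean = np.fft.irfft(spec * mask, FFT_SIZE)      # apply mask, back to time domain
        out[start:start + FFT_SIZE] += clean * window    # overlap-add resynthesis
    return out

# Run the untrained sketch on one second of 16 kHz noise, just to show the plumbing works.
print(denoise(rng.standard_normal(16000)).shape)

A real model of this shape would be trained on pairs of noisy and clean speech and quantized to fit an MCU’s on-chip memory, but the frame-in, frame-out structure – a small recurrent network producing one gain per frequency bin – is essentially what makes real-time operation on a microcontroller plausible.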

Part of the trick is Ambiq’s home-grown version of the TinyML inference engine: TinyEngine. The ML models in the Ambiq zoo run on TinyEngine. Because of the diversity of resources available across the microcontroller spectrum, TinyEngine was originally conceived by the TinyML community as a resource-lite application. Consequently, Ambiq’s TinyEngine implementation runs on just the Apollo 4 microcontroller’s CPU and leverages the Arm Cortex-M4F processor core’s vector math acceleration features. Although the Ambiq Apollo 4 has an on-chip GPU, Ambiq does not use it for the TinyEngine implementation, saying that “embedded GPUs tend to be purpose-built for popular features such as displaying graphics for IoT devices… Embedded GPUs don’t generally support the type of general-purpose compute that you see in data center and smartphone GPUs. Most, if not all, of our customers are using those GPUs to drive better user interfaces such as animated smartwatch displays.” That’s fine, considering that the CPU alone seems perfectly capable of de-noising speech without the need for additional hardware acceleration.

One device that I think could really benefit from this sort of ML-based speech enhancement is my Zoom H1 Portable Digital Recorder. I’ve used this sub-$100 product for more than ten years to record excellent audio for video blogs. The small, handheld recorder runs for hours on one AA battery and can record many hours of sound on a microSD card, captured by a pair of superb electret microphones integrated into the unit. However, one of the things that really mars the sound recorded by the Zoom H1 is wind noise, which you currently fight by fitting a little fuzzy cap – colloquially called a “dead cat” – over the microphone end of the recorder. It’s a pain to carry the dead cat around in a little plastic bag, and it’s not always effective.

I can easily envision a future version of the Zoom H1 recorder with a built-in, ML-based denoiser. I think this is exactly the sort of product that can benefit from the latest member of Ambiq’s model zoo. Ambiq has posted its neuralSPOT SDK, the TinyEngine inference engine, and the model zoo on GitHub as an aid to development teams using the company’s Apollo 4 microcontroller. If you’ve got to deal with noisy speech, this could be the solution to your problem.
