
A Microphone for Gestures and Canines

A while back, when looking at Elliptic Labs' ultrasonic gesture recognition, we mentioned that their approach relied on the fact that Knowles microphones work into the ultrasonic range. But they weren't willing to say much more about the microphones themselves.

So I checked with Knowles; they had announced their ultrasonic microphone back in June. My first question was whether this was just a tweak of the filters or a completely new sensor. The answer: the MEMS element is the same one used in their regular audio microphones; what's changed is the accompanying ASIC. The packaging is also the same.

The next obvious question is, what is this good for, other than gesture recognition? Things got a bit quieter there – apparently there are some use cases being explored, but they can’t talk about them. So we’ll have to watch for those.

But with respect to the gesture thing, it turns out that, in theory, this can replace the proximity sensor. It's low enough power that the mic can be operated "always on." Not only can it detect that something is nearby, in the manner of a proximity sensor; it can go one better and identify what that item is.
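The proximity-sensor role boils down to classic pulse-echo ranging: emit an ultrasonic ping from the speaker, then measure how long the echo takes to come back at the mic. As a rough illustration (not Knowles' or Elliptic Labs' actual algorithm — the function and threshold here are invented for the sketch), the distance falls out of the round-trip delay and the speed of sound:

```python
# Hypothetical sketch: estimating proximity from an ultrasonic echo,
# assuming the mic recording starts at the moment the ping is emitted.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at roughly room temperature

def distance_from_echo(mic_samples, sample_rate, threshold=0.5):
    """Return the distance (m) to the first echo whose amplitude
    exceeds `threshold` (relative to the recording's peak)."""
    mic_samples = np.asarray(mic_samples, dtype=float)
    peak = np.max(np.abs(mic_samples))
    if peak == 0:
        return None                              # silence: nothing nearby
    normalized = np.abs(mic_samples) / peak
    echo_indices = np.nonzero(normalized >= threshold)[0]
    if echo_indices.size == 0:
        return None
    delay = echo_indices[0] / sample_rate        # seconds until the echo
    return SPEED_OF_SOUND * delay / 2.0          # round trip -> one-way distance

# Toy example: a lone echo spike arriving 2 ms after the ping,
# sampled at 192 kHz (fast enough to capture ultrasonic content)
rate = 192_000
signal = np.zeros(rate // 100)                   # 10 ms of silence
signal[int(0.002 * rate)] = 1.0                  # echo spike at 2 ms
print(distance_from_echo(signal, rate))          # ~0.343 m
```

Identifying *what* the object is would take more than a delay measurement — presumably something like classifying the shape of the reflected spectrum — but the ranging part is this simple in principle.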

From a bill-of-materials (BOM) standpoint, at present you still need to use a separate ultrasonic transmitter, so you’re replacing one component (the proximity detector) with another (the transmitter). But in the future, the speakers could be leveraged, eliminating the transmitter.

It occurred to me, however, that, for this to become a thing, the ultrasonic detection will really need to be abstracted at the OS (or some higher) level, separating it from the regular audio stream. The way things are now, if you plugged a headset into the phone or computer, all the audio gets shunted to the headset, including the ultrasonic signal. Which probably isn’t useful unless you’re trying to teach your dog to use the phone (hey, they’re that intuitive!).

For this really to work, only the audible component should be sent to the headset; the ultrasonic signal and its detection would need to stay with the built-in speaker/mic pair so that gesture recognition keeps working. Same thing when plugging in external speakers.
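Conceptually, that routing policy is just a band split: everything below the edge of human hearing goes to whatever output the user plugged in, while the ultrasonic band stays on the built-in speaker. A minimal sketch of the idea (the function names and the FFT-mask approach are my own illustration, not any OS's actual audio-routing API):

```python
# Hypothetical sketch: splitting one output stream so a headset gets only
# the audible band while the ultrasonic carrier stays on the built-in speaker.
import numpy as np

AUDIBLE_CUTOFF_HZ = 20_000  # rough upper edge of human hearing

def split_streams(samples, sample_rate):
    """Split `samples` into (audible, ultrasonic) parts via an FFT mask."""
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    audible = np.fft.irfft(
        np.where(freqs <= AUDIBLE_CUTOFF_HZ, spectrum, 0), n=len(samples))
    ultrasonic = np.fft.irfft(
        np.where(freqs > AUDIBLE_CUTOFF_HZ, spectrum, 0), n=len(samples))
    return audible, ultrasonic

# Example: a 1 kHz tone (audible) mixed with a 40 kHz carrier (ultrasonic),
# sampled at 192 kHz over 100 ms
rate = 192_000
t = np.arange(rate // 10) / rate
mix = np.sin(2 * np.pi * 1_000 * t) + 0.5 * np.sin(2 * np.pi * 40_000 * t)
to_headset, to_builtin = split_streams(mix, rate)
```

A real implementation would do this with streaming filters rather than a whole-buffer FFT, and — as noted above — it would have to live below the point where the OS decides which physical output gets the stream.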

I’m sure that’s technically doable, although it probably disturbs a part of the system that’s been fixed for years. Which is never fun to dig into. But sometimes you’ve just got to grit your teeth and shed some of the legacy hardware in order to move forward.

You can find out more about Knowles’ ultrasonic microphone here.


[Editor’s note: For anyone clicking in through LinkedIn, I changed the title. It was supposed to be light, but, too late, I realized it could be taken as negative, which wasn’t the intent.]

(Image courtesy Knowles)
