
PZ Progress for Sound Production

A USound Update a Year Later

We’ve covered sound as a general topic quite a few times in these pages. When it comes to MEMS, however, most of that discussion has been around microphones. MEMS microphones have been a thing for a long time, even though advances continue.

What’s been notably missing until a year ago has been MEMS used for sound production instead of sound detection. Effectively, it’s MEMS as sound actuator rather than as sound sensor. It was USound that we covered last year, and I got a chance to talk to them again at last fall’s MEMS and Sensors Executive Congress. While their basic story – that of using MEMS for creating sound – hasn’t changed, they had some updates and some new platforms for demonstrating – as well as for selling – their technology.

How Does It Sound, Bud?

You may recall from last year that they were talking about earbuds that used a single driver for the entire frequency range. That includes bass, and it’s hard to imagine a tiny MEMS membrane (memsbrane?) pushing around enough air to do low frequencies any justice.

Well, this time I got a chance to listen to them. And, honestly, they sound pretty good. But, also honestly, it’s not a “wow!” thing – unless you know that it’s a tiny membrane making all that noise. So if a consumer were to listen, they probably wouldn’t be blown away – because they don’t know (or care, really) what’s inside the earbud.

So this makes the technology, in my mind, more of a sell to equipment makers than to actual consumers. It allows them to make speakers that do what consumers expect in a way that can impact size or reliability or cost – things that matter, but that don’t automatically translate into different or better sound. So, in fact, consumers might notice a difference – in price, size, or some other feature – just not so much in sound.

This isn’t a diss against the technology; it’s simply my response to having heard them. Of course, the only reason that they can do this with the tiny membrane – which can’t push a lot of air around – is because the earbud is inside your ear canal, and there’s not a lot of air to push around there.
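
A quick back-of-the-envelope check – my framing, not USound’s – of why that small air volume is the whole trick: at low frequencies a sealed ear canal acts like a pressure chamber, so the pressure swing depends on the ratio of the volume swept by the membrane to the volume of air trapped in the canal:

\[
\Delta p \;\approx\; \rho c^{2}\,\frac{\Delta V}{V_{0}} \;\approx\; 1.4\times10^{5}\ \mathrm{Pa}\cdot\frac{\Delta V}{V_{0}}
\]

With a trapped volume on the order of a cubic centimeter, sweeping just a hundredth of a cubic millimeter gives about 1.4 Pa – on the order of 95 dB SPL – which is why a tiny membrane can sound like a full-range speaker once it’s sealed into your ear.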

Be Free, Field!

The next application they discussed was headphones, not earbuds. These are considered free-field in that they’re not just pushing the air around within the confines of the ear canal. So the bass frequencies aren’t going to be reproduced as well if they rely on that one driver alone.

As a result, these headphones use standard electrodynamic woofers; they don’t use their MEMS technology for the low notes. But, mounted around the woofer are multiple tweeters that can create the effect of 3D immersive sound. Each of the tweeters creates a slightly different sound – which is a function not of the tweeters themselves, but of the sound processing that sends signals to each of the tweeters.

Done properly, you get what they call sound externalization, which can give the impression that the sound is coming from somewhere outside in front of you or behind you even though the sound is being produced right over your ear.
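
USound didn’t get into the signal-processing details, so here’s only a toy sketch of the principle in Python – my own illustration, not their algorithm, with a made-up function name and invented delays and gains: each tweeter receives the same source signal with a slightly different delay and gain, and it’s the superposition of those slightly different signals at your ear that creates the sense of a source sitting somewhere outside your head.

```python
# Toy sketch of per-tweeter signal shaping for sound externalization.
# Illustrative only: the delays and gains are invented, and a real
# implementation would use richer per-tweeter filters (e.g., HRTF-like
# responses) rather than plain delays.
import numpy as np

def render_tweeter_feeds(mono, sample_rate, delays_us, gains):
    """Return one delayed, scaled copy of `mono` per tweeter."""
    feeds = []
    for delay_us, gain in zip(delays_us, gains):
        shift = int(round(delay_us * 1e-6 * sample_rate))  # delay in samples
        delayed = np.concatenate([np.zeros(shift), mono])[: len(mono)]
        feeds.append(gain * delayed)
    return np.stack(feeds)  # shape: (num_tweeters, num_samples)

# Example: a 1 kHz tone fed to four tweeters with hypothetical settings.
fs = 48_000
t = np.arange(fs) / fs
tone = 0.5 * np.sin(2 * np.pi * 1_000 * t)
feeds = render_tweeter_feeds(tone, fs,
                             delays_us=[0, 30, 60, 90],
                             gains=[1.0, 0.9, 0.8, 0.7])
print(feeds.shape)  # (4, 48000): one channel per tweeter
```

The hardware is identical from tweeter to tweeter; the spatial effect comes entirely from how each one is fed.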

So Long, Obnoxious Sound Leakage

Then there’s everyone’s favorite (not!) sensation of being in some public place and getting bits and pieces of the earphone sounds from everyone around you. USound has a solution for this, although not with earphones, but with VR glasses. And it’s not a solution for blotting out everyone else’s noises, but rather for being a good citizen and not littering everyone else’s soundscape with your personal sound experience.

We talked last year about how the USound response is fast enough to allow for non-periodic sound cancellation. They’ve leveraged this in the glasses, placing a small speaker behind the ear on the stem. It generates the opposite of the sound being produced in the earpiece itself. This keeps the sound from traveling from the ear to anyone standing behind you.

Of course, this particular version of sound cancellation is probably somewhat easier than your average noise cancellation. In order to cancel unknown and unwanted sounds around you, you first have to detect them with a microphone and then do whatever processing is necessary to invert the signal and add that to your soundstream. In the AR/VR glasses case, you’re not cancelling outside sounds; you’re cancelling the very sounds you’re also creating. So the processing used to create the sound for the main tweeters and woofer can simultaneously be used to create the cancellation signal, meaning it will probably be faster and more accurate than a similar application canceling outside noise of a different origin.
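
To make that concrete, here’s a minimal Python sketch of the easy case – my own illustration, with an invented leakage-path delay and attenuation, since USound hasn’t published their implementation. Because the leaked sound is the very signal being played, the anti-sound is just an inverted copy of it, scaled and delayed to approximate the path from the earpiece out past the ear.

```python
# Minimal sketch of cancelling your *own* leakage: no microphone pickup
# or adaptive filtering needed, just an inverted, scaled, delayed copy
# of the playback signal. Path numbers below are illustrative guesses.
import numpy as np

def leakage_canceller(playback, sample_rate,
                      path_delay_us=50.0, path_attenuation=0.1):
    """Return the drive signal for the rear-facing canceller speaker."""
    shift = int(round(path_delay_us * 1e-6 * sample_rate))
    leaked = path_attenuation * np.concatenate(
        [np.zeros(shift), playback])[: len(playback)]
    return -leaked  # phase-inverted estimate of what escapes

fs = 48_000
t = np.arange(fs) / fs
music = 0.3 * np.sin(2 * np.pi * 440 * t)
anti = leakage_canceller(music, fs)
print(np.max(np.abs(anti)))  # ~0.03: a quiet, inverted echo of the playback
```

Cancelling outside noise, by contrast, would need the detect-process-invert loop described above, with all the latency that implies.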

Commercial Steps Forward

Finally, they’ve moved forward along a number of tracks to further enable them to do the rollicking business they’d like to do.

  • In order to reduce the overall power of their solution, they’ve created their own ASIC instead of relying on some other processing element. Their speakers are still piezoelectric, and they still need high drive voltages, but the new ASIC manages that drive more efficiently – drawing less power – than was the case before.
  • They changed their foundry strategy, moving to STMicroelectronics as their source. You might wonder why this matters, since, really, who cares who builds it? Well, apparently, their prospective customers care. They want to be comfortable that their supply will come from a foundry that has demonstrated an ability to produce high volumes reliably.
  • They’re moving to a subsystem sell rather than a MEMS chip sell. This seems consistent with so much other new technology: rather than having to teach the world how to do it, just do it yourself and sell the solution. It’s more work for USound, but less work for their customers.

We’ll continue to keep an eye on this space as developments warrant.

 

More info:

USound
