
PZ Progress for Sound Production

A USound Update a Year Later

We’ve covered sound as a general topic quite a few times in these pages. When it comes to MEMS, however, most of that discussion has been around microphones. MEMS microphones have been a thing for a long time, even though advances continue.

What had been notably missing until a year ago was MEMS used for sound production instead of sound detection: MEMS as sound actuator rather than as sound sensor. The company we covered last year was USound, and I got a chance to talk to them again at last fall’s MEMS and Sensors Executive Congress. While their basic story, using MEMS to create sound, hasn’t changed, they had some updates and some new platforms for demonstrating, as well as for selling, their technology.

How Does It Sound, Bud?

You may recall from last year that they were talking about earbuds that used a single driver for the entire frequency range. That includes bass, and it’s hard to imagine a tiny MEMS membrane (memsbrane?) pushing around enough air to do low frequencies any justice.

Well, this time I got a chance to listen to them. And, honestly, they sound pretty good. But, also honestly, it’s not a “wow!” thing – unless you know that it’s a tiny membrane making all that noise. So if a consumer were to listen, they probably wouldn’t be blown away – because they don’t know (or care, really) what’s inside the earbud.

So this makes the technology, in my mind, more of a sell to equipment makers than to actual consumers. It allows them to make speakers that do what consumers expect in a way that can impact size or reliability or cost – things that matter, but that don’t automatically translate into different or better sound. So, in fact, consumers might notice a difference – in price, size, or some other feature – just not so much in sound.

This isn’t a diss against the technology; it’s simply my response to having heard them. Of course, the only reason that they can do this with the tiny membrane – which can’t push a lot of air around – is because the earbud is inside your ear canal, and there’s not a lot of air to push around there.
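
If you want a rough feel for why the ear canal helps so much, here’s a back-of-envelope sketch. Every number in it (membrane area, excursion, canal volume) is an assumption picked for illustration, not a USound spec. In a sealed cavity, pressure scales roughly as p ≈ ρc²·ΔV/V, so a tiny volume displacement produces a surprisingly loud result:

import math

rho = 1.2                 # air density (kg/m^3)
c = 343.0                 # speed of sound (m/s)
V_canal = 1.3e-6          # assumed occluded ear-canal volume: ~1.3 cm^3
A_membrane = 10e-6        # assumed membrane area: 10 mm^2
x_peak = 10e-6            # assumed peak excursion: 10 um

dV = A_membrane * x_peak              # volume displacement (m^3)
p_peak = rho * c**2 * dV / V_canal    # pressure swing in the sealed cavity (Pa)
p_rms = p_peak / math.sqrt(2)
spl = 20 * math.log10(p_rms / 20e-6)  # dB SPL re 20 uPa
print(f"roughly {spl:.0f} dB SPL")    # ~110 dB with these assumed numbers

The same excursion radiating into open air would produce far less pressure, which is why the free-field products below don’t lean on a single MEMS driver for bass.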

Be Free, Field!

The next application they discussed was headphones, not earbuds. These operate free-field: rather than pressurizing the small, closed volume of the ear canal, they have to radiate into the open air around the ear. That means the bass frequencies won’t be reproduced as well if they rely on that one MEMS driver alone.

As a result, these headphones use standard electrodynamic woofers; they don’t use their MEMS technology for the low notes. But, mounted around the woofer are multiple tweeters that can create the effect of 3D immersive sound. Each of the tweeters creates a slightly different sound – which is a function not of the tweeters themselves, but of the sound processing that sends signals to each of the tweeters.

Done properly, you get what they call sound externalization, which can give the impression that the sound is coming from somewhere outside your head, in front of you or behind you, even though it’s being produced right over your ear.
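
As a toy illustration of that point (a generic sketch only; the delays, gains, and filter taps below are invented and aren’t USound’s actual algorithms), one source signal gets split into per-tweeter feeds, and all of the spatial character lives in the processing:

import numpy as np

fs = 48_000
t = np.arange(fs) / fs
source = np.sin(2 * np.pi * 1000 * t)      # shared 1 kHz test signal

# Hypothetical per-tweeter (delay in samples, gain, short FIR) "spatial" filters
tweeter_filters = [
    (0,  1.00, np.array([1.0])),
    (12, 0.85, np.array([0.9, 0.1])),
    (25, 0.70, np.array([0.8, 0.15, 0.05])),
    (40, 0.60, np.array([0.7, 0.2, 0.1])),
]

def feed(sig, delay, gain, fir):
    """Delay, scale, and lightly color one tweeter's copy of the source."""
    delayed = np.concatenate([np.zeros(delay), sig])[: len(sig)]
    return gain * np.convolve(delayed, fir)[: len(sig)]

feeds = [feed(source, d, g, h) for d, g, h in tweeter_filters]
print(len(feeds), "distinct tweeter feeds derived from one source")

A real externalization chain would use measured head-related responses rather than made-up taps, but the takeaway is the same: the tweeters are interchangeable, and the 3D effect comes from the signal processing.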

So Long, Obnoxious Sound Leakage

Then there’s everyone’s favorite (not!) sensation of being in some public place and catching bits and pieces of the earphone sound from everyone around you. USound has a solution for this, although not with earphones: it’s for AR/VR glasses. And it’s not a solution for blotting out everyone else’s noise; rather, it’s about being a good citizen and not littering everyone else’s soundscape with your personal sound experience.

We talked last year about how the USound response is fast enough to allow for non-periodic sound cancellation. They’ve leveraged this in the glasses, placing a second, rear-facing driver behind the ear on the stem. It emits the inverse of the sound being produced by the earpiece itself, which keeps that sound from traveling past your ear to anyone standing behind you.

Of course, this particular version of sound cancellation is probably somewhat easier than your average noise cancellation. To cancel unknown, unwanted sounds around you, you first have to detect them with a microphone and then do whatever processing is necessary to invert the signal and add it to your sound stream. In the AR/VR glasses case, you’re not cancelling outside sounds; you’re cancelling the very sounds you’re creating. So the processing that generates the sound for the main drivers can simultaneously generate the cancellation signal, which should make it faster and more accurate than cancelling outside noise of unknown origin.
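
Here’s a minimal sketch of that idea (generic, not USound’s implementation; the leakage-path delay and gain are assumptions). Because the leaking sound is the very signal being played, the anti-signal is just the playback run through a model of the leakage path and inverted, with no detection microphone in the loop:

import numpy as np

fs = 48_000
t = np.arange(fs // 10) / fs
playback = np.sin(2 * np.pi * 440 * t)     # the signal already being sent to the earpiece

# Assumed model of the acoustic path from earpiece to a bystander: delay plus attenuation.
leak_delay, leak_gain = 30, 0.2
leaked = leak_gain * np.concatenate([np.zeros(leak_delay), playback])[: len(playback)]

anti = -leaked                             # what the rear-facing driver would emit
residual = leaked + anti                   # zero if the path model is exact
print(f"residual RMS: {np.sqrt(np.mean(residual**2)):.1e}")

In practice the cancellation is only as good as the path model, but starting from a known signal removes the detect-then-invert latency that ordinary noise cancellation has to fight.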

Commercial Steps Forward

Finally, they’ve moved forward along a number of tracks to further enable them to do the rollicking business they’d like to do.

  • In order to reduce the overall power of their solution, they’ve created their own ASIC instead of relying on a third-party processing element. Their speakers are still piezoelectric and still need high voltages, but the new ASIC manages that high-voltage drive more efficiently than before (there’s a quick back-of-envelope sketch after this list).
  • They’ve changed their foundry strategy, moving to STMicroelectronics as their source. You might wonder why this matters, since, really, who cares who builds it? Well, apparently, their prospective customers care. They want to be comfortable that their supply will come from a foundry that has demonstrated an ability to produce high volumes reliably.
  • They’re moving to a subsystem sell rather than a MEMS chip sell. This seems consistent with so much other new technology: rather than having to teach the world how to do it, just do it yourself and sell the solution. It’s more work for USound, but less work for their customers.
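
For a rough sense of why the driver matters (every number below is an assumption for illustration, not a USound spec): a piezoelectric speaker looks like a capacitive load, and a driver that simply charges and discharges it dissipates on the order of C·V²·f, while one that recycles charge can cut that considerably:

C = 100e-9    # assumed actuator capacitance: 100 nF
V = 30.0      # assumed peak drive voltage: 30 V
f = 1_000.0   # signal frequency: 1 kHz

p_naive = C * V**2 * f          # ~90 mW lost in a simple resistive drive
p_recycling = 0.3 * p_naive     # assumed driver that recovers ~70% of the charge energy
print(f"naive drive: {p_naive*1e3:.0f} mW, charge-recycling drive: {p_recycling*1e3:.0f} mW")

The exact savings depend on the drive scheme, but that’s the kind of budget a dedicated ASIC is there to manage.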

We’ll continue to keep an eye on this space as developments warrant.

More info:

USound
