
Audio Upgrade

Used to be that there were processors (of the “regular” kind) and there were DSPs.  It’s no longer enough to be a DSP: you have to be the right kind. At least that’s how CEVA has rolled out their offering, with one family for communications, one for video and imaging, and one for audio and voice.

They recently announced the latest version of the latter, their TeakLite family. As in other areas that used to seem so simple and innocent, voice and audio processing have become increasingly sophisticated, with multiple microphones helping quash ambient noise (which we’ll talk more about in an upcoming feature), and even “beam forming”: an array of mikes that can zero in on an individual in a crowd – without moving the array, and with no noise. Creepy much? (It’s typically used for sports, but we all know that’s just a gateway stalk…)
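The beamforming idea is simple at its core: delay each microphone's signal so that sound arriving from the target direction lines up across the array, then sum, so the target reinforces while off-axis noise partially cancels. A minimal delay-and-sum sketch in C, with hypothetical array size, frame length, and steering delays (none of this reflects CEVA's actual implementation):

```c
#define NUM_MICS 4
#define FRAME    8

/* Hypothetical per-mic steering delays (in samples) that align
 * sound arriving from the target direction. */
static const int delay[NUM_MICS] = {0, 1, 2, 3};

/* Delay-and-sum beamformer: shift each mic's signal by its steering
 * delay, then average. The target direction adds coherently; sound
 * from other directions adds out of phase and is attenuated. */
void beamform(const short mic[NUM_MICS][FRAME], short out[FRAME])
{
    for (int n = 0; n < FRAME; n++) {
        int acc = 0;
        for (int m = 0; m < NUM_MICS; m++) {
            int idx = n - delay[m];       /* delayed sample index */
            acc += (idx >= 0) ? mic[m][idx] : 0;
        }
        out[n] = (short)(acc / NUM_MICS); /* average across mics */
    }
}
```

Real implementations use fractional delays and adaptive weights, but the steering-by-delay principle is the same.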

Anyway, CEVA has announced four new cores. Two are for stand-alone DSP chip use, with one optimized for small area (single 32×32-bit MAC or dual 16×16-bit MAC), the other for performance (double the MACs plus optional audio instructions). The other two are for integration with a CPU on an SoC; they add cache controllers and an AXI interface to the first two.
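The MAC trade-off mentioned above is the classic DSP sizing decision: one wide 32×32-bit multiply-accumulate per issue, or two packed 16×16-bit ones, which suits 16-bit audio samples. A rough illustrative model in C (this is a generic sketch of the two MAC shapes, not CEVA's instruction set):

```c
#include <stdint.h>

/* Single 32x32-bit MAC: one wide multiply-accumulate,
 * accumulating into a 64-bit register. */
static inline int64_t mac32(int64_t acc, int32_t a, int32_t b)
{
    return acc + (int64_t)a * b;
}

/* Dual 16x16-bit MAC: two narrower multiply-accumulates issued
 * together, e.g. two packed 16-bit audio samples per cycle. */
static inline int64_t dual_mac16(int64_t acc, int16_t a0, int16_t b0,
                                 int16_t a1, int16_t b1)
{
    return acc + (int32_t)a0 * b0 + (int32_t)a1 * b1;
}
```

Either form gives one "tap" of throughput per issue for an FIR filter; doubling the MACs, as the performance-optimized core does, doubles the taps per cycle.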

Feeds and speeds can be found in their release.
