I tell you, things are currently racing towards the edge where the internet “rubber” meets the real-world “road” in embedded space (where no one can hear you scream). Unfortunately, every time I say the word “edge,” I immediately think of the classic 1972 “Close to the Edge” album by Yes.
While the song is rattling around my noggin, I visualize the image on the album’s cover, which was designed by the English artist Roger Dean. Incidentally, this album cover marked the debut of the band’s “bubble logo,” which has since been recognized as one of the most beautiful band logos of all time (the original sketches are now housed permanently in a collection at the Victoria and Albert Museum in London).
When things have gotten tough for me in this world, I’ve often imagined being transported into the Close to the Edge universe. I’ve envisioned myself with a backpack, one-man tent, sleeping bag, and inflatable canoe, ambling my way down the path, crossing the bridge to the edge of the main landmass, and then peacefully paddling my way around the inner islands.
Now I’m wondering what the correct terminology would be for the main landmass in this image. We can’t say “peninsula” because that’s a piece of land almost surrounded by water or projecting out into the water. I was thinking “mesa,” but that’s defined as an isolated flat-topped hill with steep sides, as found in arid and semi-arid areas of the US. Unless you have a better suggestion, how about we call it “mesa with a big lake on top” and leave it at that?
But we digress… I was just chatting with the folks at Ceva. As you may recall, these lads and lasses are industry leaders in innovative silicon and software intellectual property (IP) solutions that enable smart edge products to connect, sense, and infer data more reliably and efficiently.
Connect, sense, and infer (Source: Ceva)
We’ve discussed the connect and sense portions of this story in previous columns. For example, it seems like only a few weeks ago (probably because it was only a few weeks ago) that I was waffling and warbling about how Ceva’s Connectivity IP is now unified in the form of Ceva-Waves Links, which is a versatile family of multi-protocol wireless platform IPs. Ceva-Waves Links leverages the industry-leading Ceva-Waves Wi-Fi, Bluetooth, IEEE 802.15.4 (for Thread / Matter and Zigbee) and Ultra-Wideband (UWB) IPs to offer integration-friendly wireless solutions to accelerate the development of connectivity-rich SoCs (see my Ceva-Waves Links Multi-Protocol Wireless Connectivity IP column for more details).
As a somewhat related aside (I simply cannot help myself), I’ve made mention of the guys and gals at Alif Semiconductor in several columns. Well, I recently heard that Ceva’s Bluetooth Low Energy and 802.15.4 IPs are bringing ultra-low power wireless connectivity to Alif Semiconductor’s Balletto family of MCUs (see the Press Release for more details).
But, once again, we digress, because the topic of this column is the inference portion of the Ceva picture. Ceva’s strategy in this artificial intelligence (AI) and machine learning (ML) arena is to supply neural processing unit (NPU) IPs that run the gamut from TinyML-based AIoT and MCU embedded applications all the way up to high-end generative AI (GenAI) solutions requiring a couple of thousand tera operations per second (TOPS).
Ceva’s AI strategy: NPUs from embedded AI to GenAI (Source: Ceva)
On the off chance you are unfamiliar with the term “TinyML,” this refers to the deployment of ML models on low-power, resource-constrained devices to bring the power of AI to the IoT, resulting in the AIoT. Driven by the increasing demand for efficient and specialized AI solutions in IoT devices, the TinyML market is growing rapidly. According to research firm ABI Research, by 2030, 75% of TinyML shipments will be powered by dedicated TinyML hardware rather than general-purpose MCUs.
With respect to the term “AIoT,” I made mention of this in my blog What the FAQ are the IoT, IIoT, IoHT, and AIoT? As I said in that blog: “According to the IoT Agenda, ‘The Artificial Intelligence of Things (AIoT) is the combination of artificial intelligence (AI) technologies with the Internet of Things (IoT) infrastructure to achieve more efficient IoT operations, improve human-machine interactions, and enhance data management and analytics […] the AIoT is transformational and mutually beneficial for both types of technology as AI adds value to IoT through machine learning capabilities and IoT adds value to AI through connectivity, signaling, and data exchange.’ I couldn’t have said it better myself.”
The reason why it’s important for the folks at Ceva to have IP offerings that span TinyML to GenAI is that most of their end customers are not intending to deploy AI/ML-enabled point products. Instead, Ceva’s customers wish to create a portfolio of products targeting different market segments. To facilitate this, they want to align on a common set of IPs and a common software framework to run across their entire portfolio.
The chaps and chapesses at Ceva are already addressing the high end with IPs like the Ceva-NeuPro-M, which they announced a while ago. What we are talking about here is the recently announced Ceva-NeuPro-Nano.
By addressing the specific performance challenges of TinyML, Ceva-NeuPro-Nano NPUs aim to make AI ubiquitous, economical, and practical for a wide range of use cases, spanning voice, vision, predictive maintenance, and health sensing in consumer and industrial IoT applications.
The new Ceva-NeuPro-Nano NPU architecture is fully programmable and efficiently executes neural networks (NNs), feature extraction, control code, and DSP code. It also supports the most advanced ML data types and operators, including native transformer computation, sparsity acceleration, and fast quantization.
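Just so we’re all tap-dancing to the same drumbeat with respect to terms like “quantization,” the little sketch below shows the generic affine int8 scheme (real_value = scale * (q - zero_point)) that TinyML frameworks commonly use to squash 32-bit floating-point values into 8-bit integers. I hasten to add that this is textbook stuff of my own devising, purely for illustration, and it tells us nothing about how the “fast quantization” in Ceva’s NPUs is actually implemented under the hood.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Generic affine (asymmetric) int8 quantization: real_value = scale * (q - zero_point).
// In practice, scale and zero_point are derived from the tensor's observed value range.
int8_t QuantizeToInt8(float x, float scale, int32_t zero_point) {
  int32_t q = static_cast<int32_t>(std::lround(x / scale)) + zero_point;
  return static_cast<int8_t>(std::clamp<int32_t>(q, -128, 127));
}

float DequantizeFromInt8(int8_t q, float scale, int32_t zero_point) {
  return scale * static_cast<float>(static_cast<int32_t>(q) - zero_point);
}
```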
This optimized, self-sufficient architecture enables Ceva-NeuPro-Nano NPUs to deliver superior power efficiency, a smaller silicon footprint, and optimal performance compared to existing processor solutions used for TinyML workloads, many of which pair a CPU or DSP with an AI accelerator.
Furthermore, Ceva-NetSqueeze AI compression technology directly processes compressed model weights without the need for an intermediate decompression stage. This enables Ceva-NeuPro-Nano NPUs to achieve up to 80% memory footprint reduction, thereby solving one of the key bottlenecks inhibiting the broad adoption of AIoT processors today.
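To give a flavor of what “processing compressed model weights without an intermediate decompression stage” means in principle, the sketch below performs a dot product by unpacking 4-bit packed weights on the fly inside the multiply-accumulate loop, which means a full-size decompressed weight buffer never needs to exist in memory. I must stress that this is only my own back-of-the-envelope illustration of the general idea; the real NetSqueeze scheme is Ceva’s secret sauce and is doubtless considerably more sophisticated.

```cpp
#include <cstddef>
#include <cstdint>

// Illustrative only: a dot product over weights stored two-to-a-byte as signed
// 4-bit values. Each weight is unpacked and sign-extended inside the MAC loop,
// so no decompressed copy of the weight array is ever materialized.
int32_t DotProductPacked4Bit(const int8_t* activations,
                             const uint8_t* packed_weights,  // (n + 1) / 2 bytes
                             size_t n) {
  int32_t acc = 0;
  for (size_t i = 0; i < n; ++i) {
    const uint8_t byte = packed_weights[i / 2];
    const uint8_t nibble = (i % 2 == 0) ? (byte & 0x0F) : (byte >> 4);
    // Sign-extend the 4-bit value to the range [-8, 7].
    const int32_t w = static_cast<int32_t>(nibble) - ((nibble & 0x08) ? 16 : 0);
    acc += w * static_cast<int32_t>(activations[i]);
  }
  return acc;
}
```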
Ceva’s NPU IPs are delivered with a complete AI SDK, Ceva-NeuPro Studio, a unified AI stack that provides a common set of tools across the entire Ceva-NeuPro NPU family and supports open AI frameworks, including TensorFlow Lite for Microcontrollers (TFLM) and microTVM (µTVM).
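For anyone who hasn’t played with TFLM before, a bare-bones deployment looks something like the sketch below. Everything here (the model symbol, the arena size, the operator list) is a placeholder of my own invention; the point is simply that this standard open-framework flow is what Ceva-NeuPro Studio hooks into, allowing the same model to be retargeted across the Ceva-NeuPro NPU family.

```cpp
#include <cstdint>

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Hypothetical model: a flatbuffer array generated offline and compiled into the firmware.
extern const unsigned char g_model_data[];

// Scratch memory for tensors; the size is a placeholder and is normally tuned per model.
constexpr int kArenaSize = 20 * 1024;
alignas(16) static uint8_t tensor_arena[kArenaSize];

int main() {
  const tflite::Model* model = tflite::GetModel(g_model_data);

  // Register only the operators this particular model needs (keeps code size down).
  static tflite::MicroMutableOpResolver<3> resolver;
  resolver.AddFullyConnected();
  resolver.AddSoftmax();
  resolver.AddRelu();

  static tflite::MicroInterpreter interpreter(model, resolver, tensor_arena, kArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) {
    return 1;  // Arena too small or an unsupported operator.
  }

  TfLiteTensor* input = interpreter.input(0);
  // ... fill input->data.int8 with (quantized) sensor samples here ...

  if (interpreter.Invoke() != kTfLiteOk) {
    return 1;
  }

  TfLiteTensor* output = interpreter.output(0);
  // ... act on output->data.int8 (e.g., pick the highest-scoring class) ...
  return 0;
}
```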
As always, I am tremendously enthused to hear about the cool things coming out of Ceva. I can remember when I thought their early DSP IPs were exciting. I was young and foolish. I had no idea that those DSP IPs were only the tip of the IP iceberg (I never metaphor I didn’t like). What say you? Do you have any thoughts you’d care to share on any of this?
“…then peacefully paddling my way around the inner islands…”
…as the headhunters on the shore peek out from behind your small tent to see if the ravenous 60-foot megalodons, Otodus megalodon (“with friggen LASERS”), circling below your canoe will get you first. Suddenly, the pent-up stresses from the endless shifting of large water volumes create a massive seismic shift, and, well, uh…
“I’ll bet those clouds of mosquitoes wouldn’t have been a problem if I’d used a long-range drone equipped with CEVA’s NPU IP” is your final thought as, suddenly, everything goes black.
I mean, it is “The Edge”.
Dr. Sigismund Odin Smythe, OBE, GED, DBS
I fear you’ve been dipping into the Dried Frog Pills again (https://wiki.lspace.org/Dried_frog_pills). I keep telling you not to do that!