
Nudging Chips Aside

CEVA Uses Femtocells to Motivate DSP Cores

Make versus buy has always been, and remains to this day, a debate. It’s just that the particular frontline on which the fight happens keeps moving.

“Buy” is the sensible decision. You can take advantage of what exists, what’s been proven, what has an infrastructure, what has other customers besides just you.

“Make” involves more chutzpah. I mean, really: what makes you so smart that you’re too good for all the solutions already out there and you have to do your own? And the risks! And the upfront costs! You give your mother a heart attack!

Such a debate, when it comes to a decision that might involve creation of a custom SoC, is a luxury afforded only the large. If you can’t bankroll many millions into a project with some clout to help spin the market a little more in your direction and some padding in case you’re late (or, god forbid, wrong), this is not a fight you should take on.

Of course, making tons and tons of the specialized silicon is also a requirement in order for “make” to make sense. Which has typically meant consumer electronics of one sort or another. Which have to be cheap, so it can pay off to do the chip if you do it right.

But when it comes to consumer equipment that requires infrastructure – most typically communications of various flavors – the infrastructural elements are where the conversation is needed in earnest. As you move inward from the consumer – the “terminal end” – you move into various levels of aggregation as you approach a core infrastructural backbone. The center of the core consists of very expensive equipment purchased in very low volumes.

So the question is, at what point in this funnel from the periphery to the core does it make sense to create a dedicated chip? Where do the volumes remain high enough? And is that changing?

CEVA, a provider of DSP processor IP, sees that front moving as the mobile phone world migrates from macrocells to microcells to picocells to femtocells. The smaller the cell, the more of them you need, driving volume up – and cost requirements down.

Just to calibrate for a second here, yes, this did say “femtocell.” A femtocell is a cell that more or less can be contained within your home, handling a few phones. You know, when your provider has bad coverage where you live and instead of improving it they tell you to buy your own receiver and plug it into your broadband internet connection (making it someone else’s problem that they don’t have to pay for)? That’s a femtocell. Think, “Honey, I shrunk the base station.”

While CEVA concedes current leadership to TI and Freescale in sales of DSP chips for the classic base station, they contend that those guys simply won’t be able to compete in the femtocell arena because of the cost of their chips: the silicon has to survive in a femtocell unit selling for $150. And so CEVA is targeting this area with dedicated DSP cores, on the expectation that equipment makers will need to design their own SoCs in order to be competitive. In fact, it’s even more specific than that: they had already announced a core intended for handsets (the XC321); now they’re announcing one tuned for the base-station side of the radio link, the XC323.

If you google around looking for DSP processor IP, you actually don’t find very much. CEVA sees most of their competition as non-IP approaches. They’re the “make,” the others are the “buy.” As they discuss their architecture planning process, you see why DSP cores might not be a business to jump into willy-nilly: it can take two to four years to develop a next-generation DSP core. At that point, their customers can use it for integration into SoCs, which is another one- to two-year prospect. Add yet another year or so to get the chips onto systems and into users’ hands, and you’ve got an incredibly long cycle that requires patience and confidence.

So if you decide to define such a DSP, and if it’s not going to be just an IP version of the generic DSPs already out there, then how do you optimize the architecture for an application like this? To some extent, the CEVA core looks like many communications-oriented chips, with different processors handling the “fast path” (the typical high-volume packets that make up the bulk of the traffic) and the “slow path” (the occasional oddballs or management packets that you can afford to process more slowly because they’re uncommon). They use vector processors (VPUs) for the fast path and a general-purpose unit for the slow path.
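To make that split concrete, here’s a minimal dispatch sketch in C. The packet descriptor, the classification constant, and the two back-end routines are hypothetical placeholders, not CEVA’s actual software interface.

```c
/* Minimal fast-path/slow-path dispatch sketch. The descriptor and the two
 * back ends are hypothetical placeholders, not CEVA's software interface. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint16_t type;      /* classification handed down by the MAC layer */
    uint16_t length;
    uint8_t *payload;
} packet_t;

#define PKT_TYPE_USER_DATA 0x01   /* bulk traffic: the overwhelming common case */

static void vpu_fast_path(packet_t *pkt)   /* vectorized PHY kernels */
{
    printf("fast path: %u bytes\n", pkt->length);
}

static void gp_slow_path(packet_t *pkt)    /* control and management packets */
{
    printf("slow path: type 0x%02x\n", pkt->type);
}

static void dispatch(packet_t *pkt)
{
    /* The common case goes straight to the vector processors; anything
     * rare or stateful is punted to the general-purpose unit. */
    if (pkt->type == PKT_TYPE_USER_DATA)
        vpu_fast_path(pkt);
    else
        gp_slow_path(pkt);
}

int main(void)
{
    packet_t data = { PKT_TYPE_USER_DATA, 1500, NULL };
    packet_t mgmt = { 0x10, 64, NULL };
    dispatch(&data);
    dispatch(&mgmt);
    return 0;
}
```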

Each VPU consists of two 256-bit single-instruction-multiple-data (SIMD) units, allowing 32 MAC operations per cycle, with dedicated support for complex arithmetic. One core consists of a general-purpose unit plus one to four vector processors, and you can replicate that core to add more processing power.
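Those numbers line up if you assume 16-bit operands (the operand width is my assumption, not something spelled out here): 256 bits per SIMD unit gives 16 lanes, and two units per VPU give 32 multiply-accumulates per cycle. A plain scalar C reference for one such cycle’s worth of work:

```c
/* Scalar reference for one VPU cycle's worth of MACs, assuming 16-bit
 * operands: 256 bits / 16 bits = 16 lanes per SIMD unit, times two units
 * per VPU = 32 multiply-accumulates per cycle. */
#include <stdint.h>

int64_t mac32(const int16_t a[32], const int16_t b[32], int64_t acc)
{
    for (int i = 0; i < 32; i++)
        acc += (int32_t)a[i] * (int32_t)b[i];   /* one MAC per lane */
    return acc;
}
```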

Their goal is to implement as much of the PHY as possible through software, thereby reducing the amount of silicon needed. In fact, the only function they have implemented in hardware is the turbo decoder. The rest they have handled by optimizing the instruction set.
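For a feel of what that partition looks like, here’s a hedged sketch of a receive-side PHY pipeline in which every stage runs as a software kernel on the DSP and only turbo decoding calls out to a fixed-function block. All of the function names, signatures, and buffer sizes are illustrative, not CEVA’s API.

```c
/* Sketch of the hardware/software split: the PHY stages run as software
 * kernels on the DSP, and only turbo decoding drops into a dedicated
 * hardware block. Names and sizes are illustrative only (stubs here). */
#include <stddef.h>
#include <stdint.h>

/* Software kernels on the vector units (stubbed out for the sketch). */
static void fft(const int16_t *in, int16_t *out, size_t n)            { (void)in; (void)out; (void)n; }
static void channel_estimate(const int16_t *in, int16_t *h, size_t n) { (void)in; (void)h;   (void)n; }
static void mimo_detect(const int16_t *y, const int16_t *h,
                        int16_t *llr, size_t n)                       { (void)y; (void)h; (void)llr; (void)n; }

/* The one fixed-function block: a hypothetical memory-mapped turbo decoder. */
static void turbo_decode_hw(const int16_t *llr, uint8_t *bits, size_t n) { (void)llr; (void)bits; (void)n; }

void rx_phy(const int16_t *samples, uint8_t *bits, size_t n)
{
    static int16_t freq[4096], h[4096], llr[4096];
    fft(samples, freq, n);          /* software */
    channel_estimate(freq, h, n);   /* software */
    mimo_detect(freq, h, llr, n);   /* software */
    turbo_decode_hw(llr, bits, n);  /* hardware accelerator */
}
```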

Of course, the meaning of “optimizing the instruction set” can conjure up different images. For example, with Altera’s Nios (admittedly an FPGA core), you can define custom instructions. This lets you take a common combination of instructions, call it a new instruction, and get that functionality for the same cycle cost as most any other instruction. But in reality that’s done by creating dedicated hardware for that instruction – essentially a mini-accelerator.
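A rough C illustration of that trade, with the caveat that the “custom instruction” below is only a stand-in macro; a real Nios II design exposes generated built-ins rather than this name.

```c
/* The idea behind a custom instruction: take a recurring sequence of
 * ordinary operations and collapse it into one opcode backed by dedicated
 * hardware. The macro below is a stand-in; a real Nios II design would
 * expose a generated built-in instead. */
#include <stdint.h>

/* Without a custom instruction: multiply, add, saturate - several operations. */
static inline int32_t sat_mac_sw(int32_t acc, int16_t a, int16_t b)
{
    int64_t s = (int64_t)acc + (int32_t)a * b;
    if (s > INT32_MAX) s = INT32_MAX;
    if (s < INT32_MIN) s = INT32_MIN;
    return (int32_t)s;
}

/* With a custom instruction: the same work in one cycle, paid for with a
 * mini-accelerator in the fabric behind that opcode. Stand-in only. */
#define SAT_MAC_CI(acc, a, b) sat_mac_sw((acc), (a), (b))
```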

That’s not how CEVA does it. They claim to have pored over their instruction set, tuning it for the kinds of things they see as performance-critical. The areas they claim to support in particular are DFT, high-precision FFT, channel estimation, MIMO (multiple-input/multiple-output – that is, multiple-antenna) detection, and interleaving and de-interleaving.
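Interleaving is a good example of why instruction-set tuning pays off: the work is almost entirely address arithmetic. Here’s a generic scalar reference for a block de-interleaver, the kind of loop a tuned DSP instruction set can fold into a handful of operations (this routine is illustrative, not CEVA-specific).

```c
/* Scalar reference for block de-interleaving: data written row by row is
 * read back column by column. This address-generation pattern is the sort
 * of thing a tuned DSP instruction set can collapse into a few operations. */
#include <stddef.h>
#include <stdint.h>

void block_deinterleave(const int16_t *in, int16_t *out,
                        size_t rows, size_t cols)
{
    for (size_t c = 0; c < cols; c++)
        for (size_t r = 0; r < rows; r++)
            out[c * rows + r] = in[r * cols + c];
}
```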

They’ve also worked to keep power down, with numerous power domains and a lot of options for shutting down different regions.

While they’ve motivated the price and power points of the core based on the needs of femtocells, their literature doesn’t stop there. They seem to be taking on the entire 4G (and 3G) arena, from femto up. They would seem to be counting on a “make” decision that would scale to points where the economics aren’t quite as stringent – and where silicon ROI might be reduced.

On the other hand, moving down the food chain a notch, if a single SoC using the CEVA core can cover the entire macro-to-femto gamut, then the economics of femtocells are leveraged on behalf of the others. The “make” for femtocells effectively becomes the “buy” for everyone else.

You pull that one off, you just might make your mother proud.


More info:

CEVA-XC323
