
Dialing-in DSP on FPGA

Catapult Customized for Altera

We’ve discussed the amazing potential FPGAs bring to DSP acceleration for years now.  We’re not alone, either.  FPGA vendors have pumped out trumped-up performance specifications with dizzying claims as to the number of GMACs (Giga-Multiply-Accumulates per second) their hardware could execute.  So dizzying, in fact, that most of the potential customers got vertigo and fell to the floor without buying any FPGAs.

This was a problem for FPGA vendors – who quickly hooked up probes to the unconscious DSP dudes, downloaded their issues through virtual JTAG ports, and found out (among other things) that whipping out a few lines of algorithmic software for a DSP processor was a whole different ballgame from going back to school to learn enough about datapath microarchitectures to design one of the highly-parallel, heavily-pipelined, carefully-timed creations in VHDL or Verilog that would actually bring any reasonable percentage of those GMACs to life.

If you watch the whole thing in slow motion (using our high-frame-rate HD resolution digital camera with both Stratix III AND Virtex 5 devices processing the video in real time using all of their embedded DSP blocks simultaneously… Oh wait, that’s the marketing pitch), you’d see that the FPGA vendors got those outrageous GMACs numbers by simply multiplying the number of multipliers on their device by the maximum frequency at which they could be clocked.  Nothing in the real world will ever, ever, ever even come close to that performance with those devices. 
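
For the record, here is a minimal sketch of that marketing math in code, using hypothetical device numbers of our own invention rather than actual Stratix III or Virtex 5 specifications:

    #include <cstdio>

    int main() {
        // Hypothetical device: both numbers invented for illustration only.
        const int    num_multipliers = 768;    // hard 18x18 multipliers on chip
        const double fmax_mhz        = 550.0;  // maximum rated DSP-block clock

        // The marketing math: every multiplier doing useful work on every
        // clock cycle, forever.
        const double peak_gmacs = num_multipliers * fmax_mhz / 1000.0;

        printf("Peak: %.1f GMACs (assuming 100%% utilization, "
               "which no real design achieves)\n", peak_gmacs);
        return 0;
    }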

This small marketing miscue, however, has nothing to do with the problem.  It turns out that many DSP designers would be perfectly content with only 10-50X the performance they got with a traditional DSP (not the 1000X or so some of the GMACs numbers might lead one to believe).  The real issue was the designer expertise required to do the FPGA design and the fear factor faced by project teams in picking up that gauntlet – even in hopes of enormous performance gains.

Over in the ASIC world, however, it turns out that the EDA industry had been busy working on the same problem.  Instead of licensing a DSP core for your next system-on-chip, you could get much better performance (usually at lower cost and power) by designing a chunk of custom hardware for your specific algorithm.  Even in the lofty world of SoC ASIC design, they don’t like hand-crafting complex software algorithms into even more complex parallel hardware architectures, so some impressive tools were developed that could analyze an algorithm specified in a sequential, procedural language (like C, for example) and create a highly-optimized, parallelized, pipelined, ready-to-synthesize RTL microarchitecture that would put that algorithm into practice.
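
For a flavor of what such a tool consumes, here is a minimal sketch of a sequential C FIR filter (the function name and tap count are our own illustration, not from any vendor example):

    // A plain, sequential FIR filter: one multiply-accumulate per loop
    // iteration as written.  An algorithmic synthesis tool can unroll and
    // pipeline these loops into N parallel multipliers feeding an adder tree.
    const int N = 16;  // number of filter taps

    int fir(const int coeff[N], int shift_reg[N], int sample) {
        // Shift the new sample into the delay line.
        for (int i = N - 1; i > 0; --i)
            shift_reg[i] = shift_reg[i - 1];
        shift_reg[0] = sample;

        // Sequential MAC loop: hardware can execute all N taps at once.
        int acc = 0;
        for (int i = 0; i < N; ++i)
            acc += coeff[i] * shift_reg[i];
        return acc;
    }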

The head of the class in those tools is Mentor Graphics’ Catapult C Synthesis.  High-end ASIC design teams snapped up Catapult in droves, despite its hefty price tag, because they gained enormous productivity benefits from taking algorithms directly from C or C++ to hardware with performance that matched or even bettered what they got from months of hand-crafting RTL architectures.  In the ASIC world, however, projects are large, long, and expensive.  Saving a few engineer-months of converting algorithm to architecture was still peanuts compared with the massive cost and schedule impact of creating and verifying a huge, costly, small-geometry SoC ASIC.  ASIC design teams prized Catapult as much for the flexibility it offered as for the productivity gains.  Need a smaller-area solution with a longer latency?  Just press a few buttons.  Suddenly found out that you need to crunch data twice as fast?  Press a few more and your architecture is completely re-jiggered.  Need a different interface at the borders?  Catapult can handle it for you – retiming all the register-to-register logic as it goes along.
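
To make that flexibility concrete (in entirely illustrative terms; this is the underlying arithmetic, not actual Catapult constraint syntax), consider the 16-tap MAC loop from the sketch above scheduled three different ways:

    Scheduling constraint     Multipliers used    Cycles per result
    Minimize area                    1                   16
    Unroll the loop 4x               4                    4
    Fully unroll                    16                    1

The C source never changes; only the tool’s scheduling constraints do, which is what makes the push-button re-architecting possible.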

In the FPGA space, the benefits of a tool like Catapult C align perfectly with the value proposition of the FPGA itself – getting a complex design to market in absolute minimal time and retaining the flexibility inherent in the programmable logic platform.  Unfortunately, all the work to make Catapult give spectacular results in ASIC land falls apart when the tool starts stitching multipliers together out of LUTs while optimized hard-core Multiply-Accumulate (MAC) units sit idle on the same chip.

This week, Altera and Mentor Graphics announced a collaboration that brings the benefits of high-powered algorithm-to-architecture technology like Catapult C to the FPGA community – in a way that takes advantage of the unique capabilities of both technologies.  The two companies have worked together to produce optimized Catapult libraries specifically for Altera’s FPGAs that allow the tool to understand and infer the high-performance resources already built into the FPGA.  Without these libraries, some of the most powerful design tools in the world (Catapult) and some of the most powerful compute acceleration hardware in existence (Altera’s Stratix III FPGAs) just didn’t play nicely together.  You’d get results, but those results would fall far short of what was possible.

The two companies claim an average of 50-80% “DSP Fmax performance improvement” with the advent of the new libraries.  We’ll take exception to the use of Fmax as a DSP throughput metric, but the sentiment (and the hardware behind it) is what counts here.  Clearly, if you have a DSP algorithm that takes 50 multipliers, and you move those multipliers from LUT fabric to hard-wired, optimized multiply-accumulate devices sitting right on the chip, you gain significant performance, save a bundle of power, and free up all those expensive LUTs for other tasks.  For people targeting Altera devices with DSP algorithms, this is an enormous step forward in performance, cost, and power.
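
Here is a hedged sketch of what that looks like at the source level, assuming Mentor’s bit-accurate Algorithmic C (ac_int) datatypes.  Stratix III DSP blocks natively implement 18x18 multiplies, so explicitly sized operands give the tool multiplications it can map one-for-one onto hard MAC units:

    #include <ac_int.h>  // Mentor's Algorithmic C bit-accurate datatypes

    // 18-bit signed operands match the native 18x18 hard multipliers in
    // Stratix III DSP blocks; the 36-bit product gets two guard bits in
    // the accumulator.
    typedef ac_int<18, true> sample_t;
    typedef ac_int<38, true> acc_t;

    // One multiply-accumulate step.  With the device-aware libraries, the
    // synthesis tool can place this multiply on a hard MAC unit rather
    // than stitching it together from LUTs.
    acc_t mac(acc_t acc, sample_t a, sample_t b) {
        return acc + a * b;
    }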

Using the Catapult C tool will not turn a software engineer into a hardware expert.  Catapult is not a fully-automatic algorithm-to-hardware converter.  It is, instead, a power tool that can assist someone with some hardware design knowledge in very quickly finding an optimal hardware implementation of an algorithm that meets any arbitrary tradeoff between area, power, and performance.  It also affords incredible flexibility in changing that architecture almost on-the-fly as design demands shift during (or after) the project. 

If you’re looking for a tool that’s typical of the “near-free” FPGA norm, be prepared for some serious sticker shock.  Catapult C is as expensive as a ground-breaking productivity tool should be.  Mentor claims that Catapult gives a 4-20X productivity advantage over hand-coding RTL for complex algorithms.  Unless your engineering time is very inexpensive, that kind of productivity boost can pay for even a very expensive tool on the first project.

The benefits of high-level design abstraction on the design process go far beyond productivity, however.  Catapult’s ability to fine-tune the hardware architecture to exactly balance area, performance, and power demands, combined with the FPGA’s time-to-market and flexibility advantages, makes something that, if you squint your eyes a little…  starts to approach the implementation flexibility of traditional DSP processors with orders of magnitude better performance and power consumption.
