About a decade ago, FPGA design followed in the footsteps of ASIC and went language-based. For a very long time, the only question we asked ourselves was “VHDL or Verilog?” It was reminiscent of the “Paper or Plastic?” scenario in the grocery checkout line. Gradually, however, people who weren’t FPGA designers sneaked into the FPGA-designing fold. Who are these folks, anyway? We’ve got DSP engineers, embedded systems designers, board designers, supercomputing folks… the list goes on and on.
Apparently all those new engineers didn’t get the memo about conforming to our established design methodologies, or else they just didn’t feel like becoming experts in VHDL and Verilog. Compounding the problem was the fact that FPGA and EDA companies – money-grubbing monsters that they are – decided to actually cater to these interlopers by giving them gold-plated, easy-as-pie design entry mechanisms that allowed them to almost completely forgo the time-honored traditions of entities and architectures.
If, for whatever reason, you don’t want to hand-code each and every line of your field-programmable masterpiece, you now have abundant options for populating your LUTs with logic of alternative origin. Thanks to all these HDL-phobes, we now have a variety of fun, interesting, sometimes dubious ways to create working FPGA designs without really working on FPGA design. Let’s take a survey of some of the many methods available and emerging for harnessing the power of programmability without dedicating ourselves to the high art of HDL mastery.
Fundamentally, most of the approaches are based on design re-use. If somebody’s already coded up the block we need (and we’re not stricken by a terminal case of acute NIH – “not invented here” – syndrome), we are probably better off re-using their code rather than re-inventing the wheel, or the PCI interface, or… you get the idea. Every FPGA vendor has a library of IP that we can use to round out our design, freeing us to focus on the original bits – the parts that will add value and differentiation when our product goes to market. If you want some of the more advanced stuff, you’ll probably have to pay a bit to license it, but most of it is either free or very inexpensive – FPGA companies don’t want to put up any barriers to our using the maximum possible amount of their silicon.
Beyond the general-purpose IP supplied by FPGA companies, a variety of vendors offer more specific, higher-value blocks we can license for FPGA use. A few years ago there was a divide between ASIC-appropriate and FPGA-appropriate IP, but today almost all IP intended for ASIC use will at least work in an FPGA, because most ASIC designs are prototyped and verified in FPGAs before they go to tapeout. This means that the rich IP libraries from companies like Synopsys consist mostly of FPGA-friendly IP.
Recently, even high-end licensable IP like sophisticated 32-bit RISC processor cores has been coming over to the FPGA world. While ASIC-optimized processor IP would probably be functionally usable in FPGAs, optimizing that IP for FPGAs is a new trend. A couple of years ago, ARM rolled out an FPGA-optimized version of their Cortex architecture – the Cortex-M1 – with specific design features to make it FPGA-friendly. Designing a processor core to take advantage of the specific structure of FPGA fabric yields considerably more performance and a smaller footprint than simply synthesizing ASIC processor IP for the same FPGA. ARM’s core is also code-compatible with their ASIC versions, making a prototype-to-production upgrade a non-event on the software side.
Of course, the FPGA vendors themselves have long offered sophisticated processor cores as synthesizable IP for their own devices. Altera’s Nios architecture and Xilinx’s MicroBlaze architecture form the basis of sophisticated system-on-chip platform capabilities in the respective companies’ FPGAs. The advent of FPGA-based embedded systems built around these processor cores has created a demand for simplified FPGA programming, allowing embedded software engineers to create their own FPGA-based designs without the need to learn VHDL or Verilog. FPGA and EDA companies have responded with drop-dead simple “platform creation” tools that allow even a novice to stitch together a complex system-on-chip by dragging and dropping processor cores, buses, memories, controllers, peripherals, and just about anything else you might want in a typical SoC. All of this can be accomplished (according to both marketing materials and independent verification by our editorial staff) in just a few minutes with a few mouse clicks and no VHDL or Verilog whatsoever. Embedded software developers are then free to do what they are trained to do – develop tight, efficient code to run on embedded devices.
As general-purpose processing goes, FPGAs are pretty pedestrian. You won’t be setting any speed records with typical embedded applications running under an OS like Linux on a normal FPGA soft-core processor. FPGAs do blow away the speed limits, however, when it comes to things like signal-processing algorithms that can take advantage of datapath parallelism. If you’re one of the multitudes of DSP designers who ran out of processing power on the highest-end DSPs, you may have turned to FPGAs to dig you out of the hole. FPGAs can deliver far more GMACs (giga multiply-accumulates per second – the standard by which many DSPs measure themselves) than any conventional DSP processor – and at much lower power. Of course, to woo all those DSP dudes over to FPGAs, we once again need to break down the HDL barrier. DSP programmers are smart people, but most of them have spent their careers fine-tuning their MATLAB and C programming skills, not mastering the hardware-centric semantics of typical HDLs.
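To see what all the MAC fuss is about, consider the inner loop of a FIR filter – the workhorse of DSP. Here’s a minimal sketch in plain C (the names and the filter length are ours, chosen for illustration, not taken from any particular design):

    #define TAPS 64   /* illustrative filter length */

    /* Inner loop of a FIR filter: a dot product of coefficients
     * against a window of recent samples. A sequential DSP grinds
     * through this one (or a few) multiply-accumulates per cycle;
     * FPGA fabric can instantiate one multiplier per tap and fire
     * all 64 MACs in parallel on every clock. */
    int fir_output(const short coeff[TAPS], const short window[TAPS])
    {
        int acc = 0;
        for (int i = 0; i < TAPS; i++)
            acc += coeff[i] * window[i];   /* one MAC per tap */
        return acc;
    }

Sixty-four multipliers chewing in lockstep is exactly the kind of arithmetic FPGAs were born for – and exactly the kind of thing nobody wants to describe gate-by-gate in VHDL.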
A number of approaches have emerged to help DSP designers adapt and adopt FPGAs as their MAC-munching monsters. The simplest approach starts with the most commonly used algorithm development environment – The MathWorks’ MATLAB. MATLAB has always been known as a general-purpose math engine for everyone from calculus professors to market analysts to geological survey engineers. A few years ago, the company apparently noticed that a lot of signal processing people were using MATLAB in a very specific way, and they came up with a companion product – Simulink – that addressed a lot of the shortcomings of MATLAB for that application, as well as commanding a much higher ASP (average selling price) in the lucrative electronics engineering market. Simulink facilitates model-based design – stitching together common algorithmic blocks graphically to model a datapath or the data flow of a complex algorithm. After simulating and tuning the algorithm to give the right results, of course, we still have the not-so-simple problem of getting that algorithm implemented efficiently in FPGA hardware.
To make the jump from the MathWorks world into LUT fabric, a number of approaches can be used. The simplest methods take the Simulink building blocks and replace them with chunks of HDL code, stitched together in the same topology as the Simulink model. While this gives results quickly and easily, it is often far from optimal and can become cumbersome when your algorithm can’t be cleanly carved up into a simple network of standard functions.
At the high end of this spectrum are high-level synthesis tools. These programs take a sequential, behavioral description of an algorithm in a language like C or C++ (or a specialized dialect such as SystemC) and perform scheduling, resource allocation, and interface generation to produce an optimized, parallelized datapath and control network that implements that algorithm in hardware. Mentor’s Catapult C, Forte’s Cynthesizer, Impulse’s Impulse C, and a number of other products perform this function in various ways. The key tradeoffs in this space are the coding style required to keep the tool happy, the quality and efficiency of the finished result, the complexity of the tools themselves, and the cost.
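To make that concrete, here’s the same MAC loop from our FIR sketch, written the way a high-level synthesis tool might want to see it. Fair warning: the pragma names below are invented for illustration – every real tool has its own dialect of directives – but the division of labor is typical: we write an ordinary sequential loop, add a hint or two, and the tool handles the scheduling and allocation.

    #define TAPS 64

    /* Sequential C source for a hypothetical HLS tool. The pragmas
     * are made-up stand-ins for whatever directives your tool of
     * choice actually accepts. */
    int fir_hls(const short coeff[TAPS], const short window[TAPS])
    {
        int acc = 0;
        #pragma HLS_UNROLL    /* hypothetical: replicate the body, one multiplier per tap */
        #pragma HLS_PIPELINE  /* hypothetical: accept a new sample window every clock */
        for (int i = 0; i < TAPS; i++)
            acc += coeff[i] * window[i];
        return acc;
    }

The coding-style tradeoff mentioned above lives right here: stray too far from the loop structures the tool recognizes, and the quality of results falls off a cliff.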
Closely related to the DSP design problem is the long-hyped arena of reconfigurable computing. Here we have a small but important community working to overcome the inherent power limitations of conventional processors – even those in massive modern arrays – by using FPGAs to accelerate parallelizable algorithms. The difficulty in this area has always been the programming model. It is fairly straightforward to interface a bunch of FPGA fabric in a way that allows data and control to transfer smoothly to a conventional supercomputing system, but writing the hardware-specific code to make that combination work efficiently has proven too much for all but the most determined developers. Here, unlike DSP, the challenge is to come up with a way to make software written specifically for von Neumann architectures translate smoothly into hardware implementations. The most effective approaches in this area are still those that add hints to the code as to which structures can be parallelized, rather than relying on automation to sort it all out.
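For flavor, here’s a minimal sketch of the host-side view of such a system. The fpga_* calls are invented for illustration (and stubbed out so the sketch stands alone) – every real platform has its own API – but they show how little of the problem the easy part solves: the data moves just fine, and somebody still has to produce an efficient hardware kernel on the other end.

    #include <stddef.h>
    #include <string.h>

    /* Hypothetical platform calls, stubbed so this compiles standalone.
     * On a real system these would talk to an FPGA board over PCIe or
     * the like, and fpga_run() would be an actual hardware kernel. */
    static float device_mem[1024];                     /* pretend on-board memory */

    static void fpga_write(const float *buf, size_t n) /* stream input to the device */
    {
        memcpy(device_mem, buf, n * sizeof *buf);
    }

    static void fpga_run(size_t n)                     /* stand-in "kernel": scale by 2 */
    {
        for (size_t i = 0; i < n; i++)
            device_mem[i] *= 2.0f;
    }

    static void fpga_read(float *buf, size_t n)        /* stream results back */
    {
        memcpy(buf, device_mem, n * sizeof *buf);
    }

    /* The host's whole job: ship data over, kick off the kernel,
     * read the results back. Assumes n <= 1024 for this sketch. */
    void accelerate(const float *in, float *out, size_t n)
    {
        fpga_write(in, n);
        fpga_run(n);
        fpga_read(out, n);
    }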
Besides the ASIC verifiers, the embedded software engineers, the DSP designers, and the supercomputing crowd, there are a lot of engineers skilled in board-level system design who need FPGAs to add a little (or a lot of) flexibility to their designs. For most of them, VHDL and Verilog weren’t at the top of the curriculum either. Luckily, companies like Altium found that niche and provided an FPGA design flow that looks just about like plain-old PCB engineering. Using Altium’s library of virtual components, you stitch together a schematic that looks for all the world like a board design, then compile it into FPGA logic without ever worrying about the underlying HDL, synthesis, or place-and-route.
With all these alternative points of entry, it may get hard to find a good, old-fashioned FPGA designer who writes HDL, simulates it for functional correctness, synthesizes it with a logic-synthesis tool, and irons out all the timing violations with synthesis and place-and-route tuning options. Already today there are engineers implementing FPGA designs without really even knowing it. As it often goes in high-tech, the mainstream becomes the endangered species before we even know it’s happening.