
FPGAs for the Masses?

Freeing FPGA Implementation from the Hardware Designer's Grip

Over the years, there have been many attempts to make FPGAs easier to use, and most of them now occupy the footnotes of FPGA history. So when I got a note from Stéphane Monboisset introducing me to a new FPGA design tool called QuickPlay from a company called PLDA, I was about to send a polite, “Thanks, but no thanks,” when I remembered where I had last met Stéphane. It was when Xilinx was launching Zynq, and he was very successfully handling the European aspects of the launch, including the press conversations. The fact that he had moved from Xilinx to PLDA made me take it more seriously.

The context in which QuickPlay has been developed is this: while FPGAs can offer a great way of developing products, and can often provide performance and other advantages over using, say, multicore processors and GPUs, they are now complex beasts. They need experienced hardware engineers to implement the intricate designs that provide those performance advantages, and projects can take a long time to get from initial concept to a debugged device that is ready for delivery. QuickPlay is intended to open up the development process to software people and to improve the speed and quality of implementation.

It was, for me, a little difficult to get a grasp on exactly what QuickPlay is. Let’s start with what it is not. It is not a complete tool chain for FPGAs – it relies on the synthesis and place-and-route tools from the device manufacturer. It is not an all-purpose tool; it presumes that you will be developing a system based on one of a (wide) range of boards. But it is a way of developing systems around an FPGA that is accessible to people who are not hardware experts.

Stepping back a bit, QuickPlay already has an ecosystem in place – the QuickAlliance programme – with partners who supply pre-validated IP cores and libraries in VHDL, Verilog, and C; board partners; partners who provide tools that integrate into the QuickPlay design flow; and suppliers of design services.

The underlying model of the QuickPlay approach is based on how a software developer thinks – that a design is a number of functions that communicate with one another and with the outside world. For the hardware implementation, these functions are treated as kernels.

The design flow is to create a software-functional model – using C or C++ implementations of the kernels, taken from the IP library or built from scratch – and to verify the model using standard tools and methodologies. Once the model is clean, a hardware engine is generated for an FPGA platform, along with standard I/O.

Now, clearly, there is a great deal more going on under the hood, so let’s go back and look at the flow in a little more detail. The QuickPlay IDE, which is Eclipse-based, provides a workspace for dragging and dropping functions from libraries of C, C++, or even HDL blocks. These can come from PLDA or from the ecosystem, or they can be developed in-house. Streaming communication channels link the functions, and these can include links to memory blocks and system I/O. Parallelism is supported by simply duplicating functions.
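
To make that model concrete, here is a minimal sketch, in plain C, of what a pair of kernels joined by a streaming channel might look like in the software-functional model. The stream_t type, the two kernels, and system_model() are all invented for illustration – they are not the actual QuickPlay API.

/* A minimal sketch of kernels joined by streaming channels, assuming a
 * hypothetical stream_t type – this is not the actual QuickPlay API, just
 * an illustration of the functions-plus-streams mental model.            */
#include <stdint.h>
#include <stddef.h>

typedef struct {           /* stand-in for a streaming channel            */
    int32_t *data;         /* backing buffer in the software model        */
    size_t   len;          /* number of valid samples in the stream       */
} stream_t;

/* Kernel 1: scale every sample by a constant factor. */
static void scale_kernel(const stream_t *in, stream_t *out, int32_t factor)
{
    for (size_t i = 0; i < in->len; i++)
        out->data[i] = in->data[i] * factor;
    out->len = in->len;
}

/* Kernel 2: add a fixed offset to every sample. */
static void offset_kernel(const stream_t *in, stream_t *out, int32_t offset)
{
    for (size_t i = 0; i < in->len; i++)
        out->data[i] = in->data[i] + offset;
    out->len = in->len;
}

/* The "system": two kernels joined by an intermediate stream – the same
 * kind of graph that would be assembled by drag and drop in the IDE.     */
void system_model(const stream_t *in, stream_t *out, int32_t *scratch)
{
    stream_t mid = { scratch, 0 };
    scale_kernel(in, &mid, 2);       /* first function in the chain  */
    offset_kernel(&mid, out, 1);     /* second function in the chain */
}

In this picture, parallelism really is just duplication: a second instance of scale_kernel working on another slice of the input stream.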

When the system model is assembled, it can be compiled and then exercised with a test program to verify its behaviour – for example, is the output from the system appropriate for the input? The IDE includes open-source debugging and other tools.
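
Continuing the sketch above, such a test program might be no more than a main() that drives a known input through system_model() and compares the result against a hand-computed reference. Again, this is hypothetical rather than QuickPlay’s actual test harness.

/* Hypothetical test driver for the software-functional model. It is meant
 * to be compiled together with the kernel sketch above, from which it
 * reuses stream_t and system_model().                                     */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

int main(void)
{
    int32_t src[4]     = { 1, 2, 3, 4 };
    int32_t scratch[4] = { 0 };
    int32_t dst[4]     = { 0 };
    int32_t expect[4]  = { 3, 5, 7, 9 };   /* 2*x + 1 for each input sample */

    stream_t in  = { src, 4 };
    stream_t out = { dst, 0 };

    system_model(&in, &out, scratch);      /* run the software model        */

    for (size_t i = 0; i < 4; i++) {
        if (dst[i] != expect[i]) {
            printf("FAIL at sample %zu: got %d, expected %d\n",
                   i, (int)dst[i], (int)expect[i]);
            return 1;
        }
    }
    printf("PASS: output matches the reference for all samples\n");
    return 0;
}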

Once the software model is performing according to specification, it is mapped onto an appropriate target board with an FPGA and physical interfaces. The FPGA is implemented using the manufacturer’s tools from within the IDE. The board can then be exercised with a wide range of tests, in parallel with the software model. Because QuickPlay guarantees functional equivalence between the software model and the hardware implementation, any bug found in the hardware also exists in the software. There is no need for hardware debugging: fixing the software and re-targeting to the FPGA automatically fixes the hardware.

What are the benefits? We have seen that the developer doesn’t need to know what is happening at any level much below the kernel functions. There is no need to worry about clock networks, timing, communication stacks and protocols, or the raft of other issues that normally absorb so much of a hardware engineer’s energy. The implementation is correct by construction, so there is no need for hardware-level verification and debugging.

The promise of creating hardware without hardware engineers has been a holy grail for some for many years. Every few years there is an announcement that this has been achieved, only for the breakthrough to gradually fade away (Handel-C, anyone?). With QuickPlay, we may be getting there, even if only for a subset of all possible applications. See for yourself in the selection of video demos at https://www.youtube.com/channel/UCM-Loe1CQ6iNgC32ndtWakA

13 thoughts on “FPGAs for the Masses?”

  1. “The promise of creating hardware without hardware engineers has been a holy grail for some for many years.”

    This is similar to the current holy grail of creating software without software engineers. Quality has dropped, deployment sizes have grown, and applications are systematically slower despite the big increase in computational power.

    So, honestly, I don’t think it’s the way to go, long-term. Time will tell.

    Just a small note regarding your title, “FPGAs for the Masses”: I was honestly expecting something different – a radical decrease in FPGA prices from some vendor, at least for the low-volume or hobbyist market. That would be the best news around – being able to buy a decent FPGA (similar to a Xilinx Spartan-6 or Altera Cyclone IV, or even newer families) for less than US$4 apiece.

    Unfortunately, I don’t think this is going to happen any time soon. But you can buy a GHz ARM SoC chip for only a few cents… only sales volume explains the cost difference.

