
Best Practices for FPGA Prototyping of MATLAB and Simulink Algorithms

As the complexity of modern FPGAs and ASICs increases, engineers are discovering that verification using HDL simulators alone is not enough to fully test system-level design requirements in an efficient and timely manner. 

Many engineers are now deploying FPGAs for algorithm acceleration and prototyping. Using FPGAs to process large test data sets enables engineers to rapidly evaluate algorithm and architecture tradeoffs and test designs under real-world scenarios without incurring the heavy time penalty associated with HDL simulators. System-level design and verification tools like MATLAB and Simulink enable engineers to realize these benefits by rapidly prototyping their algorithms on FPGAs.

This paper describes Model-Based Design best practices for creating FPGA prototypes with MATLAB and Simulink. The best practices are listed below and highlighted in Figure 1.

(1)   Analyze the effect of fixed-point quantization early in the design process and optimize the word length to yield smaller and more power-efficient implementations

(2)   Use automatic HDL code creation to produce FPGA prototypes faster

(3)   Reuse system-level test benches with HDL cosimulation to analyze HDL implementations using system-level metrics

(4)   Accelerate verification with FPGA-in-the-loop simulation

[Figure 1]

Why prototype on FPGAs?

Prototyping algorithms on an FPGA increases confidence that the algorithm will work in the real world. In addition to running test vectors and simulation scenarios at speed, engineers can use FPGA prototypes to exercise software functionality and adjacent system-level functions, such as RF and analog subsystems.

Moreover, because FPGA prototypes run faster, larger data sets can be used, potentially exposing bugs that would not be uncovered by a simulation model.

Model-Based Design using HDL code generation enables teams to produce the first prototype faster than a manual workflow, as illustrated in Figure 2. In addition, this approach lets engineers make algorithm changes at the system level, rather than at the implementation level, which accelerates hardware iterations.

[Figure 2]

Digital Down Converter Case Study

To illustrate best practices for FPGA prototyping using Model-Based Design, a digital down converter (DDC) will be used as a case study. A DDC is a common building block in many communications systems (see Figure 3). It transforms a high-rate passband input, which requires substantial computational resources to process, to a low-rate baseband output, which can be easily processed by less computationally intensive DSP algorithms.

[Figure 3]

The main components of a DDC are listed below (see Figure 4); a minimal MATLAB sketch of this structure follows the figure:

–        Numerically controlled oscillator (NCO)

–        Mixer

–        Digital filter chain

[Figure 4]
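
To make the data flow concrete, here is a minimal MATLAB sketch of the DDC structure. The frequencies, rates, and filter order are illustrative rather than taken from the case study, and fir1 assumes the Signal Processing Toolbox:

% Minimal DDC sketch: NCO, mixer, and a decimating low-pass filter.
% All numbers are illustrative; fir1/filter require the Signal Processing Toolbox.
fs  = 64e6;                             % passband (input) sample rate
fc  = 20e6;                             % carrier frequency
R   = 16;                               % overall decimation factor
n   = (0:8191).';

x   = cos(2*pi*fc*n/fs) .* (1 + 0.5*cos(2*pi*100e3*n/fs));   % passband input

nco = exp(-1j*2*pi*fc*n/fs);            % numerically controlled oscillator
v   = x .* nco;                         % mixer: translate passband to baseband

h   = fir1(128, 1/R);                   % low-pass filter, cutoff near fs/(2R)
w   = filter(h, 1, v);                  % digital filter chain (one stage here)
y   = w(1:R:end);                       % decimate to the low-rate baseband output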

Best Practice #1 – Analyze the Effect of Fixed-Point Quantization Early in the Design Process

Engineers typically test new ideas and develop initial algorithms using floating-point data types. Hardware implementation in FPGAs and ASICs, however, requires a conversion to fixed-point data types, which often introduces quantization errors.  In a manual workflow, fixed-point quantization is usually performed during the HDL coding process. In this workflow, the engineer cannot easily quantify the effect of fixed-point quantization by comparing the fixed-point representation to a floating-point reference. Nor is it easy to analyze the HDL implementation for overflows. 

To make intelligent decisions about the required fraction lengths, engineers need a way to compare floating-point simulation results against fixed-point simulation results before starting the HDL coding process. Increasing the fraction length reduces quantization error; however, it also increases the word length, and with it the area and power consumption of the implementation.
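
Before any HDL is written, this comparison can be scripted in MATLAB. A minimal sketch using fi objects from Fixed-Point Designer, with an illustrative signal and illustrative word and fraction lengths:

% Sketch: measure the error introduced by fixed-point quantization.
% The signal and the data-type choices are illustrative, not from the DDC model.
fs = 1e6;                               % sample rate (Hz)
n  = 0:1023;
x  = 0.9*sin(2*pi*50e3*n/fs);           % floating-point reference

wl = 12;                                % word length (bits)
fl = 10;                                % fraction length (bits)
xq = fi(x, 1, wl, fl);                  % signed fixed-point version

err = x - double(xq);                   % quantization error
fprintf('wl = %d, fl = %d: max |error| = %g\n', wl, fl, max(abs(err)));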

[Figure 5]

For example, Figure 5 illustrates the differences between the floating-point and fixed-point simulation results for stage 1 of the low-pass filter in the DDC filter chain. These differences are due to fixed-point quantization. The top plot shows the floating-point and fixed-point simulation results overlaid; the bottom plot shows the quantization error at each point. Depending on the design specification, engineers may need to increase fraction lengths to reduce the quantization error.

In addition to selecting a fraction length, engineers must optimize the word length to achieve low-power and area-efficient designs. 

[Figure 6]

In the DDC case study, Simulink Fixed Point was used to reduce the word length of parts of the digital filter chain by as many as 8 bits (see Figure 6).
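
The same kind of word-length exploration can be scripted in MATLAB. A sketch that sweeps word lengths against a hypothetical error budget, again with fi objects and illustrative numbers:

% Sketch: find the smallest word length that meets an error budget.
% The tolerance and the two non-fractional bits (sign + integer) are
% illustrative choices.
x   = 0.9*sin(2*pi*(0:1023)/64);        % floating-point reference
tol = 1e-3;                             % hypothetical maximum allowed |error|
for wl = 8:16
    e = max(abs(x - double(fi(x, 1, wl, wl-2))));
    fprintf('wl = %2d bits: max |error| = %8.2e\n', wl, e);
    if e < tol
        fprintf('smallest word length meeting the budget: %d bits\n', wl);
        break
    end
end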

Best Practice #2 – Use Automatic HDL Code Creation to Produce FPGA Prototypes Faster

HDL code is required to produce an FPGA prototype. Traditionally, Verilog or VHDL code has been written by hand. As an alternative, generating HDL code automatically using Simulink HDL Coder offers several important benefits. Engineers can:

–        Quickly assess if the algorithm can be implemented in hardware

–        Rapidly evaluate different algorithm implementations and choose the best one

–        Prototype algorithms on FPGAs faster

For the DDC case study, we generated 5780 lines of HDL code within 55 seconds. The code is readable and readily understood by engineers (see Figure 7). Automatic code generation enables engineers to make changes in the system-level model, and produce an updated HDL implementation in minutes by regenerating the HDL code.
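
With HDL Coder, generation can also be driven from the MATLAB command line. A sketch assuming a Simulink model named ddc containing a DDC subsystem (both names hypothetical):

% Sketch: generate HDL and an HDL test bench from a Simulink subsystem.
% The model name 'ddc' and the subsystem path 'ddc/DDC' are hypothetical.
load_system('ddc');                              % load the model without opening it
makehdl('ddc/DDC', 'TargetLanguage', 'VHDL');    % generate HDL for the subsystem
makehdltb('ddc/DDC');                            % generate a matching HDL test bench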

[Figure 7]

Best Practice #3 – Reuse System-Level Test Benches for HDL Verification with HDL Cosimulation

HDL cosimulation enables engineers to reuse Simulink models to drive stimuli into the HDL simulator and perform system-level analysis of the simulation output interactively (Figure 8).

[Figure 8]

Whereas HDL simulation provides only digital waveform output, HDL cosimulation provides complete visibility into the HDL code as well as access to the full suite of system-level analysis tools of Simulink.  When a difference between expected results and HDL simulation results is observed, cosimulation enables engineers to better understand the system level effect of the mismatch.

For example, in Figure 9, the spectrum scope view enables the engineer to make an informed decision to ignore the mismatch between the expected results and the HDL simulation results, because the differences lie in the stop-band. The digital waveform output, in contrast, simply flags the mismatch as an error. The engineer may eventually arrive at the same conclusion, but only after completing additional analysis.
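
This kind of judgment can also be scripted. The sketch below checks how much of the mismatch energy falls in the passband; the toy signals stand in for the DDC reference output and the HDL cosimulation output, and the sample rate and stop-band edge are illustrative:

% Toy stand-ins so the sketch runs on its own; replace 'expected' and
% 'actual' with the reference output and the HDL cosimulation output.
fs    = 1e6;  N = 4096;  t = (0:N-1)/fs;
expected = sin(2*pi*50e3*t);
actual   = expected + 1e-4*sin(2*pi*300e3*t);   % mismatch in the stop-band
fstop = 200e3;                          % filter stop-band edge (illustrative)

err  = expected - actual;               % mismatch to be classified
f    = (0:N-1)*fs/N;                    % FFT bin frequencies
E    = abs(fft(err)).^2;                % error power per bin
keep = f < fs/2;                        % one-sided spectrum
pass = sum(E(keep & (f < fstop)));      % error power below the stop-band edge
fprintf('passband share of error power: %.1f%%\n', ...
        100*pass/max(sum(E(keep)), eps));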

[Figure 9]

Best Practice #4 – Accelerate Verification with FPGA-in-the-loop Simulation

Having been verified through HDL simulation or cosimulation, the DDC algorithm is now ready to be deployed on an FPGA target platform. FPGA-based verification of the algorithm, also referred to as FPGA-in-the-loop simulation, increases confidence that the algorithm will work in the real world and enables engineers to run test scenarios faster than traditional HDL simulation.

For the DDC algorithm, the Simulink model is used to drive FPGA input stimuli and to analyze the output of the FPGA (Figure 10).  As with HDL cosimulation, the results are available in Simulink for analysis. 
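
In this workflow, FPGA-in-the-loop simulation is provided by HDL Verifier. A sketch of the setup from the MATLAB command line; the generated model name is hypothetical:

% Sketch: FPGA-in-the-loop with HDL Verifier. The FIL Wizard builds a
% Simulink model containing a FIL block that communicates with the FPGA
% board; 'ddc_fil' is a hypothetical name for that generated model.
filWizard;             % interactive wizard: select the HDL sources and target board
out = sim('ddc_fil');  % run the FIL model; results return to Simulink for analysis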

[Figure 10]

Table 1 compares the two verification methods, HDL cosimulation and FPGA-in-the-loop simulation, used for the DDC design.

In this case, FPGA-in-the-loop simulation was 23 times faster than HDL cosimulation. Such speed increases enable engineers to run more extensive sets of test cases and to perform regression tests on their designs, identifying potential problem areas that need more detailed analysis.

Though slower, HDL cosimulation provides more visibility into the HDL code. It is, therefore, well suited for more detailed analysis of the problem areas that are found during FPGA-in-the-loop simulation.

[Table 1]

Summary

Following the four best practices outlined in this article enables engineers to develop FPGA prototypes much faster and with a greater degree of confidence than a traditional, manual workflow.  In addition, engineers can continue to refine their models throughout development and rapidly regenerate code for FPGA implementation. This capability enables much shorter design iterations than a traditional workflow that relies on hand-written HDL.  To learn more about the workflow outlined here or to download a technical kit, visit http://www.mathworks.com/programs/techkits/techkit_asic_response.html
