
Selecting the Right FPGA Synthesis Tool

Looking Beyond the Obvious

When it comes to successful FPGA implementation, synthesis serves as arguably the most influential step in ensuring that design requirements are met and the product can be shipped. In many cases, design requirements refer to performance and area—the design needs to operate at a minimum frequency and it needs to fit into the selected device. Hence, designers or CAD managers looking to standardize on a synthesis tool tend to look for good out-of-the-box quality-of-results (QoR).

The criteria for selecting the right synthesis tool, however, can and should involve more than this. Adequate out-of-the-box QoR is a legitimate starting point, but it is not the whole story in terms of a successful FPGA design methodology. Often, to meet aggressive QoR goals, constraints need to be refined or small portions of RTL need to be re-coded, and without help from the tool it is difficult to identify these areas. In other cases QoR goals may have been met, but design changes are constantly being introduced: either the new changes degrade the previous QoR results, or the tool's long runtime for iterating on each change delays the project schedule. FPGA designers and managers must take these other scenarios into account and understand how to address them before selecting the right tool.
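By way of illustration, the hypothetical Verilog fragment below shows the kind of small RTL re-coding this often involves: a multiply-accumulate computed in a single cycle is split across two pipeline stages to shorten the critical path. The module and signal names are invented for this example, and the right fix in practice depends on what the tool's timing reports actually show.

// Hypothetical example: a wide multiply-add computed in one cycle can
// become the critical path at higher clock rates.
module mac_flat (
    input  wire        clk,
    input  wire [17:0] a, b,
    input  wire [35:0] c,
    output reg  [36:0] y
);
    always @(posedge clk)
        y <= (a * b) + c;          // multiply and add in a single cycle
endmodule

// Re-coded with an intermediate register so the multiplier and adder each
// get a full cycle; latency grows by one, but the clock rate can improve.
module mac_pipelined (
    input  wire        clk,
    input  wire [17:0] a, b,
    input  wire [35:0] c,
    output reg  [36:0] y
);
    reg [35:0] prod;
    reg [35:0] c_q;
    always @(posedge clk) begin
        prod <= a * b;             // stage 1: multiply
        c_q  <= c;                 // align c with the product
        y    <= prod + c_q;        // stage 2: add
    end
endmodule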

Quality-of-Results

As mentioned, a tool's ability to produce adequate quality-of-results is an important criterion and should not be downplayed. If the FPGA cannot operate at the required frequency, the product simply cannot ship. And if the design cannot fit into the selected device, then using a larger device may affect the product's cost structure. It should come as no surprise that a common reason users decide to move to third-party EDA synthesis tools is that they are not meeting their QoR requirements with the free vendor tools.

While there are many features contributing to QoR, such as advanced technology inference, retiming, and resource sharing, one capability to consider carefully is the tool's physical synthesis support. Physical synthesis, as opposed to standard RTL synthesis, is a method of synthesizing RTL based on the physical characteristics of the target FPGA device. Whereas regular RTL synthesis considers only logic cell delays and wire-load timing models in its algorithms, physical synthesis produces an optimized netlist by taking into account the placement, or potential placement, of logic on the device and its routing resources. Different vendors take different approaches to physical synthesis, but the goal is the same: improve design performance, particularly for high-end devices or designs with aggressive performance goals.
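Physical synthesis itself is controlled through tool settings rather than the source code, but an optimization such as resource sharing is easy to picture in RTL terms. In the hypothetical Verilog sketch below, the two multiplies are mutually exclusive, so a tool that performs resource sharing can implement them with a single multiplier and multiplexed inputs rather than two multipliers.

// Hypothetical illustration of resource sharing: only one of the two
// multiplies is active in any given cycle, so a sharing-capable tool can
// build one multiplier with a multiplexer on its inputs.
module shared_mult (
    input  wire        clk,
    input  wire        sel,
    input  wire [15:0] a, b, c, d,
    output reg  [31:0] y
);
    always @(posedge clk)
        if (sel)
            y <= a * b;
        else
            y <= c * d;
endmodule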

Designers should pay particular attention to the device support offered by the vendor's physical synthesis. An impressive optimization technology is of limited use if it is only available for a handful of devices. Even if it supports the device of their current project, designers typically want the freedom to switch devices or FPGA families and still have all of the tool's optimization technologies at their disposal. Ease of use is another factor. Traditional methods of FPGA physical synthesis require expert knowledge of the chip, but many users may not want to invest time acquiring such device knowledge and instead seek a more push-button physical synthesis flow.

Design Analysis

As design complexity continues to increase, design analysis remains a critical component of the synthesis flow. In an ideal scenario, a design is imported into the tool, a button is pushed, and a netlist is generated that meets all requirements, with no analysis needed. Unfortunately, this scenario is neither common nor realistic. Few engineers will get the design and timing constraints right the first time, and for aggressive QoR goals, synthesis trade-offs may need to be explored by the user. A good tool provides analysis features that help the user examine constraints, examine the design itself, and work through architectural trade-offs to meet timing or area goals.

Particularly important is the ability to analyze trade-offs when optimizing for either performance or area. Descriptive timing reports, graphical views of critical paths, and the ability to cross-probe to relevant HDL are examples of the analysis features needed to understand QoR bottlenecks.

Another example is the ability to graphically analyze and control the allocation of embedded memory or DSP resources on the device. Embedded resources are specialized areas on the FPGA that can map certain operators more efficiently for performance, area, or both. The ability to graphically analyze how RTL functions will be mapped, before synthesis is performed, allows a user to make changes as needed to obtain superior results. Figure 1 shows a graphical view of all operators in a design that can potentially be mapped to embedded memory or DSP resources. Equally important is the ability to cross-probe from such a view to the relevant design schematic or HDL source.

Figure 1: Graphical Resource Analysis
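As a hypothetical example of the kind of operator such a view would flag, the Verilog below describes a simple dual-port memory in the inference style most synthesis tools recognize. It can typically be mapped either to an embedded block RAM or to distributed logic, and the attribute syntax for steering that choice varies from tool to tool, so the comment below names no specific attribute as authoritative.

// Hypothetical inferred RAM: a registered write and a synchronous read of a
// memory array is a pattern most synthesis tools can map to an embedded
// block RAM, or to distributed/LUT memory if block resources are needed
// elsewhere. Attribute names for controlling the mapping differ between
// tools, so check the documentation for the tool in use.
module dpram #(
    parameter DW = 16,
    parameter AW = 10
)(
    input  wire          clk,
    input  wire          we,
    input  wire [AW-1:0] waddr, raddr,
    input  wire [DW-1:0] wdata,
    output reg  [DW-1:0] rdata
);
    reg [DW-1:0] mem [0:(1<<AW)-1];

    always @(posedge clk) begin
        if (we)
            mem[waddr] <= wdata;
        rdata <= mem[raddr];        // synchronous read: block-RAM friendly
    end
endmodule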

In fact, the ability to cross-probe to and from different graphical and textual views is foundational to a good analysis environment. A user should be able to cross-probe from generated reports to the post-synthesis technology schematic, the generic RTL schematic, and the relevant HDL code in order to understand each issue in its proper context.

Incremental Flows

Even when QoR goals are met, they risk being lost in subsequent iterations of the design. Time spent meeting performance goals on critical blocks may be wasted when changes are made in other areas of the design. This, combined with the time lost to long synthesis and place-and-route runtimes, can jeopardize the project schedule and delivery. For this reason, a tool's incremental flow is an important consideration.

There are a variety of incremental flows available, and some of these require pre-planning, such as the block-based partition flow. Perhaps the ideal solution to look for is a fully automatic incremental synthesis flow, where the user does not need to identify logic areas that may potentially be changed in future runs. Equally important is that the flow not prevent full optimization of the design to obtain best results. Coupled with an incremental place-and-route flow, such a solution can provide significant time savings and QoR preservation.
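While the incremental behavior itself lives in the tool flow rather than in the RTL, designs that keep stable, timing-critical logic behind clean module boundaries give both block-based and automatic incremental flows the best chance of preserving earlier results. The sketch below is purely illustrative; the module names and contents are invented for this example.

// Hypothetical top level partitioned so an incremental flow can preserve
// results for the block that did not change between runs.
module stable_core (
    input  wire        clk,
    input  wire [15:0] din,
    output reg  [15:0] dout
);
    // Timing-critical block that has already met its goals.
    always @(posedge clk)
        dout <= din + 16'd1;
endmodule

module changing_ctrl (
    input  wire        clk,
    input  wire [15:0] din,
    output reg  [15:0] dout
);
    // Block still being modified from build to build.
    always @(posedge clk)
        dout <= din ^ 16'hA5A5;
endmodule

module fpga_top (
    input  wire        clk,
    input  wire [15:0] rx_data,
    output wire [15:0] tx_data
);
    wire [15:0] core_out;
    stable_core   u_core (.clk(clk), .din(rx_data),  .dout(core_out));
    changing_ctrl u_ctrl (.clk(clk), .din(core_out), .dout(tx_data));
endmodule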

Conclusion

There are naturally other criteria to consider for FPGA synthesis, depending on the application. The point to remember is that selecting the right tool is not a one-dimensional exercise. Even when performance or area is the highest priority, capabilities such as analysis and incremental support can directly relate to quality-of-results. Designers and managers should realistically consider the various stages of their RTL-to-FPGA-implementation flow and select the synthesis tool that best supports each stage of the flow.

