
A New Spin on FPGA Re-spins

Back when FPGAs were simpler devices, in-system debug was sufficient. Turning around a re-spin in response to a specification violation found on the bench was quick and easy. Life was great, since re-spins were essentially “free”. This is no longer the case today. One company recently spent three entire months incorporating just one late-arriving specification change, because the design team struggled to meet requirements after making that single change. This is not an isolated case; painful re-spins are increasingly common. Clearly, this particular re-spin cost the customer dearly. So, what was different? The customer was designing a platform FPGA.

Platform FPGAs are pretty amazing products that offer excellent value to customers through increased capacity and many differentiating capabilities, such as on-chip dedicated resources for storage, communications and DSP. Platform FPGAs present many new opportunities for using programmable logic that might not otherwise have been feasible. With these opportunities come new challenges. Essentially, when designing any platform FPGA, defect discovery must be consciously driven earlier in the design cycle, where the overall pain and cost of fixing errors is much lower (see figure). This can be accomplished by leveraging the increasingly convergent roles of synthesis and verification, and by adopting platform-specific design flows.

Figure: Defects encountered in the latter stages of higher-complexity platform FPGA design entail much higher costs. You can reduce these costs and predictably meet project specifications via interactive debug, analysis and verification upfront during the RTL design and synthesis steps.


Synthesis and Verification Converge

Because most defect discovery takes place on the lab bench in the traditional programmable logic design flow, a serial flow (synthesis, then place-and-route, then in-system debug) is appropriate. Verification is almost an afterthought. Here, synthesis offers basic logic mapping with a pushbutton flow, and some amount of silicon vendor-dependent IP within the design itself is acceptable. Verification, if performed at all, is done with a straightforward VHDL or Verilog testbench, or simply with in-system debug on the lab bench.

This simple approach becomes ineffective for platform FPGAs. Given the higher silicon capacity and complexity and the longer design iteration times, the cost of discovering and correcting defects at later stages in the design cycle is unacceptable. Design flows for platform FPGAs more closely resemble those adopted by SoC designers in the late 1990s, where design creation and synthesis are closely linked to verification every step of the way. Since they should no longer be disparate steps in platform FPGA design, synthesis and verification methodologies must evolve and converge accordingly. Adopting the following strategies will help reduce the chances of late defect discovery and the resulting costly re-spins.

Check HDL code early and often

Design management tools today assist in checking code against an established set of coding style rules, either agreed upon by the design team or suggested by the programmable silicon vendor. These coding style rule checkers can catch outright defects and flag potential ones before any simulation cycles are burned, bringing defect discovery right to the front of the design cycle.
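
For instance (a hypothetical fragment, with signal names invented for illustration), one of the most common rule-checker findings is an unintended latch inferred from an incomplete combinational assignment:

module arbiter_fragment (
  input  logic req,
  input  logic hold,
  output logic grant
);
  // Rule violation: 'grant' is not assigned on every path through the
  // block, so synthesis infers a latch to hold its previous value.
  always_comb begin
    if (req && !hold)
      grant = 1'b1;
  end
  // The fix a rule checker would suggest: assign 'grant' on every path.
  // always_comb begin
  //   if (req && !hold) grant = 1'b1;
  //   else              grant = 1'b0;
  // end
endmodule

Catching this class of defect at code check-in costs seconds; catching it on the bench, after the latch has produced a timing-dependent failure, can cost days.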

Implement a more effective functional verification strategy

Synthesis tools for platform FPGAs do more than simply generate a technology-mapped netlist. Best-of-breed synthesis tools contain analysis capabilities that provide more insight into the design at every stage in the cycle. These capabilities can identify potential problem areas such as clock domain crossing points, where functional verification needs to be handled delicately.
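
As a sketch of why CDC points deserve this care, consider a single-bit control signal crossing clock domains (names here are hypothetical). Synthesis-time CDC analysis would flag the raw crossing and expect a synchronizer such as:

module cdc_sync (
  input  logic clk_b,    // destination clock domain
  input  logic rst_b_n,
  input  logic sig_a,    // launched from the clk_a domain
  output logic sig_b     // safe to use in the clk_b domain
);
  logic meta;
  // Conventional two-flop synchronizer: the first flop may go
  // metastable; the second gives it a full cycle to resolve.
  always_ff @(posedge clk_b or negedge rst_b_n) begin
    if (!rst_b_n) begin
      meta  <= 1'b0;
      sig_b <= 1'b0;
    end else begin
      meta  <= sig_a;
      sig_b <= meta;
    end
  end
endmodule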

Moreover, when applying a traditional functional verification approach to platform FPGAs, modeling random or pseudorandom stimuli and checking circuit response against designer intent becomes increasingly tedious with strictly VHDL or Verilog testbenches. Effective functional verification for platform FPGAs requires approaches like those offered by SystemVerilog, which improves upon earlier Verilog modeling capabilities. In addition, SystemVerilog introduces assertions for instrumenting a design with basic rules describing its expected behavior. These assertions, when used in conjunction with stimulus modeling, dramatically improve the effectiveness of early defect discovery in functional verification.
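
A minimal sketch of this style, assuming a simple request/grant handshake (the signal names and the four-cycle bound are invented for illustration): the assertion encodes designer intent directly, while a constrained-random class replaces hand-written stimulus vectors.

module handshake_checker (
  input logic clk, rst_n,
  input logic req, grant
);
  // Designer intent as an assertion: every request is granted
  // within four cycles.
  property p_grant_within_4;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:4] grant;
  endproperty

  a_grant: assert property (p_grant_within_4)
    else $error("request not granted within 4 cycles");
endmodule

// Constrained-random stimulus: randomize request spacing rather than
// enumerating every vector by hand in a plain Verilog testbench.
class req_item;
  rand int unsigned gap;                    // idle cycles between requests
  constraint c_gap { gap inside {[0:10]}; }
endclass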

Also, as the number of lines of code to describe a given circuit increases, so does the probability of inadvertently introducing defects. Using SystemVerilog typically reduces the lines of code required to describe a given circuit, potentially reducing the defect rate.
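
A small illustration of the point (a sketch, not drawn from any particular design): SystemVerilog's enumerated types, always_ff and unique case collapse what once required a block of state parameters, a separate next-state process and hand-coded reset decoding into a few self-documenting lines.

typedef enum logic [1:0] {IDLE, BUSY, DONE} state_t;

module tiny_fsm (
  input  logic   clk, rst_n, start, finish,
  output state_t state
);
  // One concise process; the enum gives readable waveforms and lets
  // the compiler flag out-of-range state assignments.
  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n) state <= IDLE;
    else begin
      unique case (state)
        IDLE: if (start)  state <= BUSY;
        BUSY: if (finish) state <= DONE;
        DONE:             state <= IDLE;
      endcase
    end
  end
endmodule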

Adopt a consistent, vendor-independent synthesis and verification flow

A consistent vendor-independent synthesis and verification flow allows exploration of the capabilities offered by each of the various platform FPGA architectures within a single environment. This reduces the need to learn device-specific coding techniques and attributes just to carry out an architecture evaluation. It also eliminates the training overhead associated with having to learn multiple design environments.

To help meet your specifications, today’s programmable logic synthesis tools have raised the bar significantly over their predecessors in the sophistication of their high-level operator extraction and mapping. These advances provide intelligent multi-vendor device support, with inference of, and mapping to, on-chip dedicated resources such as DSP hardware and RAM. Making the most of these advances in synthesis also reduces vendor-dependent design content, easing migration and maintenance efforts.
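
As a hedged sketch of what that inference buys you (parameter names and widths here are arbitrary), the behavioral block below contains no vendor primitives or attributes, yet leaves synthesis free to map the memory to block RAM and the multiply-accumulate to dedicated DSP hardware on whichever device family is targeted:

module mac_with_ram #(
  parameter int AW = 8,   // address width
  parameter int DW = 16   // data width
)(
  input  logic                 clk,
  input  logic                 we,
  input  logic [AW-1:0]        addr,
  input  logic [DW-1:0]        wdata,
  input  logic signed [DW-1:0] a, b,
  output logic [DW-1:0]        rdata,
  output logic signed [2*DW:0] acc   // accumulator (reset omitted for brevity)
);
  logic [DW-1:0] mem [0:2**AW-1];

  always_ff @(posedge clk) begin
    if (we) mem[addr] <= wdata;
    rdata <= mem[addr];        // synchronous read: infers block RAM
    acc   <= acc + a * b;      // infers a DSP multiply-accumulate
  end
endmodule

Retargeting this code to a different vendor’s platform FPGA requires no source changes; the tool re-maps it to that architecture’s dedicated resources.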

Do more timing and performance analyses up front

Are you willing to wait until after your design is functionally verified, synthesized, placed and routed to find out whether your chosen arbitration scheme can keep up with incoming traffic? Early discovery of throughput issues requires performing more in-depth analyses of performance and timing throughout the synthesis process. Similarly, before you burn cycles trying to meet your timing constraints in place and route, are you sure that your constraints are complete? Early discovery of timing issues also requires analysis of constraint coverage during synthesis.
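
One way to move such a throughput check up front (a sketch, assuming a FIFO sits in front of the arbiter; the nine-cycle bound is arbitrary) is to assert the performance requirement itself, so simulation fails the moment the arbitration scheme falls behind:

module throughput_check (
  input logic clk, rst_n,
  input logic fifo_full   // input FIFO ahead of the arbiter
);
  // If the FIFO stays full for nine consecutive cycles, the arbiter
  // is not keeping up with incoming traffic.
  a_no_sustained_backpressure:
    assert property (@(posedge clk) disable iff (!rst_n)
                     not (fifo_full [*9]))
      else $error("FIFO full for 9+ cycles: arbiter cannot keep up");
endmodule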

Use interactive synthesis techniques for more predictability

An indispensable weapon in any platform FPGA designer’s toolkit is a capable interactive synthesis and analysis environment that goes all the way from RTL to physical implementation. Interactive synthesis techniques provide guidance to the designer, allowing “what-if” explorations earlier in the design cycle. A robust synthesis environment also provides a variety of design representations: high-level operators, architecture-specific technology cells, etc. Taking advantage of interactive synthesis capabilities provides an earlier understanding of the nature of the design and whether it will (or perhaps will not) meet specifications.

Platform-specific Flows Reduce Re-spins

The focus thus far has been on pulling defect discovery into earlier stages of the design cycle. However, the fast-paced nature of the electronics business means that requirements inevitably change late in the design cycle. To help reduce the impact of these late-arriving specification changes when designing any platform FPGA, designers should make use of advanced incremental design and ECO flows. These flows limit the scope and impact of a specification change as much as possible, minimizing the number of manipulated variables and thus increasing the likelihood of a successful, convergent last-minute design change.

Having a solid, consistent, vendor-independent basic design and verification flow is crucial for any FPGA design. But just as a designer implicitly chooses the best programmable silicon architecture for a given application, platform FPGA design also mandates the use of tools that are best suited for a given platform application. For example:

DSP platforms need tools that enable algorithmic design at a significantly higher level of abstraction than RTL. These tools use C/C++ as input and generate bit-accurate RTL based on user-provided constraints. Since these tools unite the system and hardware design domains, designers from both domains realize benefits. Designing at a high level of abstraction allows “what-if” exploration of several platform FPGA device architectures, while exploring optimal implementation architectures for each without RTL coding. In addition, performance analysis can be performed on each implementation for earlier discovery of throughput issues. An effective DSP platform-specific flow thus enables faster, error-free RTL creation.
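
To make “bit-accurate RTL” concrete (a hypothetical sketch; the coefficients and widths are placeholders, not the output of any actual tool), an algorithmic description such as y[n] = sum of c[k]*x[n-k] might be compiled, under user-provided width constraints, into something like:

module fir4 #(
  parameter int IW = 12,  // input sample width
  parameter int CW = 10   // coefficient width
)(
  input  logic                    clk, rst_n,
  input  logic signed [IW-1:0]    x,
  output logic signed [IW+CW+1:0] y   // full precision: no overflow
);
  localparam logic signed [CW-1:0] C [4] =
    '{10'sd64, 10'sd192, 10'sd192, 10'sd64};

  logic signed [IW-1:0] taps [4];

  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n) begin
      taps <= '{default: '0};
      y    <= '0;
    end else begin
      taps <= '{x, taps[0], taps[1], taps[2]};   // sample shift register
      y    <= taps[0]*C[0] + taps[1]*C[1]
            + taps[2]*C[2] + taps[3]*C[3];
    end
  end
endmodule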

High-performance, high-density platforms need tools that enable advanced physically aware synthesis. In FPGAs, the reconfigurable interconnect dominates the timing budget and is architecture-dependent. Because the routing architecture is predetermined, floorplanning alone cannot be relied upon to reduce wire length. The ideal solution is to manipulate the design using physically aware synthesis, integrated with logic synthesis, to converge on timing. After specific modules are optimized using physical synthesis, a huge productivity advantage can be gained if each FPGA developer on the team can reuse the optimized blocks on subsequent platform designs.

Connectivity platforms need concurrent I/O design solutions for both PCB and FPGA design, along with signal integrity tools for high-speed analysis and debug. Tightly coupling PCB design tools with the FPGA design creation process brings about earlier discovery of pin-assignment issues. Signal integrity tools help uncover issues with high-speed clock or data transmission lines on the PCB more quickly, before signing off on the PCB layout. Leveraging a connectivity platform-specific flow thus helps reduce costs not only from FPGA re-spins but from PCB re-spins as well. High-speed transceivers pose an additional question: within the platform FPGA fabric, can the chosen micro-architecture for the parallel transceiver interfaces keep up with the throughput of the transceiver? As with the other throughput issues discussed earlier, performance analysis can be used here for earlier defect discovery.

Embedded CPU platforms need a methodology that allows incremental design and debug while the FPGA, PCB and embedded software are all under development. Adopting an incremental design approach allows better design reuse from reference board to first-article PCB, while letting software teams start debugging well before the FPGA contents are finalized. Better up-front visibility into possible defects in hardware/software interaction can substantially reduce the cost of a potential FPGA re-spin, especially if the defect discovered stems from a fundamental flaw in how system functionality was partitioned between hardware and software.

Conclusion

Re-spins are no longer free, especially in the platform FPGA world. The roles of design creation, synthesis and verification are converging to bring defect discovery earlier in the design cycle. A successful designer must use a combination of advanced automated flows coupled with interactive design analysis to solve the new challenges presented by platform FPGAs. Finally, it is important to enhance existing design environments with tools and flows that can flexibly target the FPGA technology best suited for a given platform design.
