
Co-Verification Methodology for Platform FPGAs

The emergence of affordable high-end FPGAs is making them the technology of choice for a growing number of electronics products that were previously the exclusive domain of ASICs. Offering unprecedented levels of integration on a single chip, today’s programmable devices have greatly expanded the size, scope, and range of applications that can now be deployed on them.

To ensure fast and efficient implementation of these advanced, feature-rich FPGAs, designers need access to the latest productivity-enhancing electronic design automation (EDA) tools and methodologies. For years, hardware/software (HW/SW) co-verification has been commonly used to debug ASIC SoC designs. Now, with embedded processors such as the IBM PowerPC 405 combined with the multi-million gate capacities commonplace in Virtex series FPGAs, ASIC-strength methodologies such as co-verification are increasingly relevant in the FPGA design space.

The Debug Challenge

By various accounts, design verification is the most serious bottleneck engineers face in delivering multi-million gate SoCs. In the case of ASICs, it is not uncommon for design teams to spend 50 to 70 percent of their time in verification and debug. In FPGAs, where the penalty of a design error is less severe and a respin is a matter of hours rather than months, there is nonetheless a clear need for efficient debug methodologies that enable design teams to identify and fix errors early in the process.

When a processor is part of the design, the interface between hardware and software becomes an area of increased focus and attention. Validating that the hardware and software will function correctly together is an important aspect of the overall verification process. It is therefore essential that specialized methodologies such as HW/SW co-verification be available to FPGA designers, enabling not only higher debug efficiency but also a more streamlined approach to verifying their processor-based designs.

Co-Verification Simplifies the Debug Equation

The basic concept behind co-verification is to merge the respective debug environments used by hardware and software teams into a single framework. This provides designers with concurrent and early access to both the hardware and software components of the designs, thereby contributing to reducing the overall project cycle time.

From a performance perspective, processor models known as Instruction Set Simulators (ISS) can significantly speed up processor simulation execution when compared to using a register transfer level (RTL) model of the CPU. Moving up a level of abstraction enables engineers to verify large embedded processor-based FPGA systems – systems that could not otherwise be verified within a practical timeframe using conventional HDL simulation.
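
To make the difference in abstraction concrete, the fragment below sketches, in C, the basic shape of an instruction set simulator's main loop. It is a minimal illustration only: the three-instruction machine, opcodes, and register file are invented for this example and bear no relation to the PowerPC 405 or to any commercial ISS. The point is that each instruction is modeled as one functional step rather than as thousands of gate-level events per clock cycle.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical 3-instruction machine: LOAD-IMMEDIATE, ADD, HALT. */
enum { OP_LI = 0, OP_ADD = 1, OP_HALT = 2 };

int main(void)
{
    uint32_t reg[4] = {0};              /* tiny register file                 */
    uint32_t program[][3] = {           /* each entry: {opcode, dest, src/imm} */
        { OP_LI,   0, 5 },              /* r0 = 5                             */
        { OP_LI,   1, 7 },              /* r1 = 7                             */
        { OP_ADD,  2, 0 },              /* r2 = r2 + r0                       */
        { OP_HALT, 0, 0 },
    };

    for (unsigned pc = 0; ; pc++) {     /* fetch-decode-execute loop          */
        uint32_t op = program[pc][0];
        if (op == OP_HALT)
            break;
        if (op == OP_LI)
            reg[program[pc][1]] = program[pc][2];
        else if (op == OP_ADD)
            reg[program[pc][1]] += reg[program[pc][2]];
    }

    /* One function-level step per instruction, not thousands of gate
     * evaluations per simulated clock cycle.                                 */
    printf("r2 = %u\n", reg[2]);
    return 0;
}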

In addition, an efficient co-verification tool can help uncover a range of HW/SW interface problems, which include:

— Initial startup and boot sequence errors (including RTOS boot)
— Processor and peripheral initialization and configuration problems
— Memory accessing and initialization problems
— Memory map and register map discrepancies (a sketch of such a mismatch follows this list)
— Interrupt service routine errors
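
As a concrete illustration of the memory map and register map discrepancies mentioned above, consider the C header fragment below for a hypothetical memory-mapped UART. The addresses and register names are invented for illustration; the point is that the software's view of the map must agree exactly with the address decoding implemented in the FPGA logic, and co-verification exercises both views together so a mismatch surfaces as soon as the driver touches the device.

/* Hypothetical software view of a memory-mapped UART. If the RTL address
 * decoder places the status register at offset 0x08 instead of 0x04, every
 * status poll silently reads the wrong register -- exactly the kind of
 * discrepancy that co-verification exposes early.                          */
#define UART_BASE             0x80001000u
#define UART_TX_DATA          (*(volatile unsigned int *)(UART_BASE + 0x00))
#define UART_STATUS           (*(volatile unsigned int *)(UART_BASE + 0x04))
#define UART_STATUS_TX_READY  0x1u

void uart_putc(char c)
{
    while ((UART_STATUS & UART_STATUS_TX_READY) == 0)
        ;                               /* spin until transmitter is ready */
    UART_TX_DATA = (unsigned int)c;
}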

The Advantages of Co-Verification

By uniting the hardware and software simulation environments in a processor-based system, a co-verification tool can be conceptually viewed as an extension of traditional “functional simulation” in logic-only designs. The co-verification concept establishes value for multiple design teams including hardware engineers (peripheral logic debug), embedded software engineers (SW application and firmware debug), and system designers (performance analysis and tuning).

To fully realize the advantages of co-verification, there are three prerequisites the design under test must meet:

— The system includes a processor executing software code as part of the design.

— There is extensive interaction between the software and hardware parts of the design during execution.

— Both hardware and software engineering teams agree on using co-simulation early in the design stage.

These prerequisites increase the likelihood of achieving a smooth methodology flow and a common communication medium between the two teams. Once the above requirements are met, co-verification offers several key benefits when compared to using simulation alone.

1. Faster Performance

Pure logic simulation can be used to simulate a design with a processor component by including an RTL model of the processor to execute the software code. This approach, however, is painfully slow and insufficient for all but the most basic debug requirements. The overall simulation speed is generally in the sub-100 Hz range – fewer than 100 simulated processor cycles per second. The bottleneck is the accurate, but slow, logic simulator: whenever the software needs to communicate with hardware, the transaction must go through it.
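
To put that figure in perspective with some rough, purely illustrative arithmetic, assume a boot and initialization sequence of ten million instructions, each completing in a single cycle, and a sustained simulation rate of 100 cycles per second:

10,000,000 instructions ÷ 100 cycles per second = 100,000 seconds ≈ 28 hours

In other words, a single pass through even a modest startup sequence ties up the simulator for more than a day of wall-clock time, before any regression runs are considered.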

In comparison, co-verification can run simulation orders of magnitude faster. This speed-up is achieved through several techniques, beginning with the use of a faster processor model: the instruction set simulator. The ISS significantly increases the simulation speed of the processor, but this alone is not enough. Bottlenecks remain because the software running on the ISS is much faster than the hardware running on the slow logic simulator; consequently, the software and ISS are always waiting for the hardware and logic simulator to catch up.

Advanced co-verification suites, such as Mentor Graphics® Seamless® FPGA, bypass this fundamental limitation by introducing the concept of a coherent memory server (CMS). Using the CMS, the ISS can read and write memory about 10,000 times faster than if it had to go through the logic simulator. Given that processor-to-logic interaction consists mostly of read-write cycles to memory – fetching instructions, accessing peripheral registers, and so on – the overall simulation speed is dramatically increased by diverting most routine CPU-to-memory transactions through the faster CMS instead of through the logic simulator.

Only the transactions from processor to memory that are under active debug run through the precise but slow logic simulator. Typically, this means the simulator bottleneck is only a factor in less than one percent of the software-hardware transactions, thus providing a significant overall throughput advantage versus pure RTL simulation.
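
The routing decision at the heart of this scheme can be sketched in a few lines of C. The function names and data structures below are hypothetical and are not Seamless FPGA's actual interface; they simply show the idea of servicing ordinary memory traffic from a fast C memory image while sending only the accesses under active debug through the logic simulator.

#include <stdint.h>
#include <stdbool.h>
#include <string.h>
#include <stdio.h>

/* Hypothetical co-verification transaction router (not the Seamless API). */

#define MEM_SIZE 0x100000u
static uint8_t cms_image[MEM_SIZE];     /* fast C memory image (the CMS)     */

/* Stand-in for the user's watch list of addresses under active debug.      */
static bool address_is_under_debug(uint32_t addr)
{
    return addr >= 0x80001000u && addr < 0x80001100u;   /* peripheral regs  */
}

/* Stand-in for a full bus cycle run through the logic simulator: slow but
 * cycle-accurate, with complete waveform visibility.                        */
static uint32_t logic_sim_bus_read(uint32_t addr)
{
    printf("logic simulator: bus read at 0x%08x\n", (unsigned)addr);
    return 0;
}

/* Route one processor read either to the CMS (fast path) or through the
 * logic simulator (slow path). In practice only a small fraction of
 * transactions needs the slow path.                                         */
static uint32_t coverif_read32(uint32_t addr)
{
    if (address_is_under_debug(addr))
        return logic_sim_bus_read(addr);

    uint32_t word;
    memcpy(&word, &cms_image[addr & (MEM_SIZE - 4u)], sizeof word);
    return word;
}

int main(void)
{
    (void)coverif_read32(0x00000100u);  /* instruction fetch: fast path      */
    (void)coverif_read32(0x80001004u);  /* watched register: slow path       */
    return 0;
}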

2. Increased Comprehension

To efficiently address debugging problems that span multiple teams, designers need tools and methodologies that fit the specific needs of each team. For example, the SW team would find debugging processor code on a logic simulator to be inherently inefficient and impractical.

With advanced co-verification tools such as Seamless FPGA, where a cycle-accurate ISS model replaces the RTL processor model, a symbolic source-level debugger can be attached to the ISS, making an interactive and intuitive software debug environment possible. Standard features of a software debugger include the ability to step through source code (C and assembly), set breakpoints, and observe register and memory contents. The symbolic debugger thus gives the designer far greater control and comprehension than would be possible when debugging processor code with an HDL processor model running on a logic simulator.

3. Support for Abstract Models

Often, when very high data throughput is required to validate certain design functions, RTL models must be replaced with faster, more abstract behavioral models. These high-speed models, usually written in C or C++, interact with the ISS at very high speeds, allowing complex protocols to be tested rapidly and comprehensively.

Seamless FPGA allows users to plug in these behavioral models through a “C-Bridge” interface technology. By doing less work in the logic simulator and more with higher-level models, verification runs deliver significant performance gains. With increased simulation throughput, the virtual platform can offer visibility into system performance and architectural trade-off issues at a very early stage in the design process. Designers can quickly validate functionality while also analyzing and tuning important system attributes such as bus bandwidth, latency, and contention – all leading to increased system performance.
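
To give a flavor of what such a behavioral model looks like, the fragment below models a transmit FIFO at a purely functional level in C. It is a generic illustration written against an invented interface, not the actual C-Bridge API; a real model would be registered with the co-verification kernel and invoked on each bus transaction, but it would still consist of ordinary C data structures like these rather than RTL.

#include <stdint.h>
#include <stdbool.h>

/* Purely functional (untimed) model of a 16-entry transmit FIFO.
 * No clocks, no signals: state is updated once per transaction, which is
 * why a model like this runs orders of magnitude faster than RTL.          */
#define FIFO_DEPTH 16

typedef struct {
    uint8_t  data[FIFO_DEPTH];
    unsigned head, tail, count;
} tx_fifo_t;

/* Called by the co-verification kernel on a processor write to the FIFO
 * data register (interface invented for illustration).                     */
bool tx_fifo_write(tx_fifo_t *f, uint8_t byte)
{
    if (f->count == FIFO_DEPTH)
        return false;                   /* full: model back-pressure         */
    f->data[f->tail] = byte;
    f->tail = (f->tail + 1) % FIFO_DEPTH;
    f->count++;
    return true;
}

/* Called when the modeled transmitter drains one byte. */
bool tx_fifo_read(tx_fifo_t *f, uint8_t *byte)
{
    if (f->count == 0)
        return false;                   /* empty                             */
    *byte = f->data[f->head];
    f->head = (f->head + 1) % FIFO_DEPTH;
    f->count--;
    return true;
}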

Additional FPGA Co-Verification Benefits

With access to co-verification technology, processor-based designs are not only easier to debug, but debugging also starts much earlier in the design cycle – making it more likely that the project will be completed sooner, with a reduced risk of surprises later on.

Finding Problems Earlier

Design teams are highly motivated to identify and fix problems at an early stage in the design cycle. A well-known axiom states: “The earlier a problem can be identified, the easier and cheaper it is to fix.” Typically, designers cannot initiate software verification until a hardware prototype is available. As a consequence, when software verification occurs in a serial manner, HW/SW interaction problems may not be detected until much later in the design stage.

A virtual prototyping and debug environment removes this restriction by enabling product integration ahead of board and device availability, or even before the final design is committed. With co-verification, software teams do not have to wait for silicon before they can start developing and testing their portions of the design. As a result, problems can be found earlier and the time to working silicon is dramatically reduced.

Simplified Testbenches

To verify design functions, hardware engineers often write elaborate HDL testbench routines. These testbenches can become very complex, and it is not uncommon for the testbench code size to approach that of the design itself. With co-verification, the ISS processor model allows testbenches to be greatly simplified.

For hardware verification engineers testing protocols and device drivers, testbenches are simplified because actual embedded software code – and not contrived testbench code – is driving the hardware circuits.

Similarly, software engineers do not have to resort to writing stub code: actual hardware devices provide real-life responses to calls made to hardware. Overall, this leads to fuller and more comprehensive test coverage, which in turn increases confidence that the design will work in silicon the first time.
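
The stub code that co-verification makes unnecessary typically looks something like the fragment below (the register names reuse the hypothetical UART from the earlier sketch). Without a hardware model to talk to, the driver is conditionally compiled to return canned values; under co-verification the same driver code runs unmodified against the simulated peripheral, so the canned path – and the risk that it diverges from real hardware behavior – disappears.

/* Hypothetical UART status read, shown with the kind of stub that software
 * teams write when no hardware (real or simulated) is available yet.       */
#define UART_BASE    0x80001000u
#define UART_STATUS  (*(volatile unsigned int *)(UART_BASE + 0x04))

unsigned int uart_read_status(void)
{
#ifdef NO_HARDWARE_YET
    /* Stub: pretend the transmitter is always ready. Any mismatch with the
     * real peripheral's behavior goes unnoticed until integration.          */
    return 0x1u;
#else
    /* Under co-verification (or on real silicon) this access reaches the
     * actual peripheral logic and returns its genuine response.             */
    return UART_STATUS;
#endif
}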

Ability to “Freeze” and Control Runtime

An important attribute of debugging in the virtual domain is the ability to “stop time.” This makes it possible to simultaneously observe and modify the internal values of the CPU registers as well as those of the hardware device registers with which the processor is communicating. Freezing and synchronizing the hardware and software domains offers the ultimate in control and observability – and it is invaluable for efficiently debugging complex and intricate transactions.

Programmable Silicon Complements Co-Verification

The chances for first-time success with a design are greatly increased by early integration and testing in the virtual prototype domain.

However, there are classes of problems involving behavior that can only be captured when the processor runs at full speed. In this regard, platform FPGAs serve as a perfect complement to virtual platform debug techniques. Designs can be downloaded into FPGA silicon for validation at full system speeds. If problems escaped earlier attention, the designer can debug in-system with the Xilinx ChipScope™ Pro interactive logic analyzer or go back to the co-verification environment for a more controlled analysis. Design errors can be fixed and re-implemented in silicon without incurring the huge delays and costly mask re-spins common with ASIC design flows.

Conclusion

The current generation of Xilinx Platform FPGAs with powerful RISC processors and multi-million gate capacities requires powerful and matching co-verification methodologies. With the introduction of Seamless FPGA, FPGA designers now have access to an ASIC-strength, best-in-class debug solution. The tool provides an efficient and easy-to-use methodology that can integrate, verify, and debug hardware and software interactions very early in the design cycle – preserving and enhancing the critical time-to-market advantage of large-scale platform FPGAs.
