Feature Article

On the Cutting-Edge of FPGA Design and Verification

As advanced FPGAs have grown to ASIC proportions in size and complexity, designing and verifying them has become correspondingly more difficult, demanding ever-greater expertise. Many companies, large and small, lack either the resources or the expertise for these demanding designs, so they turn to engineering service firms like A2e Technologies.

Projects typically reach us late in the design cycle, when schedules are tight and time is of the essence. We need to get up to speed quickly on tough, challenging designs, and we must use cutting-edge technologies, including simulators that help us quickly understand a design and give us the flexibility to work with different languages in the same design environment.

Living on the Edge: FPGA Design Services
As a third-party design and verification service, A2e offers a wide range of product development services, including hardware, firmware, software, analog, and digital FPGAs. Our customers hire us to do everything from a single electronic component to an entire consumer product. We can manage all or part of the design process to ensure product quality and success. The volumes of our customers’ products range from millions per year for high-volume consumer products, down to thousands for test and measurement equipment. Projects can last anywhere from six weeks to two years, with the typical duration coming in around six months.

For a number of reasons, FPGAs have become a growing part of our business. For one thing, the cost of designing an ASIC has continued to grow while the price of FPGAs has dropped to the point that a large portion of ASIC work is going into FPGAs. At the same time, the growth in size and complexity of FPGAs has expanded their uses into areas that were previously the sole domain of ASICs. Although we deal with everything from small EPLDs to multi-million gate FPGAs, most of our work is with the larger, cutting-edge FPGAs. Technology has progressed so much over the past few years that designers can squeeze just about everything onto a single FPGA. And, with the exception of emulating an ASIC, there is typically one FPGA per board. Five to ten years ago, that wasn’t the case; you had to use multiple FPGAs on a board.

These massive FPGAs typically integrate many intellectual property (IP) components and features, such as multiple processor cores and multi-layered bus architectures. Unfortunately, a large portion of the designs coming in the door are poorly documented. This prevents us from moving forward quickly on new projects, as we must fully understand the code, schematics, design intent, and so on before starting work on a design. Because designers usually don’t include all the necessary comments, we often must recreate the design intent by back-tracking through the partially finished design to understand the details.

In the engineering services world, getting the job done quickly and cost effectively is of paramount importance. The goal is to keep costs down, improve design quality, and meet tight project schedules, which means employing the best possible tools and methodologies. The quality of our documentation is also critical. We have to hand off a complete, easy-to-read documentation package to our customers.

With all of these requirements, it is no surprise that our tool suite is as broad as our services, with a design and verification flow that is not technology dependent. We use DxDesigner for schematic capture, Synplicity for synthesis, ModelSim® Designer for digital simulation, Hyperlynx for signal integrity, and PSPICE for analog simulation.

Visualization Maximizes Productivity
A graphical design and verification tool is key to maximum productivity. Our customers deliver their partially completed design in the form of hundreds of files written in Verilog or VHDL. We need to understand what’s there, and we need to do it quickly.

It is much easier for the human brain to grasp large amounts of information visually, for instance in a state diagram, than by reading thousands of lines of code. The proof is in the unmistakable productivity increases we’ve seen at every step in the process when using graphical state machine editors. When we bring up the initial design graphically, we can quickly assess the overall project and then go down to a specific module and look at its details. The ability to “reverse engineer” an existing design—by viewing its state machines graphically, feeding the design into the simulator, and then visually observing the results—helps us gauge the level of effort much more quickly than if we had to pore over the code line by line and figure it out ourselves.

Working graphically is essentially self-documenting. This is important for a number of reasons, including the comprehensiveness of the final reports we deliver to our customers. A graphical environment also means that we are not dependent upon comments to capture the designer’s intent. Another advantage is that fewer errors are introduced when we go from the abstract state diagram to the code, because the state-machine editor automates the translation into Verilog or VHDL, eliminating an error-prone manual process.
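To give a sense of that output, here is a minimal sketch of the kind of Verilog a state-machine editor typically generates from a diagram. The module, states, and signal names (handshake_fsm, IDLE/BUSY/ACK, req, done, ack) are invented for illustration and do not come from any particular tool.

```verilog
// Hypothetical editor-generated Verilog for a three-state
// handshake FSM; all names are illustrative only.
module handshake_fsm (
    input  wire clk,
    input  wire rst_n,
    input  wire req,
    input  wire done,
    output reg  ack
);
    // One encoding per bubble in the state diagram
    localparam IDLE = 2'b00,
               BUSY = 2'b01,
               ACK  = 2'b10;

    reg [1:0] state, next_state;

    // State register
    always @(posedge clk or negedge rst_n)
        if (!rst_n) state <= IDLE;
        else        state <= next_state;

    // Next-state logic: one case arm per transition arc
    always @(*) begin
        next_state = state;
        case (state)
            IDLE:    if (req)  next_state = BUSY;
            BUSY:    if (done) next_state = ACK;
            ACK:               next_state = IDLE;
            default:           next_state = IDLE;
        endcase
    end

    // Output logic
    always @(*) ack = (state == ACK);
endmodule
```

Because the tool emits this boilerplate mechanically, the diagram and the HDL can never drift out of sync the way hand-written code and comments do.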

Graphical design and verification is a capability we have needed for a long time but have only seen emerge in the past few years. For our needs, it is important that the simulator scale to whatever size project we face—handling EPLDs and multimillion gate FPGAs equally well. It must also support all of the major standards and synthesis technologies in order to fit our vendor-independent design flow.

Automating Testbench Generation
Testbench automation is also important to us. ModelSim Designer supports Tcl scripting. With a single command, the simulator does all the compiling, automatically runs the script and testbench, and displays a new waveform. We rely extensively on this capability to automate our testbenches, and then we go back and make our adjustments in the graphical state machine. This way we can quickly recompile the whole testbench and look at the results, significantly speeding debug.
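As a rough sketch of this flow, a .do script like the one below compiles, loads, and runs a testbench in one shot; vlib, vlog, vsim, add wave, and run are standard ModelSim commands, but the file and module names here are invented for the example. From the ModelSim prompt, a single `do run_tb.do` kicks off the whole sequence.

```tcl
# run_tb.do -- hypothetical one-command compile-and-simulate flow
vlib work                                  ;# create the working library if needed
vlog src/handshake_fsm.v tb/tb_handshake.v ;# compile design and testbench
vsim work.tb_handshake                     ;# load the testbench
add wave -r /*                             ;# put all signals in the waveform window
run -all                                   ;# run until the testbench finishes
```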

Tcl scripting also gives us an automated way of checking our code. When we change a single line to fix a bug, we have to make sure nothing else broke. We run a regression test that exercises all the features and automatically runs all the testbenches. Because the whole process is automated, we can make these changes far faster than we otherwise could.
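A minimal regression loop in Tcl might look like the following sketch. The testbench names are hypothetical, and it assumes each testbench flags its own failures (for example, through assertions or $fatal messages printed to the transcript) rather than showing any particular project's checking scheme.

```tcl
# Hypothetical Tcl regression loop over a list of testbenches.
set testbenches {tb_handshake tb_uart tb_memctl}
foreach tb $testbenches {
    vsim work.$tb      ;# load the next testbench
    onbreak {resume}   ;# keep the loop going past any breakpoints
    run -all           ;# run to completion; failures appear in the transcript
    quit -sim          ;# unload before loading the next one
    puts "finished $tb"
}
```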

It is equally important that we can make design tweaks, run a simulation, and then go back and debug in a single environment. Accessing all of these tasks within the same GUI boosts both productivity and performance. It is much more efficient than firing up multiple tool sets that invariably have problems communicating with each other. Tool integration also impacts performance, because it frees us from having to use a foreign language interface (FLI).

Flexibility Supports Many Languages and Vendors
The ability to do mixed-mode simulation of both VHDL and Verilog is absolutely critical. Historically, we got designs in either Verilog or VHDL, but the twain never met. For example, a customer might purchase a piece of IP, perhaps a memory, that is written in Verilog, while the other 90 percent of the design is written in the other language.

Mixed-mode simulation enables us to simulate Verilog and VHDL, different models, and different languages all at the same time. This flexibility is very important to us. In fact, we are using mixed-mode simulation on a current project: the memory-based IP core is written in VHDL, and we are writing the majority of the project in Verilog.
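A sketch of such a flow, with invented file names: the VHDL IP is compiled with vcom and the Verilog top level with vlog into the same working library, so a single simulation spans both languages.

```tcl
# Hypothetical mixed-language compile-and-simulate flow.
vlib work
vcom ip/memory_core.vhd    ;# VHDL memory IP from the vendor
vlog src/top.v tb/tb_top.v ;# Verilog top level and testbench
vsim work.tb_top           ;# one simulation covers both languages
run -all
```

Because both compilers target the same library, the Verilog top level can instantiate the VHDL core directly, with no wrapper or translation step.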

Interoperability with multiple FPGA vendors is also essential since we work with all leading FPGA vendors. The simulator must seamlessly integrate with the synthesis technologies of all of these vendors without forcing us to change tools.

Conclusion
Design services for leading-edge FPGAs are not for the faint of heart, especially since these projects are relatively complicated compared to those handled by most design houses. Success in our highly competitive market requires us to deliver superior designs within aggressive schedules at an attractive price. The only way to achieve all that is to use the most cutting-edge tools and methodologies, enabling us to quickly comprehend, design, and verify technology that pushes the envelope.

Using a simulator that supports the creation of graphical state and block diagrams of existing code allows us to quickly come up to speed on new designs and realistically determine the scope of work. That, in turn, enables us to put the right talent on the right projects to get them done successfully and in short order. By using the optimal set of tools to serve the needs of our customers, A2e has successfully become a critical design partner for many leading technology companies.
