Debug Tools are for Losers…Oh, and Teams that Meet Schedules

Blue Pearl Brings Sanity to Debug

We don’t need no stinkin’ FPGA debug tools! Debug tools are for those “other” engineers. You know, the ones who make mistakes. 

As engineers, it’s hard to admit we’re fallible. Most of us have spent our careers, and maybe our whole lives, being lauded for our technical prowess. We pride ourselves on our ability to solve problems quickly, to design things that are robust and reliable, and to anticipate the twists and turns that the real world will throw at our creations.

That’s why I feel bad for the people at Blue Pearl Software who have to sell their tools. They have to begin the discussion at a place that’s already uncomfortable for most engineers. We don’t want to talk about our mistakes. We don’t really want to admit that we ever make them. And, on the rare occasion when we do have a problem, we try to forget about it as soon as possible. Our hindsight goggles may be 20/20 on the good stuff, but we can all be a little forgetful about our missteps.

And even if we admit to ourselves that designing complex circuits is a demanding enterprise, and that we may just occasionally take a couple of wrong turns on our way to the right answer, there’s simply no way we’re going to admit that to our boss. Especially not in the context of asking for budget for software to help find and fix our mistakes.

But let’s stop staring into that rose-colored mirror for just a minute here, OK?

I have this, uh, friend – who is a really talented, experienced, and respected engineer. And he has never in his career completed a design without having to go through a debug phase. Of course, the bugs aren’t always his. Things can go wrong with third-party IP, parts of the design that other people worked on, unforeseen corner cases, and – yeah, he usually makes a couple of slip-ups along the way all on his own.

In FPGA design, a lot of our bugs are closer to typos – careless mistakes like reversing bit orders, changing or misspelling identifiers, the kinds of things that could happen to anybody. A second group of common problems comes in when we stitch pieces together – clock domain crossings, issues with multi-cycle paths, subtle stuff that can muck up the design in the background without being obvious. The point is, bugs happen, and that means that a part of every design project is (and should be) allocated for debug. And debug is not an easy part to schedule. Debugging and verifying a complex FPGA design can often be the biggest and least predictable phase of our project.
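
To make that first category concrete, here’s a minimal Verilog sketch (module and signal names are hypothetical) of the kind of bit-order slip that compiles cleanly, simulates plausibly, and still bites later:

    // Hypothetical sketch of a "typo-class" bug: the byte is meant to be
    // passed straight through, but the bits get mirrored by accident.
    // Simulators and synthesis tools accept it without complaint.
    module byte_pass (
        input  wire [7:0] din,
        output wire [7:0] dout
    );
        // Intended:  assign dout = din;
        // Actually written (bit order accidentally reversed):
        assign dout = {din[0], din[1], din[2], din[3],
                       din[4], din[5], din[6], din[7]};
    endmodule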

Having a tool that can look over the code and spot those issues – well before that night at 2AM when we’re running synthesis for the umpteenth time trying to figure out why the heck it STILL… Well, you know the scenario. That’s where Blue Pearl comes in. Their tools analyze our designs, point out the issues before they become problems, and help us quickly and easily locate and correct the source of those problems. This type of tool is not new, of course. There have been various forms of HDL “Lint” programs bouncing around for years. But a good debug tool (as opposed to a merely “OK” one) can make the difference between shipping on time and slipping the schedule – or, worse yet, between shipping clean and shipping with a hidden bug.

Blue Pearl breaks down their helpful technology into three categories: RTL analysis, clock-domain crossing, and automatic SDC generation. The RTL analysis tool – cleverly named “Analyze RTL” – is designed to locate the kinds of bugs that you won’t find in simulation or conventional HDL linting. For example, you may have a beautiful state machine that doesn’t happen to initialize properly, or has unreachable states. Or, you may have a bus with multiple drivers active at a time (or with none). How many times have you found registers in your design that were not initialized properly or that have set/reset conflicts?
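
As a purely illustrative example of the kind of thing RTL analysis goes after, here’s a small Verilog state machine (hypothetical names throughout) with two of the problems just mentioned: a state that no transition can ever reach, and a register with no reset:

    // Hypothetical sketch: a state machine that simulates "fine" but that
    // RTL analysis should flag.
    module ctrl_fsm (
        input  wire clk,
        input  wire rst_n,
        input  wire start,
        output reg  busy
    );
        localparam IDLE = 2'd0, RUN = 2'd1, DONE = 2'd2, ERR = 2'd3;

        reg [1:0] state;

        always @(posedge clk or negedge rst_n) begin
            if (!rst_n)
                state <= IDLE;
            else begin
                case (state)
                    IDLE: if (start) state <= RUN;
                    RUN:  state <= DONE;
                    DONE: state <= IDLE;
                    // ERR is never assigned anywhere: an unreachable state.
                    default: state <= IDLE;
                endcase
            end
        end

        // 'busy' has no reset, so it powers up unknown in hardware, even if
        // simulation happens to show an X that nobody notices.
        always @(posedge clk)
            busy <= (state == RUN);
    endmodule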

These types of problems can slip all the way through your tool flow and cause fits when you fire up your board in the lab. And with today’s largest FPGAs weighing in at hundreds of thousands or even millions of LUTs, debugging these types of problems in hardware can be nearly impossible. Analyze RTL helps to identify and correct these types of problems before you even run your first simulation.

With the increase in the size and number of IP blocks in modern FPGA designs, the number of clock domains has exploded. Every time one of our signals crosses between clock domains, we have the potential for one of the most insidious and difficult bugs to track down – the domain-crossing bug. Of course, we try to synchronize every point where a signal crosses domains, but those can be among the easiest problems to miss, and it’s not uncommon for a design to get all the way into the field looking great, only to manifest “random” bugs caused by domain crossing issues. 

Blue Pearl’s Clock Domain Crossing (CDC) tool helps to identify metastable conditions resulting from lack of synchronization, incorrect synchronization, or reconvergence. These problems are often not found in simulation, or even in your board on the bench. Correcting them early in the design process can save tremendous headaches down the road. 
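
For reference, the classic fix a CDC checker expects to see at each single-bit crossing is a two-flop synchronizer. The sketch below (hypothetical names, not Blue Pearl’s output) shows the pattern; buses and pulses generally need handshakes or FIFOs instead:

    // Hypothetical sketch of a single-bit clock-domain crossing.
    // Using 'async_flag' directly in the clk_b domain risks metastability;
    // the two-flop synchronizer below is the pattern a CDC tool looks for.
    module flag_sync (
        input  wire clk_b,       // destination clock domain
        input  wire async_flag,  // signal generated in another clock domain
        output wire flag_sync_b  // safe to use in the clk_b domain
    );
        reg meta, stable;

        always @(posedge clk_b) begin
            meta   <= async_flag;  // first flop may go metastable
            stable <= meta;        // second flop gives it a cycle to settle
        end

        assign flag_sync_b = stable;
    endmodule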

Finally, we’ve all slogged through the mega-frustration of timing closure on FPGA designs. You know the drill – you run synthesis, then place-and-route and … you end up with an enormous report with countless timing violations. Often the problem is that we didn’t have the best set of timing constraints to start with. Creating timing constraints for your design is a bit of a black art. With the wrong constraints, we can end up chasing down false paths and multi-cycle paths that have nothing to do with the actual function of our design. Worse yet, our synthesis and placement tools may have mis-optimized our design because of these bogus paths – at the expense of our actual critical paths.

Blue Pearl’s Automatic SDC Generation tool does what the name implies. It automatically generates constraints for false and multi-cycle paths up front, so synthesis and place and route can focus on the actual critical paths, and so we’ll have a much simpler problem to deal with when it’s time to close timing. (Or, if we’re lucky, perhaps no problem at all.) Getting these nailed down early on lubricates the entire design process afterward. Every step of the subsequent design flow will run faster and with fewer issues requiring human intervention. 
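
To give a feel for what such constraints look like, here’s a hypothetical fragment: every register in the block is enabled only once every two cycles, so the multiplier path genuinely has two clock periods to settle, and the SDC directives shown in the trailing comments (illustrative only, not Blue Pearl’s actual output) tell the tools so:

    // Hypothetical sketch of a classic multi-cycle path: all registers here
    // are enabled by 'ce', which is asserted only once every two clock
    // cycles, so data launched from a_r/b_r has two full cycles to get
    // through the multiplier before 'result' captures it.
    module slow_mult (
        input  wire        clk,
        input  wire        ce,      // asserted once every two cycles
        input  wire [15:0] a, b,
        output reg  [31:0] result
    );
        reg [15:0] a_r, b_r;

        always @(posedge clk) begin
            if (ce) begin
                a_r    <= a;
                b_r    <= b;
                result <= a_r * b_r;   // the slow path we want relaxed
            end
        end
    endmodule

    // The kind of SDC a constraint generator might emit for this block
    // (object queries and names are illustrative; exact cell names depend
    // on the synthesis tool):
    //
    //   set_multicycle_path -setup 2 -from [get_cells {a_r* b_r*}] -to [get_cells {result*}]
    //   set_multicycle_path -hold  1 -from [get_cells {a_r* b_r*}] -to [get_cells {result*}]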

Speaking of human intervention – finding the bugs, all of the bugs, and nothing but the bugs is a worthy goal for a tool, but it’s just as important for the tool to help us humans isolate the root cause of problems and fix them quickly. This is where Blue Pearl’s suite really shines. The Blue Pearl debugging environment is a thing of beauty. Since it starts with the actual RTL code we wrote, it can help us to bridge the chasm between those random LUTs and routes in the physical view of our design, back through the opaque wall of synthesis, and to the original lines of code that we actually wrote and understand.

This connection of end-of-design-flow problems with beginning-of-design-flow RTL code can be one of the biggest obstacles to quickly finding and fixing problems in our design. Because Blue Pearl understands our design from the same basis we do – the original HDL – its UI can quickly help us map from the LUTs and bits that are showing symptoms back to the lines of code that caused them. This can save us tremendous amounts of our own time – which is our most scarce and valuable asset in a design project.

Of course, Blue Pearl tools cost money. And it may seem like a challenge to ask for budget for tools that will help us find mistakes that we are a little embarrassed to be making in the first place. So – Managers: Don’t wait for your team to ask for Blue Pearl’s tools. They probably won’t. But you’ll make yourself look better if your team uses them – you know, if your boss cares about you getting your project finished on time and without any sneaky problems showing up in the field.

11 thoughts on “Debug Tools are for Losers…Oh, and Teams that Meet Schedules”

  1. Kevin,
    You make excellent points about the need for debugging early at the RTL phase and having the right debug environment to get to the root cause of any reported errors. At Real Intent, we find that speed, capacity, and precision of reporting are the key elements of a successful evaluation by design teams. And great design teams need to spend money to get great tools. Having a verification tool that shows real problems in a few short hours is an excellent way to convince upper management that purchase is the right thing to do.
    +Graham
