In the wake of the UCIS announcement at DAC (which we’ll cover separately), I sat down with some of Mentor’s functional verification folks to get an update. Coverage was one of the items on their agenda as part of addressing metric-driven verification.
They talk in terms of changing the engineering mindset when it comes to evaluating verification tools. Right now engineers tend to think in terms of “cycles/second”: how fast can you blaze through these vectors? Mentor is trying to change that thought process to “coverage/cycle”: it’s OK to take longer per cycle as long as you get coverage faster. (OK, they didn’t explicitly say that – according to Romain Berg it’s probably a bit dodgy territory from a marketing standpoint – and I don’t know whether their solution actually is any slower on a per-cycle basis – I’m inferring here…) In other words, maybe one tool can zip through a bazillion vectors in three hours, but it’s better to have a tool that needs only half a bazillion vectors and completes in two hours: slower on a per-vector basis, but faster to overall completion.
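To put some numbers on that – these figures are entirely made up for illustration, not Mentor benchmarks – a quick Python sketch of the two metrics:

    # Hypothetical figures for illustration only – not Mentor benchmarks.
    def report(name, vectors, hours):
        rate = vectors / (hours * 3600.0)
        print(f"{name}: {vectors:.0e} vectors in {hours} h = {rate:,.0f} vectors/sec")

    report("Tool A", 1e9, 3)  # a bazillion vectors, three hours
    report("Tool B", 5e8, 2)  # half a bazillion vectors, two hours
    # Tool A wins on vectors/sec (~92,593 vs. ~69,444), but if both end at full
    # coverage, Tool B gets there an hour sooner – the metric that matters.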
Part of this is handled by their InFact “intelligent testbench.” As I see it, they’re trying to solve two problems with it. First, any design has hard-to-reach states; the tool builds a graph of the design and uses it to identify trajectories. From that graph, it should be able to reach any reachable state with the fewest vectors possible – which is fine when you’re testing just that one state.
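Mentor hasn’t published how InFact represents or searches that graph, but the flavor of this first problem is easy to sketch: a breadth-first search over a state graph finds a minimum-length stimulus sequence to any reachable state. The states and transitions below are invented for illustration:

    from collections import deque

    # Toy state graph: each key is a design state; each value lists the states
    # reachable from it in one stimulus step. (Invented – a real tool would
    # derive this graph from the design and testbench description.)
    GRAPH = {
        "idle": ["cfg"],
        "cfg":  ["run", "idle"],
        "run":  ["run", "err", "done"],
        "err":  ["idle"],
        "done": ["idle"],
    }

    def shortest_trajectory(start, target):
        """Breadth-first search: fewest transitions (vectors) from start to target."""
        queue, seen = deque([[start]]), {start}
        while queue:
            path = queue.popleft()
            if path[-1] == target:
                return path
            for nxt in GRAPH.get(path[-1], []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None  # target unreachable

    print(shortest_trajectory("idle", "err"))  # ['idle', 'cfg', 'run', 'err']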
But the second thing they do appears to be their own variation on the “traveling salesman” problem: how do you traverse the graph to hit all the nodes without repeating any path? (The canonical traveling salesman problem is about visiting every node exactly once and ending back where you started.) The idea is to get full coverage with as few vectors as possible. This gets specifically at the “coverage/cycle” metric.
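Again, the actual traversal algorithm isn’t public; a greedy walk that always heads for the nearest not-yet-visited state – reusing GRAPH and shortest_trajectory from the sketch above – shows the idea of covering everything without burning many redundant vectors:

    def coverage_walk(start):
        """Greedy sketch: from wherever we are, head for the nearest unvisited
        state until every state is covered. (A stand-in for whatever traversal
        the real tool uses.)"""
        visited, walk = {start}, [start]
        while visited != set(GRAPH):
            candidates = [shortest_trajectory(walk[-1], s)
                          for s in GRAPH if s not in visited]
            best = min((p for p in candidates if p is not None), key=len)
            walk.extend(best[1:])  # append the hops, skipping our current state
            visited.update(best)
        return walk

    print(coverage_walk("idle"))
    # -> ['idle', 'cfg', 'run', 'err', 'idle', 'cfg', 'run', 'done']
    # Seven transitions cover all five states – a little path reuse, but not much.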
Which reinforces the old truth that simply having and rewarding metrics doesn’t necessarily help things. It’s too easy to pick the wrong metrics – which will be duly attained, and for which rewards will be duly paid – without life actually improving. Because they’re the wrong metrics.
Perhaps MDV should be modified to UMDV: Useful-Metric-Driven Verification. Of course, then we’ll get to watch as companies battle over which metrics are useful. But that could make for entertaining viewing too…