
Software Is In Style

New C-Level SoC Verification Options

System-on-chip (SoC) verification is dominated by hardware verification languages and methodologies. Because you’re verifying hardware. Duh.

But, by definition, SoCs have processors that will run software. And that software represents a higher-level source of stimulation and observation for testing how well the IP blocks that make up the SoC work together.

It’s called software-driven verification, and we’ve looked at the concept before, both at the basic level and in more detail, via Breker’s solution. The former conceptually covers issues common to anyone trying to address this space, while the latter focuses more specifically on how one particular company tries to solve it.

But there’s a new kid in town. In fact, there are a couple of new kids in town. One is an old kid with a new offering: Mentor has announced their iSDV component, which is a part of their Questa verification platform. The other is less known: newcomer Vayavya (pronounced “vah-YAH-vyah” – it means “northwest” in Sanskrit, which, in their view, is a propitious direction – analogous to our “up and to the right”) has introduced SoCX-Specifier to enable test generation.

At a high enough level, these offerings sound pretty much the same. And, in fact, they’re addressing the same overall goal: automatically creating software tests that exercise a complete system by putting the processors through their paces – and, hopefully, stressing things to find unexpected gotchas.

But there are differences, and I’ll do my best to identify the nuances. I use the Breker technology as a baseline, not because I’m saying it’s the standard that others must meet, but rather because we’ve already covered it, so it’s a known quantity. (You can check the prior two articles above if you need a quick refresher. I know I would…)

First of all, each of the offerings uses scenarios or use cases as the primary input to the test-generation tool. The idea here is that systems engineers can specify all of the different things that might be expected out of the system – inputs, events, responses – and the tools can mine that information to exercise those scenarios.
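Just to make the concept concrete, here's a purely illustrative notion of what a scenario captures – this is my own sketch, not any of the three vendors' actual input formats:

```c
/* Illustrative only: a "scenario" names a stimulus, the intermediate
 * events it should trigger, and the expected observable response.
 * None of the three vendors' real input formats look like this. */
typedef struct {
    const char *name;        /* e.g., "camera frame to display" */
    const char *stimulus;    /* input applied to the system */
    const char *events;      /* intermediate activity expected */
    const char *response;    /* what the system should produce */
} scenario;

static const scenario frame_path = {
    .name     = "camera frame to display",
    .stimulus = "frame arrives on camera interface",
    .events   = "ISP processes frame; DMA moves it to memory",
    .response = "display controller fetches and shows the frame",
};
```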

How the scenarios are processed appears to differ by vendor. We looked extensively at Breker’s start-at-the-outcome approach; Mentor uses an approach that they say is a mixture of starting at the outcome and at the inputs, flowing forwards and backwards. Exactly how everyone’s algorithms work, however, is not something I’m going to try to tease out here. My guess is that, whatever is good or bad about them, the newest ones will probably be completely different in a few months anyway as the companies continue to refine the tools. If it becomes a serious point of competition, we can come back to it in the future.

The next fundamental attribute to consider is who’s in charge: the testbench or the C program. The output of the test generation algorithm is, conceptually, a series of tests executed in C. But that can be managed a couple of different ways. And how that works can affect performance and flexibility.

On the one hand, you can combine all of the tests and compile them into a single executable program. In this model, the processor is in charge: the program executes and that’s that. If you’re executing in simulation, then you can have monitors that capture outputs and update coverage and other such passive activities. But such testbench items are slaves to the program, and they can keep up only because simulation runs relatively slowly.
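To make that model concrete, here's a rough sketch of what such a monolithic bare-metal image might look like. This is my own illustration, not any vendor's generated output; the test names and the monitor address are made up.

```c
/* Hypothetical monolithic bare-metal test image: all generated tests
 * are compiled into one executable, and the processor drives everything.
 * A simulation-side monitor passively watches writes to MONITOR_ADDR. */
#include <stdint.h>

#define MONITOR_ADDR ((volatile uint32_t *)0x40001000u) /* hypothetical */

static void log_event(uint32_t code)
{
    *MONITOR_ADDR = code; /* the testbench monitor snoops this write */
}

/* Stand-ins for real generated test bodies */
static void test_dma_to_mem(void) { log_event(0x100u); /* ... */ }
static void test_usb_enum(void)   { log_event(0x200u); /* ... */ }

int main(void)
{
    test_dma_to_mem();
    test_usb_enum();
    log_event(0xD00Eu);  /* "done" marker for the testbench */
    for (;;)
        ;                /* bare metal: nowhere to return to */
}
```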

Once you move from simulation to emulation or actual silicon, however, the speed of execution is usually much too fast for a testbench to keep up with. Breker says that they have written their own monitor such that the program can write to it at speed and move on; that monitor can then write its data out at whatever speed it needs without slowing down the program. They claim that this was a tough problem that they solved, the implication being that they may uniquely have it (“may” because, since it’s early days here, not everyone is sure what everyone else has yet).
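Breker hasn't published the details of their monitor, but one plausible way to decouple an at-speed producer from a slower consumer is a ring buffer in shared memory: the program pushes events and moves on, and the monitor drains the buffer at whatever pace it can manage. Everything in the sketch below – names, sizing, the drop-on-full policy – is my own assumption, not their design.

```c
/* One plausible decoupling scheme (not Breker's published design):
 * the program pushes events at full speed; the monitor drains them
 * asynchronously without ever stalling the program. */
#include <stdint.h>

#define RING_SIZE 1024u /* power of two; sizing is hypothetical */

typedef struct {
    volatile uint32_t head;            /* advanced by the program */
    volatile uint32_t tail;            /* advanced by the drain side */
    volatile uint32_t slot[RING_SIZE];
} event_ring;

static event_ring ring; /* lives somewhere the monitor can see it */

/* Program side: write and move on; never block on the monitor.
 * (Here we simply drop events if the buffer is momentarily full.) */
static void push_event(uint32_t code)
{
    uint32_t head = ring.head;
    if (head - ring.tail < RING_SIZE) {        /* room available? */
        ring.slot[head & (RING_SIZE - 1u)] = code;
        ring.head = head + 1u;                 /* publish after the write */
    }
}
```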

The other potential speed-related gotcha in this model is when a particular thread has to await input from the testbench. Such threads may slow down at that point, but other threads will keep going at speed, and, once they have their inputs, the slowed threads get back up to speed.
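A minimal sketch of what such a wait might look like, assuming a hypothetical mailbox register that the testbench fills:

```c
/* Hypothetical wait-for-testbench-input: this thread polls a mailbox
 * until the testbench deposits a value, while other threads continue
 * at speed. The address and "empty" sentinel are made up. */
#include <stdint.h>

#define MAILBOX       ((volatile uint32_t *)0x40002000u)
#define MAILBOX_EMPTY 0xFFFFFFFFu

static uint32_t await_testbench(void)
{
    uint32_t v;
    while ((v = *MAILBOX) == MAILBOX_EMPTY)
        ;                     /* this thread stalls; others keep going */
    *MAILBOX = MAILBOX_EMPTY; /* consume the value */
    return v;
}
```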

Mentor, by contrast, touts as one of their benefits their integration with their much more expansive Questa platform. And this brings up the other way that you can run a C program: have the testbench call it. In this model, a test script calls C programs to run, so all of the C programs execute under the direction of the test script, and the programs can interact with other Questa features like CodeLink for debugging and such.
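Mentor hasn't detailed their plumbing, but a generic way to put the testbench in charge is to make the on-target program a small dispatcher that waits for the script to name the next test. Everything here – registers, command codes, test names – is hypothetical, not Mentor's documented mechanism.

```c
/* Generic testbench-in-charge sketch: the on-target program idles in
 * a dispatch loop; the test script writes a command naming the next
 * test and reads back its result. All addresses/codes are invented. */
#include <stdint.h>

#define CMD_REG    ((volatile uint32_t *)0x40003000u) /* script writes */
#define STATUS_REG ((volatile uint32_t *)0x40003004u) /* script reads  */
#define CMD_NONE   0u
#define CMD_EXIT   0xFFFFFFFFu

static uint32_t test_dma_to_mem(void) { /* ... */ return 0u; }
static uint32_t test_usb_enum(void)   { /* ... */ return 0u; }

int main(void)
{
    for (;;) {
        uint32_t cmd = *CMD_REG;
        if (cmd == CMD_NONE) continue;   /* poll for the next command */
        if (cmd == CMD_EXIT) break;
        *CMD_REG = CMD_NONE;             /* acknowledge the command */
        switch (cmd) {
        case 1u: *STATUS_REG = test_dma_to_mem(); break;
        case 2u: *STATUS_REG = test_usb_enum();   break;
        default: *STATUS_REG = 0xBADu;            break;
        }
    }
    return 0;
}
```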

Whereas in the Breker and Vayavya cases, the program is king, in the Mentor case, either the test script or the software program can be king in a kind of monarchy-sharing arrangement. So which of these is better?

From a pure performance standpoint, a pre-compiled stand-alone run-till-you’re-done (not strictly run-to-completion from a technical standpoint) bare-metal program will execute more quickly than a series of C programs that are invoked by a test script. That can be good from a wall-clock standpoint, but it’s also a more stringent test if everything is running at speed (or as close as possible to at speed).

The flip side of this is the flexibility that Mentor’s iSDV provides. Rather than blindly running an entire suite, it can examine results and make decisions on the fly, adding some dynamic intelligence capabilities to the run. This is because you can have an intervening test script in charge, and it can decide which C snippets to run when.
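As a sketch, the controlling logic might look something like the following – in practice it would live in the Questa-side test script rather than in C, and run_test() and the test IDs are stand-ins of my own invention:

```c
/* Illustrative dynamic selection: examine each result and decide
 * which C snippet runs next. run_test() is a stub standing in for
 * whatever dispatches a test to the target. */
enum test_id { TEST_DMA_TO_MEM, TEST_DMA_DIAGNOSTIC, TEST_USB_ENUM };

static int run_test(enum test_id id) { (void)id; return 0; }

int run_suite(void)
{
    if (run_test(TEST_DMA_TO_MEM) != 0)
        return run_test(TEST_DMA_DIAGNOSTIC); /* drill into the failure */
    return run_test(TEST_USB_ENUM);           /* otherwise press on */
}
```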

So each model has something working for it; it’s probably fair to say that Mentor has the edge here simply because they allow both models.

One other feature that’s often called out as important is the ability, not just to generate tests for multiple cores that will run in parallel, but also to generate multi-threaded tests for a given core (or multiple cores) to stress the impact of swapping contexts. All three companies claim to offer this capability, so there is no obvious high-level distinction there.
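For illustration, here's one hypothetical stress pattern of that flavor, assuming a target running an OS with pthreads: two threads hammer the same device so that context swaps land in the middle of register sequences. This is my own sketch, not any vendor's generated test.

```c
/* Hypothetical context-swap stress (assumes an OS with pthreads):
 * two threads interleave accesses to one shared device register so
 * that preemption can split up register sequences. */
#include <pthread.h>
#include <stdint.h>

#define DEV_DATA ((volatile uint32_t *)0x40004000u) /* hypothetical */

static void *hammer(void *arg)
{
    uint32_t pattern = (uint32_t)(uintptr_t)arg;
    for (int i = 0; i < 10000; i++) {
        *DEV_DATA = pattern;  /* a swap here interleaves the accesses */
        (void)*DEV_DATA;      /* read back to create contention */
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, hammer, (void *)(uintptr_t)0xAAAAAAAAu);
    pthread_create(&b, NULL, hammer, (void *)(uintptr_t)0x55555555u);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}
```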

Finally, Vayavya does something that the others don’t, building on what was its primary technology before this latest offering: they’ve historically provided a tool for automatically creating drivers for SoC components. Which now gives them two tools: SoCX-Specifier and SoCX-Virtualizer.

SoCX-Specifier is the means by which you sketch out the scenarios to create the test program; this is new. SoCX-Virtualizer allows hardware and software engineers to define the hardware and software architectures from which the tool can automatically create correct-by-construction drivers. These drivers, of course, connect the more abstract software program with the low-level hardware resources being exercised. Vayavya claims that their drivers compete well with hand-written drivers from a performance standpoint (I haven’t tested whether hand coders would agree).
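Vayavya hasn't published what SoCX-Virtualizer's output looks like, but a register-level driver of the general sort being generated might resemble this sketch; the UART base address and register map are invented for illustration.

```c
/* What a generated register-level driver might look like (my sketch,
 * not Vayavya's actual output). The UART registers are hypothetical. */
#include <stdint.h>

#define UART_BASE    0x40005000u
#define UART_TXDATA  (*(volatile uint32_t *)(UART_BASE + 0x00u))
#define UART_STATUS  (*(volatile uint32_t *)(UART_BASE + 0x04u))
#define UART_TX_FULL 0x1u

/* Blocking byte transmit: the abstraction the test program calls,
 * hiding the register-level handshake underneath. */
void uart_putc(uint8_t c)
{
    while (UART_STATUS & UART_TX_FULL)
        ;              /* wait for room in the TX FIFO */
    UART_TXDATA = c;
}
```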

The Breker and Mentor offerings assume that the drivers are already in place. Which isn’t a bad assumption; folks have been writing drivers for years. So the SoCX-Virtualizer aspect of Vayavya’s offering could easily be split out as a separate complementary feature that could even be used to generate the drivers that Breker or Mentor tests would run over.

Note that portability is an important aspect of the generated tests; by one means or another, all three offerings promise portability as you move from simulation to emulation to execution on live silicon. There may well be nuances in that rather loose concept of “portable,” but those have yet to be distilled out.

So we have three companies trying to do more or less the same thing. All with slightly different approaches. And, not inconsequentially, the three companies have different sizes and histories that may well figure into the winner/loser verdicts. Obviously Mentor, by dint of its behemoth status as compared to the other guys, has an advantage from a shouting standpoint; they have the resources to send their messaging loud and far. Breker has established themselves in this particular area for a few years now (many of them stealthily), so they see themselves as the incumbent. And Vayavya is leveraging other technology to bring themselves into the game; they’re also a very small new company.

It’s not at all clear who will prevail. Feel free to cast your votes in the comments below.

 

More info:

Breker TrekSoC

Mentor iSDV announcement

Mentor iSDV

Vayavya SoCX-Specifier announcement

Vayavya SoCX-Specifier

