I feel like an old fool (but where are we going to find one at this time of the day?). Almost everything I hear on the technology front these days causes me to have a knee-jerk reaction along the lines of, “Things have certainly changed since those far-off days when I was a bright-eyed, bushy-tailed, newly-minted engineer!”
As I’ve mentioned before, I now predate almost every modern electronic design tool and technology. Take hardware description languages (HDLs) for example. The term HDL is typically associated with languages like Verilog and VHDL that are used to represent digital circuits. These representations may subsequently be employed by software simulation or hardware emulation engines for test and verification, or by logic synthesis engines that compile them into their gate-and-register-level equivalents.
Having said this, languages like SPICE (this term refers both to the simulator and the language) that are used to represent analog circuits may also be classed as HDLs. And then there are crossovers like Verilog-AMS and VHDL-AMS, which are extensions to their respective digital core languages that support the representation of analog and mixed-signal circuits.
But we digress… The HILO-1 digital logic simulator was created towards the tail-end of the 1970s by students at Brunel University in England. A few of the names that spring to mind are Peter Flake, Phil Moorby, Simon Davidmann, and Brian Bailey. HILO-1, which was commercialized by Cirrus Computers in 1981, was a unit-delay simulator, which means it didn’t have any real concept of timing. This was succeeded by HILO-2, which was a combination of a logic simulator, a fault simulator, and a min-max timing simulator.
When I graduated from Sheffield Polytechnic (now Sheffield Hallam University) in 1980, my first position was as a member of a team designing CPUs for mainframe computers at International Computers Limited (ICL). In 1981, two of the managers at ICL left to form their own company, Cirrus Designs, which was a sister company to Cirrus Computers, and they invited me to join them. At that time, Cirrus Designs focused on creating test programs for printed circuit boards (PCBs), but we soon branched out into other things, like designing our own hardware accelerator and our own hardware emulator.
Testing circuits was a natural fit with simulation in general, and with fault simulation and automatic test pattern generation (ATPG) in particular, which is how I eventually ended up moving over to Cirrus Computers and commenced my fall into the rabbit hole known as Electronic Design Automation (EDA).
In 1983, both of the Cirrus entities were acquired by the American company GenRad (originally founded in 1915 as General Radio), which resulted in HILO-2’s HDL being called GenRad HDL (GHDL). By this time, Phil Moorby had left to join an American company called Gateway Design Automation, which is where he developed Verilog along with Prabhu Goel and Chi-Lai Huang. Verilog was introduced to the market in 1984, just one year after the initial release of VHDL, thereby sparking the Verilog vs. VHDL language wars (see also Language Wars in the 21st Century: Verilog versus VHDL Revisited).
But, once again we digress… Sometime in the early-to-mid 1980s, I created a GHDL model of an 8-bit microprocessor unit (MPU), which was destined for use in a NATO fighter aircraft (the MPU, not my model). This was a big and hairy model for that time. The way we usually tested our models was to create a testbench that applied stimulus to the inputs and monitored/checked responses at the outputs. But using this technique with this model would have brought me to my knees (both figuratively and literally). So, I was rather proud of myself when I came up with the idea of creating a higher-level model of a simple system involving my MPU model in conjunction with RAM and ROM models, all connected via address, data, and control buses. I wrote my test programs in assembly language, assembled them into machine code, loaded the machine code into the ROM, loaded the entire thing into the simulator, pulled and released the MPU’s reset input, and “let her rip!”
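For readers who have never worked this way, here’s a minimal sketch of the general idea in modern-day C++ (the original GHDL model is long gone, and this three-instruction machine is invented purely for illustration): instead of wiggling pins from a testbench, you load an assembled program into ROM, release reset, and let the processor model check itself by what it leaves behind in memory.

```cpp
// Toy sketch of "the program is the testbench": a hypothetical 8-bit MPU model
// executes machine code out of a ROM image instead of being driven by
// pin-level stimulus. The three-opcode instruction set is invented here
// purely for illustration.
#include <cstdint>
#include <cstdio>
#include <vector>

enum Opcode : uint8_t { LDA_IMM = 0x01, ADD_IMM = 0x02, STA_ABS = 0x03, HALT = 0xFF };

int main() {
    std::vector<uint8_t> rom = {        // "assembled" test program
        LDA_IMM, 0x10,                  // A = 0x10
        ADD_IMM, 0x05,                  // A = A + 0x05
        STA_ABS, 0x00,                  // RAM[0x00] = A
        HALT
    };
    std::vector<uint8_t> ram(256, 0);

    uint8_t a = 0;                      // accumulator
    size_t  pc = 0;                     // program counter (releasing reset starts here)
    bool running = true;

    while (running && pc < rom.size()) {   // "let her rip!"
        switch (rom[pc++]) {
            case LDA_IMM: a = rom[pc++];              break;
            case ADD_IMM: a = uint8_t(a + rom[pc++]); break;
            case STA_ABS: ram[rom[pc++]] = a;         break;
            case HALT:    running = false;            break;
        }
    }

    // Self-checking: compare the memory image against the expected result.
    std::printf("RAM[0] = 0x%02X (%s)\n", ram[0], ram[0] == 0x15 ? "PASS" : "FAIL");
    return ram[0] == 0x15 ? 0 : 1;
}
```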
The reason for my rambling ruminations is that I was just chatting with Dave Kelf, who is the CEO at Breker Verification Systems. When Adnan and Maheen Hamid founded the company in 2003, Adnan was asked what it was going to be called. He had no answer, so the follow-up question was, “Well, what does the company do?” to which he responded, “We break things,” which evolved into “Breker” (I love learning this stuff).
Breker is in the business of test suite synthesis. As depicted in the illustration below, the idea is to capture test content in the form of high-level abstract models of scenarios and specifications, and then synthesize these models into complete test suites. The models themselves are packaged as a System Verification IP (SystemVIP) library, which is created in the Portable Test and Stimulus Standard (PSS) language and C++.
Test Suite Synthesis (Source: Breker)
In addition to stimulus, the synthesis tool also generates self-checking tests, coverage for those tests, and debug information, making this a complete testbench package. The output from the synthesis tool is fed into something called Synthesizable VerificationOS, which allows the testbench to be ported across different verification platforms: simulation, emulation, prototyping, final silicon, and virtual platform SystemC models.
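To make the “scenario model in, concrete test out” idea a little more tangible, here’s a toy C++ sketch. To be clear, this is not PSS and not Breker’s API; it simply illustrates the general technique of declaring abstract actions with their dependencies and letting a tool pick a legal (randomized) ordering and emit a self-checking test.

```cpp
// Minimal sketch of the general "scenario model -> concrete test" idea
// (NOT Breker's API and NOT real PSS syntax): each abstract action declares
// what it consumes/produces, and the "synthesizer" picks a legal ordering,
// emitting a self-checking test sequence.
#include <algorithm>
#include <cstdio>
#include <random>
#include <string>
#include <vector>

struct Action {
    std::string name;
    std::vector<std::string> needs;     // state/resources it consumes
    std::vector<std::string> provides;  // state/resources it produces
};

int main() {
    std::vector<Action> scenario = {
        {"dma_copy",  {"buffer_filled"}, {"buffer_copied"}},
        {"cpu_fill",  {},                {"buffer_filled"}},
        {"cpu_check", {"buffer_copied"}, {}},               // self-check step
    };

    // Trivial "synthesis": keep picking a random action whose needs are met.
    std::mt19937 rng(42);
    std::vector<std::string> available;
    std::vector<Action> pending = scenario;
    std::puts("// generated test");
    while (!pending.empty()) {
        std::shuffle(pending.begin(), pending.end(), rng);
        auto it = std::find_if(pending.begin(), pending.end(), [&](const Action& a) {
            return std::all_of(a.needs.begin(), a.needs.end(), [&](const std::string& n) {
                return std::find(available.begin(), available.end(), n) != available.end();
            });
        });
        if (it == pending.end()) { std::puts("// unschedulable scenario"); return 1; }
        std::printf("run_%s();\n", it->name.c_str());
        available.insert(available.end(), it->provides.begin(), it->provides.end());
        pending.erase(it);
    }
    return 0;
}
```

In a real flow, of course, the actions, resources, and scheduling constraints are vastly richer, and the generated tests are targeted at whichever platform you happen to be running on.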
Breker has a long and storied history with respect to the verification of System-on-Chip (SoC) designs featuring Arm and x86 processor cores (also SoC FPGA designs featuring Arm cores). In these cases, designers have high confidence in the integrity of the processor cores themselves, leaving them free to focus on the verification of the SoC as a whole. This confidence is justified because Arm spends ~$150M a year running 10^15 verification clock cycles per core (by which we mean each core in the Cortex-A, Cortex-M, and Cortex-R families). I’m sure Intel does much the same with its x86 cores.
But what about RISC-V processors? I’m glad you asked. The problem here is that RISC-V is an open-source instruction set architecture (ISA). People can take this specification and generate their own processor cores. In turn, this means designers may be developing both the processor core and the SoC at the same time. Eeek! This has led Breker to develop the verification stack depicted below.
RISC-V verification stack (Source: Breker)
The reason for the “suggested” annotation in the above image is that RISC-V International (which is the global non-profit home of the open standard RISC-V ISA and related specifications) is currently developing a certification program to provide an assurance metric for RISC-V devices, and Breker (which is a member of both PSS and RISC-V International) is proposing this stack as a starting point.
Also note that the “Core operation integrity” and “System operation integrity” items (where the latter basically means the core(s) in the context of the SoC) both include things like cache coherency and security.
Taking cache coherency as an example, I wrote about this some time ago in my Verifying Cache and System Coherency in 21st Century Systems column. As I said at that time, one measure of the success of Breker’s approach is to compare the efficacy of handcrafted tests versus those created using the Cache Coherency portion of Breker’s SystemVIP Library (note that the “TrekApp” shown in this image is what they called things back in the day when I wrote that column).
Efficacy of Cache Coherency SystemVIP (Source: Breker)
As I also said in that column, in the case of the directed coherency tests that were created by hand, there’s only a limited amount of concurrency because that’s all the verification engineers can wrap their brains around. By comparison, the Cache Coherency SystemVIP generates a wealth of concurrent tests that really stress the system.
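If you’ve never pondered why concurrency is the hard part, the following toy C++ snippet may help. It is emphatically not the Cache Coherency SystemVIP, just a software-level sketch in which several threads hammer counters that are likely to share cache lines, forcing the coherency protocol to shuttle those lines between cores, with a self-check at the end. Now imagine automatically generating thousands of far more devious interleavings of this ilk, which is essentially what the SystemVIP does.

```cpp
// Toy illustration of concurrent coherency stress (this is NOT the Cache
// Coherency SystemVIP, just a software-level sketch): several threads hammer
// counters that are likely to share cache lines, provoking coherency traffic
// between cores, and the final totals are self-checked.
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    constexpr int kThreads = 4;
    constexpr int kIters   = 100000;
    // Deliberately packed: adjacent atomics are likely to land in the same
    // cache line, so every increment provokes coherency traffic.
    std::atomic<long> counters[kThreads];
    for (auto& c : counters) c = 0;

    std::vector<std::thread> workers;
    for (int t = 0; t < kThreads; ++t) {
        workers.emplace_back([&, t] {
            for (int i = 0; i < kIters; ++i) {
                counters[t].fetch_add(1, std::memory_order_relaxed);
                // Also poke a neighbor's counter to force line sharing.
                counters[(t + 1) % kThreads].fetch_add(0, std::memory_order_relaxed);
            }
        });
    }
    for (auto& w : workers) w.join();

    long total = 0;
    for (auto& c : counters) total += c.load();
    bool pass = (total == long(kThreads) * kIters);
    std::printf("total = %ld (%s)\n", total, pass ? "PASS" : "FAIL");
    return pass ? 0 : 1;
}
```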
All this leads us to the fact that, a couple of months ago at this year’s Design Automation Conference (DAC), Breker proffered the first public demonstrations of the following:
RISC-V CoreAssurance SystemVIP
RISC-V SoCReady SystemVIP
Together, these bodacious beauties provide a complete range of tests to satisfy the requirements of the entire RISC-V verification stack. Starting with randomized instruction generation and microarchitectural scenarios, this RISC-V SystemVIP includes unique tests that check all integrity levels, ensuring the smooth integration of the core(s) into an SoC regardless of architecture, and flagging possible performance and power bottlenecks as well as functional issues.
This RISC-V SystemVIP can be extended for custom RISC-V instructions, fully incorporating custom tests into the suite and cross-multiplying them with the other tests. All tests are self-checking and incorporate debug and coverage analysis solutions. They may be seamlessly ported across simulation, emulation, prototyping, post-silicon, and virtual platform environments.
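Just to give a flavor of what “randomized instruction generation with self-checking” means in practice, here’s a tiny C++ sketch (mine, not Breker’s, and limited to RV32I ADD/SUB): it emits random instruction encodings and uses a golden model to predict the destination register value, so a harness could later compare the device under test against the prediction.

```cpp
// Toy flavor of randomized instruction generation with self-checking (NOT
// Breker's generator): emit random RV32I ADD/SUB encodings and predict the
// destination register value with a tiny golden model, so a harness could
// compare the DUT's register file against the expectation.
#include <cstdint>
#include <cstdio>
#include <random>

// RV32I R-type: funct7 | rs2 | rs1 | funct3 | rd | opcode (0x33), funct3 = 0
static uint32_t rtype(uint32_t f7, uint32_t rs2, uint32_t rs1, uint32_t rd) {
    return (f7 << 25) | (rs2 << 20) | (rs1 << 15) | (0u << 12) | (rd << 7) | 0x33u;
}

int main() {
    std::mt19937 rng(1);
    std::uniform_int_distribution<uint32_t> reg(1, 31);   // skip x0 (hardwired to zero)
    std::uniform_int_distribution<uint32_t> val;          // full 32-bit range

    uint32_t x[32] = {};                                   // golden register file
    for (int i = 1; i < 32; ++i) x[i] = val(rng);          // randomized initial state

    for (int i = 0; i < 5; ++i) {
        uint32_t rd = reg(rng), rs1 = reg(rng), rs2 = reg(rng);
        bool sub = rng() & 1;                              // ADD: f7=0x00, SUB: f7=0x20
        uint32_t insn = rtype(sub ? 0x20u : 0x00u, rs2, rs1, rd);
        x[rd] = sub ? x[rs1] - x[rs2] : x[rs1] + x[rs2];   // golden-model update
        std::printf(".word 0x%08X   # expect x%u == 0x%08X\n",
                    (unsigned)insn, (unsigned)rd, (unsigned)x[rd]);
    }
    return 0;
}
```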
It’s important to note that, although the first public demonstrations were at DAC 2024, Breker’s RISC-V SystemVIP has already been deployed at multiple companies working on RISC-V cores and on SoCs that use home-developed or third-party RISC-V cores. These SystemVIPs have proven to be instrumental in the discovery of complex microarchitectural and system integrity bugs, as well as ISA specification misunderstandings not found using other verification means. On several occasions, bugs were discovered late in the development cycle that, had they escaped detection, would have resulted in design failures in the field (Eeek squared!!!).
And, as is usually the case when I complete a column, I’m left thinking, “Things have certainly changed since those far-off days when I was a bright-eyed, bushy-tailed, newly-minted engineer!” What say you? Do you have any thoughts you’d care to share on anything you’ve read here?
I think that somebody who breaks things is a “breAker”.
There’s a reason for the spelling without the ‘a’ — but I can’t remember what it is LOL
While we are here, what does the “D” in HDL stand for?
D stands for “description” as in “Hardware Description Language”
Thanks, I totally agree.
Unfortunately, because Verilog could be simulated, management overlooked this simple fact and forced designers to use HDLs for design entry!!
Yes, I have VS Code. C# is the most popular language supported, and I have found that C# is very useful for design entry. In many ways, it is like opening up the old TTL manual, designing AND debugging, and finally doing the HDL build.
So either way, get to VS/C# and Boolean conditional assignments, and don’t look back.
One hiccup is that this methodology leads to using latches, which are foreign to HDLs.
BUT latches are less sensitive to clock skew and setup/hold times.