Electronic system level (ESL) design has struggled to convince doubters that it's more than a marketing TLA. But the most visibly productive ESL tools in recent years have been those that synthesize C into lower-level RTL. Doing logic synthesis from C has been a long-time vision for raising the level of abstraction of design. But the history of the technology, which predates the ESL phenom, is checkered and has left a sour taste in many designers' mouths. Each new offering has had to convince rather dubious prospects that it was different from what had come before. And yet, gradually, very gradually, there seems perhaps to be a bit of traction. Certainly enough to provide encouragement to new purveyors of C synthesis tools.
Now, it’s one thing for new startups to continue to feed this beast. That’s where many of our more daring ideas come from. You know, the kind of out-there ideas that don’t provide the level of guarantee that larger-company managers want to be able to give their bosses to make sure that everyone looks good. So it might not be so surprising to see yet another small company take a new stab at it, with bigger companies remaining aloof, saying, “It sounds great in principle, but it just doesn’t work.” (That is, until it does work, and then the big company buys the little one.)
But in this case, it's another big company diving in: Cadence has announced their C-to-Silicon product, integrating it with their overall flow and accompanying it with an equivalence checker from Calypto to give designers a way to confirm that what they got in logic was indeed what they asked for in C. They are trying to leverage the tool infrastructure they already have in place, such as their ECO capability, to manage the overall design and to provide more accurate timing models for simulation.
But before we look at ways in which they are trying to differentiate their version, there are a few standard questions for any C-to-RTL technology. These have to do with some of the limitations and gotchas of previous incarnations.
- Timed or untimed? In other words, does the user have to restructure the design to annotate or otherwise indicate the timing of events and the parallelism? The answer here is untimed C: no reworking of the program for timing. In fact, this leaves the compiler free to optimize the timing.
- Full ANSI C? Early generations of the technology would limit what could be done. Pointers were probably the most notorious omission. The problem with pointers is that you can do so much with them, and with the prospect of arbitrary pointer arithmetic, you never know whether the resulting address will point within legitimate memory or somewhere in the Gobi Desert. But C programs typically make extensive use of pointers, and disallowing them means a lot of program restructuring (typically to use arrays instead). More recent offerings have included pointer support, and, indeed, the Cadence offering covers the full ANSI C language. The only gotcha – and this is going to apply to anyone, for the most part – is that you're not dealing with an "unlimited" amount of heap memory. So memory has to be static. You can use malloc, but it has to pull from a pre-allocated "sandbox." (A quick sketch of what such a sandbox might look like follows this list.)
- Any proprietary additions to the C language? Some C synthesis attempts have augmented the language with special domain-specific constructs. Cadence has not done that.
- C++? Yes. In fact, they say, the higher the level of abstraction, the more flexibility they have in synthesizing an optimal circuit. You can also use SystemC (which has C++ as a foundation), allowing the synthesis of abstract designs done by system architects without recasting them into some more implementation-specific form.
- How sensitive is this to design style? Some C synthesis approaches require very specific coding practices in order for efficient implementations to be inferred. This implies reworking legacy code and a learning curve for new designs. Cadence claims no such requirement, saying they can generate high-quality results from more-or-less untweaked abstract code.
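Just to make that "sandbox" notion concrete, here's a minimal sketch (my own illustration, not anything lifted from the Cadence tool) of a malloc-like call that can only hand out memory from a fixed, pre-allocated pool, so the total storage is known up front:

    // Minimal sketch of the "sandbox" idea: heap-style allocation out of a
    // fixed, statically sized pool. Names and sizes are illustrative, not how
    // C-to-Silicon actually implements it.
    #include <cstddef>

    static unsigned char sandbox[4096];  // pre-allocated pool, size known at synthesis time
    static std::size_t   next_free = 0;  // simple bump allocator; no free() in this sketch

    void* sandbox_malloc(std::size_t bytes) {
        std::size_t aligned = (bytes + 7) & ~(std::size_t)7;   // keep allocations 8-byte aligned
        if (next_free + aligned > sizeof(sandbox)) {
            return nullptr;   // pool exhausted: there is no unbounded heap to fall back on
        }
        void* p = &sandbox[next_free];
        next_free += aligned;
        return p;
    }

    int main() {
        // Pointers are fine; the addresses are just offsets into a block of
        // memory whose total size is visible to the tool.
        int* coeffs = static_cast<int*>(sandbox_malloc(16 * sizeof(int)));
        if (coeffs) {
            for (int i = 0; i < 16; ++i) coeffs[i] = i;
        }
        return 0;
    }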
So now we have two big guys that can pretty much check off all these elements: Mentor (Catapult C) and Cadence (C-to-Silicon). Cadence draws some further distinctions, however – in some cases leveraging other elements in the design flow.
The first big issue they raise is that of datapath versus control path. Now, from a hardware standpoint, we're kinda used to that distinction, and we frequently draw a line between one and the other. But when you're looking at a C program, I mean, it's just a program. You can't segregate the control path from the datapath. And that's also pretty much reality: no design is all datapath or all control. Cadence claims, however, that existing solutions favor datapath designs.
What does that mean? Well, in a program, you take in data and you put out data, so anything you do to the variables that represent that data is a datapath operation. But you typically have flow-control structures in the program as well: if/then/else, switch, and loops. (Not to mention goto… but no one would really use that, would they?) Those constitute the control path. Essentially, the decisions are the control path, and the assignments and operations are the datapath. (Whether operations on logical flags used for decisions count as data or control is a pedantic question left to the reader.)
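Just to put a face on that distinction, here's a contrived little function of my own (not anything from Cadence) with the control and datapath pieces labeled:

    // Illustrative only: which parts of a C function are "control" and which are "datapath".
    #include <cstddef>

    int accumulate(const int* samples, std::size_t n, int threshold) {
        int sum = 0;                           // datapath: storage for data
        for (std::size_t i = 0; i < n; ++i) {  // control: the loop decides how many iterations happen
            if (samples[i] > threshold) {      // control: a decision based on the data
                sum += samples[i] * 2;         // datapath: arithmetic on the data
            } else {
                sum += samples[i];             // datapath: arithmetic on the data
            }
        }
        return sum;                            // datapath: the result
    }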
No practical program is all datapath, and a program that is all control makes even less sense (since it would make lots of decisions but do no work… hmmm… any parallel here with Dilbertian concepts is coincidental and unavoidable). Any realistic program will lie somewhere on a spectrum from heavily data-oriented (few decisions and lots of manipulation) to heavily control-oriented (simple data operations, but complicated decisions about what to do). Cadence claims to be able to address this entire spectrum with equal effectiveness, and that this capability is unique.
The next area of focus is that of timing estimation at a high level, and here’s where Cadence invokes its full flow. During the design process, they employ their tools all the way down to the gate level to get actual timing for the intended target. That gate-level representation doesn’t reflect a quick-and-dirty cobbling together for estimation’s sake or a pre-canned model, but is the actual implementation that will be generated when the design is committed. So, in fact, the timing estimates aren’t so much estimates as a preview of what the final result should be.
They’ve also invoked their ECO capability: all decisions and relevant data during processing are captured in a database so that when a small change is made to the C or C++ program, the exact same flow can be followed as far as possible, diverging only where needed. This provides much-needed repeatability of results for portions of the circuit untouched by the change.
They are also trying to enhance reusability by allowing the segregation of design functionality and constraints. The kinds of things that might be considered constraints are the bit widths of fields, clock rates, and other such items that might be declared by #define statements or pragmas or the like. The intent is to foster "purity" of the algorithm and to provide for easier retargeting of the algorithm into different systems having different requirements. It's not completely clear exactly what this means in practical terms, although they describe it as being somewhat like having a side settings file, along with a tool design that works with that file. Described in that fashion, it sounds pretty standard, but this – as well as some of the other features – has apparently warranted patent applications, so presumably there's more there than meets the eye.
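Here's a rough guess at what that separation might look like in practice. The layout, the macro names, and the idea of the tool consuming a clock target are all my own assumptions rather than the actual C-to-Silicon mechanism:

    // Sketch of keeping constraints out of the algorithm. Everything here
    // (names, numbers, the notion of a "settings" section) is illustrative.

    // --- the "side settings file": constraints only, no algorithm ---
    #define COEFF_BITS   12    // field-width constraint
    #define NUM_TAPS      8    // structural constraint
    #define TARGET_MHZ  200    // clock-rate constraint, presumably consumed by the tool

    // --- the "pure" algorithm: written against the settings, never against hard numbers ---
    #include <cstdint>

    typedef std::int16_t coeff_t;   // a real flow might derive a bit-accurate type from COEFF_BITS

    int fir(const coeff_t coeffs[NUM_TAPS], const std::int16_t window[NUM_TAPS]) {
        int acc = 0;
        for (int i = 0; i < NUM_TAPS; ++i) {
            acc += coeffs[i] * window[i];   // the algorithm itself never mentions 8 or 12
        }
        return acc;
    }

    // Retargeting to a different system should mean swapping the settings,
    // not touching the algorithm.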
The compiler also creates a "fast hardware model," which is a SystemC representation of the design that can be used by verification folks and software writers so that they can start evaluating code and system behavior without having to wait for silicon. They can also wrap the generated RTL in a SystemC interface in order to compare the behavior of the generated RTL with that of the original input model of the system.
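For anyone who hasn't seen one, here's roughly what such a fast, untimed model can look like in SystemC. This is a toy of my own, not output from C-to-Silicon, but it shows the idea: software or verification code can pump data through a purely functional model with no clocks or pipeline detail in sight.

    // Toy "fast hardware model": an untimed functional SystemC module plus a
    // trivial harness standing in for early software or verification code.
    // Purely illustrative -- not generated by C-to-Silicon.
    #include <systemc.h>

    SC_MODULE(AccumModel) {
        sc_fifo_in<int>  in;    // input sample stream
        sc_fifo_out<int> out;   // running-sum output stream

        void run() {
            int acc = 0;
            while (true) {
                acc += in.read();   // functional behavior only: no clocks, no pipeline detail
                out.write(acc);
            }
        }

        SC_CTOR(AccumModel) { SC_THREAD(run); }
    };

    SC_MODULE(Harness) {
        sc_fifo_out<int> stim;
        sc_fifo_in<int>  resp;

        void drive() {
            for (int i = 1; i <= 4; ++i) {
                stim.write(i);
                std::cout << "running sum: " << resp.read() << std::endl;
            }
            sc_stop();
        }

        SC_CTOR(Harness) { SC_THREAD(drive); }
    };

    int sc_main(int, char*[]) {
        sc_fifo<int> to_model(4), from_model(4);
        AccumModel model("model");
        Harness    tb("tb");
        model.in(to_model);   model.out(from_model);
        tb.stim(to_model);    tb.resp(from_model);
        sc_start();
        return 0;
    }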
Taking that "did the tool compile this right?" concept one step further, Cadence and Calypto have collaborated to integrate Calypto's SLEC equivalence-checking technology into the C-to-Silicon solution. The SLEC tool confirms (presumably) that the ESL code and the RTL describe the same thing, giving extra confidence that the function realized by the compiler is a faithful representation of what was requested.
So there is clearly a serious new player in this arena. Whether this is a response to demand or a response to a vision intended to spur demand will be seen as Cadence and Mentor duke it out in the market.
Of course, don’t even ask how all this would be affected by a Cadence/Mentor merger, were it to happen…
Links for more information on any companies or products mentioned:
C-to-Silicon
Catapult C
SLEC