
A Clean-By-Construction Deck

Sage Introduces a DRC Compiler

So you’ve spent many months on a chip design project that’s winding down. Layout is done, or at least you hope it’s done, and it’s time to make sure that you did it right. (Yes, I know… most designs today will involve many people doing different things, so the “you” here is intended to refer collectively to all of you.)

That means it’s time to run your design rule checks (DRCs). And as the computer hums away on your DRC deck (presumably so-called because at one time it was a deck of Hollerith cards), inspecting every nook and cranny (you hope) for violations, did you ever wonder where that deck came from?

It’s easy to assume that there is this methodical process by which some committee of technology dudes (“dude” in the gender-neutral sense) sat around planning a technology and a set of rules to live by, which they then codified into a DRC deck and pronounced good.

Well, to hear a new startup, Sage, describe it, seeing where your DRC deck comes from is probably more like watching sausage get made. It may not be something you want to witness. And, in fact, with the size of the decks these days, there’s got to be a veritable sausage-fest going on in the technology houses.

It’s not simply a matter of codifying knowledge into specific, executable DRC rules. And here’s where we have to define some nuances more closely. I’ve always thought of DRC decks as consisting primarily of rules that can be tested as pass or fail. But it’s not quite that simple. Yes, there is a description of the rules, but then there’s a lot of specific code that goes into how violations get identified and flagged on-screen.

You might think that a layout program would have a layer for the layout graphics and then a separate “signaling” layer for issues, but apparently not. The DRC rule code literally instantiates polygons that point out to you the location of a violation. And it’s up to the coder to decide how best to represent that violation.
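
To make that concrete, here’s a rough sketch, in plain Python rather than in any vendor’s actual deck language (real decks are written in things like Calibre’s SVRF), of what one such rule boils down to: test a geometric condition and instantiate a marker polygon wherever it fails. The layer names, the rule name, and the 0.05-micron value are all made up for illustration.

```python
# Illustrative only: a single width-check "rule" that emits marker polygons
# on an error layer wherever the check fails. Not any vendor's deck syntax.
from dataclasses import dataclass

@dataclass
class Rect:
    layer: str
    x1: float
    y1: float
    x2: float
    y2: float

    @property
    def width(self) -> float:
        return min(self.x2 - self.x1, self.y2 - self.y1)

MIN_M1_WIDTH = 0.05  # made-up value, in microns

def check_min_width(shapes: list[Rect]) -> list[Rect]:
    """Rule M1.W.1 (hypothetical): metal1 width must be >= MIN_M1_WIDTH."""
    markers = []
    for s in shapes:
        if s.layer == "metal1" and s.width < MIN_M1_WIDTH:
            # The coder decides how to flag the violation; here we copy the
            # offending shape onto an error layer the layout tool can highlight.
            markers.append(Rect("M1_WIDTH_ERROR", s.x1, s.y1, s.x2, s.y2))
    return markers

if __name__ == "__main__":
    layout = [Rect("metal1", 0.0, 0.0, 0.04, 1.0),   # too narrow: violation
              Rect("metal1", 0.0, 2.0, 0.10, 3.0)]   # wide enough: clean
    for m in check_min_width(layout):
        print(f"violation marker on {m.layer}: ({m.x1},{m.y1})-({m.x2},{m.y2})")
```

And that’s about the simplest rule there is; real rules pile on conditions, exceptions, and interactions between layers.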

Such a rule has to be manually written and coded. Each one. One after another.

And where do the guys writing these rules get their input? From, say, a spreadsheet that carefully describes, in mathematical terms, the rule, any parameters or conditions, and specifically how it should be represented on-screen? Nope. They get it from a document. A written document called the Design Rule Manual (DRM… no, not that DRM).

The DRM has text descriptions of the rules. The DRC coder reads those descriptions and then decides how to devise a test for them. That means interpreting the intent of the rule correctly (there may be ambiguities). The figuring-out-how-to-test-it part may also not be trivial, and there are cases where the exact rule is too hard to codify and an approximation must do.

So here we have two levels of interpretation. The first is when the DRM author interprets a rule and casts that interpretation into prose, and the second is when the poor guy writing the code reads that prose and decides how to turn that into actual DRC deck code.
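
To see why that second step can go sideways, consider a made-up DRM sentence and two perfectly defensible readings of it. The rule text, the numbers, and the code below are all invented for illustration; the point is only that prose leaves room for interpretation that executable code does not.

```python
# Hypothetical DRM text: "Minimum spacing of metal1 to metal1 for lines wider
# than 0.10 um shall be 0.08 um."  Does "wider than 0.10 um" apply to one of
# the two shapes or to both?  The DRC coder has to pick.

WIDE = 0.10   # made-up width threshold, microns
SPACE = 0.08  # made-up spacing requirement, microns

def violation_if_either_is_wide(a_width, b_width, spacing):
    # Reading 1: the rule applies if either shape is wide.
    return (a_width > WIDE or b_width > WIDE) and spacing < SPACE

def violation_if_both_are_wide(a_width, b_width, spacing):
    # Reading 2: the rule applies only if both shapes are wide.
    return (a_width > WIDE and b_width > WIDE) and spacing < SPACE

# A 0.12 um line sitting 0.07 um from a 0.05 um line is a violation under one
# reading and clean under the other.
print(violation_if_either_is_wide(0.12, 0.05, 0.07))  # True
print(violation_if_both_are_wide(0.12, 0.05, 0.07))   # False
```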

This might not be so bad if there weren’t so many rules. In fact, we’ve lived with it for a long time, and it seems to have been manageable as long as the number of rules remained in the few hundreds or so. But those days are gone. Here are some numbers Sage puts forth: a modern DRM is something like a 600-page document containing about 5,000 rules. Those rules will turn into about 100,000 lines of DRC code. All manually written. Text and code.

As a result, it can be an incredible chore to validate the DRC code against the DRM to figure out if all of the rules got covered without escape loopholes and without tightening down too far. Which means that, when the decks are first released, they typically have bugs in them, and it can take years before you can really say that they’re clean.

So Sage is proposing a design rule compiler. OK, more than proposing: they’ve built one and recently introduced it. They call it iDRM. It works by allowing graphical entry of a rule, along with conditions and other complicating factors. The tool then creates an executable check from that definition.

From that rule, iDRM can then create test layouts that should pass and test layouts that should fail. These can be used to validate the rule by running the check against the test structures. The tool can also take existing layouts and check them against the rules, listing all violations.
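
Here’s a bare-bones sketch of that idea as I understand it. The data model and function names are mine, not Sage’s; the point is that the executable check and the test structures both derive from one structured definition of the rule rather than from prose.

```python
# Illustrative sketch (invented data model, not Sage's): capture the rule as
# structured data, then derive both the executable check and the pass/fail
# test structures from that single definition.
from dataclasses import dataclass

@dataclass
class SpacingRule:
    name: str
    layer: str
    min_space: float  # microns

def make_test_layouts(rule: SpacingRule, margin: float = 0.01):
    """One toy layout that should pass and one that should fail."""
    passing = {"layer": rule.layer, "space": rule.min_space + margin}
    failing = {"layer": rule.layer, "space": rule.min_space - margin}
    return passing, failing

def check(rule: SpacingRule, layout: dict) -> bool:
    """Executable form of the same rule: True means clean."""
    return layout["space"] >= rule.min_space

rule = SpacingRule("M1.S.1", "metal1", 0.08)
good, bad = make_test_layouts(rule)
assert check(rule, good) and not check(rule, bad)  # check and tests agree
```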

The idea here is to skip the whole writing-a-War-and-Peace-tome thing and use this tool instead to codify the rules. You then get a one-to-one correspondence between each rule and a correct-by-construction executable version of the rule.

The guy using this can take advantage of the test structures and layout scans to confirm whether he or she defined the rule properly. If there are too many misses (false negatives), then you can see right then and there that you need to tighten up the rule. If you’re getting too many false positives, then you need to find some way to express the intent of the rule that’s not so restrictive.

The part that’s not done yet is taking those rules and translating them into specific code that can be run by third-party DRC engines like Calibre, IC Validator, or Assura. Sage calls that DRC Synthesis. That’s an obvious next step in providing a complete process-concept-to-runnable-DRC-deck flow. But the focus right now is on getting engineers in various parts of the silicon ecosystem started using iDRM to create the rules. There’s plenty of work to do at that stage. Which presumably gives Sage more breathing space to get the last bit done.
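
For what it’s worth, the simplest imaginable version of that synthesis step might look something like the sketch below: walk the structured rules and emit deck text for a target engine. This is my own guess at the shape of the problem, not Sage’s implementation, and the output is SVRF-flavored pseudo-syntax for illustration, not something you should expect Calibre to accept as-is.

```python
# My own sketch of what "DRC Synthesis" might amount to in the simplest case:
# emit engine-specific deck text from a structured rule.  The output is
# SVRF-flavored pseudo-syntax for illustration, not verified Calibre input.
def synthesize_spacing_rule(name: str, layer: str, min_space: float) -> str:
    return (f"{name} {{ @ {layer} spacing must be >= {min_space} um\n"
            f"    EXTERNAL {layer} < {min_space}\n"
            f"}}\n")

print(synthesize_spacing_rule("M1.S.1", "metal1", 0.08))
```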

In case any of you are thinking that this is a foundry-only tool, that’s not necessarily the case. Yes, the initial process design kit (PDK) will be developed at the foundry, and foundries will probably constitute the bulk of the users. But fabless houses often supplement the basic PDK with rules of their own, so they would also be able to leverage iDRM to do that. Failure analysis folks may also want to play with it in order to resolve failures or yield issues that turn out to be related to layout.

As for the rest of you, well, if it all works as promised, you just might sleep a bit better knowing that the deck you’re working with is clean. Or cleaner than usual, anyway. And if you want to go see how that deck got made, well, it might not seem quite as gross anymore.

 

More info:

Sage DA
