
Showing Your True Corners

Solido Helps Analog Designers Cope with Process Variation

A lot has been written about the increasing difficulty of optimizing a design as process dimensions have become increasingly minute. Not only is it harder to balance performance against area, but power must be considered as well. Managing yield is a constant struggle: it’s no longer a question of where to cut off a distribution tail, but of how to shape the distribution so that you don’t over- or under-design your product. Too sloppy and you lose a lot of yield; too rigid and you chew up too much silicon.

As difficult as this is, most of the attention has been focused on digital. It’s even harder for the poor analog folks, for whom “performance” may have diverse meanings according to the intent of the circuit. In the digital world, performance means “speed.” But performance in an analog circuit might include things like gain or phase margin or signal-to-noise ratio (SNR) or bizarre-sounding beasts like “spurious-free dynamic range” (SFDR). Easy for them to say…

Part of figuring out your distribution is figuring out the extent of performance: how bad or good can it get? Particular combinations of process parameters will give you the worst-case and best-case performance. In the digital world, where speed rules, this is evaluated by applying combinations of variations that cause your transistors (N-channel and P-channel) to be either fast or slow. Two transistor types give you two variables, meaning four combinations, typically denoted as FF (Fast-Fast), FS (Fast-Slow), SF, and SS. These are the process corners. Somewhere in between is the “nominal” or “typical” point.

You can think of these points as defining the corners of the sandbox within which you will play. Instead of having to sweep across a wide range of process settings, you can just work at the corners to figure out where the worst case is; this speeds up simulation tremendously since you’ve reduced a “very-very-very-many point” problem to a 4- or 5-point problem. However, the process settings that define the corners for digital performance may not necessarily be the same as those defining the corners for various analog performance metrics. Just because both N-channel and P-channel transistors are as fast as possible doesn’t necessarily mean that’s the point of best SNR for a circuit; there may be a completely different process point that acts as a corner for SNR.
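The digital-corner idea above can be sketched as a simple enumeration. Everything in this sketch is illustrative: `simulate_delay` is a made-up stand-in for an actual SPICE run, and the speed factors are invented numbers, not real process data.

```python
from itertools import product

# Hypothetical stand-in for a SPICE run: returns gate delay (ns)
# for a given (NMOS, PMOS) process setting. The factors are invented.
def simulate_delay(nmos, pmos):
    speed_factor = {"fast": 1.0, "slow": 1.4}
    return 0.5 * speed_factor[nmos] * speed_factor[pmos]

# The four digital corners: FF, FS, SF, SS.
corner_delays = {
    n[0].upper() + p[0].upper(): simulate_delay(n, p)
    for n, p in product(["fast", "slow"], repeat=2)
}

worst = max(corner_delays, key=corner_delays.get)  # slowest corner
best = min(corner_delays, key=corner_delays.get)   # fastest corner
```

Four simulations instead of a sweep over the whole process space; that economy is exactly what the corner methodology buys you, as long as speed is the metric that matters.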

What this has meant for analog designers is that they’ve had to run what are called Monte Carlo simulations. Consider this to be a variation on “enough monkeys on typewriters will eventually type up [put your favorite large impressive work of literature here].” And it’s worse than that: it’s more like, “enough monkeys on typewriters will eventually type up everything that could ever be typed up.” Which takes longer than just coming up with your favorite classic read. The monkeys have to come up with everyone’s favorite classic read – and their most hated classic reads – and the non-classic ones too. (With apologies to you youngsters that don’t know what a typewriter is; go find one in a museum. And for those of you that don’t know what a museum is, go find one in Wikipedia.)

In practical terms, this means taking random different process values and simulating the desired parameters with those different combinations. You do enough combinations that you feel confident that you’ve pretty much filled in the solution space so that you can see the range of possible values for those parameters. Think of it as trying to figure out the extent of a shape. If you know the shape is a square, then four points, one on each edge (whether or not they’re actually on the corners), will tell you everything you need to know. If the shape is an amoeba, on the other hand, you have to get a lot of points down before you start to have confidence that you know where the amoeba starts and stops. (Never mind the fact that all the time you’re calculating where the amoeba is, it’s moving…)
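The monkeys-at-typewriters procedure looks something like the following sketch. The performance model here is invented: `snr` pretends to compute SNR (in dB) from two normally distributed process parameters (say, threshold-voltage shifts); a real flow would run a SPICE simulation at each sampled point instead.

```python
import random

random.seed(0)  # reproducible "monkeys"

# Invented performance model standing in for a real simulation.
def snr(dvth_n, dvth_p):
    return 60.0 - 25.0 * dvth_n ** 2 - 15.0 * dvth_n * dvth_p

# Sample random process values and evaluate the metric at each.
samples = [
    snr(random.gauss(0, 0.03), random.gauss(0, 0.03))
    for _ in range(10000)
]

# The observed extent of the "amoeba" for this one metric.
lo, hi = min(samples), max(samples)
```

Ten thousand evaluations for one metric on one circuit; with real SPICE runs in the loop, that is where the hours (and the monkeys' lifespans) go.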

You can well imagine that this could take a long time if all of your analysis runs require such an approach; your monkeys (and/or your amoeba) may well die before the job is done. In fact, this may be skipped much of the time because there’s just too much pressure to get the circuit shipped. By omitting this analysis, you run the risk of a low-yielding product. So, consistent with a Monte Carlo analogy, you’re counting on Lady Luck to see you through.

Solido is putting forward a set of tools to help optimize analog circuits in a manner that doesn’t require repeated Monte Carlo runs. It’s based on a platform, called Variation Designer, that has access to your design, your SPICE models, and the computer(s) you use for simulation. You can then layer a number of specific tools on top of this.

Most fundamental of these tools is one that allows you to discover the “true” corners of your design. By “true,” they mean the ones that apply to your design and your parameters, not the digital corners, which aren’t particularly useful. Accurate assessments of those corners do require a Monte Carlo run, but you run it once, get the corners, and then use the corners for the remaining analysis.
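The run-once-then-reuse idea can be sketched as follows. This is not Solido's algorithm, just a minimal illustration of the concept: `gain` is an invented model over two process parameters, and in practice each evaluation would be a SPICE simulation.

```python
import random

random.seed(1)

# Invented performance model standing in for a real simulation.
def gain(p1, p2):
    return 40.0 + 8.0 * p1 - 5.0 * p2 + 3.0 * p1 * p2

# One Monte Carlo run, keeping the process point with each sample.
points = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(5000)]
scored = [(gain(p1, p2), (p1, p2)) for p1, p2 in points]

# The "true" corners for *this* metric: process points at the extremes.
worst_gain, worst_corner = min(scored)
best_gain, best_corner = max(scored)

# Subsequent analyses re-simulate at just worst_corner and best_corner
# instead of repeating the full Monte Carlo run.
```

Note that the extreme points for gain would generally not be the extreme points for SNR or phase margin; each metric gets its own corners.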

Other types of analysis in their statistical package allow you to run the corners, sweep the design variables, find sensitive devices, analyze mismatch, and verify high-sigma designs. Additional modules will be available in the future to solve new problems as processes get even more complex. The intent of the modules is to analyze the circuit and identify problems automatically, leaving the fix to the engineer. They claim that any automated attempt to fix a circuit would be met with some skepticism, and in an area like analog, that’s entirely believable.
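To see why corner-based sweeps are so much cheaper, consider this sketch of sweeping a design variable at extracted corners only. The corner points and the gain-versus-bias model are both invented for illustration:

```python
# Hypothetical: two process points already extracted as corners for
# the gain metric, plus an invented model of gain vs. bias current.
corners = {"worst": (-1.5, 1.2), "best": (1.4, -1.1)}

def gain_at(corner, bias_ma):
    p1, p2 = corner
    return 40.0 + 8.0 * p1 - 5.0 * p2 + 4.0 * bias_ma

bias_sweep_ma = [0.5, 1.0, 1.5, 2.0]
sweep_results = {
    name: [gain_at(point, b) for b in bias_sweep_ma]
    for name, point in corners.items()
}

# Two corners x four sweep points = 8 simulations, versus thousands
# if a full Monte Carlo run were repeated at every sweep point.
```

The design exploration happens at the corners; the expensive statistical work was paid for once, up front.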

They’re not actually providing a simulator; they’re hijacking the simulator you already use and just setting up the runs and managing the data. Their integration is tightest with Cadence, but they work with others as well. They can accommodate parallel simulation to make use of as many computers as you have available to crunch all of the data.

Ultimately, if everything pans out as they expect, this tool should make it easier for designers to balance their design for good yield and good performance across the widely varying process that will be used to build it.
