
45nm From 30,000 ft

Building a house used to be so easy. You found some flat land. You chopped down some trees. You sawed them up and put them together. Voilà! Honey, I’m home! And if a big storm knocked it down, you built another one, perhaps a bit stronger.

OK, maybe that’s not easy; it actually sounds like a lot of work, but it was conceptually simple. Today just try building a house. You’ve got permit after permit. Are the electricals up to snuff? Did the wallboard get nailed up properly? Where does the runoff from the gutters go? Wanna add a few windows to that windowless wall? OK, does that screw up the shear strength of the wall in case there’s an earthquake? How does the extra glass affect the energy rating of the house? Nobody ever cared about this stuff before; now it’s routine.

This didn’t happen overnight. It was a gradual process of trial and error, and as mistakes were made and needs changed, guidelines were added, and guidelines became rules, and more guidelines were added and they eventually became rules. And all of the rules are presumably there for some good reason (that may or may not be readily apparent). And this made building a house more expensive, so that the economics meant fewer house designs, and more houses from each design – as evidenced by row upon row of identical houses in modern tracts.

This should sound familiar to any IC designer who has watched process nodes migrate from the micron level through sub-micron, sub-half-micron, and now into the nanometer realm. And the changes have been coming thick and fast since the 100nm barrier was crossed. Every couple of nodes brings a major change of some sort, a big issue that grabs everyone’s attention. At 180nm, timing closure was the big issue; at 130nm, it was all about signal integrity. At 90nm, power ruled. In each case, items that used to be 2nd- or 3rd-order considerations gradually moved up until they finally achieved 1st-order status.

Today 90nm is in full volume production; 65nm is the mainstream design node, and 45nm is now the leading-edge node on which a few companies are seeing silicon. At 65nm, manufacturing became a real design consideration; at 45nm, it’s all about manufacturing. Design For Manufacturing, or more typically, DFM (or Design For Yield – DFY), is all over the product literature and websites of EDA vendors.

Each time one of these issues joins the 1st-order-issue club, there’s a paradigm shift as you have to consider something you’ve ignored before, sort of like the receding hairline you can no longer pretend is just a wider-than-average part. But the fact that, say, power isn’t the top headline grabber at 45nm doesn’t mean that power isn’t an issue any more; it still is. In fact, more than before. The only difference is, a few nodes back, methodologies had to incorporate power optimization as a completely new thing. Now we’re used to it, and we accept more modest refinements at each process node. Similarly, timing closure is still critical; signal integrity is still critical. They’re just not news anymore.

As we move towards the 45nm node, therefore, we have a combination of new issues and evolving issues, and the intent of this article is to lay the groundwork for more in-depth articles to come, all of which aim to give designers a preview of what’s changing as they get to 45nm. Designers at 65nm will confront similar issues, but not on the scale required for 45nm.

Getting a Complex

There are several themes that characterize the world in these rarefied levels, where we’re close to counting atoms and electrons. One theme is that of complexity. At 45nm, you may have a couple of hundred million gates on a chip: that’s staggering. Unless you’re an FPGA or memory manufacturer, those gates aren’t going to be built out of a few well-behaved, well-understood blocks that are stepped-and-repeated across the die. These are systems on a chip – with apologies for resorting to that badly-abused phrase – and they combine processors of various sorts, specialized logic blocks, and analog.

To make things more complex, it is less and less likely that all portions of the chip will be in use at any one time. A chip intended for a cell phone, for example, may have logic for the various bands and protocols, as well as for the different states the phone may be in (standby, browser, Wi-Fi, etc.). Each of these becomes one of multiple modes of operation – this is the “multi-mode” (MM) validation problem. The chip must meet requirements in all modes; the more modes there are, the more must be verified.

Adding to the complexity is the fact that the number of validation points has grown. Back when we were building chips by hacking trees, there were basically two corners to be validated: setup time was checked at the slow process, slow supply (SS) corner, and hold time was checked at the fast process, fast supply (FF) corner. Somewhere along the way we started taking interconnect into account. Then multiple supply voltages were used to control power.

So at 90nm the number of corners moved up to about 10; at 65nm, to about 20 or so. At 45nm, it could be as high as 50. That’s 50 separate simulations. This is the “multi-corner” (MC) problem. And all corners have to be checked for all modes, which is why you see EDA companies providing “multi-corner multi-mode” (MCMM) validation solutions. Multiply, say, 10 modes by 50 corners and you’ve got a lot of computing on your hands. And each of those combinations needs to be checked for timing, signal integrity, and power.
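
To get a feel for the arithmetic, here’s a toy Python sketch that enumerates how quickly the mode × corner × analysis product grows. The mode and corner names are purely illustrative, not taken from any real flow or process design kit.

```python
from itertools import product

# Illustrative mode and corner names only -- not from any real design or PDK.
modes = ["standby", "gsm", "wcdma", "wifi", "browser", "camera",
         "gps", "usb", "audio", "test"]               # 10 operating modes
corners = [f"corner_{i:02d}" for i in range(50)]      # 50 process/voltage/temp corners
analyses = ["timing", "signal_integrity", "power"]    # checks needed at each point

# Every mode must be validated at every corner, for every analysis type.
runs = list(product(modes, corners, analyses))
print(f"{len(modes)} modes x {len(corners)} corners x {len(analyses)} analyses "
      f"= {len(runs)} analysis runs")                 # 10 x 50 x 3 = 1,500 runs
```

Even modest per-run times multiply out quickly, which is a big part of why MCMM keeps showing up in vendor literature.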

You Should Be a Model

Another theme is aimed at reducing over-conservatism. In the past, design rules were pretty straightforward. For instance, metal-to-metal spacing could be no less than x microns. The rules were established for worst-case behavior, with enough guardband that abiding by the rules made it a safe bet that the chip would work over the operating range. The problem is that, with all the new process corners, using similarly simplistic guardbands ends up with a far too pessimistic view of how the chip will work. This means less performance and an uncompetitive chip.

The first conceptual move was to step back and replace the rule of “x microns” with “it depends.” And then to replace the numeric limit with a model, which is basically an equation that takes into account the situation and provides a more realistic estimate of performance. So if there is a configuration that requires more spacing, for example, rather than blindly enforcing that spacing for all configurations, that spacing is specified only when it matters, with more aggressive spacing being allowed where possible. Situational ethics lives.
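
As a loose illustration of the shift from fixed limits to models, here’s a minimal Python sketch. The numbers, thresholds, and function are hypothetical, standing in for the situation-dependent equations a real rule deck would supply.

```python
# Hypothetical numbers for illustration only -- real design rules come from the foundry deck.

FIXED_SPACING_NM = 140  # old-style rule: one worst-case minimum for every situation

def required_spacing_nm(parallel_run_length_nm: float, wire_width_nm: float) -> float:
    """Context-dependent 'rule as model': spacing depends on the neighborhood.

    Long parallel runs and wide wires interact more strongly, so they need more
    room; short runs of narrow wires can be packed more aggressively.
    """
    spacing = 100.0                          # aggressive baseline
    if parallel_run_length_nm > 1000:        # long side-by-side runs
        spacing += 30.0
    if wire_width_nm > 90:                   # wide (e.g., upper-metal) wires
        spacing += 20.0
    return spacing

# A short run of narrow wires gets the aggressive spacing...
print(required_spacing_nm(400, 65))    # 100.0
# ...while a long run of wide wires is pushed back out toward the old worst case.
print(required_spacing_nm(2500, 120))  # 150.0
```

The point is only the structure: one blanket worst-case number gives way to a function of the neighborhood, so the larger spacing is enforced only where the configuration actually demands it.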

The downside is that the design rules in total become more complex. And layered on top of this is another theme: variability – not just across the wafer, but across the die. For example, metal thicknesses can vary by non-trivial percentages, and lithography has become particularly tricky. Such manufacturing issues have always been important. It’s just that there used to be a dedicated group of experts buried away somewhere in the fab where the sun never shines, running detailed simulations of various critical processing steps. Optical proximity correction (OPC) and resolution enhancement technique (RET) verification are examples of steps traditionally used to tweak a design to increase manufacturing yield. At 45nm, however, the kinds of “innocuous” changes that manufacturing might make could impact the performance of the chip, so those considerations have been pushed upstream to the designer.

These manufacturing simulations, as traditionally performed, are exhaustive and take a long time to run. If the same simulations were simply introduced into the design flow, work would grind to a halt. Instead, these manufacturing needs are worked into the set of design rules that must be met.

Another way to reduce pessimism is to recognize that there are many shades of gray. Statistical modeling, another common theme, helps you walk the fine line between an unmanufacturable chip and one that’s overly conservative. By taking into account the statistics in the gray zone, you decide how much risk of failure is acceptable.
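
For a sense of how a chosen confidence level turns into a concrete number, here’s a small Python sketch using the standard library’s NormalDist. The path delay, sigma, and clock period are made-up values, and a real statistical timing engine would of course work with far richer distributions.

```python
from statistics import NormalDist

# Hypothetical delay distribution for one path, in picoseconds.
# In a real flow these moments would come from statistical timing analysis.
nominal_delay_ps = 950.0
sigma_ps = 25.0
clock_period_ps = 1000.0

for confidence in (0.9987, 0.99997):   # roughly 3-sigma and 4-sigma targets
    # Delay that the path stays under with the requested confidence.
    worst_delay_ps = NormalDist(nominal_delay_ps, sigma_ps).inv_cdf(confidence)
    slack_ps = clock_period_ps - worst_delay_ps
    print(f"confidence {confidence:.5f}: delay {worst_delay_ps:.1f} ps, "
          f"slack {slack_ps:+.1f} ps")
```

Tighten the acceptable risk of failure and the margin you must budget grows; the statistics just make that trade-off explicit instead of hiding it behind a blanket guardband.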

Tools Retooled

The snowballing of all these requirements has a significant impact on the tools and how they interact. So another theme is the reinventing of tool architectures. Take place and route (PAR), for example. The design rule check (DRC) assesses how good a job your PAR tool did in satisfying all the rules. So at 65nm, once you’ve finished layout, you might have a dozen or so DRC violations that you fix manually. But in addition to rules that must be enforced for sign-off, there are also recommendations. At 45nm, many of the 65nm recommendations have become requirements, so with traditional tool algorithms you could literally have on the order of 10,000 violations – far more than can be addressed manually.

A 10,000-violation result isn’t acceptable – the PAR tool needs to do a better job of getting the number of violations down to a manageable level. The incumbent approach uses simplified models for most of the routing steps and applies complete DRCs only during final detailed routing. As a result, by the time the details are considered, it’s too late for the tool to take them into account, and the designer has to handle them manually. That’s fine for a few violations, but it breaks down at 45nm.

There are a couple of new approaches to the PAR flow. One is to use full DRCs through the entire process, meaning that far less remains unresolved or oversimplified in the final routing phase. Another is to give the tool a couple of phases: one that does the primary routing using simpler, more pessimistic models, followed by specific optimizers that account for various process effects and automatically pack things tighter where allowed. If this all works as promised, it can take the number of design iterations from around 30 or so down to just a couple – ideally, the first pass will be correct.
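
Structurally, that second approach looks something like the following Python sketch; every function and number here is a hypothetical placeholder, not any vendor’s actual tool architecture.

```python
# Hypothetical placeholders sketching the "coarse route, then targeted
# optimizer passes" flow -- the violation counts are invented for illustration.

def primary_route(design):
    """First phase: route with simple, pessimistic spacing/width models."""
    design["violations"] = 10_000          # stand-in DRC count after coarse routing
    return design

def litho_optimizer(design):
    """Second phase, pass 1: resolve lithography-driven violations."""
    design["violations"] -= 6_500
    return design

def cmp_optimizer(design):
    """Second phase, pass 2: adjust density/fill for metal-thickness variation."""
    design["violations"] -= 3_490
    return design

def par_flow(design):
    design = primary_route(design)
    for optimizer_pass in (litho_optimizer, cmp_optimizer):
        design = optimizer_pass(design)
    return design

result = par_flow({"name": "toy_block"})
print(f"remaining violations: {result['violations']}")   # down to a manageable handful
```

The interesting part is the ordering: the detailed process effects are handled by dedicated passes inside the tool rather than landing on the designer’s desk after sign-off DRC.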

Similar architectural changes are being forced at the verification stage in order to handle the MCMM situation. It’s tough to take a timing analyzer built to handle only two corners and stretch it to handle n corners. And rather than handling timing, signal integrity, and power separately, they have to be integrated, since changes made for the sake of one will affect the others. This has forced a lot of development work for any tools that aren’t scalable enough to handle the increased burden, and many of the vendors are trumpeting new architectures intended to keep your productivity from being killed.

Run time is also being tackled by leaving the flat design world and moving into hierarchical design, as well as by parallelizing computations so that multiple CPUs can be used.

Keeping the Customer out of the Kitchen

So you now have a huge pile of new issues in your lap, with the promise that more will arrive with the next process nodes. But how does that affect your flow at the end of the day? Will the tools just magically take care of all of this for you? Well, no, but on the other hand it sounds like the tool vendors are trying to keep the flow as consistent as possible with past flows. For example, the fact that some rules are now models may be more or less transparent; it’s just another input to the verification engine. You don’t necessarily need to know the dynamics behind, say, metal pooling to complete your design; the design rules should shield you from the details. Or when using statistics for timing analysis, you aren’t faced with a distribution chart, moving a line left or right trying to figure out where the cutoff should be. Rather, the confidence level you want can be entered essentially as another rule, and the tools handle the statistics internally.

But the tools can’t take care of everything. A good router can help with the power tied up in layout and parasitics, but the other power-reduction techniques are up to you. As always, placement of large blocks, pinout issues, and other such items are knobs controlled by humans, not by the tools, and they will have a big impact on the tools’ success. And packaging matters. And knowing more about what’s going on can make you a better designer.

If this all sounds complex, don’t worry: once you get into the details, it’s even worse. Future articles will dive more deeply into the major themes, attempting to clarify which items affect your life and which are managed for you by the tools. But it’s not too early to start thinking about this; with an estimated 20% of design starts targeting the 45nm node this year, plenty of you will be diving in head first soon enough.
