
Statistical Variation

Your burger bun has seeds on it, and they look as though they are randomly placed. (I am sure that the bun factory has a standard procedure that says how many seeds should go on each bun, and that this is accurate when averaged out across many buns.) If you put a large coin on the surface of your bun, how many seeds does it cover? Try somewhere else, and then somewhere else again. Each time you will get a slightly different number. Now try a small coin. The numbers will very likely vary even more.
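This is the classic behaviour of counting random events: the smaller the average count, the larger the relative variation. Here is a minimal Monte Carlo sketch of the coin experiment, assuming seeds scattered uniformly at random; the bun size, seed count, and coin radii are all made-up illustrative numbers.

```python
import numpy as np

rng = np.random.default_rng(42)
seeds = rng.uniform(0, 100, size=(5000, 2))   # 5000 seeds on a 100x100 mm bun

def count_under_coin(radius, trials=2000):
    """Count the seeds covered by a coin of given radius at random positions."""
    centres = rng.uniform(radius, 100 - radius, size=(trials, 2))
    counts = [(np.hypot(*(seeds - c).T) < radius).sum() for c in centres]
    return np.array(counts)

for name, r in [("large coin", 12.0), ("small coin", 4.0)]:
    c = count_under_coin(r)
    # The relative spread grows as the mean count shrinks,
    # roughly as 1/sqrt(mean) for a Poisson-like process.
    print(f"{name}: mean={c.mean():.1f}, std={c.std():.1f}, "
          f"relative spread={c.std()/c.mean():.2f}")
```

Run it and the small coin's counts show a relative spread several times larger than the large coin's, even though both sample exactly the same bun.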

Now, instead of two dimensions, think in three. And instead of a burger bun, think of the silicon in a 22nm technology, with dopant atoms as your sesame seeds. At this level we are looking at a very small number of dopant atoms, and while the manufacturing process will set an average doping level across the entire device, two otherwise identical transistors will have different numbers of dopant atoms. In fact, one transistor in 22nm technology may have ten or even fewer dopant atoms, while another may have twenty or even more. This is going to produce correspondingly different characteristics.
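If the number of dopant atoms in a channel behaves like a Poisson count, the spread described above falls straight out of the arithmetic. A back-of-envelope sketch, where the mean of 15 atoms per channel is an illustrative assumption, not a figure for any real process:

```python
# Assuming dopant counts follow a Poisson distribution; the mean of 15
# atoms per channel is an illustrative assumption, not process data.
from math import exp, factorial

MEAN_DOPANTS = 15

def poisson_pmf(k, lam):
    """Probability of exactly k dopant atoms when the mean is lam."""
    return exp(-lam) * lam**k / factorial(k)

p_low  = sum(poisson_pmf(k, MEAN_DOPANTS) for k in range(11))       # 10 or fewer
p_high = 1 - sum(poisson_pmf(k, MEAN_DOPANTS) for k in range(20))   # 20 or more

print(f"P(10 or fewer dopants) = {p_low:.1%}")   # about 12%
print(f"P(20 or more dopants)  = {p_high:.1%}")  # about 12%
```

Under that assumption, roughly one transistor in four has ten or fewer, or twenty or more, dopant atoms: not a rare tail effect but an everyday one.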

Then think about a couple of issues with geometry. Not the systematic issues that are dealt with by the manufacturing tricks developed in design-for-manufacturability tools, but two simple effects with potentially big consequences. Look at the photoresist used in lithography: its molecules are very large compared with silicon atoms, and they make it difficult to create straight lines. And look at polycrystalline silicon: its grains are again large structures compared with single atoms, so it too produces an irregular edge.
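The consequence of those rough edges is that the gate length itself varies from device to device. A toy model, assuming each edge is the nominal line plus an independent random offset per molecule-sized segment; all dimensions here are illustrative assumptions, not process data:

```python
import numpy as np

rng = np.random.default_rng(1)
NOMINAL_LENGTH = 22.0   # nm, nominal gate length (illustrative)
SEGMENTS = 10           # rough-edge segments along the width (assumed)
EDGE_SIGMA = 1.5        # nm, assumed roughness of each edge segment

def effective_gate_length():
    """Gate length averaged along the width, with both edges rough."""
    left  = rng.normal(0.0, EDGE_SIGMA, SEGMENTS)
    right = rng.normal(NOMINAL_LENGTH, EDGE_SIGMA, SEGMENTS)
    return (right - left).mean()

lengths = np.array([effective_gate_length() for _ in range(10_000)])
print(f"effective gate length: mean={lengths.mean():.2f} nm, "
      f"std={lengths.std():.2f} nm")
```

Even this crude model gives a device-to-device spread of a few per cent of the nominal length, and the fewer segments there are (the narrower the device), the worse it gets.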

So two transistors that are identical at the design level can be very different when they are finally implemented. And, to take just one example, if these are two of the transistors that make up an SRAM cell, the cell may not hold data, presenting some interesting challenges for the memory designer.

This variability, now usually called statistical variability, is inherent in the nature of silicon processing, no matter how clever the engineers have been at resolving the problems of manufacturability. Understanding statistical variability is at the core of Professor Asen Asenov’s work and is the driving force behind the company that he has recently launched.

Asenov was born in Bulgaria and studied solid-state physics. The Institute of Microelectronics in Sofia was at the heart of the eastern bloc’s microelectronics efforts, and in 1979 he was recruited, as a high flier, from Sofia University to work on process and device modelling at the Institute. Not long after he arrived, a DEC VAX 11/780 appeared. It had reached Sofia through some very interesting channels, since the Cold War was at its height and the VAX was regarded by the US as a strategic technology. This gave him his first access to significant computing power, which he exploited to create one of the first integrated process and device CMOS simulators, IMPEDANCE.

Around this time, he wrote a paper for a western conference and submitted it for clearance through the university and state bureaucracy. After a month he asked to have it back and was told he didn’t have a high-enough security clearance to read his own paper.

With the end of the Cold War, Asenov left Bulgaria, working first in Germany and then joining the University of Glasgow in 1991, where he set up the Device Modelling Group within the Department of Electronics and Electrical Engineering. The group now numbers around thirty people, including PhD students and post-doctoral researchers. To quote from its mission statement:

 “The Device Modelling Group develops state-of-the-art simulation tools which are not available commercially, exploiting finite element 2D and 3D methods, a realistic band particle Monte Carlo approach, and the power of parallel computing. It supports and leads device technology and design programmes in the department, partner universities and industry.”

The group’s work has been funded from a range of sources, including European Community projects, DARPA, and the European Space Agency, as well as through collaboration with semiconductor companies and international research centres; it also has a good relationship with Synopsys.

Through the creation of device models (and of the modelling tools to create them), the group has developed an understanding of the problems facing those working with the next generations of nano-CMOS processes. This is where the statistical variations we discussed earlier make their presence most strongly felt. For some years now, that understanding has been translated into commercial form through consulting services run as part of the university’s business operations. Now the modelling group has created a commercial operation, Gold Standard Simulations Ltd.

GSS offers a range of services and tools. It starts with courses on statistical variability and how to cope with it. It provides simulation services, including 3D simulation of devices for both statistical variability and statistical reliability. It develops statistical compact model extraction from those simulations, to give developers an understanding of device operation, and it carries out statistical circuit simulations. GSS also provides management software tools for massively parallel statistical device and circuit simulation.
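To make the last two items concrete: the idea behind statistical circuit simulation is to draw each device's parameters from a distribution, evaluate many statistically different copies of the same circuit, and study the resulting spread. A minimal sketch of that idea, where the toy delay model and every number are assumptions for illustration, not GSS's actual method:

```python
import numpy as np

rng = np.random.default_rng(7)
VDD, VTH_MEAN, VTH_SIGMA = 0.9, 0.3, 0.04   # volts (assumed values)

def inverter_delay(vth, k=1.0):
    """Toy alpha-power delay model: delay grows as the overdrive shrinks."""
    return k / (VDD - vth) ** 1.3

# 100,000 statistically different copies of the "same" inverter
vths = rng.normal(VTH_MEAN, VTH_SIGMA, 100_000)
delays = inverter_delay(vths)

print(f"median delay : {np.median(delays):.3f} (arbitrary units)")
print(f"99.9th pctile: {np.percentile(delays, 99.9):.3f}")  # slow-tail devices
```

In a real flow the normal distribution would be replaced by compact models extracted from 3D device simulation, and the single inverter by a full circuit, which is where the massively parallel simulation management earns its keep.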

GSS was created to make a clear distinction between the theoretical work of the device modelling group and its commercial exploitation. It also makes it easier to react to commercial realities outside the sometimes inflexible procedures of a large academic environment.

Asenov is convinced that the future of semiconductors will continue to lie with CMOS for a very long time – at least twenty years. In fact, he says that he can currently see no other feasible route. CMOS may not continue to scale as aggressively as it has in the past, but there is still significant room to improve the quality of devices through a better understanding of the technologies – in particular, a better understanding of statistical variation and the development of tools and techniques that cope with it.

To return to our static RAM example, where statistical variability makes it inevitable that some of the cells will not be capable of retaining a bit: if the designers understand what percentage of cells are likely to fail, they can design in sufficient redundancy, in cells and supporting circuitry, to create a device that meets the specification. In more complex designs other approaches may be needed, but the developers need to know what the parameters of failure are likely to be before they can design circuits that work around them.
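Here is a hedged worked example of that redundancy argument; the per-cell failure probability, array size, and column-repair scheme are all illustrative assumptions:

```python
from math import comb

P_CELL_FAIL = 1e-6        # assumed probability a cell cannot hold data
ROWS, COLS  = 1024, 1024  # a 1-Mbit array
SPARES      = 4           # spare columns available for repair

# Probability that at least one cell in a column fails
p_col = 1 - (1 - P_CELL_FAIL) ** ROWS

# The array is repairable if no more than SPARES columns contain a bad cell
yield_est = sum(comb(COLS, k) * p_col**k * (1 - p_col)**(COLS - k)
                for k in range(SPARES + 1))

print(f"P(column has a bad cell) = {p_col:.4f}")
print(f"array yield with {SPARES} spare columns = {yield_est:.3%}")
```

With these made-up numbers, roughly two arrays in three would contain at least one dead cell without repair, yet four spare columns bring the yield back to well over 99% – exactly the kind of calculation that needs a trustworthy estimate of the failure probability as its input.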

One consequence of the issues that CMOS is facing is that the classic trade-off between power and performance, which designers have become accustomed to, may now have a third dimension: process yield. In some cases the decision can be to accept lower yields in order to achieve both high performance and low power. One example that Asenov uses is that of a company that accepts that only 2% of the die they produce meet the stringent requirements of a very high-end application.
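To put that 2% in statistical terms – assuming, purely for illustration, that the key performance metric varies normally from die to die – the accepted parts are roughly the two-sigma tail of the distribution:

```python
from statistics import NormalDist

spec_fraction = 0.02   # fraction of dies meeting the assumed high-end spec
cutoff_sigma = NormalDist().inv_cdf(1 - spec_fraction)
print(f"the top {spec_fraction:.0%} of dies lie more than "
      f"{cutoff_sigma:.2f} sigma above the mean")   # about 2.05 sigma
```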

If Asenov is correct and CMOS is going to continue to be the driver for semiconductors in the foreseeable future, then statistical variability can only grow in importance. Designers who are aware of statistical variability will be able to develop ever more clever techniques to cope with it. If you want to learn more, then a good starting point is a lecture Asenov gave recently that is on the GSS web site (http://www.goldstandardsimulations.com/courses/).

