
Starting From Scratch

A Brief Look at the Hidden World of TCAD

“You’re ready for tape-out already??”

“Yup!”

“Um… And it meets all specs? In all corners and modes?”

“It should.”

“What do you mean, it should?”

“Well, we designed it to where it should. Of course, once we get silicon out, we can try it out and see if it all works. If not, well, we’ll make some tweaks and try again.”

I dare you to have that conversation with your boss. That might have worked 30 years ago, but with millions of dollars on the line for each mask set, no one in his or her right mind would think of using real silicon as the primary way of deciding whether the design is right. These days that’s just crazy talk.

Instead, we simulate. And we analyze. And we have models for this and for that, and we push and we pull, and we hope hope hope that everything works the first time out. Of course, reality shows that first-silicon success, while not impossible, is something to be celebrated, not taken for granted. But when a design starts taking three, four, or more turns to get right, the folks at the top of the engineering chain start polishing their resumes (likely omitting such lines as, “Blew $25 million in masks as we tried in vain to get the silicon to work because of our broken verification methodologies”).

But how are we able to simulate? We have models. And depending on the level of abstraction, the models may be simplified and derived from lower-level, more accurate models, and you can follow this all the way down to what is, to design engineers, the granddaddy of all models, the ur-model itself, the SPICE model. But where do the SPICE models come from?

Well, when an n-channel and a p-channel really love each other… j/k, lol. Srsly… Depending on when and where you went to school, you may have developed some SPICE models yourself. Maybe even made some devices in the school fab and then characterized them. In industry, process dudes made lots of test wafers with lots of variations on things like dimensions, implant doses, exposure, and such, measured them, figured out which ones worked the best and were the most manufacturable, developed the SPICE models, and delivered them to design.

In other words, they designed it to where it should work, but then they played around with lots of iterations and variations to zero in on the final solution. And a final SPICE model wouldn’t be available for design to use until this process was complete. Design might be able to get a jump on things with an approximate model, but they couldn’t be sure until confirmed physically.

And that gets right to the heart of the matter: SPICE models are empirical. There’s no cause-effect going on. If you tweak something in the process, there’s no way to anticipate with accuracy how the SPICE model will change; you have to measure the new device and plug in new parameters.
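To make “empirical” concrete, here’s a minimal sketch, in Python, of the fit-parameters-to-measurements idea. The square-law model, the two parameters, and the fake measured points below are all invented for illustration; a real SPICE extraction fits hundreds of parameters against racks of measurements, but the principle is the same: the numbers come from the silicon, not from the physics.

```python
# Toy SPICE-style parameter extraction: fit an assumed square-law model to
# "measured" I-V points. All numbers here are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def ids_sat(vgs, k, vth):
    """Toy saturation-region drain current: Ids = (k/2)*(Vgs - Vth)^2."""
    return 0.5 * k * np.clip(vgs - vth, 0.0, None) ** 2

# Pretend these points came off a parametric tester on a test wafer.
rng = np.random.default_rng(0)
vgs_meas = np.linspace(0.8, 1.8, 6)
ids_meas = ids_sat(vgs_meas, 4e-4, 0.55) * (1 + 0.03 * rng.standard_normal(6))

(k_fit, vth_fit), _ = curve_fit(ids_sat, vgs_meas, ids_meas, p0=[1e-4, 0.5])
print(f"extracted k = {k_fit:.3e} A/V^2, Vth = {vth_fit:.3f} V")
# Tweak the process and these numbers change -- but only a fresh round of
# measurements tells you by how much; the fitted model can't predict it.
```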

So imagine trying to do this today with the intensely complex and intricate gyrations needed to coax ever more performance out of silicon using light wavelengths that, by conventional thinking, shouldn’t be able to print reliable images at the dimensions we’re using. And start adding in new materials and new effects that work at the atomic or quantum level, and you’re dead in the water. Who’s going to build the multi-million-dollar prototype machine for some new process step just to figure out whether that process step will work? It would take years for each iteration and it would waste billions of dollars; it’s absurd to consider. So what’s the alternative?

The alternative is to do what designers do: simulate. The only catch is, where do you get the models? The answer comes from the font of all behavior, real or simulated: physics. And this brings us into the realm of technology CAD, or TCAD, a quiet corner of the EDA world where things look very different and where everything starts from first principles. If you understand the physics behind what you’re trying to do (or, at least, if you think you understand it), then you can create a model that follows physical laws, and when you change what’s being modeled, the model can follow suit because the physical laws haven’t changed.
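As a toy illustration of what starting from physics buys you (my sketch, not anyone’s product), consider the textbook depletion approximation for an abrupt p-n junction: the built-in potential and depletion width fall straight out of physical constants and the doping levels. Change the doping and the answers follow automatically, with no new measurements required.

```python
# First-principles toy: built-in potential and zero-bias depletion width of
# an abrupt silicon p-n junction, straight from textbook device physics.
import math

q   = 1.602e-19          # electron charge, C
k_B = 1.381e-23          # Boltzmann constant, J/K
eps = 11.7 * 8.854e-12   # permittivity of silicon, F/m
n_i = 1.0e16             # intrinsic carrier density of Si at 300 K, m^-3
T   = 300.0              # temperature, K

def junction(N_a, N_d):
    """Return built-in potential (V) and depletion width (m)."""
    V_bi = (k_B * T / q) * math.log(N_a * N_d / n_i**2)
    W = math.sqrt(2 * eps * V_bi / q * (1 / N_a + 1 / N_d))
    return V_bi, W

for N_a in (1e22, 1e23, 1e24):          # acceptor doping, m^-3
    V_bi, W = junction(N_a, N_d=1e23)   # donor doping held at 1e23 m^-3
    print(f"Na = {N_a:.0e} m^-3: Vbi = {V_bi:.2f} V, W = {W*1e9:.0f} nm")
```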

This is a surprisingly small industry. In fact, only one of the Big Guys participates – Synopsys – and they hold the lion’s share of the business, thanks to some acquisitions made over the last several years. Much of the research is done in universities, most prominent of which is Stanford, which pioneered the SUPREM series of tools for simulating processes and devices. A further search identifies Silvaco as a participant, although several attempts at contacting them for this story yielded no response.

The whole idea behind TCAD is modeling what happens with atoms and molecules as they are placed and prodded throughout the manufacturing process. This includes experimenting with things that haven’t been done before, as long as the physical dynamics are understood. It has evolved from one-dimensional modeling to three dimensions, which is important for understanding where substances end up as they spread out in all directions like a middle-aged duffer.
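Here’s roughly the simplest possible flavor of that kind of process simulation: a one-dimensional sketch, with assumed and uncalibrated numbers, of Fick’s second law marched forward in time, watching a shallow dopant layer spread into the silicon during a hot step. Real TCAD tools do this in three dimensions with far richer physics, but the spirit is the same.

```python
# Toy 1-D dopant diffusion: dC/dt = D * d2C/dx2, explicit finite differences.
# Diffusivity, dose, and times are assumed, illustrative values only.
import numpy as np

D  = 1e-17            # assumed diffusivity at anneal temperature, m^2/s
dx = 1e-9             # 1 nm grid spacing
dt = 0.4 * dx**2 / D  # time step chosen to keep the explicit scheme stable
nx, n_steps = 200, 2500

C = np.zeros(nx)
C[:10] = 1e26         # dopant initially confined to the top 10 nm, atoms/m^3

for _ in range(n_steps):
    lap = np.zeros_like(C)
    lap[1:-1] = (C[2:] - 2 * C[1:-1] + C[:-2]) / dx**2
    C = C + D * dt * lap  # end cells are simply held fixed for this sketch

depth_nm = np.argmax(C < 1e24) * dx * 1e9
print(f"after {n_steps * dt:.0f} s, concentration falls below 1e24/m^3 "
      f"at about {depth_nm:.0f} nm")
```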

Most aspects of the process can be modeled. For lithography, the masks have to be modeled in three dimensions, and the impact of the light is determined by a full solution of Maxwell’s equations. Implant doses, annealing, and activation are modeled, as are diffusion, epitaxial growth, oxidation, chemical deposition, and etching. A thermal view is important so that the impact of various higher-temperature steps on structures can be taken into account.
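As one concrete example of the physical models in that list, the classic Deal-Grove relation for thermal oxidation, x^2/B + x/(B/A) = t + tau, predicts oxide thickness versus time from two rate constants. The sketch below uses textbook-style values roughly appropriate for dry oxidation near 1000 °C; they’re illustrative, not calibrated to any real furnace.

```python
# Deal-Grove thermal oxidation: solve x^2 + A*x = B*(t + tau) for thickness x.
# Rate constants below are textbook-style values for dry O2 near 1000 C,
# used here purely for illustration.
import math

def oxide_thickness_um(t_hr, A_um=0.165, B_um2_hr=0.0117, tau_hr=0.37):
    """Oxide thickness in microns after t_hr hours of oxidation."""
    t = t_hr + tau_hr
    return (-A_um + math.sqrt(A_um**2 + 4 * B_um2_hr * t)) / 2  # positive root

for hours in (0.5, 1, 2, 4):
    print(f"{hours:>4} h -> {oxide_thickness_um(hours)*1000:.0f} nm of SiO2")
```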

The node-to-node process changes that have to be simulated these days are more elaborate than might have been necessary for an earlier generation of process jockeys. As we rush headlong into the end of process technology as we know it – again – new creative ways are constantly being dreamed up to put off yet further the ultimate day of reckoning, when we’ve pushed things as far as they can go.

One example of this is the development of new types of devices. Different ways of trying to circumvent the shortcomings of really small FETs involve new structures, sometimes referred to as 3D structures. Multi-gate devices, FinFETs, and their ilk are the subject of much study, but since the equipment doesn’t necessarily exist for making these things, the experiments are done in the realm of TCAD.

Stress is another area of current focus. Transistor behavior has been improved by suitably placed atoms in the silicon matrix that add mechanical stress. But, depending on where the stress is, it can help or it can hurt. And if there’s too much of it, it can create unwanted defects.

Yet another issue being addressed is variability. Because the physical dimensions involved are, in many cases, only a few atomic layers, or because doses of materials may be quantified by the number of atoms, small variations can make a big difference (at least until someone figures out how to implant fractional atoms). Design-for-variability at the circuit design level is an area of ongoing training and tools development, but at the physical level, TCAD helps to understand the sensitivity of devices to process variations and how some of those process steps might be better managed.
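To see why counting atoms matters, here’s a little Monte Carlo sketch (toy cubic geometry, assumed doping, nobody’s real device): the number of dopant atoms that land in a small channel volume follows Poisson statistics, so the relative spread grows sharply as the device shrinks.

```python
# Toy random-dopant-fluctuation estimate: dopant count in a cubic volume is
# Poisson-distributed, so relative spread scales as 1/sqrt(mean count).
# Doping level and geometry are assumed for illustration.
import numpy as np

rng = np.random.default_rng(1)
N_a = 1e24    # assumed channel doping, atoms/m^3 (1e18 cm^-3)

for side_nm in (100, 30, 10):
    volume = (side_nm * 1e-9) ** 3              # cubic channel region, m^3
    mean_count = N_a * volume                   # expected dopant atoms in it
    counts = rng.poisson(mean_count, 100_000)   # many virtual devices
    spread = counts.std() / counts.mean()       # ~ 1/sqrt(mean_count)
    print(f"{side_nm:>3} nm cube: ~{mean_count:.0f} dopants on average, "
          f"relative spread {spread*100:.0f}%")
```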

One area that provides an ongoing challenge is the work required to investigate new memory technologies. Many of these involve new materials and new storage mechanisms, and this is the kind of thing that challenges TCAD the most. While well-known processes can be modeled at the physical level, new ones require new models. And existing models have to be tweaked experimentally to parameterize them for new materials. This doesn’t mean that the physics has changed due to the new approaches; it simply means that the physics has to be better understood in areas that haven’t been studied as deeply.

As an example, Synopsys recently announced a partnership with Ovonics, a company developing Phase-Change Memory (PCM), which uses a technology similar to that used for CDs, but integrated into an IC. The characteristics of the material, the dynamics of the phase change, and the details of how to manufacture the structure have to be studied, since they resemble nothing that has already been done in an IC.

One new area getting some help from TCAD is solar: photovoltaic cell designers are constantly trying to figure out how to improve the efficiency of their cells. This is the kind of physical process that can be modeled to help improve the materials and the way they’re put together to get more juice from the sun.

To be clear, TCAD isn’t a panacea; it’s not like you can build an engine that can handle all the basic physical equations and then let it do everything. Specific models are built for specific processes. In one example described by Synopsys’ Ricardo Borges, senior marketing manager for TCAD, through-silicon via (TSV) technology involves a series of depositions and etches to get a good via that’s way deeper than it is wide. Synopsys’ technology can help to explore the metal fill process and the stresses it may cause, the metal bonding, and the thermal impact during both fabrication and operation. But the details of the actual etch (the so-called Bosch process, commonly used in MEMS fabrication) are not something they feel they can help with.

In the end, a TCAD provider has to respond to the needs of its customers, like any business, to figure out which problems most need solving. It’s a little odd, however, to see a market this small. Not only is the number of commercial providers small, but the number of customers is, realistically, shrinking. The only companies using this kind of tool will be foundries and “integrated device manufacturers” (IDMs, the fancy new acronym for the kind of company that used to be considered normal – companies that have their own fabs, indicating a suitable testosterone quotient).

This seems to be a magnification of the challenge that EDA faces in general: who pays the gazillions of bucks it costs to develop this stuff? There’s an awful lot of work that goes into this advanced modeling, and you could just see a bean-counting MBA come on board and say, “We only have a few customers, and none of them are Fortune 500; we need to dump this and get into banking so we can qualify for TARP money.” But this is one of those industries in technology that is so fundamental that it’s hard to imagine what would happen if someone decided that it couldn’t be properly monetized. It would be like suddenly discovering we’d run out of the earth’s silicon supply; it would ripple through the system like a bad bowl of bea – um… like a bad bundle of sub-prime loans.

To be clear, no one here is suggesting that anyone should pull out of this business. That hasn’t come up in any discussion. It’s just sobering to realize that the whole IC shebang going forward is predicated on some expensive, intensive, obscure, critical work done by a very few for a very few.

Image: Wikipedia

