
3D IC Testing and Yields

Imec/Delft Tool Manages Tradeoffs

It used to be pretty straightforward to figure out the cost of a finished IC. You had a linear progression of steps, each of which cost something to perform, and each of which might cause some fallout. In the end, your die cost was simply the sum of the costs of all of those steps, amortized over however many dice survived the whole process.
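
As a sketch, that linear model is just a running cost sum divided by a running survivor count. All the step names, costs, and yields below are invented purely for illustration:

```python
# (step name, cost per unit processed, yield of that step) -- all invented
steps = [
    ("wafer fab (per-die share)", 12.00, 0.90),
    ("dicing",                     0.10, 0.99),
    ("assembly",                   0.50, 0.98),
    ("final test",                 0.25, 0.99),
]

units, total_cost = 1000.0, 0.0   # start with 1000 die sites
for name, cost, step_yield in steps:
    total_cost += units * cost    # you pay to process every unit...
    units *= step_yield           # ...but only a fraction survives

print(f"{units:.0f} good units at ${total_cost / units:.2f} each")
```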

We’ll call creating a wafer a single step, even though, obviously, it’s enormously complex – and getting more so by the hour. But some number of the chips on the wafer (hopefully a lot) will be good. You then dice up the wafer into dice [typically referred to as “die” or “dies” in this industry, for some reason]. That will damage some of the erstwhile-good dice. You then take the remaining good dice and assemble them into packages, which entails yet further fallout. In the end, some number of chips see the light of day as good, finished units.

Part of optimizing this whole thing is deciding when and how much to test. If your wafer yield isn’t so great, then you don’t want to waste a lot of time and money packaging something that will just get thrown away when it’s all over. So you do wafer-level testing. There are some things, like speed, that are very hard to test through a wafer probe, so you probably need to wait until the part is packaged for those. As for other tests, well, test time is money, so you have to balance how much you test at the wafer level against how much you’d throw away if you didn’t test that much.
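
The classic break-even, as a sketch with invented numbers: wafer-level testing pays off when it costs less than the downstream money you’d otherwise sink into dice that were doomed anyway. (For simplicity, this assumes the wafer test catches every bad die, so both paths end with the same good units.)

```python
wafer_yield = 0.85   # fraction of dice that are actually good (invented)
wafer_test  = 0.20   # wafer-level test cost, per die (invented)
downstream  = 0.75   # packaging + final-test cost, per die (invented)

# Cost per die entering the flow, for each strategy:
with_test    = wafer_test + wafer_yield * downstream  # bad dice stop early
without_test = downstream                             # everything packaged

print("test at wafer level" if with_test < without_test else "skip it")
```

With these made-up numbers, the wafer test doesn’t pay; drop wafer_yield to 0.60 and it does. That flip is the whole balancing act.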

Then, of course, you perform a “final test” that ensures that only good material ends up on store shelves. Ideally, you want close to 100% yield here, but you might accept greater fallout if, say, it allows you to skip some more expensive wafer-level testing.

These equations – how much to test and when to do those tests in order to optimize cost – have been solved and resolved many times by every company that makes ICs. There’s not one right answer for everyone, and the balance can change as a product and its processes mature. But the nature of the beast is relatively straightforward.

Things get more complex, however, when you start involving more than one component in a package. The process is no longer a straight line: there are multiple straight lines, one for each contributing component, that ultimately merge at various (or the same) points as they come together into a single final unit. And the equations are no longer quite so simple.

Let’s say you’re going to combine two dice into a single package. Such “multi-chip modules” are nothing new. The “obvious” way to do that would be to make wafers for each die, test the wafers to identify which dice are good, and then dice up the wafers. These “known-good dice,” as they’re called, can then be made to cohabitate, and your final yield should be reasonable.

But let’s look at a different example, this one coming from the MEMS world. MEMS often involves multiple units in a package because the MEMS elements are typically fabricated separately from the accompanying ASICs that condition the signals coming from the MEMS element. (This could be avoided by using CMOS-friendly MEMS and integrating on a single die, but, for various reasons, including cost, there is, so far, limited use of this approach.) The MEMS unit and the ASIC are then co-packaged and are sold as a single sensor. In fact, in some cases, you might even have multiple MEMS elements – one for each of three axes, for example – but let’s just stick with the simple one-MEMS-plus-one-ASIC scenario (or, better yet, a single-die three-axis implementation).

If your yields are high enough, then you can dispense with the whole dicing-up-the-wafers thing. You simply make sure that your ASIC and MEMS dice are sized exactly the same, and you take an ASIC wafer and glue the whole thing to the MEMS wafer (face-to-face or stacked using TSVs). (OK, the details are more complex than that, but they’re beside the point.) After the wafer sandwich is complete, then you dice the combined units up and test them as a whole.

The catch here is obvious. You’re not worrying about which dice are good – this is not a known-good-die approach. You’re simply slapping the wafers together. Ideally, you want to be able to sell every good MEMS element and every good ASIC chip. But by assembling them this way, you risk mating a good MEMS element with a faulty ASIC. Or vice versa. If that happens, the combined unit will, of course, be bad, and so you’ll be throwing away (or ruining) perfectly good dice.

The only way this works is if your yields are high enough that the loss at the end of the process costs less than it would cost to pretest the wafers, saw them up, put them onto pick-and-place trays, and assemble one at a time. Those are a lot of steps to skip, so it’s tempting to go that route. Which is, of course, what InvenSense does with its gyroscopes (with patents protecting some of this).

The yield at the end of this process is a compound yield: if you have 90% yield each on the MEMS and ASIC wafers, then you’ll have 0.9 × 0.9 = 0.81 – an 81% compound yield. Even with only two relatively high-yielding wafers, the compound loss mounts surprisingly quickly. Which is why you want really good yields for this to work well and to avoid occasional excursions into “OMG we lost the recipe!” territory.

All this is reasonably well-trodden ground, and I’m assuming that this little review so far has been just that – review (if not outright obvious). But let’s take things a bit further. The whole 3D and 2.5D IC game ratchets this scene up to an entirely new level. Last month I saw some analysis by Imec regarding costs and yields of multi-die assemblies. I won’t get into the intricacies of the numbers here, but even simply extending the arithmetic above to add a couple more dice takes yield from 81% to around 66%. So the tyranny of compounding can kill you really fast.
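
In code, the compounding is a one-liner, which makes it easy to see how fast it bites (using the 90%-per-die figure from above):

```python
die_yield = 0.90

for n_dice in (2, 4):
    print(f"{n_dice} dice at 90% each -> "
          f"{die_yield ** n_dice:.1%} compound yield")
# 2 dice at 90% each -> 81.0% compound yield
# 4 dice at 90% each -> 65.6% compound yield
```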

And the whole question of throwing good money after bad gets really complicated. So much so that Imec has introduced two new categories of test: mid-bond and post-bond (but pre-final). How these fit into the flow is pretty straightforward.

Let’s say that you have n dice that you’d like to stack. You test all of the dice ahead of time to some degree of fault coverage. Too little fault coverage and you’ll get expensive failures later; too much, and you’ll consume too much expensive test time. So fault coverage is a knob we can turn here, independently for each die. Frankly, as we mentioned a couple of weeks ago in our coverage of Cadence’s flow, memory guys would just as soon declare their memories completely tested, ship them out as commodities, and wash their hands of the whole thing, which best suits their business model. Other dice might have different degrees of pre-test.

Let’s assume we’re stacking these dice. So we stack die 2 over die 1 – and now we have an opportunity for a “mid-bond” test to make sure that things are looking OK so far. Exactly what do you test here? Well, that’s another knob that you can turn, but it would probably be limited to connectivity checks and maybe some basic “are you still breathing?” tests.

Now you add die 3 to the stack – and do another (optional) mid-bond test. And so on until all n dice have been stacked. Now you do a “post-bond” test of the entire stack. Finally, you can complete the assembly (encapsulate etc.) and do one last “final test” operation whose content and coverage is yet another knob.
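
To make the knobs concrete, here’s a toy expected-cost model of that flow in Python. This is emphatically not 3D-COSTAR’s model: every cost, yield, and coverage number is invented, the mid-bond and post-bond tests are idealized as perfect, die fab cost is omitted, and cost_per_good_stack is just a hypothetical helper name.

```python
def cost_per_good_stack(die_yields, pre_cov, mid_test,
                        pre_cost=0.20, mid_cost=0.30, bond_cost=1.00,
                        post_cost=0.80, finish_cost=1.50):
    # Dice passing a pre-bond test at partial coverage include escapes:
    # of the (1 - y) bad dice, a (1 - c) fraction slips through.
    eff = [y / (y + (1 - y) * (1 - c)) for y, c in zip(die_yields, pre_cov)]

    cost = pre_cost * len(die_yields)   # pre-bond test on every die
    alive = 1.0                         # fraction of stacks still in flow
    p_good = eff[0]                     # fraction of stacks actually good
    for y in eff[1:]:
        cost += alive * bond_cost       # bond the next die
        p_good *= y                     # stack is good only if all dice are
        if mid_test:
            cost += alive * mid_cost    # mid-bond test (assumed perfect
            alive = p_good              # coverage) scraps dead stacks early
    cost += alive * post_cost           # post-bond test of the whole stack
    cost += p_good * finish_cost        # only good stacks get encapsulated
    return cost / p_good                # amortize over sellable units

for mid in (False, True):
    c = cost_per_good_stack([0.9] * 4, [0.95] * 4, mid_test=mid)
    print(f"mid-bond tests {'on' if mid else 'off'}: ${c:.2f} per good unit")
```

Amusingly, with these made-up numbers the mid-bond insertions cost more than they save – exactly the kind of non-obvious outcome that makes the tradeoff worth modeling rather than guessing.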

The same sort of flow applies if you’re doing a 2.5D build on an interposer. Only now the interposer yield matters. (And the cost of the interposer itself is highly impacted by yields – to the extent that more expensive processing techniques might actually result in a cheaper interposer, as I describe here.) And it’s easier to assemble something like this on a wafer substrate – that is, silicon interposers before they’ve been diced – but bad interposer yields might force a known-good-interposer alternative.

So you can see that there are lots of knobs – how many test insertions to do, on which dice, and how much testing to do at each insertion. And optimizing the combined, compound cost/yield problem is not easy or obvious.
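
To get a feel for why, here’s a back-of-the-envelope count of scenarios, assuming a purely illustrative discretization of the knobs (four candidate pre-bond coverage levels per die, an on/off choice per mid-bond test, and two final-test depths):

```python
n_dice       = 4
pre_levels   = 4    # candidate pre-bond coverage settings per die
mid_choices  = 2    # mid-bond test on or off, per bond step
final_levels = 2    # candidate final-test depths

scenarios = pre_levels ** n_dice * mid_choices ** (n_dice - 1) * final_levels
print(scenarios)    # 4**4 * 2**3 * 2 = 4096 scenarios for a 4-die stack
```

And that’s before you let the coverage values vary continuously or add post-bond test depth as yet another axis.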

Which is why Imec worked with Delft University to create a tool called 3D-COSTAR so that you can play with the various scenarios to figure out what will work best for your specific situation. And bear in mind that, as processes move up the learning curve, what might have made sense in the early days of your product might no longer pencil out. If yields improve enough, then testing can be cut back or even eliminated at various stages. It certainly provides plenty of cost-reduction opportunity.

So as the once-simple die-goes-into-package process becomes dramatically more complicated, back-of-the-envelope calculations are becoming untenable. It’s going to require tools like 3D-COSTAR to make sense of the bewildering array of options and combinations and permutations. Yes… another tool entering an already complicated flow. As Imec might say, “Graag gedaan*.”

 

*More or less, “You’re welcome” in Dutc… er… Flemish.
