
Getting Around Limits By Getting High

If you were able to record the development of a town as it grew into a city over years and decades and then speed up the film in a super-fast-mo replay, you’d notice, assuming you weren’t thrown into an epileptic seizure by the rapid day/night flashing, that things start in a small center and move out for a while. Farmlands are replaced by tract homes, forests are cut down, hills may be leveled or developed, and the town inexorably creeps outward like mold in a Petri dish.

At some point, a limit starts to impede the amoebic outward spread. The constraining factor may be geographical; perhaps the extent of a valley has finally been covered, or open space or a greenbelt was declared, halting further encroachment. It may be sociological; commute times from the outskirts to where the jobs are may have become intolerable. If nothing else, the community may decide they’ve become dull as dirt and want to inject a little urban spirit into their wan suburban style. The reasons may vary, but little by little, the outward push will give way to an upward push.

This doesn’t come without a cost; clearly it costs more to dig into the ground in order to put in a parking garage with two underground floors and three above-ground floors than it does simply to pave over a chunk o’ mud and call it parking. Making a tall building earthquake-proof and stabilizing it against winds is harder than throwing together some ugly tilt-up walls and slapping a Wal-get sign on it. But building up eventually becomes cheaper than the alternatives.

ICs have been toying with the vertical for a long time, of course. Once CMP eliminated certain death-by-step-coverage, it seems there has been no end to the number of metal layers we can use. But it does come at a cost: long routing nets and complicated via interconnects can slow down or degrade signals. In addition, some neighborhoods don’t integrate well: certain kinds of circuits require very different processing from others, and trying to combine them on a single die is tough. And the increasing difficulty of developing new process nodes means that simple, traditional reliance on the next generation for added integration starts to feel slow, risky, and expensive.

If you can’t keep integrating on one monolithic die, then you have to use multiple dice. Clearly, just buying multiple packaged chips and wiring them together on a circuit board provides approximately zero integration. The next step has been system-in-package (SIP), where multiple dice are packaged on some substrate and wire-bonded to each other or through the substrate. But the real move up is to attach the dice directly to each other by stacking them.

Now we’re really talking vertical. This isn’t just adding more wires above the false ceiling of a one-story big box and calling it urban renewal. This is adding more floors, where any one of the floors could itself be the ground floor. The question is, how do you do this? After all, if these things can’t talk to each other, it doesn’t do much good. You need to have an elevator shaft or two or perhaps some escalators in order for stuff to get from one floor to the next.

One of the approaches being most actively developed for this is called through-silicon-via (TSV) technology. This involves etching a hole all the way through the bulk silicon so that a die can be connected to the die underneath it. You then effectively have “bonding” (or bumping) pads on the bottom of the wafer. But once you embark on this approach, a number of possibilities open up.

For example, if only two dice are going to be bonded, perhaps it’s easier to do a flip-chip of the top die, keeping all the connections on the surface of the die and mounting surface-to-surface. If three dice are being stacked, they could all be stacked face-up. Or the second die could be TSVed and flip-chipped, and the third die could either be flip-chipped as well, mounting on the bottom of the second die, or itself be TSVed and placed face-up, with signals between the second and third dice going through two sets of TSVs before reaching their destinations. The combinations seem endless. In theory, anyway.
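Just to get a feel for how quickly the options multiply, here is a toy Python sketch that enumerates the orientation choices for a three-die stack. It uses a deliberately simplified rule (the bottom die sits face-up on the substrate, and a die needs TSVs whenever a neighbor is bonded to its back rather than to its active face), so treat it as an illustration of the combinatorics, not a description of any actual design flow.

```python
from itertools import product

def tsv_requirements(orientations):
    """Given a tuple of orientations ('up' = active face on top,
    'down' = flipped / flip-chip), bottom die first, return the
    indices of dice that need TSVs under a simple rule: a die needs
    TSVs whenever a neighboring die is bonded to its back (bulk
    silicon) side rather than to its active face."""
    n = len(orientations)
    needs = set()
    for i, o in enumerate(orientations):
        if i > 0 and o == 'up':        # bottom surface is the back
            needs.add(i)
        if i < n - 1 and o == 'down':  # top surface is the back
            needs.add(i)
    return sorted(needs)

# Enumerate every orientation of a three-die stack, with the bottom
# die fixed face-up on the package substrate.
for upper in product(('up', 'down'), repeat=2):
    stack = ('up',) + upper
    print(stack, '-> TSVs needed in dice:', tsv_requirements(stack) or 'none')
```

Even this little rule reproduces the cases above, including the all-face-up stack and the flip-chipped-then-TSVed middle die whose signals to the third die cross two sets of TSVs.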

IMEC, the Interuniversity Microelectronics Centre in Belgium, announced in October that they had made some substantial progress in research work on TSV. In their particular work, they didn’t actually mount die to die: when doing a two-level stack, they diced up the wafers that were going to be mounted, but they left the bottom-die wafer intact and mounted the top dice onto that wafer. This simplified handling while allowing bonding to be done only on known good dice. They ultimately confirmed the mechanical integrity of a four-level stack (although they didn’t check to make sure it worked electrically).

One simplification they found was that it wasn’t cost effective to test the individual dice exhaustively to minimize loss after bonding. Full testing was too expensive; it actually turned out to be cheaper to use “pretty good” dice and then suffer some post-bonding loss.
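The economics are easy to model roughly. The sketch below compares the expected cost per good stack for an exhaustive-test flow against a cheaper “pretty good” flow; every number in it is invented purely for illustration, since the real answer depends on test coverage, die cost, stack height, and assembly cost.

```python
# Rough known-good-die trade-off: test thoroughly before bonding
# (more test cost, better incoming yield) versus test lightly and
# scrap more finished stacks. Every number here is invented purely
# for illustration.

def cost_per_good_stack(dice_per_stack, die_cost, test_cost_per_die,
                        yield_after_test, assembly_cost):
    """Expected cost of one good stack, assuming a whole stack is
    scrapped if any of its dice turns out to be bad after bonding."""
    stack_yield = yield_after_test ** dice_per_stack
    cost_per_attempt = (dice_per_stack * (die_cost + test_cost_per_die)
                        + assembly_cost)
    return cost_per_attempt / stack_yield

# Hypothetical four-die stack: $5 dice, $3 to assemble a stack.
exhaustive = cost_per_good_stack(4, die_cost=5.0, test_cost_per_die=2.0,
                                 yield_after_test=0.99, assembly_cost=3.0)
pretty_good = cost_per_good_stack(4, die_cost=5.0, test_cost_per_die=0.3,
                                  yield_after_test=0.95, assembly_cost=3.0)
print(f"exhaustive test:    ${exhaustive:.2f} per good stack")
print(f"'pretty good' test: ${pretty_good:.2f} per good stack")
```

With these made-up numbers, the cheaper test wins even though more finished stacks get scrapped; crank the die cost up far enough and the balance swings back toward exhaustive testing.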

The dice they mounted on top were smaller than the underlying dice. Depending on the wafer-cutting technique, having two identically sized dice could be problematic: a mechanical saw would likely rub the edges of the mounted die when cutting up the underlying wafer. But alternative techniques like laser cutting allow a die of any size to be mounted, as long as it doesn’t actually overlap where you’re cutting.

There are a number of other critical considerations when applying this technology. One deals with a choice of where the TSV is made: either in the wafer fab – so-called “via-first” or “3D stacked IC” – or in the packaging plant – so-called “via-last” or “3D wafer-level packaging.”

The via-last approach is coarser, with a via pitch of 40-60 microns. After the wafer is thinned, the vias are etched from the back until they reach Metal 1, where they make contact. The via-first approach can create finer vias, 10 microns and below, and does so from the top, before the metal layers are deposited; the metal layers are too hard to etch through because of the number of different materials involved. In addition, the via isn’t etched all the way through the wafer; it’s essentially a blind via until the wafer is thinned for packaging, at which point it becomes exposed at the back of the die.
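Those pitch numbers translate directly into how many vertical connections you can afford, which is worth a quick back-of-the-envelope check. The sketch below assumes a regular grid over one square millimeter set aside for TSVs; the area and the 50-micron midpoint of the via-last range are arbitrary choices for illustration.

```python
# Back-of-the-envelope TSV counts for the two flows described above.
# The 1 mm x 1 mm region reserved for TSVs is an arbitrary assumption.
REGION_UM = 1000  # 1 mm expressed in microns

for flow, pitch_um in [("via-last (3D wafer-level packaging)", 50),
                       ("via-first (3D stacked IC)", 10)]:
    per_side = REGION_UM // pitch_um
    print(f"{flow}: {pitch_um} um pitch -> about "
          f"{per_side * per_side} TSVs in the square millimeter")
```

Even this crude count shows roughly a 25x difference in connection density between the two flows, which is why via-first gets the attention when lots of die-to-die signals are the point.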

Part of the decision about where to do the vias has to do with the temperature effects of the TSV process. Continued high-temperature processing can disturb the delicate balances already achieved in the active portions of the die. DRAMs are apparently particularly sensitive, since doing TSV after they’ve been “trimmed” can throw off the adjustments, making some erstwhile good memories bad and some bad memories good. While temperatures have been brought down to around 250-260 °C for a copper-tin intermetallic joint, the goal is to get the process below 200 °C.

Another big concern with stacking dice is an obvious one: how do you dissipate the heat being generated by these dice? There’s less than one micron between dice. It’s not like you can hire some good HVAC guy to come in and build a false floor for the server room or tweak the air flow. This is also an area of experimentation. Various thermal interposer materials could be placed between the layers. IMEC used a bonding material that itself had good thermal dissipation characteristics.
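To see why the bonding material matters so much, here is a minimal one-dimensional thermal sketch: the temperature drop across a thinned die and its bonding layer for a given heat flux. The thicknesses, conductivities, and flux are rough assumed values, not measurements from IMEC’s stacks, and a uniform-flux model understates what happens at hotspots.

```python
# Minimal 1-D thermal sketch: temperature drop across thin layers for
# a given heat flux. All thicknesses, conductivities, and the flux are
# illustrative assumptions, not measured values.

def delta_t(heat_flux_w_per_cm2, layers):
    """Sum the temperature drop across a list of (thickness_um,
    conductivity_W_per_mK) layers for a uniform heat flux."""
    q = heat_flux_w_per_cm2 * 1e4  # convert to W/m^2
    return sum(q * (t_um * 1e-6) / k for t_um, k in layers)

# 50 W/cm^2 flowing down through one thinned die plus its bond layer.
poor_bond = [
    (25.0, 150.0),  # thinned silicon die, ~150 W/mK
    (1.0, 0.5),     # polymer-like bonding layer, ~0.5 W/mK (assumed)
]
good_bond = [
    (25.0, 150.0),
    (1.0, 60.0),    # metallic/intermetallic bond, ~60 W/mK (assumed)
]
print(f"poor bond layer: {delta_t(50, poor_bond):.2f} C rise per level")
print(f"good bond layer: {delta_t(50, good_bond):.2f} C rise per level")
```

Even with made-up numbers, the micron or so of bonding material dominates the per-level temperature rise when its conductivity is poor, which is exactly why a thermally decent bonding layer is worth having.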

It should also be possible to tile multiple dice laterally on a single large die, although IMEC hasn’t actually tried that. If small passives or analog chips are being added, this would provide some significant flexibility for getting a lot into a small space and, for example, placing terminations even closer to signals.

While there is cautious, curious activity going on here, it is still, for the most part, all research. Eric Beyne, IMEC’s Scientific Director for 3D Technologies, hasn’t seen anyone actually starting to create architectures with this kind of packaging in mind. He sees 2012 as a more likely timeframe for commercial use, so we’ve got a bit of a wait if that’s the case.

And this is where the note of caution creeps in. Four years can be a lifetime, especially with something that’s “different” – everyone resists doing different until they have to. If, in those four years, something less different becomes available, then yet another cool idea could become unnecessary. It would be as if someone said, “OK, never mind, you can go ahead and build on the greenbelt”: a simpler workaround is found, and it lasts long enough that, by the time the limit is hit again, some other new solution turns out to be better than going vertical. Like bagging skyscrapers altogether and escaping to settle some other planet.


Link: IMEC 3D technology
