
TSVs: Like Vias, Only 1000X Deeper

We recently looked at Applied Materials’ solution to the challenges of lining small vias: using cobalt. But those are through-dielectric vias. What about through-silicon vias (TSVs)? After all, they can be a thousand times deeper than a standard via, so if a standard via is hard to cover, imagine how hard it must be for a TSV.

Of course, we’re talking about a much wider via here, but AMAT says that standard physical vapor deposition (PVD) tools do an inadequate job of coating TSVs when applying the barrier, for many of the same reasons we discussed in the cobalt story.

Their solution to the TSV issue isn’t quite as radical as a new metal; it involves tightening up the angle of dispersion for the metals, providing better coverage. With better coverage, the barrier can also be made thinner, saving cost. A thinner layer is faster to deposit, improving throughput (and reducing cost).
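To get a rough feel for why the angle matters, here’s a back-of-the-envelope sketch. This is a toy line-of-sight model, not Applied’s actual physics, and the 10 µm x 100 µm via and the Gaussian angular spread are illustrative assumptions. The idea: the bottom of a deep via only “sees” incoming metal arriving within the narrow cone subtended by the via opening, so the tighter the angular spread of the flux, the more of it survives the trip.

import math

def bottom_coverage_fraction(width_um, depth_um, sigma_deg):
    # Toy line-of-sight estimate: the center of the via bottom only "sees"
    # flux within the cone subtended by the via opening. Assume the incoming
    # flux has a Gaussian spread of angles (std dev sigma_deg) about the via
    # axis and count the fraction that falls inside that acceptance cone.
    acceptance_deg = math.degrees(math.atan((width_um / 2) / depth_um))
    return math.erf(acceptance_deg / (sigma_deg * math.sqrt(2)))

# Hypothetical 10 um x 100 um TSV (10:1 aspect ratio)
for sigma_deg in (30, 10, 3):
    frac = bottom_coverage_fraction(10, 100, sigma_deg)
    print(f"angular spread {sigma_deg:>2} deg -> ~{frac:.0%} of the flux reaches the bottom")

In this crude model, tightening the spread from 30 degrees to 3 degrees takes bottom coverage from under 10% of the arriving flux to roughly two-thirds, which is the same intuition behind the thinner-barrier claim: if more of the metal lands where you want it, you can stop depositing sooner and still get a continuous film.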

[Figure: image courtesy Applied Materials]

In addition, they’ve built a production-worthy chamber for use with titanium rather than the more typical “proven” tantalum; titanium is apparently cheaper. Either barrier can be integrated with the copper seed.

You can read more about their Ventura PVD in their announcement.
