
Big Data in Semi Manufacturing

Rummaging for Better Results

He looked yet one more time out onto the crime scene. The answer was out there somewhere. But how many times could he come back “with fresh eyes,” hoping to see something he hadn’t seen the last time? Dried leaves turned a particular way… dirt compacted ever so slightly more here than there… distinctions hard to make out with the naked eye, but, given enough data and some way to sort through it all, he knew a pattern would emerge that would lead him to the answers he needed.

A few weeks back, we looked at how Big Data was impacting the world of design management. But that’s not the only corner of the semiconductor world infected by the Big Data bug. At this summer’s Semicon West show, I met up with a couple of companies using data analysis to improve different aspects of semiconductor manufacturing.

The idea behind any of these ventures is that there’s a ton of data out there. We use it to a limited extent, but there’s tons more we could learn if we aggregated it and sliced and diced it the proper way. And, if we automate some of that – once we figure out exactly what recipe we want to follow – then we can integrate it into the actual manufacturing flow, providing control feedback as well as alerts and notifications that something might be amiss. Or we might be able to build small, inexpensive equipment that competes with much larger units.

One of the new things about Big Data is a different way of organizing the data crudely (or, more accurately, not organizing it) so that you can keep up with the firehose feeding yet more data into the system, even as impatient analysts want to see the latest NOW and make inferences and see trends before the Other Guy does. The poster child for this new approach is the Hadoop project.

However, one of the reasons for taking that approach is the broad, unpredictable variety of things that need to be cached away – it’s particularly helpful when the incoming data has little or no inherent structure. That’s not the case with the companies we’re going to discuss. They have highly structured data, so they don’t use a Hadoop-like approach. They’re still faced, however, with the challenge of processing the data quickly enough to provide low-latency feedback on an active production line.

The first company we’ll look at is called Optimal+ (known in an earlier life as Optimal Test). Their focus is on analyzing reams and reams of test data and drawing conclusions that can impact current work in progress. They recently announced that they’re aligning with HP Vertica as their database platform to support the processing performance they require.

Their baseline product is called Global Ops, and you might call it generic in that it’s simply a way of taking data from all aspects of the manufacturing flow, learning something, and potentially changing the flow. We’re talking, in many cases, incremental tweaks, and the trick is always walking that tightrope to improve yield or throughput without reducing quality.

The idea is to analyze the bejeebus out of the data and come up with ideas for new “rules.” Those rules can then be tested across the range of historical data to show what might have happened had those rules been in place back then. If you like the way things turn out, then – and this is key – you can push the new rules out for immediate implementation across the supply chain. That means fabless companies impacting wafer production or fabful companies impacting an offshore test house.
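
To make that backtesting step concrete, here's a minimal sketch of the idea (not Optimal+'s actual product or schema; the rule, the parameter, and the field names are all hypothetical): replay a candidate reject rule over historical dice to see what it would have cost and what it would have caught.

```python
# Hypothetical sketch of testing a candidate rule against historical test data.
# The rule, the parameter, and the field names are invented for illustration;
# this is not Optimal+'s actual schema or rule language.
from dataclasses import dataclass

@dataclass
class DieRecord:
    lot_id: str
    idd_ua: float        # measured supply current in microamps (hypothetical parameter)
    field_failure: bool  # did this die later come back as a field return?

def candidate_rule(die: DieRecord, limit_ua: float = 450.0) -> bool:
    """Return True if the proposed rule would have rejected this die."""
    return die.idd_ua > limit_ua

def backtest(history: list[DieRecord]) -> dict:
    """Replay the rule over historical dice: what would it have cost in good
    dice, and how many eventual field failures would it have screened out?"""
    rejected = [d for d in history if candidate_rule(d)]
    escapes_prevented = sum(1 for d in rejected if d.field_failure)
    good_dice_sacrificed = sum(1 for d in rejected if not d.field_failure)
    return {
        "rejected": len(rejected),
        "escapes_prevented": escapes_prevented,
        "good_dice_sacrificed": good_dice_sacrificed,
    }
```

If the escapes prevented justify the good dice sacrificed, the rule gets pushed out to the live flow; if not, it goes back for more tuning.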

More specific products can be laid over this Global Ops platform. There’s one for managing the test floor, one for reducing test time, one for detecting outliers, one for preventing escapes, and one optimized for high-volume production – managing the ramp of a new product or the yield of an established product.

Note that all of these solutions are centered on test data. No process monitor data or other internal fab information is used in the analysis. Latency from new test data to completed analysis is in the range of 7-10 minutes.

The second company I spoke with is Nanotronics, and there appear to be a couple of things going on there. These guys focus on inspection at the atomic level – atomic-force microscopy (AFM). They claim to do with relatively small systems a range of things normally covered by families of big inspection platforms from folks like Applied and KLA-Tencor.

These guys instead use computational photography techniques to improve feature resolution, and they use data analysis on a die to identify minuscule features and across an entire wafer to identify macro-level features like a big scratch. Such a defect, crossing multiple dice, would be apparent only by analyzing the results of the entire wafer rather than an individual die.
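
As a rough illustration of why that matters, here's a toy sketch (the grid, the threshold, and the run-length heuristic are invented for illustration, not Nanotronics' algorithm) that flags a wafer map where failing dice line up along a row, column, or diagonal: the crude signature of a scratch that no single-die analysis would catch.

```python
# Toy sketch of wafer-level defect detection. A scratch shows up as a run of
# failing dice roughly along a line across the wafer map, which no single-die
# analysis would ever see. The grid, threshold, and run-length heuristic are
# invented for illustration; this is not Nanotronics' algorithm.
import numpy as np

def longest_run(line: np.ndarray) -> int:
    """Length of the longest contiguous run of True values in a 1-D array."""
    best = run = 0
    for failed in line:
        run = run + 1 if failed else 0
        best = max(best, run)
    return best

def has_linear_defect(fail_map: np.ndarray, min_run: int = 5) -> bool:
    """fail_map: 2-D boolean array, True where a die failed.
    Flags any row, column, or diagonal with at least `min_run` consecutive
    failing dice (a crude signature of a scratch crossing multiple dice)."""
    rows, cols = fail_map.shape
    lines = list(fail_map) + list(fail_map.T)
    lines += [fail_map.diagonal(k) for k in range(-rows + 1, cols)]
    flipped = np.fliplr(fail_map)
    lines += [flipped.diagonal(k) for k in range(-rows + 1, cols)]
    return any(longest_run(np.asarray(line)) >= min_run for line in lines)
```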

While the larger competing platforms tend to use a variety of wavelengths and approaches for different jobs in order to ensure a reliable “signal,” Nanotronics claims to be able to get that same signal without all of that varied hardware. That said, however, we are talking about a table-top setup, so, even though they provide automation tools like a wafer handler, it’s hard to imagine this running high volumes and putting the other guys out of business.

The aspect of the platform that flies the Big Data flag most prominently is their learning capability – both unsupervised and supervised – on which their feature recognition relies. The other Big-Data-like characteristic is that, from a single scan, they can run multiple different analyses on the resulting dataset (the examples they give are die yield analysis and defect detection).
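
The "scan once, analyze many ways" idea is simple enough to sketch (the function names and analyses below are placeholders, not Nanotronics' actual API): one acquisition feeds as many independent analyses as you care to run, with no re-scan required.

```python
# Minimal sketch of the "scan once, analyze many ways" pattern: one captured
# image feeds several independent analyses without re-scanning the wafer.
# Function names and analyses are placeholders, not Nanotronics' actual API.
import numpy as np

def acquire_scan(wafer_id: str) -> np.ndarray:
    """Stand-in for a single high-resolution scan of a wafer or die."""
    rng = np.random.default_rng(0)
    return rng.random((1024, 1024))  # placeholder image data

def yield_analysis(scan: np.ndarray) -> float:
    return float((scan > 0.05).mean())   # toy "good area" fraction

def defect_detection(scan: np.ndarray) -> int:
    return int((scan < 0.001).sum())     # toy count of dark pixels flagged as defects

scan = acquire_scan("W042")                    # one acquisition...
results = {
    "yield_estimate": yield_analysis(scan),    # ...multiple analyses on
    "defect_count": defect_detection(scan),    # the same cached dataset
}
print(results)
```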

While much of their focus is on semiconductors, they’re also playing in the bioscience arena, analyzing viruses, bacteria, and cells. Same basic approach and scale; different features.

One of the big distinctions between these two data-oriented platforms is the openness of the analysis. Optimal+ is specifically about giving data to users and letting them customize the analysis and the learning. Nanotronics, by contrast, keeps its algorithms under the hood, using them as a vehicle for improving the user’s inspection experience. In one case Big Data rises to the fore; in the other, it sinks into the background.

In either case, it changes the rules of the game. Once novel, Big Data is rapidly becoming commonplace as we figure out how to rummage through it all in a productive manner.

 

More info:

Nanotronics

Optimal+

 
