Rohit Sharma is the founder and CEO of Paripath, an EDA company with a clear mission:
“Enable customers to squeeze every picosecond of performance and every milliwatt of power by efficiently providing sign-off accurate models.”
At the recent Electronic Design Process Symposium (EDPS), held at SEMI’s HQ in Milpitas, Sharma discussed the role(s) that AI might play in EDA. He started by noting that AI/ML research now consumes more than 1% of the world’s R&D budget. (Other EDPS speakers noted that the number of AI research papers has been growing exponentially, supplanting Moore’s Law for semiconductors with some other AI-research-related law.)
Sharma said that the most likely use for AI in EDA was to add new features. In other words, he expects that the addition of AI to EDA will not be disruptive, but it definitely has a place. The likeliest fit for AI is in replacing algorithms that have not been successful, or not successful enough.
The example Sharma gave was cell classification: characterizing a certain transistor layout as a full adder, for example. Sharma said this is a common EDA problem and an NP-complete one. Although “NP” stands for “nondeterministic polynomial” and NP-complete problems are the hardest NP problems to solve, in my own mind I read “NP-complete” as “not possible to complete.” At least not in any commercially practical amount of time.
It’s sort of like the dilemma that the newly reconstituted Spock faces in “Star Trek IV: The Voyage Home” (aka “Star Trek saves the whales.”) Here’s a dialog fragment from the movie to remind you:
Kirk: Mr. Spock, have you accounted for the variable mass of whales and water in your time re-entry program?
Spock: Mr. Scott cannot give me exact figures, Admiral, so… I will make a guess.
Kirk: A guess? You, Spock? That’s extraordinary.
NP-complete problems are like that. They have “high dimensionality” (Sharma’s words), so they’re hard to encode into a deterministic algorithm. AI inference used for pattern matching faces no such dilemma. AI inferencing engines will happily serve up their best “guess.”
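To make that concrete, here’s a minimal sketch of what “best guess” pattern matching looks like for cell classification. Everything in it (the feature choices, the numbers, the cell names) is invented for illustration; none of it comes from Sharma’s talk or Paripath’s products. The point is the shape of the answer: a guess plus a confidence, not a proof.

```python
# A deliberately tiny illustration of pattern-matching cell classification.
# All features, values, and cell names below are hypothetical.
from collections import Counter
import math

# Hypothetical labeled examples: (transistors, nets, input pins) -> cell type
KNOWN_CELLS = [
    ((28, 14, 3), "full_adder"),
    ((26, 13, 3), "full_adder"),
    ((12, 7, 3), "mux2"),
    ((10, 6, 2), "nand2"),
    ((4, 3, 1), "inverter"),
]

def classify(features, k=3):
    """Return (best_guess, confidence) via k-nearest-neighbor voting."""
    nearest = sorted(KNOWN_CELLS, key=lambda kc: math.dist(kc[0], features))[:k]
    votes = Counter(label for _, label in nearest)
    best, count = votes.most_common(1)[0]
    return best, count / k

# No proof, just a best "guess" with a confidence attached:
print(classify((27, 14, 3)))  # ('full_adder', 0.666...)
```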
Sharma also listed the challenges associated with adding AI to EDA (generalizable to any AI use):
- A clear value proposition. (Just because you can use AI doesn’t mean that doing so is a good idea.)
- The AI use model for any specific application
- Data engineering. Be sure to look at the data set(s) before trying to apply ML.
- High dimensionality. (The Spock dilemma.)
- ML technology selection
- Integration of AI into legacy systems
- Acceptance of probabilistic results (will AI’s best “guess” suffice? See the sketch after this list.)
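That last challenge deserves a sketch. One common pattern (my assumption, not something Sharma prescribed) is to accept the probabilistic answer only above a confidence threshold, and fall back to an exact, slower method below it:

```python
# Hypothetical acceptance policy; the 0.95 threshold is an assumption,
# not a figure from the talk.
CONFIDENCE_THRESHOLD = 0.95

def accept_or_fallback(label, confidence, deterministic_check):
    """Accept the AI's best 'guess' or fall back to a deterministic method."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return label
    return deterministic_check()

# Usage with the classifier sketched earlier (exhaustive_match is hypothetical):
#   label, conf = classify(features)
#   cell_type = accept_or_fallback(label, conf, lambda: exhaustive_match(features))
```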
In his concluding remarks, Sharma said that despite these challenges, he expects AI/ML will very likely alter the way EDA software is written.
Let’s see now: EDA was born when “VERILOG CAN BE SIMULATED!!!!” became the driving force behind the notion that Verilog should (MUST) be used for design entry. Designers were reluctant, but as usual, hype and buzzwords prevailed.
No wonder “characterizing a certain transistor layout as a full adder” is a problem. They do not yet realize that a transistor layout is a Boolean thing; therefore, to them it is a total mystery.
Countless full adders were designed, built, and used before Design Automation existed, in technologies ranging from pulse gates to NANDs and NORs. In fact, the origin of Design Automation (DA) was wiring PCBs for the IBM System/360 in the early 1960s.
Automated Logic Diagrams (ALDs) were used to show the fan-in and fan-out of each logic gate.
Starting at any gate, it was possible to find the input logic conditions, the gate’s logic function, and the gates in the network that use its output.
There were no simulators, so waveforms had to be hand-drawn if they were needed.
So now EDA still has trouble characterizing an adder? Carry-save and carry-lookahead adders were invented, eyeball-verified, pencil-and-paper simulated, built, and used over 50 years ago.
The key was Boolean algebra, which EDA has nothing to do with, thank you very much.
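Here is that Boolean algebra, by the way. The full adder is two equations, and a minimal Python sketch can check them against all eight input cases the same way it was once done by hand:

```python
# Exhaustively verify the classic full-adder equations:
#   sum = a XOR b XOR cin,  carry = a*b + cin*(a XOR b)
# against ordinary addition over all eight input combinations.
from itertools import product

for a, b, cin in product((0, 1), repeat=3):
    s = a ^ b ^ cin
    carry = (a & b) | (cin & (a ^ b))
    assert 2 * carry + s == a + b + cin, (a, b, cin)
print("full adder verified for all 8 cases")
```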
I was one of the first users of ALDs. I designed, debugged, troubleshot, and retrofitted everything from the smallest machines to the biggest. What? Without Verilog, VHDL, simulation, or synthesis?