feature article

Expanding EDA

Newer Tools Let You Do More than Just Electronics

Welcome to autumn. It’s usually a busy season – although the activity typically starts more with the onset of September and the resumption of school than with the equinox. But it also comes on the heels of a quiet season, even in the overworked US.

And EDA has seemed moderately quiet. So I started looking around to see what I might have been missing, and I’m not sure there’s a lot. But it did get me musing on why things might be quiet for the moment as well as what fills the gap – which gets to the topic of what qualifies as EDA. It’s more than you might think.

At the risk of obvious over-simplification, the legions of coders in EDA-land are doing one of two things: building new technologies or improving on old ones. The new-technology category might include support for FinFETs or multi-patterning, or the design kits for the latest silicon node. The improvement side of the tree is where performance and capacity and usability get juiced up – all in the name of productivity, of course.

Looked at another way, one side lets you do new things; the other side lets you do the old things better. (We won’t get into the fixing of bugs… Bugs? What bugs?)

Which of these is in play at any moment depends on where we are in the node cycle. We’ve recently seen a huge push to spin up the new technologies required to take us all the way down to 14 nm, but it’s very early days for that node. Each new technology becomes a bit of a gamble for EDA companies: design tools are needed to prove out the technology, but customers won’t be using them in volume until years later.

The good news there is that the tools can start out rough-and-ready; while the fab guys are finalizing design rules and reference processes and design kits, the new algorithms and user interfaces can be smoothed out. By the time designers are using them in force, they’ve had lots of time to mature. (Although there’s nothing like having lots of designers pounding on your tool to find the weak spots…)

It feels a little like we’re at that point now where some of the heavy lifting for the newest nodes has been done, and now we wait for uptake. And it’s going to be a while before folks flock to the 14-nm node…

So what happens in the meantime? That brings us to the other side of things, where existing stuff gets better. This can involve re-engineering some pesky code to blow through a bottleneck, partitioning algorithms for multi-threaded and distributed computing, or complete data-model reinventions and start-from-scratch rebirths. Or new interfaces, or integrations between tools that make a designer’s life easier or more intuitive – and, critically, let him or her get things done more quickly.
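To make the partitioning idea concrete, here’s a minimal, hypothetical Python sketch – the cell names and the per-cell check are invented for illustration, not taken from any actual tool. It farms an independent per-cell analysis out to a pool of worker processes:

```python
from multiprocessing import Pool

def check_cell(cell):
    # Stand-in for an expensive, independent per-cell analysis
    # (think of a rule check); the logic here is a placeholder.
    return cell, sum(ord(ch) for ch in cell) % 7 == 0

def run_checks(cells, workers=4):
    # Because each cell is independent, the work partitions cleanly
    # across processes. Real tools also have to balance load and
    # reconcile results that span partition boundaries.
    with Pool(processes=workers) as pool:
        return dict(pool.map(check_cell, cells))

if __name__ == "__main__":
    cells = [f"cell_{i}" for i in range(1000)]
    flagged = run_checks(cells)
    print(sum(flagged.values()), "cells flagged")
```

The hard part in a real tool is exactly the partitioning: choosing chunks that are balanced and whose boundary interactions are small enough not to eat the speedup.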

Because, even if we have no new transistor types or if cheap EUV shows up tomorrow and obviates multi-patterning (relax – no, it’s not going to happen), designs are still getting bigger and chip houses are still competing.

That means they need to do more, faster, which pushes performance; and they need to manage larger designs, which pushes the memory footprint. The cloud? Seems to have been a big “meh.” Or worse. For the most part, companies don’t appear to be forging ahead in the skies, largely out of reluctance to let the family jewels – their design data – out of the building.

So… if there’s not a lot of new-capability news going around, then… do EDA news outlets go dark for a while? Of course not. But a look around shows some of what’s happening: IP and embedded systems and even – gasp! – software are filling the void.

Now, IP and EDA have always gone together (at least for silicon IP). But embedded system design, well, that’s generally been considered a completely different beast. Especially the software part. And, in a way, software is like FPGAs: folks don’t expect to pay a lot for tools. Which is not exactly the culture of EDA. But that has everything to do with The Stakes. And The Stakes have been raised.

The thing about software in particular is this: you write it, you test it, and it works after some fixes. Easy peasy. OK, maybe it’s not quite that simple, what with the immense complexity of much software. And coders hate testing stuff. So… you write it, it seems to work, you check the obvious bits, and then you hand it to some integrator or testing group to handle the serious testing. Which they – hopefully – do.

But here’s the critical point: if they miss something – well, then you put a patch up on the web and, just like that, the problem is gone.

Entirely unlike an integrated circuit.

Embedded system hardware is somewhere between software and ICs. Board design has the potential to be rather sophisticated, depending on how much stuff you’re trying to cram where. But the costs of masks and rework for a board are nothing like those for an IC.

And that’s the thing that keeps people writing checks for EDA tools: the mask set is simply too damn expensive to replace (not to mention any work-in-progress, or WIP, that might have to be tossed). So you have to get it right the first time, and it takes expensive tools to do that.

But, memories and FPGAs aside, the biggest chips we’re making these days are systems-on-chip (SoCs). Or, perhaps better stated, embedded-systems-on-chip. So we have the complete merging of IC design and embedded system design. Including software. Yes, the software part may be updatable, but, for users, it’s much easier to update an application on a computer than to re-ROM low-level firmware in a device that users don’t even know has a computer in it. So getting the software right is now more urgent. Which is a big reason why emulation is in vogue.

So the role of EDA is growing to include embedded technology. Is there anything else joining the party? Aside from board-layout software?

Well, there are two newish technologies increasingly being built on silicon wafers. One is something we’ve spent a lot of energy on: MEMS, served by tools like Coventor’s MEMS+ (whose 5.0 edition was recently released) and SoftMEMS. The thing is, “EDA” stands for “electronic design automation.” And, just to push the associative property here, I’m pretty sure that means “automation of electronic design,” not “design automation by electronic means.” And MEMS isn’t electronic; it’s mechanical. Which is usually handled by CAD tools (“computer-aided design” – and, in this case, that does mean “design done on computers”).

The other non-electronic technology peeking its nose under the tent is optical, which we’ll discuss at greater length soon. But, here again, this is a technology that’s poised to invade the silicon landscape.

Yet another is fluidics, along with other health-related technologies combining electronics with disciplines like fluid dynamics and chemistry (or biochemistry). The fluids gotta get where they gotta get, and something in that tiny space has to be able to identify the one or more substances being assayed within those fluids. Or within the atmosphere.

Finally, packaging design is merging more closely with chip design. This isn’t so much a matter of the packaging creeping onto the chip in some Escheresque involution; rather, it reflects the need to include the effects of package and chip together in the overall design. The package and chip design teams are still separate, but simulation now needs to cross the boundary to get from the physical pin all the way in to the driving transistor. 2.5D and 3D chip stacks blur these lines even more.
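As a toy illustration of crossing that boundary (my own sketch, not any vendor’s flow), the pin-to-transistor path can be approximated as a lumped RC ladder – package parasitics followed by on-die interconnect – whose Elmore delay falls out of a few lines of Python. All of the element values below are invented for the example:

```python
def elmore_delay(stages):
    # stages: (R_ohms, C_farads) pairs ordered from the package pin
    # toward the transistor. Elmore delay sums, for each resistor,
    # R times the total capacitance downstream of it.
    return sum(
        r * sum(c for _, c in stages[i:])
        for i, (r, _) in enumerate(stages)
    )

# Hypothetical lumped model: package trace and bump, then two
# on-die wire segments ending in the receiver's gate load.
path = [
    (0.5, 1.0e-12),    # package trace
    (0.2, 0.5e-12),    # bump / bond connection
    (50.0, 0.1e-12),   # on-die segment 1
    (80.0, 0.05e-12),  # on-die segment 2 + gate cap
]
print(f"pin-to-transistor Elmore delay: {elmore_delay(path) * 1e12:.2f} ps")
```

The point isn’t the numbers; it’s that a single calculation now spans two models that used to live in separate tools – the package’s and the die’s.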

So wafers, which were once the safe refuge of electronics mavens, are now hosting lots of outsiders – from embedded to mechanical to optical to plumbing to chemistry. It’s bringing together tools from different domains that used to reside in their own silos. Now they need to work together, with coordination at the boundaries where the various “physicses” are transduced into electronic form.

So, even if no new transistors are born (not gonna happen) and no radical technologies raise their heads (also not gonna happen – see DSA for an example), there is still lots of work to do to transform the multi-faceted elements that constitute a complete system into a working silicon unit.

There will be no lack of new stuff to talk about.

3 thoughts on “Expanding EDA”

  1. Given that the EDA guys are a rather non-“Agile” bunch, I’m not sure they’re going to get much done.

    Personally, I think you can re-apply a lot of the EDA tools aimed at hardware design to parallel software design. Parallel stuff is hard to debug, so formal methods are good, and timing analysis is important for managing communication.

    Are there really “legions of coders in EDA-land”?

