
100+ AI-Designed SoCs and Counting!

Way back in the mists of time we used to call 2020, the guys and gals at Synopsys launched DSO.ai (Design Space Optimization AI), which they described as being “The industry’s first autonomous artificial intelligence (AI) application for chip design.”

As they said at that time, “DSO.ai searches for optimization targets in very large solution spaces of chip design, utilizing reinforcement learning to enhance power, performance, and area. By massively scaling exploration of design workflow options while automating less consequential decisions, the award-winning DSO.ai drives higher engineering productivity while swiftly delivering results that you could previously only imagine.”
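To make "exploring design workflow options" a little more concrete, here is a toy Python sketch. It has nothing to do with DSO.ai's actual internals (which Synopsys does not disclose); it simply treats each "flow recipe" as a point in a discrete search space, scores it with a made-up PPA cost function, and runs a simple epsilon-greedy explore/exploit loop of the kind that underpins reinforcement-learning-style search:

```python
import random

# Toy PPA cost: pretend each "flow recipe" (a tuple of tool settings) maps to
# a single cost number.  In reality this would be a full synthesis and
# place-and-route run; here it is a made-up function purely for illustration.
def ppa_cost(recipe):
    effort, density, clock_skew = recipe
    return (effort - 0.7) ** 2 + (density - 0.8) ** 2 + (clock_skew - 0.2) ** 2

# Discrete design space: every combination of three hypothetical settings.
recipes = [(e / 10, d / 10, s / 10)
           for e in range(11) for d in range(11) for s in range(3)]

random.seed(42)
best, best_cost = None, float("inf")
estimates = {}  # cost observed per recipe (the "learned" knowledge)

for step in range(200):
    if random.random() < 0.2 or not estimates:   # explore a random recipe
        recipe = random.choice(recipes)
    else:                                        # exploit the best seen so far
        recipe = min(estimates, key=estimates.get)
    cost = ppa_cost(recipe)
    estimates[recipe] = cost
    if cost < best_cost:
        best, best_cost = recipe, cost

print("best recipe:", best, "cost:", round(best_cost, 4))
```

The real tool, of course, explores spaces with trillions of configurations across full implementation flows; the point here is only the shape of the loop: propose, evaluate, remember, bias toward what worked.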

Two years later in 2022, I posted a column here on EE Journal—Using ML to Mine Design Data to Speed and Improve SoC Designs—in which I touched on DSO.ai before proceeding to discuss two related tools in the Synopsys arsenal: DesignDash and SiliconDash.

Now, here we are in 2023 (where does the time go?). Recently, I was chatting with Stelios Diamantidis, who is Senior Director for AI Strategy and Products at Synopsys, at which time I discovered that we are celebrating the first 100+ commercial tape-outs of AI-designed chips, all of which were realized using Synopsys’ DSO.ai technology.

Before we proceed, a word of warning. I’m a hardware logic designer by trade, so when I hear terms like “design space exploration,” I tend to think of exploring options at the front end of the design, selecting between a slower, smaller implementation of a function versus a larger, faster incarnation of that function, for example. It may come as a surprise to other front-end designers to discover that the back-end physical design team also uses the term “design space exploration,” but in this case they are talking about exploring the physical placement and routing aspects of the design.

Stelios told me that DSO.ai, which runs in the cloud on Microsoft Azure, has already been adopted by seven of the top ten semiconductor companies. These are the sorts of companies that are designing CPUs, GPUs, AI accelerators, and SoCs for mobile, IoT, IIoT, and AIoT applications, to name but a few.

Hmmm. AI accelerators designed using AI. What could possibly go wrong? May I make so bold as to remind you of my earlier The Artificial Intelligence Apocalypse trilogy, Part 1, Part 2, and Part 3.

But we digress… I understand it was STMicroelectronics that achieved the first-ever commercial design tape-out using AI in the cloud in the form of Synopsys DSO.ai running on Microsoft Azure. By doing this, the chaps and chapesses at ST realized a 3x productivity uplift for power, performance, and area (PPA). Stelios says this “3x” refers to both the amount of time taken to get to the results they were looking for as well as the amount of actual engineering “people time” invested in the project. (I just realized that this ‘3x’ could be mistakenly taken to mean that things took three times as long. The way to think about this is that “productivity uplift” is the reciprocal of “time taken,” if you see what I mean.)
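In other words, uplift is the old time divided by the new time, so a 3x uplift means the same result in one-third of the time. A two-line sanity check with purely hypothetical numbers:

```python
# A "3x productivity uplift" means the same result in one-third of the time,
# not three times as long: uplift = old_time / new_time.
old_time_weeks = 9          # hypothetical baseline schedule, not an ST figure
uplift = 3.0
new_time_weeks = old_time_weeks / uplift
print(new_time_weeks)       # → 3.0
```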

100+ AI-Designed SoCs and Rising! (Source: Synopsys)

Stelios says the folks at Synopsys originally assumed that the primary audience for DSO.ai would be designers working at the most advanced process nodes, such as 5nm and below. In fact, it turns out that this technology is enjoying broad coverage, including designers working at 40nm and above.

Another assumption was that DSO.ai would be of interest mainly for the largest and hairiest designs involving hundreds of millions to billions of transistors. Once again, they were surprised to discover that customers were also using this technology for smaller designs.

As one more example of unexpected usage, the folks at Synopsys didn’t really expect users to employ DSO.ai on highly structured designs that had already been hand-optimized into the ground. The sort of thing we are talking about here is memory devices, for example.

All of which brings us to the South Korean company SK hynix, which is the world’s second largest memory chipmaker (specializing in DRAM and Flash devices) and the world’s third largest semiconductor company. Working with one of their latest and greatest designs, which was to be implemented on one of the most advanced process technologies, and by leveraging the capabilities of DSO.ai, the team at SK hynix shrank the size of their die by 5%. Suffice it to say that a 5% reduction is extremely significant when you plan on manufacturing tens of millions or hundreds of millions of something, especially if you think your design has already been optimized as much as possible. My understanding is that this particular success took everyone by surprise.
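To see why 5% is such a big deal at those volumes, here's a back-of-envelope calculation. All the numbers below are purely illustrative (SK hynix has not published the actual die size or unit count):

```python
# Back-of-envelope: what a 5% die-area shrink saves at memory-chip volumes.
# Every number here is a made-up illustration, not an SK hynix figure.
die_area_mm2 = 60.0                  # hypothetical original die area
shrink = 0.05
new_area = die_area_mm2 * (1 - shrink)          # 57.0 mm² after the shrink
units = 100_000_000                  # "hundreds of millions of something"
silicon_saved_mm2 = (die_area_mm2 - new_area) * units
print(silicon_saved_mm2 / 1e6)       # → 300.0 (square metres of silicon saved)
```

Three hundred square metres of premium-process silicon you no longer have to buy, test, and package is the sort of number that gets noticed, and that's before counting the extra gross dies per wafer.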

Let’s remind ourselves that an SoC is typically composed of hundreds of intellectual property (IP) functional blocks. Many of these blocks will perform standard functions like communication interfaces (Ethernet, PCIe, USB, etc.) or memory interfaces (DDR, LPDDR, etc.), in which case they will typically be acquired in register transfer level (RTL) form from third-party vendors. The remaining IP blocks providing the “secret squirrel sauce” that will differentiate this SoC from its competitors are designed in-house.

Many people, unless they are members of an SoC’s physical design team themselves, assume that third-party IP blocks are delivered with a high-level pre-defined form-factor (“This block to be implemented as a rectangle with a 2:1 aspect ratio,” sort of thing). In fact, this is not necessarily the case. Stelios told me about one design team who wished to implement their PCIe controller in a much longer and thinner form-factor than usual because this was “all the space they had left.”

You may also be assuming that DSO.ai does little more than move the IP blocks around like pieces on a chess board (which itself is a non-trivial task). Well, DSO.ai does do this, but it also moves elements around within the IP blocks themselves.

One more assumption you may be making is that most IP blocks are square or rectangular. In fact, you can expect to meet a weird and wonderful collection of rectilinear shapes; that is, shapes all of whose sides meet at right angles. The easiest way to visualize this is to imagine trying to build an SoC using IP blocks shaped like the pieces in a game of Tetris.
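For the curious, a rectilinear outline is easy to represent and check in code. The sketch below is my own toy illustration (not anything from Synopsys): it stores an L-shaped "Tetris piece" as an ordered list of corner coordinates and verifies that every edge is axis-aligned:

```python
# A rectilinear outline can be stored as an ordered list of (x, y) corners.
# For a shape to be rectilinear, every edge (including the closing edge back
# to the first corner) must be purely horizontal or purely vertical.
def is_rectilinear(corners):
    n = len(corners)
    for i in range(n):
        x0, y0 = corners[i]
        x1, y1 = corners[(i + 1) % n]       # wrap around to close the polygon
        if x0 != x1 and y0 != y1:           # neither horizontal nor vertical
            return False
    return True

# An L-shaped block, 2 units wide at the base and 3 units tall on the left.
l_block = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 3), (0, 3)]
print(is_rectilinear(l_block))              # → True
```

Real physical-design databases store such outlines in far richer formats, but the underlying idea, a closed chain of axis-aligned edges, is the same.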

The bottom line is that, even if your new SoC design boasts hundreds of tried and tested IP blocks, every new physical layout is going to provide a new and exciting adventure. As compared to your previous designs, the floor plan you’re working with will be unique, your constraints will be different, the target node and the flavor of that node are generally going to be different, and the process development kit (PDK) is going to be different. All of this provides almost unbounded scope for DSO.ai to work with.

Things are already exciting, but it’s important to note that we are still barely dipping our toes in the AI-enabled waters. For example, DSO.ai currently takes over after the system architects have defined the SoC and the front-end designers have created its logical implementation. At some stage in the not-so-distant future, DSO.ai or its successors will start to come into play much earlier in the development process.

Similarly, when we were talking about IP blocks shaped like the pieces in a game of Tetris earlier, these shapes are currently defined by humans. I don’t think it will be long, however, before tools like DSO.ai start to vary the shapes with which they are working, eventually escaping the bounds of rectilinear forms completely.

I’ve said it before, and I’ll say it again, we certainly do live in interesting times (we can but hope that things don’t become too interesting). How about you? Do you have any thoughts you’d care to share on any of this?

2 thoughts on “100+ AI-Designed SoCs and Counting!”

  1. Several years ago we exchanged some emails. In a comment, you mentioned a chap for whom a logic design, e.g. the gates, popped up in various colors. Such people are called synesthetes. But you had forgotten his name, and there was no way for me to get in contact with him.
    If we now try to understand DSO.ai, for this AI tool the optimum design pops up just as it does for a synesthete. The precise term is salience, so it should be named a salience design system. Synopsys therefore implemented a salience algorithm. There are people around who were born with exactly this capability.

