
A Merger of Unequals

Magma Announces the Union of Analog and Digital in Titan

As the orks circled the tower in growing numbers, efforts to finish the weapon became increasingly frantic. The mechanical portion was almost complete: all of the strength and stress tests had passed, so the structure was ready to go. They had done practice shots with weights equivalent to the final payload, and distance and accuracy looked good. They fiddled a bit more with the pivots and joints to make sure that wear wouldn’t be excessive. But the real thing they were waiting for was the payload itself. This was a mystery concoction brewed up by some tall mysterious guy with a long beard and a pointy hat in one of the secure rooms near the top of the tower. They had no idea what it was or how he made it. They knew only that it would be poured into the carrier through some system of tubes. They had tested the tubes with water, but because they didn’t know the chemistry of the actual payload, they couldn’t be sure that viscosity wouldn’t be an issue, or even that the liquid wouldn’t react with the tubing or the carrier. And once the payload was ready, they had no time to run a whole new set of tests; the advancing hordes weren’t of the genteel sort that would put their attack on hold while T’s were being crossed. No, they simply had to assume that mystery dude knew how to brew the goo, put it in the shell, and hope it worked.

As we go through our engineering courses in school, certain patterns emerge. There’s a clean world of ones and zeros, Karnaugh maps, state machines. You decide what you want to do, create a digital model, and do it. In this highly deterministic world, the only thing separating you from certain success is how complex a problem you want to solve and how many all-nighters you want to pull solving it. Then there’s a shadowier world. One of two-ports and eigenvectors, Smith charts and convolutions, Laplace transforms and Nyquist criteria. Purple robes and misty rooms. All is not so simple here. You can’t just march out and create new things; you must first master the ancient lore of those who have come before. A blithe design, likely as not, will harbor disaster, as the wrong resonances may push it into instability or interfere with other parts of the circuit, or perhaps an evil green light will emanate, destroying all within its reach, if the wrong words are spoken. This is not for the faint of heart: anyone can figure out digital design, but those who master analog design – they are the respected, the venerated, the slightly suspect.

Digital designers have had to incorporate more analog concepts into their designs as speeds have increased, but for the most part digital and analog circuits are created by separate people in separate universes. The problem is, today’s system-on-chip (SoC) designs need both analog and digital, so the two now have to come together on a single chip and play nicely. There’s no opportunity for old-school integration, where the wizened guru touches different parts of the circuit to locate the problem and magically cures it by taping a newt to one of the power transistors. No, this has to work at 45 nm, hopefully the first time it’s built.

I would probably invite the wrath of a dozen EDA companies were I to suggest that designing tools for digital logic was easy. OK, so maybe it isn’t easy, but it’s, well, largely deterministic. There are rules. Yes, when you get into layout issues things get a bit messier, but with enough rules, you can pretty much ensure success. Tools for analog, on the other hand, must incorporate the accumulated wisdom of the Great Initiates. They must automate the ancient dark incantations. SPICE simulations figure much more heavily here, as third- and fourth-order effects may have an impact.
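
To make that concrete: at bottom, a SPICE engine assembles the differential equations implied by a circuit and integrates them numerically. The toy Python sketch below does this by brute force for a single RC low-pass stage, stepping a step-input response forward in time. Real simulators use implicit, adaptive integration and full device models, so treat this purely as the flavor of the computation, not as how any shipping tool works.

```python
# Toy "transient analysis" of an RC low-pass filter driven by a 1 V step.
# A real SPICE engine solves this class of problem with implicit, adaptive
# integration and detailed device models; this is forward Euler for flavor.

R = 1e3      # resistance in ohms
C = 1e-9     # capacitance in farads (RC = 1 us time constant)
dt = 1e-9    # fixed time step in seconds
v_in = 1.0   # step input, volts
v_out = 0.0  # capacitor starts discharged

steps = 5000  # 5 us of simulated time = 5 time constants
for _ in range(steps):
    # Capacitor equation: dVout/dt = (Vin - Vout) / (R * C)
    v_out += dt * (v_in - v_out) / (R * C)

# After 5 time constants the output should be within ~1% of the input.
print(f"Vout after {steps * dt * 1e6:.1f} us: {v_out:.4f} V")
```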

Then comes the challenge of marrying and verifying the digital and analog parts. There has been no way to verify both the analog and digital portions simultaneously; one side or the other has to be “black-boxed” during simulation. And because each side is created in a different environment, small changes on either side have to be re-integrated, a task that can take a couple of days at best.
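
As a cartoon of what “black-boxing” means in practice, here’s a hypothetical Python sketch: a cycle-accurate digital core is simulated against a behavioral stub standing in for an analog PLL. All of the names and the stub’s ideal behavior are invented for illustration; the point is simply that the analog block’s real dynamics – lock time, jitter, supply sensitivity – never enter the simulation at all.

```python
# Hypothetical sketch of black-box co-simulation: the digital side is
# simulated exactly, while the analog PLL is replaced by an ideal stub.

class AnalogPLLStub:
    """Behavioral stand-in for the real analog PLL -- no SPICE involved."""
    def __init__(self, multiplier):
        self.multiplier = multiplier

    def output_freq(self, ref_freq_hz):
        # The real block would lock gradually and exhibit jitter;
        # the stub just returns the ideal locked frequency.
        return ref_freq_hz * self.multiplier

class DigitalCore:
    """Trivial cycle-accurate 'digital' model: it just counts clocks."""
    def __init__(self):
        self.counter = 0

    def clock(self):
        self.counter += 1

pll = AnalogPLLStub(multiplier=8)
core = DigitalCore()
clk_hz = pll.output_freq(25e6)       # ideal locked output: 200 MHz
cycles_in_1us = round(clk_hz / 1e6)  # cycles in 1 us of simulated time
for _ in range(cycles_in_1us):
    core.clock()
print(f"cycles in 1 us at {clk_hz / 1e6:.0f} MHz: {core.counter}")
```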

The other challenge with analog circuits is that, because they are generally highly hand-crafted, they are hard to migrate between technologies. There’s no quick optical shrink, no Tcl script to port some tried-and-true piece of analog IP down to the next technology node. While digital logic is diving down to the 45-nm node, analog is trudging through mud back at the 90- and 130-nm nodes. Chip finishing activities – those last details required to integrate and unify the final elements of the chip – may happen in a different environment, creating a disconnect between the analog circuit as it originally looked and as it looks after being brought into the full SoC. This disconnect makes it hard to bring the analog circuits forward as the underlying technology advances; the analog circuit pretty much ends up being redone from scratch.

Magma is trying to improve this scenario by unifying the design environment in what they call Titan. The first element of this is a single database containing all elements of the entire chip, whether analog or digital. Any change made to either portion of the circuit is immediately visible to all the tools – no conversion between domains is required. Because every tool works from that one always-current database, digital and analog activities can be coordinated, and events and artifacts in any part of the circuit can be correlated, with no translation step in between.
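
In spirit, that single-database idea looks something like the hypothetical Python sketch below: one store holds both analog and digital design objects, and every registered tool observes each change the moment it lands. This is a cartoon of the concept, not Magma’s actual implementation, and every name in it is invented.

```python
# Hypothetical sketch of a unified design database: analog and digital
# objects live in one store, and all tools are notified of changes at once,
# with no export/import step between domains.

class DesignDatabase:
    def __init__(self):
        self._objects = {}    # object name -> (domain, properties)
        self._listeners = []  # tools observing the database

    def register(self, tool):
        self._listeners.append(tool)

    def update(self, name, domain, **props):
        self._objects[name] = (domain, props)
        for tool in self._listeners:  # every tool sees the change immediately
            tool.on_change(name, domain, props)

class Tool:
    def __init__(self, label):
        self.label = label

    def on_change(self, name, domain, props):
        print(f"[{self.label}] saw {domain} change: {name} -> {props}")

db = DesignDatabase()
db.register(Tool("router"))
db.register(Tool("simulator"))
db.update("bias_gen", "analog", width_um=2.4)   # both tools react instantly
db.update("alu0", "digital", cell="NAND2X1")
```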

Another element of Titan is an initial focus on improving chip finishing, an often manual process, where the loop has historically been very long and small last-minute changes have been painful. The idea with Titan is that there is really no separate “integration” step, since the design pretty much starts out integrated from the beginning, and any finishing activities are applied with equal visibility to all portions of the circuit.

The routing tool also benefits: a global routing step gives a rough route of the entire SoC, analog and digital alike. A fine routing step follows; separate tools handle the digital and analog portions there, but both operate on the same database. To accommodate the fact that analog circuits can have rather novel dimensions, the system is shape-based, not grid-oriented. Finally, but significantly, full-chip simulations can be performed with SPICE-level accuracy for the analog portions and Fast SPICE-level accuracy for the digital portion. Because of the unified database, elements in both analog and digital domains can be viewed at the same time.
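
The shape-based point deserves a sketch of its own. In a gridded router, legality is a matter of which grid cells are occupied; in a shape-based router, spacing checks run against real geometry coordinates, which is exactly what arbitrary analog dimensions require. The minimal Python below checks edge-to-edge spacing between two off-grid rectangles; it is illustrative only and bears no relation to Magma’s actual algorithms.

```python
# Hypothetical shape-based spacing check: rectangles carry real coordinates
# (microns) rather than grid indices, so off-grid analog geometry is fine.

from dataclasses import dataclass

@dataclass
class Rect:
    x1: float
    y1: float
    x2: float
    y2: float

def spacing_violation(a, b, min_space):
    """True if rectangles a and b sit closer than min_space, edge to edge."""
    dx = max(a.x1 - b.x2, b.x1 - a.x2, 0.0)  # horizontal gap, 0 if overlapping
    dy = max(a.y1 - b.y2, b.y1 - a.y2, 0.0)  # vertical gap, 0 if overlapping
    return (dx * dx + dy * dy) ** 0.5 < min_space

analog_trace = Rect(0.00, 0.00, 5.37, 0.42)  # off-grid analog wire
digital_wire = Rect(0.00, 0.60, 5.00, 0.74)  # gridded digital wire
# Gap is 0.18 um, below the 0.20 um rule, so this prints True.
print(spacing_violation(analog_trace, digital_wire, min_space=0.20))
```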

The ultimate test of inviting analog to the same party as digital will be whether analog IP can capture the knowledge of the analog gurus through parameters and constraints. Reuse of analog IP is rare, and successful reuse will be a true indication that the mist over analog design has lifted somewhat. Parameterized cells already enjoy community support in the IPL (Interoperable P-Cell Libraries) effort for digital cells, but analog is a new player in that effort, and the extent to which analog IP can be successfully parameterized has yet to be proven. For the time being, while analog and digital realms can now coexist in the same universe, the analog magi remain critical to the process as tools try to turn black art into science.
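
What might “capturing the knowledge of the gurus” look like? One plausible shape, sketched below in hypothetical Python, is a parameterized cell whose generator refuses to produce anything outside the constraints its original designer validated. Real IPL-style PCells are vastly richer than this; the sketch illustrates only the principle that the constraints travel with the IP.

```python
# Hypothetical parameterized analog cell: the designer's constraints are
# enforced by the generator itself, so a reuser can't silently violate them.

class DiffPairPCell:
    # Constraints the original analog designer baked in:
    MIN_LENGTH_UM = 0.5    # keep channel length well above minimum for matching
    MAX_WL_RATIO = 200.0   # avoid extreme aspect ratios

    def __init__(self, width_um, length_um):
        if length_um < self.MIN_LENGTH_UM:
            raise ValueError(f"L={length_um} um violates the matching constraint")
        if width_um / length_um > self.MAX_WL_RATIO:
            raise ValueError("W/L ratio outside the designer's validated range")
        self.width_um = width_um
        self.length_um = length_um

    def generate(self):
        # A real PCell would emit layout geometry; this one just describes it.
        return f"diff pair: 2 matched FETs, W={self.width_um} um, L={self.length_um} um"

print(DiffPairPCell(width_um=20.0, length_um=1.0).generate())
# DiffPairPCell(width_um=20.0, length_um=0.2) would raise: constraint preserved.
```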
