
That’s with a “B”

EVE Crosses a New Emulator Threshold

From the day we are old enough to articulate polysyllabisms, we are fascinated with big numbers. The concept of “big,” of course, being relative. Some older cultures are thought to have had three quantifiers: one, two, and many – a history suggested by the resemblance between the French words for “three” (trois) and “very” (très). (Here you have to think really old Europe.)

But we’ve moved beyond that. Way beyond that. Why, in my day, a million dollars was a lot of money. Being a millionaire meant something. It got you respect. We still pretend it means something, but if you can attain millionaire status simply by answering questions on a game show, then you know that some fat cats somewhere are rocking back in their chairs having a good belly-laugh at the poor schlubs who think that they’ll actually be somebody by the end of the show.

That said, there’s still a mystique to that indicator of great number, the ending “-illion.” Anything with an -illion is big. And if we’re not sure of the specific bigness of a number, we can just preface the -illion with a euphonious consonant, giving us “jillion” (or is it “gillion”?) or zillion – or, bigger yet, gazillion or bazillion (both of which indicate vague multiples of a zillion).

But, back in the real world, even if a million no longer has its prior luster, we can still be wowed by a billion. That’s a big big number in human terms. (It used to be a big number in monetary terms as well, until the current financial nightmare set a new standard and made the trillion the new basic currency.)

Just as attaining a billion of something is an amazing feat, doing two billion is also noteworthy, since it’s twice as much. At ISSCC in 2008, the room was packed to hear Intel describe the first 2-billion-transistor circuit. But three, four, five billion? Well, at that point we pretty much change channels to look for something more interesting. Our attention is recaptured once there’s a trillion or when something new crosses a billion threshold.

So if we’re done with a billion transistors, what’s the next unit that would logically be billionized? In digital circuit terms, the next unit would be the gate. And, let’s face it, this is really a digital thing. Not sure if anyone wants to try to design and debug a billion- (or million-) op-amp design.

A billion gates is a lot. It’s hard to conceive of in real terms. Larry Ellison can end up in a corner with the DTs just thinking about a billion Gates. Or, looked at another way, if we could take the entire human population (more or less) and house them six at a time in nice suburban domiciles, then all of those houses in the world would collectively have a billion gates in their white picket fences.

It’s just hard to visualize.

But then again, we masters of the micro, nay, the nano, are good at doing things that are hard to visualize. So if we’re going to design a billion-gate circuit, how are we going to manage it? Verification of all the pieces together is going to be a challenge, to put it mildly.

And this is where we find our billion-gate headline: EVE is announcing their ZeBu-Server emulator, which can handle a billion gates. (And, for you healthy skeptics: even though they use Xilinx FPGAs, they don’t measure that capacity in Xilinx FPGA gates.) This sounds like a monster by current emulation standards, although it’s actually only incrementally bigger than what can be done today – it’s the fact of replacing the “m” with a “b” that makes one sit up and take notice.

If we compare with what’s publicly out there now (which could change next week, of course), the biggest single extant machine is Mentor’s Veloce, at 128 million gates. But it’s not so simple: these single machines can also be ganged together to provide larger capacity. You can combine 4 Veloces to get 512 million gates, but you can combine 8 of the next-largest machine, EVE’s ZeBu-XXL, at 100 million gates each, for a total capacity of 800 million gates.

The ZeBu-Server box can handle 200 million gates, and up to five of them can be combined to reach the 1-billion mark. That’s 25% more capacity than the existing 800 million gates, but the thing to remember is that the 800 million reflects the top end of the older family. This is the start of a new family and, presumably, the launch of a new density ride, given EVE’s close relationship with Xilinx and their Virtex roadmap. So a reasonable expectation is that more is in the offing.
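For the arithmetic-minded, the capacity comparison works out as follows; this is just a sanity check on the publicly quoted figures above, nothing more:

```python
# Sanity check on the ganged-capacity arithmetic quoted above.
veloce_single = 128_000_000               # Mentor Veloce, one machine
veloce_max = 4 * veloce_single            # 4 ganged Veloces

zebu_xxl_single = 100_000_000             # EVE ZeBu-XXL, one machine
zebu_xxl_max = 8 * zebu_xxl_single        # 8 ganged ZeBu-XXLs

zebu_server_single = 200_000_000          # ZeBu-Server, one box
zebu_server_max = 5 * zebu_server_single  # 5 ganged boxes

print(f"Veloce max:      {veloce_max:>13,}")        # 512,000,000
print(f"ZeBu-XXL max:    {zebu_xxl_max:>13,}")      # 800,000,000
print(f"ZeBu-Server max: {zebu_server_max:>13,}")   # 1,000,000,000
print(f"Gain over ZeBu-XXL: {zebu_server_max / zebu_xxl_max - 1:.0%}")  # 25%
```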

Because emulators can be big, bulky, and power-hungry, their manufacturers try to watch the footprint of the box to keep it from getting out of control as density grows. For a 200-million-gate design, ZeBu-Server is 25% bigger than the older version but less than 10% of the size of the nearest competitor; it’s about 17% heavier than the prior version but about 18% of the weight of the nearest competitor; and it draws half the power of the earlier version and 10% that of the nearest competitor.

Of course, a big question when handling huge designs is, how long does it take to compile? Here EVE brings in their zFAST compiler, which trades off some logic packing for faster turnaround time. They also allow the user to assist with some critical partitioning decisions to improve results. They claim to be able to do a from-scratch compilation of 200 million gates in 10 hours and a billion gates in 12 hours. There’s also an incremental-compile capability for adding new logic or fixing old logic that can reduce the impact of last-minute changes as the design clock winds down.
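Those two numbers together say something: a design five times larger compiles in only slightly more time, which presumably means the per-box compiles run largely in parallel. A quick back-of-the-envelope check (the gate counts and hours are the claims above; the parallelism reading is my inference):

```python
# Back-of-the-envelope look at the claimed from-scratch compile times.
gates_small, hours_small = 200_000_000, 10      # single-box design
gates_big, hours_big = 1_000_000_000, 12        # five-box, billion-gate design

rate_small = gates_small / hours_small          # effective gates per hour
rate_big = gates_big / hours_big

print(f"200M design: {rate_small / 1e6:.0f}M gates/hour")   # 20M/hour
print(f"1B design:   {rate_big / 1e6:.0f}M gates/hour")     # ~83M/hour

# A 5x bigger design taking only 1.2x longer suggests the per-box compiles
# run in parallel, with the extra couple of hours going to partitioning and
# cross-box stitching. That reading is an inference, not a vendor statement.
```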

The other big consideration with a really large box is, what if all my designs aren’t this big? How can I leverage the box in other ways? EVE has constructed each box with five modules, and each module can host a different user. This means that, with five boxes – a billion gates of capacity – you can host up to 25 different users at the same time. So you don’t have to do one humongous design; you can also work several smaller ones concurrently.
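The multi-user math is straightforward; note that the per-module figure below is simply 200 million gates divided across five modules – an inference, not a published spec:

```python
# Multi-user arithmetic for the billion-gate configuration.
boxes = 5
modules_per_box = 5
gates_per_box = 200_000_000

max_users = boxes * modules_per_box                    # 25 concurrent users
gates_per_module = gates_per_box // modules_per_box    # ~40M gates (inferred)

print(f"Max concurrent users: {max_users}")
print(f"Implied capacity per module: {gates_per_module:,} gates")
```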

Flipping that around, you can also hook a different PC host up to each module. So for a giant design, you can add bandwidth by using several PCs to stream trace data, for example, and then have it all combined into one waveform afterwards. Or you can split different transactors over, say, two PCs and then put checkers on a third one. Distributing the PC host workload like this can shorten the amount of time it takes to complete a large run.
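Conceptually, that distributed-host arrangement looks something like the toy sketch below: one worker per module pulls its own trace stream, and the streams are merged into a single time-ordered waveform afterwards. This is purely an illustration of the workflow shape – the functions and data here are stand-ins, not EVE’s actual tooling or API.

```python
# Toy illustration: several hosts each stream trace data for one module,
# then the per-module streams are merged into one time-ordered waveform.
# All names and data here are hypothetical stand-ins, not EVE's API.
from concurrent.futures import ProcessPoolExecutor
import heapq
import random

def stream_module_trace(module_id, num_events=5):
    """Stand-in for one PC host pulling trace events off one module."""
    random.seed(module_id)
    cycle, events = 0, []
    for _ in range(num_events):
        cycle += random.randint(1, 10)        # emulation cycle of the event
        events.append((cycle, module_id, f"event_at_{cycle}"))
    return events                             # already sorted by cycle

if __name__ == "__main__":
    modules = range(5)                        # five modules in one box
    with ProcessPoolExecutor(max_workers=5) as pool:
        per_module = list(pool.map(stream_module_trace, modules))

    # Merge the sorted per-module streams into a single waveform by cycle.
    for cycle, module, event in heapq.merge(*per_module):
        print(f"cycle {cycle:3d}  module {module}  {event}")
```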

They’ve also taken steps to help multiple designs play nicely together in the box. When you compile a design, the compiler normally positions it somewhere in the array of FPGAs that hosts the design. But what if another user has already compiled a design into that same position and you both want to work at the same time? EVE now allows what they call Design-Under-Test (DUT) relocation: they create five different images of the design at compile time, under the theory that at least one of those positions will be free when you try to load it.
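The relocation idea reduces to a simple slot-selection step at load time. The sketch below is a conceptual illustration only – the data structures and the one-position-per-image assumption are mine, not EVE’s:

```python
# Conceptual sketch of DUT relocation: the compiler emits one image per
# candidate position, and the loader picks an image whose positions are
# currently free. Names and structures are hypothetical illustrations.

def pick_relocatable_image(images, busy_positions):
    """images: {image_name: set of FPGA/module positions it occupies}
    busy_positions: positions already claimed by other users."""
    for name, needed in images.items():
        if needed.isdisjoint(busy_positions):  # every needed position is free
            return name
    return None                                # nothing fits right now

# Five precompiled images of the same design, each targeting a different slot.
images = {f"dut_image_{i}": {i} for i in range(5)}

busy = {0, 1, 3}                               # other users hold slots 0, 1, 3
print(f"Loading {pick_relocatable_image(images, busy)}")  # dut_image_2
```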

With respect to what you can do with the box – the emulation, debug, and run-time environment characteristics – they leverage the capabilities developed for the existing boxes, so we won’t run through an inventory of that here. What they call their “smart debug” methodology becomes more important for tracking down issues since, presumably, with a bigger design, more things are happening over a longer period of time, and nailing the specific cycle where some bizarre bad behavior blips in and out gets more difficult. Essentially, you’ve got a bigger sandbox, so having a solid structured means of locating that little diamond earring you accidentally dropped in the sand makes it more likely you’ll actually find it.

But for now, we can just sit back and enjoy the crossing of a threshold. Which we’ll probably do again when someone crosses 2 billion. When they get to 3 or 4? Meh… not so much… [yawn]. Wake me up when we get to bailout stimulus levels, with a “t”.

Link: ZeBu-Server
