
How Many Nanometers Do I Need?

Unraveling the Node Wars

Many of us toss the process node into casual conversation, pretending we know what it actually means…  “I hear that SilBlaster is already working on a 37nm FPGA based on TSMC’s medium-K, oxide-minimized, semi-strained, anti-dielectric half-pitch.”  (If you get enough things that sound like buzzwords in there, most people will be too frightened to challenge you.)

In reality, though, most of us designing products with programmable logic are pretty well insulated from the vagaries of semiconductor processes.  When I got my first car that had four valves per cylinder, I was stoked.  (We engineers are easily stoked by things that might not impress the general public.)  I took lots of people for rides around town, just waiting for that first passenger to comment “Wow, this is really a great car – I’m enjoying riding around in it much more than any of those lame 2-valve-per-cylinder models.”

Sadly, that day never came.

Similarly, if I own an electronic product that happens to contain an FPGA, I’ll probably never see the chip itself.  (Not that I’d ever open the case and look inside, because that would most definitely void my warranty. Plus, the decal on the case assures me that I will find “No User Serviceable Parts.”)  Nonetheless, I am always excited if my megawidget boasts one of the latest devices with as few nanometers as possible.  Nanometers are apparently a bit like cholesterol.  You want to keep the count as low as you can get.

So, what are we measuring with all these nanometers anyway?

Ask fifteen engineers what we mean when we describe a process node in nanometers, and you’ll probably get twelve or thirteen panicked cover-up responses:

  • “65nm? Oh, uh, that right there is the width of your gate in nanometers.”
  • “Well, it tells how big a transistor is in that process technology.”
  • “Oh, that’s the seasonally-adjusted dimension of the, uh, square root of the dielectric thickness at its widest point.”
  • “It tells us how far we’ve come on Moore’s Law.”

Of course, none of these are really correct. 

Ask them what the DRAM half-pitch means and you might get some similarly confused answers:

  • “Well, I think sometime in the 90s, a pitcher, uh, whose team plays at DRAM field? Anyway, it’s sort of a cross between a fastball and a slider… or maybe that’s a hamburger.”

Let’s start here:

The International Technology Roadmap for Semiconductors (ITRS) is a reference brought to us by a group of experts from various Semiconductor Industry Associations (SIAs) that gives us a “best guess” at timelines for reaching various process nodes.  It uses the DRAM half-pitch (half the center-to-center distance between adjacent metal lines in a DRAM array) as the benchmark for the density of a process technology.  It is a common misconception that this has something to do with transistor width or length, but (as we all should quickly realize from EE school) transistors need to be all different sizes and shapes for different purposes, regardless of the process node.

The ITRS (in 2007) predicted that the DRAM half-pitch should reach 11nm by sometime around 2022.  Intel (of course) has an even more optimistic prediction.  With FPGAs currently at 90nm, 65nm, and (kinda’ sorta’ coming to) 40nm, an 11nm signpost is definitely too far down the road to read, but it hints that FPGA companies will be fighting node wars for more than the next decade, at least.

Process nodes follow a predetermined sequence based on a popular misinterpretation of the 1965 article penned by Gordon Moore that gave birth to the term “Moore’s Law”.  Much as with the interpretation of legal and religious doctrine, the actual words Mr. Moore wrote all those years ago (and the intent behind them) have become distorted and almost irrelevant.  What lives on, and matters, is the popular perception of what Moore’s Law said and meant.

The public distilled Moore’s Law down to “The number of transistors on a chip shall double every two years.”  If we are semiconductor companies and want to make that happen, we need our benchmark linear measure of feature size to decrease by a factor of the square root of two every two years (so that the number of objects-per-area will double).
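
To make the arithmetic concrete, here is a minimal sketch in Python (nothing vendor- or process-specific is assumed; it simply checks the square-root-of-two rule):

    import math

    # Doubling the transistors per unit area every two years means each
    # linear measure of feature size must shrink by a factor of sqrt(2).
    shrink = 1 / math.sqrt(2)   # ~0.707 per two-year step

    # One step down from the 90nm node:
    print(round(90 * shrink))           # 64 -- close to the "65nm" label

    # And density doubles, since each object now occupies half the area:
    print(round((1 / shrink) ** 2, 3))  # 2.0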

Is everyone still following along?

This means we need to set targets for ourselves to work toward, and (in recent years) that set of targets has followed (and should follow) the sequence 250nm, 180nm, 130nm, 90nm, 65nm, 45nm, 32nm, 22nm, 16nm, 11nm.  Semiconductor manufacturers treat these as two-year targets, and work to overcome the plethora of obstacles limiting them at each level.  The ITRS tracks progress in about thirteen different areas of technology, projected 15 years into the future.  This gives us a nice “crystal ball” prediction of what problems we’ll encounter as we reach each process node, and an “early warning” system that might let us know when Moore’s Law will grind to a final halt.
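
A quick Python sketch (using the node labels above) shows how closely that sequence tracks the ideal square-root-of-two shrink:

    import math

    # The conventional node labels from the sequence above, in nanometers.
    nodes = [250, 180, 130, 90, 65, 45, 32, 22, 16, 11]

    # Each node should be roughly the previous one divided by sqrt(2).
    for prev, node in zip(nodes, nodes[1:]):
        ideal = prev / math.sqrt(2)
        print(f"{prev}nm -> {node}nm (ideal would be ~{ideal:.0f}nm)")

Nine steps at roughly two years apiece spans nearly two decades, which gives a feel for just how far down the road that 11nm signpost really is.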

Most experts believe that Moore’s Law will hit economic feasibility barriers long before it runs into the technical wall.  The cost to set up a fab facility to produce each successive process node is increasing exponentially.  Already, there has been tremendous consolidation in the semiconductor industry such that only a handful of companies can fabricate devices on the latest node.  In the very near future, only governments and conglomerates of the world’s largest companies will have the financial resources to set up a cutting edge fab.  Some predict that by the time we’re at 11nm, a single-digit number of fabs will exist that are tracking the Moore’s Law curve.

Looking specifically at FPGAs, the trend gets even more interesting.  All FPGA companies today are what are known as “fabless” semiconductor companies.  They engineer their own parts but contract the manufacturing out to much larger organizations that have leading-edge semiconductor fabs.  Looking at the three biggest FPGA companies: Xilinx has a “multiple fab” strategy and uses both UMC and Toshiba to manufacture their latest-generation devices.  Altera has a “single fab” strategy and relies solely on TSMC.  Lattice Semiconductor inked a partnership with Fujitsu a few years back and has used that relationship to build a competitive FPGA offering, expanding beyond their bread-and-butter CPLD business.

Actel, because their products depend on unconventional processes (like flash and antifuse), is in a different category on the process node train.  Because fabs generally start with the most regular, straightforward structures to prove a new process (DRAM, for example), it takes a long time to get down to more exotic and esoteric structures like flash memory cells.  FPGAs that depend on flash typically ride at least one, and sometimes two or three, process nodes behind the current SRAM-based FPGAs like those produced by Xilinx and Altera.

The only two companies (currently) that you’re likely to hear much process-node rhetoric from are Xilinx and Altera, so let’s take a closer look at their respective strategies and positions.

Xilinx’s multi-fab strategy has the advantages of diversification and the accompanying risk reduction.  If one fab company has troubles or falls behind, the other one can continue on building devices.  It’s also a bit like betting on multiple horses in a race – your chances of picking the winning horse are quite a bit higher.  Also, many customers are comforted by the notion that there are two independent companies able to produce the devices they’ve designed into their products.

Altera’s single-fab strategy also has compelling advantages.  By working with only one fab, they can focus their resources on wringing the absolute most and best out of that company’s technology.  A multi-fab strategy, in contrast, means that you always have to consider the limitations of all your fabs during design and (in some measure) design for the least common denominator.  A single fab, if it’s the right one, lets you focus and optimize.  In recent times, Altera’s choice, TSMC, has been the undisputed “right” one.  Their higher-risk strategy may carry higher rewards at times, and higher penalties when the luck runs out.

Watching the two companies dance the process node tango is fascinating.  At 90nm, Xilinx decided to take an unprecedented step and release their low-cost devices before their flagship Virtex FPGAs.  There is rampant speculation on the reasons for this.  Some claim that Altera had made such inroads in low-cost FPGAs with their Cyclone family that Xilinx needed the advantages of 90nm to stay in the game.  Some claim that it was much easier to bring up a tiny Spartan device than to work out the subtleties and special features of a Virtex-class FPGA.  Regardless of the reason, Xilinx launched the Spartan-3 family first, and the high-end Virtex-4 family later.

Fast forward to 65nm, and the picture changed.  Xilinx launched Virtex-5 as the first 65nm family.  Altera soon countered with Stratix III.

Then we waited…

Altera, as we might guess, then launched Cyclone III – a 65nm version of their low-cost family.

We waited more…

Now, we’d expect that Xilinx would announce a 65nm low-cost family, but Spartan-3 continued to carry the flag – for an uncharacteristically long time.  We editors went to well-crafted presentations where Xilinx marketers explained that “90nm FPGAs have advantages over 65nm – like lower leakage current.” 

Really?

“Well,” we wondered, “Why doesn’t Altera just keep selling Cyclone II, then – instead of these supposedly leaky and lame Cyclone III devices?”  Apparently they thought that increased density, higher speed, lower dynamic power consumption, and lower cost might be attractive to customers.  Next, we heard rumors that Xilinx had cancelled their 65nm low-cost program and might resurrect it at 45nm. 

“No comment.”

Next, Altera announced their 40nm “Stratix IV” (which won’t be in volume production for quite some time).

Xilinx quickly summoned the press to point out that there is much more to life than process nodes (which is increasingly true, by the way), that Altera’s 40nm Stratix IV wouldn’t be in volume for quite a while yet, and that their 65nm Virtex-5 was on the market long before Altera’s corresponding 65nm Stratix III.

All of these are perfectly reasonable points.

As we move forward, all we should expect is that no single company will stay ahead in this interminable game of process node leapfrog.  As such, if you put all your energy into one vendor, you’re likely to have many years when you’re not designing with the latest generation technology. Likewise, if you try to jump back and forth to follow the current leader, you’ll have to master the design flows and quirks of multiple vendors’ environments.  Which will yield better long-term results is anybody’s guess.

Finally, remember that, like my “four valves per cylinder” car – process node is only one narrow measure of an FPGA.  Of equal importance are the robustness of the design tools, the usefulness and breadth of the available IP, and the array of features and services that accompany the silicon platform.  Only by weighing the whole equation can we make an informed decision.

