
(Moore’s) Law of Diminishing Return

When Do We Have Enough?

Anyone following the semiconductor industry for the past few decades has seen something unprecedented in human history: sustained exponential growth that has survived for over four decades, with resulting numbers that are absolutely mind-boggling. Analysts and writers have struggled to find appropriate metaphors: “If the auto industry had done this, all cars would now travel faster than the speed of light and get over a million miles per gallon.” Even these attempts seem to fall short of giving the audience a grasp of the magnitude of the accomplishment.
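To see where metaphors like that come from, here is a minimal back-of-the-envelope sketch in Python. The two-year doubling cadence and the forty-year span are illustrative assumptions (Moore’s original observation and its later revisions differ on the exact period), not figures from this article:

```python
# Back-of-the-envelope: compound doubling over four decades.
# The doubling period is an illustrative assumption, not a precise figure.
DOUBLING_PERIOD_YEARS = 2
YEARS = 40  # "over four decades"

doublings = YEARS / DOUBLING_PERIOD_YEARS   # 20 doublings
growth_factor = 2 ** doublings              # 2^20

print(f"{YEARS} years at one doubling every {DOUBLING_PERIOD_YEARS} years = "
      f"{doublings:.0f} doublings, a {growth_factor:,.0f}x increase")
# -> 40 years at one doubling every 2 years = 20 doublings, a 1,048,576x increase
```

That 2^20, roughly a million-fold factor, is where figures like “over a million miles per gallon” come from: apply the same compounding to almost any 1970s baseline and the result sounds absurd.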

In the FPGA industry (now about three decades old), progress has been even faster than Moore’s Law. FPGAs started out at the tail end of each process node: the new upstart companies were not at the front of the line at the merchant fabs, so they got the latest technology later than the leaders. As time went by, the FPGA companies migrated to the front of the line. In addition, FPGAs have made architectural gains that have helped them outpace Moore’s Law in a number of ways. As a result, the programmable logic devices of today bear scant resemblance to those of a few years ago. With an almost incomprehensible increase in density, a “good enough” improvement in speed, and remarkable transformations in power consumption, today’s FPGAs fill a completely different set of design and market needs from those of the past.

FPGA companies have had to adapt to this change, and it hasn’t been easy. Field applications engineers (FAEs) – always the “secret weapon” of the FPGA industry – used to be generalists. They could show up at your location with the design tools on their laptops, tweak a few configuration options, swap around a few lines of VHDL, and get your design humming right along in an hour or two. Nothing you would be doing with their FPGA would surprise them. They could handle it all.

Today, the FAE has to be more specialized. FPGA users may have problems that require deep knowledge of specific areas, ranging from signal integrity and board design with multi-gigabit transceiver links, to DSP algorithms implemented as hardware accelerators in FPGA fabric, to embedded operating systems running applications on multi-core processing subsystems inside the FPGA. Any one of those topics could be a career-long study for a true expert. FPGA companies have had to divide and conquer – training teams of FAEs in different specialties.

Also, as this evolution has progressed, the FPGA has moved from being a tiny part of most systems – “glue logic” thrown down at the last minute to bridge incompatible standards on a board – to a “programmable system on chip” into which most of a system’s capabilities are integrated. Now, the biggest reason to keep anything OFF the FPGA is a requirement for some special process: analog, memory, and other “specialized” chips are among the last holdouts that can’t be pulled into your FPGA design.

With each of the past four or five generations of FPGAs, the industry has declared victory. “This time,” they say, “FPGAs are TRUE ASIC replacements.” Each time, it’s at least partially true. With each new process node, ASIC, COT, and custom chip designs in general become exponentially more expensive, and fewer and fewer companies have the resources and/or the need to design a custom chip. As applications fall off the ASIC truck, they generally land softly in the FPGA net. They have to make some compromises, of course. Unit costs are much higher, but they are offset by dramatically lower NRE, design costs, and risk. Power consumption is far worse than that of full-custom chips, but usually “good enough” – and the gap is closing with each new generation. Performance is nothing like full custom either, but it is also “good enough,” and the lack of ultra-high clock frequencies can be offset by clever use of parallelization.
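As a rough illustration of that frequency-for-parallelism trade, consider the sketch below. All of the numbers are hypothetical placeholders, not figures from this article: suppose a full-custom datapath clocks at 2.4 GHz while the FPGA fabric tops out at 300 MHz.

```python
# Rough sketch: trading clock frequency for parallelism.
# All figures here are hypothetical placeholders, purely for illustration.
import math

asic_clock_hz = 2.4e9   # hypothetical full-custom datapath clock
fpga_clock_hz = 300e6   # hypothetical FPGA fabric clock
items_per_cycle = 1     # each processing lane handles one item per cycle

asic_throughput = asic_clock_hz * items_per_cycle

# Parallel FPGA lanes needed to match the custom chip's aggregate throughput
lanes_needed = math.ceil(asic_throughput / (fpga_clock_hz * items_per_cycle))

print(f"{lanes_needed} parallel FPGA lanes match one "
      f"{asic_clock_hz / 1e9:.1f} GHz custom datapath")
# -> 8 parallel FPGA lanes match one 2.4 GHz custom datapath
```

The catch is that this trade spends area instead of frequency, and it works only for workloads that parallelize cleanly, such as streaming DSP or packet processing; a long serial dependency chain cannot be bought back with more lanes.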

However, this repeated claim that “this time, FPGAs have arrived” has started to take on the feeling of crying wolf. From the early days, when FPGA vendors boasted of millions of “system gates” only to have reality reveal mere thousands of “ASIC-gate equivalents,” the FPGA companies have tarnished their own credibility with extravagant claims. The trouble is, now that those long-repeated claims are actually coming true, will anyone believe them? The latest 28nm and soon-to-be 22nm FPGAs (with smaller geometries already in the works) offer a remarkable amount of capability. They can certainly keep pace with custom chips that are only a process node or so behind them, and in many of their functions (such as high-speed serial connectivity) they are at the forefront of capability.

FPGAs, by almost any measure, have arrived. With densities now reaching 2 million look-up tables, they can replace custom devices in all but the most demanding applications, and they can bring that capability to market years before comparable ASSPs can follow with standardized, mass-market chips. With each passing process node, the “FPGA penalty” grows smaller. Unit prices decrease, power disadvantages diminish, and functional and performance capabilities pass the “good enough” line for a larger and larger subset of potential applications. Now, with heterogeneous processes possible within a single FPGA package, even process-incompatible functions like analog and non-volatile memory can potentially be included in FPGAs.

This brings up the next logical question in the evolution of FPGAs: when do we hit the point of diminishing marginal returns? Already, we are seeing a narrowing of the list of applications that require the biggest, baddest FPGAs the vendors can produce. Ironically, one of the “killer apps” for the biggest FPGAs is prototyping custom chips. If we hit the point where custom chips on the latest process nodes are out of reach for everyone but a tiny set of elite companies, will the need for the largest FPGAs disappear as well? If so, that leaves the rest of the FPGA family lines to battle it out for the market.

In this scenario, “world’s largest” and “world’s fastest” will no longer be worth much, except as bragging rights. The vast majority of designers will be selecting their FPGAs from the middle of the range, and the company that provides the best fit of capabilities at the right price for any particular application will win the socket. Emphasis will shift from “bigger, faster” to “cheaper, more efficient.” At some point, when the BOM contribution and/or power consumption of the FPGA become irrelevant in the big picture of system design, FPGAs could truly enter the realm of commodities – much as DRAM devices are today.

It also starts to feel like crying wolf to claim, constantly, that FPGAs are at a crossroads or a turning point. In the lifespan of this interesting technology, however, it seems to be true remarkably often. Perhaps that is the inherent nature of a sustained exponential: no matter how amazed you are at what you’ve already seen – you ain’t seen nothin’ yet.

2 thoughts on “(Moore’s) Law of Diminishing Return”

  1. There has been a lot of debate over the years about when Moore’s Law will end. However, another interesting question might be: When will we stop caring?

  2. As Moore’s Law gives us more and more logic with every node, the question is not only “When will FPGAs replace ASICs for the most demanding applications?” but also “When will single-chip solutions eliminate the need for multi-board, multi-chip systems?” In the networking industry we are still very far from that point. You still see many switches/routers implemented as complicated chassis with many line cards carrying multiple chips. As time goes on, it becomes possible to build ever more capable “single-chip” switch/routers (setting aside memories, PHY devices, etc.). In an ASIC, however, this might not be economical. FPGAs seem to be a good way to proceed, replacing expensive and complicated chassis-based solutions with a more compact system built from an FPGA and some peripherals. Toward that end, a significant improvement in frequency and LUT count is still desired.

