
The Sun Sets on Moore’s Law

Are FPGAs Harbingers of a New Era?

The title may have put you off. In fact, it probably should have. After all, most of us in the press/analyst community have – at one time or another during the past decade or two – been walking around like idiots wearing sandwich signs saying, “The End is Nigh!” And, we got just about as much attention as we deserved. “Yawn, very interesting, press and analysts, and now back to planning the next process node…”

It gets worse. Predicting that Moore’s Law will end is pretty much a no-brainer. It’s about as controversial as predicting that a person will die… someday. There is obviously some point at which the laws of physics and the reality of economics will no longer allow us to double the amount of stuff we put on a single chip every two years. The question is – when will we reach that point, and how will we know we are there? 

The end of Moore’s law won’t be like a sudden train derailment – sending cars crashing into one another while the whole thing explodes into a fiery ball. It will be more like – sunset, with the white-hot light of day slowly fading through an array of vivid colors into the long, warm darkness of ubiquitous commodity semiconductor processes.

FPGAs may well be the cicadas whose fading songs signal the beginning of that technological twilight. For the better part of three decades, the FPGA industry has been both driving and driven by Moore’s Law. In each two-year cycle, the company that reached the next node first could claim victory, and that triumph would show up in its future financial performance. Being the first to roll out chips with double-the-everything-for-cheaper was a sure-fire formula for winning the high-end sockets that would lead to later big-volume, high-margin sales.

The parade of press releases chronicled the biennial battle with the enthusiasm of home-team sportscasters at a championship game. “Ours are first! Ours are fastest! Ours are YOURS!” The flurry of superlatives never failed to captivate and confuse us, and the motto “Declare victory early and often” seemed to be eternally engraved in the rulebook for FPGA market competition.

Now, however, that frenzied communications cacophony has taken on a slower pace and a more studied tone. Yes, there is still a perfectly viable next-node race going on, with FinFETs sweetening the pot – Xilinx riding on TSMC’s 16nm FinFET process vs. Altera’s Intel 14nm Tri-Gate mount. Both companies and both semiconductor fabs have had challenges with that node that have pushed back the original schedules, and it is not the least bit clear at this point which of the two big players will emerge with the first and/or best mind-boggling FinFET FPGAs.

But the landscape surrounding that race is what has changed, dramatically. And it has changed enough that one can hear the din of the cicadas fading slowly in response to the setting sun of fifty years of Moore’s Law.

What’s different?

For starters, Altera took a complete pass on the 20nm node with their high-end Stratix family, choosing instead to roll only their mid-range Arria 10 devices on that process. The Stratix line held at 28nm, with all the effort going into the upcoming Intel 14nm project. Xilinx relied heavily on interposer-based multi-die technology to get the impressive numbers in their latest announcement. Both companies are hedging their FinFET bets, and they are laying the groundwork for a new status quo by explaining to customers how 28nm will be a “long-lived” node.

28nm will be “long-lived” for several reasons. First, the next node will most definitely arrive later than usual. The two-year Moore’s Law bell has already rung, and we’re not looking at any new chips yet. And we almost certainly won’t be until sometime next calendar year. Even then, only the super-big, super-fast high-end FPGAs will be fabricated on the profusely-bleeding-edge FinFET technologies. For smaller devices, the economics are better at 28nm, and probably will be for a while.

For the past several nodes, the gifts of Moore’s Law have been less lavish, and the cost has been considerably higher. In the good ol’ days, we got double everything – double speed, double power efficiency, double density (and therefore half the cost) – there was nothing not to like. Then slowly, node by node, Mr. Moore became increasingly stingy. Chip designers had to trade off between leakage current and speed, between density and power consumption, between everything and everything else. Instead of getting everything doubled, we got to choose our favorite two, then our favorite one, and finally, with 20nm, there was a bit of a question as to what we were really gaining, and whether it offset the dramatically higher costs.

For semiconductor companies, FPGAs have been the go-to test pilots for each new process. FPGAs contain a wide variety of structures – LUT fabric, memory, processor cores, DSP cores, fancy IO, SerDes transceivers, and even some analog. With an FPGA, you could try out your newest semiconductor process on most of the elements that would be included in the really heartbreaking SoCs coming down the pipe from companies that make things like smart phones, but with a much lower level of overall complexity. All of that step-and-repeat, replicated-cell LUT fabric made for some nice low-drama die-filling. And FPGAs had the economy of scale to ramp up to decent production numbers without pulling a millions-of-units vacuum on the fab before the process was mature.

As the non-recurring engineering cost for each new node has continued exponentially upward, propelled most recently by soul-crushing complexities like double-, triple-, and even possibly quadruple-patterning, the number of companies with the resources to actually complete a chip design on those nodes has diminished. We may be in danger of reaching the point where, to paraphrase Yogi Berra, “Nobody designs high-end chips anymore because it’s too expensive.” At the same time, packaging has reached record levels of cost and complexity, leading us to a point where the resources and wherewithal required to produce a leading-edge chip (or chips) packaged in a state-of-the-art module are simply staggering.

As the field of players designing SoCs (or SiPs) on leading processes narrows to the very largest system suppliers – Apple, Qualcomm, and the like – and with FPGA companies leading the charge into each node ahead of those massive-volume players, we expect to see the FPGA companies as the first to blink (maybe several times) at the idea of continuing to the next node. Sure, we are likely to see 10nm and maybe even 7nm ICs at some point (although most definitely not at the historical 2-year tempo), but it’s pretty unclear whether there’s a step after that.

The FPGA companies are already starting that blinking here at the 14/16nm node (unless they just got something in their eyes for a minute there) – spreading their offerings across a range of established processes, relying more on interposer-based packaging to ratchet up the capabilities, slowing down the pace of new family introductions, and beginning to market their software, IP, and other differentiators far more than the bare-metal capabilities of the silicon itself.

The world will not end as the light fades on Moore’s Law, however. We should remember that sunset in one place is sunrise somewhere else. When we can’t sit back in our recliners and get 2x everything for free every two years, we may have to earn the next level of innovation on our own – and that will be exciting to see.

13 thoughts on “The Sun Sets on Moore’s Law”

  1. A bigger question is simpler … do we even need 2X every two years?

    We have seen a rapid rise in user-interface complexity – from serial RS232 character terminals to the real-time, computer-generated, high-resolution, stereo streaming video of virtual-reality games – enabled by Moore’s Law over the last 30 years.

    What is the next commodity application that could possibly pay for, and be enabled by, two or three more generations of Moore’s Law? Even the most intensive virtual-reality gaming applications are solvable today and would only be cost-reduced by another generation or two of Moore’s Law cycles.

    Can we even engineer applications to take advantage of more complex dies over 6-10 more Moore’s Law cycles, given the finite productivity of engineers and programmers? Can engineering team sizes even grow at (2^N) for large-die projects?

    At some point with Moore’s Law, the cost of the silicon becomes significantly cheaper than the engineering cost to use it, even for commodity high-volume applications – i.e., SiliconCost/(2^N) as die size shrinks, while fixed, large-die projects see EngineeringCost*(2^N) for N more Moore’s Law cycles (see the rough crossover sketch after these comments).

  2. Interesting article and well written, Kevin. Let me open by stating that the FPGA vendors have already blinked. For example, in the latest Xilinx product release, not all devices were scheduled to move to the 20nm node. In fact, the FPGA vendors are generally downplaying the “I’m at the next node first” chest-thumping and are focusing much more on hardware feature differentiation, and on becoming more “software ready” for a potentially new breed of FPGA users.

    The sun has been setting on Moore’s Law, from an economic standpoint, for many years now. The higher cost of the next node, coupled with risky hardware development/deployment, has played a big role in moving more and more overall electronic system content to software.

    What may extend Moore’s Law beyond the traditional monolithic 2D die shrink will be advances in 3D processing technology coupled with advances in 2.5D/3D packaging technologies. If the industry can truly collaborate so that these technologies can present themselves reliably and more affordably, we could see more interest in hardware implementation.

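To put rough numbers on the cost-crossover argument in the first comment above, here is a minimal sketch in Python. The starting figures (a $10 die cost and $50M of engineering cost per large-die project) and the assumption that both follow clean 2^N curves are hypothetical illustrations, not industry data.

```python
# A minimal sketch of the cost crossover described in comment 1 above.
# Assumptions (hypothetical, for illustration only):
#   - per-die silicon cost shrinks as SiliconCost / 2**N per Moore's Law cycle
#   - engineering (NRE) cost for a large-die project grows as EngineeringCost * 2**N

def silicon_cost(n, base=10.0):
    """Per-die silicon cost (dollars) after n more Moore's Law cycles."""
    return base / (2 ** n)

def engineering_cost(n, base=50e6):
    """Per-design engineering cost (dollars) after n more cycles."""
    return base * (2 ** n)

def break_even_volume(n, si_base=10.0, eng_base=50e6):
    """Unit volume at which the per-die savings of moving from node n to
    node n+1 repay the added engineering cost of that move."""
    per_die_saving = silicon_cost(n, si_base) - silicon_cost(n + 1, si_base)
    added_nre = engineering_cost(n + 1, eng_base) - engineering_cost(n, eng_base)
    return added_nre / per_die_saving

if __name__ == "__main__":
    for n in range(6):
        print(f"N={n}: die ${silicon_cost(n):5.2f}, "
              f"NRE ${engineering_cost(n) / 1e6:7.0f}M, "
              f"break-even volume for the next node ~{break_even_volume(n):,.0f} units")
```

Under these made-up numbers, the unit volume needed to pay back one more node roughly quadruples every cycle – which is the commenter’s point: engineering cost, not silicon cost, becomes the limiting factor.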
