
Intel – Flourish or Flounder?

Moore’s Law Confuses Again

“If the end of Moore’s Law is a wall, the first one to slam into it will be whoever was ahead.” — Me, today

Days after we published our “No More Nanometers” article discussing the perils and pitfalls of our arcane node-naming conventions, Intel released their quarterly earnings for Q2 2020. Here at EE Journal, we pride ourselves on being a technology publication serving professional electronics engineers, and as such we seldom delve into “business news,” but here we have a situation at the nexus of technology, perception, and cash. Intel announced the results on July 23, and within a few days their stock dropped from around $60 per share (where it had been hovering for months) to around $50 per share – a loss of more than 15% of the company’s value. For a company with a market cap of around $200B, that’s a lot of money.

Yawn. This happens every day. Large companies make their quarterly announcements and the market reacts. 

But this is a little different. Intel announced revenues up something like 20% year-over-year, beat analyst expectations, and posted a whopping 34% increase in “data-centric” revenue – which is the company’s key market opportunity for long-term growth. Why the big drop in stock price on what sounds like a great report? 

The company also mentioned that 7nm volume production was delayed 6 months. Financial news reported this with explanations along the lines of “Circuit widths on chips are measured in nanometers, which are one-billionth of a meter. Smaller circuits make faster, more power-efficient processors. Taiwan Semiconductor (TSM), who manufactures chips for Intel rival Advanced Micro Devices (AMD), is leading the race to make chips at smaller process nodes by mass producing chips at a scale of 5 nanometers.” Non-technical or semi-technical people who try to follow this news conclude something along the lines of – “Intel is now on 10nm and TSMC is on 5nm, which is 2 process nodes. That means Intel is something like four years behind on making data center processors” – which is, of course, completely wrong.
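
Some rough numbers show why. The sketch below compares approximate logic transistor densities for these processes – the figures are estimates widely reported from vendor disclosures and third-party analysis, not official apples-to-apples data – and it makes the point that the marketing names don’t track actual density:

```cpp
#include <cstdio>

// Approximate peak logic densities in millions of transistors per mm^2.
// These are widely reported estimates, not official like-for-like data.
struct Node { const char* name; double mtr_per_mm2; };

int main() {
    const Node nodes[] = {
        {"Intel \"10nm\"",     100.8},
        {"TSMC  \"7nm\" (N7)",  91.2},
        {"TSMC  \"5nm\" (N5)", 171.3},
    };
    for (const Node& n : nodes)
        std::printf("%-18s ~%5.1f MTr/mm^2\n", n.name, n.mtr_per_mm2);
    // Note the ordering: by these estimates, Intel's "10nm" process is
    // actually denser than TSMC's "7nm" -- the names imply the opposite.
    return 0;
}
```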

Intel also hinted that they may have some of their product groups use external (non-Intel) fabs to make chips for them in the interim. This doubled down on the notion that the Intel ship is going down. One might (erroneously) conclude that not only is Intel four years behind in technology, they are actually waving the white flag and surrendering – having rivals like TSMC or Samsung build chips for them. Of course, this too is wrong.

Since Intel was founded, the company has been inextricably linked to Moore’s Law. After all, Moore was one of the company’s founders. And, consistently throughout the modern history of semiconductors, Intel has maintained a clear lead in “Cramming more components onto integrated circuits” (the title of Gordon Moore’s 1965 article that is seen as the origin of Moore’s Law). For decades, Intel’s entire persona as a technology company has been centered around their manufacturing leadership and their ability to out-Moore the competition.

The company built a dynasty around PC processors, and later took over and dominated the data center processing world. It is that data center dominance that most likely represents both the future opportunity and the key vulnerability of Intel as a company. With Intel’s Xeon-based systems holding a commanding market share in the data center, there is little to no room for growth by increased market share, so the company needs the market itself to grow if data center revenue is to offset the long-term decline in the PC market. They also need to continue to defend their dominance against a growing crowd of capable insurgents.

As far as growth in the data center market goes, things are looking rosy for Intel. With the huge acceleration of cloud-based services, AI training and inference, and the explosive growth of data-driven applications, the list of drivers for data center expansion is long and distinguished. If Intel can hold their commanding market share, they stand to ride an enormous wave of growth. Even if their market share slips a few points, they stand to garner the lion’s share of easy money as the world’s data centers expand and upgrade to meet the challenges of the next two decades. That opportunity is already showing up in Intel’s bottom line, accounting for the 34% year-over-year “data-centric” growth.

But, holding market share in the data center of the present-to-future involves far more than simply keeping Xeon a few FLOPS faster than the latest AMD chip. In fact, having a slightly faster processor, a few more cores, or a little more power efficiency has gone from being the central technical issue in the data center to being almost irrelevant. (Just don’t tell the financial community that.) The next generation data center will not be driven by the performance of conventional von Neumann processors like Xeon. It will be based on a heterogeneous architecture with a variety of workload-specific processing elements (CPUs of various types, GPUs, FPGAs, AI accelerators), and the task of feeding those processing elements massive amounts of data will be far more challenging than cranking a few more instructions through a processor pipeline.
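
To make that data-feeding point concrete, here is a back-of-the-envelope “roofline”-style estimate (a standard way of reasoning about compute vs. memory limits). The peak-compute and bandwidth numbers are hypothetical round figures chosen purely for illustration:

```cpp
#include <algorithm>
#include <cstdio>

// Roofline-style estimate: attainable performance is capped either by
// compute (peak FLOP/s) or by memory bandwidth, depending on how many
// FLOPs a kernel performs per byte it moves (its arithmetic intensity).
int main() {
    const double peak_gflops = 3000.0; // hypothetical accelerator: 3 TFLOP/s
    const double mem_gbps    = 200.0;  // hypothetical DRAM bandwidth: 200 GB/s

    // Arithmetic intensities (FLOPs per byte) of some example kernels.
    const double kernels[] = {0.25 /* streaming add */, 2.0, 16.0 /* GEMM-like */};
    for (double ai : kernels) {
        double attainable = std::min(peak_gflops, ai * mem_gbps);
        std::printf("AI = %5.2f FLOP/byte -> %6.1f GFLOP/s (%s-bound)\n",
                    ai, attainable,
                    attainable < peak_gflops ? "memory" : "compute");
    }
    return 0;
}
```

At 0.25 FLOPs per byte – typical of streaming workloads – our hypothetical 3 TFLOP/s accelerator delivers just 50 GFLOP/s. Bandwidth, not raw compute, is the wall.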

Intel’s data center crown has historically been defended by three things. The first of these is a “marketing” defense, which boils down to basically “Nobody ever got fired for buying Intel.” If you’re running a data center and you buy the system that 90% of the data centers are buying, you’re really not taking any big career risk. The second defense is the x86 instruction set architecture (ISA), which has for decades been the industry standard and the default target for most software compilers, giving us a universe where most legacy software executables are x86. Venture out of the x86 lane (to ARM, for example), and you have always been left with a big question mark as to whether all the software you need will work. The third and final pillar of Intel’s defense has always been the aforementioned process technology leadership. Intel’s chips have usually been one notch better than the competition simply because they were fabricated on more advanced semiconductor processes.

Now, both of Intel’s historical technological data center defenses are under attack. The rise of ISA-independent computing and the huge increase in demand for heterogeneous computing in the data center have effectively nullified the x86 lock-in on software. Intel themselves are contributing to the demise of the x86 defense by embracing the heterogeneous future with their oneAPI initiative, allowing software to be developed once and easily retargeted to a variety of mixed-architecture computing elements. The proliferation of virtual machines, containers, and other now-mainstream technologies has basically leveled the playing field when it comes to ISAs. And, with the heterogeneous nature of the next-generation data center, sophisticated retargeting of workloads across multiple architectures and ISAs is a fundamental requirement. In the new data center, x86 just doesn’t represent the kind of “lock-in” that it once did, and that trend will only continue in the future.
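
For a flavor of what that retargeting looks like in practice, here is a minimal vector-add written against the SYCL 2020 API that underpins oneAPI’s DPC++ – a sketch, not Intel’s reference code. The same kernel source can be dispatched to a CPU, a GPU, or an FPGA simply by changing the queue’s device selector:

```cpp
#include <sycl/sycl.hpp>
#include <cstdio>
#include <vector>

int main() {
    constexpr size_t N = 1024;
    std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

    // Choose a device at runtime. Swapping in sycl::cpu_selector_v,
    // sycl::gpu_selector_v, or a vendor FPGA selector retargets the
    // same kernel with no changes to the kernel source itself.
    sycl::queue q{sycl::default_selector_v};

    {   // Buffers hand the host data to the SYCL runtime.
        sycl::buffer<float> A(a.data(), sycl::range<1>(N));
        sycl::buffer<float> B(b.data(), sycl::range<1>(N));
        sycl::buffer<float> C(c.data(), sycl::range<1>(N));

        q.submit([&](sycl::handler& h) {
            sycl::accessor x(A, h, sycl::read_only);
            sycl::accessor y(B, h, sycl::read_only);
            sycl::accessor z(C, h, sycl::write_only);
            h.parallel_for(sycl::range<1>(N),
                           [=](sycl::id<1> i) { z[i] = x[i] + y[i]; });
        });
    }   // Buffer destruction waits for the kernel and copies results back.

    std::printf("c[0] = %.1f (expect 3.0)\n", c[0]);
    return 0;
}
```

Notice that nothing in the kernel mentions an ISA; the compiler and runtime handle the mapping, which is precisely what erodes the old x86 lock-in.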

Regarding manufacturing and process technology superiority, the end of Moore’s Law is imminent, and any company’s reliance on being the first to produce a new, denser process is destined to fade into inconsequentiality. In fact, for several process generations, the “cramming more components” part has yielded diminishing returns. The big gains in process technology have come from innovations such as FinFETs and other advances that don’t relate to geometry shrinks. And, as we discussed in our earlier article, the whole “7nm” vs “10nm” vs “5nm” terminology is bogus anyway. More of the gains traditionally seen from process shrinks will now shift to areas like packaging, where Intel does have potentially significant advantages with technologies such as their Embedded Multi-die Interconnect Bridge (EMIB) and their FOVEROS 3D die stacking technology.

“But, where is Intel actually in the semiconductor technology race?” (Sigh. If you’ve been paying attention the last few paragraphs, you might realize this isn’t a very important question.) It’s a little difficult to tell precisely. From a power, performance, and area (PPA) point of view, Intel’s “10nm” node is very similar to TSMC’s “7nm” node. If (as we expect) Intel’s “7nm” is similar to TSMC’s “5nm” – which just went into production – then Intel is several months to a year behind TSMC, based on the recent announcement that yield issues are delaying 7nm’s roll into volume production. Of course, TSMC does not compete directly with Intel in the data center business, but they fabricate chips for AMD, who is Intel’s primary competitor for conventional CPU slots in the data center. Intel appears to be hedging their bets by announcing that they might rely on other fabs such as TSMC to manufacture some chips or chiplets. (Outsourcing manufacturing is actually not a “new” thing for Intel; TSMC has been manufacturing some lines of Intel FPGAs, such as the “Arria” devices, for several years.) The takeaway is that even if Intel’s fabs encounter new and unanticipated problems, they will still be able to have their chips manufactured with the same technology AMD is using.

In the data center race, AMD has said they expect to be shipping TSMC-5nm “Genoa” processors by the end of 2022. Intel says their (comparable) “7nm” CPUs will not debut on the market until “late 2022 or early 2023.” So, from a pure process-node perspective, and looking just at CPUs, the schedules could end up being pretty close. Meanwhile, Intel now appears to be shipping their 10nm Xeon processors, so the two companies are probably close to process parity for the next few years. By cutting a deal with TSMC (or Samsung) to manufacture CPUs, Intel hedges their bet: they can fabricate Intel CPUs at whichever fab is winning at the moment.

But, as we pointed out above, process parity is only a shrinking part of the picture when it comes to the data center – and computing in general. To summarize: it ain’t about the nanometers anymore. The servers that win in the data center of the future will be those with an architecture that supports heterogeneous computing for diverse workloads, puts vast memory resources local to those processing elements, and provides very high-bandwidth connections between all of the above. In that world, packaging technology, chiplet architectures, accelerators such as GPUs, FPGAs, and dedicated AI engines, advanced memory and storage technologies, and the architecture of the whole computing system become the factors that drive performance far more than the “nanometers” of monolithic CMOS technology.

Intel clearly recognizes this and has, for a while now, taken a more holistic approach to defending the data center. Their numerous acquisitions of AI and FPGA companies and technology, their development of Optane non-volatile memory, their Embedded Multi-die Interconnect Bridge (EMIB) for in-package, high-density interconnect of heterogeneous chips, their FOVEROS technology for 3D die-stacking, their oneAPI initiative – the long list of non-nanometer-based Intel efforts is impressive. It shows a company that realizes they won’t continue to own the data center if their only defenses are the x86 architecture and a vanishing lead in CMOS process as the industry slams into the end of Moore’s Law as we know it.

Will Intel lose some percentage points of market share in the data center in the coming years? Almost certainly. When you have a commanding share of a valuable and growing market, there is really no place to go but down; the incentives for competitors to find ways to cut in and take a piece of the pie are too strong. Will Intel’s data center revenues continue rapid growth? Again, almost certainly. Even if Intel’s market share shrinks, the overall market is positioned to grow at a dizzying rate, and the most growth will be experienced by the company with the biggest market share – and that will likely be Intel.

Most intriguing will be the dynamic that plays out in the architecture of the data center. There are two opposing forces at play here. First, the proliferation of numerous novel architectures for workload-specific acceleration opens the door for large numbers of new players to bring exciting innovation to the party. The number of startups currently developing various AI accelerators is clear evidence of that. And, the opening up of the architecture to accommodate those new processing elements will obliterate a lot of anti-competitive technological “lock-ins” that have been in place for decades. 

The second, opposing force is the mainstream market’s pull toward standardization, and here it pays to understand the difference between the large “Super 7” data center customers and the rest of the world. The Super 7 are Facebook, Google, Microsoft, Amazon, Baidu, Alibaba, and Tencent. They are so large, savvy, and well-funded that it is economically feasible for them to do almost anything to obtain advantages in data center capacity, throughput, or efficiency. They can afford to develop their own custom chips and processors, create their own server platforms, and partner with bleeding-edge startups to get access to novel technology. The rest of the data center market is just about the opposite. They want to build their data centers with the most standardized, interchangeable, well-supported platforms they can, and they want to come back once every five years or so and replace those with the next generation of standardized, interchangeable, well-supported platforms. Thus, when one of the startups makes “big news” by winning a deal with a Super 7 company, that does not translate in any way into the promise of taking over the larger data center market, or even any more of the Super 7.

In the future data center market, huge leverage will belong to whoever owns and controls both the hardware and software architecture for running applications on decentralized heterogeneous computing architectures, and can deliver that in standardized, interchangeable, well-supported platforms. And we have yet to see a clear direction on who will drive industry standards and own manufacturing for critical non-Moore technologies such as the advanced packaging/chiplet ecosystem required to move data between memory and these yet-to-be-defined heterogeneous computing elements. Intel is clearly trying to capture that territory before the crowds arrive, but they will likely not go uncontested. It will be interesting to watch.

 

5 thoughts on “Intel – Flourish or Flounder?”

  1. Yes, I agree on the heterogeneous programming. It will be interesting to see if oneAPI succeeds. The programmer still needs to be aware of the target device to get the best optimization. However, there was also an article yesterday on the “Machine-Programming Code Similarity System,” using AI to write code, with the aim of supporting heterogeneous programming. From the article:
    “While Intel is still expanding the feature set of MISIM, the company has moved it from a research effort to a demonstration effort, with the goal of creating a code recommendation engine to assist all software developers programming across Intel’s various heterogeneous architectures.”

  2. Great article, Kevin, nicely covering all the bases here.
    It defines the exact challenge that my new venture, Nallasway, is hoping to tackle.
    We need a first-principles, grassroots approach to this, to truly benefit from this once-in-a-lifetime shift in our industry.
