
Too Big to Fail

Intel’s Itanium chip is 10 years old. Ten years of designing and building one of the biggest, fastest, and most complex microprocessors ever made. And 10 years of making excuses for it, too. For Itanium has been a colossal disappointment, not to say embarrassment, for the chip company. It was intended to upend the whole microprocessor industry and finally spell the end of the hated x86. Instead, here we are 10 years on, and Intel is selling more x86 chips than ever while support for Itanium, which was always a bit meager, continues to wane.

What went wrong? How could Intel—a company with more brains than a zombie Thanksgiving—have fouled up so badly? Microsoft is dropping support for Windows on Itanium. Red Hat will no longer support Itanium in the next version of its Linux distribution. Even Intel itself has discontinued its C compiler for the chip. Itanium chip sales have never come close to their expected level, and 95% of the chips that do sell go directly to HP, the company that helped design it in the first place. Like a certain passenger liner, the “Itanic” was the biggest and most advanced design of its day. Now it’s more like a jewel that’s sunk to the bottom of the sea.

None of this is a reflection on Intel’s engineers. They designed a brilliant and technically advanced device. The newest Itanium chip (code-named Poulson) has a staggering 3.1 billion transistors. It has 50 MB of on-chip memory just in cache. This thing’s so big it beeps when it backs up.

But hardly anybody’s buying it, which tells us that advanced engineering isn’t a guaranteed route to success. What can we, as mere mortal designers and programmers, learn from this?

Take a gander at the chart below (data courtesy of market-research firm IDC). As you can see, Itanium sales (in blue) were supposed to take off faster than a bride’s nightie. Alas, actual sales (in orange) were flat and disappointing.

[Chart: projected Itanium sales, year by year, versus actual sales (data: IDC)]

Sales of Intel’s Itanium processor family have consistently—and spectacularly—failed to live up to expectations. Ten years on, Itanium sales still don’t meet the projections expected for the first six months.

 

Let us take a moment to savor the implications of the leftmost line on this chart. If Itanium sales had followed that projected curve and reached $5 billion back in 2000, $15 billion in 2001 and about $37 billion in 2002, today’s sales would be literally off the chart. These breathlessly optimistic projections suggest some manner of pharmaceutical intervention.

Next year’s curve, shown just to the right of the first line, shows barely a modicum of circumspection. Naturally, it shows sales starting a year later, but the ramp is even more optimistic, reaching $30 billion in just two years.

Year by year, we see the projections gradually begin to dwindle and flatten out. The slope gets less aggressive and the sales figures slump by a factor of five or so. Hope and expectation give way to reality and disappointment, in graph form.

Even after Itanium chips actually did start shipping (orange line) and the market researchers presumably had real data to rely upon, sales projections were still off by an order of magnitude. Hope springs eternal.

What Can We Learn From This?

Learned and carefully researched papers have been written about Itanium, and will continue to be, but we can focus on just a few points that affect us as designers and programmers. With any luck, we can learn from Itanium’s mistakes.

Lesson #1. What problem are you solving? Itanium solved Intel’s problem of how to build a faster chip to compete with the RISC vendors, but it didn’t solve customers’ problems of how to make their PCs run faster. In fact, it did just the opposite. Itanium sacrificed performance on x86 code for performance on (mostly nonexistent) IA-64 code. As a designer or marketing manager, you need to always ask yourself, “What problem am I solving?” If you can’t answer that question, put down your tools and step away from the workbench.

Lesson #2. Better technology doesn’t matter. At least, not all the time—or even very often. Intel and HP were replacing the old x86, the worst CPU architecture in the world. How could they not succeed? The technology, engineering, and design philosophy behind Itanium were all brilliant. But it didn’t matter because customers don’t buy technology. They buy a product, and Itanium wasn’t a product they wanted. Unless you’re a research scientist, technology is a means to an end, not an end in itself.

Lesson #3. Momentum matters. Even though Itanium chips can run x86 code, the early ones didn’t do it very well. The half-fast performance was on purpose, but the plan backfired. Intel didn’t want Itanium’s x86 performance to be too good, or people wouldn’t have any incentive to switch to IA-64 software. But the company underestimated people’s attachment to their old code. Itanium wasn’t an upgrade for them. From an x86 user’s point of view, Itanium was more expensive but slower—obviously a bad “upgrade.”

Lesson #4. Volume trumps technology. Like any brand new product, Itanium started from zero: zero installed base, zero available software, zero programmer experience, zero history. Compare that to x86, which had (and still has) an awesome ecosystem surrounding it. Practically everyone has used x86 chips or software at some point, and there are gobs of tools, support, and talent to go around. It was like night and day: the best-supported (though hardly best-loved) CPU in the world versus the newest and least-known CPU in the world. They both had Intel logos on top, but otherwise were worlds apart.

Lesson #5. Be careful what you improve. From a technical perspective, Itanium was, and still is, a vast improvement over x86. How could it not be? It includes all the latest thinking about CPU architecture; it had the input of the best minds in the business; it had Intel’s awesome financial and marketing resources behind it. Absolutely everything was new and improved. It was a technical tour de force. And yet what people wanted was a faster x86.

Even what we think of as the x86—that is, a 32-bit CISC processor—is an evolution of the earlier 8086, which was, in turn, an evolution of the 8080, which descended from the 8008 and the 4004 before it, and so on. It’s hard to even count the number of times the x86 has been “stretched” beyond its original design. It’s the ultimate Hamburger Helper processor.

Which is entirely the point. Is it any coincidence that one of the oldest CPUs in existence is also one of the most popular, best-known, and most profitable? Longevity and compatibility do really count for something. The brilliant, modern, clean-sheet design of Itanium failed to even make a dent in sales of the wheezing, clattering ironmongery of the x86.

The history of Itanium is a perfect illustration of Clayton Christensen’s observation that technology improves faster than people need it to. Itanium overshot people’s expectations of what a processor should do. So did most RISC processors of the past few decades, which is why they’re not around anymore. Sure, they were all “better” chips from an engineering perspective, but they weren’t better along any axis that the market was measuring.

Then, as now, nobody wanted to throw away their PC every 2–3 years for entirely new machines just because those machines are “better.” We retain our QWERTY keyboards in spite of “better” and more ergonomic options. We cook in iron pots over open flames when “better” options surely exist. Better isn’t always better.

Among the lessons that Itanium can teach us are to distrust our engineering instincts; to view products from our customers’ point of view; and to respect momentum and inertia. We can easily “improve” our products faster than customers want us to, and we can even more easily deceive ourselves into improving them in entirely the wrong ways.

Engineering can be an exciting means of self-expression, but it needs to be leavened with a dose of old-fashioned humanity. Just because we build it doesn’t mean customers will come. 

One thought on “Too Big to Fail”

  1. IBM learned Lesson #2 with System/360, when previous 1401 users did not have source code to recompile; IBM therefore built and sold emulators to run the old machine code:
    Lesson #2. Better technology doesn’t matter. At least, not all the time—or even very often. […]
    Also, this is not the problem:
    “Data dependencies and load/use penalties are just as hard to predict in software as they are in hardware. Will the next instruction use the data from that previous one? Dunno; depends on the value, which isn’t known until runtime. Can the CPU “hoist” the load from memory to save time? Dunno; it depends on where the data is stored. Some things aren’t knowable until runtime, where hardware knows more than even the smartest compiler.”
    The fact is that the next instruction does NOT use the result a significant fraction of the time. And general-purpose code branches to non-sequential locations often enough that branch penalties are also significant. (A small C sketch after this comment illustrates the aliasing case the quoted passage describes.)
    The GPU and heterogeneous accelerators work because many algorithms only need the amount of data that can be streamed to on-chip memory. They do not need to access data that is scattered all over a 64-bit address space. And that data does not have to be shared, so multi-level cache coherency is unnecessary in those cases.
    The whole premise of cache was that workloads like matrix inversion would access data within the same cache line frequently AND that main memory had to hold updated data shared by every user on the system, so cache coherency was also a must.
    And now we have RISC-V, where an instruction has two source registers and a destination register, or a small immediate constant operand in place of one source register for a register-immediate instruction.
    And the whole world is enamored with RISC-V.
    Give me a break!
    But there is an open-source compiler that identifies every variable and constant in the order they are used, which makes it pretty obvious whether a result is used by the next operator. Branches and loop targets likewise identify potential out-of-order execution.
    But RISC-V is going to save the world with an open-source ISA that is based on an assembler, so it does not have to do a couple of compares because the assembler can swap which registers are used.
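
For readers who want to see the disagreement in concrete terms, here is a minimal C sketch (ours, not from the article or the comment; the function names are made up) of the “can the compiler hoist the load?” question in the quoted passage. If two pointers might alias, a compiler cannot safely move or reuse a load across the store, however clever it is; the truth is only known at runtime, which is the hardware’s advantage. Whether such dependences actually occur often enough to matter is exactly what the commenter disputes.

/* A store through dst followed by a load through src: if dst and src can
 * alias, the compiler must assume the store may have changed what src[i]
 * holds, so it cannot hoist or reuse the load. Only runtime knows the truth. */
int scale_and_sum(int *dst, int *src, int n)
{
    int total = 0;
    for (int i = 0; i < n; i++) {
        dst[i] = src[i] * 2;   /* store through dst                          */
        total += src[i];       /* this reload of src[i] cannot be removed or */
                               /* moved earlier unless dst != src is proven  */
    }
    return total;
}

/* With C99 "restrict" the programmer promises there is no aliasing, the
 * dependence vanishes, and a static scheduler (EPIC or otherwise) is free
 * to load data early and pack independent operations together. */
int scale_and_sum_restrict(int *restrict dst, int *restrict src, int n)
{
    int total = 0;
    for (int i = 0; i < n; i++) {
        dst[i] = src[i] * 2;
        total += src[i];
    }
    return total;
}
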
