“I didn’t attend the funeral, but I sent a nice letter saying I approved of it.” – Mark Twain
Friends, engineers, countrymen, lend me your ears. I come not to bury Itanium, but to praise it.
In case you missed the news – and it was easy to miss – Intel quietly pulled the sheet over the still-warm corpse of Itanium, the company’s fabulously expensive yet spectacularly unpopular microprocessor family. In its place, Intel will encourage everyone to keep buying regular ol’ x86 chips.
The notice crept in on little cat feet, mentioned almost in passing by Intel’s Vice President and General Manager of Data Center Group and IT Transformation, Lisa M. Davis. Apart from the modifier “final” in a sentence announcing the release of the new 9700 series of chips, there’s little indication from her or from Intel that this is the end of the line for its high-end processor family.
Itanium may not be dead yet, but those who hold it dear have been asked to assemble. It’s not as though the end was unexpected, however. Itanium has been on life support almost since it was born. Rarely has a project with such high expectations fallen so short.
It all seemed like such a good idea at the time.
Back in the late 1990s, RISC and VLIW were all the rage, and x86 chips from Intel, AMD, and others seemed woefully underpowered for workstations and servers. Besides, x86 was old. Its lineage traced all the way back to the 4-bit 4004 processor of 1971. Nobody seriously thought the old battle-axe could be stretched for yet another generation, this time with 64-bit features. No, it was time for x86 to retire and be replaced by an entirely new processor family that incorporated all the latest thinking in compiler technology, reduced instruction-set computing, and very long instruction words. All the cool kids in academia were doing it; now it was time for Intel to step in and produce a serious commercial version.
Intel even partnered with Hewlett-Packard (now HP Enterprise). Between the semiconductor company and the systems company, they had the best technical and commercial minds in the business. What could possibly go wrong?
The first Itanium processor (codenamed Merced) was years behind schedule, ran over budget, needed massive amounts of cooling, and wasn’t even very fast. But none of those flaws needed to be fatal. After all, how many ambitious engineering projects come in on schedule and on budget? Itanium was expected to change the course of computing history, if not the world. This was going to be the processor architecture for generations to come, eventually spanning everything from embedded systems to high-end servers and minicomputers. What’s a little delay when you’ve got that kind of responsibility on your shoulders?
What all that investment and all that development overlooked was a simple property of physics: inertia. Newton’s First Law of Motion applies to apps as well as asteroids, to programs as well as planets. Anything in motion tends to stay in motion along its current trajectory, and it takes a lot of energy to deflect it. More energy, it turned out, than an entire industry could muster.
You see, the whole attraction of x86 is its backward compatibility. It’s precisely because of its advanced age, not despite it, that x86 is enduringly popular. It runs yesterday’s code without recompiling. Itanium, with all its new thinking and cutting-edge features, wasn’t backward compatible with x86 (not initially), so everything had to be recompiled. No problem, right? Just fire up the newfangled EPIC compiler and run your C source through it. How hard can it be? Surely that trivial bit of extra effort is worth a massive performance upgrade and some future-proofing?
Nope.
Lots of code running on “big iron” is old, and it’s mission-critical. It can’t be allowed to break. Tampering with important, running, complex code is a big no-no, never mind what the ministers of architectural purity say. Porting and/or recompiling every operating system, every driver, every database, and every application just did not appeal – at all.
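To be fair, clean, portable C really would have recompiled with little fuss; the trouble was everything else. A contrived sketch, purely for illustration, of the kind of thing that lurked in those old drivers and databases:

```c
#include <stdint.h>

/* Portable C like this recompiles for any target, Itanium included. */
uint64_t checksum(const uint8_t *buf, uint64_t len) {
    uint64_t sum = 0;
    for (uint64_t i = 0; i < len; i++)
        sum += buf[i];
    return sum;
}

/* But code like this -- a timing hack common in drivers and databases
 * of the era -- is welded to x86. The inline assembly reads the x86
 * timestamp counter and simply will not build for Itanium. */
uint64_t read_cycle_counter(void) {
    uint32_t lo, hi;
    __asm__ volatile ("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}
```

Multiply that by a few million lines of mission-critical code, and “just recompile” stops sounding trivial.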
Still, the Itanium juggernaut scared the industry pretty badly. It seemed like such a slam dunk that computer companies like Sun and DEC started designing Itanium-based systems, sight unseen. Competing processor companies scaled back or canceled their plans, assuming that Itanium would soon eat their lunch.
One frightened competitor was AMD, which had been doing a commendable job of keeping pace with Intel’s 386, 486, and Pentium chips. But Intel was taking its ball and going home, and threatening to tear up the playing field as well. If the world switched from x86 to Itanium, what hope would AMD have for survival?
Itanium backed AMD into a corner. The company couldn’t copy Intel’s new architecture the way it had copied x86, so it did the only thing it could: it engineered a 64-bit version of x86. Ironically, this proved to be the better strategy, and it handed AMD a rare opportunity to beat Intel at its own game. It was a desperation play, but it worked, probably better than AMD itself expected. To this day, many operating systems identify 64-bit x86 processors as “amd64,” not “Intel” or even “x86-64.” AMD accomplished what Intel couldn’t, or wouldn’t, because of its focus on Itanium.
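You can still see AMD’s fingerprints from any C compiler today. A minimal sketch (the macro names are long-standing compiler conventions, not anything Itanium-era Intel defined):

```c
#include <stdio.h>

int main(void) {
/* GCC and Clang define __x86_64__ for 64-bit x86 builds; MSVC defines
 * _M_X64. Either way, the architecture being detected is AMD's 64-bit
 * extension of x86. */
#if defined(__x86_64__) || defined(_M_X64)
    puts("64-bit x86: the architecture many OSes still label amd64");
#else
    puts("not a 64-bit x86 build");
#endif
    return 0;
}
```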
The first Itanium chips shipped in 2001 amid high expectations but low sales. Performance was disappointing, there was little software for it – surprise! – and it wasn’t obviously better than the SPARC, MIPS, PowerPC, or Alpha chips it was intended to leapfrog. Embarrassingly, it ran existing x86 code slower than contemporary x86 processors did, and those were simpler, cheaper, and less power-hungry. Yet the lack of native Itanium software meant that many customers were forced to do just that: run x86 binaries on their shiny new (and expensive) Itanium systems, tarnishing the platform’s early reputation. What was the goal of this project, again?
By 2008, just seven years after Itanium’s debut, the program was already in deep trouble. HP was secretly making reverse ransom payments to persuade Intel to keep Itanium on the development roadmap. A princely $440 million (that’s $88 million per year, or roughly $241,000 per day) changed hands over five years in an effort to keep Itanium alive through 2014. The two companies then re-upped their Faustian bargain for an additional $250 million, extending Itanium’s life support through 2017 – i.e., until this year. Hence the subdued Intel announcement that the new crop of Itanium chips will also be the last.
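For the skeptical, the per-day figure is easy to verify; a throwaway sketch using the dollar amounts reported above:

```c
#include <stdio.h>

int main(void) {
    double total    = 440e6;             /* first HP-to-Intel payment, USD */
    double years    = 5.0;               /* term of that first deal */
    double per_year = total / years;     /* $88 million per year */
    double per_day  = per_year / 365.0;  /* roughly $241,000 per day */
    printf("$%.0f per year, $%.0f per day\n", per_year, per_day);
    return 0;
}
```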
The new, and final, 9700 series of Itanium chips is socket-compatible with its predecessors, which is probably a very smart strategy. The few remaining Itanium computer makers (read: HPE) wouldn’t want to design a new system around a walking-dead processor. Better to plug the new chips into the existing sockets and breathe a few more years of life into the installed boxes.
Are there lessons to be learned from the rise and expensive fall of Itanium? Sure, but they’re not always obvious. In her statement, Intel VP Davis praises the supposed virtues of Itanium, saying, “Customers finally had an alternative to being locked-in to expensive mainframe and RISC systems…” This statement seems supremely ironic, if not downright delusional, since Itanium has always been a proprietary, single-sourced processor family and the RISC chips it was created to combat are, for the most part, open and licensed architectures. MIPS, PowerPC, and SPARC may not have set the server market on fire, but they’re certainly more successful than Intel’s in-house Itanium ever was.
There’s also the lesson of persistence. Few thought the creaking x86 architecture could be propped up for another decade, much less become a performance leader. But AMD’s success in creating the 64-bit Opteron, followed by Intel’s humbling adoption of AMD’s extensions, shows that you can make almost anything work if you try hard enough. Like stories of mothers lifting cars off their trapped children, companies can be endowed with superhuman strength when their survival is at stake. The x86 architecture is still ancient, still obscenely complex, and still hugely inefficient, but nobody can argue that it isn’t successful, fast, and fully 64-bit. It still dominates the computing world, despite the best efforts of basically everyone, everywhere. Even its own creators can’t kill it off.
Then there’s the lesson of inertia. Itanium, and almost all of its contemporary competitors, underestimated the role of software compatibility and customer inertia. An entire generation of computer science and engineering was wasted on RISC and VLIW machines that went nowhere, all because they sacrificed compatibility in the name of elegance, architectural hygiene, or magical compiler thinking. Most of those CPU architectures are still with us, but they’re powering thermostats and toys now, not supercomputers.
Finally, there’s the lesson detailed in Clayton Christensen’s The Innovator’s Dilemma: watch your rearview mirror, because your biggest competitor is probably sneaking up from behind. Itanium wasn’t really competing with the other big processors from IBM, DEC, and Sun. It was competing with smaller, cheaper, plentiful processors from two or three rungs down the product ladder. One after another, computer makers figured out that two, four, or eight smaller processors could do the work of one big one, for a lot less money, all while running their existing software and infrastructure. Itanium overshot the market, just as MIPS and Alpha had done. Comparatively cheap and abundant x86 chips could do the job just as well, and they were a lot easier to deal with. That same thinking may someday lead to ARM-based server clusters, but that has yet to play out.
Itanium, we hardly knew ye. But that’s because ye hardly knew us, or what we really wanted.