
Sinking the Itanic

Why is Intel’s Fastest Processor Also Its Biggest Failure?

“This is the way the world ends / Not with a bang but a whimper.” Those last lines from T. S. Eliot’s The Hollow Men were written about war, but could just as easily apply to one of the biggest failures in electronics engineering.

Actually, I shouldn’t say the project failed. The engineering part was pretty successful. It was a commercial failure. “The operation was a success, although the patient died,” goes the old surgeon’s joke. And so it is with Itanium.

Fans of heavy metal will know Itanium as Intel’s biggest, loudest, and most electrifying microprocessor. But like Dokken’s vinyl LPs, it’s not selling.

The whimper came a few weeks ago, when Intel posted a minor two-sentence update on its Web site. In it, the company quietly says it’s “updating the definition” of the next-generation Itanium chip, codenamed Kittson. The “update” is that Kittson won’t be manufactured in 22nm silicon as originally planned; it will be a 32nm chip. While that 10nm difference may not seem like much, it’s a huge change in Itanium-land, where Kittson was supposed to be a faster upgrade from the current chips, mostly by virtue of that 22nm silicon. Now that Kittson will be made using the same process as Poulson (the current chip), there’s not much reason to upgrade. In large part, the whole point of Kittson was to move Poulson to a 22nm production line. Without that, what’s the point?

It gets worse. Intel’s second sentence says that Itanium will “converge” on the same socket and motherboard design as the company’s x86-based Xeon processors. What’s this? Itanium and Xeon to use the same socket? The only reason Intel (or anyone else) would do that would be to make it easier to switch from one chip to the other. And you get no prizes for guessing the winner and the loser in that swap. 

So, in a nutshell, Intel is quietly notifying the world that its top-of-the-line product won’t be getting much faster next year, and that it’ll start sharing a socket with its older sibling. Sounds like Mom & Dad have moved a spare cot into Junior’s room and started painting the second bedroom. So long, Itanium, it was nice raising you.

What a huge letdown for the engineers working on Itanium. Although it’s not as if they didn’t see it coming. Itanium was never the hot seller Intel wanted it to be, and more than a decade of improvements never made it any more attractive.

Itanium was years in development, and it engaged the best minds in the engineering business. It was a joint venture between Hewlett-Packard and Intel, and both companies put some of their best people on it. For many of those developers, Itanium was going to be the highlight of their career. Imagine being asked to join Intel’s next-generation microprocessor team back in 1994. The follow-on to the hugely successful Pentium franchise! The next ultra-RISC processor for HP workstations and big servers! That must have sounded like a dream job.

Okay, so there were some delays in the project. What major breakthrough development effort doesn’t have the occasional schedule slip? This is difficult stuff. We’re redefining the computer world here. Itanium was a few months late, then a few years late. Meanwhile, old-school x86 processors got faster and faster. When the first Itanium chips finally hit the street, Intel and HP were already apologizing for them, and that’s never a good sign. It’s like calling your own baby ugly. Somehow they knew. 

It wasn’t supposed to be this way. Itanium was based on the latest thinking in computer science. It was beyond RISC; it was EPIC (explicitly parallel instruction computing). Hewlett-Packard was no slouch at computer design, after all, and Intel knew a thing or two about making fast chips. And the whole world knew—just knew—that x86 was heading down the tubes. No way could Intel and AMD (and the various surviving clone makers) keep the wheezing old x86 on life support much longer. Itanium was the ultimate hammer blow that would establish Intel’s dominance atop the microprocessor heap and HP’s atop the market for big iron. They couldn’t lose.
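For readers who never met EPIC: the core bet was that the compiler, not the hardware, would find independent instructions and pack them into bundles that the chip could issue together in one cycle. Here’s a toy sketch of that idea in Python, using an invented three-wide mini-ISA (not real IA-64 encoding) purely for illustration:

```python
# Toy illustration of the EPIC concept: the *compiler* groups independent
# instructions into bundles; hardware issues each bundle in one cycle with
# no runtime scheduling. Hypothetical mini-ISA, not real IA-64.

# An instruction is (dest, op, src1, src2); a program is a list of bundles.
program = [
    # bundle 1: three independent adds -> the compiler packed them together
    [("a", "+", 1, 2), ("b", "+", 3, 4), ("c", "+", 5, 6)],
    # later bundles depend on earlier results, so they issue alone
    [("d", "+", "a", "b")],
    [("e", "+", "d", "c")],
]

def run(bundles):
    regs, cycles = {}, 0
    for bundle in bundles:
        # every instruction in a bundle reads *pre-bundle* register values,
        # modeling parallel issue within a single cycle
        snapshot = dict(regs)
        for dest, _op, s1, s2 in bundle:
            v1 = snapshot[s1] if isinstance(s1, str) else s1
            v2 = snapshot[s2] if isinstance(s2, str) else s2
            regs[dest] = v1 + v2  # only "+" exists in this toy ISA
        cycles += 1
    return regs, cycles

regs, cycles = run(program)
print(cycles, regs["e"])  # 3 cycles for 5 instructions; e = 21
```

Five instructions retire in three cycles instead of five, and the hardware never had to discover the parallelism itself. That was the pitch: simpler, wider chips, with the hard scheduling work done once, at compile time.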

Except they did. The first Itanium chips were late (which was no big surprise) and not very fast (which was a big surprise). “Just wait for Itanium 2,” Intel said. So everyone did. And it still wasn’t what people wanted. Because it was too new.

Itanium was designed to run all-new software, written to take advantage of its radical new EPIC architecture. But what people actually had was old x86 code, dragged through generations of Pentium upgrades. Taking advantage of Itanium meant porting all that software, and that’s too much work for a 10% performance boost. If Itanium had been twice as fast as contemporary x86 chips, or even 50% faster, developers might have taken the bait. But redesigning around an all-new hardware platform and writing or porting all-new code required a leap of faith in Itanium that few who weren’t on the payroll were willing to make. HP dutifully produced Itanium-based systems (and still does), but outside of HP’s sphere of influence, Itanium never made a dent. The world’s most ambitious and expensive chip was a miserable failure. All because of ratty old x86 code.
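The porting economics above are easy to make concrete. A rough back-of-the-envelope sketch (all the numbers here are hypothetical, chosen only to illustrate the shape of the tradeoff):

```python
# Hypothetical break-even sketch for the porting argument.
# Illustrative numbers only: a fixed porting cost in engineer-hours,
# repaid by the runtime hours the faster chip saves each day.

def payback_days(porting_hours, daily_runtime_hours, speedup):
    """Days until time saved by the faster chip repays the porting effort."""
    saved_per_day = daily_runtime_hours * (1 - 1 / speedup)
    return porting_hours / saved_per_day

# Assume 2,000 hours of porting work and a workload running 10 hours a day:
print(round(payback_days(2000, 10, 1.10)))  # 10% faster: ~2200 days (6 years)
print(round(payback_days(2000, 10, 2.00)))  # 2x faster: 400 days
```

At a 10% gain the port pays for itself sometime next decade; at 2x it pays off in about a year. That gap is the whole story of why developers shrugged.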

Now Intel is tacitly admitting defeat by suggesting that future Itania will have x86-style sockets. Build your Itanium system today and we’ll have a nice reliable Xeon chip for it tomorrow. Without that crossover option, Itanium-based systems would probably die off even faster than they already are. At least this way Intel has a chance of preserving some of its customer base.

Microprocessor design is a funny thing. You can throw all the talent and transistors at it that you want, but it still comes down to the invisible, intangible code that people want to run on it. It’s like telling jokes, writing books, or publishing an engineering journal: all the effort is wasted if no one speaks your language. Itanium wound up being the Vogon poetry of microprocessors – the third-worst in the universe. 

9 thoughts on “Sinking the Itanic”

  1. In the ’80s I became a master of the “micro-mainframe,” the Intel 432. A beautiful fifth-generation, object-oriented architecture. It never sold, indeed…

  2. Cool because I have a copy of Intel’s publication “Programming the iAPX-432”. May as well put it on the fiction side of my book shelf 🙂

    Getting back to the Itanium article, the only way Itanium will take off now is if the price is substantially lowered. Rumor has it that HP already pays Intel many millions each year to keep Itanium alive, and I am sure HP wants to recover those costs, so price reductions will probably not happen. I hope the HP folks in India are busy porting OpenVMS to x86-64, because no one wants to run an enterprise OS on an emulator.


