
News Flash: ARM Still Designing CPUs

Company’s Processor Roadmap is, uh, Vague

“I looked up my family tree and found out I was the sap.” – Rodney Dangerfield

The tease looked promising enough. “ARM will be making an announcement… that will redefine the user experience… while maintaining industry-leading efficiency…” Hey, that sounds like a big deal! Sign me up.

Being the savvy and well-connected journalist that I am, I ducked out of the standard press conference for mainstream journos who can barely spell CPU, and instead finagled a private one-on-one with the product manager. ARM was happy to agree, and the next day I got my audience. “So, what’s the big news?”

“We’ve got a roadmap. Look, it goes up and to the right.”

And… that’s all. I’ve even reproduced it here, so you don’t miss any important technical details. Feel free to start work on your next designs.

If you look closely, you’ll see the two new processors lurking near the bottom-right corner, codenamed Deimos and Hercules. That’s it. That’s the whole announcement. The next two processors have officially been given codenames. Oh, and they’ll apparently be faster than their predecessors. Imagine! A new CPU that’s faster than the old one. These guys are brilliant.

Actually, they are brilliant. I don’t mean to disparage ARM’s engineering work in any way. Nor the company’s business acumen. You don’t get to be the world’s most popular CPU vendor by accident. ARM is on top for a reason. But with that success comes… a bit of attitude, perhaps. I get the sense that ARM has learned that they can yank the leash and the press will come to heel. If so, it’s a tactic that doesn’t do the company any favors.

If you’re a long-time ARM Kremlinologist, you’ll have noticed that the firm doesn’t usually release a product roadmap at all. Unlike Intel, MIPS, or almost any other processor vendor, ARM doesn’t really make a habit of charting out its next moves. This was the first time it had done so, and the company was making a big deal of the change in policy. But the very fact that most observers didn’t know (or care) that this was a break from tradition just underlines how trivial it was. It’s like revealing a bit of ankle when everyone else is parading around in swimsuits.

Moreover, the “roadmap” didn’t reveal anything apart from the two codenames. No core counts, no architectural changes, no speeds, no wildly speculative power estimates, no ISA updates… no nothing. Just a PowerPoint slide with a hand-drawn curve. Compare that to the plethora of well-understood CPU codenames that populate Intel’s roadmap and this looks more like a torn-off piece of pirate map with an X drawn in the corner.

A closer look reveals the chart to be even vaguer. “Graph not to scale,” it says, which makes the whole vertical performance axis meaningless. Helpfully, the fine print does say that performance represents single-core performance based on SPECint2006. But since the ARM figures are estimated, and the axis isn’t labeled, and it’s not to scale anyway… what are we to learn from this?

Like all good PowerPoint presentations, there’s a knee on the curve (I think that’s a built-in PowerPoint macro now; hit F3 and it inserts an inflection point), and the sharp uptick comes right about now. Lucky us! We’re living at exactly the right time to see these new processors “redefine the user experience!” Imagine my surprise.

“Why the sudden performance gain?” I asked. “What’s about to change in ARM’s product family that you’ve abruptly altered your own historic performance trajectory?”

The answer, as nearly as I could interpret it, was that the design teams for Deimos and Hercules have been allowed to turn the dial toward the performance end of the scale and away from power efficiency. In other words, these will be truly high-performance processors without many of the compromises that have been an ARM hallmark from the beginning.

The changes will come from many quarters, including different cache design, more memory bandwidth, higher clock frequencies, and smaller process geometries. The ARM people were careful to use phrases like “laptop performance” and “laptop form factor” throughout our discussion, so these two cores are clearly not aimed at smartphones. (Presumably, there’s a different – and still secret – roadmap for that branch of the product tree.) So, Deimos and Hercules will leverage future 7nm and 5nm process nodes to squeeze more silicon and wider buses into their designs, while relying on those processes’ naturally lower power consumption to keep energy in line. ARM will design for performance; it’s TSMC’s problem to minimize power.
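
If you want to play along at home, here’s a trivial back-of-the-envelope sketch of that trade-off, using the usual dynamic-power rule of thumb (P ≈ C·V²·f). The node labels, capacitance, voltage, and frequency numbers are my own illustrative assumptions, not anything ARM or TSMC has published.

```python
# Back-of-the-envelope sketch of the "let TSMC minimize power" argument.
# Dynamic CPU power scales roughly as P = C * V^2 * f. Every scaling factor
# below is an illustrative assumption, not an ARM or TSMC figure.

def dynamic_power(capacitance, voltage, frequency_ghz):
    """Relative dynamic power, P = C * V^2 * f (arbitrary units)."""
    return capacitance * voltage ** 2 * frequency_ghz

# Hypothetical baseline core on a 10nm-class process.
baseline = dynamic_power(capacitance=1.00, voltage=0.80, frequency_ghz=2.8)

# Hypothetical 7nm-class shrink: assume ~25% less switched capacitance and a
# slightly lower supply voltage, then spend some of the savings on clock speed.
shrunk = dynamic_power(capacitance=0.75, voltage=0.75, frequency_ghz=3.3)

print(f"10nm-class baseline : {baseline:.2f}")
print(f"7nm-class, +500 MHz : {shrunk:.2f} ({shrunk / baseline - 1:+.0%})")
```

The point being: if the foundry delivers the lower capacitance and voltage, the architects can turn the frequency dial up and still come out ahead on power, which is more or less the bet ARM is describing.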

Looking back at the chart, it’s hard not to notice that the yellow line for Intel is, shall we say, less vigorous than ARM’s. ARM evidently believes it will overtake Intel and claim the performance lead around 2019. That’s next year! We really are living in interesting times!

Yeah, about that.

First of all, the horizontal axis – the one that actually is labeled – is misleading. When the chart places Deimos in 2019, that’s the date of first IP release. Same goes for Hercules in 2020. It takes another year or two before those IP releases get turned into silicon, and another year or more before they’re in any consumer products. The Intel dates, by contrast, are solid ship dates. Intel’s Core i5-7300U – the newest and fastest part on ARM’s graph – has been shipping since 1Q 2017, more than 18 months ago, while there are still no consumer products based on Cortex-A75, which is shown immediately below it in the same timeframe. Kaby Lake is shown in 14nm silicon, versus ARM’s assumption of 10nm for Cortex-A75, 7nm for Deimos, and 5nm for Hercules. They’re comparing apples and orangutans. Shift those lines by about two years and you’ll be closer to the truth.
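
And just so you can see how far the goalposts move, here’s a toy sketch of that two-year shift. The lag from IP release to consumer products is my own rough estimate, not a number from ARM or its licensees.

```python
# Toy sketch of the timeline complaint: ARM's chart plots IP-release dates,
# while the Intel points are actual ship dates. The roughly two-year lag from
# IP release to consumer silicon is an assumption, not a figure from ARM.

IP_TO_PRODUCT_LAG_YEARS = 2  # assumed, for illustration only

arm_ip_releases = {
    "Deimos": 2019,    # date of first IP release, per the roadmap
    "Hercules": 2020,
}

for core, ip_year in arm_ip_releases.items():
    product_year = ip_year + IP_TO_PRODUCT_LAG_YEARS
    print(f"{core}: IP release {ip_year} -> consumer products ~{product_year}")
```

Suddenly “overtaking Intel in 2019” reads more like “overtaking Intel’s 2019 parts sometime around 2021,” assuming Intel politely stands still in the meantime.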

In ARM’s defense, they don’t make chips, so they can’t predict silicon ship dates (and their licensees get cranky when they try). But comparing the date you ink an IP license agreement against the date you ship working silicon seems a bit disingenuous.

Learning there’s a new ARM processor in the works is like hearing there’s going to be another Pirates of the Caribbean movie: a mixture of excitement and trepidation, mixed with grim inevitability. You always knew it was coming, you just didn’t know when. Both will make a lot of money, and both will look just like the ones that came before. Yo-ho-ho.

4 thoughts on “News Flash: ARM Still Designing CPUs”

  1. I think these guys are more clued-in than ARM –

    http://etacompute.com/news/press-releases/eta-compute-launches-industrys-first-neuromorphic-platform-ultra-low-power-machine-intelligence-edge/

    The process technology (apart from being stalled) won’t let you do faster clocks, and you can’t make the caches bigger without needing more cycles to get to them – those numbers have been flat for ages.

    Intel get more speed by dedicating more transistors (and power) to things like branch prediction and speculative execution to keep the caches filled and ready – which got them into trouble with Spectre/Meltdown.

    Bottom line: the best ARM can do is match Intel on speed, if they burn as much power, unless they go the asynchronous route ETA have taken. If they want 2x+ performance over Intel, they’ll need to do something radical, and given they were the last of the major CPUs to get to 64-bit, that seems unlikely.

    https://simonkidd.wordpress.com/2010/08/12/if-i-were-you-i-wouldnt-start-from-here/

  2. Thank you, Jim, for sharing your views. It is brief and incisive.
    As you rightly stated, Arm is in the business of IP licensing to SoC design companies. The timelines, performance, functionality, and target applications will be heavily shaped by Arm’s customers. Maybe Arm should have explained the flexibility and scale in terms of ranges along various dimensions, i.e., core count, frequencies, possible interface speeds, special-purpose accelerators, etc., if it did not already do so in expanded versions of data sheets/brochures. Just two cents.

  3. Seems like “If you cannot dazzle them with your brilliance, then baffle them with your marketing hype b.s.”
    It is past time to stop doing the S.O.S. and expecting different results.
    It simply is not sensible to keep designing computers at the gate level.
    FPGAs, and the micro-coded control that IBM used for decades, demonstrate that memories can be used for more than data and program storage.
    Use embedded memory blocks for control, literal and variable storage, execution stack, and for local memory.
    Three true dual-port memories and a few hundred (no, not millions or billions) LUTs can execute C statements and evaluate expression statements. (I can show how.)

