Differentiation versus Diversity

Tilera’s Acquisition Means One Less CPU Company

There once was a time when every company had its own unique CPU architecture. Then there was a time when pretty much everyone used the same CPU architecture. Guess which era we’re living in now.

Actually, we’ve experienced both of those extremes multiple times. We have the makings of an industry cycle here. Really early computer companies (Burroughs, National Cash Register, International Business Machines, Data General, Digital Equipment Corporation, etc.) each invented and supported their own proprietary computer architectures. Each processor was implemented in discrete logic and occupied an entire printed-circuit board. Probably several boards, in fact. Software had no commonality at all. IBM machines couldn’t run any DEC software, which didn’t understand NCR code, which was incompatible with DG equipment, and so on.

Much later, we had homogeneous machines based on de facto standards: Think IBM PC and the x86 processor family. PCs were – and pretty much still are – interchangeable. Every PC runs the same software as every other PC.

We had almost the same thing with engineering workstations in the 1980s. Sun Microsystems made a point of using standard, nonproprietary, commercial devices. Where Daisy, Xerox, Mentor and others used proprietary hardware and software, Sun built boxes around Motorola’s 68K microprocessors and standard Ethernet interfaces. It wasn’t quite a monoculture, but it was close.  

Then we went through the RISC boom, with lots of choices. That was followed by the inevitable bust: fewer choices. Graphics processors (GPUs) went nuts in the 1990s. Now we have just Nvidia, ATI (now AMD), and some Intel.

That wave was followed by a raft of gonzo network processors, most of which are no longer with us. Qualcomm, Broadcom, Marvell, and a few others rose to prominence; most of the others slipped under the surface. Processor innovation comes in waves, and waves have a habit of scrubbing the beaches clean.

The receding tide of processor diversity recently swept out Tilera, one of the salty barnacles clinging tightly to the networking pier. (End of tortured metaphor.) Tilera was interesting in part because of its massive CPU core count. The company’s TILE-Gx chips currently boast up to 72 identical processor cores, with more promised. Each core is a full-on 64-bit processor capable of running Linux on its own, and groups of neighboring cores can run a multiprocessor operating system together. Of course, the core-to-core interconnect fabric and the shared caching structure were just as important, and just as complex. In all, Tilera pulled off an impressive engineering feat.

But the company is due to be acquired by EZchip, where its CPU architecture will be absorbed into future EZchip parts. And although the acquisition price is somewhere in the high eight to low nine figures, it’s not clear that that’s a win. That amount barely covers the startup cash that the company raised during its growth phase. In other words, the investors will get their money back (maybe), but no more. In finance speak, there’s no multiple. The company is worth only what was put into it, ten years of effort notwithstanding.

What was Tilera’s problem – if indeed, there’s a problem at all? After all, getting acquired by a major player is generally considered a pretty good exit strategy, and it’s hard to look askance at a check with that many zeros on it. But it feels a bit hollow to me, as if someone had bought the furniture and fixtures but left the computers behind.

Tilera’s engineering was remarkable, and its performance looked impressive, too. It was one of only a handful of massively parallel processors that actually made it into the market, with real people using them in real products. So we have an existence proof of the concept. But as with so many innovative processors, it was too ambitious. It was too difficult to program, too difficult to model, and too different from what developers were used to. Yeah, you could get the chip to perform miracles, but you really had to want it.

That’s not a comfortable position for most programmers, nor for their bosses. It’s generally safer to use a “normal” chip based on ARM or MIPS or Power and tweak your software to provide some differentiation from all the other ARM-, MIPS-, and Power-based products. Those sorts of projects are well understood and (comparatively) easily managed. Launching a product based on an entirely new and massively parallel CPU architecture? That has “high risk” written all over it.

Moving forward, my suspicion is that EZchip will encapsulate Tilera’s technology in such a way that the scariness disappears. The on-chip mesh network is easily concealed; the processors, less so. They’re more likely to become anonymous “accelerators” that aren’t directly visible to the programmer or developer. EZchip will likely develop its own in-house firmware layer to screen the CPUs from curious eyes while downplaying their provenance and architecture. A firmware interposer also allows the company to tinker with the CPU architecture without changing the interface that programmers see. Freescale has done a similar thing with its Power-to-ARM transition, adding a level of indirection that abstracts the processor.
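To make that concrete, here is a minimal sketch in C of the interposer pattern described above. Every name in it (accel_job, accel_submit, JOB_CHECKSUM) is hypothetical, invented for illustration; it shows the general idea of hiding the cores behind a stable job-submission interface, not any actual EZchip or Freescale API.

    /* Hypothetical firmware-interposer pattern: applications submit opaque
     * "jobs" through a stable API; which cores execute them, and how the
     * on-chip mesh routes them, stays hidden behind accel_submit(). */
    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    enum { JOB_CHECKSUM = 1 };          /* illustrative job type */

    typedef struct {
        uint32_t       opcode;          /* what to do */
        const uint8_t *input;           /* caller's buffer */
        size_t         len;
        uint32_t       result;
    } accel_job;

    /* "Firmware" side: this could run on Tile cores, a DSP, or anything
     * else; swapping the CPU architecture here changes no application code. */
    static void backend_checksum(accel_job *job)
    {
        uint32_t sum = 0;
        for (size_t i = 0; i < job->len; i++)
            sum += job->input[i];
        job->result = sum;
    }

    /* The stable interface the programmer sees. */
    static int accel_submit(accel_job *job)
    {
        switch (job->opcode) {
        case JOB_CHECKSUM: backend_checksum(job); return 0;
        default:           return -1;   /* unknown job type */
        }
    }

    int main(void)
    {
        const uint8_t pkt[] = "example packet payload";
        accel_job job = { JOB_CHECKSUM, pkt, sizeof pkt - 1, 0 };

        if (accel_submit(&job) == 0)
            printf("checksum = 0x%08x\n", (unsigned)job.result);
        return 0;
    }

The point of the indirection is in that “firmware side” comment: the backend can be reimplemented on a different architecture while accel_submit() stays fixed, which is exactly the freedom a firmware interposer buys.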

So although Tilera’s parallel processor architecture will live on, it will operate behind a mask, like Japanese Noh actors. The industry will have gained some differentiation, but lost some diversity. 

One thought on “Differentiation versus Diversity”

  1. Hi Jim, nice to see your very positive comments about Tilera’s architecture and performance achievements. And we’ve got over 100 designs at companies like Cisco, Brocade, ZTE, and Check Point that agree with that. But it’s worth correcting a couple of statements:
    First, the Tilera processors are not at all hard to program… in fact, ease of programming is one of their strongest selling points. The programming model is completely aligned with programming any multi-threaded, multicore processor with coherent memory and running Linux. Consider that an Intel Ivy Bridge processor can have up to 15 cores and 30 threads, with perhaps 60 threads in a dual-socket system, so the modern programmer already has to master programming for parallel execution (a generic sketch of that model follows this comment). And the Tile programming tools are completely mainstream: C/C++, Java, gcc, Eclipse, gdb, etc. One of our Cisco customers stated that Tilera had the best multicore programming software tools he had ever used.

    And as for the future, the synergy between EZchip and Tilera is tremendous and Tilera’s architecture is not going away at all. The current TILE-Gx family continues to attract new design wins, and the new processors on our roadmap will be leveraging the best of the technology that each company brought to the transaction. Rather than the ‘least common denominator’, I think you’ll see that our future processors are superior to what either company would have produced independently. Stay tuned… our customers are very excited about the direction we’re going.
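As the comment says, the programming model is ordinary shared-memory threading on Linux, and it looks the same whether the target has 4 cores or 72. Below is a minimal sketch of that generic model in plain C with POSIX threads; nothing in it is Tilera-specific, and the thread count and workload are purely illustrative.

    /* Generic shared-memory multicore code: compiles with gcc -pthread and
     * runs unchanged on any coherent-memory multicore Linux system. */
    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 8      /* illustrative; often sysconf(_SC_NPROCESSORS_ONLN) */
    #define WORK     1000000L

    static long partial[NTHREADS];      /* one slot per thread, no sharing */

    static void *worker(void *arg)
    {
        long id = (long)arg, sum = 0;
        for (long i = id; i < WORK; i += NTHREADS)  /* each thread takes a slice */
            sum += i;
        partial[id] = sum;
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[NTHREADS];
        long total = 0;

        for (long i = 0; i < NTHREADS; i++)
            pthread_create(&tid[i], NULL, worker, (void *)i);
        for (long i = 0; i < NTHREADS; i++) {
            pthread_join(tid[i], NULL);
            total += partial[i];
        }
        printf("sum = %ld\n", total);   /* same result at any core count */
        return 0;
    }

Nothing about this code changes as the core count scales up; that portability is the substance of the comment’s claim.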
