There once was a time when every company had its own unique CPU architecture. Then there was a time when pretty much everyone used the same CPU architecture. Guess which era we’re living in now.
Actually, we’ve experienced both of those extremes multiple times. We have the makings of an industry cycle here. Really early computer companies (Burroughs, National Cash Register, Amdahl, International Business Machines, Data General, Digital Equipment Corporation, etc.) each invented and supported their own proprietary computer architectures. Each processor was implemented in discrete logic and occupied an entire printed-circuit board. Probably several boards, in fact. Software had no commonality at all. IBM machines couldn’t run any DEC software, which didn’t understand NCR code, which was incompatible with DG equipment, and so on.
Much later, we had homogeneous machines based on de facto standards: Think IBM PC and the x86 processor family. PCs were – and pretty much still are – interchangeable. Every PC runs the same software as every other PC.
We had almost the same thing with engineering workstations in the 1980s. Sun Microsystems made a point of using standard, nonproprietary, commercial devices. Where Daisy, Xerox, Mentor and others used proprietary hardware and software, Sun built boxes around Motorola’s 68K microprocessors and standard Ethernet interfaces. It wasn’t quite a monoculture, but it was close.
Then we went through the RISC boom, with lots of choices. That was followed by the inevitable bust: fewer choices. Graphics processors (GPUs) went nuts in the 1990s. Now we have just nVidia, ATI (AMD), and some Intel.
That wave was followed by a raft of gonzo network processors, most of which are no longer with us. Qualcomm, Broadcom, Marvell, and a few others rose to prominence; most of the others slipped under the surface. Processor innovation comes in waves, and waves have a habit of scrubbing the beaches clean.
The receding tide of processor diversity recently swept out Tilera, one of the salty barnacles clinging tightly to the networking pier. (End of tortured metaphor.) Tilera was interesting in part because of its massive CPU core count. The company’s TILE-Gx chips currently boast up to 72 identical processor cores, with more promised. Each core is a full-on 64-bit processor, capable of running Linux on its own, and groups of neighboring cores can band together to run a multicore operating system. Of course, the core-to-core interconnect fabric and the shared caching structure were just as important, and just as complex. In all, Tilera pulled off an impressive engineering feat.
But the company is due to be acquired by EZchip, where its CPU architecture will be absorbed into future EZchip parts. And although the acquisition price is somewhere in the high eight to low nine figures, it’s not clear that that’s a win. That amount barely covers the startup cash that the company raised during its growth phase. In other words, the investors will get their money back (maybe), but no more. In finance speak, there’s no multiple. The company is worth only what was put into it, ten years of effort notwithstanding.
What was Tilera’s problem – if indeed there was a problem at all? After all, getting acquired by a major player is generally considered a pretty good exit strategy, and it’s hard to look askance at a check with that many zeros on it. But it feels a bit hollow to me, as if someone had bought the furniture and fixtures but left the computers behind.
Tilera’s engineering was remarkable, and its performance looked impressive, too. It was one of only a handful of massively parallel processors that actually made it into the market, with real people using them in real products. So we have an existence proof of the concept. But as with so many innovative processors, it was too ambitious. It was too difficult to program, too difficult to model, and too different from what developers were used to. Yeah, you could get the chip to perform miracles, but you really had to want it.
That’s not a comfortable position for most programmers, nor for their bosses. It’s generally safer to use a “normal” chip based on ARM or MIPS or Power and tweak your software to provide some differentiation from all the other ARM-, MIPS-, and Power-based products. Those sorts of projects are well understood and (comparatively) easily managed. Launching a product based on an entirely new and massively parallel CPU architecture? That has “high risk” written all over it.
Moving forward, my suspicion is that EZchip will encapsulate Tilera’s technology in such a way that the scariness disappears. The on-chip mesh network is easily concealed; the processors, less so. They’re more likely to become anonymous “accelerators” that aren’t directly visible to the programmer or developer. EZchip will likely develop its own in-house firmware layer to screen the CPUs from curious eyes while downplaying their provenance and architecture. A firmware interposer also allows the company to tinker with the CPU architecture without changing the interface that programmers see. Freescale has done a similar thing with its Power-to-ARM transition, adding a level of indirection that abstracts the processor.
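To make the indirection idea concrete, here’s a minimal sketch in C of how such an interposer might look from the programmer’s side. None of these names or interfaces come from EZchip, Tilera, or Freescale; they’re purely illustrative. The application binds to a stable accelerator API, while a function-pointer table hides which CPU architecture actually does the work:

```c
/* Hypothetical sketch of a firmware "interposer": the application sees a
 * stable accelerator API, while a function-pointer table hides which CPU
 * architecture does the work. All names here are invented for illustration. */

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* The stable interface the programmer sees. */
typedef struct accel_ops {
    int (*init)(void);
    int (*process)(const uint8_t *in, size_t len, uint8_t *out);
} accel_ops;

/* One possible backend: a mesh of Tile-style cores. The firmware vendor
 * could swap in a different architecture without the application changing. */
static int tile_init(void) { return 0; /* bring up mesh, load firmware */ }
static int tile_process(const uint8_t *in, size_t len, uint8_t *out) {
    for (size_t i = 0; i < len; i++) out[i] = in[i]; /* placeholder work */
    return 0;
}
static const accel_ops tile_backend = { tile_init, tile_process };

/* The level of indirection: callers never learn which backend is bound. */
static const accel_ops *accel = &tile_backend;

int main(void) {
    uint8_t in[4] = {1, 2, 3, 4}, out[4];
    if (accel->init() == 0 && accel->process(in, sizeof in, out) == 0)
        printf("processed %zu bytes behind the interface\n", sizeof in);
    return 0;
}
```

Swapping `tile_backend` for some other table is invisible to the caller, which is exactly what lets the vendor tinker with the underlying CPU without breaking the interface programmers see.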
So although Tilera’s parallel processor architecture will live on, it will operate behind a mask, like Japanese Noh actors. The industry will have gained some differentiation, but lost some diversity.
Hi Jim, nice to see your very positive comments about Tilera’s architecture and performance achievements. And we’ve got over 100 designs at companies like Cisco, Brocade, ZTE, and Checkpoint that agree with that. But it’s worth correcting a couple of statements:
First, the Tilera processors are not at all hard to program… in fact, that is one of their strongest selling points. The programming model is completely aligned with programming any multi-threaded, multicore processor with coherent memory and running Linux. Consider that an Intel Ivy Bridge processor can have up to 15 cores and 30 threads, with perhaps 60 threads in a dual-socket system, so the modern programmer already has to master programming for parallel execution. And the Tile programming tools are completely mainstream: C/C++, Java, gcc, Eclipse, gdb, etc. One of our Cisco customers stated that Tilera had the best multicore programming software tools he had ever used.
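To make that concrete, here’s a minimal sketch of the kind of code that model implies: plain POSIX threads over shared coherent memory, built with stock gcc (`gcc -pthread sum.c`). The program itself is illustrative, not taken from Tilera’s SDK, and contains nothing architecture-specific:

```c
/* A parallel sum using nothing but standard pthreads and shared memory.
 * Illustrative only: the same code runs on any SMP Linux target. */

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define N 1000000

static long data[N];
static long partial[NTHREADS];

static void *sum_slice(void *arg) {
    long id = (long)arg;
    long lo = id * (N / NTHREADS), hi = lo + (N / NTHREADS);
    long s = 0;
    for (long i = lo; i < hi; i++) s += data[i];
    partial[id] = s;          /* coherent shared memory: no copying */
    return NULL;
}

int main(void) {
    pthread_t tid[NTHREADS];
    for (long i = 0; i < N; i++) data[i] = 1;
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, sum_slice, (void *)i);
    long total = 0;
    for (long i = 0; i < NTHREADS; i++) {
        pthread_join(tid[i], NULL);
        total += partial[i];
    }
    printf("total = %ld\n", total);  /* prints 1000000 */
    return 0;
}
```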
And as for the future, the synergy between EZchip and Tilera is tremendous and Tilera’s architecture is not going away at all. The current TILE-Gx family continues to attract new design wins, and the new processors on our roadmap will be leveraging the best of the technology that each company brought to the transaction. Rather than a ‘lowest common denominator’, I think you’ll see that our future processors are superior to what either company would have produced independently. Stay tuned… our customers are very excited about the direction we’re going.