
Calling All Universal Translators

Why Isn’t There a Generic Chip to Run All Types of Code?

“You’d be surprised how hard it can often be to translate an action into an idea.” – Karl Kraus

“Siri, launch my compiler.”
“Alexa, run a.out.”
“Hey, Google, execute that old CP/M program for me.”
“Cortana, halt and catch fire. Ha-ha.”

Wouldn’t it be cool if your personal digital assistant could execute any task, and any program, on any operating system? We’ve been designing computers for several lifetimes now. Why can’t they be universal?

We all remember reruns of Star Trek and the occasional appearance of the “universal translator” device. Whenever the script called for it, an Enterprise crewmember would whip out an Amazon Echo–like device and wave it around, instantly translating any alien’s spoken language into… well, into whatever language Starfleet crews speak in the 23rd Century. Apparently, Enterprise had (will have?) a wireless broadband connection to Amazon’s servers.

But back here in the early 21st Century, we’re still writing and compiling code for specific CPU instruction sets. Our programs are tied to specific hardware architectures the way that VHS and Beta cassette tapes operated only with their respective players.  “Hey, Mister, can I rent that copy of Saturday Night Fever on VHS, please?”

So why don’t we have universal microprocessors – chips able to execute any code from any source? If CPUs were universal, we’d have competition based on price, power, and availability instead of binary compatibility. The whole universe of CPU chips would open up to us, and we’d no longer be beholden to one (or perhaps a small handful of) CPU vendor(s). Open competition is a wonderful thing. We have a universal ASCII code for information interchange, universal hydrocarbon fuel for cars (almost), universal AC and DC power requirements for portable electronics (again, almost), and other universal standards.  Why not for processors?

Transistors are so tiny now, and the average SoC is so comparatively large and complex, that today’s CPU cores occupy just a tiny fraction of the chip’s real estate. Surely there’s room for a slightly larger, more universal version.

And think of the software savings! Plenty of teams spend more time and effort developing their software than they do on their hardware. Most companies developing consumer electronics or industrial systems employ more programmers than hardware engineers. Software is expensive. Hardware, while not cheap, isn’t the cost determinant it used to be. If you could make your hardware, and thus your software, infinitely transportable, you could run any code on any system! No more recompiling; no more porting; no more dusting off that old assembly-language manual to try to figure out what the code is doing.

How hard can it be?

Computing theory – going back to Turing and von Neumann – tells us that a computer can get by on just two or three basic operations. It may not be very fast or efficient, but it’ll work. So, if all programs can ultimately be reduced to that tiny core set of operations – the ultimate RISC, if you like – then surely any code can theoretically run on any processor? Just a little hardware translation here and there, and voilà! Instant universal translator!
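
For the record, here’s how far that reduction can go. Below is a toy sketch, in plain C and purely for illustration, of a “subleq” machine: a single-instruction computer where every operation is “subtract, then branch if the result isn’t positive.” Nobody would ship this, but it makes the point that the core operation set can be vanishingly small.

    #include <stdio.h>

    /* A toy one-instruction ("subleq") machine: each instruction is three
     * words a, b, c meaning  mem[b] -= mem[a]; if (mem[b] <= 0) jump to c.
     * A negative jump target halts the machine. */
    int main(void)
    {
        /* Hand-assembled program: adds the values in cells 12 and 13,
         * leaving the sum in cell 13 (7 + 5 = 12). Cell 14 is scratch. */
        long mem[16] = {
            12, 14, 3,    /* scratch -= X                   */
            14, 13, 6,    /* Y -= scratch  (i.e., Y += X)   */
            14, 14, 9,    /* scratch = 0                    */
            14, 14, -1,   /* halt                           */
             7,  5, 0,    /* X, Y, scratch                  */
        };
        long pc = 0;
        while (pc >= 0) {
            long a = mem[pc], b = mem[pc + 1], c = mem[pc + 2];
            mem[b] -= mem[a];
            pc = (mem[b] <= 0) ? c : pc + 3;
        }
        printf("7 + 5 = %ld\n", mem[13]);   /* prints 12 */
        return 0;
    }

Run it and it dutifully computes 7 + 5 using nothing but that one instruction – slowly, and with a lot of bookkeeping, but universally.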

There are even designers on the street with the relevant experience. Dozens of companies have built binary-compatible clones of the x86, ARM, MIPS, PowerPC, 8051, SPARC, and nearly any other machine architecture you can think of. So it is doable.

The x86 is surely the most complex modern processor family to be reverse-engineered and duplicated, so simpler architectures like ARM and MIPS would be a doddle in comparison. After that, adding in an 8051, 6805, AVR, and all the other 8-bit MCUs would hardly add any complexity at all. And, as long as you’re at it, toss in a Java interpreter (or most of one, anyway), a DSP or two, and your favorite GPU for graphics acceleration. Oh, and maybe an encryption engine. And a network packet processor! That ought to cover it.

Or you could go the other route and have just one actual CPU core, but with a bunch of runtime-translation hardware in front of it. All your ARM code, x86 code, AVR code, and the rest would be translated on the fly right before it gets thrown into the maw of the processor engine. You could even implement the translation hardware in an FPGA so that it’s field-upgradeable. You could sell updates! Want to enable 8051 translation, Mr. Customer? That’ll cost you an extra $1.95/month subscription fee. Order a whole year and get it for just $20.
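
Strip away the hardware and that front-end translator is really just a decode-and-dispatch loop. Here’s a minimal sketch in C – the guest opcodes below are invented for illustration and don’t correspond to any real instruction set – showing how each incoming guest instruction could be mapped onto a native operation before execution. A real dynamic binary translator would also cache the translated blocks instead of re-decoding them every time.

    #include <stdint.h>
    #include <stdio.h>

    /* Made-up "guest" instruction set, standing in for ARM/x86/AVR/etc. */
    enum guest_op { G_LOAD_IMM = 0x01, G_ADD = 0x02, G_HALT = 0xFF };

    typedef struct {
        uint8_t op;     /* guest opcode                         */
        uint8_t reg;    /* destination register (0..3)          */
        int32_t imm;    /* immediate value, or source register  */
    } guest_insn;

    int main(void)
    {
        int32_t regs[4] = {0};

        /* Tiny guest program: r0 = 40; r1 = 2; r0 = r0 + r1 */
        guest_insn program[] = {
            { G_LOAD_IMM, 0, 40 },
            { G_LOAD_IMM, 1, 2  },
            { G_ADD,      0, 1  },
            { G_HALT,     0, 0  },
        };

        for (size_t pc = 0; ; pc++) {
            guest_insn in = program[pc];
            switch (in.op) {                 /* "translate" to host ops */
            case G_LOAD_IMM: regs[in.reg]  = in.imm;            break;
            case G_ADD:      regs[in.reg] += regs[in.imm];      break;
            case G_HALT:     printf("r0 = %d\n", regs[0]);      return 0;
            default:         fprintf(stderr, "bad opcode\n");   return 1;
            }
        }
    }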

You could even define your own hardware API and encourage third-party development. Your chip would become a platform for other developers to create and market their own binary-translation plugins running on your chip. Naturally, you’d take a cut of the revenue – say, 20 percent – but the independent community would do the technical dirty work for you. The more development they do, the more useful your processor becomes.
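
In software terms, that hardware API might be little more than a table of entry points the chip’s firmware calls into. Here’s one hedged guess at what such a translation-plugin interface could look like – every type and function name below is invented for this sketch, not borrowed from any real vendor API.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical third-party translator plugin: the host chip routes
     * each guest binary to whichever registered plugin claims its ISA. */
    typedef struct {
        const char *isa_name;      /* which guest ISA this plugin handles */
        uint32_t    api_version;   /* must match the host's plugin API    */
        /* Decode guest code at src, emit host micro-ops into dst, and
         * return the number of guest bytes consumed (-1 on error).       */
        int (*translate)(const uint8_t *src, size_t src_len,
                         uint8_t *dst, size_t dst_len);
    } translator_plugin;

    /* A do-nothing stand-in for a real 8051 translator. */
    static int fake_8051_translate(const uint8_t *src, size_t src_len,
                                   uint8_t *dst, size_t dst_len)
    {
        (void)src; (void)dst; (void)dst_len;
        return src_len ? 1 : -1;   /* pretend one guest byte was handled */
    }

    static const translator_plugin plugins[] = {
        { "8051", 1, fake_8051_translate },
    };

    int main(void)
    {
        const uint8_t guest[] = { 0x74, 0x2A };  /* arbitrary guest bytes */
        uint8_t host[64];

        for (size_t i = 0; i < sizeof plugins / sizeof plugins[0]; i++) {
            if (strcmp(plugins[i].isa_name, "8051") == 0) {
                int used = plugins[i].translate(guest, sizeof guest,
                                                host, sizeof host);
                printf("plugin '%s' consumed %d guest byte(s)\n",
                       plugins[i].isa_name, used);
            }
        }
        return 0;
    }

And, naturally, the firmware would check that your $1.95/month subscription is paid up before calling translate().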

What’s holding us back? Alas, history proves to be our enemy here. History and economics. And patent law. History, economics, patent law, and marketing are all agitating against us. Historically, a lot of processors – in fact, almost every processor, at one time or another – have been duplicated and cloned. It’s hard work, but it can be done – mostly. Most CPU cloning projects stall at about 85% completion. It’s all smooth sailing right up until that point, but then the project hits the rocks. Getting your ADD and SUB operations to work is trivial, and that fuels optimism. Getting those last few instructions to behave properly takes even more work and saps all that earlier hopefulness. Most CPUs were hard to design in the first place. Repeating that whole process, including side effects (intentional and otherwise), is the real trick.

And then there’s the patent-infringement issue. Mnemonics and operations can’t be patented, but specific hardware implementations can. Some CPU operations have such peculiar and nonintuitive side effects that it’s difficult or impossible to duplicate them all without infringing on someone’s patent. Some low-level CPU operations seem almost designed to be patentable.

Then there’s the invisible hand of economics. Sure, you might get your universal CPU to work, but who’s gonna buy it? Everyone, surely. Except that the evidence doesn’t bear that out. Cloned processors have rarely, if ever, done well in the market. Some of that is due to customer skepticism and reluctance. If a CPU isn’t 101% compatible in every conceivable (and inconceivable) way, customers will always wonder whether that obscure bug they’re chasing is due to their code or to your hardware. They’ll never be sure, and being able to sleep peacefully at night is worth a lot to a product manager.

And then there are the intangible effects of branding, marketing, and reputation. CPU chips are supposed to be unlovable lumps of silicon and copper designed to accomplish a job – mere hardware engines. But we get attached to our engines, and loyalty runs high among most programmers. Changing CPU architecture is asking a lot, but even changing vendors for the same architecture comes with an intangible cost. Anything that might sap productivity isn’t worth the risk.

Nobody’s done an all-in-one, truly universal processor. But many teams have tackled various parts of the whole, building Java processors in hardware, x86-translating processors, ARM clones, and other knock-off CPUs. They all did the job, more or less. And they all lost millions of dollars for their investors.

Universal processors are like Esperanto: a nice idea for a universal communications standard that nobody actually uses. As with so many other parts of our business, we allow inertia to undermine progress. Rats. Guess I’ll have to abandon that big breadboard project I’ve been working on.

13 thoughts on “Calling All Universal Translators”

  1. The ultimate nightmare for QA regression testing … every CPU architecture and every OS release, all in one tiny package. Wonder who is really willing to take on the same complexity at deployment?

  2. A universal CPU is absolutely tenable. Apple has shown that software translation can work on a large scale – tons of users, tons of applications – not once, but twice (68K to PowerPC, PowerPC to x86). Some believe they’re preparing to do it a third time (x86 to ARM).

    So yes, emulating/translating/cracking instruction sets is absolutely tenable. The REAL pain in the a** is the API. A universal API translator (Linux, Mac, Windows, CP/M, OS/2) would be at least an order of magnitude tougher.

  3. @kleinman – there is a significant difference between Instruction Set Architecture (ISA) migration (AKA cross-architecture porting) and a fully heterogeneous architecture that spans not only multiple instruction sets, but multiple operating systems.

    Your example of Apple’s hardware migration (68K to PowerPC, PowerPC to x86) is far from concurrently supporting all three ISAs and derivative OSes in a single product. The closest Apple got to heterogeneous was support for x86 Microsoft Windows under their operating system, as a cooperating VM partition.

    Long before that, we (the UNIX community) ported AT&T UNIX V7 and SVR4 to more than a dozen diverse ISAs and machine architectures, and we (the FreeBSD and Linux communities) continued that progress across nearly every ISA that would support memory mapping and process isolation – all the way to heterogeneous NUMA clusters, with multiple distributed file systems and operating systems.

    There are some significant barriers to a universal binary execution environment that spans vendors, ISAs, and operating systems. The first is that nearly every proprietary vendor’s software environment is not legally licensed to run on competitors’ hardware … if you need a demonstration of that, publicly announce a computer that will transparently install Apple’s or IBM’s enterprise operating systems. Even in microprocessor land, you are going to see some legal challenges to using Vendor A’s proprietary software to sell Vendor B’s chips.

    The closest we (the UNIX, FreeBSD, and Linux communities) have gotten to that is running our own operating system on nearly every vendor’s ISA and machine products. One small step past that is running/emulating multiple architectures on a host system with QEMU, to execute binaries from other architectures and VM partitions of other architectures. This was one way of getting Microsoft Windows/Server applications into a UNIX/Linux/FreeBSD system legally, simply because we could buy a Microsoft product license and install/run it under an emulated x86 ISA VM partition.

    As Kevin notes, the “hypervisor business” exploits this legal approach for the few cases where a particular vendor’s legally licensed binary can be run.

    And while many vendors openly use Linux/FreeBSD open-source products, most have proprietary additions and changes, which are ONLY licensed for use on their hardware devices. Thus you cannot simply take the binary products Samsung produced for their Android/Linux phone and run them on your custom device or a third-party device.

    See how far you might get taking Apple’s iOS and selling it with your iPhone clone.

    Back in the mid-’80s, several of us (third-party Apple hardware vendors) flirted with finding the fine line of providing third-party motherboard upgrades for Apple II and Mac products … to the point of taking the proprietary Apple chips off the Apple motherboards and dropping them into sockets on our boards, so the Apple OSes would run unchanged. Joel incurred Apple’s legal wrath as a test … it was expensive. After releasing MacSCSI via Dr. Dobb’s Journal, I almost released an M68020 design into the public domain that was a drop-in replacement for the Classic Mac512 motherboard, with sockets for the Apple chips, that would boot the Mac512 OS. In the end, the legal challenges were a bit too expensive.

  4. @Kevin — “But back here in the early 21st Century, we’re still writing and compiling code for specific CPU instruction sets. Our programs are tied to specific hardware architectures the way that VHS and Beta cassette tapes operated only with their respective players.”

    Actually, we moved past that about 30 years ago, with C/C++/Java coding standards that will execute on any 16/32/64-bit ISA with a POSIX-compliant host OS and compiler toolchain.

    But that doesn’t stop people from purposely writing code that is clearly non-portable when they need to.

    But for developers with cross-platform interests in their products, it’s entirely practical to reliably compile for over a dozen architectures and a dozen OSes … including Mac OS X, Linux, and MS Windows applications.

    Believe me … a bunch of us spent years in standards meetings to make this happen.
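
    To make that concrete, here’s a trivial sketch – nothing below but ISO C and a single POSIX call – that should build and run unchanged with any conforming toolchain on any of those architectures:

        #include <stdint.h>
        #include <stdio.h>
        #include <unistd.h>     /* POSIX: gethostname() */

        /* Nothing here depends on word size, byte order, or ISA. */
        int main(void)
        {
            char host[256] = "unknown";
            uint32_t hash = 0;

            gethostname(host, sizeof host);
            for (const char *p = host; *p; p++)
                hash = hash * 31u + (uint8_t)*p;

            printf("%s: sizeof(long) = %zu, name hash = %u\n",
                   host, sizeof(long), (unsigned)hash);
            return 0;
        }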

