“Fool me once, shame on you. Fool me twice, shame on me.” – traditional proverb
Let the culling commence.
There is apparently no “IoT” in “Intel.” For the umpteenth time, Intel has killed off an embedded product that it all-too-recently hyped as the greatest thing since canned beer. You live and you learn.
Just a few days after it toe-tagged Itanium, Intel whacked three products from the opposite end of its line card, namely Galileo, Edison, and Joule. It’s a bad time to be a famous dead scientist in Santa Clara.
All three products are board-level modules intended for embedded or IoT devices. Intel isn’t discontinuing the microprocessor chips they’re based on, just the boards themselves, although the CPU chips rarely appeared anywhere else.
Galileo is the oldest of the trio, tracing its origins back to 2013. It’s the most conventional board of the three, about average in size and with familiar, exposed connectors. It’s based on the Quark X1000, an early embedded variation of the x86 architecture that has since been superseded by other low-power x86 variants.
After Galileo came Edison, a tiny Atom-based board that was barely larger than an SD card. Its small size made it suitable for embedding in production systems, but… its small size made it difficult to actually do so. Users complained that the tightly spaced expansion connector was impossible to use as-is; it needed a mating breakout board that was bigger than Edison itself. Okay for production, perhaps, but a real nuisance during development.
Joule is the youngest of the three: it’s less than a year old. It’s also Atom-based and, like Edison, tiny, with equally tiny connectors. Like the other two, Joule was envisioned as a competitor to Raspberry Pi or BeagleBoard or Arduino, bringing the x86 architecture to embedded Linux developers. Unlike Raspberry Pi, however, Intel’s boards were expensive, difficult to use, and unpopular.
Ironically, Intel’s biggest problem seems to be support. Users of all three boards complain bitterly that there isn’t any. Or, at least, not the kind of support they see with competing, open-source development boards. Documentation is obscure or missing; interfaces aren’t fully described; and knowledgeable tech-support people are scarce. The world’s best-known CPU architecture, from the planet’s biggest processor company, seems to have a shortage of supporters.
If so, Intel is not the first chip company to suffer from big-company syndrome. For most of its corporate life, Intel has catered to extremely large and fast-growing markets dominated by a few big vendors. The company is geared toward huge PC makers, not… makers. Individualized support, in-depth timing diagrams, detailed register maps, and accessible pinouts for low-budget developers are not in the company’s DNA. Many other semiconductor firms operate the same way.
Counter-examples abound, too. During the early days of DSP development in the 1990s and early 2000s, Texas Instruments was just one of many vendors supplying these weird and mysterious processors. Most DSP vendors were focused on huge potential wins at government telecom centers or networking giants. If you weren’t ready to place an order for a million parts, they didn’t want to talk to you.
On the other hand, TI was happy to process an order for one unit. (Or, more accurately, its distributors were.) TI provided training for DSP beginners, made signal-processing software available for download, and published a wealth of DSP-related application notes and technical bulletins. It became the friendly vendor for DSP newcomers. Consequently, TI’s DSP architecture became almost a de facto standard. Its chips showed up in every low-volume DSP product for a decade. And some low-volume products eventually become high-volume products.
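For anyone who never wrote DSP code, the canonical kernel behind all that training material was the finite impulse response (FIR) filter: a sliding multiply-accumulate loop. The C sketch below is illustrative only; the moving-average coefficients are made up, not taken from any TI app note.

```c
/* Illustrative sketch of the canonical DSP kernel: a finite impulse
 * response (FIR) filter, i.e. a sliding multiply-accumulate over the
 * input stream. The coefficients form a crude 4-tap moving average;
 * they are placeholders, not a designed filter. */
#include <stdio.h>

#define NTAPS 4

static const float coeff[NTAPS] = { 0.25f, 0.25f, 0.25f, 0.25f };

/* y[n] = sum over k of coeff[k] * x[n-k], using a circular buffer
 * of the last NTAPS samples; 'newest' is the index of x[n]. */
static float fir_step(const float history[NTAPS], int newest)
{
    float acc = 0.0f;
    for (int k = 0; k < NTAPS; k++)
        acc += coeff[k] * history[(newest - k + NTAPS) % NTAPS];
    return acc;
}

int main(void)
{
    const float input[] = { 1, 2, 3, 4, 5, 6, 7, 8 };
    float history[NTAPS] = { 0 };

    for (int n = 0; n < 8; n++) {
        history[n % NTAPS] = input[n];   /* shift in the new sample */
        printf("y[%d] = %.2f\n", n, fir_step(history, n % NTAPS));
    }
    return 0;
}
```

That inner multiply-accumulate is exactly what dedicated DSP hardware accelerates, typically in a single cycle, and exploiting it is the kind of thing TI’s training taught newcomers to do.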
When the DSP world was new, training and hand-holding were everything. Nobody was competent to judge the qualities of one vendor’s DSP architecture over another’s. Newcomers barely knew what they were looking at, much less how to estimate performance or applicability. If one vendor could lift them up out of ignorance and train them on the basics of DSP programming – while coincidentally instilling TI-specific prejudices – well, who’s going to complain about that?
Similarly, Intel needed to train its embedded customers, not just agree to supply some boards and take their money. Everyone knows how to program an Intel-based PC; not so an Intel-based embedded system. There’s no BIOS, no interrupt controller, no decades of accumulated knowledge, wisdom, and lore. The fact that Galileo, Edison, and Joule had x86 processors was almost irrelevant (to the customers). The rest of the boards were utterly unfamiliar, and users needed detailed documentation, help, and support. Nor did it help that Intel’s boards cost about twice as much as competitors’ hardware.
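To make “utterly unfamiliar” concrete, here is a minimal sketch of a typical first chore on such a board: blinking a single pin from user space on embedded Linux via the legacy sysfs GPIO interface. The pin number 13 is a placeholder; working out the real, board-specific mapping is exactly the sort of detail users needed documented.

```c
/* Minimal sketch: blink one GPIO from user space on an embedded
 * Linux board through the legacy sysfs interface. The pin number
 * (13) is a placeholder; the actual pin mapping is board-specific. */
#include <stdio.h>
#include <unistd.h>

static int write_str(const char *path, const char *val)
{
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return -1; }
    fputs(val, f);
    fclose(f);
    return 0;
}

int main(void)
{
    /* Expose the pin to user space and configure it as an output. */
    write_str("/sys/class/gpio/export", "13");
    write_str("/sys/class/gpio/gpio13/direction", "out");

    /* Blink at roughly 1 Hz for ten cycles. */
    for (int i = 0; i < 10; i++) {
        write_str("/sys/class/gpio/gpio13/value", "1");
        usleep(500000);
        write_str("/sys/class/gpio/gpio13/value", "0");
        usleep(500000);
    }

    write_str("/sys/class/gpio/unexport", "13");
    return 0;
}
```

No BIOS, no standard platform: just memory maps, pin muxing, and whatever documentation the vendor bothered to write.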
TI’s “product” was DSP support and on-the-job training for programmers and engineers. The hardware was just how TI paid for it all.
If you’re a loyal Galileo, Edison, or Joule customer, you’ve got less than three months (until September 16) to place your last order, and you must take delivery by December. Orders are non-cancelable and non-refundable. Order soon – they’re going fast! Or not.
This is just a recognition that CPUs are not always the best embedded devices. “Intel” is also Altera now, and Intel-Altera has the MAX 10 line. These devices are powerful, low-power, and cheap (around $5 in volume, I think). The MAX 10 is really a non-volatile FPGA. If Intel takes advantage of its FPGAs and pushes them into the data center, you will see a major shake-up in how we look at computing. The current view is an ALU and a register file; a better, more efficient view is a pipelined machine that matches your algorithm’s requirements at the hardware level.
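A rough way to see that last point in software terms (an illustration of the idea, not MAX 10 code): the dot product below funnels every operation through the CPU’s one shared ALU, one at a time, while pipelined FPGA logic could lay the same dataflow out as parallel multipliers feeding an adder tree.

```c
/* A CPU computes this dot product as 2*N sequential trips through
 * one shared ALU. Pipelined hardware can instead instantiate N
 * multipliers feeding an adder tree: some latency to fill the
 * pipeline, but one full result per clock afterward. */
#include <stdio.h>

#define N 8

int main(void)
{
    const int a[N] = { 1, 2, 3, 4, 5, 6, 7, 8 };
    const int b[N] = { 8, 7, 6, 5, 4, 3, 2, 1 };
    int acc = 0;

    for (int i = 0; i < N; i++)
        acc += a[i] * b[i];    /* one multiply, one add per iteration */

    printf("dot product = %d\n", acc);
    return 0;
}
```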