COTS (Commercial Off-The-Shelf) seems like a great idea on the surface. Rather than designing custom, one-off complex electronic systems from the ground up for each new application, we can save considerable time, money, and mistakes by taking advantage of pre-engineered, open, standards-based components and technologies for the bulk of our project. Then, we can spend the majority of our precious engineering time and talent adding the “special sauce” that makes our application unique.
However, the COTS concept is a bit more complex in practice than in theory. Realizing COTS means boiling down a wide range of applications and distilling out high-value commonalities – things that are similar enough across the entire gamut that they can be developed into useful, off-the-shelf technologies. Before we can deploy those technologies, however, we need standards. Then, we need to develop robust, generic pieces that play nicely together under a wide range of circumstances. It’s not easy.
When it comes to embedded computing systems for critical applications, COTS suppliers bring us the kinds of building blocks you’d probably expect – enclosures, backplanes, connectors, cooling, power supplies, and a wide range of boards – including processor boards, GPU and FPGA boards, I/O boards, and a range of interfaces. As you’d hope, these components plug and play nicely, mostly due to standards like the venerable VME and the more modern VPX.
Last week in New Orleans, COTS suppliers gathered for the annual VITA “Embedded Tech Trends” (ETT) conference, where the major companies who make up the COTS ecosystem share their vision and experience, predicting where the standards-based embedded computing world is headed next. You might think a conference where speakers present only to competitors and the press would make for an odd event, but ETT delivers a solid, useful forum where COTS suppliers can network, share and drive their vision for the industry, and inform the press about a technology area that the electronics mainstream often overlooks.
This year, a clear theme of the conference was the continuing migration of the industry into the newer, more robust VPX. VPX is a well-conceived standard that does an excellent job of integrating the numerous complex, high-speed, high-reliability interfaces that are rapidly proliferating today – making allowances for everything from traditional low-bandwidth signaling to ultra-high-speed serial protocols. Flexibility comes at a price, however, as the amount of customization involved in a VPX deployment can be a bit daunting for the design team drifting into COTS waters for the first time. In this case, the C in COTS may bend more toward “customizable” than “commercial,” as standards-based products aren’t exactly Lego blocks in their simplicity and interoperability.
Presenters included Mercury Systems, Altech Defense Systems, Pixus Technologies, LCR Embedded Systems, IHS Markit, ADLINK Technology, TE Connectivity, Artesyn Embedded Technologies, Parker Aerospace, Reflex Photonics, Interface Concept, Curtiss-Wright, Elma Electronic, Bishop & Associates, Abaco Systems, Alligator Designs, Concurrent Technologies, MEN Mikro Elektronik, Pentek, MIT Lincoln Laboratory, NAVAIR, and VITA. If you think this sounds like a shopping list of suppliers for everything you’d need to build your own private Doomsday Machine (or autonomous car or drone), you’d be pretty much correct.
The event kicked off with a reminder of the challenges we all face securing our systems – both hardware and software. Nowhere is this issue more center stage than in this crowd, which focuses primarily on the high-reliability, critical, defense, transportation, industrial, and similar “we really don’t want the bad guys getting control” kinds of applications. In the defense market, the security radius includes not only defending against long-range system attacks via the usual means, but also against physical attacks when the equipment inadvertently lands in the wrong hands. Engineering a secure system is a ground-up proposition, and security and safety most definitely cannot be added as afterthoughts to existing designs.
Heterogeneous computing was also center stage at ETT, with numerous talks on integration of various computing architectures – including GPUs, FPGAs, MCUs, and conventional applications processors – into high-performance systems. In defense, the proliferation of sensors, cameras, radar, and other massive data-gathering elements has created a monstrous data overload, and our systems need to take advantage of the strengths of different kinds of computing elements at different points in the data path in order to convert the mass of data into actionable information in a timely manner. FPGAs and GPUs clearly play a leading role at the edge of that system, powering through and distilling the mass of incoming information into something that the downstream data channels, processors, and storage elements can more reasonably handle.
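To make the data-path idea concrete, here is a minimal, purely illustrative sketch (not from any ETT talk) of an edge stage that reduces a raw sensor stream to detection events before anything is sent downstream. The function name, threshold, and sample values are all hypothetical; in a real system this stage might be a streaming FPGA kernel or a GPU batch job rather than Python.

```python
# Hypothetical edge-filtering stage: keep only samples that exceed a
# detection threshold, so downstream channels, processors, and storage
# see events instead of the full raw stream.

def edge_filter(samples, threshold):
    """Return (index, value) pairs for samples above the threshold.

    Stands in for the FPGA/GPU "distillation" role described above;
    plain Python is used here only for illustration.
    """
    return [(i, s) for i, s in enumerate(samples) if s > threshold]

raw = [0.1, 0.2, 4.7, 0.3, 5.1, 0.2, 0.1, 6.0]   # simulated raw sensor samples
events = edge_filter(raw, threshold=1.0)          # data reduced at the edge
print(events)  # only 3 of the 8 samples survive to the downstream channel
```

The point of the sketch is the ratio: eight raw samples in, three actionable events out, with the reduction done as close to the sensor as possible.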
Similar to the IoT world, the computing load in this market is being distributed between edge processing nodes and the cloud. But, in many of these applications, a more precise division of computing responsibilities has given birth to “fog” and “mist” computing – which move tasks that would have been associated with “cloud” data centers to computing resources that are progressively closer to (but not at) the edge node. This heterogeneous distributed processing model creates extreme challenges for application developers, who must now dramatically expand the scope of the traditionally narrow embedded architecture space. Going from writing an embedded application for an 8-bit MCU to creating a software system that runs on geographically distributed FPGAs, GPUs, MCUs, mobile applications processors, and data center blades is an enormous leap in software engineering complexity for those in the embedded space.
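One way to think about the edge/mist/fog/cloud split is as a latency-driven placement decision. The following sketch is a hypothetical illustration of that idea only; the tier names are as used above, but the latency budgets and the `place` function are invented for this example and do not come from any standard or ETT presentation.

```python
# Hypothetical tiered-placement sketch: assign a task to the farthest
# (cheapest, most centralized) tier that can still meet its latency budget.
# Tier latencies below are illustrative assumptions, not measured values.

TIERS = [
    ("edge",  0.001),  # on-sensor MCU/FPGA: ~1 ms round trip
    ("mist",  0.010),  # nearby gateway: ~10 ms
    ("fog",   0.100),  # local micro-datacenter: ~100 ms
    ("cloud", 1.000),  # remote datacenter: ~1 s
]

def place(budget_s):
    """Pick the most centralized tier whose latency fits the budget."""
    chosen = "edge"  # if nothing fits, the task must stay at the edge
    for name, tier_latency_s in TIERS:
        if tier_latency_s <= budget_s:
            chosen = name
    return chosen

print(place(0.05))  # a 50 ms budget lands the task on the "mist" tier
print(place(5.0))   # a relaxed budget lets the work migrate to the cloud
```

The design choice the sketch highlights is that placement runs outward: work drifts as far from the edge as its timing constraints allow, which is exactly the division of responsibility that fog and mist computing formalize.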
In addition to the processing architecture and security challenges, ETT taught us about trends in every part of the embedded system, including enclosures, connectors, cooling, power supplies, and procurement. We even got a lesson on the future of liquid cooling for embedded computing systems from Parker Aerospace, who explained that pumping liquid through their enclosures may sometimes be a better way to get the heat out than the traditional combination of fans, sinks, and vents.
The primary lesson of ETT was that COTS is a rapidly evolving industry that is serving a critical need. Most of the defense, transportation, and industrial applications addressed by these COTS suppliers would be orders of magnitude more expensive to develop, deploy, and maintain without the foundation of the COTS ecosystem. Open standards play a key enabling role in this equation, and the progress in creating standards that enable system designers to take advantage of the incredible capabilities of today’s leading-edge computing, storage, networking, and interface technologies is impressive.