
Treading on Thin Air

Engineering the Second Generation

Somewhere, in a nondescript cubicle in building number umpteen of a multi-billion-dollar multinational multi-technology conglomerate, an engineer sits at a lab bench staring at an eye diagram on a six-figure scope. It’s the same every day. Any time he is not in a meeting or writing a status report, he sits in this lab and eats and breathes signal integrity. He has almost no concept of the end product that will incorporate his work. His entire universe is jitter, pre-emphasis, equalization, noise, amplitudes, and bit-error rates. For him, time stands still – in the picoseconds.

Across town, another engineer is also worried about signal integrity. He is including some DDR4 memory on his FPGA board, and he needs to be sure his design will work. He will worry about signal integrity for less than one day. He has prototyped his project on an FPGA development board with a couple of add-on modules plugged in with expansion connectors. His system is using an application he wrote on top of a Linux TCP/IP stack, a six-axis gyro/accelerometer, a GPS, a camera, and some brushless motors – all coordinated by an FPGA with an embedded processor. He will check his DDR4 interface with an EDA tool that simplifies the operation almost to a pushbutton. This engineer is doing just about every aspect of the design, from the FPGA work to the board layout to the applications software. He is intensely aware of the requirements of his end product.

The second engineer relies almost completely on engineers like the first. He is truly working in a “jack of all trades, master of none” scenario. In order for him to be successful, an entire legion of engineers like the first must do their jobs, boil the essence down to a few variables or API calls, and package up the whole thing into a tight bundle of encapsulated engineering expertise. It is this kind of collaboration with those we’ve never met that empowers us to create the remarkable systems we are seeing today. 

The technology ecosystem truly has a food chain, beginning with bare-metal technologies and working its way – layer by layer – up the pyramid to the system-level design. As the structure gets taller, the lower layers become ever broader, requiring a greater number of increasingly specialized engineering disciplines to keep the whole thing supported. The number of distinct electronic engineering specialties today is greater than it has ever been before, and there is no sign of the trend letting up.

Our postgraduate engineering education is designed to focus us on particular narrow areas of technology. This is an ever-moving target. Universities fund departments to put together programs in what are perceived as the “hot” topics of the day, and, at any given time, the most popular tracks are those tackling today’s most exciting and interesting problems. Few electronic engineering programs divert students into mature, stable areas of development, even if there is considerable work remaining to be done there. 

The result is an age gradient in the technology chain, with the lower levels occupied by older engineers who have spent long careers refining their solutions at what have become supporting levels of technology. The higher levels trend toward younger engineers who went to school when various higher-level challenges were in fashion. Almost like sedimentary rock, the layers of technology are attached to the generations of engineers who produced and refined them. 

This presents an obvious and disturbing problem. What happens when the bottom of the pyramid goes into retirement? Does our whole technological structure run the risk of collapsing onto itself because nobody is left who truly understands the deepest and most fundamental levels of technology?

Fortunately, in semiconductors, the base level has been re-engineered every two years for the past five decades, which has kept a good deal of fresh blood circulating to the bottom of the pyramid. The engineers designing with 14nm FinFETs could not rest on the laurels of those who understood one-micron planar CMOS. The bottom-to-top churn of Moore’s Law has had the side effect of constantly renewing our attention on every level of the food chain, preventing knowledge decay at the lower levels. 

But what about software? Does our arsenal of algorithms become so robust and canned that we lose touch with the fundamentals? Will there be a time in EDA, for example, when nobody understands the innermost workings of some of the most sophisticated tools? Is there a risk that the brilliant and subtle optimizations and refinements of engineers of the past will be lost as art, existing only deep in the lines of the legacy code they left behind? 

We are just now reaching the time when the first generation of engineers who created the Moore’s Law revolution has retired. We have not yet seen an age like the one the second and later generations will experience, when our base technologies were all optimized by engineering ghosts from the past. 

As our systems become more complex, constructed by plugging-and-playing scores of independent components whose entangled fractal complexity is completely understood by no one, will we reach a point where our systems take on organic characteristics? When nobody fully understands a system, will its behavior become less deterministic, and will something more akin to a personality emerge? And does this mean there will be a time when we diagnose and treat our designs in something that more closely resembles today’s medical practices? Will concrete engineering decisions of the distant past become the subject of future research discoveries?

It is entirely possible that engineering archeology will one day be an important discipline, and those who can reverse-engineer the lost miracles of the past will be just as highly regarded as those who create new solutions for the future. 

As I stare at the latest FPGA development board and realize that I cannot identify all of the components mounted on it, I can feel my contact with the bare metal layer begin to erode. Even though that board will enable me to create the most sophisticated designs of my career, that increased sophistication comes with a loss of innocence, a frightening failure of gravity that lifts our technological feet out of contact with the terra firma. Taking the plunge of trust and depending on technology that we do not fully understand is more challenging for an engineer than for most other personalities. Our identity is often wrapped up in our command of our domain from almost a molecular level, and feeling that positive contact slipping away can be terrifying. 
