feature article

Hug a Data Scientist

Time for a Sea Change in Computing

It’s been a long and mutually productive relationship, but it’s time to break up.

For the five-decade-plus run of Moore’s Law, we electronics engineers have been married to software engineers. It’s been a love/hate relationship for sure, but together the two communities have achieved the single greatest accomplishment in the history of technology – the evolution of the modern computing infrastructure. Over those five decades of collaboration, every aspect of the global computing machine has improved by orders of magnitude. On the hardware side, we have seen enormous gains in processor performance, energy efficiency, size, cost, memory density, reliability, storage capacity, network bandwidth, reach, and latency.

On the software side, we’ve seen the evolution of languages, compilers, operating systems, development tools, and productivity, coupled with an immense repository of powerful algorithms that can efficiently perform practically any calculation under the sun. Our understanding of the nature of information, the solvability of problems, and the limitations of human capacity to handle complexity has made quantum leaps in this golden age of software engineering.

These advancements in hardware and software engineering have gone hand-in-hand, with bidirectional feedback loops in which software engineers have set the demands for the next generation of hardware, and hardware engineers have enabled new methods and advancements in software. Software engineering has always been the customer of hardware engineering, pushing and directing the evolution of hardware to serve its needs.

Now, however, we have come to a fork in the road, and this partnership that has flourished for decades has to make way for an interloper. Since about 2012, fueled by advances in computing hardware, progress in artificial intelligence (AI) has exploded. Some experts claim that more headway has been made in the past three years than in the entire history of AI prior to that. AI is now being adopted across a huge swath of applications, including machine vision, autonomous driving, speech and language interpretation, and countless other areas.

This dramatic surge in AI adoption has surfed atop significant advances in hardware and software technology, including heterogeneous computing with GPUs and FPGAs, networking, cloud-based computing, big data, sensor technology, and more. These advances have allowed AI to progress at unprecedented rates and have set the stage for a wide range of applications previously in (or beyond) the domain of conventional software engineering to shift to AI-based approaches.

For us as hardware engineers, however, moving forward from this point requires a change in our primary professional relationships. Artificial neural networks (ANNs) require specialized hardware that is outside the scope of conventional von Neumann-based computing. And different applications require different hardware. Some particularly powerful techniques, such as the long short-term memory (LSTM) cells used in recurrent neural networks (RNNs), practically demand specialized hardware compute engines – possibly on a per-application basis.
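
To see why, it helps to look at the arithmetic inside a single LSTM step. The sketch below is a minimal, illustrative NumPy version of the standard LSTM cell equations; the dimensions, weights, and sequence length are arbitrary assumptions for the sake of a runnable example, not a description of any particular vendor’s engine. Each timestep performs eight matrix-vector multiplies plus a stack of element-wise sigmoids, tanhs, and products, and each step depends on the result of the previous one.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One step of a standard LSTM cell.

    x      : input vector for this timestep, shape (n_in,)
    h_prev : previous hidden state, shape (n_hidden,)
    c_prev : previous cell state,   shape (n_hidden,)
    W, U, b: per-gate weight matrices and bias vectors
    """
    # Four gate pre-activations: each is two matrix-vector multiplies
    # plus a bias add (eight matmuls per timestep in total).
    i = sigmoid(W["i"] @ x + U["i"] @ h_prev + b["i"])   # input gate
    f = sigmoid(W["f"] @ x + U["f"] @ h_prev + b["f"])   # forget gate
    o = sigmoid(W["o"] @ x + U["o"] @ h_prev + b["o"])   # output gate
    g = np.tanh(W["g"] @ x + U["g"] @ h_prev + b["g"])   # candidate cell

    # Element-wise updates of the cell and hidden state.
    c = f * c_prev + i * g
    h = o * np.tanh(c)
    return h, c

# Arbitrary (assumed) sizes, chosen only to make the sketch runnable.
n_in, n_hidden = 64, 256
rng = np.random.default_rng(0)
W = {k: rng.standard_normal((n_hidden, n_in)) * 0.1 for k in "ifog"}
U = {k: rng.standard_normal((n_hidden, n_hidden)) * 0.1 for k in "ifog"}
b = {k: np.zeros(n_hidden) for k in "ifog"}

h = np.zeros(n_hidden)
c = np.zeros(n_hidden)
for t in range(100):                      # 100 timesteps of a sequence
    x = rng.standard_normal(n_in)
    h, c = lstm_step(x, h, c, W, U, b)    # each step depends on the last
```

That sequential dependence between steps is what makes such workloads awkward for a general-purpose von Neumann pipeline; a purpose-built engine can keep the gate matrices resident in local memory and stream the multiply-accumulate operations instead.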

Simply put, the hardware needs of AI and the hardware needs of traditional software development are diverging in a big way. And, while traditional software development will likely require only an evolutionary improvement of current conventional computing hardware, AI-specific hardware is a different game entirely, and it will be in a massive state of flux over the coming decades. While “software” will continue to ask us for faster, smaller, cheaper, more efficient versions of the same basic architectures, AI will require revolutionary new types of logic that have not been designed before.

Data science (and AI) as a field has advanced considerably ahead of our ability to put its techniques to practical use. The complexities of choosing, applying, and adjusting the AI techniques used for any particular application will be far beyond the average hardware designer’s experience and understanding of AI, and the complex hardware architectures required to achieve optimal performance on those applications will be far outside the expertise of most data scientists. In order to advance AI at its maximum rate and to realize its potential, we as hardware engineers must form deep collaborative relationships with data scientists and AI experts, similar to those we have had with software engineers for the past five decades.

This does not imply that traditional software development has no role in the advancement of AI. In fact, most current implementations of AI and neural network techniques are built in traditional software. The adaptation to custom hardware generally comes much later, after the techniques have been proven in software. But for most AI applications to reach practical usability, custom hardware is required.

The immense amount of computation required for ANN implementation, and particularly for deep neural network (DNN) implementation in real-world applications, is normally considered a cloud/data-center problem. Today, cloud service suppliers are bolstering their offerings with GPU and FPGA acceleration to assist in both the training and inferencing tasks in DNNs. But for many of the most interesting applications, such as autonomous driving, the latency and connectivity requirements make cloud-based solutions impractical. In those situations, the DNN (or, at a minimum, its inferencing) must run locally, on what are essentially embedded AI engines. Apple, for example, chose an on-board AI engine for iPhone X face recognition, keeping sensitive identification data on the device rather than sending it over the network.
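
Some back-of-the-envelope arithmetic makes the latency point concrete. The numbers below are illustrative assumptions rather than measurements of any particular network or inference engine, but they show how quickly a cloud round trip is consumed by vehicle motion alone.

```python
# Illustrative latency budget for a driving scenario (assumed numbers,
# not measurements from any particular system or network).
speed_kmh = 100.0                      # highway speed
speed_ms = speed_kmh * 1000 / 3600     # ~27.8 m/s

cloud_round_trip_s = 0.100             # assumed 100 ms network round trip
local_inference_s = 0.010              # assumed 10 ms on an embedded engine

print(f"Distance traveled waiting on the cloud: {speed_ms * cloud_round_trip_s:.2f} m")
print(f"Distance traveled with local inference: {speed_ms * local_inference_s:.2f} m")
# Roughly 2.8 m versus 0.3 m before the system can even begin to react.
```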

Over the past several years, much of digital hardware design has devolved into an integration and verification task. Powerful libraries of IP, rich sets of reference designs, and well-defined standardized interfaces have reduced the amount of detailed logic design required of the typical hardware engineer to almost nil. Our days are more often spent plugging together large pre-engineered blocks and then debugging and tweaking issues with interfaces as well as system-wide concerns such as power, performance, form factor, and cost.

But the coming AI revolution is likely to change all that. In order to keep up with rapidly evolving techniques for native hardware AI implementations, we’ll be required to break away from our dependence on off-the-shelf, pre-engineered hardware and re-immerse ourselves in the art of detailed design. For FPGA and ASIC designers, a whole new land of opportunity emerges beyond the current “design yet another slightly specialized SoC” mantra. It will be an exciting and trying time. We’ll change the world, yet again.
