
Driver’s Ed for FPGAs

Will Altera FPGAs Drive Your Future Audi?

Letting go of the steering wheel for the first time will be a terrifying milestone for most drivers. As engineers, we have all known for years that self-driving and assisted-driving cars were coming, and as a group we have a unique appreciation for the myriad challenges – both technical and social – that lie between us and safer roads. 

On the technical side, it is clear that a robust, safe self-driving system requires the aggregation of massive amounts of data from a diverse array of sensors, and the software that processes those inputs will be complex, performance-demanding, and in a high state of flux for many years. That means we need an unfortunate combination of massive sensor aggregation bandwidth, raw data processing, and algorithmic compute performance that cannot easily be delivered by any current combination of conventional processors and ASSPs.

It’s time to take some FPGAs to driving school. 

As this article goes to virtual press, an Audi A7 is self-driving its way along a 550-mile route from the San Francisco Bay area to the 2015 Consumer Electronics Show (CES) in Las Vegas. So, if you’re sitting back in the safety of your lab chair thinking that all this is a rhetorical exercise for some unlikely future scenario – well, welcome to the future. 

Advanced Driver Assistance Systems (ADAS) are the next big wave in automotive technology. ADAS features range from automatic emergency braking, adaptive cruise control, and lane departure warnings all the way to full-blown auto-drive capabilities. Most systems rely on a distributed architecture – a different module is added for each new capability, and those electronic control modules communicate with each other over various networking standards. Audi, however, is using a centralized control box that aggregates and processes all sensor data for all of the various ADAS features such as parking, night vision, lane departure, and even fully automated driving, which Audi calls “piloted driving.”

For piloted driving, there are three primary sensor types doing the heavy lifting – vision (cameras distributed in various locations around the car), radar, and laser. Signal data from all of these sensors has to be processed, filtered, aggregated, and put into a form where an applications processor running the higher-level algorithms can quickly determine context (where the vehicle is and what it is doing) and apply it to the task at hand – driving the car. 
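As a toy illustration of the fusion idea – not Audi’s actual algorithm, and with all numbers invented – combining a noisy radar range estimate with a vision-based one can be sketched as simple inverse-variance weighting:

```python
# Toy sensor fusion sketch: fuse two noisy range measurements (meters)
# by weighting each by the inverse of its variance. The more confident
# sensor dominates the fused estimate.

def fuse_ranges(radar_m, radar_var, vision_m, vision_var):
    """Fuse radar and vision range estimates by inverse-variance weighting."""
    w_radar = 1.0 / radar_var
    w_vision = 1.0 / vision_var
    fused = (w_radar * radar_m + w_vision * vision_m) / (w_radar + w_vision)
    fused_var = 1.0 / (w_radar + w_vision)  # fused estimate is more certain
    return fused, fused_var

# Radar reads 42.0 m with low variance; vision reads 43.0 m, less certain.
fused, var = fuse_ranges(42.0, 0.25, 43.0, 1.0)
print(round(fused, 2))  # 42.2 -- pulled toward the more confident radar
```

A real system tracks position and velocity over time (typically with a Kalman filter), but the principle is the same: each sensor’s contribution is weighted by how much it can be trusted.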

Altera and Audi have just announced that Altera SoC FPGAs are being used to perform these critical functions in Audi’s new zFAS central driver assistance control unit. SoC FPGAs – or, as we call them, Heterogeneous Integrated Processing Platforms (HIPPs) – bring a unique combination of programmable LUT fabric and high-performance conventional processors, all on one chip. This makes the HIPP an extremely fast, flexible, heterogeneous processor that can do complex compute-intensive tasks with very low power consumption.

The Audi zFAS is being jointly developed by Audi and TTTech. They are using Altera Cyclone V SoC FPGAs, which combine dual ARM Cortex-A9 processor cores, FPGA fabric, DSP blocks, and flexible programmable IO. The Cyclone device is doing sensor fusion – processing the radar and video streams together – as well as implementing a deterministic, time-triggered Ethernet switch that enables reliable high-speed communication between the various subsystems.
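The time-triggered idea itself is simple enough to sketch in a few lines: every message owns a fixed slot in a repeating communication cycle, so worst-case latency is known at design time rather than left to best-effort arbitration. The cycle length and slot layout below are invented for illustration:

```python
# Sketch of time-triggered scheduling, the concept behind a deterministic
# Ethernet switch: a fixed, repeating cycle of message slots decided at
# design time. Slot offsets and message names here are made up.

CYCLE_US = 1000  # one communication cycle, in microseconds
SCHEDULE = [     # (offset_us, message), fixed at design time
    (0,   "radar_frame"),
    (250, "camera_frame"),
    (500, "lidar_frame"),
    (750, "control_cmd"),
]

def slot_for(time_us):
    """Return which message owns the slot at a given point in time."""
    phase = time_us % CYCLE_US  # position within the repeating cycle
    current = None
    for offset, msg in SCHEDULE:
        if phase >= offset:
            current = msg  # last slot whose offset we have passed
    return current

print(slot_for(1300))  # camera_frame (1300 % 1000 = 300, inside the 250us slot)
```

Because the schedule is static, every subsystem knows exactly when its data will arrive – a property that best-effort switched Ethernet cannot guarantee.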

In the fast-moving ADAS world, scalability and reconfigurability are critical. New algorithms and updated sensors and displays arrive at a dizzying pace, and the flexibility of FPGAs is required to adapt to the various configurations of hardware as well as to the various levels of features demanded by different auto models. This makes ADAS a “killer app” for FPGAs, and particularly HIPPs such as the Cyclone V SoC FPGA. 

As with many compute-acceleration applications of FPGAs, the big challenge for system designers is programming the FPGA. This is where higher-level languages and more abstract design methodologies such as model-based design, high-level synthesis, and other algorithmic design flows come into the picture. In Altera’s case, the Cyclone V SoC FPGA can be programmed using Altera’s OpenCL implementation. Designers can write OpenCL code (much the same as would be written for a GPU-based implementation) and compile it into a high-performance FPGA implementation. In the ADAS arena, Altera wrote a dense optical flow algorithm in OpenCL and deployed it on the Cyclone SoC FPGA. The company says development required less than three weeks, whereas an RTL implementation would have taken several months. The resulting design consumed approximately 55K LUTs – half the fabric of a 110K-LUT Cyclone FPGA.
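Altera’s OpenCL source isn’t public, but the flavor of a dense optical flow computation can be shown with a brute-force block-matching variant, sketched here in plain Python rather than OpenCL. Per-pixel loops like these are exactly the kind of data-parallel work that an OpenCL compiler unrolls into FPGA fabric:

```python
# Illustrative (not Altera's) dense optical flow via block matching:
# for each pixel, find the displacement within a small search window
# that minimizes the sum of absolute differences (SAD) between patches
# of two consecutive frames.

def sad(a, b, ax, ay, bx, by, r):
    """SAD between the (2r+1)x(2r+1) patches of a at (ax,ay) and b at (bx,by)."""
    total = 0
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            total += abs(a[ay + dy][ax + dx] - b[by + dy][bx + dx])
    return total

def flow_at(prev, curr, x, y, search=1, r=1):
    """Best (dx, dy) displacement for pixel (x, y), by brute-force search."""
    best = (0, 0)
    best_cost = sad(prev, curr, x, y, x, y, r)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cost = sad(prev, curr, x, y, x + dx, y + dy, r)
            if cost < best_cost:
                best_cost, best = cost, (dx, dy)
    return best

# A bright feature moves one pixel to the right between frames.
prev = [[0] * 6 for _ in range(6)]
curr = [[0] * 6 for _ in range(6)]
prev[2][2] = 255
curr[2][3] = 255
print(flow_at(prev, curr, 2, 2))  # (1, 0)
```

Running this at every pixel of a video frame is enormously parallel – each pixel’s search is independent – which is why the algorithm maps so naturally onto FPGA fabric via OpenCL.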

For those who may be wondering why the humble Cyclone V is being tapped for this application when the company produces much more capable devices such as the mid-range Arria and high-end Stratix device families, there are actually several answers. First, Cyclone V is able to meet the performance requirements of these first-generation ADAS applications. Second, Cyclone is the only automotive-qualified family in Altera’s lineup, and the wire-bond packaging of Cyclone is the current standard in the automotive world – versus the more sophisticated flip-chip packaging used in higher-end FPGAs. Finally, even in a system as expensive as an automobile, BOM cost is a huge barrier. The cost structure of even high-end luxury automobiles doesn’t lend itself to the use of Stratix-class FPGAs.

As we go to press, the Audi A7 demo vehicle (dubbed “Jack”) has successfully completed its voyage from Silicon Valley to Las Vegas for the 2015 Consumer Electronics Show. Jack employs long-range forward radar along with rear- and side-facing radar sensors. The radar is backed up by a laser LIDAR scanner on the front as well as a front-mounted 3D camera and four additional cameras at the corners of the car. The zFAS system’s FPGAs were most likely scooping up plenty of data from the desert floor as the A7 made its way toward Sin City.

Elon Musk (Tesla) is on record saying that self-driving car technology will be ready for production by 2016. (Yep, that’s next year, folks.) Even if Musk’s estimate turns out to be overly optimistic, we’ll undoubtedly be sharing the roads with robots in the not-too-distant future, and FPGAs will be a big part of that equation. It should make the world a safer and happier place.


Image: Audi

9 thoughts on “Driver’s Ed for FPGAs”

  1. It helps that Altera has worked with TÜV Rheinland to gain IEC 61508 safety certification for devices, IP, development tools, and design flow – giving them a flying start on safety.

    But I still have my doubts about roads full of driverless cars.


