
Automotive Vision

When you head out onto the road, your car is managed by a sophisticated parallel processor: your brain. That computing engine is able to do an amazing number of things all at the same time, so much so that we’re still not sure how it works.

And many things have to be done in parallel when you drive a car. Obviously, you have to be able to actuate the various controls – steering and working clutch and accelerator together. You need to observe all manner of threats around you – that pothole, the soccer ball that’s rolling into the street, those pedestrians waiting to cross (you do stop for them, right?).

In fact, the number of things we really should be looking for at all times is quite enormous. Each trip is an independent event, and the probability of something going wrong on any given trip is not changed by the fact that nothing went wrong yesterday. But our cranial parallel processor seems to have this risk diminution algorithm, particularly for routes we travel often. The more nothing happens, the less we pay attention.

And what do we do with that attention? OK, let’s not get into the phone debate – for some reason, we have created the scapephone in order to give ourselves permission to do all kinds of other distracting things. Listening to music… listening to the news… here’s a really bad one: listening to This American Life or books on tape. All of these things are specifically intended to put you cognitively somewhere – anywhere – besides on this boring road that you’re tired of seeing twice a day.

But take away all of that distraction, and what happens? Your parallel processor gets bored and starts to entertain itself with your own internal songs or political musings or that old standby – sexual fantasy. And there you are again, off somewhere else when you’re supposed to be driving. Our brains are self-distracting in the absence of other distractions. You hear advocates righteously declare that you should be 100% focused on driving all the time – yet we know that that’s not really possible; it just makes for good political grandstanding.

That doesn’t change the fact, however, that we’re barreling down the road in a big ol’ hunk o’ steel (or composite), and we can do a lot of damage to ourselves and others with it. True 100%-conscious driving would involve constantly checking our speed, noting every speed limit sign (you do see them all, right?), checking around on all sides of the car many times per minute to make sure that no idiot is lurking in your blind spot, estimating the distance to the car in front of you, monitoring how wet the road is, timing when to dim your brights due to an oncoming car, and, well, the list goes on.

It’s actually pretty remarkable how good a job we do, but our failings are frequent and obvious, whether in the dents in the cars around us, the newspaper headlines, or those red sections of the road as seen on an online map with a traffic view. That’s very often because, despite all the things we should be doing in parallel while driving, we really do very few of them. It’s just that the risk is low enough that we get away with it most of the time.

But here’s where things are changing, and changing fast. (At least, in automotive terms.) All these tasks that we need to do in parallel – which we don’t really do religiously – can increasingly be done by machines. In fact, machines can probably do a much better job than we can, but from a legal and cultural standpoint, the issue of control remains a sticking point. We’re used to controlling – and being responsible for – the operation of our vehicles, and it may be tough to change that.

So technology is chipping away at this by degrees. Ford had a big display at CES, and I had a conversation with them that illuminated some of the things they’re working on to increase safety – some of which are already in place, even in modest cars like the Focus and the Fiesta.

Just as happens when we do a conscientious job driving, driver assistance technologies (officially called Advanced Driver Assistance Systems, or ADAS) operate by observing conditions around the car and then, when needed, taking some action. We use our eyes and ears (and occasionally touch, when that rattle can be felt in the steering wheel) to tell if something’s wrong, and we use our hands and feet to take action.

Machines are at a disadvantage in that no single machine can do nearly as much as our bodies can, but, by combining multiple machines, we can build a unit that does a far better job.

When looking forward, for example, we have a number of different machine options: visible-light cameras, infrared, radar, and LIDAR. Each has strengths and weaknesses. Visible-light cameras obviously suffer in low-light conditions; radar can mistake large metallic stationary objects for other cars; and LIDAR is best for close-in detection, meaning we need something else to get an earlier warning. Each can be used for the job to which it’s best suited, however, and the inputs can be fused to draw overall conclusions about safety threats and to make other operational decisions.
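To make the fusion idea concrete – and this is purely a hypothetical sketch, not a description of how any particular ADAS supplier actually does it – imagine each forward-looking sensor reporting a range to the nearest object along with a confidence that reflects how well it’s coping with current conditions. A simple confidence-weighted combination might look something like this in C:

#include <stdio.h>

/* Hypothetical per-sensor report: range to the nearest forward object
   (meters) and a confidence in [0,1] that degrades with conditions
   (darkness for the camera, clutter for radar, distance for LIDAR). */
typedef struct {
    const char *name;
    double range_m;
    double confidence;
} sensor_report_t;

/* Confidence-weighted fusion of the individual range estimates. */
static double fuse_range(const sensor_report_t *reports, int n)
{
    double weighted_sum = 0.0, weight_total = 0.0;
    for (int i = 0; i < n; i++) {
        weighted_sum += reports[i].range_m * reports[i].confidence;
        weight_total += reports[i].confidence;
    }
    return (weight_total > 0.0) ? weighted_sum / weight_total : -1.0;
}

int main(void)
{
    /* Made-up numbers: camera struggling at dusk, radar and LIDAR healthier. */
    sensor_report_t reports[] = {
        { "camera", 42.0, 0.3 },
        { "radar",  38.5, 0.8 },
        { "lidar",  39.0, 0.9 },
    };
    printf("fused forward range: %.1f m\n", fuse_range(reports, 3));
    return 0;
}

Real systems track many objects and reason about velocities and failure modes, but the basic move – letting the healthier sensors carry more weight – is the same.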

Then there’s the question of what to do with the information. For example, a car with auto-braking can apply the brakes itself when it sees the need. As implemented today, this happens as a hard brake; in the future, it will become more subtle, with intermediate force applied first and a hard stop forced at the end if necessary.
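One way to picture that progression – again, an illustrative sketch with invented thresholds, not any manufacturer’s actual calibration – is to map the time-to-collision (roughly, range divided by closing speed) onto a brake command that ramps from nothing to full force:

/* Illustrative mapping of time-to-collision (TTC) to a brake command in
   [0,1], where 1.0 means a full emergency stop. TTC is roughly the range
   to the obstacle divided by the closing speed. Thresholds are invented
   purely for the sake of the example. */
double brake_command_from_ttc(double ttc_s)
{
    const double warn_ttc_s = 3.0;  /* above this: no intervention */
    const double hard_ttc_s = 1.0;  /* below this: full braking    */

    if (ttc_s >= warn_ttc_s)
        return 0.0;
    if (ttc_s <= hard_ttc_s)
        return 1.0;

    /* Linear ramp of intermediate force between the two thresholds. */
    return (warn_ttc_s - ttc_s) / (warn_ttc_s - hard_ttc_s);
}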

By contrast, in a car whose driver remains more in control, the driver may be informed of a looming problem, with the brakes being automatically applied only if the car decides that the driver isn’t braking fast enough. And if you’re wondering how it knows how hard you’re braking, it’s not just about calculating closing speed – there’s a sensor gauging how fast you are pushing down on the brake pedal. If you’re being too gentle, you may get some help.
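Conceptually – and this, too, is only a sketch with made-up numbers – the decision comes down to comparing the deceleration the situation demands with what the driver is delivering, and making up the difference when the pedal-speed sensor says the push is too gentle:

/* Hypothetical brake-assist decision: compare the deceleration needed to
   avoid the obstacle with what the driver is providing, and supply the
   shortfall if the pedal is being pushed too gently. */
typedef struct {
    double required_decel_mps2;  /* derived from closing speed and range   */
    double driver_decel_mps2;    /* decel implied by current pedal force   */
    double pedal_rate_mps;       /* how fast the pedal is being pushed down */
} brake_state_t;

/* Returns the extra deceleration (m/s^2) the system should command.
   The threshold is invented for illustration. */
double brake_assist_boost(const brake_state_t *s)
{
    const double gentle_pedal_rate = 0.2;  /* below this, the push is "too gentle" */

    double shortfall = s->required_decel_mps2 - s->driver_decel_mps2;
    if (shortfall <= 0.0)
        return 0.0;  /* the driver is already braking hard enough */

    /* The pedal-speed sensor says the driver isn't reacting urgently
       enough for the closing speed: make up the difference. */
    if (s->pedal_rate_mps < gentle_pedal_rate)
        return shortfall;

    /* The driver is stabbing the pedal; hold off for the moment. */
    return 0.0;
}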

Based on some of the embedded vision technology stories we’ve heard, this type of visually guided control would appear to be a work in progress. But it turns out that some of these vision-processing algorithms are surprisingly well-established – so much so that they’ve been cast into dedicated silicon since 2007, provided by a company called Mobileye.

Mobileye was founded specifically for the purpose of addressing automotive safety. They worked with STMicroelectronics to build a custom ASIC; they call it the EyeQ chip. The idea was to put in place enough horsepower to run certain specific algorithms in parallel, relying only on data from a single monocular camera – no fancy 3D stuff. The first version had four dedicated accelerators that they call “vision computation engines,” or VCEs, working along with two ARM946E cores. The specific tasks of this system included:

  • Forward collision warning
  • Automatic high-beam control
  • Lane departure warning
  • Monitoring the closing distance to the car in front of you

Alternatively, you could use the engines for pedestrian detection.
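To get a feel for what running specific algorithms in parallel on a single camera feed means structurally – this is a conceptual sketch using POSIX threads, not Mobileye’s actual firmware or programming model – picture each engine as a worker that gets handed the same frame:

#include <pthread.h>
#include <stdio.h>

/* A camera frame; contents elided for the sketch. */
typedef struct { int frame_id; /* ... pixel data ... */ } frame_t;

/* Each "engine" runs one of the per-frame tasks listed above. */
typedef struct {
    const char *task_name;
    const frame_t *frame;
} engine_job_t;

static void *run_engine(void *arg)
{
    engine_job_t *job = (engine_job_t *)arg;
    /* Stand-in for the real algorithm (collision warning, lane detection, ...). */
    printf("frame %d: %s\n", job->frame->frame_id, job->task_name);
    return NULL;
}

int main(void)
{
    frame_t frame = { .frame_id = 1 };
    engine_job_t jobs[] = {
        { "forward collision warning", &frame },
        { "high-beam control",         &frame },
        { "lane departure warning",    &frame },
        { "headway monitoring",        &frame },
    };
    pthread_t threads[4];

    /* Hand the same frame to all four workers, then wait for them all. */
    for (int i = 0; i < 4; i++)
        pthread_create(&threads[i], NULL, run_engine, &jobs[i]);
    for (int i = 0; i < 4; i++)
        pthread_join(threads[i], NULL);
    return 0;
}

On the real chip the work is done by dedicated hardware engines rather than OS threads, but the shape of the problem – one frame in, several independent analyses out – is the same.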

While this chip was largely a proof of concept, it was used in various BMW, Cadillac, and Volvo models.

The EyeQ2 was the second version; they claim that it has six times the processing power of the original EyeQ. It has five VCEs, three “vector microcode processors” (VMPs), and two MIPS34K processors. This bulked up the number of things that could be done in parallel. (Note the switch from ARM to MIPS, a decision that was based on old-school benchmarking.) This chip has seen more commercial uptake, shipping in the millions of units.

And a third generation, the EyeQ3, is in development now. It adds another VMP and two more MIPS cores, along with faster clocks and memory and all of the support circuitry, resulting in another six-fold increase in processing power. This version will ship in 2014.

The main point here is that, while many vision algorithms are still finding their way via soft implementation in order to facilitate frequent changes and updates, Mobileye is on their third generation of dedicated hard silicon. (Obviously parts of it are highly programmable, but key algorithms have been hardened.) Note that they didn’t go down the custom route just to be stubborn; they couldn’t (and claim they still can’t) find off-the-shelf technology (even silicon IP) that would do what they needed with the performance they wanted. At such time as they find something that works for them, they say they’re happy to stop doing a custom solution.

So while Ford puts the entire car together, tying the sensing technology to the braking response (for example), they’ve been working with Mobileye to handle the vision portion. Such alliances seem likely to proliferate as specialist companies develop increasingly arcane algorithms and as auto manufacturers turn more and more into systems integrators.

We can also start refining some of the cruder operations we have. Ford is talking about “partial high beams” – shutting down the high beams only on the left side for the sake of an oncoming driver, leaving the right side bright. Or bit-mapping the mirrors in a way that allows images to be projected on them. This is more than just having the car automatically do more of what we do now; it’s changing what we do.
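A crude way to think about that partial-high-beam decision – a hypothetical sketch only; real matrix-beam headlights mask much finer segments and track multiple vehicles – is to use the detected bearing of the oncoming car to decide which side of the beam to drop:

#include <stdbool.h>

/* Which halves of the high beam to leave on, given a detected oncoming
   vehicle. Bearing is degrees off the car's centerline: negative = left.
   Purely illustrative. */
typedef struct { bool left_on; bool right_on; } high_beam_state_t;

high_beam_state_t partial_high_beam(bool oncoming_detected, double bearing_deg)
{
    high_beam_state_t beams = { true, true };  /* default: both sides bright */

    if (!oncoming_detected)
        return beams;

    if (bearing_deg < 0.0)
        beams.left_on = false;   /* oncoming traffic on the left: dim that side */
    else
        beams.right_on = false;
    return beams;
}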

And at some point, the cars are likely to drive us. We know it can be done – heck, some states have already specifically OKed the concept. And we know Google is doing it (they’re probably looking to cut costs for generating street views – driverless cars don’t ask for raises or stop for pizza). Of course, we have to resolve who’s responsible if things get banged up.

I have to say, given the amount of driving I’ve done over the last 12 months, with several road trips of significant distance, that I’d be OK with having the car take over the driving (cruise control seems so crude by comparison). Except that such a car would probably hew religiously to the speed limit. Not that I ever exceed it myself… just sayin…

So it’s pretty clear that, given enough time, our biological parallel processors are going to get a break, with numerous other parallel systems taking over. These new widgets will pay strict attention all the time, never getting sleepy, becoming bored, or fantasizing about the donuts in the shop you just passed. And they will remain completely immune to both Ira Glass’s low-key vocalizations and that super-cool groove going on in the background.
