“You’re safer in the race car than you are driving to and from the track.” – Mario Andretti
I’ll make you a bet.
Let’s flip a coin, and if it comes up heads, I’ll pay you $100. But if it comes up tails, you owe me $100. Sound fair?
Most people won’t take that bet. Even though it’s obviously fair and unbiased, with an exactly equal chance to win or lose, and has no hidden risks or unknown hazards, people overwhelmingly refuse to play the game. In fact, you’ve got to double the payoff to $200 (that is, heads: you win $200, tails: I win $100) before even half of the population will accept the deal.
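For what it’s worth, the arithmetic behind that reluctance is easy to check. Here’s a minimal sketch (illustrative only) of the expected value of each version of the bet:

```python
# Expected value of a bet: sum of probability * payoff over all outcomes.
def expected_value(outcomes):
    return sum(p * payoff for p, payoff in outcomes)

fair_bet  = [(0.5, +100), (0.5, -100)]   # heads: win $100, tails: lose $100
sweetened = [(0.5, +200), (0.5, -100)]   # heads: win $200, tails: lose $100

print(expected_value(fair_bet))    # 0.0  -- a perfectly fair game
print(expected_value(sweetened))   # 50.0 -- yet only about half of us take it
```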
What this says about us is that we’re more averse to risk than we are enticed by reward. We’re more afraid to lose than we are eager to win. We are naturally conservative, in the original sense of the word.
That’s how we feel about the risk of self-driving cars. Most people are vaguely terrified by the idea, especially if they don’t work in technology, and most especially if they don’t work for Waymo, Tesla, Audi, Intel, Lyft, or a dozen other firms betting big on autonomous-vehicle technology.
Nearly all the fear and loathing centers around safety. The populace wonders, “What if the computer crashes my car?” instead of, “How will this lower my insurance premiums?” We’re collectively conjuring bizarre nightmares of being trapped in an out-of-control vehicle as it hurtles toward a busload of nuns and orphans. We’ve seen how often our PCs crash. Who wants to apply that technology to the family minivan?
And yet, such fears are totally misplaced. Irrational, even. Last year, more than 37,000 people died in car crashes in the United States alone (more than 1.3 million worldwide). That’s more than 100 fatalities per day; more than four per hour. A traffic death every 15 minutes, just in this country! We could hardly do worse than that.
Nearly half of those accidents are caused by inattention, fatigue, or impairment (usually through alcohol). Given that self-driving computers don’t get drunk, sleepy, or bored, we could slash the fatality rate in half without even trying hard. If a new computer upgrade could save 18,000 people per year, don’t you think we’d want it? Demand it, even? Most nations have mandated airbags, seat belts, 5-MPH bumpers, and (in Europe) taller hoods, all in the name of improved safety. Those improvements, while important, didn’t come anywhere close to halving the traffic injury rate. So why aren’t we clamoring for more autonomy in our vehicles? Where are the lines of protesters in front of the DOT or NHTSA buildings?
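A quick back-of-the-envelope check, using only the round numbers above, confirms those rates and the roughly 18,000 lives a 50% cut would save:

```python
# Rough sanity check of the fatality arithmetic above (round numbers, not official stats).
deaths_per_year = 37_000

per_day  = deaths_per_year / 365          # ~101 deaths per day
per_hour = per_day / 24                   # ~4.2 deaths per hour
minutes_between = 24 * 60 / per_day       # one death roughly every 14 minutes

saved_if_halved = deaths_per_year // 2    # ~18,500, rounded down to 18,000 above

print(round(per_day), round(per_hour, 1), round(minutes_between), saved_if_halved)
# 101 4.2 14 18500
```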
Until a teenager reaches their early twenties, the most dangerous thing in their life is their own car. Teen fatalities aren’t caused by drugs, disease, or guns. It’s cars – usually their own, usually while they’re driving. Most teenagers are – surprise! – lousy drivers. They’re too distracted, too inattentive, and too inexperienced. Replacing a hormonal teen with an insensitive droid sure looks like a reasonable solution. It seems criminally negligent not to.
Yet we cling to our old ways, like consulting a mechanical wristwatch instead of a cheap, hyper-accurate, digital clock. Or eating artisanal organic quinoa salad instead of space food sticks. We like the fact that, when we’re driving, we feel like we’re in control. Even if that control is illusory and lethally unreliable. It’s the same effect as sitting in the passenger seat. As any married person can tell you, we feel less safe riding than driving. When we’re holding the wheel, we’re notionally in control of our own destiny. In the backseat, we’re just a passenger, our fate in someone else’s hands. That’s partly why rollercoasters, boats, and airplanes make us queasy: we’re not in control and can’t anticipate changes in direction, speed, or attitude. Now imagine every car trip feeling like that.
Autonomous vehicles will absolutely, positively, lower the traffic accident rate for the simple reason that they won’t be allowed on public roads until they do. A self-driving car that’s just as good as a person might be an amazing technical feat, but it isn’t nearly good enough to sway the public. In fact, the head of the NHTSA has decreed it so. “We need to start with two times better. We need to set a higher bar if we expect safety to actually be a benefit here.” Sounds like he’s played the coin-flip game.
Psychologically, we’re willing to accept risk if we think it’s our risk – that we’re going into it with our eyes open. (How else to explain lottery ticket sales?) But we’re terrible at estimating actual risk. You’re 300 times more likely to be killed by a deer than a shark, but which one gets the cute Disney movie treatment? We all think we’re a better driver than the one buried inside the dashboard, so, even though we’re provably wrong, we stick to our decision. At this point, Capt. Kirk might intone solemnly, “It’s what makes us… human.”
Of course, there will still be accidents among auto-driven cars, no matter how far the technology evolves or how good it gets. Stuff happens. And the first handful of such accidents are guaranteed to get outsized attention and media coverage. Lost amid all the Facebook-fueled outrage will be the precipitous drop in DUIs. First thousands, and then tens of thousands, of people will be saved. But we’ll never know who they are or when it happens. Did you dodge a bullet on the way to work today, dear? No way to know.
The first automobiles reportedly required two men to walk in front waving red flags that warned horses and pedestrians of a large, heavy, and unreliable vehicle heading their way. (You’d think the noise and the smell would have been warning enough.) Now, developers are contemplating colored lights for autonomous vehicles that signal the car’s intentions. Pedestrians get weirded out when a driverless car approaches a crosswalk. Will the machine stop for me, or am I fated to be the first casualty of the Great Robot Uprising of 2019?
We’ve got a couple of years to figure that out – but not much more. Audi will introduce its first (and possibly the industry’s first) Level 4 vehicle in about a year. Daimler-Benz, Tesla, and Honda aren’t far behind. And Nissan says it will have fully autonomous Level 5 vehicles (no steering wheel or pedals at all) sometime around 2025.
That gives us about eight years to stock up on red flags.
So Jim, still stand by this? “Autonomous vehicles will absolutely, positively, lower the traffic accident rate for the simple reason that they won’t be allowed on public roads until they do. A self-driving car that’s just as good as a person might be an amazing technical feat, but it isn’t nearly good enough to sway the public. In fact, the head of the NHTSA has decreed it so. “We need to start with two times better. We need to set a higher bar if we expect safety to actually be a benefit here.” Sounds like he’s played the coin-flip game. ”
The Uber car killed in a way that a human would not … unless the human was drunk or otherwise impaired. So clearly, we are allowing a self-driving car on the streets that doesn’t meet your “they won’t be allowed on public roads until they do” standard.
Yup, I do. Sober drivers hit pedestrians all the time. It’s not even news (except to the bereaved families). Same goes for solo drivers hitting immovable objects. A day doesn’t go by when that doesn’t happen, too. As gruesome and regrettable as these recent accidents are, they’re not even a drop in the bucket compared to the human-caused accidents that happen every few seconds.
So you accept the Uber and Tesla deaths this year as collateral damage for what is supposed to be a “better system,” without proof that it’s actually better than a human? When in both cases a good driver would probably not have killed the pedestrian, or driven into the barrier?
The statistics do not support, with a reasonable margin of error, the claim that these systems really are safer yet. Yet they are out there.
My father had a minor heart attack in 1993. When he arrived in the hospital, they administered a blood thinner that had been shown (in clinical testing) to reduce the mortality rate for heart attacks like his by something like 80%. It also carried about a 1% risk of causing brain hemorrhage and death. I got the call at about 3AM telling me he was among that 1%, he had suffered a massive hemorrhage, and he died shortly thereafter. He was killed by the drug, not by his heart attack (which he would have almost certainly survived with no issues). Making safety-related decisions based on probabilities and statistics can be brutal.
Except we are a long way away from being able to use probabilities and statistics to present a solid case, with a verifiable margin of error, that these systems are actually ready for safe on-road deployment.
The first part is that the sensor system clearly has significantly less resolution than is necessary to reliably recognize small objects (like children, or even adults presenting a minimal cross-section) within the stopping distance of the car (see the stopping-distance sketch after this list).
The second part is that the control-system AI, based on deep neural networks, has clear classification-error and training issues that are not well characterized by probabilities, and WILL fail without warning.
The third part is that there doesn’t exist a reliable alternative decision process that can handle the sensor data in real time and provide an override to correct for classification and training errors.
The fourth part is that there doesn’t exist extensive off-road testing of carefully constructed edge cases, with demonstrably proven success at doing the right thing when similar situations are presented on the street.
The fifth part is that the warm fuzzy of human backup is an outright lie, as research has shown that the time needed for a reliable human takeover well exceeds the time available within stopping distance.
The sixth part is that the corporate greed factor is huge, with Uber/Lyft poised to claim greater than $40B/year by removing driver wages from their taxi services … THEY ARE WILLING TO RAPIDLY SETTLE deaths as insignificant collateral damage … are your families’ lives, your employees’ lives, your friends’ families’ lives JUST COLLATERAL DAMAGE?
The seventh part is that the Golden Handcuffs for engineers who pervert safety and say YES go way past the engineering-responsibility assertions YOU and YOUR PUBLICATION have made on other topics. In a corporate world where an engineer who says NO is replaced promptly, there is a significant accountability problem. In a corporate world where the media watchdogs are significantly dependent on ad revenue from the industry selling these systems, something is broken.
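To put the first part in rough numbers, here is a minimal stopping-distance sketch; the processing latency and deceleration are assumed, illustrative values, not measurements from any actual vehicle:

```python
# Rough stopping-distance estimate: the perception stack must reliably classify
# a pedestrian-sized object at least this far out, or braking can't help.
# All numbers below are illustrative assumptions, not measured system parameters.

def stopping_distance_m(speed_mph, reaction_s=1.0, decel_mps2=7.0):
    v = speed_mph * 0.44704                   # mph -> m/s
    return v * reaction_s + v * v / (2 * decel_mps2)

for mph in (25, 40, 65):
    print(f"{mph} mph -> ~{stopping_distance_m(mph):.0f} m needed")
# 25 mph -> ~20 m, 40 mph -> ~41 m, 65 mph -> ~89 m (approximate)
```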
I can keep going … autonomous deployment on the streets is irresponsible at this stage of testing.
Get some responsible engineers into the test certification process, and regulation process, and stop this corporate greed from killing.
And to put your post about your dad in perspective … the drug saved the lives of 99% of the people facing death, and yes, the 1% still die. The FDA studies required for approval are rigorous and subject to peer review and oversight. We really know the safety and risks of FDA-approved drugs well before they are released beyond clinical trials.
In the self-driving car case, there is not a clear statistical case that the technology will save lives. There is NO rational review and regulatory framework to even clearly say what the risks and benefits are for the EXISTING technology with its clear critical flaws. Just a clear “trust me, while WE make $40B by firing millions of drivers.”
And in your dad’s case, he and the family had the ability to read the disclosures and make the decision to accept or reject the risks.
And in the Uber victim’s case, where did she get the chance to read the disclosures and make a decision to accept or reject the public risk that some Uber engineering failure would run her, or her children, down?
Where is the complete disclosure to the public about testing? About risks? About the greater-good benefit to society of these faulty, poorly tested, poorly engineered self-driving cars?
Where is the public debate over allowing Uber/Lyft a $40B windfall while they put millions of taxi and Uber/Lyft drivers out of work … with a product that kills?
You and your writers really wanted to hang software developers as irresponsible for poor testing that resulted in basic inconveniences for products that are 99.999% fully functional for their given market tasks.
And you and your writers give the engineers who release cars that fail basic design points and kill third parties a complete pass … and no accountability.
Hypocrisy?
And yes, I’m holding you accountable for maintaining your own editorial standards, in a fair and even handed way.
Actually, making safety-related decisions based on probabilities and statistics is hardly brutal. What’s brutal is trusting lies cloaked in false statistics and probabilities. The whole deaths-per-mile lie from Tesla, backed up by engineers who should know better, is a brutal awakening to just how deceitful engineers with Golden Handcuffs, or with blind faith in some greater technological good, have become.
First, let’s address the Tesla-vs.-general-population lies. The quoted statistics come from two extremely disjoint, uncorrelated population groups, and do not form truths when fraudulently combined. Most automotive deaths in the general population come from the “under 25” and “over 65” age groups. Those age groups are practically nonexistent in the Tesla owner population.
https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/810853
Then let’s address risk-taking in these two populations as a function of the cars being driven. It’s a LOT easier to take risks when driving/racing a $2,000 disposable beater, and a lot harder when the car is your $100,000 once-in-a-lifetime dream car.
Then let’s address driving environments. The general-population death numbers include adverse driving conditions, while Autopilot sees limited use in only the most favorable conditions; I strongly suspect it’s rare for a Tesla owner to enable Autopilot during adverse driving conditions.
https://ops.fhwa.dot.gov/weather/q1_roadimpact.htm
To be able to assert facts using statistical relationships, one needs a reasonably large sample size and a valid standard deviation, so that the resulting confidence interval gives some defensible assurance that the samples actually are representative of the population. The existing handful of self-driving-car and Tesla deaths is far too small a sample to provide that defensible statistical position.
https://en.wikipedia.org/wiki/Confidence_interval
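As a rough sketch of that point (the mileage and death counts below are placeholder assumptions, not actual Tesla or Uber figures), an exact Poisson confidence interval shows how wide the uncertainty remains when only a handful of deaths have been observed:

```python
# Rough illustration of how little a handful of fatalities can tell us.
# The mileage and death counts below are placeholder assumptions, NOT real figures.
from scipy.stats import chi2

def poisson_rate_ci(deaths, miles, conf=0.95):
    """Exact Poisson confidence interval for deaths per 100 million miles."""
    a = 1 - conf
    lo = chi2.ppf(a / 2, 2 * deaths) / 2 if deaths > 0 else 0.0
    hi = chi2.ppf(1 - a / 2, 2 * (deaths + 1)) / 2
    scale = 1e8 / miles
    return lo * scale, hi * scale

# Hypothetical: 3 deaths over 200 million "autopilot-style" miles
print(poisson_rate_ci(3, 200e6))   # roughly (0.3, 4.4) deaths per 100M miles

# The human baseline is around 1.2 deaths per 100M vehicle miles, which sits
# comfortably inside that huge interval -- the data cannot yet distinguish
# "safer than a human" from "several times worse".
```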
Let’s put these lies in another form … to assert that Autopilot is safer, when it’s rarely or never enabled during high-risk situations, is like claiming that the safest driver is the one who has never driven, since they’re not responsible for any accidents at all.
“Safer based on general-population statistics” means constructing a trial where a large population of self-driving cars mirrors the everyday driving conditions of US drivers … from Alaska to Florida, from Los Angeles to New York … every day, good weather and bad. Then, after a year or two, you can make meaningful comparisons between the two US populations: real drivers, their accidents, and their deaths … and real self-driving cars, their accidents, and their deaths.
Cherry-picking numbers from two disjoint populations is a fraud.
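As a purely invented illustration of why pooling two disjoint populations misleads (the shares and rates here are made up to show the arithmetic, not to describe Tesla’s actual fleet):

```python
# Invented numbers to illustrate the confounding argument, not real data.
# Each entry: (share of miles driven, fatality rate per 100M miles).
general_population = {
    "under 25": (0.20, 2.5),
    "25-65":    (0.65, 0.8),
    "over 65":  (0.15, 2.0),
}
tesla_like_owners = {           # mostly the low-risk middle group
    "under 25": (0.02, 2.5),
    "25-65":    (0.95, 0.8),
    "over 65":  (0.03, 2.0),
}

def pooled_rate(pop):
    return sum(share * rate for share, rate in pop.values())

print(pooled_rate(general_population))  # ~1.32 deaths per 100M miles
print(pooled_rate(tesla_like_owners))   # ~0.87 deaths per 100M miles
# Identical per-group risk in both populations, yet the pooled rates differ by
# roughly 50% purely because of who is driving -- no technology required.
```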
Cars are dangerous. Rocks are dangerous. Plants and trees and flowers are dangerous if eaten in immoderate amounts. Sitting at home doing nothing is dangerous. There is no such thing as safe. There are only levels of risk, which we are poorly equipped to understand or estimate. (Deer are statistically more dangerous than sharks, for example.) If we stopped developing cars after the first auto accident, where would we be? If space travel had stopped after Apollo 1, we’d never have reached the moon. We can count the bad accidents involving autonomous vehicles on our fingers, and in most (all?) cases the human driver ignored warnings. That’s *way* better than the human-driven cars are doing, even accounting for the relatively tiny number of passenger-miles for the former versus the latter.
When a “normal” car crashes, we don’t think anything of it. Par for the course; drunk driver; idiot BMW owner; slick road; driver probably fixing her makeup; whatever. We tell ourselves we’d never do that. When a CPU-controlled car crashes, we freak out. Human failures are routine, but machine failures are creepy.
Obviously, developers need to be careful creating and testing these things, and they are. Nobody’s doing nightly software builds and throwing them out in the street for debugging. But asking for a 0% accident rate is fanciful, especially when the alternative (human drivers) has such a lousy safety record. We’ve set a low bar, and so far the robots are hurdling it. They’re not perfect, just better. It’s early days.
YOU ARE RIGHT … it is the early days. There is no good data, and there are lots of significant known issues that affect public risk … not just for the driver, but for kids walking along roads where no sidewalks exist.
Put this on a ballot, clearly inform the public, and let’s make some really rational decisions about the public safety that is at risk of becoming collateral damage to corporate greed.
And when a distracted driver kills, they go to jail.
It’s time for CEOs to go to jail when their products kill.
The Uber crash that killed was an ENGINEERING failure, not an accident.
The car DID NOT try to stop.
The car DID NOT try to avoid.
The car DID NOT try to slow.
The car just blindly drove right into her.
That is a complete engineering failure.
That is a complete public policy failure.
That is technology that IS NOT READY FOR PUBLIC ROADS.
If the system responded as designed, and the death still occurred, that would be an accident.
When the system fails in every regard, that is faulty engineering that is NOT READY for deployment.
There was a complete FAILURE by the greedy corporation to properly test for a clear case that should work … that is a corporate failure that needs to be held to full account.
Accidents I accept … poor engineering that places 3rd party public at risk, I do not accept.
NOR SHOULD YOU.
NOR SHOULD ANY READER OF THIS FORUM REGARDING THIS ENGINEERING FAILURE
This publication wanted to hold software engineers accountable for their coding errors, when the rush to market was a corporate management issue.
These self driving car deaths are a corporate management issue.
I appreciate the spirited dialog on this controversial issue, and the willingness to take on unpopular but statistically correct viewpoints, but with “Sitting at home doing nothing is dangerous” you lost me.
Also, do families of those impacted by AI-induced deaths have a legal right to the test data for the revision of hardware and software involved before the firmware was released?
Perhaps if the source code and a “hardware in the loop” lab environment were readily and legally available to reproduce the incident on an “iron bird” in a simulated environment with recorded sensor data from the accident, AI IP owners would be more careful with their releases into the wild.
Going one step further, imagine a world where automaker regulations were written such that an after-the-fact process of the AI algorithms becoming open source was the normal engineering follow-up to robot-car-induced deaths; the public would likely be much more able and willing to swallow these “price of progress” tragedies, as iterative solutions and best practices would evolve.
Paraphrased: decreasing auto fatalities from the typical culprits will likely require investors, managers, and lawmakers to task engineers with straightforward approaches to letting Joe Public in on the AI testing, and not just the NTSB.
This pragmatic approach to engineering-test transparency might even turn the tide on hormonal teenagers wasting all their time on video games, as they could see an alternative: joining the fight to make better soft/firm/hardware that saves lives through cohesive, decoupled modules that can be thoroughly tested by any and all.
And the Tesla death last month has similar issues, except that the driver made the choice to place himself at risk by using the technology … and thankfully that decision didn’t kill a third party, although it clearly could have caused a multiple-car accident that would have. Thankfully that didn’t happen … THIS TIME.
The car left the intended lane … an engineering failure resulting in death, whereas Tesla states that thousands of times before, the car made the right decision and correctly followed the lane. I’d like to see the numbers on that one.
The car then purposefully headed straight into a barrier, without invoking any avoidance alternative by taking either of the clear lanes on either side of its path … an engineering failure resulting in death.
The car failed to brake or slow prior to impact … an engineering failure resulting in death.
Again, three engineering failures, all leading to death. All could have led to the deaths of several, or dozens, of third parties if the impact had started a multiple-car pileup.
What if a school bus with 60 young kids had been caught in the pileup, rolled violently, and killed or maimed many of the kids?
Thankfully that didn’t happen THIS TIME … I’m not ready to wait for it to happen ANYTIME.
Three engineering failures leading to the death of the driver … three engineering failures that could have led to many more deaths of third parties who were not part of the risk decision.
Earlier this month, Wally Rhines of Mentor was quoted as saying that validation and verification of autonomous cars could require as many as nine billion miles of test driving. He went on to do the math for us – that a fleet of 300 continuously-operating test vehicles would take something like 50 years to rack up that many miles. Of course, he’s advocating large-scale system simulations to solve that problem.
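His arithmetic holds up as an order-of-magnitude estimate; here is a quick sanity check (the continuous average speed is my assumption, chosen only to show the scale):

```python
# Sanity check on the 9-billion-mile estimate (average speed is an assumed value).
total_miles   = 9e9
fleet_size    = 300
avg_speed_mph = 65                    # assumed continuous-operation average

miles_per_car = total_miles / fleet_size          # 30 million miles each
hours_per_car = miles_per_car / avg_speed_mph
years_per_car = hours_per_car / (24 * 365)

print(f"~{years_per_car:.0f} years of round-the-clock driving per car")
# ~53 years -- in the same ballpark as the quoted 50
```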
I think the deep-learning aspect of autonomous vehicles is a double-edged sword. Where conventional software allows a pretty direct forensic analysis, a bunch of coefficients on NN nodes is much harder to diagnose definitively. You won’t get the “Aha! Line 206 should be ‘>=’ rather than ‘>’” moment. So, while AI may perform “better,” it is definitely much harder to hold accountable.
Ethically, I think that autonomous vehicles should be deployed the instant they are “safer” than human drivers. But I think our ability to create a “safer” platform and our ability to determine whether or not it actually is “safer” are very disconnected. Right now, my sense is that we are better at building than we are at assessing and verifying, but I’m not an expert.
Right now, I’d personally feel more comfortable stepping into the street in front of a human-driven car than an autonomous one. That is unsettling.
I would buy Wally’s assessment … and also note that there is a rational path to get there.
First, allowing the vehicle to assume control *IS* the fatal mistake in testing this technology, since the time required to reliably hand control back to the driver far exceeds the braking time for a pending accident.
The ONLY rational path is to implement the fully autonomous feature, during testing, as a completely passive feature like “lane departure warning systems,” and then log, in full sensor detail, every state where the driver chose to take a different path than the fully autonomous system had planned. That data then needs to be fed back into the simulators until there is a reasonable consensus between the human and automated responses.
ONLY when correct human overrides approach zero is the system at least nearly as safe as the human. While the system is still regularly making poor choices, testing, data collection, and analysis are still mandated.
This provides a fully verifiable safety assessment that can be independently audited and used for formal approval of a fully autonomous handoff to the public.
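A minimal sketch of what that passive, log-only “shadow mode” might look like in code (the data structures, field names, and thresholds are invented for illustration; no vendor’s actual software is implied):

```python
# Sketch of passive "shadow mode" testing: the autonomy stack plans but never
# actuates; every control cycle where the human's action diverges from the plan
# is logged with the full sensor state for later replay in simulation.
# All names and thresholds below are illustrative assumptions.
from dataclasses import dataclass, asdict
import json

@dataclass
class Action:
    steering_deg: float
    brake: float      # 0..1
    throttle: float   # 0..1

def diverges(planned: Action, human: Action,
             steer_tol_deg: float = 5.0, pedal_tol: float = 0.2) -> bool:
    return (abs(planned.steering_deg - human.steering_deg) > steer_tol_deg
            or abs(planned.brake - human.brake) > pedal_tol
            or abs(planned.throttle - human.throttle) > pedal_tol)

def shadow_step(timestamp, sensor_frame, planned: Action, human: Action, log: list):
    """Called every control cycle; records disagreements with the raw sensor frame."""
    if diverges(planned, human):
        log.append({"t": timestamp,
                    "planned": asdict(planned),
                    "human": asdict(human),
                    "sensors": sensor_frame})   # kept verbatim for simulator replay

disagreements = []
shadow_step(12.34, {"lidar": "...", "camera": "..."},
            Action(0.0, 0.0, 0.3), Action(-12.0, 0.8, 0.0), disagreements)
print(json.dumps(disagreements[0]["planned"]))  # the plan the human overrode
```

The approval gate is then simply the rate at which that disagreement log fills up, which can be audited independently.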
Using Wally’s numbers, that is a fleet of 3,000 cars for 5 years, or 6,000-9,000 cars for 2.5-3 years. I’m a bit more skeptical that such a small fleet will actually catch the critical edge cases. Clearly, giving these cars to the same drivers for five, three, or two years will not introduce much more real testing than if they had been driving for as little as a few weeks … repeating the same good cases over and over and over is not additional testing. It’s good regression testing after changes, but poor design-validation testing to prove safety.
I pretty much believe that it’s total route coverage, repeated at different times of the day and in different lighting and weather conditions, that will make the difference.
Build 12,000 cars that are leased/loaned to statistically significant drivers for periods not to exceed 30 days. “Statistically significant” means the testing pool covers better than 99.9% of all roadway paths, multiple times, across the full potential market space. Give the cars to a ride-share operation, and heavily target riders with highly diverse routes/paths, offering several free rides executed by different cars and drivers. Lots of companies have human mobility data. Google has that history. Uber has that history. Lyft has that history. Garmin has that history. And even Facebook has that history.
It could be completed in 5 years … and done well. The sham that is currently happening doesn’t yield that verifiable data, or direct feedback so that faults can be corrected.
And then follow this up with some rigorous edge-case tests that recreate a large number of common or deadly accident scenarios.
Set up the “accident stage”, allow the driver to drive the car into the pending accident, and see if the autonomous system provides a correct recovery.
Release this as a stage-two feature, just like ABS, traction control, or rollover control … where the car provides a firm alternative input, but one soft enough that the human can overpower a poor response.
Then one is close to fully autonomous.