feature article

To Err is Human

Holding Technology Accountable

When a Tesla automobile using the new “Autopilot” feature struck a semi-trailer, resulting in the death of the driver, the inevitable questions began: “Should Tesla disable the feature?” “Are self-driving cars a good idea?” Of course, the driver was using the feature beyond its recommended envelope. But the incident highlights an interesting quirk of humans. We want to make our own mistakes. When human error – particularly our OWN human error – causes a problem, we are brimming with forgiveness. After all, it could happen to anybody, really. It was a momentary lapse of concentration. We were tired. The kids were acting up. The situation was just too complicated. We were unlucky.

But when the mistake is made by someone else or – most importantly – by our technology, we are suddenly overwhelmed with righteous indignation. How could they have let that happen? What were the engineers thinking? Doesn’t anybody with half a brain DESIGN these things? 

As we engineer a world with more automation, where machines make more decisions autonomously, we are running headlong into an abyss: the entitlement of the uninformed. Unaware of the complexities and compromises inherent in any engineering solution, the public will always oversimplify the problem, propose outrageously unworkable alternative solutions, and condemn engineers whose work they do not understand.

We are also faced with a public that is completely out of touch with the reality of large numbers and probabilities, but fed a constant stream of instant misinformation on every corner case or random event that occurs after rolling the dice tens of millions of times. Our “news” is filled with outliers, a steady stream of the extraordinary beamed 24/7 into our ordinary lives until the average citizen loses the ability to discern the difference. 

As engineers, this means simply that our work is held to much higher standards than are the humans it serves or offloads. In 2013, there were 1.25 million traffic deaths worldwide. That works out to about one every 25 seconds. The United States has a comparatively safe 12.9 fatalities per 100,000 motor vehicles. You may not want to drive in Guinea, with its 2013 death rate of 9,462 deaths per 100,000 vehicles. Finland and Sweden post world-leading safety rates with fewer than 5 deaths per 100,000 vehicles.
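For the numerically inclined, the arithmetic behind those figures checks out in a few lines of Python (all numbers are the 2013 figures quoted above):

```python
# Sanity-check of the 2013 figures quoted above.
deaths_per_year = 1_250_000                # worldwide traffic deaths, 2013
seconds_per_year = 365 * 24 * 60 * 60      # 31,536,000

print(seconds_per_year / deaths_per_year)  # ~25.2 -> about one death every 25 seconds

us_rate = 12.9        # US deaths per 100,000 motor vehicles
guinea_rate = 9_462   # Guinea deaths per 100,000 vehicles
print(guinea_rate / us_rate)               # Guinea's rate is ~730x the US rate
```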

Now bring autonomous cars onto the scene. What mortality rate will the public tolerate from our engineering marvels before they demand that humans take back the wheel? How will the companies developing the technology protect themselves from potentially enormous liability claims? When a human makes a tragic mistake that results in loss of life, we can all sympathize and rationalize. When a computer makes one, there will almost always be a line of code that bears the blame. Who wrote that line? Who tested it? Who allowed it to go into production? Who hired those people? Who managed them? Who hired the managers?

This kind of public outcry has already been experienced in much more limited venues. When Toyota replaced a simple throttle cable with a computer running software, corner cases reared their ugly heads and people died. Massive lawsuits ensued. Now, what happens when we replace not just a throttle cable, but the entire driver with software? The opportunities for failure multiply almost incomprehensibly.

The truth is, however, that even the primitive self-driving cars of today would probably be safer overall than those with human drivers. Tesla’s initial data would seem to indicate that. The number of accidents avoided by the technology must be much higher than the number of failures that lead to accidents. But there is no way to count the accidents that never happened, the tragedies avoided, and the lives never lost. Those branches of the probability tree are nothing but the abstract offal of means and averages of equations with too many variables. What outrages the public are the things we know, the things that DID happen. This line of code cost these lives.

If a self-driving car is 1,000 times safer than its human-driven counterpart, will the public accept it? If 1.25 million annual traffic deaths were reduced by a factor of 1,000 with the introduction of autonomous cars, we would have 1,250 annual deaths CAUSED by autonomous cars. Public outrage would boil, even among the 1,248,750 people each year who would be unaware that they were actually SAVED by autonomous cars.
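The arithmetic of that thought experiment is trivial but worth making explicit (the factor of 1,000 is purely hypothetical):

```python
# Hypothetical: autonomous cars cut today's traffic deaths by a factor of 1,000.
current_deaths = 1_250_000
improvement_factor = 1_000

deaths_caused = current_deaths // improvement_factor  # 1,250 "CAUSED by" autonomous cars
lives_saved = current_deaths - deaths_caused          # 1,248,750 saved, invisibly

print(deaths_caused, lives_saved)
```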

Automobile deaths in the US average about 1 per 66 million passenger miles. Commercial air travel, on the other hand, has a fatality rate of about 1 per 3.3 billion passenger miles. That makes commercial air travel about 50 times safer than riding in an automobile. Guess which one the public fears more? Why? Because they are not in control.
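Again, the ratio is simple to verify (rates as quoted above, both rough long-run averages):

```python
# Passenger miles per fatality, as quoted above.
car_miles_per_death = 66e6   # ~1 death per 66 million passenger miles (automobiles)
air_miles_per_death = 3.3e9  # ~1 death per 3.3 billion passenger miles (commercial air)

print(air_miles_per_death / car_miles_per_death)  # 50.0 -> ~50x safer per passenger mile
```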

Modern software is the most complex thing ever created by humans. And the systems that allow automobiles to drive themselves will encompass some of the most remarkable fusions of multiple technical disciplines ever attempted, all controlled by innumerable layers of software from a wide variety of sources. Complete comprehension of such a system is far beyond the cognitive capabilities of even the most brilliant human. This will be collaborative engineering in the extreme. We will make errors – many, many errors. And our tests have no prayer of rooting out all the problems that will arise when millions and millions of automobiles start carrying families billions and billions of miles across vast stretches of inhospitable terrain year after year.

Even if we were able to produce perfect code, there will always be the threat of random mechanical and environmental events. A neutron flips a bit in an unprotected register. One of a billion transistors in a processor burns out unexpectedly. A camera lens is fogged. A connection corrodes. A Lidar signal is refracted in an unexpected way. The possibilities are endless, and we get very near the place where an infinite number of monkeys with an infinite number of typewriters produce a script to a tragedy. Or, in our earlier example, over a thousand tragedies a year. 
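To see how a vanishingly small per-mile failure probability still produces a steady stream of tragedies at fleet scale, consider a sketch built on two purely illustrative assumptions: a fatal random fault once per 2.5 billion miles, and a fleet driving something like the annual US total of vehicle miles.

```python
# Illustrative only -- both inputs are assumptions, not measured data.
p_fatal_fault_per_mile = 1 / 2.5e9  # assumed rate of fatal random faults
annual_fleet_miles = 3.2e12         # assumed: roughly annual US vehicle-miles traveled

expected_fatalities = p_fatal_fault_per_mile * annual_fleet_miles
print(expected_fatalities)          # ~1,280 per year -- "over a thousand tragedies"
```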

A single driver making an error can cause harm to a few people. A single engineer making an error could potentially harm thousands. With today’s ecosystem of re-used software and hardware, encapsulated third-party IP, APIs, subsystems, and COTS technology, our responsibility is greater than many of us realize. And the public is not predisposed to forgive our errors as “human.”

It’s a harsh and unforgiving world that we’re all working so diligently to improve. Be careful out there.

4 thoughts on “To Err is Human”

  1. There is NO VALID statistical basis to claim any average/mean deaths per mile. It will take multiple events to make any comparison with national or world averages. Tesla claims their technology is safer after a single death in 130M miles; however, that is a single period and a single event, which does not yield a valid standard deviation with a low margin of error. The next event could kill a dozen people in less than 20M miles.

    The next event could kill a child standing on the right shoulder with a parent after their car became disabled, and the Tesla became confused about lane markings. Who then is responsible for the murder of the child, when the driver wasn’t actively in control, was either drowsy or distracted, and was not actively engaged in defensive driving? I hope the courts hold the driver fully responsible for the murder, to send chills to all Tesla owners gambling with other people’s lives.

    The next event could kill a dozen people as the truck three seconds in front of the Tesla changes lanes to avoid five stopped cars, and the AI in the Tesla does not find the valid solution that any alert driver would instantly choose for safety. I hope the courts hold the driver fully responsible for the murder, to send chills to all Tesla owners gambling with other people’s lives.

    The next event could kill both the driver and a pedestrian in the left median, as the pedestrian enters the windshield, seriously injuring the driver, and the Tesla drives on for miles as the two bleed to death. It doesn’t take much to think of worse outcomes waiting to happen that this technology isn’t ready for.

    This technology IS NOT READY to be used by drivers who do not understand and fully appreciate its limitations. This technology does not belong on our highways. Studies already show that the hand-off time from semi-autonomous mode to the driver after an alert is nearly half a minute … half a mile or more at freeway speeds.

    A SAFE solution today is one that requires the driver to remain in active control, with active defensive driving skills, and provides emergency backup for accident avoidance like ABS, active rollover prevention, active traction control, and other similar systems that protect less skilled or alert drivers.

    Why are engineers willing to let Tesla ignore Murphy’s Law and create a new classification of Darwin Award events? We as a profession hold the responsibility to protect the public, even from less responsible engineers who place high incomes and stock options ahead of their responsibility to consumers and the public. Or it may be your child, or your wife, who is the first pedestrian a Tesla murders.

    Being first to market with half of a well-tested, finished product, and half the safety, isn’t being responsible. I sincerely hope the lawyers get really fat off anyone stupid enough to try avoiding extensive design validation on a test track, away from unwitting victims.

    Because of the long hand-off time when the car is actively in control, this technology DOES NOT belong on the streets with the auto-pilot in control until it can handle nearly every obvious “exception” or accident waiting to happen. Anything that allows the driver to avoid defensive driving will KILL.

  2. So Kevin, the Uber death clearly brings another death into the equation, but still not enough to make any statistically valid claims with a supportable margin of error. The Tesla death, killing the driver when the car hit a median last week, adds another. In both cases, this may have been a coding error, since the brakes were never applied and there was never a defensive action to avoid the death. As you said above, “A single engineer making an error could potentially harm thousands.” There we have two engineering-level failures, where the car did not react to the collisions.

    Either way, there appears to be an engineering issue with the sensor resolution being used, which isn’t able to provide enough pixels for the hardware AI and software to detect and classify objects within the stopping distance.

    I again dispute your claim: “The truth is, however, that even the primitive self-driving cars of today would probably be safer overall than those with human drivers. Tesla’s initial data would seem to indicate that.”

    This will only be true when the sensor resolution, processing power, and AI match the SKILLS of humans. Primitive self-driving cars will continue to kill with STUPID mistakes that human drivers will not make unless distracted, drunk, or otherwise impaired.

