Commentary

The evolving safety and policy challenges of self-driving cars

July 31, 2024


  • The latest report from the National Highway Traffic Safety Administration (NHTSA) records an estimated 42,795 traffic fatalities in the U.S. in 2022. A 2015 NHTSA study attributes the critical reason for 94% of crashes to the driver, as opposed to the vehicle, the environment, or an unknown cause.
  • In April 2024, the nation’s premier professional association for computer engineers, the Association for Computing Machinery, warned policymakers that they “should not assume that fully automated vehicles will necessarily reduce road injuries and fatalities.”
  • Despite improvements in self-driving technology, the best conclusion for now seems to be that the safety advantages of self-driving cars are aspirational but have not been proven.
A self-driving GM Bolt EV is seen during a media event where Cruise, GM's autonomous car unit, showed off its self-driving cars in San Francisco, California, U.S. November 28, 2017. REUTERS/Elijah Nouvelage

Real self-driving taxis without safety drivers are already here, on the roads in select U.S. cities, including San Francisco, Phoenix, and Los Angeles. As political commentator Matt Yglesias said in a recent blog post, “autonomous taxis are no longer a hypothetical future technology. They exist, and you can ride in them.” They have even become a tourist attraction in San Francisco.

Self-driving cars promise enormous benefits: greater road safety; increased mobility for people unable to drive themselves; more convenience for riders, who will no longer be burdened with the driving task; a more efficient, less costly transportation system (in part by using fewer cars); and a smaller environmental impact thanks to their smoother, more controlled “eco-driving” style compared to human drivers.

However, the challenges of self-driving cars are not in the future. They are with U.S. policymakers today. Policymakers are struggling to keep up with these developments and have not put in place an effective regulatory system that assures the public that safety concerns have been adequately addressed.  

The following preliminary observations are a first attempt at getting clear on these challenges. Further posts will delve more deeply into the safety of self-driving cars, the regulatory structure needed to bring them to the public, and the assignment of liability for accidents in which self-driving cars are involved. The aim is to summarize the ongoing safety and regulatory conversations and contribute toward the development of a regulatory regime that will allow the deployment of self-driving cars that have been shown to be reasonably safe. 

It is not obvious that self-driving cars will be safer than human drivers 

For many, it seems obvious that computer drivers will be safer simply because humans are such terrible drivers. The latest report from the National Highway Traffic Safety Administration (NHTSA) records an estimated 42,795 traffic fatalities in the U.S. in 2022. A 2015 NHTSA study attributes the critical reason for 94% of crashes to the driver, as opposed to the vehicle, the environment, or an unknown cause.

It is easy to conclude from this that dispensing with human drivers in favor of self-driving cars is highly likely to reduce a large portion of road fatalities. As a recent report from the economic consulting firm Sonecon says, “AVs could dramatically reduce the 30% of accident fatalities that today involve drunk drivers, the 22% that involve high speeds, and the 17.5% that involve collisions with fixed objects.” 

Of course, it is a mistake in logic to jump from the fact that people drive drunk to the conclusion that a computer system that cannot get drunk will be better than a human driver. Monkeys don’t get drunk either, but no one thinks we’d be safer in cars driven by monkeys.  

But the slip in logic just reveals the hidden assumptions that allow many people to make the illogical leap. Self-driving cars have 360-degree vision; they use radar and lidar to construct a map of the environment far more detailed and complete than human vision can provide; they can detect more hazards more quickly than humans; and they have faster reaction times for avoiding crashes. So, of course, it is easy to think that they are likely to be better and safer than human drivers.

However easy it is to assume that self-driving cars must be safer, it is a mistake. In April 2024, the nation’s premier professional association for computer engineers, the Association for Computing Machinery, warned policymakers that they “should not assume that fully automated vehicles will necessarily reduce road injuries and fatalities.” 

The reason for this warning is simple—and obvious once it is stated. As Carnegie Mellon professor Philip Koopman noted in 2023 congressional testimony: “Computers make mistakes too.” Law professor Matthew Wansley points out that autonomous vehicles “will make errors that human drivers would not make.” Safety engineer Mary “Missy” Cummings at George Mason University says self-driving cars might just replace human driving errors with “human coding errors.”  

Unforeseen errors in the perception and prediction systems of self-driving cars can and do cause them to do the wrong thing at the wrong time. For example, in California in 2021, a “rare combination of factors and circumstances” produced a software glitch that shut down a car’s autonomous driving system, and the car’s momentum carried it into the median strip on a city street.

Such software glitches are rare events, but so are highway fatalities. It turns out that humans are remarkably safe drivers. NHTSA data show that Americans drove around 3.2 trillion miles in 2022. The 42,795 traffic deaths in 2022 translate into a fatality rate of about 1 in 100 million miles driven (1.35 to be exact, down slightly from 2021), and this includes the drunk, drowsy, and distracted drivers. CMU researcher Koopman estimates that unimpaired drivers are even better, with perhaps 200 million miles between fatalities, in round numbers. 
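As a quick check, the arithmetic behind these figures is easy to verify. The short Python sketch below uses the rounded numbers cited above; the small gap between its result (about 1.34) and NHTSA's exact 1.35 reflects the rounding in the 3.2-trillion-mile estimate.

    # Back-of-the-envelope check of the fatality-rate figures cited above,
    # using the rounded 2022 NHTSA estimates quoted in this post.
    fatalities = 42_795      # estimated U.S. traffic deaths in 2022
    miles_driven = 3.2e12    # approximate vehicle miles traveled in 2022

    # Fatalities per 100 million miles driven
    rate = fatalities / (miles_driven / 1e8)
    print(f"Fatalities per 100 million miles: {rate:.2f}")  # ~1.34

    # Equivalently, miles driven per fatality
    print(f"Miles per fatality: {miles_driven / fatalities:,.0f}")  # ~75 million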

That’s the human safety record that self-driving cars must equal or exceed. But is it really so obvious that self-driving cars can do better?  

The safety record of self-driving cars 

At least one fatality has already occurred with self-driving cars. In 2018, an Uber car in autonomous mode struck and killed a woman pushing her bicycle across a highway near Phoenix, Arizona. The accident report from the National Transportation Safety Board showed that the car’s perception system first classified her as an unknown object, then as a vehicle, and finally as a bicycle, whose path it could not predict. The car’s automatic braking system was disengaged, the safety driver was alleged to be watching a TV show on her mobile phone, and by the time she hit the brakes, it was too late. 

There is some reason to think that these safety issues are intrinsic to the machine learning technology that powers the perception systems at the heart of self-driving cars. Computer vision systems have been shown to misperceive a stop sign as a 45-mph speed limit sign under adversarially engineered conditions that mimic real-world situations. Computer perception systems are notoriously brittle and malfunction in unpredictable ways. Safety engineer Cummings says that they “struggle to perform even basic operations when the world does not match their training data.” Such misperceptions have led to accidents, as in 2023, when a self-driving car failed to recognize the rear section of an articulated bus and ran into it, and in 2022, when another self-driving car stopped while making a left turn and was hit by an oncoming car.

On the other hand, there are some indications that properly trained and tested self-driving cars have the potential to be better than humans. A 2023 study from Waymo suggests that, after driving more than seven million miles without a safety operator over the last several years, its self-driving cars are better than human drivers at avoiding certain kinds of non-fatal accidents.

While suggestive, this study alone does not demonstrate the superiority of computer drivers. For one thing, it applies only to non-fatal accidents and only in the severely restricted areas and driving conditions in which Waymo self-driving cars currently function. Moreover, as a 2015 RAND study pointed out, road fatalities are such rare events that studies of the fatality record of self-driving cars cannot demonstrate equivalent safety without driving hundreds of millions of miles.  
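The statistical logic here can be sketched with the standard “rule of three”: if a fleet drives n miles with zero fatalities, the one-sided 95% upper confidence bound on its fatality rate is roughly 3/n. The Python snippet below illustrates that reasoning using the 2022 human fatality rate; it is an illustration of the RAND-style argument, not a calculation taken from the RAND report itself.

    # Rule of three: zero fatalities observed in n miles of driving puts
    # the one-sided 95% upper confidence bound on the fatality rate at
    # approximately 3 / n.
    human_rate = 1.35e-8  # fatalities per mile (~1.35 per 100M miles, 2022)

    # Fatality-free miles needed before the 95% upper bound on a
    # self-driving fleet's fatality rate falls to the human benchmark:
    miles_needed = 3 / human_rate
    print(f"{miles_needed / 1e6:.0f} million fatality-free miles")  # ~222 million

On these numbers, a fleet would need to log well over 200 million fatality-free miles, which is why the RAND study speaks of hundreds of millions of miles.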

It is important to realize that not all self-driving cars are alike in terms of safety. An Uber or Cruise crash does not necessarily prove anything about the safety of a Waymo self-driving car. Self-driving technology has improved since the 2018 Uber fatality, and it is likely that Uber’s lax management of its safety drivers contributed to the accident. One recent study compared the crash record of Uber taxis driven by humans with the crash rates of self-driving cars from Cruise and Waymo. It applied just to the San Francisco area over the last several years and suggested (tentatively, since the miles driven by the self-driving cars were so few) that Waymo has a crash rate slightly lower than Uber’s, and considerably lower than Cruise’s.

The best conclusion for now seems to be that the safety advantages of self-driving cars are aspirational but have not been proven. Beating humans in all driving conditions may never be achievable, given the uncertainties in the underlying machine learning technology. This does not mean companies should stop trying to improve self-driving cars, but it does suggest they have a long way to go before they can show that computers match or exceed the performance of human drivers.

The near future of self-driving cars is to provide local personal transportation services

After several teams completed the 2007 DARPA Urban Challenge “to build an autonomous vehicle capable of driving in traffic, performing complex maneuvers such as merging, passing, parking, and negotiating intersections,” many companies rushed to develop self-driving cars that could operate in all driving environments, with the goal of selling them directly to the public. Automotive News reported that by 2022, investors had poured $160 billion into the dream of autonomous transport.

But after the 2018 fatal accident involving an Uber self-driving car, most car manufacturers shifted their business model from selling self-driving cars to the public to providing driverless mobility services. As a result, according to the 2023 S&P Global Mobility Study, “The ability for a consumer to buy a car that will drive itself everywhere without the driver ready to take the wheel is unlikely to happen by 2035.”  

The near-term future for self-driving cars is providing local personalized transportation services. Self-driving taxi companies might be willing to pay the extra cost for self-driving functionality, says the S&P report, because they could “keep their vehicles on the road 20 or more hours a day… without the inconvenience of drivers, who need to be paid, change shifts, and often stop for breaks.” These services “will be carefully geofenced for the foreseeable future—offering revenue service only within specific areas where they have already been extensively tested.” 

Regulators may grant companies approval to offer these local transportation services, but the market will judge whether the business succeeds. Waymo is currently the only company in the U.S. with regulatory approval to run a commercial fleet of self-driving cars without safety drivers. However, it uses remote operators to handle unusual situations, which might prove too expensive for a profitable business. “If Waymo vehicles were constantly asking for remote guidance,” tech reporter Timothy Lee points out, “Waymo might need to hire so many remote operators that it negates the cost savings that come from not needing a driver.”

Policymakers are struggling to put in place an effective regulatory regime 

Besides cost, another obstacle to a successful local self-driving taxi business is consumer hesitation. According to annual surveys from the American Automobile Association, fear of self-driving cars jumped from 55% to 68% in 2023 and has not declined significantly in 2024. Trust in the new technology plunged from 15% in 2021 to 9% in the last two years. When 91% of an industry’s potential customers are skeptical of its product, the industry has a trust problem.

A 2022 international poll from YouGov revealed that only 19% of the U.S. public is looking forward to buying autonomous vehicles in the future, while 44% are not. In contrast, 51% of the public in China were looking forward to self-driving cars and only 11% were not.  

In Wuhan, China, where over 300 driverless cars are operating with local government permission, one local resident commented, “I think there’s no need to worry too much about safety—it must have passed safety approval.”  

The U.S. regulatory system generates no comparable degree of confidence

NHTSA regulates vehicle safety and has substantial engineering expertise. The states regulate the operation of vehicles, including licensing drivers. This division creates regulatory uncertainty when computer software is the driver. NHTSA, with its national reach and engineering expertise, might seem the natural place to lodge regulatory control, but so far it has not acted to establish national safety standards for self-driving cars. Its December 2020 proposal moved in that direction, but it has not been finalized. This may be because it is not clear how the agency and the courts would legally enforce some of its suggestions, such as requiring a safety case for self-driving cars. In June 2021, NHTSA issued a standing general order requiring self-driving cars and cars using advanced driving assistance software to report crash incidents. This at least begins to provide a baseline for assessing safety, but it would be even better if the order required vehicle miles traveled to be included in the reports.

Rather than approving self-driving cars as safe before allowing companies to operate them on public roads, NHTSA appears to be using its recall authority to obtain changes in automated driving systems after the fact. The agency has statutory authority to order recalls or remedies for defects “related to motor vehicle safety,” and in June 2024, it obtained a voluntary update to Waymo’s maps and software to remedy a defect in accurately detecting and reacting to poles in or near the driving surface. More recalls may follow from the investigation NHTSA opened in May 2024 into 22 incidents involving Waymo cars.

In 2017, the U.S. House of Representatives passed a bill to preempt state authority to regulate autonomous vehicles and to require self-driving car companies to submit safety assessments to NHTSA. However, the bill did not give NHTSA the authority to condition deployment or testing of self-driving vehicles on a review of these safety assessments. It did not become law, and Congress has not acted since, apart from a hearing in 2023.

In the absence of federal action, the task of granting licenses for self-driving cars to operate on public roadways has fallen to the states. California has the best-developed system of state regulation, while others, such as Arizona, are more lax. California splits responsibilities between the state Public Utilities Commission (PUC) and the state Department of Motor Vehicles (DMV). The DMV issues an autonomous vehicle driving permit, but it first requires the submission of a statement that the car has been tested and the manufacturer “has reasonably determined that [it] is safe to operate the vehicle…” The DMV retains authority to suspend a permit it has issued if it determines, based on the driverless vehicle’s performance, that the vehicle is not safe for the public. If a company wants to provide commercial taxi service, it must also obtain approval from the PUC, which issues permits for companies to provide transportation services to the public using autonomous vehicles.

Under this system, self-driving car companies have been operating in San Francisco and other localities in California for several years, including testing on public roads without a safety driver. In August 2023, the California PUC voted 3-1 to authorize Cruise and Waymo to operate commercial driverless taxi services in certain parts of San Francisco without a safety driver. One of the commissioners issued a statement saying, “While we do not yet have the data to judge AVs against the standard human drivers are setting, I do believe in the potential of this technology to increase safety on the roadway.” 

But the functioning of this regulatory system has generated widespread popular opposition, including a 2023 campaign of disabling self-driving cars with traffic cones and an episode of mob violence in February 2024, when a self-driving car drove toward a crowd during a street festival and was set on fire. The PUC’s August 2023 approval took place over the strong objections of city and county officials in San Francisco and Los Angeles, who argued that the approval “is premature and doesn’t adequately take into account safety and first-responder issues.” There have also been suggestions of conflict of interest on the part of a PUC commissioner who provided the decisive vote in the August 2023 approval and had formerly been an attorney at a driverless car company.

After a series of incidents, including one where a driverless Cruise vehicle dragged a pedestrian 20 feet after a crash, the California DMV sent Cruise an order in October 2023 withdrawing its license to operate, saying Cruise driverless vehicles “are not safe for the public’s operation” and posed “an unreasonable risk to public safety.” Several days later, Cruise shut down its entire fleet of autonomous cars nationwide, although it has returned to supervised testing and mapping in Dallas and Phoenix.  

The concerns of local critics seemed vindicated. But Waymo continues to operate in San Francisco, and it raised additional concerns when one of its cars crashed into a bicyclist in February 2024. In March 2024, the PUC extended its permission for Waymo to operate on highways, over the objections of local officials.

The allocation of responsibilities between state and local officials in California is still evolving. In May 2024, the California State Senate passed SB 915, a bill to restore some measure of local control over the operation of self-driving cars, while ensuring that local authorities could not block “the safe operation of autonomous vehicle services.” A further vote on the bill is expected in August 2024. Under this approach, the practical difficulties of giving many local jurisdictions authority over the operation of self-driving cars would have to be addressed.

Setting out the appropriate roles for federal, state, and local officials in supervising the introduction and continued operation of self-driving cars is a crucial step toward gaining the consumer confidence that alone will allow self-driving cars to be successfully deployed. The most glaring defect in the current system is the absence of any regulatory requirement, at any level of government, for self-driving companies to make a case that their cars are reasonably safe. Perhaps because of uncertainty about the legal enforcement of safety standards for self-driving cars, regulation seems confined to after-the-fact withdrawal of permission to operate from cars that have shown themselves to be dangerously unsafe.

Future posts will focus on this issue of an effective regulatory system for self-driving cars, take a more detailed look at how their safety can be evaluated, and consider how a reasonable case for safety might be made before permission to operate is given. They will also examine how an appropriate after-the-fact liability system might efficiently compensate accident victims while encouraging further improvements in the safety of self-driving cars.