How safe are self-driving cars really? Experts weigh in
Autonomous systems are a net good and in many aspects have the capacity to outperform human drivers, but they are not perfect, according to experts.

How safe are autonomous vehicles?
It’s a question on the minds of many as robotaxis continue to take to the road, with some of them malfunctioning and violating traffic laws.
Waymo, the self-driving car company that spun out of Google in 2016, announced this month that it was recalling 3,067 robotaxis after multiple reports of them driving around stopped school buses in Austin, Texas.
This is just the latest in a series of small controversies the self-driving car company has found itself in over the past decade. For years, there have been reports of Waymo vehicles making illegal turns, stopping at intersections when they shouldn’t, and even striking bicyclists.
At the same time, however, Waymo continues to tout how much safer its autonomous vehicles are when put up against human operators.

On its website, Waymo states that “compared to an average human driver over the same distance in our operating cities,” its autonomous vehicles have 90% “fewer serious injuries or worse crashes,” 82% “fewer airbag deployment crashes,” and 81% “fewer injury-causing crashes.”
So, if these vehicles are safe, why do we continue to see incidents as Waymo’s robotaxi service continues to expand into more cities?
Taskin Padir, an electrical and computer engineering professor at Northeastern University and an autonomous vehicle researcher, said that while he believes autonomous systems are a net good and in many aspects have the capacity to outperform human drivers, they are not perfect.
Like humans, these vehicles will invariably make mistakes, especially in challenging “edge cases.”
“We have not achieved a level of autonomy that accounts for all the scenarios in the world,” he said.
One of the key takeaways, he said, is that these companies must operate responsibly and transparently, working with the proper authorities and local government officials to address any issues that arise.
It’s ultimately up to regulators to determine whether a particular autonomous vehicle is safe enough to be on the road in the first place, explained Mark MacCarthy, a nonresident senior fellow at the Brookings Institution, a Washington, D.C., think tank that conducts nonpartisan research, including on safety standards for autonomous vehicles.
While Waymo has, for the most part, been a responsible steward and shown that its vehicles are generally safe, the company’s own reports are often misleading and don’t paint a full picture of an autonomous vehicle’s capabilities, MacCarthy explained.
For one, the vehicles log their miles in heavily restricted areas. They only very recently began operating on highways, where many fatalities and injuries take place.
“I do think they’ve gone to some degree to indicate that the engineering they’ve done, they have produced a car that, based on the statistics, doesn’t seem to be any more deadly than a car driven by a human,” he said. “But it’s a leap to say they’re safer than humans. That’s taking their statistics one step too far.”
For now, it’s the job of individual states and local municipalities to set up the regulatory infrastructure to ensure that these vehicles operate safely, as there are as yet no binding federal standards for autonomous vehicle safety, he explained.
There is, however, the National Highway Traffic Safety Administration, or NHTSA, which has the authority to issue recalls, forcing car manufacturers to fix problems with their vehicles or risk having them taken off the road.
NHTSA opened an investigation into Waymo in October in response to the school bus reports. It isn’t the first time the agency has scrutinized the company; this summer, it closed a 14-month investigation into Waymo.
California, home to Waymo’s headquarters and a major testbed for many bleeding-edge technologies, is one of the leading states on regulation, MacCarthy said, noting that its Department of Motor Vehicles requires autonomous vehicle makers to attest that they have “reasonably determined that [it] is safe to operate” their vehicles before being issued a permit to operate on the road.
“If you can convince a regulator that you’ve done the appropriate job, then being on the road is probably OK,” MacCarthy said. “But the backstop on this is that the California regulator has the authority to say, ‘I’m sorry. You are not operating in a reasonably safe manner. You’re imposing unreasonable risks on other people on the road or on the sidewalk, and because of that, we’re withdrawing your license to operate a self-driving car.’”
California has done that in the past, in the case of Cruise, a General Motors-backed self-driving robotaxi company that was embroiled in multiple controversies regarding the safety of its vehicles and ultimately shut down last year, MacCarthy explained.
While California may be a leader in the space, it is not the only state to have issued some form of autonomous vehicle regulation. As of late 2024, 35 states had passed laws on autonomous vehicles, according to the legal news and analysis service Law360.
The existence of these regulatory bodies is undoubtedly good, Padir highlighted.
“It’s a good thing from an accountability perspective. It’s a good thing from a responsibility perspective,” he said. “It’s a good thing from a societal perspective.”
That means being transparent, being open to investigating where issues occurred, and adapting as these technologies evolve, he said.
“No one designs an autonomous stack (driving technology) to cause harm or to cause these situations. I truly believe that understanding what happened will only make the future better. It will only bring more resilience and robustness to these systems, and they will continue to improve.”