Who is at fault when systems powered by AI put humans in danger?

Who is at fault when a driverless car behaves unpredictably? Woodrow Hartzog, professor of law and computer science at Northeastern, offers a framework for tracing back where things went wrong. Photo by Matthew Modoono/Northeastern University

In Tempe, Arizona, a few years back, a self-driving car failed to identify a woman jaywalking across the street in time to stop, and fatally struck her. The human driver, meant to be supervising the vehicle, had been watching a TV show on her cell phone at the time. And the artificial intelligence system within the car wasn’t designed to slam on the brakes to reduce the severity of an unavoidable accident, the way a human driver would.

So, who is at fault for this pedestrian’s death? Is it the engineers, who designed the system that failed to identify a jaywalker? Or the developers, who didn’t equip the car with a system that could slam on the brakes? Or the supervising driver, who was distracted inside a car that was supposed to drive itself?

Woodrow Hartzog, professor of law and computer science at Northeastern, explores these kinds of questions in a new paper with colleagues from Oregon State University.

“It’s difficult to assign fault when you’re dealing with autonomous systems because they can act in unpredictable ways—it can be challenging to predict how a driverless car with an automated decision-making system is going to react in every scenario,” Hartzog says.

In the paper, he and his colleagues offer a “theory of fault” that could be used as a set of guidelines for tracing back where, exactly, things went wrong.

A person, for example, can be held accountable for unreasonable behavior within the legal framework of negligence, but that framework also includes exceptions for unforeseeable incidents, Hartzog says.

So, to determine whether an incident—such as the tragedy in Arizona—was avoidable, Hartzog and his colleagues identified four specific and foreseeable failure points in the creation, deployment, and use of autonomous systems.

Woodrow Hartzog is a professor of law and computer science with joint appointments in the Northeastern University School of Law and the Khoury College of Computer Sciences. Photo by Matthew Modoono/Northeastern University

The researchers call the first “syntactic failure,” which occurs when developers (the people who build a system and know its capabilities) fail to communicate to procurers (the people who commissioned the technology) the many ways in which robotic systems might fail to identify real-world objects.

Artificial sensors are not as robust as human senses, and it’s up to the developers of a system to outline where the artificial ones fall short, the researchers say.

The second category is “semantic failure,” which occurs when the goals and intentions articulated by humans for autonomous systems are not translated correctly into the software.

Then there are “testing failures,” which occur when a developer doesn’t test for situations or incidents they should have known to test for, based on the goal of the technology.

Finally, the researchers posit that there can be “warning failures,” which occur when users aren’t properly made aware of avoidable problems.

Based on this framework, lawyers might try to determine who was at fault in the Arizona accident by ticking through the possible points of failure.

Did the developers of the autonomous system tell the car manufacturer that the sensors may not be able to identify a jaywalker? Did the automaker properly communicate its goal of total autonomy to the developers? Did the developers and the automakers test the car in a battery of scenarios before putting it on the road? Was the supervising driver aware that she should be ready to slam on the brakes if a person jaywalked?

The Yavapai County Attorney’s Office determined that there was “no basis for criminal liability” for the owner of the self-driving car, Uber. Months later, the supervising driver was indicted for negligent homicide. Her trial is set to begin in August.

“One way of determining fault is to determine where these failures took place,” Hartzog says. “But perhaps more importantly, this framework can help everyone involved in the creation and utilization of autonomous systems minimize the potential for harm.”

Of course, there are also instances in which autonomous systems don’t work as intended, but in entirely predictable ways.

A study by the National Institute of Standards and Technology found that the majority of commercially available facial recognition systems exhibit bias, falsely identifying Black and Asian faces at rates 10 to 100 times higher than white faces. And among a database of photos used by law enforcement agencies in the U.S., the highest error rates came in identifying indigenous people, the study found.

This kind of bias is to be expected from an industry in which Black, indigenous, and other people of color are vastly underrepresented, says Rashida Richardson, an assistant professor of law and political science who will join Northeastern in July.

In a forthcoming essay from the Berkeley Technology Law Journal, Richardson notes that Black, indigenous, and other people of color make up almost 40 percent of the U.S. population, but only 22 percent of bachelor’s degree holders in the fields of science, technology, engineering, and math.

“This is not just a niche tech issue when you have a predominantly white workforce,” Richardson says. “Having such a homogenous workforce with such limited life experience would naturally recreate that same homogeneity in the systems.”

For media inquiries, please contact media@northeastern.edu