Northeastern University professor argues that San Francisco was right to ban facial recognition technology

Facial recognition technology can be wildly inaccurate and prone to replicating the racial or gender-based biases of the engineers who created it, says Woodrow Hartzog, a professor of law and computer science at Northeastern. Photo by Matthew Modoono/Northeastern University. Illustration by Kevin Deane/Northeastern University.

This month, San Francisco became the first major U.S. city to ban the use of facial recognition technology by police and other agencies, taking an “historic and important first step toward recognizing the unique danger” of the technology, says Woodrow Hartzog, a professor of law and computer science at Northeastern University.

Akin to a more sophisticated form of fingerprint analysis, facial recognition software—which matches photos taken by security cameras and cellphone cameras against police and government databases to determine a person’s identity—has been adopted by law enforcement officials to catch criminals and solve cold cases.
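To make that matching step concrete, here is a minimal sketch in Python. It assumes the common embedding-based approach, in which a neural network reduces each face photo to a fixed-length vector and identification becomes a nearest-neighbor search; the gallery, names, and threshold below are placeholders for illustration, not any agency’s actual system.

```python
import numpy as np

# Minimal sketch of the matching step, with placeholder data.
# In a real system, a neural network reduces each face photo to a
# fixed-length "embedding" vector; random vectors stand in for those here.
rng = np.random.default_rng(seed=0)
gallery = {                       # hypothetical enrolled identities
    "person_a": rng.normal(size=128),
    "person_b": rng.normal(size=128),
}

def identify(probe, threshold=0.6):
    """Return the closest enrolled identity, or None if nothing is close enough.

    The threshold trades false matches against missed matches; real systems
    search millions of records, not two.
    """
    best_name, best_dist = None, float("inf")
    for name, embedding in gallery.items():
        # Cosine distance: near 0 for near-identical vectors, near 1 for unrelated ones.
        dist = 1.0 - np.dot(probe, embedding) / (
            np.linalg.norm(probe) * np.linalg.norm(embedding)
        )
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None

# A probe close to an enrolled embedding matches; an unrelated one should not.
print(identify(gallery["person_a"] + rng.normal(scale=0.05, size=128)))  # person_a
print(identify(rng.normal(size=128)))                                    # almost surely None
```

The sketch also makes visible a point Hartzog returns to below: the lookup is only as useful as the gallery behind it, so an effective system needs a large and widely shared store of enrolled photos and identities.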

Even these benefits come with dangerous consequences, says Hartzog, who recently made that argument at a daylong conference at Northeastern that convened some of the foremost lawyers, journalists, and activists working on facial recognition technology.

While San Francisco is the first major U.S. city to ban the use of facial recognition technology, other municipalities, including Somerville, Massachusetts, are considering measures to ban or curtail its use.

The computer algorithms that identify a person’s face can be wildly inaccurate and prone to replicating the racial or gender-based biases of the engineers who built them, Hartzog says.

Instances abound of facial recognition software failing to detect the faces of black people in poor lighting, or misidentifying those faces when it does detect them.

Incorrectly identifying a person charged with a crime “can have life or death consequences,” Hartzog says. “These accuracy issues are really concerning.”

He acknowledges that such accuracy issues could be solved with stricter government regulation or better algorithms. But more accurate facial recognition technology doesn’t mean better outcomes, Hartzog says.

“The most compelling benefits of facial recognition technology—catching criminals or finding missing people—require us to fully embrace the surveillance that comes with them,” he says. “It would mean cameras everywhere and promiscuous [widely shared] databases.”

In order to be fully effective, the technology would have to be ubiquitous, Hartzog says.

“It would eradicate the obscurity of our day-to-day lives,” he says. “If fully realized, facial recognition technology is the perfect surveillance tool. It would mean that no one could ever hide in public again.”

That’s because a facial recognition database is only as good as the data in it. To determine a person’s identity, officials need a photo and identifying information, and they need to be able to share that information widely and quickly, Hartzog says.

The more pervasive and constant the surveillance, the more likely it will be abused “and become oppressive,” Hartzog says.

“One of the reasons facial recognition is so dangerous is because it’s bad when it’s inaccurate, but even worse when it’s accurate,” he says.

Hartzog says that by banning the use of the technology by police and other city agencies, San Francisco is also staking a claim that technology isn’t an inevitable force that we all need to learn how to live with.

“There’s this idea that technology will just keep evolving and we can’t do anything to stop it,” he says. “That’s just not true. Mind-reading technology would help solve crimes, too, but no one is saying we should pursue that.”
