Cute robots, smart underwear, and facial recognition in church: Have we gone too far?

Woodrow Hartzog, professor of law and computer science in the School of Law. Photo by Matthew Modoono/Northeastern University

In the near future, robots might be able to manipulate our emotions, everyone will be rated on everything, and we won’t be able to trust our own eyes, Woodrow Hartzog said.

“I promise I will end on a positive note,” the Northeastern professor said Thursday, eliciting chuckles from a roomful of people gathered at the university’s Charlotte campus to hear him discuss online privacy.

Hartzog, who specializes in privacy, data protection, robotics, and automated technologies, shared his insights on scoring systems, the internet of things, and police body cameras.

In recent months, Hartzog, who teaches law and computer science at Northeastern’s Boston campus, has testified before federal lawmakers about how to improve online privacy, warning that the current laws governing privacy on the internet fall short of protecting users.

“I think we need better security rules,” he told the audience in Charlotte on Thursday, arguing that data security is a civics issue at the federal, state, and local levels.

“When we’re asking our civic leaders what are your opinions on privacy, I think it’s fair to expect a nuanced answer,” he said, “one that reflects a little thought about the ways in which data companies and data platforms and surveillance technologies are being used to ensure that we have safe and sustainable relationships with technology companies and not ones that tend to abuse the trust we’ve placed in them.”

Hartzog discussed trends that he said pose concerns for privacy, and offered warnings along the way.

Facial recognition tools are proliferating, Hartzog said; the most accessible form can be found on our iPhones. Hartzog has identified classification and surveillance problems with these products that he said pose a “massive potential for abuse.”

“It often gets it wrong and it often gets it wrong in actually biased ways,” he said. “There have been some studies that show that people of color have higher error rates and a lot of that is because of the bad data that gets fed into the system.”

Hartzog also discussed deep fakes, which he described as “videos that have been edited to superimpose someone’s face or voice on a video in a very convincing way.”

Hartzog contends that this technique carries privacy and dignitary harms and is a “massive interference in our democratic process.”

“Given the appetite for misinformation out there, the fake news phenomenon, it almost doesn’t even matter if that gets corrected in the aftermath,” he said, “because what we see is that often, particularly collectively, people believe what they want to believe about videos, and there could be some real harm to this.”

Hartzog also touched on scoring systems. Most of us are likely familiar with how credit scores and Uber ratings work, but it doesn’t stop there. Now there is a facial recognition service designed specifically for churches to track attendance, and another service that tracks how much toilet paper people use in public restrooms.

We’ve gone too far, Hartzog said.

“It could get carried away and carried away quite quickly, particularly when combined with technologies like facial recognition systems, which allow us to immediately recognize anyone we see on the street and see their score,” he said.

Hartzog described the internet of things, the network of household devices built with wireless connectivity, characterizing the risks surrounding these technologies as “a little bit of a dumpster fire.”

“Your underwear is going to be smart,” he said. “I found a product called ‘Vibrundies’ that was internet-connected, and every time you got a mention on social media your underwear vibrated a little bit.”

Hartzog illustrated his point with an anecdote about baby monitors.

Pointing to a black-and-white still of a hacked Wi-Fi-enabled baby monitor, he said, “The disturbing thing about this is not just that someone was able to hack a baby monitor and spy on people’s children. There is an entire search engine dedicated to stills taken from Wi-Fi-hacked baby monitors.”

Robots that act like humans were another target of Hartzog’s talk. We love our Roombas and we think Wall-E is cute. Robots designed for elder care and day care could be useful, but could our empathy toward things that act like humans (teddy bears, for example) work against us?

“Our ability to be wheedled and emotionally manipulated by virtual-like things is, to me, a recipe for exploitation,” Hartzog said.

Hartzog also touched on cameras worn by on-duty patrol officers, which are intended to increase transparency about how police do their jobs. But Hartzog puts them in the category of potentially problematic products. When does the camera turn on? Is it always on? How long does the footage exist? When can it be deleted? Can facial recognition technology be paired with it? Who is the footage shared with?

“We don’t have a lot of set rules,” he said. “We have policies and there are places where you can look at cities’ policies about the regulation of these body cameras but we don’t really have a clean set of rules about when they can be used, when they should be turned off, when they capture very sensitive, for example, interactions.”

For media inquiries, please contact media@northeastern.edu.