Does artificial intelligence deserve the same ethical protections we give to animals?

Marc Raibert, founder of Boston Dynamics, presented the SpotMini robot at CeBIT in Hanover, Germany, on June 13, 2018. SpotMini is a small four-legged robot that can pick up and handle objects using its 5 degree-of-freedom arm and perception sensors. (Photo by Laura Chiesa / Pacific Press / Sipa USA via AP Images)

In the HBO show Westworld, robots designed to display emotion, feel pain, and die like humans populate a sprawling western-style theme park for wealthy guests who pay to act out their fantasies. As the show progresses, and the robots learn more about the world in which they live, they begin to realize that they are the playthings of the person who programmed them.

Viewers might conclude that humans need to afford robots with such sophisticated artificial intelligence, like those in Westworld, the same ethical protections we afford each other. But Westworld is a fictional TV show, and robots with the cognitive sophistication of humans don't exist.

Yet advances in artificial intelligence at universities and technology companies mean that we're closer than ever to creating machines that are "approximately as cognitively sophisticated as mice or dogs," says John Basl, an assistant professor of philosophy at Northeastern University. He argues these machines deserve the same ethical protections we give to animals involved in research.

“The nightmare scenario is that we create a machine mind and, without knowing, do something to it that’s painful,” Basl says. “We create a conscious being and then cause it to suffer.”

Animal care and use committees carefully scrutinize scientific research to ensure that animals are not made to suffer unduly, and the standards are even higher for research that involves human stem cells, Basl says.

As scientists and engineers get closer to creating artificially intelligent machines that are conscious, the scientific community needs to build a similar framework by which to protect these intelligent machines from suffering and pain, too, Basl says.

“Usually we wait until we have an ethical catastrophe, and then create rules afterward to prevent it from happening again,” Basl says. “We’re saying we need to start thinking about this now, before we have a catastrophe.”

Basl and his colleague at the University of California, Riverside, propose the creation of oversight committees—composed of cognitive scientists, artificial intelligence designers, philosophers, and ethicists—to carefully evaluate research involving artificial intelligence. And they say it’s likely that such committees will judge all current artificial intelligence research permissible.

But a philosophical question lies at the heart of all this: How will we know when we’ve created a machine capable of experiencing joy and suffering, especially if that machine can’t communicate those feelings to us?

There’s no easy answer to this question, Basl says, in part because scientists don’t agree on what consciousness actually is.

Some people have a "liberal" view of consciousness, Basl says. They believe all that's required for consciousness to exist is "well-organized information processing," along with a means to pay attention and plan for the long term. People who have more "conservative" views, he says, require robots to have specific biological features, such as a brain similar to that of a mammal.

At this point, Basl says, it’s not clear which view might prove to be correct, or whether there’s another way to define consciousness that we haven’t considered yet. But, if we use the more liberal definition of consciousness, scientists might soon be able to create intelligent machines that feel pain and suffering and deserve ethical protections, Basl says.

"We could be very far away from creating a conscious AI, or we could be close," Basl says. "We should be prepared in case we're close."
