Building a better robot (that won’t feed you the cat)

It is some unknown number of years in the future. The children, playing under the watchful eye of their domestic robot, are hungry. The robot sees that the fridge is empty.

It also sees the family cat.

“If the robot doesn’t understand the balance between nutritional value and sentimental value, then you have a problem,” said computer scientist Stuart Russell, as he stood at a podium in front of a large screen displaying the headline “Deranged Robot Cooks Kitty for Family Dinner.”

Russell, who literally wrote the book on artificial intelligence, was speaking to a conference room packed with students as part of Northeastern’s Leaders who Inspire series. The talk, which was sponsored by Northeastern’s department of political science, primarily concerned how to ensure that the advent of artificial intelligence does not spell the end of the human race.

“This is a social science and humanities problem,” said Russell, who is a fellow of the Association for the Advancement of Artificial Intelligence. “We have to figure out how to ensure that we remain the intellectual owners and managers of our civilization.”

Smarter robots could help us raise standards of living and remove the need for international competition, Russell said. True artificial intelligence could usher in an era of unprecedented global prosperity. But if we don’t carefully design the programming for these robots, we will end up with results we don’t want.

Remember King Midas, Russell warned: he wished that everything he touched would turn to gold, and wound up gilding his daughter and nearly starving to death.

Russell recast the parable in artificial intelligence terms: ask a robot to cure cancer, and it might decide to give everyone tumors so that it has as many opportunities as possible to find a solution.

“If I can think of how the machine can get it wrong, then I can say, ‘Cure cancer, but don’t do this,’” Russell said. “But when you’re dealing with machines more intelligent than you… you can’t anticipate every possible solution.”

If we can’t out-think the machines, then we need to find a way to program them that guarantees they will benefit humans. Russell proposed three simple tenets for future artificial intelligence systems: the robot’s only objective should be to maximize the realization of human preferences; the robot should be initially uncertain about what those preferences are; and the robot should learn those preferences by studying human behavior.

Russell is attempting to create a framework for artificial intelligence that is humble and careful. He wants a robot that sees the empty fridge and the nearby cat and calls mom to ask if feline is an acceptable menu item.
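
Russell's three tenets amount to an agent that optimizes for human preferences it does not fully know. The Python sketch below is a toy illustration of that idea, not Russell's actual formulation: the probabilities, utilities, and the "cook_cat / wait / ask" actions are all invented for the cat-and-fridge scenario, but they show how genuine uncertainty about the family's preferences can make asking a human the highest-value move.

```python
# Toy illustration of the three tenets: the robot maximizes human preferences,
# is uncertain what those preferences are, and defers to people to learn them.
# All numbers and the scenario are invented for illustration.

# Hypotheses about the household's preferences, with the robot's prior beliefs.
# Each hypothesis maps an action to the utility the humans would assign it.
hypotheses = {
    "cat_is_food":   {"prob": 0.01, "utility": {"cook_cat": +5,   "wait": -1, "ask": -0.1}},
    "cat_is_family": {"prob": 0.99, "utility": {"cook_cat": -100, "wait": -1, "ask": -0.1}},
}

def expected_utility(action: str) -> float:
    """Expected human utility of an action under the robot's current beliefs."""
    return sum(h["prob"] * h["utility"][action] for h in hypotheses.values())

def choose_action() -> str:
    """Pick the action that maximizes expected satisfaction of human preferences."""
    actions = ["cook_cat", "wait", "ask"]
    return max(actions, key=expected_utility)

if __name__ == "__main__":
    for a in ["cook_cat", "wait", "ask"]:
        print(f"{a:10s} expected utility: {expected_utility(a):+.2f}")
    print("Robot chooses:", choose_action())  # -> "ask": uncertainty makes asking best
```

Under these made-up numbers, even a 1 percent belief that the cat is family makes the expected value of cooking it catastrophic, so a robot that is allowed to say "I'm not sure" ends up calling mom instead of firing up the oven.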

But, as Russell illustrated with hypothetical robots who delay plane flights to facilitate your dinner plans or entirely abandon the family to help impoverished people in Sudan, these machines will still require a lot of fine-tuning. And how we will handle the future of artificial intelligence is very much still up in the air.

“AI will eventually overtake human abilities, but I believe we can make them provably beneficial,” Russell said. “This is a better kind of AI.”
