What happens if a robot writes something libelous?

Photo by Matthew Modoono/Northeastern University

What happens if a robot writes something libelous?

That’s one question Matt Carroll, a journalism professor at Northeastern, hopes to answer during a conference on Friday at which speakers with expertise in journalism, law, and computer science will discuss how artificial intelligence is transforming the spread of information.

The conference, titled “AI, Media, and the Threat to Democracy,” will be held from 9:30 a.m. to 4 p.m. at the Raytheon Amphitheater.

Northeastern law professor Woodrow Hartzog said there are many unanswered questions about artificial intelligence. But that’s not because the answers don’t exist, he said; we’re just asking the wrong people.

Hartzog, who will moderate a panel discussion about regulating artificial intelligence, wants to ask the speakers what role computer scientists and lawmakers should play in creating laws that govern AI.

“To what extent should technology experts be involved in the lawmaking process, and to what extent should we create rules that exist independently of highly technical knowledge?” said Hartzog, who is an expert in data protection and privacy. “Right now there’s this sentiment: How can we expect lawmakers to create rules about artificial intelligence if they don’t even know how to use a computer?”

These questions also extend to news coverage, according to Meg Heckman, a journalism professor at Northeastern who will moderate a panel on AI in the newsroom. How much technical expertise should we expect journalists to have when they’re covering controversies in artificial intelligence?

“We want to give people in the journalism school the vocabulary and the background to ask intelligent questions about technology as pervasive as AI,” she said.

Hartzog said that journalists and lawmakers alike need to hold the creators of these machines accountable if they want to report on and regulate artificial intelligence accurately.

“AI is portrayed in the media like it’s some kind of unexplainable force, but it’s important to realize these machines and tools are built and trained by humans,” he said. “We need to keep that fact highlighted. We can’t just blame the machines.”

Understanding the motivations of the people responsible for producing these machines could also help answer Carroll’s question about libel. “If an AI writes something libelous, we need to look at the people who created this technology and ask: Was this intentional or was this an accident? Then we can assign blame,” Hartzog said.

For more discussion of questions like these, attend the event on Friday, which will feature speakers from the Associated Press, ProPublica, the Columbia University School of Journalism, and other organizations.