
Apple’s missteps highlight risks of AI producing automated headlines, Northeastern researcher says

AI ‘doesn’t know what to do when it comes to conflicting or new things,’ says assistant professor of data science Mariana Macedo

Apple stopped its AI-powered Apple Intelligence from summarizing news notifications after the technology inaccurately reported headlines (Photo by Jaap Arriens/NurPhoto via AP)

LONDON — “Luigi Mangione shoots himself,” read the BBC News headline.

Except Mangione, the man charged with murdering UnitedHealthcare chief executive Brian Thompson, had done no such thing. Nor had the BBC reported that he had; yet that was the headline Apple Intelligence displayed to its users as part of a notifications summary.

It was one of several high-profile mistakes by the artificial intelligence-powered software that led the tech giant to suspend Apple Intelligence notification summaries for news and entertainment apps.

Anees Baqir says the inadvertent spread of misinformation by such an AI source “posed a significant risk by eroding public trust.” 

The assistant professor of data science at Northeastern University in London, who researches misinformation online, says errors like the ones made by Apple Intelligence are likely to "create confusion" and could lead news consumers to doubt media brands they previously trusted.

“Imagine what this could do to people’s opinion if there is misinformation-related content coming from a very high-profile news source that is usually considered as a reliable news source,” Baqir said. “That could be really dangerous, in my opinion.”

The episode with Apple Intelligence sparked a wider debate in Britain about whether publicly available mainstream generative AI tools are capable of accurately summarizing and understanding news articles.

BBC News chief executive Deborah Turness said that, while AI brings “endless opportunities,” the companies developing the tools are currently “playing with fire.”

There are reasons why generative AI like Apple Intelligence may not always get it right when it comes to handling news stories, says Mariana Macedo, a data scientist at Northeastern.

When developing generative AI, the "processes are not deterministic, so they have some stochasticity," says the London-based assistant professor, meaning there can be randomness in the outcome.

“Things can be written in a way that you cannot predict,” she explains. “It is like when you bring up a child. When you educate a kid, you educate them with values, with rules, with instructions — and then you say, ‘Now live your life.’ 

“The kid knows what is going to be right or wrong more or less, but the kid doesn’t know everything. The kid doesn’t have all the experience or the knowledge to react and create new actions in a perfect way. It is the same with AI and algorithms.”
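To illustrate the stochasticity Macedo describes, here is a minimal sketch, not any real product's implementation: generative models produce text by sampling from a probability distribution over possible next words, so the same prompt can yield different, and occasionally wrong, continuations. The words and probabilities below are invented for illustration.

```python
# Illustrative only: the same prompt can produce different outputs because
# the next word is sampled from a probability distribution, not chosen
# deterministically. Probabilities here are made up.
import random

# Hypothetical next-word probabilities after the prompt "Suspect ..."
next_word_probs = {
    "arrested": 0.55,
    "charged": 0.30,
    "shoots": 0.15,   # low-probability but still possible continuation
}

def sample_next_word(probs: dict[str, float]) -> str:
    """Pick one word at random, weighted by its probability."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# Running the same "prompt" several times gives varying results.
for _ in range(5):
    print("Suspect", sample_next_word(next_word_probs))
```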

Macedo says the issue with news and AI learning is that news is mostly about things that have just happened: there is little to no past context to help the software understand the reporting it is being asked to summarize.

“When you talk about news, you are talking about things that are novel,” the researcher continues. “You are not talking about things that we have known for a long period. 

“AI is very good at things that are well established in society. AI doesn’t know what to do when it comes to conflicting or new things. So every time that the AI is not trained with enough information, it is going to get even more wrong.”

To ensure accuracy, Macedo argues that developers need to “find a way of automatically double checking that information” before it is published.
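What such automatic double-checking might look like is open; one minimal sketch, assuming a crude word-overlap test rather than anything Macedo or Apple has built, is to flag terms in a generated summary that never appear in the source article and hold the notification for review. Real systems would need entailment models or human oversight; the article and summary strings below are illustrative.

```python
# A minimal, illustrative check: before publishing a generated summary,
# flag content words that do not appear anywhere in the source article.
# This is a sketch of the idea of "automatically double checking," not a
# production fact-checking method.

def unsupported_words(summary: str, source: str) -> set[str]:
    """Return content words in the summary that are absent from the source."""
    source_words = {w.strip(".,!?\"'").lower() for w in source.split()}
    summary_words = {w.strip(".,!?\"'").lower() for w in summary.split()}
    return {w for w in summary_words - source_words if len(w) > 3}

article = "Luigi Mangione was charged with murder and appeared in court."
summary = "Luigi Mangione shoots himself."

flagged = unsupported_words(summary, article)
if flagged:
    print("Hold for review; unsupported terms:", flagged)  # e.g. {'shoots', 'himself'}
else:
    print("Summary passes the basic check.")
```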

Allowing AI models to be trained on news articles could also make them "more likely to improve" their accuracy, Macedo says.

The BBC currently blocks developers from using its content to train generative AI models. But other U.K. news outlets have moved to collaborate: a partnership deal between the Financial Times and OpenAI allows ChatGPT users to see select attributed summaries, quotes and links.

Baqir suggests that collaboration between tech companies, media organizations and communications regulators could be the best way to confront the problem of AI-generated news misinformation.

“I think all of them need to come together,” he says. “Only then can we come up with a way that can help us mitigate these impacts. There cannot be one single solution.”