
Teaching elementary schoolchildren the rights and wrongs of AI is just as important as sex and drug education, Northeastern expert says

Assistant professor in applied ethics Hossein Dabbagh says it should be mandatory for children to learn about the dangers of becoming over-reliant on artificial intelligence.

[Photo: a child looking at a screen, a digital display overlaid on their face.] A Northeastern professor argues that AI ethics should be a mandatory part of the curriculum in schools that teach children ages 11 and under. Getty Images

LONDON — Learning about the rights and wrongs of artificial intelligence should be as fundamental for a child in elementary school as sex or drug education, according to research led by a Northeastern University professor.

AI is “perhaps the most powerful tool humans will ever have used,” Hossein Dabbagh and a host of his peers say, and the next generation needs to be properly equipped to know about the ethics of deploying the advanced technology.

Dabbagh, an assistant professor in applied ethics in the faculty of philosophy at Northeastern in London, led research for an article published this month in the AI and Ethics journal arguing for AI ethics to be made a mandatory part of the curriculum in schools that teach children ages 11 and under.

He said it is vital that children grasp at an early age the dangers that machine learning represents.

Dabbagh points out that AI is already a feature in most people’s lives, whether they know it or not.

[Photo: headshot of Hossein Dabbagh.] Northeastern assistant professor in applied ethics Hossein Dabbagh. Courtesy Photo

People are exposed to AI through virtual assistants like Siri and Alexa, social media algorithms and online shopping recommendations — and that day-to-day contact is only likely to grow, particularly as the technology moves forward.

OpenAI’s ChatGPT, a chatbot powered by the company’s AI models, has advanced by leaps and bounds since its release in 2022, with studies suggesting it can pass some university examinations.

“AI is common, it is in our daily lives. We have access to AI when we use ChatGPT and social media, with its algorithms,” Dabbagh said.

“Even unconsciously we are using AI — or at least, we are receiving AI. It is mandatory to raise awareness about this in our schools and our curriculum.”

Just as parents would want their children to learn at a young age about the harms of taking drugs, Dabbagh argues, so too should children understand the risks of developing an overreliance on AI.

“In drug education, you think about drug dependency,” Dabbagh said. 

“We can use the same argument to talk about AI dependency — using AI for everything and over-trusting AI may be dangerous for the next generation.

“If they believe that AI provides the right answer for everything, that might perhaps reduce the level of critical thinking, the critical ability of the next generation — that might be concerning, that might be problematic.

“In our schools and universities, we want to raise a generation that has this ability to think critically and think independently. 

“But if you just transfer absolutely everything to AI — suppose the next generation asks every question to ChatGPT and they assume that whatever they will see from ChatGPT is correct.

“In the long run that might be bad for our society because we lose that ability or capacity to think critically. 

“So it is the same kind of concerns, the same kind of harms that society might receive from a lack of education about sex, or a lack of education about drugs, that we believe would be the case for a lack of education about AI ethics.”

He said that just as children learn the fundamentals of human-to-human relationships in sex education, the next generation needs to learn about human-to-robot relationships and about what should be kept private and confidential from AI.

Dabbagh’s thinking on the role of AI ethics in education was put into written form on April 4 in an article he co-wrote with colleagues in the AI and Ethics journal.

His co-writers included Brian D. Earp from the Uehiro Centre for Practical Ethics at the University of Oxford, Julian Savulescu, chair of the same center at Oxford, Sebastian Porsdam Mann from Oxford’s faculty of law, Monika Plozza from the faculty of law at the University of Lucerne in Switzerland and Sabine Salloch from the Institute for Ethics, History and Philosophy of Medicine at Hannover Medical School in Germany.

The contributors came together to discuss the role of AI in society and began looking at the need for “not only AI education and AI literacy, but AI ethics,” Dabbagh said.

In the open access article, the cohort gives examples of what kind of AI ethics elementary-age children might learn, including learning to distinguish between an image created by humans and one created by a machine.

Introducing storytelling with AI, simple coding exercises or discussions on the impact of AI in everyday life are other ways the curriculum could “make the subject accessible and engaging,” they suggest.

Dabbagh said coming up with games or learning exercises could be one way of introducing pupils to the pros and cons of AI.

“First of all, we need an appropriate language here for school children, a form of age-appropriate exercises and practices that can introduce AI, that can introduce ethics,” he said.

“The same kind of language that we use for sex education, the same kind of language that we use for drug education, we need to develop it for AI. This is the next challenging step that we need to take. Obviously, it is a legitimate concern, but we need to take it.”