Alice Helliwell, an expert in the philosophical implications of computational creativity, thinks the public needs help gaining a “conceptual level” understanding of how AI works.
LONDON — The public would be less “scared” of artificial intelligence if time were taken to explain conceptually how the advanced technology works, a Northeastern University expert says.
Alice Helliwell, an assistant professor of philosophy, made the argument during her appearance on a six-person panel examining AI’s impact on the creative industries as part of a regular public series, “Conversations On AI.”
The event — held at tech giant Adobe U.K.’s London headquarters on June 13 and titled “How should humans co-create with AI?” — was the third in a series of workshops inspired by Northeastern’s graduate program in philosophy and AI.
Helliwell, whose research is focused on the philosophical implications of computational creativity and art made by AI, argued that people need a greater understanding “about how the systems work.”
“At the minute, for lots of people, they are just a black box,” said the London-based academic. “No matter that the computer scientists know what is going on, for the rest of us, a lot of us are scared because we don’t know what is going on.
“And I think we are seeing that come out quite a lot in the sort of accusations of what these generative AI systems are doing and how they are working, in that they are not necessarily based on how they actually work. And that doesn’t mean that there are not objectionable things about them; it is just that it is easy to deflect them because they are not based on the actual technology.
“So part of what we should be thinking about doing, in this current landscape where these are out there, is making sure that people do have some understanding of how they are working, even at a conceptual level. You don’t necessarily need all the mathematics behind it, but at some conceptual level … I think it is very important to focus on how to explain that.”
The event, co-organized by Northeastern University, Adobe and tech firm Cognizant, brought together a host of key figures who are currently teaching about or dealing with the impact of AI on creative industries.
Christian Zimmermann, chief executive of the nonprofit Design and Arts Copyright Society, said regulation is needed to help protect the rights of artists.
“It is not about getting rid of technology but it has to go hand in hand with the recognition that other people’s works are being used in order to train these machines that do great things,” he said.
“What we are seeing now is that there is a wholesale use of works by every type of creator without any recognition, without any remuneration, and there isn’t any regulation of how it is being used,” he added.
Sam Adeyemi, a product specialist at Adobe U.K., said his company is training its AI models on Adobe “stock content” so that it can “compensate the contributors of our models.”
He added that, while there are public fears that AI could end up replacing creative jobs, Adobe is using the technology to free up creative staff by automating the more mundane and repetitive tasks.
But he said there was a recognition that models trained only on data from the internet can end up with inherent biases built in.
“If you were to use a model that has just been trained by the internet and you typed in ‘doctor,’ the truth is that it is going to give you a white male as a doctor. That is not representative of the world that we live in,” Adeyemi said.
“So when it came to our models, we really thought about mitigating that bias, so we have trained our models to be more diverse in their set.”
Helliwell picked up on Adeyemi’s point about bias, explaining that teams at Northeastern have been conducting a research project to help developers think about “ethically minded questions” when creating AI.
She said the idea was about “embedding” ethical development throughout the creation of new technology and “not just relying on regulation after technology has come out.”
“It is hard to do, and it does put you at risk that other countries, which maybe aren’t pushing this kind of thing, might develop [AI products] quicker,” Helliwell said.
“But if you are asking the question of how this might be misused at the time of development, you might be able to guard against it a bit better rather than trying to go back in afterwards to regulate around it or try and fix things about it,” she said.
Tracy Woods, head of life sciences consulting in the U.K. and Ireland for Cognizant, opened the event by explaining how her time studying for a master’s degree in philosophy and AI at Northeastern in London inspired the idea of holding regular public talks.
“Apart from the excellent lectures and learning that the university gave me, the other fantastic part of it was the diversity of the people who were doing the course and the lectures,” she said.
“We had artists, we had people in business, we had people in social services, academics and a whole bunch [of others]. There was huge value in having all of those different points of view in really deep discussions around some of these topics — it was absolutely fascinating.”