
EU AI Act sets precedent with $37 million fines for non-compliance. Experts explain the impact on UK and US developers

Northeastern experts set the scene for how the EU AI Act will work and what the pros and cons are of governments stepping in to regulate the advanced technology.

The European Union’s AI Act regulating the technology has been described as the ‘first of its kind in the world’. Philipp von Ditfurth/picture-alliance/dpa/AP Images

LONDON — The world’s first legislation designed to regulate artificial intelligence has arrived, and it comes with hefty fines for developers who fail to comply.

The European Union’s AI Act has a reach that extends beyond the territorial remit of the bloc’s 27 member states and carries predetermined penalties of up to 35 million euros ($37 million). For large corporations, fines tied to a share of global revenue could be higher still.

Mathieu Michel, the Belgian digital minister, heralded the act as a “landmark law,” calling it the “first of its kind in the world” when EU member states gave it their final sign-off last month.

Anton Dinev, an assistant professor in law at Northeastern University, explains that developers in the United Kingdom (which left the EU in 2020), the United States and elsewhere will have to prepare for the law’s impact as rules designed to protect the fundamental rights of European citizens gradually come into force.

The London-based professor says anyone using an AI system that processes data from the EU, or whose output has repercussions inside the bloc, will be covered by the legislation.

“The AI Act has a very broad extraterritorial reach,” Dinev says. “It applies not only to users of AI systems located within the European Union but also to providers placing into the European market or putting into service AI systems [that impact EU citizens], irrespective of whether those providers are established within the EU or a third country.”

The law, which was three years in the making, categorizes different types of AI according to risk. AI systems presenting only a limited risk would be subject to light transparency obligations, according to the EU, while “high-risk” AI systems would be authorized, but subject to a set of requirements and obligations in order to gain access to the EU’s single market.

AI systems deemed to pose an “unacceptable risk” to the rights of EU citizens will be outlawed entirely. The act defines an AI system as one capable of operating with a degree of autonomy and of producing “outputs” such as predictions and recommendations.

Northeastern’s Anton Dinev, an expert in EU law, and Malik Haddad, an assistant professor in computer science at the university. Courtesy photos.

Bans will come into force next year on practices such as using the technology for social scoring, a process that could deny people access to public services based on their behavior, and for predictive policing based on profiling. Punishments for failing to comply include fines of as much as 35 million euros ($37 million) or 7% of global revenue, whichever is higher.

Dinev explains that U.S.-administered qualifications such as the Test of English as a Foreign Language (TOEFL) will also fall under the AI Act.

Those taking the TOEFL in Europe have their answers scored in the U.S. by a process that uses both AI and human review. He says that means the activity falls into the EU’s “high-risk” category, which covers educational institutions’ use of AI, as well as its deployment in health care and recruitment settings.

“With the TOEFL test, the provider is based in the U.S., AI is being used in the U.S. — but the AI Act will still apply to that activity,” he says.

Developers whose systems fall into the “high-risk” category will need to establish a code of practice by March 2025 that ensures compliance with the EU legislation. Fines for breaches can reach 15 million euros ($16.2 million) or 3% of a firm’s global revenue, Dinev explains.
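As an illustration only, here is a minimal Python sketch of the “whichever is higher” fine cap described above, assuming the same rule applies to both tiers mentioned in the article; the revenue figure and the function name are hypothetical and not part of any official tool.

# Illustrative sketch only: the fine ceiling is a flat amount or a share of
# global revenue, whichever is higher. The 1 billion euro revenue is hypothetical.
def fine_cap_eur(global_revenue_eur: float, flat_cap_eur: float, revenue_share: float) -> float:
    """Return the maximum possible fine under a 'whichever is higher' cap."""
    return max(flat_cap_eur, global_revenue_eur * revenue_share)

# Prohibited practices such as social scoring: 35 million euros or 7% of global revenue.
print(fine_cap_eur(1_000_000_000, 35_000_000, 0.07))  # 70000000.0

# Breaches of high-risk obligations: 15 million euros or 3% of global revenue.
print(fine_cap_eur(1_000_000_000, 15_000_000, 0.03))  # 30000000.0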

General-purpose AI models, a definition that covers the large language models behind tools such as ChatGPT and Google’s Gemini, will face some limited requirements, mostly regarding transparency, according to Brussels. But the legislation states that models regarded as presenting “systemic risks” will have to comply with stricter rules.

The EU’s decision to move first in regulating AI can be seen in the context of its past efforts to set international benchmarks for civil protections, according to Dinev, an expert in European law.

He highlights how Brussels’ decision to establish better personal data controls in the form of the General Data Protection Regulation (GDPR) became the “gold standard” and “inspired many similar” laws across the globe, including in the state of California.

“Although some would criticize the European Union’s approach as being too interventionist, there is a rationale for this,” he says. “It is a reality that there is regulatory competition out there in the world and, just like in traditional markets, there is always this ‘first mover’ advantage. Whoever gets to regulate a new technology sets the tone.”

Malik Haddad, an assistant professor in data and computer science, says regulation of AI is “a must” because of concerns that the technology could be used to cause harm, including threats to national security. But the Northeastern expert argues that the industry, rather than governments, should be responsible for setting the rules, so as not to limit AI’s advances.

“There is a need for regulation, but the regulation should not limit the progression of AI or the development of AI techniques because we all know and believe that AI will improve our quality of life in the future,” he says.

Haddad continues: “Regulation is a must because there are a lot of ethical standards and other GDPR standards that could be breached if AI was not regulated. For example, if we take computer vision. It is a very helpful AI technique to identify individuals, it can support security issues and can ease our lives. But at the same time this can be used for unethical uses, which must be regulated. 

“But the point is, who should regulate? Big firms using AI or governments? I think AI regulation should be done by people who are using or who are developing the algorithms.”

Haddad, who previously worked on applying AI to powered mobility devices, is concerned that having politicians and civil servants set the rules could prevent AI from reaching its full potential, especially given the current pace of progress.

“Developers are the only people who know the capabilities of the system they are developing, not an outside body,” he says. “Having external regulation could limit their capabilities and limit the research they can do to improve their product.”

He adds: “In the future, what was previously taking two or three years, we can cover in just a few months. The regulations need to cope with this accelerated progress.”

Legal expert Dinev says there are concerns that there is “too much regulation” in Europe and that this can be “counterproductive” to making technological breakthroughs.

But at the same time, he suggests that complying with EU AI standards could come to be seen as an international stamp of approval and trustworthiness, in much the same way as GDPR compliance is now viewed.

“It is true whenever you have regulation that it could pose a strain on innovation,” he says. “For example, if you are a developer of a small AI system with a particular niche, you might think twice if you want to scale up because, if you do, you could move one category up and you could face a higher regulatory burden.

“But, equally, there is the idea that the regulation itself could serve as a badge of quality. If you are AI compliant in the European Union, countries or users elsewhere might think of you as a good, reliable provider of AI systems.”