
World leaders want to rein in military AI applications. With the help of a Northeastern professor, they put together a blueprint to do so


Photo: A group of people posing at the REAIM Summit. The REAIM is an initiative spearheaded by South Korea and the Netherlands; Singapore, Kenya and the U.K. joined forces with the two nations to host the second global summit. Courtesy photo

Warfare in the digital age has advanced alongside rapid technological change. As the weapons of war continually evolve, ingenuity itself becomes a source of scrutiny. We saw that last week, when exploding pagers and walkie-talkies caused 25 deaths and more than 600 injuries in Lebanon, in attacks believed to have been carried out by Israel.

But perhaps the most insidious development in modern warfare, many experts have pointed out, is the prospect of autonomous weapons systems — weapons that deploy artificial intelligence to destroy targets and carry out killings without direct human control.

It was the subject of AI military applications that brought world leaders, academics, civil society organizations and AI experts together earlier this month in Seoul, South Korea, for a summit — one that represented yet another step in a multistakeholder effort to develop a global governance framework for the responsible use of AI across civilian and military contexts. 

Denise Garcia, a Northeastern University professor of political science and international affairs, attended the Summit on Responsible AI in the Military Domain (REAIM) as a member of the Global Commission on Responsible Artificial Intelligence. The commission was created following REAIM’s first global summit, which took place in February 2023 at the World Forum in The Hague.

Present for the summit were many of the world’s top diplomats, defense ministers and foreign policy minds. All in all, more than 60 countries — including the United States — endorsed the outcome of the talks, according to Reuters.

That result was the Blueprint for Action — a document that spells out key principles for governing the proliferation of AI weapons systems: emphasizing and maintaining human control over the technologies; reaffirming international law as the guiding principle; and stressing that weapons systems be developed in such a way as to “not undermine international peace, security and stability.”

The REAIM is an initiative spearheaded by South Korea and the Netherlands. Singapore, Kenya and the U.K. recently joined forces with the two nations to host the second global summit (Sept. 9-10). The primary achievement of the first global summit was a “joint call to action” on the responsible use of AI.

“This was the second high-level summit,” Garcia says. “It was a ministerial-level meeting, with ministers of defense, in particular; but also ministers of foreign affairs.” 

As a global commissioner, Garcia says she played “a very active role” in helping to shape the direction of the draft of the Blueprint for Action. Garcia sat on the International Panel for the Regulation of Autonomous Weapons from 2017 to 2022; she says AI military applications have already been deployed in the ongoing conflicts in Europe and the Middle East — one of the most recognizable examples being Israel’s Iron Dome.

She’s also written a book on the subject: “The AI Military Race: Common Good Governance in the Age of Artificial Intelligence,” which explores the consequences of what she describes as an AI military arms race between superpowers.   

Today’s AI and quasi-AI military applications have already impacted the battlefield. According to one source, one such application lets a single person control multiple unmanned systems, such as a swarm of drones capable of attacking by air or beneath the sea. In the war in Ukraine, loitering munitions — uncrewed aircraft that use sensors to identify targets, sometimes called “killer drones” — have generated debate over just how much control human agents have over targeting decisions.

While the REAIM is not sponsored by organizations such as the United Nations, its goal is to build “like-mindedness” among states and other actors — beginning with a small group of nations and eventually, participants hope, the whole of U.N. membership. 

And civil society was present at this month’s summit, Garcia says, “en masse.”

“The presence of a large delegation of youth in Seoul was also very inspiring,” Garcia says. “Indeed, there is an incredible momentum — and there is a sense of urgency.”

There is also, Garcia says, a desire to clear up some of the misconceptions surrounding autonomous weapons. Central to this campaign is ensuring that human beings remain “in the loop” as these weapons systems continue to develop and evolve, rather than banning the technology outright.

“We want to clarify, demystify and debunk some myths and ideas that keep getting repeated at the diplomatic level in Geneva, that just make no sense, that are just delaying the process of a new treaty,” she says.