
Why Microsoft is opening an AI office in London and what its challenges will be

Northeastern experts look at why Microsoft chose London for a new AI base and some of the difficulties that could face the tech giant’s U.K.-based developers.

Microsoft on Oxford Street in London. Photo by Peter Dazeley/Getty Images

LONDON — There was palpable excitement in the U.K. when Microsoft announced it plans to open an office in London dedicated to researching and developing artificial intelligence.

Viscount Camrose, the British minister for AI, told the BBC that Microsoft’s new hub was a “vote of confidence in the U.K.’s status as a global leader in AI innovation.”

Prime Minister Rishi Sunak, who previously worked in California, the home of such Silicon Valley internet giants as Google, eBay and Apple, has looked to position the U.K. as the home of global AI regulation.

The AI Safety Summit held in England in November was part of that pitch, with the gathering aimed at selling world leaders and tech bosses on the prospect of the U.K. becoming the hub for controlling and monitoring advances in AI.

Mark Martin, an assistant professor in computer science at Northeastern University in London. Courtesy photo

When announcing the new London hub in a blog post this month, Microsoft AI chief executive Mustafa Suleyman praised the U.K.’s “safety-first” approach to AI.

Mark Martin, an assistant professor in computer science at Northeastern University in London, said the U.K.’s wooing of industry leaders through the AI summit was about ensuring the country was not “chasing” the pack like it had during previous periods of technological progress.

“The U.K. is really keen to have a voice within this AI space, especially in terms of the national risk it presents,” he said. “We need more people in AI to be able to help [protect] us from cyberattacks, we need more people to grow businesses within the U.K., so there are quite a few incentives for having this conversation now rather than having to chase.

“I think that historically the U.K. has been chasing a lot of what is happening around the world when it comes to tech innovation,” he continued. “But that summit has been a catalyst to think about, how can we start to have some of these guardrails when it comes to AI? Because we don’t yet know what AI is in terms of its potential to be an opportunity and also a potential to be a great risk.”

Suleyman — who co-founded AI research lab DeepMind in the U.K. before it was bought by Google in 2014 — said there is an “enormous pool of AI talent and expertise in the U.K.” when announcing the AI office.

Microsoft is a major investor in OpenAI — the U.S. organization behind the ChatGPT chatbot — which itself opened an office in London in 2023.

Martin said Microsoft committing further to London marked an “exciting opportunity,” with the city’s varied economy making it “perfectly placed for new innovation and emerging technologies.”

“We know that London is a melting pot for different sectors, from the financial district, to regulators and so forth,” he said. “And despite Brexit, we are still in the top five in the world for tech, and being in London lures a lot of investment and a lot of innovation.”

Brian Ball, associate professor in philosophy at Northeastern in London. Courtesy photo

He said the hope is that “homegrown talent can be plugged into these opportunities” available in the AI sector, rather than having to “leave this country and go elsewhere” for big tech jobs.

As opportunities widen for those working in, or training for, Britain’s tech workforce, those developers will also need to brace for the next challenges that AI development poses.

Brian Ball, associate professor in philosophy at Northeastern in London, said that Microsoft, which has said it wants to “drive pioneering work to advance state-of-the-art language models” at its London base, is likely to face the hurdle of producing AI that can understand the nuances of human speech.

He said “linguistic competence is complicated,” especially when it comes to understanding changes in tone of voice or inferences. 

Ball, an expert in the philosophy of language, added: “Pragmatics is about what people do with words.

“So one example I like to give is if you say something dumb and I say, ‘Yeah, and I’m a monkey’s uncle,’” he said. “I’m implicating or implying that what you said is not true, because what I said is not true.

“It is obviously not true and you can infer from my saying that, that I don’t mean the thing that my words mean. It is not merely the expression of their literal meaning — I’ve tried to imply something,” Ball said. “So I think picking up on those pragmatic features of language-use is not easy for computational modeling of language traits. [But] I’m not saying it is in principle impossible.”

That ability of large language models, known as LLMs, to decipher such inferences is important if they are to understand the difference between humor and more sinister attempts to mislead people, Ball pointed out.

“This plays out in things like detecting misinformation. People might say, ‘Oh, well that was just a joke,’” he said. “So language models need to be able to distinguish between, say, satire and the intent to deceive through spreading falsehoods. I think a lot of that is hard.”
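As a loose illustration of the kind of pragmatic judgment Ball describes, and not anything Microsoft or Ball has proposed, here is a minimal sketch that asks an off-the-shelf zero-shot classifier to label his “monkey’s uncle” example; the model choice and the candidate labels are illustrative assumptions.

```python
# A minimal sketch of the pragmatic-classification problem Ball describes:
# asking a model whether an utterance is sincere, sarcastic, or satirical.
# The model and label set are illustrative assumptions, not anything from
# the article; real satire-vs-deception detection is far harder than this.
from transformers import pipeline

# Off-the-shelf zero-shot classifier (an NLI model repurposed for labeling).
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

utterance = "Yeah, and I'm a monkey's uncle."
labels = ["sincere factual claim", "sarcastic denial", "satire or joke"]

result = classifier(utterance, candidate_labels=labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```

Because the classifier sees only the words, with no speaker, context or tone, it has little basis for picking “sarcastic denial” over a literal reading, which is precisely the gap between semantics and pragmatics that Ball argues makes this hard for computational models.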