AI governance is not just top-down in China, research finds

Political scientist Xuechen Chen said traditional Chinese values and commercial pressures have also driven moves to regulate generative AI platforms.

Xuechen Chen’s paper examines how Chinese AI companies have looked to self-regulate their content to keep consumers on side. Photo by Matthew Modoono/Northeastern University

LONDON — China-watchers arguing that Beijing’s artificial intelligence controls are dependent on its authoritarian government are peddling a “stereotypical narrative,” according to new research.

Xuechen Chen, associate professor in politics and international relations at Northeastern University in London, has co-written a paper that explores how traditional Chinese values and commercial interests have also played a part in self-regulatory guardrails on AI being introduced.

The argument is made in a peer-reviewed paper, “State, society, and market: Interpreting the norms and dynamics of China’s AI governance,” published in Computer Law & Security Review.

The prevailing view of President Xi Jinping and the Chinese Communist Party’s grip on the running of China, where dissent and anti-government views are heavily censored, has left many Beijing observers convinced that technological oversight in the country is top-down.

But Chen said it was a “stereotypical narrative” to argue that all safeguards are handed down by the state. Such a view, she argued, fails to recognize the impact Chinese society and cultural norms are having on technological giants such as TikTok owner ByteDance and chatbot insurgent DeepSeek.

“What we wanted to do is demonstrate that China’s AI governance, and digital governance more broadly, is not like what people imagine — a top-down, state-driven system where the national government says you should do that, and then you just do it,” said Chen.

“It’s actually not like that because in this whole governance process, there exist a wide range of different stakeholders, including obviously the state, but also the private sectors and then more recently, and I think more importantly, society.”

Chen described each of these elements — the state, private sector and society — as stakeholders in the governance debate. “They collaborate and then they co-produce these norms and regulatory mechanisms,” she added.

According to a study by Tech Buzz China and Unique Research, 23 of the 100 largest AI products worldwide by annual recurring revenue are made by Chinese developers, with the majority of them focused on overseas markets. The four largest Chinese firms — Glority, Plaud, ByteDance and Zuoyebang — generated a combined $447 million.

That income remains well behind the biggest players in the U.S., with developers OpenAI and Anthropic making around $17 billion and $7 billion in estimated annual recurring revenue, respectively.

China has not enacted comprehensive AI legislation like the European Union’s AI Act, but instead follows a more American model of market-led regulation, Chen said.

AI governance is led by the Cyberspace Administration of China, the country’s internet regulator. 

China hawks argue that the agency is part of the state’s censorship of the internet. In September, the Cyberspace Administration carried out a two-month campaign in which it threatened to dish out “strict punishments” against social media apps such as Weibo — a popular micro-blogging site — that failed to rein in “negative” content about life in China.

Tech news outlet Wired reported that every AI company in China has to register with the regulator and show that its products avoid risks, ranging from psychological harm to “violating core socialist values.”

Chen’s paper, co-authored with Lu Xu from Lancaster University, points out that China became the first country to put formal regulations in place that specifically relate to generative AI. The topic has been a source of debate in the West in recent weeks after Grok, Elon Musk’s AI on the social media platform X, caused a furor by creating sexualized deepfake images of women and children.

Chinese generative AI is legally restricted from making content that is unlawful or considered vulgar to “reflect the taste and wider concerns of contemporary Chinese society,” the paper states. “China has also developed arguably one of the most effective and rigorous systems for minor protection in cyberspace, encapsulating gaming, short-video and GAI services,” it adds.

Last year, the communist government updated its wide-ranging Minors Protection Law to include online restrictions. The update limited the time minors can spend online and required smartphone producers to install child-friendly modes.

Even before that legislative update, Chen said that AI developers had looked to self-regulate and “proactively impose rules” on their platforms to avoid clashing with regulators.

The reason is twofold, she explained. The first is not wanting to fall foul of the government’s strict censorship laws. For example, DeepSeek — China’s answer to OpenAI’s ChatGPT chatbot — will not respond to prompts that are critical of Xi’s administration.

The second reason firms deploying AI have decided to self-regulate is market-driven, Chen continued. Chinese culture retains Confucian values, in which the hierarchical nature of the family remains strong. This means that if parents spot their children consuming harmful or undesirable content on AI and other online platforms, they are likely to intervene.

“If ByteDance does not control the content for kids, then the parents would be furious, and then they would simply just say, ‘No, I’m not going to use your TikTok, and I’m done,’” said Chen. “Tech companies don’t want to face this kind of scenario where the consumers are not happy.”

There is a wider question, Chen accepted, of how powerful the other non-state actors are in an authoritarian society like China’s. But she said weighing that up is a question for further research.

“In this paper, what we wanted to demonstrate is that these different actors, they indeed have been actively participating in shaping the regulations and policies and guidelines and standards in the field,” she added.