Will these heavy-handed constraints ultimately stifle the very innovation China needs to compete with the U.S.? By forcing AI models to operate within a narrow ideological "sandbox," the government risks making its homegrown models less capable, less creative, and less useful than their Western counterparts, potentially causing China to fall behind in the most important technological race of the century. Will the Western counterparts follow suit?
I don't see how filtering the training data to exclude specific topics the CCP doesn't like would affect the capabilities of the model. The reason Chinese models are so competitive is because they're innovating on the architecture, not the training data.
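Mechanically, topic-level filtering of a pretraining corpus really is simple, which is part of why it can seem costless. A minimal Python sketch, where the blocklist terms and toy corpus are invented stand-ins (real pipelines use trained classifiers and far larger term lists):

```python
# Minimal sketch of topic-level pretraining-data filtering. The blocklist
# terms and toy corpus are invented for illustration; real pipelines use
# trained classifiers and much larger term lists.

BLOCKED_TERMS = {"forbidden topic a", "forbidden topic b"}  # hypothetical

def is_clean(document: str) -> bool:
    """Keep a document only if it mentions no blocked term."""
    text = document.lower()
    return not any(term in text for term in BLOCKED_TERMS)

def filter_corpus(documents):
    """Yield only the documents that pass the blocklist check."""
    return (doc for doc in documents if is_clean(doc))

corpus = [
    "A lecture on thermodynamics and entropy.",
    "An essay that discusses forbidden topic a at length.",
]
print(list(filter_corpus(corpus)))  # only the thermodynamics lecture survives
```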
Intelligence isn't a series of isolated silos. Modern AI capabilities (reasoning, logic, and creativity) often emerge from the cross-pollination of data. For the CCP, this move isn't just about stopping a chatbot from saying "Tiananmen Square." It's about the unpredictability of the technology. As models move toward Agentic AI, "control" shifts from "what it says" to "what it does." If the state cannot perfectly align the AI's "values" with the Party's, they risk creating a powerful tool that could be used by dissidents to automate subversion or bypass the Great Firewall. I feel the real question for China is: Can you have an AI that is smart enough to win a war or save an economy, but "dumb" enough to never question its master? If they tighten the leash too much to maintain control, the dog might never learn to hunt.
Imagine a model trained only on texts describing an Earth-centered universe and four classical elements (earth, air, fire, and water), or one trained only on the premise that the world is flat. Would the capabilities of the resulting model equal those of models trained on a more robust body of scientific data?
Architecture and training data both matter.
Pretty much all the Greek philosophers grew up in a world where the classical element model was widely accepted, yet they had reasoning skills that led them to develop theories of atomism and to measure the circumference of the Earth. It would be difficult to argue they were less capable than modern people who grew up learning the very ideas they originated.
It doesn't seem impossible that models might also be able to learn reasoning beyond the limits of their training set.
I mean, they arrived at those ideas very slowly; they would have to quickly learn everything modern if they wanted to compete...
Kind of a version of "you don't have to run faster than the bear, you just have to run faster than the person beside you."
I imagine trimming away 99.9% of unwanted responses is not difficult at all and can be done without damaging model quality; pushing it further will result in degradation as you go to increasingly desperate lengths to make the model unaware, and actively, constantly unwilling to be aware, of certain inconvenient genocides here and there.
Similarly, the leading models seem perfectly secure at first glance, but when you dig in they’re susceptible to all kinds of prompt-based attacks, and the tail end seems quite daunting. They’ll tell you how to build the bomby thingy if you ask the right question, despite all the work that goes into prohibiting that. Let’s not even get into the topic of model uncensorship/abliteration and trying to block that.
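For anyone unfamiliar with abliteration: the published refusal-direction technique estimates a single direction in activation space from contrastive refused/answered prompts, then projects it out of the hidden states. A toy NumPy sketch of just that linear algebra, using random stand-in activations rather than any real model's internals:

```python
import numpy as np

# Toy sketch of the "refusal direction" idea behind abliteration: estimate
# one direction from contrastive activations, then project it out of the
# hidden states. Activations here are random stand-ins, not real model state.

rng = np.random.default_rng(0)
d_model = 64
acts_refused = rng.normal(size=(100, d_model)) + 3.0  # states on refused prompts
acts_complied = rng.normal(size=(100, d_model))       # states on answered prompts

# Candidate refusal direction: difference of the two means, normalized.
direction = acts_refused.mean(axis=0) - acts_complied.mean(axis=0)
direction /= np.linalg.norm(direction)

def ablate(hidden: np.ndarray) -> np.ndarray:
    """Remove the component of each hidden state along the refusal direction."""
    return hidden - np.outer(hidden @ direction, direction)

h = rng.normal(size=(5, d_model))
print((ablate(h) @ direction).round(6))  # ~zeros: refusal component removed
```

Blocking this is hard precisely because it needs only the open weights and a modest set of contrastive prompts.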
> Will the Western counterparts follow suit?
Haven't some of them already? I seem to recall Grok being censored to follow several US gov-preferred viewpoints.
The West is already ahead on this; it's called AI safety and alignment.
People laughing away the necessity for AI alignment are severely misaligned themselves; ironically enough, they very rarely represent the capability frontier.
Probably not.
It's the arts, culture, politics, and philosophies being kneecapped in the embeddings, not really the physics, chemistry, and math (a rough way to probe this is sketched below).
I could see them actually getting more of what they want: Chinese people using these models to research the hard sciences, without having to carry the cost of "deadbeats" researching, say, the use of the cello in classical music. All of those prompts carry an energy cost.
I don't know? I'm just thinking the people in charge over there probably don't want to shoulder the cost of a billion people looking into Fauré, for example. And this course of action delivers them added benefits of that nature.
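The "kneecapped in the embeddings" claim is at least testable on open-weights models: embed items from a safe domain and a sensitive one, and check whether one domain's vectors collapse together. A rough probe sketch, where the model name and prompt lists are arbitrary stand-ins for whatever model and topics are under test:

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

# Sketch of probing whether a domain is poorly represented in embedding
# space: if a domain's items all collapse to near-identical vectors, the
# model carries little information about that domain.
model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in for the model under test

domains = {
    "physics": ["Newton's laws of motion", "entropy in thermodynamics", "quantum tunneling"],
    "culture": ["the cello in Faure's chamber music", "Tang dynasty poetry", "absurdist theatre"],
}

for name, prompts in domains.items():
    emb = model.encode(prompts)
    sims = cosine_similarity(emb)
    # Mean off-diagonal similarity near 1.0 means the items are barely
    # distinguished, i.e. the domain is thinly represented.
    off_diag = sims[~np.eye(len(prompts), dtype=bool)]
    print(f"{name}: mean off-diagonal similarity = {off_diag.mean():.3f}")
```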
This isn’t surprising. They even enforced rules protecting Chinese government interests at TikTok's US operation (https://dailycaller.com/2025/01/14/tiktok-forced-staff-oaths...), so I would expect them to be tougher within their own borders.
China has been more cautious the whole year. Xi has warned of an "AI" bubble, and chatbot access was locked down during exam periods.
More censorship and alignment will have the positive side effect that Western elites get jealous and also want to lock down chatbots, which will then get so bad that no one will use them (great!).
The current propaganda production is amazing. Half of Musk's retweets seem to be Grok-generated tweets posted under different account names. Since most of the responses to Musk are bots too, it is hard to know what the public actually thinks of it.
Interesting, but in a country like China, where companies are partially owned by the CCP itself, I feel like most of these discussions would (should?) have happened in a way where they don't leak outside.
If the government formally announces it, I believe they must have already taken appropriate action against it.
Personally, I believe we are going to see distills of large language models, perhaps even filtered through open-weights Euro/American models (a sketch of what distillation means mechanically follows this comment).
I do feel like everybody knows the separation of concerns here: nobody really asks Chinese models about China. But I am a bit worried about whether AI models can still push a Chinese narrative in subtler cases, say, where someone is creating a website related to another nation, or anything similar. I don't think it would be that big of a deal, and I will still use Chinese models, but an article like this definitely reduces China's influence overall.
America and Europe, please make creating open-source / open-weights models without censorship (like the GPT model) a major concern. You already have intelligence like Gemini Flash, so just open-source something similar that can beat Kimi/DeepSeek/GLM.
Edit: Although, thinking about it, I feel like the largest impact wouldn't be on us outsiders but rather on the people in China. They have access to Chinese models, but there would be very strict controls there on even the open-weights models from America and elsewhere. So if Chinese models carry propaganda, they would most likely be trying to convince the average Chinese person. I don't want to put a conspiracy hat on, but if we do: the Chinese social credit score could take a look at people who ask Chinese chatbots questions suspicious of the CCP.
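On the distills mentioned above: distillation just means training a smaller student model to match a larger teacher's softened output distribution, so any filtering applied to the teacher tends to pass through to the student. A generic PyTorch sketch of the standard temperature-scaled KL distillation loss, with random logits as stand-ins rather than any lab's actual pipeline:

```python
import torch
import torch.nn.functional as F

# Generic knowledge-distillation loss: a small student is trained to match
# a larger frozen teacher's softened output distribution. The logits below
# are random stand-ins for real model outputs.

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Temperature-scaled KL divergence between teacher and student."""
    t = temperature
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    # The t**2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * t**2

vocab_size = 32000
student_logits = torch.randn(4, vocab_size, requires_grad=True)
teacher_logits = torch.randn(4, vocab_size)  # in practice: frozen teacher outputs

loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
print(loss.item())
```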
Last time I checked, China's state-owned enterprises aren't all that invested in developing AI chatbots, so I imagine that the amount of control the central government has is about as much as their control over any tech company. If anything, China's AI industry has been described as under-regulated by people like Jensen Huang.
A technology created by a certain set of people will naturally come to reflect the views of said people, even in areas where people act like it's neutral (e.g., cameras that are biased towards people with lighter skin). This is the case for all AI models—Chinese, American, European, etc.—so I wouldn't dub a model that censors information its makers don't like as propaganda just because we happen to like that information, since we naturally have our own version of the same thing.
The actual chatbots themselves seem to be relatively useful.