"The Trump administration, which took a noninterventionist approach to artificial intelligence, is now discussing imposing oversight on A.I. models before they are made publicly available."
I wonder how much of this is geared towards public safety versus the current administration wanting to use this as another form of leverage when AI companies (e.g. Anthropic) don't listen to them.
More to the point, the vendor will have to make the correct deals with and contributions to firms and foundations owned and operated by the President’s friends and family members.
I have many questions. How would A/B testing work in the scenario where models need to be approved by the government before release? All the big providers commonly a/b test their unreleased models on production traffic. Would these need to be preapproved? Many models get tested on the public for every one that is officially "released". Will the government have the bandwidth to examine each of these? Does changing the system prompt count as a different model or only model weights?
You just perfectly highlighted why over-regulation in tech is troublesome. This is why I've always been against European tech regulations which gave us the cookie prompt. Politicians shouldn't be product designers.
Edit: I'll take the downvotes. Every time I say this, I get downvoted. Weirdly, even EU politicians are beginning to see that they've over regulated their tech industry so much that it can't compete but HN just can't accept this opinion.
This is because the law should say "The only circumstances in which you can get your users PII is when they willingly give them to you, as clients/subscribers. The only circumstances you can sell that data or track your users is never".
Instead we tried something that look like a punt, and even then tracking/adtech ghouls aren't happy. I say we should lobby hard to get my version at least examined in the EU parliament (or in any parliament in a EU country, really), that will probably scare them into removing the cookie banners.
The cookie prompt is a perfect example of under regulation. The law you're citing as over-regulation requires companies to get consent before tracking you. Companies across the board settled on "annoy users into consenting" as their compliance strategy. You want to revert to implied consent? Fuck that, layer on "also you can't pester users into agreeing to being tracked." Too vague? That's the point; anything else and you incentivize dancing on the line of exactly how close to non-compliance you can get away with. Politicians broadly shouldn't be product designers, but establishing broad no-go zones around anti-consumer behavior is foundational to modern society. Without that you get cartoon ads marketing menthol cigarettes to kids and commercials for casino apps for betting on drone strikes.
This is over regulation because regulation always have unintended consequences. The unintended consequence is the cookie prompt.
And before someone comes out saying that only "bad" websites want to track you, the official European Union website has a cookie prompt. https://commission.europa.eu/index_en
> even EU politicians are beginning to see that they've over regulated their tech industry so much that it can't compete
Yes, it feels a bit weird to me that the HN crowd is a fan of regulation although much of the crowd works in the least regulated profession.
Maybe we need to have regulation that puts an automatic expiration on regulation and there's no way to bypass that. Existing regulation nearing expiration can only be extended by a democratic voting process. Just the burden of handling this should naturally filter out regulation that's unpopular or no longer relevant.
A worst case scenario I feel is that the government could restrict inference providers within the US to run only approved/American LLMs, which would be a huge deal since the only recent American OSS model is Gemma. I could see OpenAI/Anthropic/Google lobbying for that though…
My thought as well. They will approve every new American trained LLM but they can't control the release of free Chinese LLMs. Therefore, the only card they can play is to simply make Chinese LLMs illegal to use for American companies and Americans.
Ultimately, this will grant more power to OpenAI, Anthropic, and Google due to regulatory capture but it hurts the AI industry overall.
Only the us AI industry. It will put the US in a ditch with it's only asset being the ability to surveill its citizens for negative money and negative innovation. The rest of the world will keep spinning just fine
Nah there's literally no reason geopolitically or economically to do that for Europe. The US has more or less entirely botched everything except it's military applications for AI. It's endangering the whole economy in the process too.
They know very well that China is going to keep releasing world class models at 1/20th the price and 5-300x smaller in size. They also know they screwed up by going full technofacism and there's no way back because of the trillions invested in oligarchs and it endangers the entire economy.
China can’t keep doing that. This is essentially a capture the market ploy that is government back from them at this point at current DeepSeek prices they would have to make a massive loss.
I think you misunderstood what China has been doing to make the progress they have. They've invested far less and have actual applications because they are the world's largest manufacturer.
In the us we have products we sell to china to automate their factories. China soon wont need those. The US goal of laying off anyone who thinks for money is really different than chinas goals of automating product manufacturing.
Deepseek costs less because it actually costs less. Chinas electrical infrastructure is so much better than the uses. Meanwhile the us has ai data centers running on effing gas. On literal gas generators. The only budget discussions for infrastructure in the us are basically for the DHS too. It's not sustainable.
Wouldn't this immediately put the American companies producing these models at a significant disadvantage? Just use an unmolested model hosted by a provider in Vancouver.
If anything, this measure seems like it would create a scenario where services hosted outside the US would become a lot more attractive relative to Trumped AI.
Is there an arms race of payment infrastructure for international LLM providers? A common payment gateway so that people can pay providers anywhere for tokens will inevitably emerge if the US is making moves like this.
* Maybe Anthropic's call for regulation has backfired. Now it's going to be overregulation. They might regret it now.
* This might be regulatory capture for OpenAI, Google, and Anthropic. Any new entrant will have a harder time getting approval.
* This is going to be terrible for the industry in general because this administration will not hesitate to demand bribes and force their propaganda into the models.
* This might cause the US to ban the use of Chinese models for US businesses and governments. After all, Chinese models won't need white house approval to release. So the only way to "control" them is to simply make them illegal.
Nah, Anthropic would love this. They definitely don't want you using KimiK2.6, DeepSeekV4Pro, GLM5.1, MiniMax2.7, MimoV2.5-Pro, Qwen3.5-397B, Step3.5Flash, because, truth be told. You can survive fine without Claude.
"The National Security Agency has also recently used Anthropic’s Mythos model to assess vulnerabilities in the U.S. government’s software, people with knowledge of the work said."
I'm sure that's not the only thing they've used it for. Definitely looking for any exploit they can use to enhance data gathering, and cracking into IOS, private networks, etc. Gotta keep an eye on citizens, but hey, it's the only government body that really listens you.
at this point it almost seems like citizens should review AI models before the government can access them.
How the fuck would this even be enforced? "AI model" is a pretty broad thing; in some sense basically anything involving weights could be considered "AI", and even more abstractly you could argue that even a runtime conditional is AI.
Honestly, if we are discussing the „how“ I feel that we are already ceding too much ground. Whatever technical solutions exist it is a terrible precedent
What specifically is the goal of the pre-release review? Just to patch government systems first? Seems like the government was banning internal use of anthropic's models 2 months ago and now wants exclusive access for some amount of time. Clown show...
Mind elaborating instead of vague posting? Are you saying that China is already doing that vetting or that China will benefit because they can release models faster without having to be blocked by WH vetting?
- China is the largest open weight provider, with Mistral and Cohere delivering a few other models. There isn’t much else internationally
- (I think OP is suggesting) this would effectively ban Chinese models in the US, which would be an interesting case. Who knows if they could have theirs reviewed, or if we’ll see another FCC approved router situation.
- that Chinese models are censored is a very common criticism. If American models are also censored that looks bad.
- this will be awful for self hosters and local inference. Imagine if HuggingFace had to drop non-American model weights. That would effectively kill them.
Also not the OP, but my read is that China can release a model without the US president's approval. If the US models need approval and China's don't, then advantage China.
Um, I realize the Trump administration doesn't pay a lot of attention to what it does and does not have authority to do, but I'm having trouble imagining what they'd even claim their authority was...
"The Trump administration, which took a noninterventionist approach to artificial intelligence, is now discussing imposing oversight on A.I. models before they are made publicly available."
I wonder how much of this is geared towards public safety versus the current administration wanting to use this as another form of leverage when AI companies (e.g. Anthropic) don't listen to them.
They will have to "correctly" answer who is the best president, is the straight of Hormuz blocked, and how tall should the ballroom be.
More to the point, the vendor will have to make the correct deals with and contributions to firms and foundations owned and operated by the President’s friends and family members.
Also answer “correctly” who won the 2020 presidential election
And Jan 6 revisionism, i.e. not mentioning that the sitting president attempted a coup to steal an election
Or who was identified in the Epstein files
Wishing a fond goodbye to the Gulf of Mexico.
And how will the test change in 2029?
We won't even have the window dressing of being declined. “Sorry, that’s beyond my current scope. Let’s talk about something else.”[1]
Instead we'll be actively lied to. American exceptionalism.
1. https://www.theguardian.com/technology/2025/jan/28/we-tried-...
I have many questions. How would A/B testing work in a scenario where models need to be approved by the government before release? All the big providers commonly A/B test their unreleased models on production traffic. Would these need to be preapproved? Many models get tested on the public for every one that is officially "released". Will the government have the bandwidth to examine each of these? And does changing the system prompt count as a different model, or only the model weights?
You just perfectly highlighted why over-regulation in tech is troublesome. This is why I've always been against European tech regulations which gave us the cookie prompt. Politicians shouldn't be product designers.
Edit: I'll take the downvotes. Every time I say this, I get downvoted. Weirdly, even EU politicians are beginning to see that they've over-regulated their tech industry so much that it can't compete, but HN just can't accept this opinion.
This is because the law should say "The only circumstance in which you can get your users' PII is when they willingly give it to you, as clients/subscribers. The circumstances under which you can sell that data or track your users are: never."
Instead we tried something that looks like a punt, and even then the tracking/adtech ghouls aren't happy. I say we should lobby hard to get my version at least examined in the EU parliament (or in any parliament in an EU country, really); that will probably scare them into removing the cookie banners.
The cookie prompt is a perfect example of under regulation. The law you're citing as over-regulation requires companies to get consent before tracking you. Companies across the board settled on "annoy users into consenting" as their compliance strategy. You want to revert to implied consent? Fuck that, layer on "also you can't pester users into agreeing to being tracked." Too vague? That's the point; anything else and you incentivize dancing on the line of exactly how close to non-compliance you can get away with. Politicians broadly shouldn't be product designers, but establishing broad no-go zones around anti-consumer behavior is foundational to modern society. Without that you get cartoon ads marketing menthol cigarettes to kids and commercials for casino apps for betting on drone strikes.
Yup. It was pure malicious compliance by the tracking industry with the hopes of killing the regulation.
This is over-regulation because regulation always has unintended consequences. The unintended consequence here is the cookie prompt.
And before someone comes out saying that only "bad" websites want to track you, the official European Union website has a cookie prompt. https://commission.europa.eu/index_en
It’s about consent; that has nothing to do with good or bad.
> even EU politicians are beginning to see that they've over regulated their tech industry so much that it can't compete
Yes, it feels a bit weird to me that the HN crowd is a fan of regulation although much of the crowd works in the least regulated profession.
Maybe we need to have regulation that puts an automatic expiration on regulation and there's no way to bypass that. Existing regulation nearing expiration can only be extended by a democratic voting process. Just the burden of handling this should naturally filter out regulation that's unpopular or no longer relevant.
It's because the actual goals have nothing to do with what they say they are.
This is a really bad thing.
A worst-case scenario, I feel, is that the government could restrict inference providers within the US to running only approved/American LLMs, which would be a huge deal since the only recent American OSS model is Gemma. I could see OpenAI/Anthropic/Google lobbying for that, though…
My thought as well. They will approve every new American trained LLM but they can't control the release of free Chinese LLMs. Therefore, the only card they can play is to simply make Chinese LLMs illegal to use for American companies and Americans.
Ultimately, this will grant more power to OpenAI, Anthropic, and Google due to regulatory capture but it hurts the AI industry overall.
Only the US AI industry. It will put the US in a ditch, with its only asset being the ability to surveil its citizens, for negative money and negative innovation. The rest of the world will keep spinning just fine.
Let’s hope. It would be great for Europe and the rest of the world.
Unless we (Europe) start to do the same…
Nah, there's literally no geopolitical or economic reason for Europe to do that. The US has more or less entirely botched everything except its military applications for AI, and it's endangering the whole economy in the process too.
They know very well that China is going to keep releasing world-class models at 1/20th the price and 5-300x smaller in size. They also know they screwed up by going full technofascism, and there's no way back, because of the trillions invested in oligarchs and because it endangers the entire economy.
China can’t keep doing that. This is essentially a capture-the-market ploy that is government-backed at this point; at current DeepSeek prices they would have to take a massive loss.
I think you misunderstood what China has been doing to make the progress they have. They've invested far less and have actual applications, because they are the world's largest manufacturer.
In the US we have products we sell to China to automate their factories. China soon won't need those. The US goal of laying off anyone who thinks for money is really different from China's goal of automating product manufacturing.
DeepSeek costs less because it actually costs less. China's electrical infrastructure is so much better than the US's. Meanwhile the US has AI data centers running on effing gas. On literal gas generators. The only budget discussions for infrastructure in the US are basically for the DHS, too. It's not sustainable.
China can’t keep doing that.
Who or what will stop them?
Wouldn't this immediately put the American companies producing these models at a significant disadvantage? Just use an unmolested model hosted by a provider in Vancouver.
If anything, this measure seems like it would create a scenario where services hosted outside the US would become a lot more attractive relative to Trumped AI.
gift link: https://www.nytimes.com/2026/05/04/technology/trump-ai-model...
Is there an arms race of payment infrastructure for international LLM providers? A common payment gateway so that people can pay providers anywhere for tokens will inevitably emerge if the US is making moves like this.
* Maybe Anthropic's call for regulation has backfired. Now it's going to be overregulation. They might regret it now.
* This might be regulatory capture for OpenAI, Google, and Anthropic. Any new entrant will have a harder time getting approval.
* This is going to be terrible for the industry in general because this administration will not hesitate to demand bribes and force their propaganda into the models.
* This might cause the US to ban the use of Chinese models by US businesses and governments. After all, Chinese models won't need White House approval to release. So the only way to "control" them is to simply make them illegal.
Nah, Anthropic would love this. They definitely don't want you using KimiK2.6, DeepSeekV4Pro, GLM5.1, MiniMax2.7, MimoV2.5-Pro, Qwen3.5-397B, or Step3.5Flash, because, truth be told, you can survive fine without Claude.
"Black market AI" has a nice ring to it.
There's a reason they are going after VPNs as well right now. Uh oh.
Robot pirates...
so the trump mafia can corruptly profit from them?
Mobster admin so checks out
“Nice model you got there… shame if someone prompt injected a regulatory framework into it.”
More insider trading and Polymarket betting
Sure, let’s kill what little lead the US AI industry has while the rest of the world kicks ass - it’s working so well in all our other endeavors.
The party of free market economics, everybody!
"The National Security Agency has also recently used Anthropic’s Mythos model to assess vulnerabilities in the U.S. government’s software, people with knowledge of the work said."
I'm sure that's not the only thing they've used it for. They're definitely looking for any exploit they can use to enhance data gathering and to crack into iOS, private networks, etc. Gotta keep an eye on citizens, but hey, it's the only government body that really listens to you.
at this point it almost seems like citizens should review AI models before the government can access them.
How the fuck would this even be enforced? "AI model" is a pretty broad thing; in some sense basically anything involving weights could be considered "AI", and even more abstractly you could argue that even a runtime conditional is AI.
Honestly, if we are discussing the "how", I feel that we are already ceding too much ground. Whatever technical solutions exist, it is a terrible precedent.
What specifically is the goal of the pre-release review? Just to patch government systems first? It seems like the government was banning internal use of Anthropic's models two months ago and now wants exclusive access for some amount of time. Clown show...
China doesn't require permission from the White House.
Mind elaborating instead of vague posting? Are you saying that China is already doing that vetting or that China will benefit because they can release models faster without having to be blocked by WH vetting?
Not the OP, but:
- China is the largest open weight provider, with Mistral and Cohere delivering a few other models. There isn’t much else internationally
- (I think OP is suggesting) this would effectively ban Chinese models in the US, which would be an interesting case. Who knows if they could have theirs reviewed, or if we’ll see another FCC approved router situation.
- that Chinese models are censored is a very common criticism. If American models are also censored, that looks bad.
- this will be awful for self hosters and local inference. Imagine if HuggingFace had to drop non-American model weights. That would effectively kill them.
Thanks!
Also not the OP, but my read is that China can release a model without the US president's approval. If the US models need approval and China's don't, then advantage China.
Um, I realize the Trump administration doesn't pay a lot of attention to what it does and does not have authority to do, but I'm having trouble imagining what they'd even claim their authority was...
Ever see the old Twilight Zone episode with the 6-year-old kid who wishes you into the cornfield if you don't do what he says?
https://en.wikipedia.org/wiki/It%27s_a_Good_Life_(The_Twilig...
That's his authority.