I love watching the plot lines of The Terminator play out in real life.
More government intervention in private enterprise? This pattern seems to be gathering steam, does that mean they're now subscribing to this model?
Or is this just par for the course and has always been going on, it's just the reporting is different, or the current context makes it more of a sensitive topic?
No, this is very unusual. The US government taking a 10% stake in Intel is very unusual.
There have been a few cases where national security prompted the government to nationalize private institutions: the railroads in WWI, steel mills during the Korean War, and Continental Illinois (CINB), which was deemed a systemic risk for being too large a bank.
This admin has so far acted like a kleptocracy, and because of the Epstein files many will go to jail if they lose power, so there's a huge incentive to remain in power.
Wars are good for remaining in power. Dictatorship is good for remaining in power.
This is all very, very, very unusual in US history (except maybe when businesses tried to overthrow the government in the 30s but we don't talk about that).
It's been all of 3 days since Claude decided to delete a large chunk of my codebase as part of implementing a feature (it couldn't get it to work, so it deleted everything triggering errors). I think Anthropic is right to hold the line on not letting the current generation delete people.
"Until this week, however, Anthropic’s Claude product was the only model permitted for use in the military’s classified systems."
I hadn't realized. This does make me consider using alternatives more.
This is most likely because getting SaaS software to conform to federal regulations and to provide the security the US military requires is difficult and expensive. FedRAMP is onerous.
And LLM products are new-ish. It suggests that Anthropic made federal government contracts a priority while OpenAI, Alphabet, and AWS didn't.
They always focused on safety (their own safety). They only backed off from the US military once they got bad press. As usual, they are not an ethical company. I can't say it's bad, as all corporations are the same. Just don't look at the illusion they create.
If you look at my post history you can see I’m always calling them out about how sketchy they are.
[dupe] https://news.ycombinator.com/item?id=47140734
https://news.ycombinator.com/item?id=47142587
All of this is kind of weird.
https://www.bbc.com/news/articles/cjrq1vwe73po
> the Pentagon official told the BBC the current conflict between the agency and Anthropic is unrelated to the use of autonomous weapons or mass surveillance.
> The official added that the Pentagon would simultaneously label Anthropic as a supply chain risk.
*Supply chain risk*?
The BBC article seems to imply that the government wants to audit Anthropic.
This, coming at the same time those "distillation" claims were published, is all incredibly suspicious.
All of the coverage of this is about the negotiating points of Anthropic vs. the Pentagon.
Anthropic doesn't want their software used for certain purposes, so they retain approval/denial over projects and uses. I suspect the Pentagon doesn't want limitations, AND they dislike paying for software/services that can be withheld from them if they are found to be skirting the contractual terms.
And THAT is why the Pentagon is using maximum leverage (threatening Anthropic with a supply chain risk label).
Supply chain risk is a very specific designation, meaning not only would Anthropic lose Pentagon contracts, but no other company with Pentagon contracts would be allowed to use them either. It would have the effect of being a near industry-wide blackballing of Anthropic given all the major companies that have contracts with the DoD.
Yes. Incredible, isn't it? I'm curious what would make the government do that.
_The Art of the Deal_.
The US federal government is no longer a good faith actor acting on behalf of American citizens and following US law, but now an autonomous corporation aiming to “get the best deal” via maximum leverage.
It's inexcusable that the AI companies have not formed a united front against this. I've been skeptical of the idea that OpenAI leadership is outright MAGA, but even pure self-interest does not explain staying silent while the Pentagon demands autonomous killbots.
Brockman donated $25,000,000 to the MAGA super PAC; how much more 'outright' would you like him to be, haha.
This is not only a big donation; it is actually the BIGGEST donation by any single individual.
Shareholder value and MAGA value are a Venn diagram of optical illusion.
He claimed, and until today I was willing to give him the benefit of the doubt, that he was trying to curry favor with a notoriously bribe-able President. Not exactly a paragon of moral virtue, but I wouldn't be able to do business with nearly any company in the US if I made that a dealbreaker. This clears the bar where I'm willing to cut ties and demand that everyone else do the same.
We must join with him, we must join with Sauron.
Sauron might win; we don't want to risk being on the wrong side of the post-apocalypse.