It is incredible how far the Overton window has moved on this issue.
When I graduated in 2007, it was common for tech companies to refuse to let their systems be used for war, and it was an ordinary thing when some of my graduating classmates refused to work at companies that did let their systems be used for war. Those refusals were on moral grounds.
Now Anthropic wants to have two narrow exceptions, on pragmatic and not moral grounds. To do so, they have to couch it in language clarifying that they would love to support war, actually, except for these two narrow exceptions. And their careful word choice suggests that they are either navigating or expect to navigate significant blowback for asking for two narrow exceptions.
My, the world has changed.
> it was an ordinary thing when some of my graduating classmates refused to work at companies that did let their systems be used for war. Those refusals were on moral grounds.
(spoiler alert)
Wasn't this one of the plot points of the Val Kilmer movie Real Genius? They had to trick the students into creating a weapon by siloing them off from each other and having them build individual but related components? How far we've fallen! Nobody has to take ethics during undergrad anymore I guess...
Reminds me of the story of someone's woman working for a research lab to improve the computer-controlled automatic emergency landings of planes with total power failure.
... or so she was told.
She was unknowingly designing glide-bomb avionics.
I feel like these stories are apocryphal. I mean, I can't say for certain that no US DoD research program used subterfuge to trick the performers into working on The Most Racist Bomb. But I can say that in 20 years I've never seen a dearth of people ready, willing, able, and actively participating with full knowledge that they are creating The Fastest Bomb and The Sneakiest Bomb and The Biggest Bomb Without Actually Going Nuclear.
IDK, maybe it's different outside the National Capital Region. But here, you could probably shout "For The Empire" as a toast in the right bars and people wouldn't think you were joking.
Yes, and even of their two exceptions, only one is on moral grounds. They don't want to provide tools for autonomous killing machines because the technology isn't good enough yet. Once that 'yet' has passed, they will be fine supplying that capability. Anthropic is clearly a better company than OpenAI, but that doesn't mean they are good. 'Lesser evil' is the correct term here for sure.
Hypothetically if we had a choice between sending in humans to war or sending in fully autonomous drones that make decisions on par with humans, the moral choice might well be the drones - because it doesn't put our service members at risk.
Obviously, anyone who has used LLMs knows they are not on par with humans. There also needs to be an accountability framework for when software makes the wrong decision. Who gets fired if an LLM hallucinates and kills people? Perhaps Anthropic's stance is to avoid liability if that were to happen.
The danger is that we won't be sending these fully-autonomous drones to 'war', but anytime a person in power feels like assassinating a leader or taking out a dissident, without having to make a big deal out of it. The reality is that AI will be used, not merely as a weapon, but as an accountability sink.
War is not moral. It may be necessary, but it is never moral. The best choice is to fight, at every turn, anything that makes war easy. Our adversaries will go the autonomous route, or likely already have. We should be doing everything we can to put major blockers on this, similar to efforts to block chemical, biological and nuclear weapons. The logical end of autonomous targeting and weapons is near-instant mass killing decisions. So at a minimum we should think of autonomous weapons as being in a similar class, since autonomy is a weapon of mass destruction. But we currently don't think that way, and that is the problem.
Eventually, unfortunately, we will build these systems, but it is weak to argue that we won't build them because the technology isn't ready right now. No matter when these systems come online there will be collateral damage, so there will be no right time from a technology standpoint. Anthropic is making that weak argument, and that is primarily what I am dismissive of. The argument that needs to be made is that we aren't ready as a society for these weapons. The US government hasn't done the work to prove it can handle them. The US people haven't proven we are ready to understand their ramifications. So, in my view, Anthropic shouldn't be arguing that the technology isn't ready; no weapon of war is ever clean, and your hands will be dirty no matter how well you craft the knife. Instead, Anthropic should be arguing that we aren't ready as a society, and that is why they aren't going to support these weapons.
What do you mean, "hallucinates and kills people"? Killing people is the thing the military is using them for; it's not some accidental side effect. It's the "moral choice" the same way a cruise missile is — some person half a world away can lean back in their chair, take a sip of coffee, click a few buttons and end human lives, without ever fully appreciating or caring about what they've done.
The people that actually target and launch these things do think about what they have done. It is the people ordering them to do it that don't. There is a difference, I hope.
I think it's the opposite. The human cost of war is part of what keeps the USA from getting into wars more than it already is - no politician wants a second Vietnam.
If war is safe to wage, then it just means we'll do it more and kill more people around the globe.
Isn't this the moral hazard of war as it becomes more of a distance sport? That powerful governments can order the razing of cities and assassinate leaders with ease?
We need to do it because our enemies are doing it, in any case.
I do not think that anyone but the US and Israel has assassinated leaders in the last 30 years. I also question their autonomous drone advancement. Russia and China did not have the means to help Venezuela, and they do not have the means to help Iran.
It came later than I anticipated, but it did come after all. There is a reason companies like 9mother are working like crazy on various ways to mitigate those risks.
The flip side is that it's very unlikely AI will become that good any time soon, so it'll always remain a reason to hold out. Especially since nobody has explicitly defined what "good enough" entails.
If LLMs are indeed a game changer professionally, you kind of need to pick one.
Personally, I loathe seeing power shift towards mega corporations like that, away from being able to run your own computer with free software, but it feels like the economics are headed that way in terms of productivity.
If you graduated in 2007, your classmates were born around 1985. Their parents were mostly born in the mid 50s to the mid 60s and came to political consciousness either during the Vietnam War or immediately thereafter. No war since has been even close to as unpopular or frankly as salient. It’s the passing out of cultural relevance of that war that you are noticing.
> No war since has been even close to as unpopular or frankly as salient.
Iraq.
Spoiler alert, a bunch of the current ones are going to be seen similarly too.
Also keep in mind when making comparisons that the Vietnam War was not unpopular with Americans at the beginning, and many people justified it all throughout, using language that will sound familiar to observers of later wars.
Correct that there was no Iraq generation because there was no draft and numbers were way smaller. Vietnam had over half a million troops at the height of that war. Iraq had under 170k.
But the war was still deeply unpopular. There is a reason America did the extraordinary - to that point - and elected its first black president.
The economic toll will be greater with these wars than Vietnam.
I'm a decade older so maybe I missed the memo but I think you'll have a hard time naming tech companies that actually refused to work with the military, which were large enough and important enough to be in danger of selling something to the military (i.e. not Be Inc. or Beenz.com)
Clearly, all of the traditional big leagues were lined up to take the Army's money. IBM, Control Data, Cray, SGI, and HP all viewed weapons research as a major line of business. DEC was the default minicomputer of the DoD and Sun created features to court the intelligence community including the DoD "Trusted Workstation". Sperry Rand defined "military industrial complex".
But tell me, what would you like your country to do when conflicts arise over want of natural resources? Would you want your country to just give up that resource your people depend on, or split it, maybe 50/50?
Do you believe it will always be possible to settle on a solution in a peaceful way that works for everyone?
Since Pearl Harbor, the US has done 100% of the attacks on other people's soil; even 9/11 was a response to what the US did in other countries. So it's fair to say that defending your country is very different from giving weapons to the Department of War to conduct war, and most likely supplying them to its Middle East ally, who will also use them to start wars and kill civilians and children.
If the country wages wars for bad reasons, that is another problem that should probably be fixed elsewhere, or you should leave that country and go somewhere whose government you can fully get behind.
> defending your country
I am afraid that this does not always have to be an incoming attack. What if some country has a resource that your country badly needs, without which your people will suffer badly, and imagine the same is true for the other country. How much of a hit to your economy and quality of life are you willing to sustain before you ask your government to go out there and get the required resource by force?
I totally get that war is profitable, and most wars cannot be justified. But ideas like this sound like sabotaging your own country and thus your own existence.
> they have to couch it in language clarifying that they would love to support war, actually,
Yes they do because they are trying to sell to the Department of War.
No one made Anthropic try to be a military contractor. It’s pretty much the definition of being a military contractor that your product helps to kill people.
For almost all of history, including recent history, tech and the military went together - whether compound bows, spears, or metallurgy.
Euler used his math to develop artillery tables for the Prussian army.
von Neumann helped develop the atom bomb.
The military played a huge role in creating Silicon Valley.
However, to people who grew up in the mid to late 90s, it is easy to miss that that period was a major aberration. You had serious people talking about the end of history. You had John Perry Barlow's utterly naive Declaration of Independence of Cyberspace which looks more and more naive every year.
When people (myself included FWIW) warn about the dangers of American imperialism, it's because:
1. As President Eisenhower said in his farewell address in 1961 [1], every dollar spent on the military-industrial complex is a dollar not spent on schools or houses or hospitals or bridges;
2. Every American company with sufficient size eventually becomes a defense contractor. That's really what's happened with the tech companies. They're moving in lockstep with the administration on both domestic and foreign policy;
3. The so-called "imperial boomerang" [2]. Every tactic, weapon and strategy used against colonial subjects is eventually used against the imperial core, e.g. [3]. Do you think it's an accident that US police forces have become increasingly militarized?
The example I like to give is China's high speed rail. China started building HSR only 20 years ago and now has over 32,000 miles of HSR tracks taking ~4M passengers per day. The estimated cost for the entire network is ~$900B. That's less than the US spends on the military every year.
I really wonder what Steve Jobs would've done were he still alive. Tim Apple has bent the knee and kissed the ring. Would Steve Jobs have done the same? I'm not so sure. He may well have been ousted (again) because of it.
Then again, I think Steve Jobs was the only Silicon Valley billionaire not in a transhumanist polycule with a more than even chance of being in the files.
> I really wonder what Steve Jobs would've done were he still alive. Tim Apple has bent the knee and kissed the ring. Would Steve Jobs have done the same? I'm not so sure. He may well have been ousted (again) because of it.
Given that Steve Jobs was best friends with Larry Ellison, I’d say he wouldn’t have bent the knee because he would’ve been standing hand in hand with Trump, just like Larry.
The Overton window has not shifted, at least not among rank-and-file tech workers. There was very loud and vocal internal opposition to building and selling weapons[0]. They all lost the argument in the boardrooms because the US government writes very big checks. But I am told they are very much still around.
CEOs are bound to sociopathically amoral behavior - not by the law, but by the Pareto-optimal behavior of the job market for executives. The law obligates you to act in the interests of the shareholders, but it does not mandate[1] that Line Go Up. That is a function of a specific brand of shareholder that fires their CEOs every 18 months until the line goes up.
In 2007, Big Tech had plenty of the consumer market to conquer, so they could afford to pretend to be opposed to selling to the military. But the game they were playing was always going to end with them selling to the military. Once they were entrenched they could ignore the no-longer-useful-to-us-right-now dissenters, change their politics on a dime, and go after the "real money".
[0] Several of the sibling comments are mentioning hypothetical scenarios involving dual-use technologies or obfuscated purposes. Those are also relevant, but not the whole story.
[1] There are plenty of arguments a CEO could use to defend against a shareholder lawsuit that they did not take a particularly short-sighted action. Notably, that most line-go-up actions tend to be bad long-term decisions. You're allowed to sell low-risk investments.
Around 10 years ago, in a college calculus class, I had a very ambitious classmate who wanted to go to DARPA and work on robotics. I asked if he was thinking it through solely from a technical perspective or considering the ethics side as well. He clearly didn't understand the question, so I asked directly: what if the code you write, or the autonomous machine you contribute to, is used for killing? His response: that's not my problem.
After spending a couple of years studying in the US, I came to the conclusion that executives and board members in industry don't care about society or humans, that even universities don't push students towards critical thinking and ethics, and that it has all turned into vocational training, turning humans into thinking tools.
Around the same time, at Harvard, I attended a VR innovation week, and the last panel discussion of the day was on ethics and law, featuring a law professor, a journalist and a moderator, and attended by a handful of people. I asked why founders, CEOs or developers weren't part of the discussion or in attendance. The moderator responded that they couldn't find any who were qualified enough to take part. The discussion was basically: how does what product companies build affect society? Laws aren't the founders' problem, that's what lawyers are for, and ethics - who cares, right?
This frenzy, this rat race towards the next billion-dollar company at any cost, has torn down the fabric of society, right down to the level of individual thinking; or more like not thinking, just wanting and needing.
Dario, you are making a conscious choice to start developing autonomous AI weapons. That is what all of this is about, that is what you have offered to work with the DoW towards. Your red line is not that autonomous AI weapons are inherently wrong, potentially an existential threat to humanity and should be banned via treaty like chemical and biological weapons; rather you believe Claude is just not there yet and you want to help close the gap.
Do you have plans to work on a kinder, gentler form of domestic mass surveillance as well? Or will you simply leave it up to others to disguise the eventual turning of your foreign surveillance models inwards towards the United States themselves?
The Department of Defense was named as such after the detonations of the atomic bombs over Hiroshima and Nagasaki.
We - as humanity - collectively recognized the weight of our creation, and decided to walk it back.
Discussing "AI alignment" in the same breath as aligning with a "Department of War" (in any country) is simply not an intellectually sound position.
None of the countries we've attacked this year poses an existential threat to humanity. In contrast, striking first and pulling Europe, Russia, and China into a hot war beginning in the Middle East surely poses a greater collective threat than bioweapons, sentient AI, or the other typical "AI alignment" concerns.
Why aren’t there more dissidents among the researcher ranks?
Among those who would resist, half would've done so outwardly by now and been fired, the other half would be hiding their activity. In both cases we wouldn't be hearing about them now.
> Why aren’t there more dissidents among the researcher ranks?
Because they’ve likely all lost faith in humanity watching Trump get reelected and now just want to get rich and hope to insulate their families from the reality we’re all living in.
"We both want a docile American public who go along with our desires so we can achieve goals that may be contrary to the interests of the American public."
Would love to enumerate those commonalities. Run by a psychopath? Commitment to violent lethality? Burning billions of dollars for uncertain goals? (ok there's one)
> Our most important priority right now is making sure that our warfighters and national security experts are not deprived of important tools in the middle of major combat operations.
> we had been having productive conversations with the Department of War over the last several days, both about ways we could serve the Department that adhere to our two narrow exceptions, and ways for us to ensure a smooth transition if that is not possible.
Why are people leaving OpenAI when this is Anthropic's stance?
Are their two narrow requirements enough to draw the ethical boundary people are comfortable with?
What’s a “warfighter?” Do they come from the “Gulf of America?” We used to call them servicemen or service members. Emphasizing they served the people. I guess that’s too effeminate for our roided up and ironically hyper-insecure Secretary of Defense.
There are so many inference providers not working for the Department of War. Even Alibaba - and sure, China has lots of issues, but they are not bombing anyone right now, if that's your first priority. Or else smaller US/European/Asian companies with a purely civilian focus. The SOTA open-weights models they serve are perfectly suitable for coding and chat. I run a local Qwen3.5-122B-A10B-NVFP4 instance and it writes entire Android apps from scratch, and that's a midsized model.
Can you give a list of high-quality alternatives? Morally speaking I would put China on par with the US if not worse (due to their ongoing Uyghur genocide). I will check out the SOTA open-weights models but would be interested in others.
Frankly it’s a shitshow all around.
The truth is that nobody gives a fuck about this. They have no moral qualms, just practical.
And these are the people that should bring us the future.
Man what a depressing scenario.
After hearing Palmer Luckey's argument for the name change[0], I tend to think it's a good change.
Some of his arguments:
It used to be called the Department of War, and it had a better track record with regard to foreign conflict under that name than it did under the DoD name.
"Department of War" is a more honest name; "Department of Defense" is a somewhat newspeak term, although "Department of Peace" would be worse.
It's harder to seek funding for "war" than it is to seek funding for "defense".
If you ask someone, "Do you want to spend money on education or war?", you will get a different answer asking, "Do you want to spend money on education or defense?".
Then the US killed 150 Iranian sailors in international waters who were demonstrating no threat. A bunch of them drowned - or for that matter, might still be alive in air pockets, waiting to die - because we're not at war.
It'll be very interesting to see how this case gets resolved - in court and in the court of public opinion. I believe it's incredibly important and I hope they prevail.
As much as Trump and Hegseth would like it to be called the Department of War, it still takes an act of Congress to change the name of the Department of Defense. No reason to call it by anything else until that happens.
I think one of the weaknesses of rationalism and effective altruism is that it tries to make a clean break from the common-law legal reasoning that the government, and thus corporations, operate on. While I find rationalism to be a useful lens, the fact is that the common-law legal framework is totally dominant, and so these deontological arguments made rationally collapse very quickly when translated to the dominant framework.
Not everything has to be a conspiracy or some 4D chess business move. Dario is a morally motivated person and regretted the tone that was being conveyed in that memo, so he apologized.
What a world we live in now where private companies are apologising for the "tone" of their speech while official representatives of the government daily express blatant lies and misrepresentations without the slightest fear of consequence.
It really is incredibly sad that what was one of the most respected countries in the world has descended to this - an utter mockery of a functioning democracy.
DoD still has not meaningfully moved to the DoW moniker. To me it represents the most fascist tendency: to make announcements and presume that's enough to change the truth on the ground. The legal entity one contracts with is DoD. Going along with "DoW" is a signal to me that a party has capitulated to the most absurd form of governance.
Pragmatically, it's for the best to use its preferred name instead of legal name when sucking up to the department and Trump to try to get back in good graces.
I don't think we'd lose AGI if Anthropic were to implode, and frankly, right now, I'd rather have someone say clearly: "They cannot stomach the existence of someone telling them 'No' or adhering to moral principles. Like spoiled children, they can't hear the former and are terrified by the latter because it might expose them to the condemnation they deserve."
The OpenAI astroturfers jumped on this one. Their only interest is in trying to spin Anthropic as not meaningfully better to dissuade people from switching, not to get people to drop both companies altogether.
A long time ago I worked for a company that, I learned, was selling its software to help target people during the Iraq war. I quit because I cannot support building software that kills people.
This is a message to the people working in that line of business at Anthropic. You don't have to do it; you can quit. If you are helping this insane administration conduct war on Iran, quit. You don't need to have that kind of blood on your hands.
I saw someone's hypothesis that a generative model was used to help classify buildings to decide what to bomb, and that the girls' school was misclassified. If this was an Anthropic model, I can imagine what it feels like to be a worker there in that line of business.
I've also quit a job where the products I was working on were meant to be deployed to CBP to hunt down immigrants. It's a nice gesture, but it won't stop these companies. They just hired someone else without an ethical backbone and continued the project like nothing happened.
Tech leadership is rotten to the core, and that can't be fixed by individuals making a stand.
You got me wondering, so I checked to see how much Anthropic's bribed Trump so far. According to Dario, Trump has been soliciting bribes, but they refused to pay, and the contract "renegotiation" is retribution:
"Amodei claimed that tensions between his company and the Trump administration stem partly from the firm’s refusal to financially support Trump and its approach to AI regulation and safety issues."
The internal memo did read as fairly unhinged and political, which is not the message Dario likes to present. I'm glad he addressed this. It was unprofessional and unhelpful - even if Sam Altman is, in fact, a disgusting lunatic.
The one where he accuses Trump of retaliating against Anthropic after failing to solicit a bribe?
That should be the headline here. We know Trump personally made $4B last year, and we know he's been using the full power of the US gov't to retaliate against people that don't "support" him.
Come 2029, when there's an opportunity for the corruption trials to start, this sort of behavior needs to be front of the public mind, both at the top, and throughout his network of appointees.
"As we wrote on Thursday, we are very proud of the work we have done together with the Department, supporting frontline warfighters with applications such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more."
>I’m going to tell you about how I took a job building software to kill people.
>But don’t get distracted by that; I didn’t know at the time.
Caleb Hearth: "Don't Get Distracted" https://calebhearth.com/dont-get-distracted
God bless you for referencing that film.
> I feel like these stories are apocryphal.
They're not. But if it makes you feel better to believe that, everyone has their own coping mechanism.
“someone’s woman”?
The military isn't catering quite as aggressively to the people who historically bullied techies as it used to.
Aside from that, there are a lot more people in tech now. It grew too fast to maintain all the values it had back in the '00s and earlier.
>maintain all the values it had.
You must be joking. Which values, set by who? Jobs the marketer, Ellison the tyrant, or Gates the sociopath?
Please, spare us. They built a surveillance state masquerading as marketing companies and banal products. Don't play remember when if you don't actually remember.
Values relating to mistrust of the military (as per the context of the post I responded to) as well as values relating to ownership of the tech you bought and of personal privacy.
Get off your high horse and stop talking down to a person you don't know. Take your anger out on someone else.
Jobs the Marketer! You want to lump Jobs in with Ellison because he had the gall to purchase advertising for his products?
What tech companies were these? I was younger in 2007 but i feel like i would remember if companies were openly refusing to participate in war.
Yes, and even their two exceptions, only one is on moral grounds. They don't want to provide tools for autonomous killing machines because the technology isn't good enough, yet. Once that 'yet' is passed they will be fine supplying that capability. Anthropic is clearly the better company over OpenAI, but that doesn't mean they are good. 'lesser evil' is the correct term here for sure.
Hypothetically if we had a choice between sending in humans to war or sending in fully autonomous drones that make decisions on par with humans, the moral choice might well be the drones - because it doesn't put our service members at risk.
Obviously anyone who has used LLMs know they are not on par with humans. There also needs to be an accountability framework for when software makes the wrong decision. Who gets fired if an LLM hallucinates and kills people? Perhaps Anthropic's stance is to avoid liability if that were to happen.
The danger is that we won't be sending these fully-autonomous drones to 'war', but anytime a person in power feels like assassinating a leader or taking out a dissident, without having to make a big deal out of it. The reality is that AI will be used, not merely as a weapon, but as an accountability sink.
War is not moral. It may be necessary, but it is never moral. The only best choice is to fight at every turn making war easy. Our adversaries will, or likely already have, gone the autonomous route. We should be doing everything we can to put major blockers on this similar to efforts to block chemical, biological and nuclear weapons. The logical end of autonomous targeting and weapons is near instant mass killing decisions. So at a minimum we should think of autonomous weapons in a similar class as those since autonomy is a weapon of mass destruction. But we currently don't think that way and that is the problem.
Eventually, unfortunately, we will build these systems but it is weak to argue that the technology isn't ready right now and that is why we won't build them. No matter when these systems come on line there will be collateral damage so there will be no right time from a technology standpoint. Anthropic is making that weak argument and that is primarily what I am dismissive of. The argument that needs to be made is that we aren't ready as a society for these weapons. The US government hasn't done the work to prove they can handle them. The US people haven't proven we are ready to understand their ramifications. So, in my view, Anthropic shouldn't be arguing the technology isn't ready, no weapon of war is ever clean and your hands will be dirty no matter how well you craft the knife. Instead Anthropic should be arguing that we aren't ready as a society and that is why they aren't going to support them.
What do you mean, "hallucinates and kills people"? Killing people is the thing the military is using them for; it's not some accidental side effect. It's the "moral choice" the same way a cruise missile is — some person half a world away can lean back in their chair, take a sip of coffee, click a few buttons and end human lives, without ever fully appreciating or caring about what they've done.
The people who actually target and launch these things do think about what they have done. It is the people ordering them to do it who don't. There is a difference, I hope.
I think it's the opposite. The human cost of war is part of what keeps the USA from getting into wars more than it already is - no politician wants a second Vietnam.
If war is safe to wage, then it just means we'll do it more and kill more people around the globe.
Safe for whom?
Safe for the aggressors, I mean. If war is easy and cheap for us to wage, we will do more of it, and likely make the world a worse place.
Isn't this the moral hazard of war as it becomes more of a distance sport? That powerful governments can order the razing of cities and assassinate leaders with ease?
We need to do it because our enemies are doing it, in any case.
I do not think that anyone but the US and Israel have assassinated leaders in the last 30 years. I also question their autonomous drone advancement. Russia and China did not have the means to help Venezuela and they do not have the means to help Iran.
Russia and other states have demonstrably conducted targeted killings.
It came later than I anticipated, but it did come after all. There is a reason companies like 9mother are working like crazy on various ways to mitigate those risks.
The flip side is that it's very likely AI won't become that good any time soon, so "not ready yet" will always remain a reason to hold out. Especially since nobody has explicitly defined what "good enough" entails.
> they have to couch it in language clarifying that they would love to support war,
This is what baffles me when I see people flocking to them for subscriptions based on these events.
If LLMs are indeed a game changer professionally, you kind of need to pick one.
Personally, I loathe seeing power shift towards mega corporations like that, away from being able to run your own computer with free software, but it feels like the economics are headed that way in terms of productivity.
If you graduated in 2007, your classmates were born around 1985. Their parents were mostly born in the mid 50s to the mid 60s and came to political consciousness either during the Vietnam War or immediately thereafter. No war since has been even close to as unpopular or frankly as salient. It’s the passing out of cultural relevance of that war that you are noticing.
> No war since has been even close to as unpopular or frankly as salient.
Iraq.
Spoiler alert, a bunch of the current ones are going to be seen similarly too.
Also keep in mind when making comparisons that the Vietnam war was not unpopular with Americans at the beginning, and many people justified it all throughout, using language that will sound familiar to observers of later wars.
> Iraq
Not in same ballpark. There’s no Iraq generation the way there’s a Vietnam one.
> Spoiler alert, a bunch of the current ones are going to be seen similarly too.
No they won’t. The lack of a draft and mass domestic casualties dramatically changes the picture. Especially on the saliency axis.
Correct that there was no Iraq generation because there was no draft and numbers were way smaller. Vietnam had over half a million troops at the height of that war. Iraq had under 170k.
But the war was still deeply unpopular. There is a reason America did the extraordinary - to that point - and elect its first black president.
The economic toll will be greater with these wars than Vietnam.
There is an Iraq generation, but we're just a much smaller group.
I'm a decade older so maybe I missed the memo but I think you'll have a hard time naming tech companies that actually refused to work with the military, which were large enough and important enough to be in danger of selling something to the military (i.e. not Be Inc. or Beenz.com)
Clearly, all of the traditional big leagues were lined up to take the Army's money. IBM, Control Data, Cray, SGI, and HP all viewed weapons research as a major line of business. DEC was the default minicomputer of the DoD and Sun created features to court the intelligence community including the DoD "Trusted Workstation". Sperry Rand defined "military industrial complex".
Yes, and IBM had a particularly tainted history from WWII.
For every company that stands on values, there is another that will do some shady shit for a dollar.
Sperry Rand? You’re up awful late grandpa.
This wasn't really that long ago.
https://www.google.com/maps/@37.6735255,-122.389804,3a,31.2y...
>refuse to let their systems be used for war..
I don't want wars.
But tell me, what would you like your country to do when conflicts arise over scarce natural resources? Would you want your country to just give up the resource your people depend on, or split it, maybe 50/50?
Do you believe it will always be possible to settle on a solution in a peaceful way that works for everyone?
Your logic here is sound, sure. But don't tell me you can be so naive as to believe that the U.S. military is a defensive mechanism
Since Pearl Harbor, the US has done 100% of the attacks on other people's soil; even 9/11 was a response to what the US did in other countries. So it stands to reason that defending your country is very different from giving weapons to the Department of War to conduct war, and most likely supplying them to its Middle East ally, who will also use them to start wars and kill civilians and children.
If the country wages wars for bad reasons, that is another problem that should probably be fixed elsewhere, or you should leave that country and go somewhere whose government you can fully get behind.
> defending your country
I am afraid that this does not always have to be an incoming attack. What if some country has a resource that your country badly needs, without which your people will suffer badly, and imagine the same is true for the other country. How much of a hit to your economy and quality of life are you willing to sustain before you ask your government to go out there and take the required resource by force?
I totally get that war is profitable, and that most wars cannot be justified. But ideas like this sound like sabotaging your own country, and thus your own existence.
Personally, I'd rather that my country (USA) be taken over by China than bomb innocents in the Middle East.
> they have to couch it in language clarifying that they would love to support war, actually,
Yes they do because they are trying to sell to the Department of War.
No one made Anthropic try to be a military contractor. It’s pretty much the definition of being a military contractor that your product helps to kill people.
The reckoning will come.
Watch as the same people pushing for war today will pretend they were always against it 10 years from now.
I guess we're just doomed to repeat the same cycles.
> My, the world has changed.
No. Your tech experience was an aberration.
For almost all of history, including recent history, tech and military went together. Whether compound bows, or spears or metallurgy.
Euler used his math to develop artillery tables for the Prussian army.
von Neumann helped develop the atom bomb.
The military played a huge role in creating Silicon Valley.
However, to people who grew up in the mid to late 90s, it is easy to miss that that period was a major aberration. You had serious people talking about the end of history. You had John Perry Barlow's utterly naive Declaration of Independence of Cyberspace which looks more and more naive every year.
When people (myself included FWIW) warn about the dangers of American imperialism, it's because:
1. As President Eisenhower said in his farewell address in 1961 [1], every dollar spent on the military-industrial complex is a dollar not spent on schools or houses or hospitals or bridges;
2. Every American company with sufficient size eventually becomes a defense contractor. That's really what's happened with the tech companies. They're moving in lockstep with the administration on both domestic and foreign policy;
3. The so-called "imperial boomerang" [2]. Every tactic, weapon, and strategy used against colonial subjects is eventually used against the imperial core, eg [3]. Do you think it's an accident that US police forces have become increasingly militarized?
The example I like to give is China's high speed rail. China started building HSR only 20 years ago and now has over 32,000 miles of HSR tracks taking ~4M passengers per day. The estimated cost for the entire network is ~$900B. That's less than the US spends on the military every year.
I really wonder what Steve Jobs would've done were he still alive. Tim Apple has bent the knee and kissed the ring. Would Steve Jobs have done the same? I'm not so sure. He may well have been ousted (again) because of it.
Then again, I think Steve Jobs was the only Silicon Valley billionaire not in a transhumanist polycule with a more than even chance of being in the files.
[1]: https://www.archives.gov/milestone-documents/president-dwigh...
[2]: https://en.wikipedia.org/wiki/Imperial_boomerang
[3]: https://www.amnestyusa.org/blog/with-whom-are-many-u-s-polic...
> I really wonder what Steve Jobs would've done were he still alive. Tim Apple has bent the knee and kissed the ring. Would Steve Jobs have done the same? I'm not so sure. He may well have been ousted (again) because of it.
Given that Steve Jobs was best friends with Larry Ellison, I’d say he wouldn’t have bent the knee because he would’ve been standing hand in hand with Trump, just like Larry.
The Overton window has not shifted, at least not among rank-and-file tech workers. There was very loud and vocal internal opposition to building and selling weapons[0]. They all lost the argument in the boardrooms because the US government writes very big checks. But I am told they are very much still around.
CEOs are bound to sociopathically amoral behavior - not by the law, but by the Pareto-optimal behavior of the job market for executives. The law obligates you to act in the interests of the shareholders, but it does not mandate[1] that Line Go Up. That is a function of a specific brand of shareholder that fires their CEOs every 18 months until the line goes up.
In 2007, Big Tech had plenty of the consumer market to conquer, so they could afford to pretend to be opposed to selling to the military. But the game they were playing was always going to end with them selling to the military. Once they were entrenched they could ignore the no-longer-useful-to-us-right-now dissenters, change their politics on a dime, and go after the "real money".
[0] Several of the sibling comments are mentioning hypothetical scenarios involving dual-use technologies or obfuscated purposes. Those are also relevant, but not the whole story.
[1] There are plenty of arguments a CEO could use to defend against a shareholder lawsuit that they did not take a particularly short-sighted action. Notably, that most line-go-up actions tend to be bad long-term decisions. You're allowed to sell low-risk investments.
As the Heritage Foundation has said, we are in a cold civil war for our country and right now, the authoritarians are winning.
Millenials were famously called "generation sell". It is all corporate now, DEI one day, ICE the next. Just follow your leaders.
Around 10 years ago, in college, in calculus class I had a very ambitious classmate who wanted to go to DARPA and work on robotics. I asked if he was thinking it through solely from the technical perspective or considering the ethics side as well. Clearly he didn't understand the question, so I asked directly: what if the code you write, or the autonomous machine you contribute to, is used for killing? His response: that's not my problem.
After spending a couple of years studying in the US, I came to the conclusion that executives and board members in industry don't care about society or humans. Even universities don't push students towards critical thinking and ethics; it has all turned into vocational training, turning humans into thinking tools.
At the same time, at Harvard, I attended a VR innovation week, and the last panel discussion of the day was Ethics and Law, discussed by a law professor, a journalist, and a moderator, and attended by a handful of people. I asked why founders, CEOs, or developers weren't part of the discussion or in attendance. The moderator responded that they couldn't find any qualified enough to take part. The discussion was basically: how do the products companies build affect society? Laws aren't a founder's problem, that's what lawyers are for, and ethics? Who cares, right?
This frenzy, this rat race towards the next billion-dollar company at any cost, has torn down the fabric of society to the level of individual thinking; or more like not thinking, just wanting and needing.
Messages about project Maven, Palantir and Anthropic integration are flagged by certain interest groups:
"Palantir's Maven uses Anthropic's Claude code, sources say."
https://www.reuters.com/technology/palantir-faces-challenge-...
It is always astonishing that the reviled mainstream press is more critical than hackers these days.
Dario, you are making a conscious choice to start developing autonomous AI weapons. That is what all of this is about, that is what you have offered to work with the DoW towards. Your red line is not that autonomous AI weapons are inherently wrong, potentially an existential threat to humanity and should be banned via treaty like chemical and biological weapons; rather you believe Claude is just not there yet and you want to help close the gap.
Do you have plans to work on a kinder, gentler form of domestic mass surveillance as well? Or will you simply leave it up to others to disguise the eventual turning of your foreign surveillance models inwards towards the United States themselves?
Raised an eyebrow a little at this sentence: "Anthropic has much more in common with the Department of War than we have differences."
The Department of Defense was named as such after the detonations of the atomic bomb in Hiroshima and Nagasaki
We - as a humanity - collectively recognized the weight of our creation, and decided to walk back
Discussing “AI alignment” in the same breath as aligning with a “Department of War” (in any country) is simply not an intellectually sound position
None of the countries we’ve attacked this year pose an existential threat to humanity. In contrast, striking first and pulling Europe, Russia, and China into a hot war beginning in the Middle East surely poses a greater collective threat than bioweapons, sentient AI, or the other typical “AI alignment” concerns
Why aren’t there more dissidents among the researcher ranks?
Among those who would resist, half would've done so outwardly by now and been fired, the other half would be hiding their activity. In both cases we wouldn't be hearing about them now.
> Why aren’t there more dissidents among the researcher ranks?
Because they’ve likely all lost faith in humanity watching Trump get reelected and now just want to get rich and hope to insulate their families from the reality we’re all living in.
Let me rephrase it for you:
"We both want a docile American public who go along with our desires so we can achieve goals that may be contrary to the interests of the American public."
My eyebrows basically left my face after reading the whole thing.
This is not the forbidden love story I would've asked for.
Would love to enumerate those commonalities. Run by a psychopath? Commitment to violent lethality? Burning billions of dollars for uncertain goals? (ok there's one)
Certain patterns at top ranks?
> Our most important priority right now is making sure that our warfighters and national security experts are not deprived of important tools in the middle of major combat operations.
> we had been having productive conversations with the Department of War over the last several days, both about ways we could serve the Department that adhere to our two narrow exceptions, and ways for us to ensure a smooth transition if that is not possible.
Why are people leaving openAI when this is Anthropic's stance? Are their two narrow requirements enough to draw the ethical boundary people are comfortable with?
What’s a “warfighter?” Do they come from the “Gulf of America?” We used to call them servicemen or service members. Emphasizing they served the people. I guess that’s too effeminate for our roided up and ironically hyper-insecure Secretary of Defense.
Because Anthropic is called Anthropic and they have this really warm and inviting visual aesthetic.
It’s a mistake to conflate “wants to spend money on the most ethical option available” with “thinks the most ethical option available is perfect”
Why wouldn’t you move your dollars to someplace incrementally better?
You make it sound as if "the most ethical option available" is.. actually ethical?
Their statement doesn't make it sound like they are incrementally better; they are bending over backwards to keep working for war.
There are so many inference providers not working for the Department of War. Even Alibaba; sure, China has lots of issues, but they are not bombing anyone right now, if that's your first priority. Or else, smaller US / European / Asian companies with a purely civilian focus. The SOTA open-weights models they serve are perfectly suitable for coding and chat. I run a local Qwen3.5-122B-A10B-NVFP4 instance and it writes entire Android apps from scratch, and that's a midsized model.
Can you give a list of high-quality alternatives? Morally speaking, I would put China on par with the US, if not worse (due to their ongoing Uyghur genocide). I will check out the SOTA open-weights models but would be interested in others.
Frankly it’s a shitshow all around. The truth is that nobody gives a fuck about this. They have no moral qualms, just practical. And these are the people that should bring us the future. Man what a depressing scenario.
Nothing brings home the Orwellian nature of USA 2026 more for me than the word "warfighter".
I continue to be surprised how many people haven't heard the term until now; it's been in common use in the US for 20+ years.
To me the most Orwellian thing is everyone using the newspeak name for the DoD.
After hearing Palmer Luckey's argument for the name change[0], I tend to think it's good change.
Some of his arguments:
It used to be called the Department of War, and it had a better track record with regard to foreign conflict under that name than it did under the DoD name.
Department of war is a more honest name, department of defense is a somewhat newspeak term, although "Department of Peace" would be worse.
It's harder to seek funding for "war" than it is to seek funding for "defense". If you ask someone, "Do you want to spend money on education or war?", you will get a different answer than if you ask, "Do you want to spend money on education or defense?"
[0] Palmer Luckey talking to Mike Rowe about the name change: https://youtu.be/dejWbn_-gUQ?t=1007
DoW is the opposite of newspeak, it is much more transparent and honest about what that organization is and has been for my entire life
DoW is newspeak. That's not its name.
Just remember, we're not at war with Iran. The House Speaker said so.
We can use the word war because Iran used the word war. But it is not a War in the constitutional sense. Or something.
we are though, they plotted to assassinate the US president, not to mention being the #1 sponsor of terrorism in the middle east, attacking our allies
Sure they did. That's why we only discovered it after we assassinated their current and former leaders.
US took out Iran's supreme leader. It's simple tit for tat.
Then the US killed 150 Iranian sailors in international waters who were demonstrating no threat. A bunch of them drowned - or, for that matter, might still be alive in air pockets, waiting to die - because we're not at war.
It'll be very interesting to see how this case gets resolved - in court and in the court of public opinion. I believe it's incredibly important and I hope they prevail.
The rich people are fighting with each other again.
Not sure why Dario apologized for the internal memo leak. Seems like an odd thing to backtrack on.
Probably because it hurts its position either in court or during negotiations with the DoW.
Right, I was hoping for Anthropic to stand its ground a bit more. There’s quite a bit of “ring kissing” undertones in today’s memo.
As much as Trump and Hegseth would like it to be called the Department of War, it still takes an act of Congress to change the name of the Department of Defense. No reason to call it by anything else until that happens.
Department of peace sounds even better than defense.
I think this is one of the weaknesses of rationalism and effective altruism: they try to make a clean break from the common-law legal reasoning that the government, and thus corporations, operate on. While I find rationalism a useful lens, the fact is that the common-law legal framework is totally dominant, and so these deontological arguments, made rationally, collapse very quickly when translated into the dominant framework.
Not everything has to be a conspiracy or some 4D chess business move. Dario is a morally motivated person and regretted the tone that was being conveyed in that memo, so he apologized.
link to the memo?
https://pbs.twimg.com/media/HCmdjFGXwAAPI3d?format=jpg&name=...
thanks a lot
Could they please start using the correct name? Department of Defense?
They still want that contract so they'll continue to pander.
The correct name is the Department of War.
Calling it the Department of Defense implies a system of laws, checks and balances which no longer exists.
It very much still exists, and statements like this are what’s called “obeying in advance.” Don’t do it.
> I apologize for the tone of the post
What a world we live in now where private companies are apologising for the "tone" of their speech while official representatives of the government daily express blatant lies and misrepresentations without the slightest fear of consequence.
It really is incredibly sad that what was one of the most respected countries in the world has descended to this - an utter mockery of a functioning democracy.
It’s a business decision.
DoD still has not meaningfully moved to the DoW moniker, to me it represents the most fascist tendency, to make announcements and presume that’s enough to change the truth on the ground. The legal entity one contracts with is DoD. Going along with “DoW” is signal to me that a party has capitulated to the most absurd form of governance.
Pragmatically, it's for the best to use its preferred name instead of legal name when sucking up to the department and Trump to try to get back in good graces.
Maybe it's bad that Anthropic wants to embrace the Department of War?
I don't think we'd fail to get AGI if Anthropic were to implode, and frankly, right now, I'd rather have someone say clearly, "They cannot stomach the existence of someone telling them 'No' or adhering to moral principles. Like spoiled children they can't hear the former and are terrified by the latter, because it might expose them to the condemnation they deserve."
So is this a backtrack or clarification on their original stance? Do I need to be worried about skynet killing grandma?
The OpenAI astroturfers jumped on this one. Their only interest is in trying to spin Anthropic as not meaningfully better to dissuade people from switching, not to get people to drop both companies altogether.
I built a website that shows a timeline of recent events involving Anthropic, OpenAI, and the U.S. government.
Posted here: https://news.ycombinator.com/item?id=47195085
A long time ago I worked for a company whose software, I learned, was being sold to help target people during the Iraq war. I quit because I cannot support building software that kills people.
This is a message to the people working in that line of business at Anthropic: you don't have to do it, you can quit. If you are helping this insane administration conduct war on Iran, quit. You don't need to have that kind of blood on your hands.
I saw someone's hypothesis that a generative model was used to help classify buildings to decide what to bomb, and that the girls' school was misclassified. If this was an Anthropic model, I can imagine what it feels like to be a worker in that line of business there.
I've also quit a job where the products I was working on were meant to be deployed to CBP to hunt down immigrants. It's a nice gesture, but it won't stop these companies. They just hired someone else without an ethical backbone and continued the project like nothing happened.
Tech leadership is rotten to the core, and that can't be fixed by individuals making a stand.
Were you earning seven figures tho? That suppresses moral stances rather quickly I reckon
There is a reason you call it 'fuck you money'
Perhaps. It should do the opposite though - you've likely got enough in the bank that you don't need to work a day in your life again.
This is turning into just another reality show. There are no adults anymore.
Cringing every time I see the word "warfighter", and disappointed that they're still pushing to keep that contract.
What's next, bribing Trump with gold bars and donations to "charity"?
You got me wondering, so I checked to see how much Anthropic's bribed Trump so far. According to Dario, Trump has been soliciting bribes, but they refused to pay, and the contract "renegotiation" is retribution:
https://news.ycombinator.com/item?id=47269649
"Amodei claimed that tensions between his company and the Trump administration stem partly from the firm’s refusal to financially support Trump and its approach to AI regulation and safety issues."
They have a crypto coin for explicit bribing
You can also "invest" money for Trump's family to "earn" their "management fees."
Nowhere, because there's no such department..
The internal memo did read as fairly unhinged and political, which is not the message Dario likes to present. I'm glad he addressed this. It was unprofessional and unhelpful - even if Sam Altman is, in fact, a disgusting lunatic.
The one where he accuses Trump of retaliating against Anthropic after failing to solicit a bribe?
That should be the headline here. We know Trump personally made $4B last year, and we know he's been using the full power of the US gov't to retaliate against people that don't "support" him.
Come 2029, when there's an opportunity for the corruption trials to start, this sort of behavior needs to be front of the public mind, both at the top, and throughout his network of appointees.
"As we wrote on Thursday, we are very proud of the work we have done together with the Department, supporting frontline warfighters with applications such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more."