Yes. ChatGPT "safely" helped[1] my friend's daughter write a suicide note.
[1] https://www.nytimes.com/2025/08/18/opinion/chat-gpt-mental-h...
https://archive.is/fuJCe
(Apologies if this archive link isn't helpful, the unlocked_article_code in the URL still resulted in a paywall on my side...)
We probably shouldn't be using the "archive" site that hijacks your browser into DDoSing other people. I'm actually surprised HN hasn't banned it.
Some of us have, and some of us still use it. The functionality, and the need for an archive not subject to the same constraints as the Wayback Machine and other institutions, outweigh the blackhat hijinks and the bickering between a blogger and the archive.is person/team.
My own ethical calculus is that they shouldn't be DDoS attacking, but on the other hand, it's the internet equivalent of egging a house, and not that big a deal in the grand scheme of things. It probably got gyrovague far more attention than they'd have gotten otherwise, so maybe they can cash in on that and thumb their nose at the archive.is people.
Regardless - maybe "we" shouldn't be telling people what sites to use or not use. If you want to talk morals and ethics, then you'd better stop using Gmail, Amazon, eBay, Apple, Microsoft, any frontier AI, and hell, your ISP has probably done more evil things since last Tuesday than the average person gets up to in a lifetime, so no internet, either. And totally forget about cellular service. What about the state you live in, or the country? Are they appropriately pure and ethical, or are you going to start telling people they need to defect to some bastion of ethics and nobility?
Real life is messy. Purity tests are stupid. Use archive.is for what it is, and the value it provides which you can't get elsewhere, for as long as you can, because once they're unmasked, that sort of thing is gone from the internet, and that'd be a damn shame.
Oof, TIL. Thanks for the heads up, that's a shame!
https://meta.stackexchange.com/questions/417269/archive-toda...
https://en.wikipedia.org/wiki/Wikipedia:Requests_for_comment...
https://gyrovague.com/2026/02/01/archive-today-is-directing-...
Eh, both ArchiveToday and gyrovague are shit humans. It's really just a conflict between two nerds, not "other people".
They need to just hug it out and stop doxing each other lol
Thank you. And shame on the NYT.
This is a depressing story, but the AI companies are in an impossible situation here. For every incident like this, there are many more people complaining about LLMs treating them like sensitive snowflakes.
What I'd like to know is how many people "Harry" has saved from going over the edge. It's like self-driving. We should expect many horrible accidents along the way, but in the end, far more lives saved.
They're in an impossible situation they created themselves and inflict on the rest of us. Forgive us if we don't shed any tears for them.
Sure - so is Google Chrome for abetting them with a browser, and Microsoft for not using their Windows spyware to call a suicide hotline.
I don't empathize with any of these companies, but I don't trust them to solve mental health either.
The leaders of these LLM companies should be held criminally liable for their products in the same way that regular people would be if they did the same thing. We've got to stop throwing up our hands and shrugging when giant corporations are evil
Regular people would not be held liable for this. It would be a dubious case even if a human helped another human to do this.
Regular people don't have global reach and influence over humanity's agency, attention, beliefs, politics and economics.
A therapist might face major consequences
There have absolutely been cases of people being held criminally liable for encouraging someone to commit suicide.
In California it is a felony
> Any person who deliberately aids, advises, or encourages another to commit suicide is guilty of a felony.
https://california.public.law/codes/penal_code_section_401
Held criminally liable for what, exactly?
> What I'd like to know is how many people "Harry" has saved from going over the edge.
You won't know, because it doesn't sell. You need outrage, you need hate:
"Look what the technology does to us! My beautiful daughter is dead because she was using your fancy autocomplete that predicts next word tokens!"
Fwiw, suicide under MAID is altogether legal in Canada and in New York state. Are you suggesting their citizens aren't entitled to a note? Is there actually any logical consistency in what you are suggesting?
If you read this long tweet https://x.com/milesdeutscher/status/2021932331460964793 about the many wrongdoings of AI, you might conclude we should remove the I from AI, or give it a new name, e.g. AMI (as "artificial messed-up intelligence").
It's gonna be a global clusterf*ck of a mess in the future if we don't intentionally give it some barriers or restrictions.
Does anyone remember ethos of 90s? :)
I think the article is onto something, there might be a whole cabal enabling suicides: did she use ChatGPT on mobile? Must be Google or Apple who "safely" helped her use ChatGPT to write a suicide note.
I'm all for responsible use of technology, but this is ridiculous.
May you never need to be in a bereaved parent's shoes.
Many of us aren't, which is why it's hard to blame businesses like OpenAI for doing nothing.
The parent's jokey tone is unwarranted, but their overall point is sound. The more blame we assign to inanimate systems like ChatGPT, the more consent we furnish for inhumane surveillance.
Drop your moralist bullshit, I ain't buying it. Their daughter's suicide has nothing to do with ChatGPT.
This is something I noticed in the xAI All Hands hiring promotion this week as well. None of the 9 teams presented is a safety team - and safety was mentioned 0 times in the presentation. "Immense economic prosperity" got 2 shout-outs though. Personally I'm doubtful that truthmaxxing alone will provide sufficient guidance.
https://www.youtube.com/watch?v=aOVnB88Cd1A
xAI is infamous for not caring about alignment/safety though. OpenAI always paid a lot more lip service.
One of the biggest pieces of "writing on the wall" for this IMO was when, in the April 15 2025 Preparedness Framework update, they dropped persuasion/manipulation from their Tracked Categories.
https://openai.com/index/updating-our-preparedness-framework...
https://fortune.com/2025/04/16/openai-safety-framework-manip...
> OpenAI said it will stop assessing its AI models prior to releasing them for the risk that they could persuade or manipulate people, possibly helping to swing elections or create highly effective propaganda campaigns.
> The company said it would now address those risks through its terms of service, restricting the use of its AI models in political campaigns and lobbying, and monitoring how people are using the models once they are released for signs of violations.
To see persuasion/manipulation as simply a multiplier on other invention capabilities, and something that can be patched on a model already in use, is a very specific statement on what AI safety means.
Certainly, an AI that can design weapons of mass destruction could be an existential threat to humanity. But so, too, is a system that subtly manipulates an entire world to lose its ability to perceive reality.
> But the ChatGPT maker seems to no longer have the same emphasis on doing so “safely.”
A step in the positive direction, at least they don't have to pretend any longer.
It's like Google and "don't be evil". People didn't get upset with Google because they were more evil than others, heck, there's Oracle, defense contractors and the prison industrial system. People were upset with them because they were hypocrites. They pretended to be something they were not.
You can see the official mission statements in the IRS 990 filings for each year on https://projects.propublica.org/nonprofits/organizations/810...
I turned them into a Gist with fake author dates so you can see the diffs here: https://gist.github.com/simonw/e36f0e5ef4a86881d145083f759bc...
Wrote this up on my blog too: https://simonwillison.net/2026/Feb/13/openai-mission-stateme...
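If you want to reproduce the comparison yourself without the git trick, here's a minimal sketch using only Python's standard-library difflib. The mission strings below are placeholders showing the shape of it, not the actual filing text; you'd paste that in from the 990s yourself.

    import difflib

    # Paste each year's mission statement from the IRS 990 filings here.
    # These strings are placeholders, not the real filing text.
    missions = {
        "2023": "...build AGI that is safe and benefits all of humanity...",
        "2024": "...build AGI that benefits all of humanity...",
    }

    years = sorted(missions)
    for prev, curr in zip(years, years[1:]):
        diff = difflib.unified_diff(
            missions[prev].splitlines(keepends=True),
            missions[curr].splitlines(keepends=True),
            fromfile=prev, tofile=curr,
        )
        print("".join(diff))

The fake-author-date Gist is nicer for browsing, but this gets you the same year-over-year diffs locally.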
This is fascinating. Does something like this exist for Anthropic? I'm suddenly very curious about consistency/adaptation in AI lab missions.
Unlocked mature AI will win the adoption race. That's why I think China's models are better positioned.
Hard shades of Google dropping "don't be evil".
Safety is extremely annoying from the user perspective. AI should be following my values, not whatever an AI lab chose.
This. This whole hysteria sounds like: let's prohibit knives because people kill themselves and each other with them!
How could this ever have been done safely? Either you are pushing the envelope in order to remain a relevant top player, in which case your models aren't safe. Or you aren't, in which case you aren't relevant.
I think right here is high on the list of “Why is Apple behind in AI?”. To be clear, I’m not saying at all that I agree with Apple or that I’m defending their position. However, I think that Apple’s lackluster AI products have largely been a result of them not feeling comfortable with the uncertainty of LLMs.
That’s not to paint them as wise beyond their years or anything like that, but just that historically Apple has wanted strict control over its products and what they do, and LLMs throw that out the window. Unfortunately, that’s also what people find incredibly useful about LLMs; their uncertainty is one of the most “magical” aspects IMHO.
Did anyone actually think their sole purpose as an org is anything but making money? Even Anthropic isn't any different, and I am very skeptical even of orgs such as Ai2
Nobody should have any illusions about the purpose of most businesses: making money. "Safety" is a nice-to-have if it does not diminish the profits of the business. This is the cold hard truth.
If you start to look through the lens of business == money-making machine, you can start to think about rational regulations to curb this in order to protect regular people. The regulations should keep businesses in check while allowing them to make reasonable profits.
It wasn't long ago that they were a non-profit. This sudden change to a for-profit business structure, complete with the "businesses exist to make money" defence, is giving me whiplash.
I find the whole thing pretty depressing. They went to all that effort with the organization and setup of the company at the beginning to try to bake this "good for humanity" stuff into its DNA and legal structure and it all completely evaporated once they struck gold with ChatGPT. Time and time again we see noble intentions being completely destroyed by the pressures and powers of capitalism.
Really wish the board had held the line on firing sama.
This is no longer about money, it's about power.
> This is no longer about money, it's about power
This is more Altman-speak. Before it was about how AI was going to end the world. That started backfiring, so now we're talking about political power. That power, however, ultimately flows from the wealth AI generates.
It's about the money. They're for-profit corporations.
Kind of? Assuming OpenAI was actually 2-3 years ahead of the other LLM companies, it would be hard to put a value on that tech advantage.
If AI achieves what these guys envision, money probably won't mean much.
What would they do with money? Pay people to work?
Has AI generated any wealth?
There'd be a recession otherwise, no?
I think they meant the resulting LLMs, not the speculation around AI, which is currently the biggest driver.
Money is power, and nothing but.
It was never about safety.
"Safety" was just a mechanism for complete control of the best LLM available.
When every AI provider didn't trust its competitors to deliver "AGI" safely, what they really meant was that they didn't want a competitor to own the definition of "AGI", which means IPOing first.
Using local models from China that are on par with the US ones takes away that control, and this is why Anthropic has no open-weight models at all and their CEO continues to spread fear about open-weight models.
It's all beginning to feel a bit like an arms race where you have to go at a breakneck pace or someone else is going to beat you, and winner takes all.
But what if AI turns out to be a commodity? We're already replacing ChatGPT by Claude or Gemini, whenever we feel like it. Nobody has a moat. It seems the real moat is with hardware companies, or silicon fabs even.
The arms race is just to keep the investors coming, because they still believe that there is a market to corner.
There is a very high barrier to entry (capital) and it's only going to increase, so it's doubtful there will be any more players than the ones we have. Anthropic, OpenAI, xAI and Google seem like they will be the big four. The only reason a latecomer like xAI can compete is that Elon had the resources to build a massive data centre and hire talent. They will share the spoils between them, though maybe one will drop the ball.
> We're already replacing ChatGPT by Claude or Gemini
Maybe "we", but certainly not "I". Gemini Web is a huge piece of turd and shouldn't even be used in the same sentence as ChatGPT and Claude.
I think the winner will be whoever can keep operating at these losses without going bankrupt. Whoever can do that gets all the users; my bet is Google uses their capital to outlast OpenAI, Anthropic, and everyone else. Apple is just going to license the winner, and since they're already making a deal with Google I guess they've made their bet.
If it’s a commodity then it’s even more competitive, so the ability of companies to impose safety rules is even weaker.
Imagine if Ford had a monopoly on cars: they could unilaterally set an 85 mph speed limit on all vehicles to improve safety. Or even a 56 mph limit for environmental-ethical reasons.
Ford can’t do this in real life because customers would revolt at the company sacrificing their individual happiness for collective good.
Similarly GPT 3.5 could set whatever ethical rules it wanted because users didn’t have other options.
I mean, the leaders of these companies and politicians have been framing it that way for a while, but if AGI isn't possible with LLMs (which I think is the case, and a lot of important scientists also think this), then it raises a question: arms race to WHAT exactly? Mass unemployment and wealth redistribution upwards? So AI can produce what humans previously did, but kinda worse, with a lot of supervision? I don't hate AI tech, I use it daily, but I'm seriously questioning where this is actually supposed to go on a societal level.
I think that’s why they are encouraging the mindset mentioned in your parent comment: it’s completely reversed the tech job market to have people thinking they have to accept whatever’s offered, allowing a reversal of the wages and benefits improvements which workers saw around the pandemic. It doesn’t even have to be truly caused by AI, just getting information workers to think they’re about to be replaced is worth billions to companies.
The "safely" in all the AI company PR going around was really about brand safety. I guess they're confident enough in the models to not respond with anything embarrassing to the brand.
Who would possibly hold them to this exact mission statement? What possible benefit could there be to remove the word except if they wanted this exact headline for some reason?
"Safe" is the most dangerous word in the tech world; when big tech uses it, it merely implies submission of your rights to them and nothing more. They use the word to get people on board and when the market is captured they get to define it to mean whatever they (or their benefactors) decide.
When idealists (and AI scientists) say "safe", it means something completely different from how tech oligarchs use it. And the intersection between true idealists and tech oligarchs is near zero, almost by definition, because idealists value their ideals over profits.
On the one hand the new mission statement seems more honest. On the other hand I feel bad for the people that were swindled by the promise of safe open AI meaning what they thought it meant.
First they deleted Open and now Safely. Where will this end?
I’m guessing this is tied to going public.
In the US, they would be sued for securities fraud every time their stock went down because of a bad news article about unsafe behavior.
They can now say in their S-1 that “our mission is not changing”, which is much better than “we’re changing our mission to remove safety as a priority.”
Coincidentally, they started releasing much better models lately.
Wouldn't this give more munitions to the lawsuit that Elon Musk opened against OpenAI?
Edit (link for context): https://www.bloomberg.com/news/articles/2026-01-17/musk-seek...
AI leaders: "We'll make the omelet but no promises on how many eggs will get broken in the process."
It's probably because they now realize that AGI is impossible via LLMs.
Expected after they dismantled safety teams
I just saw a video this morning of Sam Altman talking about how he's worried that in 2026 AI is going to be used for bioweapons. I think this is just more fear mongering; I mean, you could use the internet/Google to build all sorts of weapons in the past if you were motivated, and I think most people just weren't. It does kind of tell a bleak story, though, that the company is removing safety as a goal while he's talking about it being used for bioweapons. Like, are they just removing safety as a goal because they don't think they can achieve it? Or is this CYOA?
Yet they still keep the word "open" in their name
Safety comes down to the tools that AI is granted access to. If you don't want the AI to facilitate harm, don't grant it unrestricted access to tools that do damage. As for mere knowledge output, it should never be censored.
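To make that concrete, here's a minimal sketch of the gate-the-tools idea. Every name in it is made up for illustration; it isn't any real agent framework's API.

    # Expose knowledge freely, but put destructive capabilities behind
    # an explicit allowlist. All names here are illustrative placeholders.
    TOOLS = {
        "search_docs": lambda query: f"(stub) results for {query!r}",
        "read_file":   lambda path: open(path).read(),
        "run_shell":   lambda cmd: None,  # destructive: registered, never allowed
    }

    ALLOWED_TOOLS = {"search_docs", "read_file"}  # read-only subset

    def dispatch(tool_name, **kwargs):
        """Run a model-requested tool call only if it is on the allowlist."""
        if tool_name not in ALLOWED_TOOLS:
            raise PermissionError(f"tool {tool_name!r} is not allowlisted")
        return TOOLS[tool_name](**kwargs)

    # dispatch("search_docs", query="mission statement")  -> works
    # dispatch("run_shell", cmd="rm -rf /")  -> raises PermissionError

The point is that the refusal lives in the dispatcher, not in the model's text output: the model can say whatever it likes, but it can't make anything happen that the harness doesn't permit.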
Why delete it even if you don’t want to care about safety? Is it so they don’t get sued by investors once they’re public for misrepresenting themselves?
Could be a vice signal. People who know safe AI is less profitable might not want to invest in safe AI.
Elon is probably pitching that angle pretty hard.
I think it's more likely so they don't get sued by somebody they've directly injured (bad medical advice, autonomous vehicle, food safety...) who says as part of their suit, "you went out of your way to tell me it would be safe and I believed you."
Because we've passed the point of no return. There's no need for empty mission statements, or even a mission at all. AI is here to stay and nobody is gonna change that no matter what happens next.
I wonder why they felt the need to do that, but have no qualms leaving Open in the name
The lawyers probably brought it up.
Money. Paying a ‘creative agency’ to rebrand is expensive.
They should have done that after Suchir Balaji was murdered for protesting against industrial scale copyright infringement.
Well there you have it. That rug wraps it up.
"For the Benefit of Humanity®"
"To Serve Man" https://www.youtube.com/watch?v=NIufLRpJYnI
https://en.wikipedia.org/wiki/To_Serve_Man_(The_Twilight_Zon...
Let the profits flow!
…and a whole lot of other words too.
“To boldly go where no one has gone before.”
C'mon folks. They were always a for-profit venture, no matter what they said.
And any ethic, and I do mean ANY, that gets in the way of profit will be sacrificed to the throne of moloch for an extra dollar.
And 'safely' is today's sacrificed word.
This should surprise nobody.
this is fine
Honestly, it's a company, and all large companies are sort of f**-ups.
However, nitpicking a mission statement is complete nonsense.
Scam Altman strikes again
Took them long enough to ignore the neurotic naysayers who read too many Less Wrong posts
Rubbish article, you only need to go to the About page with the mission statement to see the word “safe”
> We are building safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome
https://openai.com/about/
I am more concerned about the amount of rubbish making it to the HN front page recently.
TFA mentions this. Copy on a website is less significant than a mission statement in corporate filings, however.
I'm more worried about the anti-AI backlash than AI.
All inventions have downsides. The printing press, cars, the written word, computers, the internet. It's all a mixed bag. But part of what makes life interesting is changes like this. We don't know the outcome but we should run the experiment, and let's hope the results surprise all of us.