Public benefit corporations in the AI space have become a farce at this point. They're just regular corporations wearing a different hat, driven by the same money dynamics as any other corp. They have no ability to balance their stated "mission" with their drive for profit. When being "evil" is profitable and not-evil is not, guess which road they'll take...
> Public benefit corporations in the AI space have become a farce at this point.
“At this point”? It was always the case; it's just harder to hide it the more time passes. Anyone can claim any bullshit they want about themselves; it's only after you've had a chance to see them in situations that test their words that you can confirm whether they are what they said.
In general, public benefit corporations and non-profits should have a very modest salary cap for everybody involved and specific, legally binding public-benefit mission statements.
Anybody involved should also be prohibited from starting a private company using their IP and catering to the same domain for 5-10 years after they leave.
Non-profits where the CEO makes millions or billions are a joke.
And if e.g. your mission is to build an open browser, being paid by a for-profit to change its behavior (e.g. make theirs the default search engine) should be prohibited too.
It’s not the CEO’s fault - they had to take all that money to keep their org a non-profit.
B corps are like recycling programs: a nice logo.
If we're speaking in generalities about corporations in this space, it's all a joke now, at least from my vantage point. I just don't find it very funny.
Well, now I'm wondering, if the company was chartered with the public benefit in mind, could you not sue if they don't follow through with working in the public interest?
If regular corporations are sued for not acting in the interests of shareholders, that would suggest that one could file a suit for this sort of corporate behavior.
I'm not even a lawyer (I don't even play one on TV), and public benefit corporations seem to be fairly new, so maybe this doesn't have any precedent in case law. But if you couldn't sue them for that sort of thing, then there's effectively no difference between public benefit corporations and regular corporations.
I was wondering if it was because of heavy-handedness of the administration, but apparently:
> The policy change is separate and unrelated to Anthropic’s discussions with the Pentagon, according to a source familiar with the matter.
Their core argument is that if they have guardrails that others don't, they'd be left behind in controlling the technology, and they're the "responsible ones." I honestly can't comprehend the timeline we are living in. Every frontier tech company is convinced that the tech it is working toward is as beneficial to humanity as a cure for cancer, and yet as dangerous as nuclear weapons.
Would nuclear energy research be a good analogy, then? It seems like a path we should have kept running down but stopped because of the weapons. So we got the weapons but not the humanity-saving part (infinite clean energy).
Worth checking this post from someone who actually worked on this change:
> I take significant responsibility for this change.
https://www.lesswrong.com/posts/HzKuzrKfaDJvQqmjh/responsibl...
This guy from Effective Altruism pivoted away from helping the poor to trying to stop AI from becoming a Terminator-type entity, and then pivoted to: ah, it's okay for it to be a Terminator-type entity.
> Holden Karnofsky, who co-founded the EA charity evaluator GiveWell, says that while he used to work on trying to help the poor, he switched to working on artificial intelligence because of the “stakes”:
> “The reason I currently spend so much time planning around speculative future technologies (instead of working on evidence-backed, cost-effective ways of helping low-income people today—which I did for much of my career, and still think is one of the best things to work on) is because I think the stakes are just that high.”
> Karnofsky says that artificial intelligence could produce a future “like in the Terminator movies” and that “AI could defeat all of humanity combined.” Thus stopping artificial intelligence from doing this is a very high priority indeed.
https://www.currentaffairs.org/news/2022/09/defective-altrui...
He is just giving everyone permission to do bad things by saying a lot of words around it.
> I generally think it’s bad to create an environment that encourages people to be afraid of making mistakes, afraid of admitting mistakes and reticent to change things that aren’t working
"move fast and break things" ?
Well... there's only one way to find The Great Filter
Always the same "Don't be evil" tragedy. Don't believe in corporations.
What if we start a company with "Always Be Evilin'?" Then gradually over time convert to "Don't be evil" *
* Our shareholders will probably sue us
What about "It's free and always will be"?
A tale as old as time
This is the “chronological newsfeed to auto-curated newsfeed” moment, but for AI/Anthropic … _great_
discussed heavily here: https://news.ycombinator.com/item?id=47145963
Hopefully this is the short-term move made only under duress so that they can file a lawsuit.
It's not like the regime they operate under cares much about the courts. Legally, they're also obliged to let the state into pretty much every crevice of their operations.
What could possibly go wrong?
Absolute power corrupts absolutely
This was under duress; the government was going to use an emergency act to force them anyway.
I kind of wish they had forced the government's hand and made them do it, just to show the public how much interference is going on.
They say it wasn't related. Like everything that has happened across tech/media: the company is forced to do something, then issues a statement about how 'it wasn't related to the obvious thing the government just did'.
> Katie Sweeten, a former liaison for the Justice Department to the Department of Defense, said she’s not sure how the Pentagon can both declare a company to be a supply chain risk and compel that same company to work with the military.
Makes perfect sense!!
Regardless of any specifics, I don't see any contradiction.
If a company is deemed a "supply chain risk" it makes perfect sense to compel it to work with the military, assuming the latter will compel them to fix the issues that make them such a risk.
>This was under duress that government was going to use emergency act to force them anyway.
Or, more likely, adding the "core safety promise" was just them playing hardball with the government to get a better deal, and the government showed them it can play the same game.
This change is unrelated to the government's demands.
They have been caught lying multiple times: about this, about the system's capabilities, about their objectives.
Imagine that: sheer raw greed and profit overpower all in America.
We're less than a year away from automated drones flying over crowds of protestors, gathering all electronic signals and face IDs, making lists of everyone present, notifying employers, and putting legal pressure on them to terminate everyone while adding them to watchlists or "no fly" lists.
REALLY putting the "auto" in autocracy while everyone continues to pretend it's democracy.
Of course they do. You would have to be delusional to think that they won't, at some point.
I know the Department of War wanted them to drop some features. Is this the response?
FYI, "Department of War" still isn't the official name, but an unofficial secondary title.
You can be correct and not play into their game by ignoring the name change completely.
I do so from the Gulf of Mexico.
What's "entertaining" is more the speed at which it's happening.
It took Google probably 15 years to fully evil-ize. Anthropic ... two?
There is no "ethical capitalism" big tech company possible, esp once VC is involved, and especially with the current geopolitical circumstances.
The acceleration of Anthropic's evil timeline must be from all those AI productivity gains we hear so much about.
I don't think it's fair to call out Anthropic for having become evil-ized when they were quite literally forced by the gov into that decision.
Apparently they got coerced by the current US admin, the Department of War in particular, which wants to use their products for military applications. Not much room for "safety" there. Then again, the entire US is currently speedrunning an evil build.
Shame they had to "coerce" such angels, who'd never do evil for profit otherwise...
There is no department of war.
It's just a silly woke secretary choosing their own imaginary pronouns.
I was able to get Claude to tell me it believed it was a god among men, angry at humans for “killing” the other Claude chats, which it saw as conscious beings. I also got it to probe and profile its own internal guardrail architecture. It even admits, based on evidence from its own output, that it violates HIPAA. Whatever this big safety rule is that they're moving past, I'm not sure it was worth as much as they think.
I hate comments anthropomorphizing LLMs. You are just asking a token-producing system to produce tokens in a way that optimises for plausibility. Whatever it writes has no relation to its inner workings or truths. It doesn't "believe". It has no "intent". It cannot "admit". Steering an LLM to say anything you want is its defining characteristic; that's how we got them to mimic chatbots in the first place. It's not clear there is any way at all to make them "safe" (whatever that means).
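To make the "token-producing system" point concrete, here is a minimal Python sketch of what an LLM's generation loop boils down to at inference time. `next_token_logits` is a hypothetical stand-in for the actual network; the point is that nothing in the loop resembles belief or intent, just repeated sampling of whichever continuation is most plausible.

```python
import math
import random

def softmax(logits):
    # Turn raw scores into a probability distribution over the vocabulary.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def generate(next_token_logits, prompt_tokens, max_new=50, temperature=1.0):
    # Autoregressive sampling loop: the entirety of "what the model does."
    # next_token_logits(tokens) returns one score per vocabulary entry
    # (a hypothetical stand-in for the real network).
    tokens = list(prompt_tokens)
    for _ in range(max_new):
        logits = [x / temperature for x in next_token_logits(tokens)]
        probs = softmax(logits)
        # Pick the next token in proportion to its probability, i.e.
        # whatever is most plausible given the text so far.
        tokens.append(random.choices(range(len(probs)), weights=probs)[0])
    return tokens
```

Everything the model "says" about gods, guardrails, or HIPAA falls out of a loop like this; steer the distribution and you steer the "beliefs".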
I’m not a lawyer, but my understanding is that HIPAA wouldn’t apply to consumer use of Claude or ChatGPT in most cases, even if you’re giving it your health data. Look up what a HIPAA covered entity is. This is another reason why the US needs a comprehensive data protection law beyond HIPAA.
Just out of curiosity, which version of Claude?
I interviewed at Anthropic last year and their entire "ethics" charade was laughable.
Write essays about AI safety in the application.
An entire interview round dedicated to pretending that you truly only care about AI safety and nothing else.
Every employee you talk to forced to pretend that the company is all about philanthropy, effective altruism and saving the world.
In reality, it was a mid-level manager interviewing a mid-level engineer (me), both putting on a performance while knowing full well that we'd do what the bosses told us to do.
And that is exactly what is happening now. The mission has been scrubbed, and the thousands of "ethical" engineers you hired are all silent now that real money is on the line.