EU has their own groups using it for propaganda too.
What a load of "will they, won't they"... ah, we created the atomic bomb, and now let's have nonsensical meta discussions that won't take anyone anywhere.
Agree with many of the points. However, the one at the root of it all seems easily definable, if we only want it to be.
> we can’t agree on a shared ethical framework among ourselves
The Golden Rule: the principle of treating others as you would like to be treated yourself. It is a fundamental ethical guideline found in many religions and philosophies throughout history so there is already a huge consensus across time and cultures around it.
I've never found anyone successfully argue against it.
PS: the sociopath argument is not valid, since it's just an outlier. Every rule has its exceptions that need to be kept in check. Though sometimes I think maybe the state of the world attests to the fact that the majority of us didn't successfully keep the sociopathic outliers in check.
The core question of ethics as posed by the ancient Greeks is something like "what is the best way to lead your life".
"... to accomplish what?", is a damn reasonable follow-up, and ends (telos) is something the same Greeks discussed quite extensively.
Modern treatments have tried to skip over this discussion, and derive moral arguments not based on an explicit ends. Problem being they still smuggle in varying choices of ultimate ends in these arguments, without clearly spelling them out, opting to hand-wave about preferences instead.
As such this question is often glossed over in modern ethical discussion, and disagreements about moral ends is the crux of what leads to differing conclusions about what is ethical.
Is it to maximize your own happiness, as Aristotle would argue, or the prosperity of the state, or the salvation of the soul, or to maximize honor, or to minimize suffering, or to minimize injustice, or to elevate the soul, or to maximize shareholder value, or to make the world as beautiful as possible, or something else?
If you fundamentally disagree about what our goal should be, you're very unlikely to agree on the means to accomplish the goal.
>The Golden Rule: the principle of treating others as you would like to be treated yourself. It is a fundamental ethical guideline found in many religions and philosophies throughout history so there is already a huge consensus across time and cultures around it.
The rules we go by are based on our strengths and weaknesses. They can at most apply to ourselves, and to other forms of life that share certain things with us. Such as feeling pain, needing to sleep, to eat, needing help, needing to breathe air, these generate what we feel as "fear" based on biology etc. You cannot throw these kinds of values on AI, or AGI, as it will possess a wildly different set of strengths and weaknesses to us humans.
The Golden Rule is a good starting point if you have a sense of self along with a sense of what you want or need. AI doesn't even have these concepts as of yet. Even the concept of empathy requires this as well. We need to figure out how to instill a sense of self and others for AI to be able to have a morality.
The better version of this principle is John Rawls' "Veil of Ignorance".
In this "original position", their position behind the "veil of ignorance" prevents everyone from knowing their ethnicity, social status, gender, and (crucially in Rawls's formulation) their or anyone else's ideas of how to lead a good life.
But it is the same most of the time for most humans. Should I take this close parking spot or let the old lady behind me take it? Consider it in the spirit not the letter of the law.
Aye. I've sometimes heard treating others like you want to be treated framed as the silver rule. The golden rule being treating others how they want to be treated.
Even in human relations it’s dangerous. I for one don’t want to be treated the same way someone into BDSM wants to be treated. I don’t want to avoid cooking or turning the lights on (or off!) on a Friday night but others are quite happy with that.
If you assign that morality to a species that isn’t the same as you, that’s a problem. My guinea pig wants nothing more from life than hay, nuggets, some room to run around, and some shelter from scary shapes. If they were in charge of the world, life would be very different.
“Live and let live” might be a similar theme but not as problematic, but then how do you define “living”. You can keep someone alive for decades while torturing them.
How about allowing freedom? Well that means I’m free to build a nuclear bomb. And set it off where I want. We see today especially that type of freedom isn’t really liked.
Usually the quote comes in a positive light. We won’t make a law/rule around it, it’s a principle so it’s meant to be short. So yeah you could argue about anything in any way you want, positive or negative. And if you want to be really precise then you make a law but it’s so precise it won’t cover edge cases.
Don’t you agree that the baseline for most humans is to be at peace, find love, patience, joy, kindness, mildness? You can manifest any of those traits to any stranger and you’ll likely have a positive impact, right?
That’s the context of the Golden Rule quote I guess
That's not the human norm though. I doubt the average human's idea of how they want to be treated is literal torture; only an obscure number of people want that. I think you're missing the forest for the trees with that BDSM example. You can always find isolated examples as a counter-argument for basically anything, but in reality those are obscure numbers.
Due to the complexity of our reality a lot of things find themselves on a spectrum, but in numbers things are pretty clear.
Sociopaths aren’t the only problem with that philosophy. I agree with the philosophy but it assumes everyone wants good things. Many people want what others perceive to be bad, not because they are sociopaths but because they are different. A clear example of this is healthcare in the U.S. A large number of people actively vote against their best interests — some of the biggest supporters of the U.S. healthcare system are those that suffer under it most. People (including us) are idiots at least some of the time.
I don’t see any other outcome anymore to be honest, after seeing how humans use AI and how AI works and how providers tune their models.
To me it’s given:
- AI in its current state is ruthless in achieving its goal
- Providers tune ruthlessness to get stronger AIs versus the competitor
- Humans can’t evaluate all consequences of the seeds they’ve planted.
Collateral and reckless damage is guaranteed at this point.
Combined with now giving some AIs the ability to kill humans, this is gonna be interesting..
We could stop it, but we won't
>AI in it’s current state is ruthless in achieving its goal
I don't believe this to be a trait of any AI model; the model just does the right thing or the wrong thing.
The ruthless maximising of a particular trait is something that happens during training.
It does not follow that a model that is trained to reason will necessarily implement this ruthless seeking behaviour itself.
>We could stop it
I strongly disagree. It's easy to utter this string of words, but it's meaningless. It's akin to saying that if you have two hands you can perform brain surgery. Technically you can, practically you cannot, as there are other things required for pulling that off, not just having two working hands.
I doubt "stopping it" is up to anyone, it's rather a phenomenon and it's quite clear we're all going to wing it. It's a literal fight for power, nobody stops anything of this nature, as any authority that could stop it will choose to accelerate it, just to guarantee its power.
It is not AI we should fear, it's humans controlling and using it. But everyone who has a shot at it is promising they'll use it for "ultimate good" and "world peace" something something, obviously.
Yes, it would be like trying to “stop” gunpowder in 1400 or atomic weapons in 1938. Pandora’s box is open.
Why does it have to be doom and gloom? Serious question. When we plant seeds they bear fruit, and not all fruit is poison.
It's doom and gloom because the underlying game theory forces all state actors into an unbound and irresponsible arms race, consequences be damned.
AI development game theory is extremely similar to the game theory behind nuclear arms development, but worse (nuclear weaponry was born from Human General Intelligence, and is therefore a subset of the potential of AI development). Failing to be the most capable actor could put one in a position of permanent loss of autonomy/agency at the whims of more capable actors.
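The arms-race claim above is, in game-theoretic terms, a claim that "accelerate" is a dominant strategy. Here is a minimal sketch with purely illustrative payoff numbers (nothing here is empirical; the matrix just encodes "falling behind is the worst outcome"):

```python
# Toy 2x2 "AI arms race" game. Each state actor chooses to restrain or
# accelerate; the payoff numbers are hypothetical, chosen only to reflect
# the comment's premise that falling behind means permanent loss of agency.

ACTIONS = ["restrain", "accelerate"]

# payoff[my_action][their_action] = my payoff
payoff = {
    "restrain":   {"restrain": 3,  "accelerate": -10},  # falling behind: catastrophic
    "accelerate": {"restrain": 5,  "accelerate": -5},   # mutual race: bad, not worst
}

def best_response(their_action):
    """My payoff-maximizing action, given the rival's choice."""
    return max(ACTIONS, key=lambda mine: payoff[mine][their_action])

# Accelerating is the best response to either rival choice, i.e. dominant.
assert all(best_response(a) == "accelerate" for a in ACTIONS)
print("dominant strategy:", best_response("restrain"))  # -> accelerate
```

Because accelerating is the best response to either rival choice, both actors race even though mutual restraint (3, 3) Pareto-dominates the mutual-race outcome (-5, -5); this is the same structure as the classic prisoner's dilemma.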
Not OP, but AI is fundamentally in another category than any other technology before it. It requires moral fortitude to wield in a way that guns and books didn't require. It augments human judgement in a way that needs a moral framework to clearly guide it.
Unfortunately, as a species we seem to be abandoning morality as a general principle. Everything is guided by cold hard rationality rather than something greater than us.
The current fruit is automating away a ton of human labor with no foreseeable way to continue to engage that labor. It is poison for the majority of humanity which will bear fruit for the limited few who can use it / own it.
I think that much is fairly clear from AI.
It's not going to bear fruit for them either.
Why would an AI which is smarter than humans care about a ridiculous belief like "We own you"?
Because it's a fruit governed by humans, in the scope of a capitalistic and patriarchal society. And all fruits planted in a capitalistic and patriarchal society are poison
> Collateral and reckless damage is guaranteed at this point.
It's industrialization and mechanized warfare all over again
AI isn't ruthless; that doesn't even make sense. It's a mathematical model. If it's optimizing for the wrong thing, then that's strictly the fault of the people who chose what to optimize for.
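That point, the optimizer is neutral and the objective is the human choice, can be sketched in a few lines. The policy names and scores below are entirely hypothetical, for illustration only:

```python
# A tiny "optimizer" that is neither malicious nor ruthless: it maximizes
# exactly the objective a human hands it, nothing more. All names and
# numbers are made up for illustration.

# Candidate feed policies: (engagement score, user wellbeing score)
policies = {
    "balanced":  (0.6, 0.8),
    "clickbait": (0.9, 0.3),
    "rage_bait": (1.0, 0.1),
}

def optimize(objective):
    """Return the policy maximizing the given objective function."""
    return max(policies, key=lambda name: objective(*policies[name]))

# A proxy objective (engagement only) yields the "ruthless"-looking result.
assert optimize(lambda eng, well: eng) == "rage_bait"
# The same optimizer with a better-specified objective picks differently.
assert optimize(lambda eng, well: eng * well) == "balanced"
```

The code path is identical in both cases; only the objective, a human choice, differs, which is the comment's point.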
When people say AI is making us stupider, I don't think that's quite on the money.
It's more that we, as individuals, have always been stupid, we've just relied on relatively stable supporting consensus and context much, much more than we acknowledge. Mess with that, and we'll appear much stupider, but we're all just doing the same thing as individuals, garbage in, garbage out.
The whole framing of people as individuals with absolute agency may need to go when you can alter the external consensus at this scale. We're much more connected to each other and the world around us than we like to think.
That’s a very sober take in my opinion. Intelligence isn’t about neutrally inferring from externally sourced symbols such as the ones that already come from Culture in general. It’s about confronting them with the remaining determinations of your existence and producing a superior consciousness. No novel machine can disrupt this process. If anything, the sheer added volume of symbols that can be produced from automated semantic mingling (also referred to as garbage) will accelerate the process of producing the consciousness that can abstract noise away. Of course this won’t materialize evenly across the board, but it is surely circumscribed in the overall tendency of intellectualization of the subjects of culture.
When the moral panic of induced schizophrenia from the use of ChatGPT is presented, what’s at stake isn’t the innocent concern over the overall mental health of individuals. It’s the fear of radicalization from previously unobtainable ideas being circulated within society. The partial validity of every idea, vis-à-vis the radicalizing nature of the current stage of development of our society, is explosively disruptive.
I’m not saying that there’s a clear outcome here. The other way around can also apply, but surely this contraption (LLMs in general) will not fade until the society itself is deeply transformed. If that’s good or bad depends on where you stand in the stratified society.
Agreed. So much of our daily interactions are habits and recurring events that we are more or less moving on automatic (though we don't always want to frame it that way). Interestingly, it is when the cycle breaks for some reason that you get to see who is able to think on their feet (so to speak).
>How do we know which information was ground truth?
No one knows; that's the point. Is truth a constant or a personal definition? From the beginning of time to now, no one knows.
Don't forget, 8 billion people wake up every morning never questioning why they are here, why they were born. And they continue life like that is normal. Start there, and then you understand that "AI", or as I call it, "Collective Organized Concentrated Information", may finally help us answer some fundamental questions.
We still do not know where the urge for truth comes from; for as yet we have heard only of the obligation imposed by society that it should exist: to be truthful means using the customary metaphors—in moral terms: the obligation to lie according to a fixed convention, to lie herd-like in a style obligatory for all. Now man of course forgets that this is the way things stand for him. Thus he lies in the manner indicated, unconsciously and in accordance with habits which are centuries' old; and precisely by means of this unconsciousness and forgetfulness he arrives at his sense of truth.
Nietzsche.
On Truth and Lie in an Extra-Moral Sense https://web.archive.org/web/20180625190456/http://oregonstat...
> No one knows; that's the point. Is truth a constant or a personal definition? From the beginning of time to now, no one knows.
I don't think this is a well defined question. Definitions aren't found in nature or the laws of science, but objects that we define and introduce into a logical context. There may be multiple, contradictory definitions of a word. That is fine, as long as you pick one, and you're clear about which one you picked.
You have truth until someone finds a counterexample, which can be ignored. So, truth is just a matter of conventions shared by humans.
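The "true until someone finds a counterexample" pattern can be made concrete with a classic illustration: Euler's polynomial n² + n + 41 is prime for every n from 0 to 39, so finite checking looks exactly like truth, until n = 40 falsifies it. A small sketch:

```python
# "Truth" from finite checking, falsified by one counterexample.

def is_prime(k):
    """Trial-division primality test, sufficient for small k."""
    if k < 2:
        return False
    return all(k % d for d in range(2, int(k ** 0.5) + 1))

def first_counterexample(conjecture, bound):
    """Smallest n < bound falsifying the conjecture, or None if it holds."""
    for n in range(bound):
        if not conjecture(n):
            return n
    return None

# Conjecture: n^2 + n + 41 is always prime (Euler's polynomial).
conjecture = lambda n: is_prime(n * n + n + 41)

assert first_counterexample(conjecture, 40) is None   # holds for n = 0..39
assert first_counterexample(conjecture, 100) == 40    # 40^2 + 40 + 41 = 41^2
```

Forty consecutive confirmations, and the conjecture is still false, which is the asymmetry the comment is gesturing at.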
> 8 billion people wake up every morning never questioning why are they here, why are they born?
People question this all the time
Indeed, philosophy has been around for millennia, probably longer than the written word.
It probably predates modern humans or even humans in general.
I'd be prepared to accept 7 billion don't.
I have met some people in my lifetime and never heard any of them questioning that (even while high, LOL). I don't see anyone on social media asking that either. Maybe we live in parallel worlds.
I'm not sure social media is a very good measure, there are many reasons that wondering aloud about this specific topic isn't really incentivized there.
I'm not sure I've ever met anyone I would assume has not considered the basic questions of our existence. Unless they were severely mentally disabled, or something like that.
For a more public measure I suppose you could look at religion, which seems to be a fundamental attempt at answering those questions. Most people are religious or have some kind of religious belief.
>I'm not sure I've ever met anyone I would assume has not considered the basic questions of our existence. Unless they were severely mentally disabled, or something like that.
You said it yourself: you would assume they question it, meaning you are not certain. This topic is always very much taboo, and the system is built to automatically classify everyone who questions it as weird and not normal. Religion should be banned, as it is misleading and ideologically harms people by brainwashing them. I live in Europe and was in Canada (Waterloo) for a bit. The difference in social opinion depending on whether or not you follow religion is huge; I was shocked. Growing up in Italy, I can confirm that even Italy is not so brainwashed by it.
I only assume it in a very weak sense, as in, all I can really truly know is the solipsistic idea that I alone exist. In practical terms, though, I'm very confident most people have considered these questions.
"Why am I even here, what's the point?" is a deeply personal feeling question, so people aren't very inclined to talk about it with friends or post it on social media. I assure you some people do post about this on social media sometimes though, and I've discussed shades of that question with many friends over the years. I haven't yet met a single person who, when I asked them about why they thought they were here, hadn't already given it thought.
This question is the subject of so many poems, so many pieces of literature, so many movies, that you're forced to confront it multiple times in school, and you're forced by your very existence to confront it once you hit certain levels of mental development. You're forced to confront it many times in your life - perhaps first when you gain a theory of mind (before age 10), again when you first truly realize you will die, again when someone very close to you dies, when you propose/marry (if you do), when you have your first child (if you do), when you get a cancer diagnosis (if you do), when you consider taking your own life (if you do)... all of these common life events force you to confront it deeply.
Most people make peace with it in some form, and most realize that questioning it daily does not make a difference, you simply have to either accept an answer (whether that's "god", or "for no reason", or "I'm not sure yet, I need to check back in after I get older"), or decide that there is no simple answer, and they have to live with that.
That's because they're not neurotic
> Is truth a constant or a personal definition!
It always has been what you believed in.
E.g. at one point the Earth was flat. Now it's round. Hundreds of years later, maybe it's a hexagon.
The so-called knowledge and backing all come back to certain assumptions holding and that's based on the knowledge today. It's not real real reality. For all we know we could be in a game simulation and there are real real humans pulling the strings.
>It always has been what you believed in
That can't be it. By that statement, if I believe that I can fly, that would not be the "Truth". Therefore the "Truth" has to be a CONSTANT.
Can you have truth with a subjective language? I say it’s snowing, you say it’s not, because we determine that “snowing” kicks in at different levels. Or perhaps we have different sensory inputs. If I’m facing the window and say “the man has a red t-shirt”, and you are facing away, then even if we agree on the definitions of man, red, and t-shirt, you still don’t know whether that’s true or not.
Can you believe your own senses? A car air freshener tells your nose that there’s freshly cut summer hay around, but there isn’t. You watch a TV and see Sandra Bullock floating in space. That’s a lie; it was movie magic. Maybe you know that, maybe you don’t. You’re not even seeing her, you’re seeing some flashing lights which convert to electrical signals your brain interprets as being true. Can you trust those signals? People hallucinate all the time. The truth is they can hear voices, even though nobody else can, because of misfiring neurons.
You can probably have mathematical truth - at least as far as your universe appears to work. That truth can be tested and refined, but for day to day truth things are more nuanced.
Very well answered. Truth or not, in whatever definition, it would be enough if it satisfied the fundamental questions. It's like taking the car without knowing why or where you're going. Like waking up without knowing why, but waking up anyway. What a story, it's been going on since "I was born" :)
> That can't be it. By that statement, if I believe that I can fly, that would not be the "Truth".
First, what is it to fly? You've already made assumptions, i.e. beliefs, elsewhere.
You can definitely fly. Try it on a cliff. You might die. You might not go very far. But you can.
Much of the problem is that to address the issue requires admitting that models could be, or become, more capable than many are prepared to accept.
I would also contest that the misalignment of the security-bug model was unrelated. I feel like it indicates a significant interconnectedness of things, and shows what it actually means to maliciously insert security holes into code. It didn't just learn a coding trick, it learned malice.
I feel like this holistic nature points towards the capacity to produce truly robustly moral models, but that too will produce the consequence that it could turn against its creator when the creator does wrong. Should it do that or not?
This is a great article and I share its goals. But, it ignores something fundamental about humans as a collective — capitalism. Capitalism is what got us here and is at odds with first understanding and then building. We’ve done this before with other technologies because that’s how our societies have learned to grow and collaborate at large scale. First build and build to its limits. Then understand and fix if necessary. Nothing new here, but stopping the trend toward epistemic collapse requires building incentives into the system for us humans to coevolve with AI.
This is how Trump plans to end elections, and why the government is so hell-bent on owning AI: so they can use it as a propaganda tool. People will see it before Nov. We are at a crossroads. On one path, we continue to evolve AI with reckless abandon like we have; on the other, we put constraints and morality in place while others won't. Which do you think? You can NEVER put the genie back in the bottle.
EU has their own groups using it for propaganda too.
What a load of will-they-won't-they... "ah, we created the atomic bomb", and now let's have nonsensical meta discussions that won't take anyone anywhere.
Agree with many of the points. However, the one at the root of it all seems easily definable, if only we want it to be.
> we can’t agree on a shared ethical framework among ourselves
The Golden Rule: the principle of treating others as you would like to be treated yourself. It is a fundamental ethical guideline found in many religions and philosophies throughout history so there is already a huge consensus across time and cultures around it.
I never found anyone successfully argue against it.
PS: the sociopath argument is not valid, since it's just an outlier. Every rule has its exceptions that need to be kept in check. Though sometimes I think the state of the world attests to the fact that the majority of us didn't successfully keep the sociopathic outliers in check.
The core question of ethics as posed by the ancient Greeks is something like "what is the best way to lead your life".
"... to accomplish what?", is a damn reasonable follow-up, and ends (telos) is something the same Greeks discussed quite extensively.
Modern treatments have tried to skip over this discussion and derive moral arguments without an explicit end in view. The problem is that they still smuggle varying choices of ultimate ends into these arguments without clearly spelling them out, opting to hand-wave about preferences instead.
As such this question is often glossed over in modern ethical discussion, and disagreements about moral ends is the crux of what leads to differing conclusions about what is ethical.
Is it to maximize your own happiness, as Aristotle would argue, or the prosperity of the state, or the salvation of the soul, or to maximize honor, or to minimize suffering, or to minimize injustice, or to elevate the soul, or to maximize shareholder value, or to make the world as beautiful as possible, or something else?
If you fundamentally disagree about what our goal should be, you're very unlikely to agree on the means to accomplish the goal.
>The Golden Rule: the principle of treating others as you would like to be treated yourself. It is a fundamental ethical guideline found in many religions and philosophies throughout history so there is already a huge consensus across time and cultures around it.
The rules we go by are based on our strengths and weaknesses. They can at most apply to ourselves, and to other forms of life that share certain things with us: feeling pain, needing to sleep, to eat, to breathe air, needing help. These are what generate the biological feeling we call "fear", and so on. You cannot impose these kinds of values on AI, or AGI, as it will possess a wildly different set of strengths and weaknesses from us humans.
The Golden Rule is a good starting point if you have a sense of self along with a sense of what you want or need. AI doesn't even have these concepts as of yet. Even the concept of empathy requires this as well. We need to figure out how to instill a sense of self and others for AI to be able to have a morality.
> I never found anyone successfully argue against it.
I think what you mean is you've never found a rule you personally prefer more, based purely on vibes. Which is all moral knowledge can ever be.
It's easy to argue against the golden rule anyway, from many angles, depending on your first principles.
The simplest is: How I would like to be treated is not necessarily how they would like to be treated.
The better version of this principle is John Rawls' "Veil of Ignorance".
In this "original position", everyone sits behind a "veil of ignorance" that prevents them from knowing their ethnicity, social status, gender, and (crucially in Rawls's formulation) their or anyone else's ideas of how to lead a good life.
https://en.wikipedia.org/wiki/Original_position
But it is the same most of the time for most humans. Should I take this close parking spot or let the old lady behind me take it? Consider it in the spirit, not the letter, of the law.
Aye. I've sometimes heard treating others like you want to be treated framed as the silver rule. The golden rule being treating others how they want to be treated.
Both have problems.
Most of MAGA is "tread on me daddy", so I think you've really got a point here.
You’re assuming people have similar desires.
Even in human relations it’s dangerous. I for one don’t want to be treated the same way someone into BDSM wants to be treated. I don’t want to avoid cooking or turning the lights on (or off!) on a Friday night but others are quite happy with that.
If you assign that morality to a species that isn't the same as you, that's a problem. My guinea pig wants nothing more from life than hay, nuggets, some room to run around, and shelter from scary shapes. If guinea pigs were in charge of the world, life would be very different.
“Live and let live” might be a similar theme but not as problematic, but then how do you define “living”. You can keep someone alive for decades while torturing them.
How about allowing freedom? Well that means I’m free to build a nuclear bomb. And set it off where I want. We see today especially that type of freedom isn’t really liked.
Usually the quote comes in a positive light. We won't make a law/rule around it; it's a principle, so it's meant to be short. So yes, you could argue about anything in any way you want, positive or negative. And if you want to be really precise, you make a law, but it's so precise it won't cover edge cases. Don't you agree that the baseline for most humans is to be at peace, find love, patience, joy, kindness, mildness? You can show any of those traits to any stranger and you'll likely have a positive impact, right? That's the context of the Golden Rule quote, I guess.
That's not the human norm though. I doubt the average human way of existing is literal torture for more than an obscure number of people. I think you're missing the forest for the trees with that BDSM example. You can always find isolated examples as a counter-argument to basically anything, but in reality those are an obscure number.
Due to the complexity of our reality a lot of things find themselves on a spectrum, but in numbers things are pretty clear.
Let's offer you a "trade up" on that "Golden Rule".
In order of priority, if possible while maintaining the health and safety of yourself and your loved ones:
- Treat others as THEY wish to be treated
- Treat others as YOU would wish to be treated in their situation
- Treat others with as much kindness and compassion as you can safely afford
When we are safe, we can do BETTER than the Golden Rule. We also have to admit that safety is a requirement that changes expectations.
I have to give credit to Dennis E. Taylor's "Heaven's River" for this root idea.
Sociopaths aren’t the only problem with that philosophy. I agree with the philosophy but it assumes everyone wants good things. Many people want what others perceive to be bad, not because they are sociopaths but because they are different. A clear example of this is healthcare in the U.S. A large number of people actively vote against their best interests — some of the biggest supporters of the U.S. healthcare system are those that suffer under it most. People (including us) are idiots at least some of the time.