If the tech industry leaders demonstrated any amount of responsibility, accountability, or care for the general well-being of people, then I think this would instead be an exciting time for tech innovation. The enthusiasm I felt decades ago is completely gone.
There was a recent talk from CCC that got a bit deeper into Peter Thiel's "Antichrist" ideology and where it comes from.
The bottom line of that talk seemed even worse: for some tech leaders, the general wellbeing of the population and "classical" arguments for progress, such as improving living conditions and advancing mutual understanding, aren't merely orthogonal to their goals but are explicit anti-goals. A world with too much wellbeing and too little conflict would ostensibly lead to stagnation, loss of freedom and innovation, and the state Thiel termed the "Antichrist".
Too much conflict is bad as well because it carries the risk of complete destruction, so they'd aim for some kind of ideal level of conflict and suffering in the world, like some sort of twisted inflation target.
Imagine that in ten thousand years aliens actually invade Earth and try to enslave humanity. If we spend this time perfecting the art of war we'll have some chance of survival. If we spend this time just growing tomatoes, no fucking way. So yes, from the perspective of humanity as a whole, it makes sense to say that the perfect amount of war and suffering is non-zero.
> but aliens won't invade Earth
Native Americans thought the same.
> Imagine that in ten thousand years aliens actually invade Earth and try to enslave humanity.
Insert Mr. Bean highway meme
I'd also be excited about this technology if it had come before everything we've seen in the last 25 years. It's irresponsibly naive not to understand by now that technological advances are being used more against us than for us.
It's been really disgusting watching what some people I used to look up to have devolved into.
I don't think the opposition from the public is because they don't see value in AI. Quite the opposite: every single non-tech person I know has used AI tools and can immediately see the value.
The 'backlash' seems to stem from the fact that people, especially white-collar workers, are finally realizing what blue-collar folks have been feeling for quite some time: that an overwhelming majority of technology-driven productivity gains accrue to capital owners, not workers, and AI is the ultimate productivity tool.
It doesn't help that capital owners no longer feel it necessary to even pretend. Like when CEOs openly salivate at the prospect of firing all workers and replacing them with AI. When people see their electricity rates go up to subsidize billionaires building AI data centers. When they see their real wages falling continually while they are told how good the economy is.
If the gains from AI were shared even a little with the regular people, they might not have the deep sense of unease and sometimes open hostility that we are seeing now.
> If the gains from AI were shared even a little with the regular people, they might not have the deep sense of unease and sometimes open hostility that we are seeing now.
Additionally, the AIs are trained on creations by many of those same regular people. They're not just seeing the profits funnelled upwards; some of those profits are being generated through their own works!
And before someone tries to argue "that's just how art etc. work" - sure, but the difference is quantity. If I get inspired by another artist, I can generate output at the speed of one artist. With current AI models, it's like a big company is training millions of artists on your style to pump out new pieces as fast as possible.
It's also worth noting that computers have been largely an equalizing force, since you don't need much capital at all to produce software, just a PC and know-how that's freely available online. You can generally build software cheaper alone or in a small group than as a large company.
AI is taking away and monopolizing the means of production, and making what few white collar workers remain rent their productivity. This is a completely different dynamic.
>Like when CEOs openly salivate at the prospect of firing all workers and replacing them with AI.
I saw a series of ads in a train station the other day for some company claiming to offer "AI employees" that had slogans like "our employees never complain about overtime", "our employees don't ask about vacations", etc. and was just shocked at the brazenness of it.
You will find many such Marie Antoinettes in certain social circles, m/f, but mostly m. Living in such a bubble tends to warp one's perspective, partly as self-justification. The people below become resources, accounted for like energy, materials, and other consumables. Most people no longer notice it, but it is still a telltale sign of how much a company values humans when it delegates herding them to a so-called Human Resources department.
The default rebuttal is that "Human Resources" is just a standard term, which is exactly the point.
Yeah. And the same people seem puzzled when they see random citizens hating on AI with passion and treating it as a threat.
It would be funny if the ads were made by a sole proprietor.
Wow, this is bubble talk. Who are you talking to?! I regularly engage with people who are totally ambivalent towards genAI at best and horrified by genAI at worst. The only people "immediately seeing the value" seem to be marketing grifters on LinkedIn.
"The regular people" …what??
From the article, apparently from an OpenAI researcher:
> “Every time I use Codex to solve some issue late at night or GPT helps me figure out a difficult strategic problem, I feel: what a relief. There are so few minds on Earth that are both intelligent and persistent enough to generate new insights and keep the torch of scientific civilization alive. Now you have potentially infinite minds to throw at infinite potential problems. Your computer friend that never takes the day off, never gets bored, never checks out and stops trying.”
Um, this person needs help? Serious mental issues, hello?! It really concerns me how many people are having breaks with reality, and I don't only mean the poor people who are sadly taking their own lives.
I think it's another example of the Gell-Mann amnesia effect: if you're an expert in something, the AI is often wrong and you're confused why people are saying it's great; if you're less skilled, it can be impressive.
https://archive.is/WWBO4