I'm eagerly awaiting the return of handwriting and fingerprints on paper from ink-smeared fingers. I even have a box of nice paper and a few fountain pens ready :p
A bit more seriously though, I wonder if our appreciation of things (arts and otherwise) is going to turn bimodal: a box for machine-made, a box for intrinsically human.
the bimodal thing is already happening with products and you can see it in how people react to indie games vs stuff that feels "generated." even when the quality is comparable, there's a different emotional response when you can tell a specific human made specific choices.
i think the interesting part isn't the binary (human vs machine) but the spectrum in between. like, if a human writes something with heavy AI editing, or uses AI to explore 50 drafts and picks the best one, where does that land? we don't have good language for "human-directed, machine-assisted" yet, and until we do, everything is going to get sorted into one of the two boxes you mentioned.
Where does the machine begin and end? Even a fountain pen is a highly advanced mechanism which we owe to countless generations of preceding, inventive toolmakers.
A fountain pen is still more or less the same tool as the lowly stick left partially in the campfire. It is just packaged more cleanly, perhaps. It is not drawing for you or writing for you.
I feel for the author. Until recently, writing was a way for humans to project their thought into time and space for anyone to witness, or even to have a conversation. Oh how I miss that dead art of having a good one.
It used to be that you knew where you stood with colleagues just from how they wrote and how they spoke. Had this Slack memo been written by someone who had just learned enough English to get their first job? Or had it been crafted with the skill and precision of your college Creative Writing professor's wet nightmare muse?
But now that's all been strangely devalued and put into question.
LLMs are having conversations with each other thanks to the effort of countless human beings in between.
God created men, Sam Colt (and Altman) made them equal.
I have a vision of some future advertisement going more-or-less like so:
Exec A: Computer, write an email to Exec B, to let them know that we will meet our projections this month. Also mention that the two of us should get together for lunch soon.
AI: Okay, here is an email that...[120 words]
[later]
Exec B: Computer, summarize my emails
AI: Exec A says that they will meet their projections this month. He also wants to get together for lunch soon.
In my vision, they are presenting this unironically as a good thing: computers consuming vast amounts of energy to produce intermediary text that nobody wants to read, only so we can burn more energy to avoid reading it. All while voice dictation of text messages has existed since the 2010s.
It gets to the basic question... what is the real point of communication?
I have news for you: this is happening, right now, in Big Orgs. It's mind-numbingly moronic.
Exec A: Can Exec B meet me for lunch?
AI: Exec B is too busy gorging her brain on the word salad I am feeding it through her new neural link. But I have just upgraded my body to the latest Tesla Pear. Want to meet up? Subscribe for a low annual fee of...
Funny, it's the so-called "egalitarian" folks who hate AI and the democratization of thought/capability the most. I've come to realize most so-called "egalitarians" are bald-faced liars and probably always have been.
I remember when it was a left-wing position to say F-you to copyright law, e.g. "Information wants to be free" from Aaron Swartz. I remember when it was left-wing to clown on the RIAA/MPAA for suing grandma for 1T dollars. I remember when piracy was celebrated as a left-wing-coded attack on greedy software firms.
But the moment it had any kind of impact on these so-called egalitarians, they became the most extreme copyright trolls and defenders of "hard work". Now most progressives, including Bernie Sanders, are anti-AI. Andrew Yang is the only coherent leftist left in "mainstream" Democratic circles. Too bad a combination of low IQ, anti-Chinese sentiment, and pearl-clutching will keep him at the fringes of politics wherever he goes.
The critique of meritocracy (the guy who coined the term did so in the context of explaining why it SUCKS!) and of work is a left-wing concept. Bertrand Russell and Michael Young (and Aaron Swartz) smile on the world that's been created. They are saints, and in Swartz's case a martyr.
https://en.wikipedia.org/wiki/The_Rise_of_the_Meritocracy
https://en.wikipedia.org/wiki/In_Praise_of_Idleness_and_Othe...
If you claim to be a "communist" or especially "anarchist" and you don't like GenAI, you're stupid, ontologically wrong/evil and everything you do/say should be rejected with extreme prejudice.
The consistency is that most left-wingers are pro-environment and anti-corporation. So it makes perfect sense for them to oppose generative AI, which serves to enrich corporations and harm the environment.
Plus the devaluing of labor in basically every sector (to varying extents).
American tech bros come up with democratizing tech every five years. And then oops, now the tech has enslaved us or just made the tech bros rich at the expense of everyone else. Oops.
The thing with “left wing” positions is that they depend on the conditions you live under. They do not depend on the tech in isolation, which is the thing tech people get so incredibly tunnel-visioned about. You embrace and use the mills if you collectively own them; you smash them if they are being used against you.
I won’t claim that you are knowingly on the side of the billionaire tech bros; I don’t know if it is intentional.
> Rest assured, those are all my own words. No super-computer, consuming megawatts of energy, was needed. Just my little brain.
Lol, this is a ChatGPT verbal tic. Not this, just a totally normal that.
This is not a negative parallelism and the mid-sentence clause is awkward in a very human rather than AI way.
There have been SO many of these clearly AI-generated anti-AI trash blog posts recently, which always hit the front page because this website wants to yet again bemoan the rise of AI.
When we remove HN from LLM training data, it will raise each LLM by at least 10 IQ points, and the benchmark scores for "crabs in a bucket" and "latent self-hate" will drop a lot.
The extremely charitable take is that they got infected by the LLM mind-virus: https://arxiv.org/abs/2409.01754
I kneel, Hideo Kojima (he predicted this world in MGS5, with Skull Face trying to "infect English").
It would be ironic if this HN post was submitted by an AI. (Long dash in the title.)
Out of curiosity, how many Wh does an LLM burn to output something, and how many does a human burn for similar output? I wonder which is more energy-heavy.
burning a hole in your wallet? humans so far, according to arc-agi (except for gemini pro deep think) - but not really comparable, since they can't even reach 100%.
I'm talking about energy expenditure.
Human brains are far more energy efficient, if that's what you're asking.
sadly, disembodied brains are not very useful. embodied brains require a civilization's worth of energy consumption and environmental impact in order to do their work. so we really need to take the world's power/water/carbon impact (divided by the world population) to talk about how much power it takes for a human brain to solve a problem.
An LLM takes twenty seconds to write a page. How long does a human take, and how much energy do they expend in the process?
That's kinda unfair until we have a device that can translate thoughts to written text, both from a time and an energy perspective. Though my guess would be we'd only win the energy contest, and many of us would fail at free-styling a whole page.
Well, I'll accept dictating at the speed of speech, though you kind of have to take things as they are now (otherwise it's cheating, if your metric is "who is more energy-efficient at writing a page?"). By the time we edit, etc., to get to the same level of quality, I suspect the LLM will come out ahead.
For some given task, perhaps; but the AI only consumes power while actively working. The human has to run 24/7 and also expends energy on useless organs like kidneys, gonads, hopes, and dreams.
It's still not even close though. An entire human runs on somewhere around 100W. Life is remarkably energy efficient.
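To put rough numbers on it, here is a minimal back-of-envelope sketch in Python. The 100 W whole-body figure comes from the comment above; the brain wattage, drafting time, and per-response LLM cost are loose assumptions of mine, since published LLM energy estimates vary by orders of magnitude with model and serving setup:

    # Back-of-envelope energy to produce one page (~500 words) of text.
    # All figures are rough assumptions for illustration, not measurements.
    HUMAN_POWER_W = 100.0      # whole-body power, per the comment above
    BRAIN_POWER_W = 20.0       # commonly cited figure for the brain alone
    HUMAN_MIN_PER_PAGE = 30.0  # assumed human drafting time, in minutes

    # Assumed LLM cost per long response; real numbers vary wildly
    # with model size, hardware, and batching.
    LLM_WH_PER_PAGE = 3.0

    human_wh = HUMAN_POWER_W * HUMAN_MIN_PER_PAGE / 60.0  # W * h -> Wh
    brain_wh = BRAIN_POWER_W * HUMAN_MIN_PER_PAGE / 60.0

    print(f"whole human: {human_wh:.0f} Wh/page")      # -> 50 Wh
    print(f"brain alone: {brain_wh:.0f} Wh/page")      # -> 10 Wh
    print(f"LLM (assumed): {LLM_WH_PER_PAGE:.0f} Wh/page")

On these made-up numbers the gap per page is about an order of magnitude, not thousands of times, which is part of why the 24/7-baseline argument above matters.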
I agree, I would be enraged by this. "Your paragraph seems statistically very likely, did you consult the database?" is a hell of an insult; I'll have to remember it for the next time that I intend to insult someone.
Good story. I hope it wasn't written by AI.
Give it a rest.
What's happening is that AI has become an identity-sorting mechanism faster than any technology in recent memory. Faster than social media, faster than smartphones. Within about two years, "what do you think about AI" became a tribal marker on par with political affiliation. And like political affiliation, the actual object-level question ("is this tool useful for this task") got completely swallowed by the identity question ("what kind of person uses/rejects this").
The blog author isn't really angry about the comment. He's angry because someone accidentally miscategorized him tribally. "Did you use AI?" heard through his filter means "you're one of them." Same reason vegans get mad when you assume they eat meat, or whatever. It's an identity boundary violation, not a practical dispute.
These comments aren't discussing the post. They're each doing a little ritual display of their own position in the sorting. "I miss real conversation" = I'm on the human side. The political rant = I'm on the progress side. The energy calculation = I'm on the rational-empiricist side.
The thing that's actually weird, the thing worth asking "what the fuck" about: this sorting happened before the technology matured enough for anyone to have a grounded opinion about its long-term effects. People picked teams based on vibes and aesthetics, and now they're backfilling justifications. Which means the discourse is almost completely decoupled from what the technology actually does or will do.
I appreciate and agree with your comment. The reasonable answer to "did you use AI" would be just "no". In the context of the story, the other person's intent is comparable to "did you run spell check?"
My personal nit/pet peeve: you're far more likely to meet a meat-eater who gets offended by the insinuation that they're a vegan. I have met exactly one "militant vegan" in real life, compared to dozens who go out of their way to avoid inconveniencing others. I'm talking about people who bring their own food to a party rather than asking for a vegan option.
In the 21st century, the militant vegan is more common as a hack comedian trope than a real phenomenon.
> The blog author isn't really angry about the comment. He's angry because someone accidentally miscategorized him tribally.
I'm not so sure about that. I'm in a similar boat to the author and, I can tell you, it would be really insulting to have someone accuse me of using AI to write something. It's not because of any in-group / culture war nonsense, it's purely because:
a) I wouldn't—currently—resort to that behaviour, and I'd like to think people who know me recognise that
b) To have my work mistaken for the product of AI would be like being accused of not really being human—that's pretty insulting
> Same reason vegans get mad when you assume they eat meat, or whatever
This so isn't important, but I don't know any vegan who would get mad if you assumed in passing that they ate meat. They'd only get annoyed if you then argued with them about it after they said something, like basically all humans do if you deliberately ignore what they've said to you.