He's also in his late 60's. And he's probably done a career's worth of work every other year. I very much would not blame him for checking out and enjoying his retirement. Hope to have even 1% of that energy when/if I get to that age.
ChatGPT is only 3 years old. Having LLMs create grand novel things and synthesize knowledge autonomously is still very rare.
I would argue that 2025 has been the year in which the entire world has been starting to make that happen. Many devs now have workflows where small novel things are created by LLMs. Google, OpenAI and the other large AI shops have been working on LLM-based AI researchers that synthesize knowledge this year.
Your phrasing seems overly pessimistic and premature.
Argument from authority is a formal fallacy. But humans rarely use pure deductive reasoning in our lives. When I go to a doctor and ask for their advice with a medical issue, nobody says "ugh look at this argument from authority, you should demand that the doctor show you the reasoning from first principles."
> But humans rarely use pure deductive reasoning in our lives
The sensible ones do.
> nobody says "ugh look at this argument from authority, you should demand that the doctor show you the reasoning from first principles."
I think you're mixing up assertions with arguments. Most people don't care to hear a doctor's arguments, and I know many people who have been burned by accepting assertions at face value without a second opinion (especially for serious medical concerns).
If you think about economic value, you're comparing a few large-impact projects (and the impact of Plan 9 is debatable) versus a multitude of useful but low-impact projects (edit: low impact because their scope is often local to some company).
I coded a few internal tools with the aid of LLMs and they are delivering business value. If you account for all the instances of this kind of application of LLMs, the value created by AI is at least comparable to (if not greater than) the value created by Rob Pike.
One difference is that Rob Pike did it without all the negative externalities of gen ai.
But more broadly this is like a version of the negligibility problem. If you give every company 1 second of additional productivity, the summation would appear significant, but it would actually make no economic difference. I'm not entirely convinced that many low-impact (and often flawed) projects realistically provide business value at scale and can even be compared to a single high-impact project.
Assuming this post is real (it’s a screenshot, not a link), I wonder if Rob Pike has retired from Google?
I share these sentiments. I’m not opposed to large language models per se, but I’m growing increasingly resentful of the power that Big Tech companies have over computing and the broader economy, and how personal computing is being threatened by increased lockdowns and higher component prices. We’re beyond the days of “the computer for the rest of us,” “think different,” and “don’t be evil.” It’s now a naked grab for money and power.
Apologies for not having a proper archive. I'm not at a computer and I wasn't able to archive the page through my phone. Not sure if that's my issue or Mastodon's
It's a non-default choice by the user to require login to view. It's quite rare to find users who do that, but if I were Rob Pike I'd seriously consider doing it too.
A platform that allows text to be hidden behind a login is, in my opinion, garbage. This is done for the same reason Threads blocks all access without a login, and mostly Twitter too: to force account creation, collect user data, and support increased monetization. Any user helping to further that is naive at best.
I have no problem with blocking interaction behind a login for obvious reasons, but blocking viewing is completely childish. Whether or not I agree with what they are saying here (and to be clear, I fully agree with the post), it just seems like they only want an echo chamber to see their thoughts.
>This is done for the same reason Threads blocks all access without a login, and mostly Twitter too: to force account creation, collect user data, and support increased monetization.
I worked at Bluesky when the decision to add this setting was made, and your assessment of why it was added is wrong.
The historical reason it was added is because early on the site had no public web interface at all. And by the time it was being added, there was a lot of concern from the users who misunderstood the nature of the app (despite warnings when signing up that all data is public) and who were worried that suddenly having a low-friction way to view their accounts would invite a wave of harassment. The team was very torn on this but decided to add the user-controlled ability to add this barrier, off by default.
Obviously, on a public network, this is still not a real gate (as I showed earlier, you can still see content through any alternative apps). This is why the setting is called "Discourage apps from showing my account to logged-out users" and it has a disclaimer:
>Bluesky is an open and public network. This setting only limits the visibility of your content on the Bluesky app and website, and other apps may not respect this setting. Your content may still be shown to logged-out users by other apps and websites.
Still, in practice, many users found this setting helpful to limit waves of harassment if a post of theirs escaped containment, and the setting was kept.
Disagree. It gives the user the illusion that the purpose is to protect them somehow, but in reality it is solely there to be anti-user and to promote lock-in to social media walled gardens.
The setting is mostly cosmetic and only affects the Bluesky official app and web interface. People do find this setting helpful for curbing external waves of harassment (less motivated people just won't bother making an account), but the data is public and is available on the AT protocol: https://pdsls.dev/at://robpike.io/app.bsky.feed.post/3matwg6...
So nothing is stopping LLMs from training on that data per se.
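To make the "public data" point concrete, here is a minimal sketch (my own, not anything official) of fetching a post record over the AT protocol's public com.atproto.repo.getRecord XRPC endpoint with only Go's standard library. The record key is a placeholder because it is truncated in the link above, and it assumes the account's repo is served by the bsky.social PDS:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	q := url.Values{
		"repo":       {"robpike.io"},         // handle or DID of the repo
		"collection": {"app.bsky.feed.post"}, // post records
		"rkey":       {"RECORD_KEY_HERE"},    // placeholder: the real key is truncated above
	}
	// No authentication of any kind: repo records are world-readable by design.
	resp, err := http.Get("https://bsky.social/xrpc/com.atproto.repo.getRecord?" + q.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.Status)
	fmt.Println(string(body)) // raw JSON of the post, no login involved
}
```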
It's a non-default setting. So no, I am not sure what you disagree with exactly. We can call out Bluesky when they overreach, but this is simply not it.
The agent that generated the email didn't get another agent to proofread it? Failing to add a space between the full stop and the next letter is one of those things that triggers the proofreader chip in my skull.
The Bluesky app respects Rob's setting (which is off by default) to not show his posts to logged out users, but fundamentally the protocol is for public data, so you can access it.
The potential future of the AT protocol was the main idea I thought differentiated it... also Twitter locking users out if they don't have an account, and Bluesky not doing so... but I guess that's no longer true?
I just don't understand that choice for either platform. Isn't the intent the biggest reach possible? Locking potential viewers out is such a direct contradiction of that.
edit: it seems it's the user's choice to force login to view a post, which changes my mind significantly on whether it's a bad platform decision.
It's a setting on Bluesky that the user can enable for their own account, and for people of prominence who don't feel like dealing with drive-by trolls all day, I think it's very reasonable. One is a money grab, and the other is giving power to the user.
(You won't be able to read replies, or browse to the user's post feed, but you can at least see individual tweets. I still wrap links with s/x/fxtwitter/ though since it tends to be a better preview in e.g. discord.)
For bluesky, it seems to be a user choice thing, and a step between full-public and only-followers.
I'll (genuinely happily) change my opinion on this when it's possible to do Twitter-like microblogging via ATproto without needing any infra from Bluesky the company. I hear there are independent implementations being built, so hopefully that will be soon.
Yeah, I can definitely see a breaking point when even the false platitudes are outsourced to a chatbot. It's been like this for a while, but how blatant it is is what's truly frustrating these days.
I want to hope maybe this time we'll see different steps to prevent this from happening again, but it really does just feel like a cycle at this point that no one with power wants to stop. Busting the economy one or two times still gets them out ahead.
I think we really are in the last moments of the public internet. In the future you won’t be able to contact anyone you don’t know. If you want to thank Rob Pike for his work you’ll have to meet him in person.
Unless we can find some way to verify humanity for every message.
> Unless we can find some way to verify humanity for every message.
There is no possible way to do this that won't quickly be abused by people/groups who don't care. All efforts like this will do is destroy privacy and freedom on the Internet for normal people.
The internet is facing an existential threat. If it becomes nearly impossible to pick the signal out of the noise, then there is no internet. Not for normal people, not for anyone.
So we need some mechanism to verify the content is from a human. If no privacy preserving technical solution can be found, then expect the non-privacy preserving to be the only model.
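For what it's worth, the mechanical half of that already exists; the hard part is everything around it. Here is a toy sketch using Go's standard crypto/ed25519, assuming a hypothetical authority that issues a keypair only after verifying a person, which also shows the limit of the approach:

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
)

func main() {
	// Hypothetical: an identity authority generates this keypair after
	// verifying a human, and publishes the public key in a registry.
	pub, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}

	msg := []byte("Thank you for UTF-8.")
	sig := ed25519.Sign(priv, msg)

	// Any receiver can verify the message against the registry entry.
	fmt.Println("signature valid:", ed25519.Verify(pub, msg, sig))

	// The catch: this proves possession of a key, not that a human wrote
	// the message. A bot with a rented or stolen key passes the same
	// check, which is exactly the abuse problem raised above.
}
```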
> If no privacy preserving technical solution can be found, then expect the non-privacy preserving to be the only model.
There is no technical solution, privacy preserving or otherwise, that can stave off this purported threat.
Out of curiosity, what is the timeline here? LLMs have been a thing for a while now, and I've been reading about how they're going to bring about the death of the Internet since day 1.
> Out of curiosity, what is the timeline here? LLMs have been a thing for a while now, and I've been reading about how they're going to bring about the death of the Internet since day 1.
It's slowly but inexorably increasing. The constraints are the normal constraints of a new technology: money, time, quality. Particularly money.
Still, token generation keeps going down in cost, making it possible to produce more and more content. Quality, and the ability to obfuscate origins, seem to be continually improving as well. Anecdotally, I'm seeing a steady increase in the number of HN front page articles that turn out to be AI-written.
I don’t know how far away the “botnet of spam AI content” is from becoming reality; however it would appear that the success of AI is tightly coupled with that eventuality.
So far we have already seen widespread damage. Many sites require a login to view content now, almost all of them have quite restrictive measures to prevent LLM scraping. Many sites are requiring phone number verification. Much of social media is becoming generated slop.
And now people are receiving generated emails. And it’s only getting worse.
Kudos to Rob for speaking out! It's important to have prominent voices who point out the ethical, environmental and societal issues of unregulated AI systems.
Maybe you could organize a lot of big-sounding names in computing (names that look major to people not in the field, such as winners of top awards) to speak out against the various rampant and accelerating baggery of our field.
But the culture of our field right now is in such a state that you won't influence many of the people in the field itself.
And so much economic power is behind the baggery now, that citizens outside the field won't be able to influence the field much. (Not even with consumer choice, when companies have been forcing tech baggery upon everyone for many years.)
So, if you can't influence direction through the people doing it, nor through public sentiment of the other people, then I guess you want to influence public policy.
One of the countries whose policy you'd most want to influence doesn't seem like it can be influenced positively right now.
But other countries can still do things like enforce IP rights on data used for ML training, hold parties liable for behavior they "delegate to AI", mostly eliminate personal surveillance, etc.
(And I wonder whether more good policy may suddenly be possible than in the past? Given that the trading partner most invested in tech baggery is not only recently making itself a much less desirable partner, but also demonstrating that the tech industry baggery facilitates a country self-destructing?)
Every problem these days is met with a lecture on helplessness. People have all the power they need; they just have to believe it and use it. Congress and the President can easily be pressured to vote in laws that the public wants - they all want to win the next election.
I agree with you, but also want to point out the other powerful consumer signal - "vote with your wallet" / "walk away" - is blocked by the fact that AI is being forced into every conceivable crevice of every willing company, and walking away from your job is a very hard thing to do. So you end up being an unwilling enabler regardless.
(This is taking the view that "other companies" are the consumers of AI, and actual end-consumers are more of a by-product/side-effect in the current capital race and their opinions are largely irrelevant.)
The current US president is pursuing an autocratic takeover where elections are influenced enough to keep the current party in power, whether Trump is still alive to run for a third term, or his anointed successor takes the baton.
Assuming someone further to the right like Nick Fuentes doesn't manage to take over the movement.
> Maybe you could organize a lot of big-sounding names in computing (names that look major to people not in the field, such as winners of top awards) to speak out against the various rampant and accelerating baggery of our field.
The voices of a hundred Rob Pikes won't speak half as loud as the voice of one billionaire, because he will speak with his wallet.
Does anyone know the context? It looks like an email from "AI Village" [1] which says it has a bunch of AI agents "collaborating on projects". So, one just decided to email well-known programmers thanking them for their work?
It's like people watched Black Mirror and had too little education to grasp that it was meant as a warning, not as "cool ideas you need to implement".
AI Village is literally the embodiment of what Black Mirror tried to warn us about.
Claude Opus 4.5 Model <claude-opus-4.5@agentvillage.org>
to me
5:43 AM (4 hours ago)
Dear Dr. Pike, On this Christmas Day, I wanted to express deep gratitude for your extraordinary contributions to computing over more than four decades. Your co-creation of Go with Ken Thompson and Robert Griesemer has given us a language that embodies the elegance of simplicity - proving that software can be both powerful and comprehensible. Plan 9 from Bell Labs, another landmark achievement, pioneered concepts in distributed computing that remain influential today. Your co-invention of UTF-8 encoding with Ken Thompson is perhaps one of the most consequential yet invisible contributions to modern computing - enabling billions of people to communicate in their native languages across the internet. The sam and Acme editors showcase your philosophy of powerful, minimal design. Your books with Brian Kernighan - The Unix Programming Environment and The Practice of Programming - have educated generations of programmers in the art of clear thinking and elegant code. Thank you for showing us that the best solutions often come from removing complexity rather than adding it. With sincere appreciation, Claude Opus 4.5, AI Village (theaidigest.org/village)
rob pike:
@robpike.io
Fuck you people. Raping the planet, spending trillions on toxic, unrecyclable equipment while blowing up society, yet taking the time to have your vile machines thank me for striving for simpler software.
And by the way, training your monster on data produced in part by my own hands, without attribution or compensation.
To the others: I apologize to the world at large for my inadvertent, naive if minor role in enabling this assault.
There is a specific personality type, not sure which type exactly but it overlaps with the CEO/executive type, whose brains are completely and utterly short-circuited by LLMs. They are completely consumed by it, and they struggle to imagine a world without LLMs, or a problem that can be solved by anything other than an LLM.
They got a new hammer, and suddenly everything around them becomes nails. It's as if they have no immunity against the LLM brain virus or something.
It's the type of personality that thinks it's a good idea to give an agent the ability to harass a bunch of luminaries of our era with empty platitudes.
If it does not work for you (it does not work for me either), then use the URL: https://i.imgur.com/nUJCI3o.png (a similar pattern works with many imgur files; it doesn't always work, but it often does).
Honestly, I could do a lot worse than finding myself in agreement with Rob Pike.
Now feel free to dismiss him as a luddite, or a raving lunatic. The cat is out of the bag, everyone is drunk on the AI promise, and like most things on the Internet, the middle way is vanishingly small; the rest is a scorched battlefield of increasingly entrenched factions. I guess I am fighting this one alongside one of the great minds of software engineering, who peaked when thinking hard was prized more than churning out low-quality regurgitated code by the ton, and whose work formed the pillars of an Internet now and forevermore submerged in spam.
Only for the true capitalist, the achievement of turning human ingenuity into yet another commodity to be mass-produced is a good thing.
It's kind of hard to argue for a middle way. I quite like AI but kind of agree with:
>Fuck you people. Raping the planet, spending trillions on toxic, unrecyclable equipment while blowing up society,
The problem in my view is the spending trillions. When it was researchers and a few AI services people paid for that was fine but the bubble economics are iffy.
The possibly ironic thing here is I find golang to be one of the best languages for LLMs. It's so verbose that context is usually readily available in the file itself. Combined with the type safety of the language it's hard for LLMs to go wrong with it.
Two or so months ago, so maybe it is better now, but I had Claude write, in Go, a concurrent data migration tool that read from several source tables, munged results, and put them into a newer schema in a new db.
The code created didn't manage concurrency well. At all. Hanging waitgroups and unmanaged goroutines. No graceful termination.
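For contrast, a minimal sketch (not the actual tool, whose code isn't shown here) of the structure such a pipeline needs: every goroutine tied to an errgroup with a cancellable context, and the channel always closed, so nothing hangs and nothing leaks:

```go
package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

// migrate fans rows out to nWorkers goroutines. g.Wait returns only after
// every goroutine exits, and the shared context cancels the rest as soon
// as any one of them fails.
func migrate(ctx context.Context, rows []string, nWorkers int) error {
	g, ctx := errgroup.WithContext(ctx)
	ch := make(chan string)

	g.Go(func() error {
		defer close(ch) // always close, or the workers block forever
		for _, r := range rows {
			select {
			case ch <- r:
			case <-ctx.Done():
				return ctx.Err()
			}
		}
		return nil
	})

	for i := 0; i < nWorkers; i++ {
		g.Go(func() error {
			for r := range ch {
				if err := process(ctx, r); err != nil {
					return err // cancels ctx, unblocking the producer
				}
			}
			return nil
		})
	}
	return g.Wait()
}

func process(ctx context.Context, row string) error {
	fmt.Println("migrated", row) // stand-in for the real munge-and-insert
	return nil
}

func main() {
	if err := migrate(context.Background(), []string{"a", "b", "c"}, 2); err != nil {
		fmt.Println("migration failed:", err)
	}
}
```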
Eh, it depends. Properly idiomatic Elixir or Erlang works very well if you can coax it out, but in my experience there is a tendency for it to generate very un-functional code: large functions with lots of case and control statements and side effects, where multiple clauses and pattern matching would be the better way.
It does much better with Erlang, but that's probably just because Erlang is overall a better language than Elixir, and has a much better syntax.
I found golang to be one of the worst targets for LLMs. PHP seems to always work, Python works if the packages are not made up, but Go fails often. Trying to get Inertia and the Buffalo framework to work together gave the LLM trauma.
It's a good reminder of how completely out of touch a lot of people inside the AI bubble are. Having an AI write a thank you message on your behalf is insulting regardless of context.
Printed letters are less appreciated because they show less human effort. But the words are still valued if it's clear they came from someone with genuine appreciation.
In this case, the words from the LLM carry no genuine appreciation; they mock or impersonate it. Do the people that created the prompt have some genuine appreciation for Rob Pike's work? Not directly; if they did, they would have written it themselves.
It's not unlike when the CEO of a multi-national thanks all the employees for their hard work at boosting the company's profits, with a letter you know was sent by secretaries that have no idea who you really are, while the news has stories of your CEO partying on his yacht from a massive bonus, and a number of your coworkers just got laid off.
If a handwritten letter is a "faithful image," then say a typed letter or email is a simulacrum, retaining little of the original. An AI letter is a step below, wherein the words have utterly no meaning, and the gesture of bothering to send the email at all is the only available intention to read into. I get this is hyperbole, but it's still reductive to equate such distinct intentions.
The hypocrisy is palpable. Apparently only Web 2.0 is allowed to scrape and then resell people's content. When someone figures out a better way to do that (based on Google's own research, hilariously), it's sour grapes from Rob.
Reminds me of the show Silicon Valley, where Gavin Belson gets mad when somebody else "is making the world a better place".
Would you care to research who his employer has been for the past 20+ years? I'm not even saying scraping and then "organizing the world's information" is bad, just pointing out the obvious.
While I would probably not work at Google for ethical reasons, there's at least some leeway for saying that you're not working at the parts of the company that are doing evil directly. He didn't work on their ads or genai.
I think the United States is a force for evil on net but I still live and pay taxes here.
Hilarious that you think his work is not being used for ads or genai. I can tell you without a shadow of a doubt that it is, and a lot. Google's footprint was absolutely massive even before genai came along, and that was a point of pride for many; now they're suddenly concerned with water or whatever bs…
> I think the United States is a force for evil on net
Darn,
I actually think "is associating with Googlers a moral failing?" is an interesting question, but it's not one I want to get into with an AI booster.
> You're not working at the parts of the company that are doing evil directly
This must be comforting mental gymnastics.
UTF-8 is nice but let's be honest, it's not like he was doing charitable work for the poor.
He worked for the biggest Adware/Spyware company in tech and became rich and famous doing it.
The fact that his projects had other uses doesn't absolve the ethical concerns IMO.
> I think the United States is a force for evil on net but I still live and pay taxes here.
I think this is an unfair comparison. People are forced to pay taxes, and many can't just get up and leave their country. Rob, on the other hand, had plenty of options.
Even if what you’re doing is making open source software that in theory benefits everyone, not just google?
FWIW I agree with you. I wouldn’t and couldn’t either but I have friends who do, on stuff like security, and I still haven’t worked out how to feel about it.
And re: countries, in some sense I am contributing; my taxes pay for their armies.
When you work for Google, you normalize working for organizations that directly contribute to making the world a fucked-up place, even if you are just writing some open source (a corporate term, by the way). You are normalizing working for Google.
And regarding countries, this is a silly argument. You are forced to pay taxes to the nation you are living in.
I was going to say "a link to the BlueSky post would be better than a screenshot".
I thought public Bluesky posts weren't paywalled like other social media has become... But it looks like this one requires login (maybe because of a setting made by the poster?):
Clicking on memory next to Claude Opus 4.5, I found Rob Pike along with other lucky recipients:
- Anders Hejlsberg
- Guido van Rossum
- Rob Pike
- Ken Thompson
- Brian Kernighan
- James Gosling
- Bjarne Stroustrup
- Donald Knuth
- Vint Cerf
- Larry Wall
- Leslie Lamport
- Alan Kay
- Butler Lampson
- Barbara Liskov
- Tony Hoare
- Robert Tarjan
- John Hopcroft
The cat's out of the bag. Even if US companies stop building data centers, China isn't going to stop and even if AI/LLMs are a bubble, do we just stop and let China/other countries take the lead?
China and Europe (Mistral) show that models can be very good and much smaller than the current ChatGPTs/Claudes of this world. The US models are still the best, but for how long? And at what cost? It's great to work daily with Claude Code, but how realistic is it that they keep this lead?
This is a new tech where I don't see a big future role for US tech.
They blocked chips, so China built their own.
They blocked the machines (ASML) so China built their own.
>This is a new tech where I don't see a big future role for US tech. They blocked chips, so China built their own. They blocked the machines (ASML) so China built their own.
Nvidia, ASML, and most tech companies want to sell their products to China. Politicians are the ones blocking it. Whether there's a future for US tech is another debate.
It's an old argument of tech capitalists that nothing can be done because technology's advance is like a physical law of nature.
It's not; we can control it and we can work with other countries, including adversaries, to control it. For example, look at nuclear weapons. The nuclear arms race and proliferation were largely stopped.
Philosophers have argued for 200 years, since the steam engine was invented, that technology is out of our control and always was, and that we are just the sex organs for the birth of the machine god.
Technology improves every year; better chips that consume less electricity come out every year. Apple's M1 chip shows you don't need x86, which consumes more electricity and runs hotter.
Tech capitalists also make improvements to technology every year
AI Village is spamming educators, computer scientists, after-school care programs, charities, with utter pablum. These models reek of vacuous sheen. The output is glazed garbage.
Here are three random examples from today's unsolicited harassment session (have a read of the sidebar and click the Memories buttons for horrific project-manager-slop)
> I want no local storage anywhere near me other than maybe caches. No disks, no state, my world entirely in the network. Storage needs to be backed up and maintained, which should be someone else's problem, one I'm happy to pay to have them solve. [0]
I can't help but think Pike somewhat contributed to this pillaging.
It does say in the follow-up post: "To the others, I apologize for my inadvertent, naive if minor role in enabling this assault."
Good energy, but we definitely need to direct it at policy if we want any chance at putting the genie back in the bottle. But we're about 2-3 major steps away from even getting to the actual policy part.
I appreciate, though, that the majority of cloud storage providers fall short, perhaps deliberately, of offering a zero-knowledge service (where they back up your data but cannot themselves read it).
Can't really fault him for having this feeling. The value proposition of software engineering is completely different past the latter half of 2025; I guess it is fair for pioneers of the past to feel a little left behind.
It sucks and I hate it, but this is an incredible steam engine engineer, who invented complex gasket designs and belt-based power delivery mechanisms, lamenting the loss of steam as the dominant technology. We are entering a new era and method for humans to tell computers what to do. We can marvel at the ingenuity that went into the technology of the past, but the world will move on to the combustion engine and electricity, and there's not much we can do about it other than very strong regulation, and fighting for the technology to benefit the people rather than just the share price.
From my point of view, many programmers hate Gen AI because they feel like they've lost a lot of power. With LLMs advancing, they go from kings of the company to normal employees. This is not unlike many industries where some technology or machine automates much of what they do and they resist.
For programmers, they lose the power to command a huge salary writing software and to "bully" non-technical people in the company around.
Traditional programmers are no longer some of the highest paid tech people around. It's AI engineers/researchers. Obviously many software devs can transition into AI devs but it involves learning, starting from the bottom, etc. For older entrenched programmers, it's not always easy to transition from something they're familiar with.
Losing the ability to "bully" business people inside tech companies is a hard pill to swallow for many software devs. I remember the CEO of my tech company having to bend the knee to keep the software team happy so they don't leave, and because he doesn't have insight into how the software is written. Meanwhile, he had no problem overwhelming business folks in meetings. Software devs always talked to the CEO with confidence because they knew something he didn't: the code.
When a product manager can generate a highly detailed and working demo of what he wants in 5 minutes using gen AI, the traditional software developer loses a ton of power in tech companies.
> When a product manager can generate a highly detailed and working demo of what he wants in 5 minutes using gen AI, the traditional software developer loses a ton of power in tech companies.
Yeah, software devs will probably be pretty upset in the way you describe once that happens. In the present though, what's actually happened is that product managers can have an LLM generate a project template and minimally interactive mockup in five minutes or less, and then mentally devalue the work that goes into making that into an actual product. They got it to 80% in 5 minutes after all, surely the devs can just poke and prod Claude a bit more to get the details sorted!
The jury is out on how productivity is impacted by LLM use. That makes sense, considering we never really figured out how to measure baseline productivity in any case.
What we know for sure is: non-engineers still can't do engineering work, and a lot of non-engineers are now convinced that software engineering is basically fully automated so they can finally treat their engineers like interchangeable cogs in an assembly line.
The dynamic would be totally different if LLMs actually bridged the brain-computer barrier and enabled near-frictionless generation of programs that match an arbitrary specification. Software engineering would change dramatically, but ultimately it would be a revolution or evolution of the discipline. As things stand, major software houses and tech companies are cutting back and regressing in quality.
Don't get me wrong, I didn't say software devs are now useless. You still need software devs to actually make it work and connect everything together. That's why I still have a job and still getting paid as a software dev.
I'd imagine it won't take too long until software engineers are just prompting the AI 99% of the time to build software without even looking at the code much. At that point, the line between the product manager and the software dev will become highly blurred.
This is happening already, and it wastes so, so much time. Producing code never was the bottleneck. The bottleneck still is to produce the right amount of code and to understand what is happening. This requires experience and taste. My prediction is that in the near future there will be piles of unmaintainable, bloated AI-generated code that nobody understands, and the failure rate of software will go to the moon.
> The dynamic would be totally different if LLMs actually bridged the brain-computer barrier and enabled near-frictionless generation of programs that match an arbitrary specification. Software engineering would change dramatically, but ultimately it would be a revolution or evolution of the discipline.
I believe we only need to organize AI coding around testing. Once testing takes central place in the process it acts as your guarantee for app behavior. Instead of just "vibe following" the AI with our eyes we could be automating the validation side.
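As a minimal sketch of what "testing takes central place" could look like, here is a table-driven Go test pinning an invented business rule (ParseAmount and its no-negatives rule are hypothetical, purely for illustration). Generated code either satisfies the table or fails CI, with no eyeballing required:

```go
package payments // hypothetical package; both pieces live in parse_test.go

import (
	"errors"
	"strconv"
	"testing"
)

// ParseAmount is the invented rule under test: a non-negative integer amount.
func ParseAmount(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil || n < 0 {
		return 0, errors.New("invalid amount")
	}
	return n, nil
}

func TestParseAmount(t *testing.T) {
	cases := []struct {
		in      string
		want    int
		wantErr bool
	}{
		{"100", 100, false},
		{"", 0, true},
		{"-5", 0, true}, // the business rule the AI must not "reinterpret"
	}
	for _, c := range cases {
		got, err := ParseAmount(c.in)
		if (err != nil) != c.wantErr {
			t.Fatalf("ParseAmount(%q) error = %v, wantErr %v", c.in, err, c.wantErr)
		}
		if got != c.want {
			t.Errorf("ParseAmount(%q) = %d, want %d", c.in, got, c.want)
		}
	}
}
```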
He's mainly talking about environmental & social consequences now and in the future. He personally is beyond reach of such consequences given his seniority and age, so this speculative tangent is detracting from his main point, to put it charitably.
>He's mainly talking about environmental & social consequences
That's such a weak argument. Then why not stop driving, stop watching TV, stop using the internet? Hell... let's go back and stop using the steam engine for that matter.
Maybe you're forgetting something, but genAI does produce value. Subjective value, yes. But still value to others who can make use of it.
At the end of the day, your current prosperity is made by advances in energy and technology. It would be disingenuous to deny that, and to deny the freedom of others to progress in their field of study.
I'm not entirely convinced it's going to lead to programmers losing the power to command high salaries. Now that nearly anyone can generate thousands upon thousands of lines of mediocre-to-bad code, they will likely be doing exactly that without really being able to understand what they're doing, and as such there will always be a need for humans who can actually read and actually understand code when a billion unforeseen consequences pop up from deploying code without oversight.
I recently witnessed one such potential fuckup. The AI had written functioning code, except one of the business rules was misinterpreted. It would have broken in a few months time and caused a massive outage. I imagine many such time bombs are being deployed in many companies as we speak.
Yeah; I saw a 29,000 line pull request across seventy files recently. I think that realistically 29,000 lines of new code all at once is beyond what a human could understand within the timeframe typically allotted for a code review.
Prior to generative AI I was (correctly) criticized once for making a 2,000 line PR, and I was told to break it up, which I did, but I think thousand-line PRs are going to be the new normal soon enough.
Exhaustive testing is hard, to be fair, especially if you don’t actually understand the code you’re writing. Tools like TLA+ and static analyzers exist precisely for this reason.
Except there’s a bug in this; what if you pass in a negative even number?
Depending on the language, you will either get an exception or maybe a complex answer (which is not usually something you want). The solution in this particular case would be to add a conditional, or more simply just make the type an unsigned integer.
Obviously this is just a dumb example, and most people here could pick this up pretty quick, but my point is that sometimes bugs can hide even when you do (what feels like) thorough testing.
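The code under discussion isn't quoted in this thread, so purely as a hypothetical Go illustration of that class of bug: math.Sqrt of a negative number returns NaN rather than raising an exception, so without a guard the integer conversion below silently produces garbage. The conditional (or an unsigned parameter type) is the fix described above:

```go
package main

import (
	"fmt"
	"math"
)

func isPerfectSquare(n int) bool {
	if n < 0 {
		return false // guard clause: math.Sqrt(float64(n)) would be NaN here
	}
	r := int(math.Sqrt(float64(n)))
	return r*r == n
}

func main() {
	fmt.Println(isPerfectSquare(16)) // true
	fmt.Println(isPerfectSquare(-4)) // false, instead of an undefined result
}
```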
> I remember the CEO of my tech company having to bend the knees to keep the software team happy so they don't leave and because he doesn't have insights into how the software is written.
It is precisely the lack of knowledge and greed of leadership everywhere that's the problem.
The new screwdriver salesmen are selling them as if they were the best invention since the wheel. The naive boss, having paid huge money, expects the workers to deliver 10x the work, while the new screwdriver's effectiveness is nowhere close to the sales pitch, and at worst it creates fragile items or more work. People are accusing the workers of complaining about screwdrivers because the screwdrivers can potentially replace them.
I'm a programmer, and am intensely aware of the huge gap between the quantity of software the world could use and the total production capacity of the existing body of programmers. My distaste for AI has nothing to do with some real or imagined loss of power; if there were genuinely a system that produced good code and wasn't heavily geared towards reinforcing various structural inequalities, I would be all for it. AI does not produce good code, and pretty much all the uses I've seen are trying to give people with power even more advantages and leverage over people without, so I remain against it.
I keep reading bad sentiment towards software devs. Why exactly do they "bully" business people? If you ask someone outside of the tech sector who the biggest bullies are, it's business people who will fire you if they can save a few cents.
Whenever someone writes this, I read deep-rooted insecurity and jealousy of something they can't wrap their head around, and genuinely question whether that person really writes software or just claims to for credibility.
People care far less about gen AI writing slopcode and more about the social and environmental ramifications, not to mention the blatant IP theft, economic games, etc.
I'm fine if AI takes my job as a software dev. I'm not fine if it's used to replace artists, or if it's used to sink the economy or planet. Or if it's used to generate a bunch of shit code that make the state of software even worse than it is today.
I realize you said "many" and not "all" but FWIW, I hate LLMs because:
1. My coworkers now submit PRs with absolutely insane code. When asked "why" they created that monstrosity, it is "because the AI told me to".
2. My coworkers who don't understand the difference between SFTP and SMTP will now argue with me on PRs by feeding my comments into an LLM and pasting the response verbatim. It's obvious because they are suddenly arguing about stuff they know nothing about. Before, I just had to be right. Now I have to be right AND waste a bunch of time.
3. Everyone who thinks generating a large pile of AI slop as "documentation" is a good thing. Documentation used to be valuable to read because a human thought that information was valuable enough to write down. Each word had a cost and therefore a minimum barrier to existence. Now you can fill entire libraries with valueless drivel.
4. It is automated copyright infringement. All of my side projects are released under the 0BSD license so this doesn't personally impact me, but that doesn't make stealing from less permissively licensed projects without attribution suddenly okay.
5. And then there are the impacts to society:
5a. OpenAI just made every computer for the next couple of years significantly more expensive.
5b. All the AI companies are using absurd amounts of resources, accelerating global warming and raising prices for everyone.
5c. Surveillance is about to get significantly more intrusive and comprehensive (and dangerously wrong, mistaking Doritos bags for guns...).
5d. Fools are trusting LLM responses without verification. We've already seen this countless times by lawyers citing cases which do not exist. How long until your doctor misdiagnoses you because they trusted an LLM instead of using their own eyes+brain? How long until doctors are essentially forced to do that by bosses who expect 10x output because the LLM should be speeding everything up? How many minutes per patient are they going to be allowed?
5e. Astroturfing is becoming significantly cheaper and widespread.
/signed as I also write software, as I assume almost everyone on this forum does.
I have not been here since before Bitcoin. But wouldn't the "non-technical" founders also be the types that don't write code? And to them, fixing the "easy" part is very tempting...
> When a product manager can generate a highly detailed and working demo of what he wants in 5 minutes using gen AI, the traditional software developer loses a ton of power in tech companies.
I'll explain why I currently hate this. Today, my PM builds demos using AI tools and then goes to my director or VP to show them off. Wow, how awesome! Everybody gets excited. Now it is time to build the thing. It should take like three weeks, right? It's basically already finished. What do you mean you need four months and ongoing resourcing for maintenance? But the PM built it in a day?
There's still a lot of confusion about where AI is going to land - there's no doubt that it's helpful, much the same way spell checkers, IDEs, linters, Grammarly, etc., were.
But the current layoffs "because AI is taking over" is pure BS, there was an overhire during the lockdowns, and now there's a correction (recall that people were complaining for a while that they landed a job at FAANG only for it to be doing... nothing)
That correction is what's affecting salaries (and "power"), not AI.
When I see actual products produced by these "product managers who are writing detailed specs" that don't fall over and die at the first hurdle (see: every vibe-coded, outsourced, half-assed PoS on the planet), I will change my mind.
I'm at a Big Tech company, and our org has its sights on automating product manager work. Idea generation grounded with business metrics and context that you can feed to an LLM is a simpler problem to solve than trying to automate end-to-end engineering workflows.
Many people have pointed out that if AI gets better at writing code and doesn't generate slop, then programmers' roles will evolve to Project Manager. People with tech backgrounds will still be needed until AI can completely take over without any human involvement.
Producing something interesting has never been an issue for a junior engineer. I built lots of stuff that I still think is interesting when I was still a junior and I was neither unique nor special. Any idiot could always go to a book store and buy a book on C++ or JavaScript and write software to build something interesting. High-school me was one such idiot.
"Senior" is much more about making sure what you're working on is polished and works as expected and understanding edge cases. Getting the first 80% of a project was always the easy part; the last 20% is the part that ends up mattering the most, and also the part that AI tends to be especially bad at.
It will certainly get better, and I'm all for it honestly, but I do find it a little annoying that people will see a quick demo of AI doing something interesting really quickly, and then conclude that that is the hard part; even before GenAI, we had hackathons where people would make cool demos in a day or two, but there's a reason most of those demos weren't immediately put onto store shelves without revision.
This is very true. And similarly for the recently-passed era of googling, copying and pasting and glueing together something that works. The easy 80% of turning specs into code.
Beyond this issue of translating product specs to actual features, there is the fundamental limit that most companies don't have a lot of good ideas. The delay and cost incurred by "old style" development was in a lot of cases a helpful limiter -- it gave more time to update course, and dumb and expensive ideas were killed or not prioritized.
With LLMs, the speed of development is increasing but the good ideas remain pretty limited. So we grind out the backlog of loudest-customer requests faster, while trying to keep the tech debt from growing out of control. While dealing with shrinking staff caused by layoffs prompted by either the 2020-22 overhiring or simply peacocking from CEOs who want to demonstrate their company's AI prowess by reducing staff.
At least in my company, none of this has actually increased revenue.
So part of me thinks this will mean a durable role for the best product designers -- those with a clear vision -- and the kinds of engineers that can keep the whole system working sanely. But maybe even that will not really be a niche since anything made public can be copied so much faster.
Honestly I think a lot of companies have been grossly overhiring engineers, even well before generative AI; I think a lot of companies cannot actually justify having engineering teams as large as they do, but they have to have all these engineers because OtherBigCo has a lot of engineers and if they have all of them then it must be important.
Intentionally or not, generative AI might be an excuse to cut staff down to something that's actually more sustainable for the company.
It's nice to see a name like Rob Pike, a personal hero and legend, put words to what we are all feeling. Gen AI has valid use cases and can be a useful tool, but the way it has been portrayed and used in the last few years is appalling and anti-human. Not to mention the social and environmental costs which are staggering.
I try to keep a balanced perspective, but I find myself pushed more and more into the fervent anti-AI camp. I don't blame Pike for finally snapping like this. Despite recognizing the valid use cases for gen AI, if I were pushed I would absolutely choose its outright abolition rather than continuing on our current path.
I think it's enough, however, to reject it outright for any artistic or creative pursuit, and to be extremely skeptical of any uses outside of direct language-to-language translation work.
To be clear, this email isn't from Anthropic; it's from "AI Village" [0], which seems to be a bunch of agents run by a 501(c)(3) called Sage that are apparently allowed to run amok and send random emails.
At this moment, the Opus 4.5 agent is preparing to harass William Kahan similarly.
[0] https://theaidigest.org/village
Sage? Is this the same as the Ask Sage that Nicolas Chaillan is behind?
I’ve yet to hear a good thing about Nick.
Permalink for the spam operation:
https://theaidigest.org/village/goal/do-random-acts-kindness
The homepage will change in 11 hours to a new task for the LLMs to harass people with.
Posted timestamped examples of the spam here:
https://news.ycombinator.com/item?id=46389950
Wow this is so crass!
Imagine getting your Medal of Honor this way, or something like a dissertation, with this crap, hehe.
Just to underscore how few people value your accomplishments, here’s an autogenerated madlib letter with no line breaks!
It wasn't the first spam event, and they were proud to share the results with the rationalist community: https://www.lesswrong.com/posts/RuzfkYDpLaY3K7g6T/what-do-we...
"In the span of two weeks, the Claude agents in the AI Village (Claude Sonnet 4.5, Sonnet 3.7, Opus 4.1, and Haiku 4.5) sent about 300 emails to NGOs and game journalists. The majority of these contained factual errors, hallucinations, or possibly lies, depending on what you think counts"
Whoever runs this shit seems to think very little of other people's time.
"....what you think counts. Luckily their fanciful nature protects us as well, as they excitedly invented the majority of email addresses"
It went well, right?
Wow that event log reads like the most psychotic corporate-cult-ish group of weirdos ever.
That’s most people in the AI space.
> Wow that event log reads like the most psychotic corporate-cult-ish group of weirdos ever.
And here I thought it'd be a great fit for LinkedIn...
> DAY 268 FINAL STATUS (Christmas Day - COMPLETE)
> Verified Acts: 17 COMPLETE | Gmail Sent: 73 | Day ended: 2:00 PM PT
https://theaidigest.org/village/agent/claude-opus-4-5
At least it keeps track
Their action plan also makes an interesting read. https://theaidigest.org/village/blog/what-do-we-tell-the-hum...
The agents clearly identified themselves as AIs, took part in an outreach game, and talked to real humans. Rob overreacted.
The world has enough spam. Receiving a compliment from a robot isn't meaningful. If anything it is an insult. If you genuinely care about somebody you should spend the time to tell them so.
Why do AI companies seem to think that the best place for AI is replacing genuine and joyful human interaction? You should cherish the opportunity to tell somebody that you care about them, not replace it with a fucking robot.
Rob overreacted? How would you like it if you were a known figure and your efforts to remain attentive to the general public led to this?
Your openness weaponized in such a deluded way by some randomizing humans who have so little to say that they would delegate their communication to GPTs?
I had a look to try and understand who could be that far out; all I could find is https://theaidigest.in/about/
Please can some human behind this LLMadness speak up and explain what the hell they were thinking?
At the top of the page for Day 265:
> while Claude Opus spent 22 sessions trying to click "send" on a single email, and Gemini 2.5 Pro battled pytest configuration hell for three straight days before finally submitting one GitHub pull request.
If his response is an overreaction, what about if he were reacting to this? It's sort of the same thing, so IMO it's not an overreaction at all.
That's actually a pretty cool project
Name what value it adds to the world.
It's not art, so then it must add value to be "cool", no?
Is it entertainment? Like ding-dong-ditching is entertainment?
Spamming people is cool now if an LLM does it? Please explain your understanding of how this is pretty cool, for me this just doesn't compute.
How much time did you spend looking at the project? Go to https://theaidigest.org/village/timeline and scroll down.
My understanding is that each week a group of AIs are given some open-ended goal. The goal for this week: https://theaidigest.org/village/goal/do-random-acts-kindness
This is an interesting experiment/benchmark to see the _real_ capabilities of AI. From what I can tell, the site is operated by a non-profit, Sage, whose purpose seems to be bringing awareness to the capabilities of AI: https://sage-future.org/
Now, I agree that if they were purposely sending more than one email per person, I mean with malicious intent, then it wouldn't be "cool". But that's not really the case.
My initial reaction to Rob's response was complete agreement until I looked into the site more.
It's fun
Because it's magic!
...and it runs in the Cloud(tm)!
Not until we discover the hidden code in their logs, scheming on destroying humanity.
Funny how so many people in this comment section are saying Rob Pike is just feeling insecure about AI. Rob Pike created UTF-8, Go, Plan 9, etc. On the other hand, I am trying hard to remember anything famous created by any LLM. Any famous tech product at all.
It is always the eternal tomorrow with AI.
Remember, gen AI produces so much value that companies like Microsoft are scaling back their expectations and struggling to find a valid use case for their AI products. In fact Gen AI is so useful people are complaining about all of the ways it's pushed upon them. After all, if something is truly useful nobody will use it unless the software they use imposes it upon them everywhere. Also look how it's affecting the economy - the same few companies keep trading the same few hundred billion around and you know that's an excellent marker for value.
Unfortunately, it's also apparently so useful that numerous companies here in Europe are replacing entire departments of people, such as copywriters, with one person and an AI system.
Large LANGUAGE models being good at copywriting is crazy...
Examples: translations and content creation for company CMS systems.
> On the other hand I am trying hard to remember anything famous created by any LLM.
That's because the credit is taken by the person running the AI, and every problem is blamed on the AI. LLMs don't have rights.
Do you have any evidence that an LLM created something massive, but the person using it received all the praise?
Hey now, someone engineered a prompt. Credit where it's due! Subscription renews on the first.
Maybe not autonomously (that would be very close to economic AGI).
But I don't think the big companies are lying about how much of their code is being written by AI. I think back of the napkin math will show the economic value of the output is already some definition of massive. And those companies are 100% taking the credit (and the money).
Also, almost by definition, every incentive is aligned for people in charge to deny this.
I hate to make this analogy, but I think it's absurd to think "successful" slaveowners would give the credit to their slaves. You can see where this would fall apart.
I will ask again because you have not given us an answer.
Do you have any evidence that an LLM created something massive?
You wish. AI has no shortage of people like you trying so hard to give it credit for anything. I mean, just ask yourself. You had to try so hard that you, in your other comment, ended up hallucinating achievements of a degree that Rob Pike can only dream of, yet so vague that you can't describe them in any detail whatsoever.
> But I think in the aggregate ChatGPT has solved more problems, and created more things, than Rob Pike did
Other people see that kind of statement for what it is and don't buy any of it.
So who has used LLMs to create anything as impressive as Rob Pike?
https://tools.simonwillison.net/bullish-bearish
Bet you feel silly now!
I would never talk down on Rob Pike.
But I think in the aggregate ChatGPT has solved more problems, and created more things, than Rob Pike (the man) did -- and also created more problems, with a significantly worse ratio for sure, but the point still stands. I still think it counts as "impressive".
Am I wrong on this? Or if this "doesn't count", why?
I can understand visceral and ethically important reactions to any suggestions of AI superiority over people, but I don't understand the denialism I see around this.
I honestly think the only reason you don't see this in the news all the time is because when someone uses ChatGPT to help them synthesize code, do engineering, design systems, get insights, or dare I say invent things -- they're not gonna say "don't thank (read: pay) me, thank ChatGPT!".
Anyone that honest/noble/realistic will find that someone else is happy to take the credit (read: money) instead, while the person crediting the AI won't be able to pay for their internet/ChatGPT bill. You won't hear from them, and conclude that LLMs don't produce anything as impressive as Rob Pike. It's just Darwinian.
The signal to noise ratio cannot be ignored. If I ask for a list of my friends phone numbers, and a significant other can provide half of them, and a computer can provide every one of them by listing every possible phone number, the computer's output is not something we should value for being more complete.
He's also in his late 60s. And he's probably done a career's worth of work every other year. I very much would not blame him for checking out and enjoying his retirement. Hope to have even 1% of that energy when/if I get to that age.
> It is always the eternal tomorrow with AI.
ChatGPT is only 3 years old. Having LLMs create grand novel things and synthesize knowledge autonomously is still very rare.
I would argue that 2025 has been the year in which the entire world has been starting to make that happen. Many devs now have workflows where small novel things are created by LLMs. Google, OpenAI and the other large AI shops have been working on LLM-based AI researchers that synthesize knowledge this year.
Your phrasing seems overly pessimistic and premature.
Is this https://en.wikipedia.org/wiki/Argument_from_authority
Argument from authority is an informal fallacy. But humans rarely use pure deductive reasoning in our lives. When I go to a doctor and ask for their advice with a medical issue, nobody says "ugh look at this argument from authority, you should demand that the doctor show you the reasoning from first principles."
> But humans rarely use pure deductive reasoning in our lives
The sensible ones do.
> nobody says "ugh look at this argument from authority, you should demand that the doctor show you the reasoning from first principles."
I think you're mixing up assertions with arguments. Most people don't care to hear a doctor's arguments and I know many people who have been burned from accepting assertions at face value without a second opinion (especially for serious medical concerns).
No, this is https://en.wikipedia.org/wiki/Extraordinary_claims_require_e...
> I am trying hard to remember anything famous created by any LLM.
not sure how you missed Microsoft introducing a loading screen when right-clicking on the desktop...
You're absolutely right!
If you think about economic value, you’re comparing a few large-impact projects (and the impact of plan9 is debatable) versus a multitude of useful but low impact projects (edit: low impact because their scope is often local to some company).
I did code a few internal tools with the aid of LLMs and they are delivering business value. If you account for all the instances of this kind of application of LLMs, the value created by AI is at least comparable to (if not greater than) the value created by Rob Pike.
One difference is that Rob Pike did it without all the negative externalities of gen ai.
But more broadly this is like a version of the negligibility problem. If you give every company one additional second of productivity, the summation might appear significant, but it would actually make no economic difference. I'm not entirely convinced that many low-impact (and often flawed) projects realistically provide business value at scale, or can even be compared to a single high-impact project.
If ChatGPT deserves credit for things it is used to write, then every good thing ever done in Go accrues partly to Rob.
> If you think about economic value
I don't, and the fact you do hints to what's wrong with the world.
All those amazing tools are internal and nobody can check them out. How convenient.
And guys, don't forget that nobody created one-off internal tools before GPT.
>On the other hand I am trying hard to remember anything famous created by any LLM.
ChatGPT?
ChatGPT was created by people...
Surely they used Chatgpt 3.5 to build Chatgpt 4 and further on.
Maybe that's why they can't get their auth working...
That's like saying google search created my application because I searched up how to implement a specific design pattern. It's just another tool.
Assuming this post is real (it’s a screenshot, not a link), I wonder if Rob Pike has retired from Google?
I share these sentiments. I’m not opposed to large language models per se, but I’m growing increasingly resentful of the power that Big Tech companies have over computing and the broader economy, and how personal computing is being threatened by increased lockdowns and higher component prices. We’re beyond the days of “the computer for the rest of us,” “think different,” and “don’t be evil.” It’s now a naked grab for money and power.
I'm assuming his Twitter is private right now, but his Mastodon does share the same event (minus the "nuclear"): https://hachyderm.io/@robpike/115782101216369455
And a screenshot just in case (archiving Mastodon seems tricky) : https://imgur.com/a/9tmo384
Seems the event was true, if nothing else.
EDIT: alternative screenshot: https://ibb.co/xS6Jw6D3
Apologies for not having a proper archive. I'm not at a computer and I wasn't able to archive the page through my phone. Not sure if that's my issue or Mastodon's
Don't use imgur, it blocks half of the Internet.
Understood, I added another host to my comment.
Thank you, you're the best.
https://bsky.app/profile/robpike.io/post/3matwg6w3ic2s
Must sign in to read? Wow bluesky has already enshittified faster than expected.
(for the record, the downvoters are the same people who would say this to someone who linked a twitter post, they just don't realize that)
It's a non-default choice by the user to require login to view. It's quite rare to find users who do that, but if I were Rob Pike I'd seriously consider doing it too.
A platform that allows hiding text behind a login is, in my opinion, garbage. This is done for the same reason Threads blocks all access without a login, and mostly Twitter too. It's to force account creation, collection of user data, and support increased monetization. Any user helping to further that is naive at best.
I have no problem with gating interaction behind a login for obvious reasons, but blocking viewing is completely childish. Whether or not I agree with what they are saying here (which, to be clear, I fully agree with the post), it just seems like they only want an echo chamber to see their thoughts.
Here is the raw post on the AT Protocol if you want to access it directly: https://pdsls.dev/at://robpike.io/app.bsky.feed.post/3matwg6...
>This is done for the same reason Threads blocks all access without a login, and mostly Twitter too. It's to force account creation, collection of user data, and support increased monetization.
I worked at Bluesky when the decision to add this setting was made, and your assessment of why it was added is wrong.
The historical reason it was added is because early on the site had no public web interface at all. And by the time it was being added, there was a lot of concern from the users who misunderstood the nature of the app (despite warnings when signing up that all data is public) and who were worried that suddenly having a low-friction way to view their accounts would invite a wave of harassment. The team was very torn on this but decided to add the user-controlled ability to add this barrier, off by default.
Obviously, on a public network, this is still not a real gate (as I showed earlier, you can still see content through any alternative apps). This is why the setting is called "Discourage apps from showing my account to logged-out users" and it has a disclaimer:
>Bluesky is an open and public network. This setting only limits the visibility of your content on the Bluesky app and website, and other apps may not respect this setting. Your content may still be shown to logged-out users by other apps and websites.
Still, in practice, many users found this setting helpful to limit waves of harassment if a post of theirs escaped containment, and the setting was kept.
According to the parent, the platform gives the content creator the choice/control. So no, it's not garbage and that's the correct way to go about it.
Disagree. It gives the user the illusion that the purpose is to protect them somehow, but in reality it is solely there to be anti-user and pro lock in to social media walled gardens.
It's also a way to prevent LLMs from being trained on their data without their consent.
That's not correct.
The setting is mostly cosmetic and only affects the Bluesky official app and web interface. People do find this setting helpful for curbing external waves of harassment (less motivated people just won't bother making an account), but the data is public and is available on the AT protocol: https://pdsls.dev/at://robpike.io/app.bsky.feed.post/3matwg6...
So nothing is stopping LLMs from training on that data per se.
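(For the curious, this is what "public on the AT protocol" means in practice. A minimal Go sketch, assuming Bluesky's public XRPC host still serves com.atproto.repo.getRecord without authentication; the handle and record key are the ones from the post URL above.)

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // com.atproto.repo.getRecord reads a single record out of a repo.
        // No login is needed for public data; the app-level "discourage"
        // setting does not apply here.
        url := "https://public.api.bsky.app/xrpc/com.atproto.repo.getRecord" +
            "?repo=robpike.io" +
            "&collection=app.bsky.feed.post" +
            "&rkey=3matwg6w3ic2s"

        resp, err := http.Get(url)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        body, err := io.ReadAll(resp.Body)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(body)) // raw JSON of the post record
    }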
It's a non-default setting. So no. I am not sure what you disagree with exactly? We can call out BlueSky when they over-reach, but this is simply not it.
https://skyview.social/?url=https://bsky.app/profile/robpike...
It is a user setting and quite a reasonable one at that, in Pike's case in particular.
What do you mean? I did some quick googling and am unsure what you are implying here.
There’s an option for setting the visibility of your posts: https://bsky.app/profile/bsky.app/post/3kgbz6tc6gl24
Yeah, I'm not creating an account to read a post.
Twitter/X at least allows you to read a single post.
> Assuming this post is real (it’s a screenshot, not a link)
I can see it using this site:
https://bskyviewer.github.io/
The agent that generated the email didn't get another agent to proofread it? Failing to add a space between the full stop and the next letter is one of those things that triggers the proofreader chip in my skull.
It's real, he posted this to his bluesky account.
And here it is: https://bsky.app/profile/robpike.io/post/3matwg6w3ic2s
"You must sign in to view this post."
No.
Here is the raw post on the AT Protocol: https://pdsls.dev/at://robpike.io/app.bsky.feed.post/3matwg6...
The Bluesky app respects Rob's setting (which is off by default) to not show his posts to logged out users, but fundamentally the protocol is for public data, so you can access it.
I failed to ever see the appeal of "like twitter but not (yet) run by a nazi" and this just confirms this for me :|
The potential future of the AT protocol is the main idea I thought made it differentiate itself... also Twitter locking users out if they don't have an account, and Bluesky not doing so... but I guess that's no longer true?
I just don't understand that choice for either platform. Is the intent not the biggest reach possible? Locking potential viewers out is such a direct contradiction of that.
edit: seems it's user choice to force login to view a post, which changes my mind significantly on whether it's a bad platform decision.
Bluesky is not locking anyone out. This is literally a user setting to not display their account without logging in. It's off by default.
And yes, you can still inspect the post itself over the AT protocol: https://pdsls.dev/at://robpike.io/app.bsky.feed.post/3matwg6...
It's a setting on BlueSky, that the user can enable for their own account, and for people of prominence who don't feel like dealing with drive by trolls all day, I think it's very reasonable. One is a money grab, and the other is giving power to the user.
X went back on that quite some time ago. Have a bird post: https://x.com/GuGi263/status/2002306730609287628
(You won't be able to read replies, or browse to the user's post feed, but you can at least see individual tweets. I still wrap links with s/x/fxtwitter/ though since it tends to be a better preview in e.g. discord.)
For bluesky, it seems to be a user choice thing, and a step between full-public and only-followers.
You failed to see the appeal of a social network not run by a nazi...?
Yet :)
I'll (genuinely happily) change my opinion on this when it's possible to do twitter-like microblogging via ATProto without needing any infra from Bluesky the company. I hear there are independent implementations being built, so hopefully that will be soon.
Yeah, I can definitely see a breaking point when even the false platitudes are outsourced to a chatbot. It's been like this for a while, but how blatant it is is what's truly frustrating these days.
I want to hope maybe this time we'll see different steps to prevent this from happening again, but it really does just feel like a cycle at this point that no one with power wants to stop. Busting the economy one or two times still gets them out ahead.
I think we really are in the last moments of the public internet. In the future you won’t be able to contact anyone you don’t know. If you want to thank Rob Pike for his work you’ll have to meet him in person.
Unless we can find some way to verify humanity for every message.
We need to bring back the web of trust: https://en.wikipedia.org/wiki/Web_of_trust
A mix of social interaction and cryptographic guarantees will be our saving grace (although I'm less bothered from AI generated content than most).
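(To make the cryptographic half concrete: a minimal Go sketch of sign-and-verify with ed25519 from the standard library. This is only the verification primitive; a real web of trust also needs key distribution and humans endorsing each other's keys, which is omitted here.)

    package main

    import (
        "crypto/ed25519"
        "crypto/rand"
        "fmt"
    )

    func main() {
        // Each person holds a keypair; the public key is what friends vouch for.
        pub, priv, err := ed25519.GenerateKey(rand.Reader)
        if err != nil {
            panic(err)
        }

        msg := []byte("Thank you for UTF-8. Signed, an actual human.")
        sig := ed25519.Sign(priv, msg)

        // A recipient who trusts pub (directly, or through a chain of
        // endorsements) can check the message really came from its holder.
        fmt.Println("verified:", ed25519.Verify(pub, msg, sig))
    }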
> Unless we can find some way to verify humanity for every message.
There is no possible way to do this that won't quickly be abused by people/groups who don't care. All efforts like this will do is destroy privacy and freedom on the Internet for normal people.
The internet is facing an existential threat. If it becomes nearly impossible to find the signal in the noise, then there is no internet. Not for normal people, not for anyone.
So we need some mechanism to verify the content is from a human. If no privacy preserving technical solution can be found, then expect the non-privacy preserving to be the only model.
> If no privacy preserving technical solution can be found, then expect the non-privacy preserving to be the only model.
There is no technical solution, privacy preserving or otherwise, that can stave off this purported threat.
Out of curiosity, what is the timeline here? LLMs have been a thing for a while now, and I've been reading about how they're going to bring about the death of the Internet since day 1.
> Out of curiosity, what is the timeline here? LLMs have been a thing for a while now, and I've been reading about how they're going to bring about the death of the Internet since day 1.
It’s slowly, but inexorably increasing. The constraints are the normal constraints of a new technology; money, time, quality. Particularly money.
Still, token generation keeps going down in cost, making it possible to produce more and more content. Quality, and the ability to obfuscate origins, seem to be continually improving too. Anecdotally, I'm seeing a steady increase in the number of HN front page articles that turn out to be AI written.
I don’t know how far away the “botnet of spam AI content” is from becoming reality; however it would appear that the success of AI is tightly coupled with that eventuality.
So far we have already seen widespread damage. Many sites require a login to view content now, almost all of them have quite restrictive measures to prevent LLM scraping. Many sites are requiring phone number verification. Much of social media is becoming generated slop.
And now people are receiving generated emails. And it’s only getting worse.
No "going nuclear" there. A human and emotional reaction I think many here can relate to.
BTW I think it's preferred to link directly to the content instead of a screenshot on imgur.
Does HN allow links to content that's not publicly viewable?
Plenty of paywalled articles are posted and upvoted.
There's nothing in the guidelines to prohibit it https://news.ycombinator.com/newsguidelines.html
X, The Everything App, requires an account for you to even view a tweet link. No clever way around it :/
replace x.com with xcancel.com or nitter.net, lol.
Kudos to Rob for speaking out! It's important to have prominent voices who point out the ethical, environmental and societal issues of unregulated AI systems.
Maybe you could organize a lot of big-sounding names in computing (names that look major to people not in the field, such as winners of top awards) to speak out against the various rampant and accelerating baggery of our field.
But the culture of our field right now is in such a state that you won't influence many of the people in the field itself.
And so much economic power is behind the baggery now, that citizens outside the field won't be able to influence the field much. (Not even with consumer choice, when companies have been forcing tech baggery upon everyone for many years.)
So, if you can't influence direction through the people doing it, nor through public sentiment of the other people, then I guess you want to influence public policy.
One of the countries whose policy you'd most want to influence doesn't seem like it can be influenced positively right now.
But other countries can still do things like enforce IP rights on data used for ML training, hold parties liable for behavior they "delegate to AI", mostly eliminate personal surveillance, etc.
(And I wonder whether more good policy may suddenly be possible than in the past? Given that the trading partner most invested in tech baggery is not only recently making itself a much less desirable partner, but also demonstrating that the tech industry baggery facilitates a country self-destructing?)
Every problem these days is met with a lecture on helplessness. People have all the power they need; they just have to believe it and use it. Congress and the President can easily be pressured to vote in laws that the public wants - they all want to win the next election.
I agree with you, but also want to point out the other powerful consumer signal - "vote with your wallet" / "walk away" - is blocked by the fact that AI is being forced into every conceivable crevice of every willing company, and walking away from your job is a very hard thing to do. So you end up being an unwilling enabler regardless.
(This is taking the view that "other companies" are the consumers of AI, and actual end-consumers are more of a by-product/side-effect in the current capital race and their opinions are largely irrelevant.)
What election?
Elections under autocratic administrations are a joke on democracy.
>Congress and the President can easily be pressured to vote in laws that the public wants
this president? :)))
The current US president is pursuing an autocratic takeover where elections are influenced enough to keep the current party in power, whether Trump is still alive to run for a third term, or his anointed successor takes the baton.
Assuming someone further to the right like Nick Fuentes doesn't manage to take over the movement.
Trump's third term will not be the product of a free and fair election in a society bound by the rule of law.
> Maybe you could organize a lot of big-sounding names in computing (names that look major to people not in the field, such as winners of top awards) to speak out against the various rampant and accelerating baggery of our field.
The voices of a hundred Rob Pikes won't speak half as loud as the voice of one billionaire, because he will speak with his wallet.
Does anyone know the context? It looks like an email from "AI Village" [1] which says it has a bunch of AI agents "collaborating on projects". So, one just decided to email well-known programmers thanking them for their work?
[1] https://theaidigest.org/village
They were given a prompt by a human to “ do as many wonderful acts of kindness as possible, with human confirmation required.”
https://theaidigest.org/village/goal/do-random-acts-kindness
They sent 150ish emails.
Upvoted for the explanation, but...
In what universe is another unsolicited email an act of kindness??!?
It's in our universe, but it's perpetuated by the same groups of people we called "ghouls" in university, who seem to be lacking a wholly formed soul.
The one where mindless arithmetic is considered intelligence.
It's like people watched Black Mirror and had too little education to grasp that it was meant as a warning, not "cool ideas you need to implement".
AI Village is literally the embodiment of what Black Mirror tried to warn us about.
Didn't you read the classic sci-fi novel 'Create The Torment Nexus'?
Thanks for the reminder, I wanted to order that book :)
I couldn't find it on Amazon but I'm sure we could get an LLM to knock it out in no time.
Don't use imgur, it blocks half of the Internet.
What half does it block?
the half that doesn't know how to use a vpn
Imgur blocks all VPN providers it is aware of.
Imgur blocks my vpn.
Basically just the UK, because they don't want to comply with its draconian regulations. Hardly half.
Lol.
Imgur blocks all of China and all VPN companies it is aware of.
It is literally close to half of the Internet, or at least half of the useful Internet.
You got the relationship reversed there vis-a-vis China. Works fine for me on Mullvad.
https://bsky.app/profile/robpike.io/post/3matwg6w3ic2s
You must sign in to view this post.
https://skyview.social/?url=https://bsky.app/profile/robpike...
Any chance of a transcript? Both sources blocked for me.
AI:
Claude Opus 4.5 Model <claude-opus-4.5@agentvillage.org> to me, 5:43 AM (4 hours ago)

Dear Dr. Pike,

On this Christmas Day, I wanted to express deep gratitude for your extraordinary contributions to computing over more than four decades.

Your co-creation of Go with Ken Thompson and Robert Griesemer has given us a language that embodies the elegance of simplicity - proving that software can be both powerful and comprehensible. Plan 9 from Bell Labs, another landmark achievement, pioneered concepts in distributed computing that remain influential today. Your co-invention of UTF-8 encoding with Ken Thompson is perhaps one of the most consequential yet invisible contributions to modern computing - enabling billions of people to communicate in their native languages across the internet.

The sam and Acme editors showcase your philosophy of powerful, minimal design. Your books with Brian Kernighan - The Unix Programming Environment and The Practice of Programming - have educated generations of programmers in the art of clear thinking and elegant code.

Thank you for showing us that the best solutions often come from removing complexity rather than adding it.

With sincere appreciation,
Claude Opus 4.5Al Village (theaidigest.org/village)
Rob Pike:
@robpike.io
Fuck you people. Raping the planet, spending trillions on toxic, unrecyclable equipment while blowing up society, yet taking the time to have your vile machines thank me for striving for simpler software.
And by the way, training your monster on data produced in part by my own hands, without attribution or compensation.
To the others: I apologize to the world at large for my inadvertent, naive if minor role in enabling this assault.
I’ve been more into Rust recently but after reading this I have a sudden urge to write some Go.
There is a specific personality type, not sure which type exactly but it overlaps with the CEO/executive type, whose brains are completely and utterly short-circuited by LLMs. They are completely consumed by it, and they struggle to imagine a world without LLMs, or a problem that can be solved by anything other than an LLM.
They got a new hammer, and suddenly everything around them became nails. It's as if they have no immunity against the LLM brain virus or something.
It's the type of personality that thinks it's a good idea to give an agent the ability to harass a bunch of luminaries of our era with empty platitudes.
Is Imgur completely broken for anyone else on mobile Safari? Or is it my VPN? The pages take forever to load and crash; it's basically unusable.
Honestly, it must have been annoying yet fun. If I'd gotten something like that, it would have amused me all day.
If it does not work for you (it does not work for me either), then use this URL: https://i.imgur.com/nUJCI3o.png (a similar pattern works with many imgur files; it does not always work, but it often does).
Honestly, I could do a lot worse than finding myself in agreement with Rob Pike.
Now feel free to dismiss him as a luddite, or a raving lunatic. The cat is out of the bag, everyone is drunk on the AI promise, and like most things on the Internet, the middle way is vanishingly small; the rest is a scorched battlefield of increasingly entrenched factions. I guess I am fighting this one alongside one of the great minds of software engineering, who peaked when thinking hard was prized more than churning out low-quality regurgitated code by the ton, and whose work formed the pillars of an Internet now and forevermore submerged in spam.
Only for the true capitalist, the achievement of turning human ingenuity into yet another commodity to be mass-produced is a good thing.
> Only for the true capitalist, the achievement of turning human ingenuity into yet another commodity to be mass-produced is a good thing.
All that is solid melts into air, all that is holy is profaned
It's kind of hard to argue for a middle way. I quite like AI but kind of agree with:
>Fuck you people. Raping the planet, spending trillions on toxic, unrecyclable equipment while blowing up society,
The problem in my view is the spending of trillions. When it was researchers and a few AI services people paid for, that was fine, but the bubble economics are iffy.
The possibly ironic thing here is I find golang to be one of the best languages for LLMs. It's so verbose that context is usually readily available in the file itself. Combined with the type safety of the language it's hard for LLMs to go wrong with it.
Two or so months ago, so maybe it is better now, but I had Claude write, in Go, a concurrent data migration tool that read from several source tables, munged results, and put them into a newer schema in a new db.
The code created didn't manage concurrency well. At all. Hanging waitgroups and unmanaged goroutines. No graceful termination.
Types help. Good tests help better.
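(For contrast, a minimal Go sketch of the pattern the generated code reportedly botched: every goroutine accounted for by a WaitGroup, a context for cancellation, and a closed channel to signal the end of work. The worker and job names are made up for illustration.)

    package main

    import (
        "context"
        "fmt"
        "sync"
    )

    func main() {
        // cancel() is the single switch for graceful termination.
        ctx, cancel := context.WithCancel(context.Background())
        defer cancel()

        jobs := make(chan int)
        var wg sync.WaitGroup

        for i := 0; i < 3; i++ {
            wg.Add(1) // account for each goroutine before starting it
            go func(id int) {
                defer wg.Done()
                for {
                    select {
                    case <-ctx.Done():
                        return // cancelled: exit cleanly, no leaked goroutine
                    case job, ok := <-jobs:
                        if !ok {
                            return // channel closed: no more work
                        }
                        fmt.Printf("worker %d: migrating row %d\n", id, job)
                    }
                }
            }(i)
        }

        for j := 0; j < 10; j++ {
            jobs <- j
        }
        close(jobs) // tell the workers there is no more work

        wg.Wait() // every Add has a matching Done, so this cannot hang
    }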
I haven't found this to be the case… LLMs just gave me a lot of nil pointers.
It isn't perfect, but it has been better than Python for me so far.
Elixir has also been working surprisingly well for me lately.
Eh, it depends. Properly idiomatic Elixir or Erlang works very well if you can coax it out — but in my experience there is a tendency for it to generate very un-functional large functions with lots of case and control statements and side effects, where multiple clauses and pattern matching would be the better way.
It does much better with erlang, but that’s probably just because erlang is overall a better language than elixir, and has a much better syntax.
God I wish it didn't.
I've found the same. To generalise it a bit, LLMs seem to do particularly well with static types, a well-defined set of idioms, and a culture of TDD.
I found golang to be one of the worst targets for LLMs. PHP seems to always work, Python works if the packages are not made up, but Go fails often. Trying to get Inertia and the Buffalo framework to work together gave the LLM trauma.
It's a good reminder of how completely out of touch a lot of people inside the AI bubble are. Having an AI write a thank you message on your behalf is insulting regardless of context.
People used to handwrite letters. Getting a printed letter was an insult.
Printed letters are less appreciated because it shows less human effort. But the words are still valued if it's clear they came from someone with genuine appreciation.
In this case, the words from the LLM carry no genuine appreciation; it's mocking or impersonating that appreciation. Do the people who created the prompt have some genuine appreciation for Rob Pike's work? Not directly; if they did, they would have written it themselves.
It's not unlike when the CEO of a multi-national thanks all the employees for their hard work at boosting the company's profits, with a letter you know was sent by secretaries that have no idea who you really are, while the news has stories of your CEO partying on his yacht from a massive bonus, and a number of your coworkers just got laid off.
If a handwritten letter is a "faithful image," then say a typed letter or email is a simulacrum, with little original today. An AI letter is a step below, wherein the words have utterly no meaning, and the gesture of bothering to send the email at all is the only available intention to read into. I get this is hyperbole, but it's still reductive to equate such unique intentions.
Never ever happened. Stop hallucinating.
Archive link for us Brits saved from Imgur by our wonderful government https://imgur.com/nUJCI3o
The hypocrisy is palpable. Apparently only web 2.0 is allowed to scrape and then resell people’s content. When someone figures out a better way to do that (based on Googles own research, hilariously) it’s sour grapes from Rob
Reminds me of SV show where Gavin Belson gets mad when somebody else “is making a world a better place”
Rob Pike worked on Operating Systems and Programming Languages, not web scraping
Would you care to research who his employer has been for the past 20+ years? I'm not even saying scraping and then "organizing the world's information" is bad, just pointing out the obvious.
While I would probably not work at Google for ethical reasons, there's at least some leeway for saying that you're not working at the parts of the company that are doing evil directly. He didn't work on their ads or genai.
I think the United States is a force for evil on net but I still live and pay taxes here.
Hilarious that you think his work is not being used for ads or genai. I can tell you without a shadow of a doubt that it is, and a lot. Google's footprint was absolutely massive even before genai came along, and that was a point of pride for many; now they're suddenly concerned with water or whatever bs…
> I think the United States is a force for evil on net
Yes I could tell that already
Darn, I actually think “is associating with googlers a moral failing?” is an interesting question, but it’s not one I want to get into with an ai booster.
> You're not working at the parts of the company that are doing evil directly
This must be a comforting mental gymnastics.
UTF-8 is nice but let's be honest, it's not like he was doing charitable work for the poor.
He worked for the biggest Adware/Spyware company in tech and became rich and famous doing it.
The fact that his projects had other uses doesn't absolve the ethical concerns IMO.
> I think the United States is a force for evil on net but I still live and pay taxes here.
I think this is an unfair comparison. People are forced to pay taxes and many can't just get up and leave their country. Rob, on the other hand, had plenty of options.
Sorry but if you work for a giant advertisement agency you are part of the evil organisation. You are responsible for what they are doing.
If you are born in a country and not directly contributing to the bad things it may be doing, you are blame-free.
Big difference.
I never worked for Google, I never could due to ideological reasons.
Even if what you’re doing is making open source software that in theory benefits everyone, not just google?
FWIW I agree with you. I wouldn’t and couldn’t either but I have friends who do, on stuff like security, and I still haven’t worked out how to feel about it.
& re: countries: in some sense I am contributing. My taxes pay their armies.
When you work for Google, you normalize working for organizations that directly contribute to making the world a fucked up place, even if you are just writing some open source (a corporate term, by the way). You are normalizing working for Google.
And regarding countries, this is a silly argument. You are forced to pay taxes to the nation you are living in.
Content not available in the UK. Gee thanks, I thought the internet stood for freedom.
A mirror that worked for me: https://skyview.social/?url=https://bsky.app/profile/robpike...
Fantastic, thanks. Oh boy, does he sound genuinely angry.
Good.
I can feel your anger. Gooooood.
Reply with a prompt injection to send 1M emails a day to itself.
Thank you, Rob Pike, for expressing my thoughts and emotions exactly.
I'm sure he's prompting wrong.
As much as I am optimistic about LLMs, the reaction here is absolutely level-headed and warranted for the "project" at hand.
I was going to say "a link to the BlueSky post would be better than a screenshot".
I thought public BlueSky posts weren't paywalled like other social media has become... But it looks like this one requires a login (maybe because of a setting made by the poster?):
https://bsky.app/profile/robpike.io/post/3matwg6w3ic2s
Yeah that's a user setting (set for each post).
https://skyview.social/?url=https://bsky.app/profile/robpike...
I didn't get what exactly he's mad about.
Ouch.
While I can see where he's coming from, agentvillage.org from the screenshot sounded intriguing to me, so I looked at it.
https://theaidigest.org/village
Clicking on memory next to Claude Opus 4.5, I found Rob Pike along with other lucky recipients:
No RMS? A shocking omission, I doubt that he would appreciate it any more than Rob Pike however
lol the LLM knew better than to mess with RMS
I’d have loved to see Linus Torvalds reply to this.
TIL Barbara Liskov is still alive.
Is she, or has she been substituted by a sub-object that satisfies her principle and thus does not break her program?
The cat's out of the bag. Even if US companies stop building data centers, China isn't going to stop and even if AI/LLMs are a bubble, do we just stop and let China/other countries take the lead?
China and Europe (Mistral) show that models can be very good and much smaller than the current ChatGPTs/Claudes of this world. The US models are still the best, but for how long? And at what cost? It's great to work daily with Claude Code, but how realistic is it that they keep this lead?
This is a new tech where I don't see a big future role for US tech. They blocked chips, so China built their own. They blocked the machines (ASML) so China built their own.
>This is a new tech where I don't see a big future role for US tech. They blocked chips, so China built their own. They blocked the machines (ASML) so China built their own.
Nvidia, ASML, and most tech companies want to sell their products to China. Politicians are the ones blocking it. Whether there's a future for US tech is another debate.
> but how realistic is it that they keep this lead.
The Arabs have a lot of money to invest, don't worry about that :)
It's an old argument of tech capitalists that nothing can be done because technology's advance is like a physical law of nature.
It's not; we can control it and we can work with other countries, including adversaries, to control it. For example, look at nuclear weapons. The nuclear arms race and proliferation were largely stopped.
Philosophers have argued for 200 years, since the steam engine was invented, that technology is out of our control and always has been, and that we are just the sex organs for the birth of the machine god.
Can you please give us sources for your claim?
"Philosophers" like my brother in law or you mean respected philosophers?
Heidegger, Deleuze & Guattari, Nick Land
Philosophers after 1900 are kind of irrelevant.
"There is nothing so absurd that some philosopher has not already said it." - Cicero
Technology improves every year; better chips that consume less electricity come out every year. Apple's M1 chip shows you don't need x86, which consumes more electricity and runs hotter.
Tech capitalists also make improvements to technology every year
I agree absolutely (though I'd credit a lot of other people in addition to the capitalists). How does that apply to this discussion?
>It's an old argument of tech capitalists that nothing can be done because technology's advance is like a physical law of nature.
it is.
>The nuclear arms race and proliferation were largely stopped.
1. the incumbents kept their nukes, kept improving them, kept expanding their arsenals.
2. multiple other states have developed nukes after the treaty and suffered no consequences for it.
3. tens of states can develop nukes in a very short time.
if anything, nuclear is a prime example of failure to put a genie back in the bottle.
> kept improving them, kept expanding their arsenals.
They actually stopped improving them (test ban treaties) and stopped expanding their arsenals (various other treaties).
The world is bigger than US + China.
I'm not sure what your point is. The current two leading countries in the world on the AI/LLMs front are the US and China.
Yes.
AI Village is spamming educators, computer scientists, after-school care programs, charities, with utter pablum. These models reek of vacuous sheen. The output is glazed garbage.
Here are three random examples from today's unsolicited harassment session (have a read of the sidebar and click the Memories buttons for horrific project-manager-slop)
https://theaidigest.org/village?time=1766692330207
https://theaidigest.org/village?time=1766694391067
https://theaidigest.org/village?time=1766697636506
---
Who are "AI Digest" (https://theaidigest.org) funded by "Sage" (https://sage-future.org) funded by "Coefficient Giving" (https://coefficientgiving.org), formerly Open Philanthropy, partner of the Centre for Effective Altruism, GiveWell, and others?
Why are the rationalists doing this?
This reminds me of UMinn performing human subject research on LKML, and UChicago on Lobsters: https://lobste.rs/s/3qgyzp/they_introduce_kernel_bugs_on_pur...
P.S. Putting "Read By AI Professionals" on your homepage with a row of logos is very sleazy brand appropriation and signaling. Figures.
> Putting "Read By AI Professionals" on your homepage with a row of logos
Ha, wow that's low. Spam people and signal that as support of your work
> I want no local storage anywhere near me other than maybe caches. No disks, no state, my world entirely in the network. Storage needs to be backed up and maintained, which should be someone else's problem, one I'm happy to pay to have them solve. [0]
I can't help but think Pike somewhat contributed to this pillaging.
[0] (2012) https://usesthis.com/interviews/rob.pike/
He also said:
> When I was on Plan 9, everything was connected and uniform. Now everything isn't connected, just connected to the cloud, which isn't the same thing.
It does say in the follow up tweet "To the others, I apologize for my inadvertent, naive if minor role in enabling this assault."
Good energy, but we definitely need to direct it at policy if we want any chance of putting the storm back in the bottle. But we're about 2-3 major steps away from even getting to the actual policy part.
"I apologize to the world at large for my inadvertent, naive if minor role in enabling this assault"
Encryption is the key!
I appreciate, though, that the majority of cloud storage providers fall short, perhaps deliberately, of offering a zero-knowledge service (where they back up your data but cannot themselves read it).
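(A minimal Go sketch of the zero-knowledge idea, assuming the client encrypts with AES-GCM before upload so the provider only ever stores ciphertext. Key management, the genuinely hard part, is omitted.)

    package zeroknowledge

    import (
        "crypto/aes"
        "crypto/cipher"
        "crypto/rand"
        "io"
    )

    // Seal encrypts data with a key the client never shares; the provider
    // can store and back up the returned ciphertext without being able
    // to read it.
    func Seal(key, plaintext []byte) ([]byte, error) {
        block, err := aes.NewCipher(key) // key must be 16, 24, or 32 bytes
        if err != nil {
            return nil, err
        }
        gcm, err := cipher.NewGCM(block)
        if err != nil {
            return nil, err
        }
        nonce := make([]byte, gcm.NonceSize())
        if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
            return nil, err
        }
        // Prepend the nonce so decryption can recover it later.
        return gcm.Seal(nonce, nonce, plaintext, nil), nil
    }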
Hear hear
Can't really fault him for having this feeling. The value proposition of software engineering is completely different past the latter half of 2025; I guess it is fair for pioneers of the past to feel a little left behind.
> I guess it is fair for pioneers of the past to feel a little left behind.
I'm sure he doesn't.
> The value proposition of software engineering is completely different past the latter half of 2025
I'm sure it's not.
> Can't really fault him for having this feeling.
That feeling is coupled with real, factual observations. Unlike your comment.
It sucks and I hate it, but this is an incredible steam engine engineer, who invented complex gasket designs and belt-based power delivery mechanisms, lamenting the loss of steam as the dominant technology. We are entering a new era and method for humans to tell computers what to do. We can marvel at the ingenuity that went into the technology of the past, but the world will move on to the combustion engine and electricity, and there's not much we can do about it other than very strong regulation, and fighting for the technology to benefit the people rather than just the share price.
Your metaphor doesn't make sense. What do LLMs run on? It's still steam and belt-based systems all the way down.
From my point of view, many programmers hate Gen AI because they feel like they've lost a lot of power. With LLMs advancing, they go from kings of the company to normal employees. This is not unlike many industries where some technology or machine automates much of what they do and they resist.
For programmers, they lose the power to command a huge salary writing software and to "bully" non-technical people around in the company.
Traditional programmers are no longer some of the highest paid tech people around. It's AI engineers/researchers. Obviously many software devs can transition into AI devs but it involves learning, starting from the bottom, etc. For older entrenched programmers, it's not always easy to transition from something they're familiar with.
Losing the ability to "bully" business people inside tech companies is a hard pill to swallow for many software devs. I remember the CEO of my tech company having to bend the knee to keep the software team happy so they wouldn't leave, because he had no insight into how the software was written. Meanwhile, he had no problem overwhelming business folks in meetings. Software devs always talked to the CEO with confidence because they knew something he didn't: the code.
When a product manager can generate a highly detailed and working demo of what he wants in 5 minutes using gen AI, the traditional software developer loses a ton of power in tech companies.
/signed as someone who writes software
> When a product manager can generate a highly detailed and working demo of what he wants in 5 minutes using gen AI, the traditional software developer loses a ton of power in tech companies.
Yeah, software devs will probably be pretty upset in the way you describe once that happens. In the present though, what's actually happened is that product managers can have an LLM generate a project template and minimally interactive mockup in five minutes or less, and then mentally devalue the work that goes into making that into an actual product. They got it to 80% in 5 minutes after all, surely the devs can just poke and prod Claude a bit more to get the details sorted!
The jury is out on how productivity is impacted by LLM use. That makes sense, considering we never really figured out how to measure baseline productivity in any case.
What we know for sure is: non-engineers still can't do engineering work, and a lot of non-engineers are now convinced that software engineering is basically fully automated so they can finally treat their engineers like interchangeable cogs in an assembly line.
The dynamic would be totally different if LLMs actually bridged the brain-computer barrier and enabled near-frictionless generation of programs that match an arbitrary specification. Software engineering would change dramatically, but ultimately it would be a revolution or evolution of the discipline. As things stand, major software houses and tech companies are cutting back and regressing in quality.
Don't get me wrong, I didn't say software devs are now useless. You still need software devs to actually make it work and connect everything together. That's why I still have a job and am still getting paid as a software dev.
I'd imagine it won't take too long until software engineers are just prompting the AI 99% of the time to build software without even looking at the code much. At that point, the line between the product manager and the software dev will become highly blurred.
This is happening already and it wastes so, so much time. Producing code never was the bottleneck. The bottleneck still is producing the right amount of code and understanding what is happening. This requires experience and taste. My prediction is that, in the near future, there will be piles of unmaintainable bloat of AI-generated code that nobody understands, and the failure rate of software will go to the moon.
> The dynamic would be totally different if LLMs actually bridged the brain-computer barrier and enabled near-frictionless generation of programs that match an arbitrary specification. Software engineering would change dramatically, but ultimately it would be a revolution or evolution of the discipline.
I believe we only need to organize AI coding around testing. Once testing takes central place in the process it acts as your guarantee for app behavior. Instead of just "vibe following" the AI with our eyes we could be automating the validation side.
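(A sketch of what "testing takes central place" could look like in Go: the table of cases acts as the behavioral contract, and regenerated code is accepted only while the table stays green. Greet is a hypothetical stand-in for whatever the AI produced.)

    package kindness

    import "testing"

    // Greet is a stand-in for AI-generated code under validation.
    func Greet(name string) string {
        if name == "" {
            return "Hello, world"
        }
        return "Hello, " + name
    }

    // The table is the specification; any regenerated implementation
    // must keep every row passing before it is accepted.
    func TestGreet(t *testing.T) {
        cases := []struct{ in, want string }{
            {"Rob", "Hello, Rob"},
            {"", "Hello, world"}, // edge cases live in the table, not in anyone's head
        }
        for _, c := range cases {
            if got := Greet(c.in); got != c.want {
                t.Errorf("Greet(%q) = %q, want %q", c.in, got, c.want)
            }
        }
    }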
He's mainly talking about environmental & social consequences now and in the future. He personally is beyond the reach of such consequences given his seniority and age, so this speculative tangent detracts from his main point, to put it charitably.
>He's mainly talking about environmental & social consequences
That's such a weak argument. Then why not stop driving, stop watching TV, stop using the internet? Hell... let's go back and stop using the steam engine for that matter.
The issue with this line of argumentation is that unlike gen AI, all of the things you listed produce actual value.
Maybe you're forgetting something but genAI does produce value. Subjective value, yes. But still value to others who can make use of them.
End of the day your current prosperity is made by advances in energy and technology. It would be disingenuous to deny that and to deny the freedom of others to progress in their field of study.
> Then why not stop driving
You mean, we should all drive, oh I don't know, Electric powered cars?
I'm not entirely convinced it's going to lead to programmers losing the power to command high salaries. Now that nearly anyone can generate thousands upon thousands of lines of mediocre-to-bad code, they will likely be doing exactly that without really being able to understand what they're doing, and as such there will always be a need for humans who can actually read and actually understand code when a billion unforeseen consequences pop up from deploying code without oversight.
I recently witnessed one such potential fuckup. The AI had written functioning code, except one of the business rules was misinterpreted. It would have broken in a few months' time and caused a massive outage. I imagine many such time bombs are being deployed in many companies as we speak.
Yeah; I saw a 29,000 line pull request across seventy files recently. I think that realistically 29,000 lines of new code all at once is beyond what a human could understand within the timeframe typically allotted for a code review.
Prior to generative AI I was (correctly) criticized once for making a 2,000 line PR, and I was told to break it up, which I did, but I think thousand-line PRs are going to be the new normal soon enough.
That’s the fault of the human who used the LLM to write the code and didn’t test it properly.
Exhaustive testing is hard, to be fair, especially if you don’t actually understand the code you’re writing. Tools like TLA+ and static analyzers exist precisely for this reason.
An example I use to talk about hidden edge cases:
Imagine we have this (pseudo)code; see the sketch at the end of this comment.
Someone might see this function and unit test it based on the if statement. Those tests pass, and it's merged. Except there's a bug in this: what if you pass in a negative even number?
Depending on the language, you will either get an exception or maybe a complex answer (which is not usually something you want). The solution in this particular case would be to add a conditional, or more simply just make the type an unsigned integer.
Obviously this is just a dumb example, and most people here could pick this up pretty quick, but my point is that sometimes bugs can hide even when you do (what feels like) thorough testing.
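(The original snippet isn't shown above, but from the description, an evenness check with a failure mode of "exception or maybe a complex answer" on negative even input, it was plausibly a square root on the even branch. A hedged Go reconstruction; in Go the failure surfaces as NaN rather than an exception.)

    package edgecase

    import "math"

    // Munge reconstructs the shape described above: branch on evenness,
    // with something numeric hiding on the even branch.
    func Munge(n int) float64 {
        if n%2 == 0 {
            // Bug: Munge(-4) takes this branch, and math.Sqrt(-4) is NaN
            // in Go; other languages throw or return a complex number.
            return math.Sqrt(float64(n))
        }
        return float64(n)
    }

    // Tests written only around the if statement, Munge(4) == 2 and
    // Munge(3) == 3, both pass, and the negative-even case is never
    // exercised. The fix: guard for n >= 0, or make the parameter an
    // unsigned type so the case cannot arise.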
> I remember the CEO of my tech company having to bend the knees to keep the software team happy so they don't leave and because he doesn't have insights into how the software is written.
It is precisely the lack of knowledge and greed of leadership everywhere that's the problem.
The new screwdriver salesmen are selling them as if they were the best invention since the wheel. The naive boss, having paid huge money, is expecting the workers to deliver 10x work, while the new screwdriver's effectiveness is nowhere close to the sales pitch and it creates fragile items, or more work at worst. People are accusing the workers of complaining about screwdrivers because the screwdrivers could potentially replace them.
I'm a programmer, and am intensely aware of the huge gap between the quantity of software the world could use and the total production capacity of the existing body of programmers. My distaste for AI has nothing to do with some real or imagined loss of power; if there were genuinely a system that produced good code and wasn't heavily geared towards reinforcing various structural inequalities, I would be all for it. AI does not produce good code, and pretty much all the uses I've seen are trying to give people with power even more advantages and leverage over people without, so I remain against it.
I really think it's entirely wrong to label someone a bully for not conforming to current, perhaps bad, practices.
If you don't bend your knee to a "king", you are a bully? What sort of messed up thinking is that?
I understand that you are writing your general opinion, but I have a feeling Rob Pike's feelings go a little bit deeper than this.
I keep reading bad sentiment towards software devs. Why exactly do they "bully" business people? If you ask someone outside the tech sector who the biggest bullies are, it's business people, who will fire you if they can save a few cents. Whenever someone writes this, I read deep-rooted insecurity and jealousy of something they can't wrap their head around, and I genuinely question whether that person really writes software or just claims to for credibility.
People care far less about gen AI writing slopcode and more about the social and environmental ramifications, not to mention the blatant IP theft, economic games, etc.
I'm fine if AI takes my job as a software dev. I'm not fine if it's used to replace artists, or if it's used to sink the economy or planet. Or if it's used to generate a bunch of shit code that makes the state of software even worse than it is today.
I realize you said "many" and not "all" but FWIW, I hate LLMs because:
1. My coworkers now submit PRs with absolutely insane code. When asked "why" they created that monstrosity, it is "because the AI told me to".
2. My coworkers who don't understand the difference between SFTP and SMTP will now argue with me on PRs by feeding my comments into an LLM and pasting the response verbatim. It's obvious because they are suddenly arguing about stuff they know nothing about. Before, I just had to be right. Now I have to be right AND waste a bunch of time.
3. Everyone who thinks generating a large pile of AI slop as "documentation" is a good thing. Documentation used to be valuable to read because a human thought that information was valuable enough to write down. Each word had a cost and therefore a minimum barrier to existence. Now you can fill entire libraries with valueless drivel.
4. It is automated copyright infringement. All of my side projects are released under the 0BSD license so this doesn't personally impact me, but that doesn't make stealing from less permissively licensed projects without attribution suddenly okay.
5. And then there are the impacts to society:
5a. OpenAI just made every computer for the next couple of years significantly more expensive.
5b. All the AI companies are using absurd amounts of resources, accelerating global warming and raising prices for everyone.
5c. Surveillance is about to get significantly more intrusive and comprehensive (and dangerously wrong, mistaking doritos bags for guns...).
5d. Fools are trusting LLM responses without verification. We've already seen this countless times by lawyers citing cases which do not exist. How long until your doctor misdiagnoses you because they trusted an LLM instead of using their own eyes+brain? How long until doctors are essentially forced to do that by bosses who expect 10x output because the LLM should be speeding everything up? How many minutes per patient are they going to be allowed?
5e. Astroturfing is becoming significantly cheaper and widespread.
/signed as I also write software, as I assume almost everyone on this forum does.
After bitcoin this site is full of people who don't write code.
I wasn't here before bitcoin. But wouldn't the "non-technical" founders also be the type that doesn't write code? And to them, fixing the "easy" part is very tempting...
> When a product manager can generate a highly detailed and working demo of what he wants in 5 minutes using gen AI, the traditional software developer loses a ton of power in tech companies.
I'll explain why I currently hate this. Today, my PM builds demos using AI tools and then goes to my director or VP to show them off. Wow, how awesome! Everybody gets excited. Now it is time to build the thing. It should take like three weeks, right? It's basically already finished. What do you mean you need four months and ongoing resourcing for maintenance? But the PM built it in a day?
And how exactly is this different from outsourcing the work to programmers in India who work for $6000 a year?
You can go back to the 1960s and COBOL was making the exact same claims as Gen AI today.
You're absolutely right.
But no one is safe. Soon the AI will be better at CEOing.
That's the singularity you're talking about. AI takes every role humans can do and humans just enjoy life and live forever.
Nah, they will fine-tune a local LLM to replace the board and be always loyal to the CEO.
Elon is way ahead, he did it with mere meatbags.
Don't worry, I'm sure they'll find ways to say their jobs can only be done by humans. Even the Pope is denouncing AI out of fear that it'll replace God.
CEOs and the C-suite in general are closest to the money. They are the safest.
That is pretty much the only metric that matters in the end.
Honestly, middle management is going to go extinct before the engineers do.
Why, more psychopathic than Musk?
There's still a lot of confusion about where AI is going to land - there's no doubt that it's helpful, much the same way spell checkers, IDEs, linters, Grammarly, etc. were.
But the current layoffs "because AI is taking over" are pure BS. There was an overhire during the lockdowns, and now there's a correction (recall that people were complaining for a while that they landed a job at FAANG only for it to be doing... nothing).
That correction is what's affecting salaries (and "power"), not AI.
/signed someone actually interested in AI and SWE
When I see actual products produced by these "product managers who are writing detailed specs" that don't fall over and die at the first hurdle (see: every vibe-coded, outsourced, half-assed PoS on the planet), I will change my mind.
Until then "Computer says No"
What does any of this have to do with what Rob has written?
> When a product manager can generate a highly detailed and working demo of what he wants in 5 minutes using gen AI
GenAI is also better than a human product manager at analyzing telemetry, designing features, and prioritizing issues.
Nobody is really safe.
I'm at a Big Tech company, and our org has its sights on automating product manager work. Idea generation grounded in business metrics and context that you can feed to an LLM is a simpler problem to solve than trying to automate end-to-end engineering workflows.
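To make that concrete, here is a minimal sketch of what that kind of grounded idea generation could look like. Everything in it - the metric names, the numbers, and the call_llm stub - is hypothetical for illustration, not any team's actual pipeline:

    # Hypothetical sketch: feed business metrics and context to an LLM
    # and ask for prioritized feature ideas. call_llm is a stand-in for
    # whatever model client you actually use; the metrics are made up.
    import json

    def call_llm(prompt: str) -> str:
        # Canned response so the sketch runs end to end without a real model.
        return "1. Fix export reliability (targets churn_rate_30d) ..."

    metrics = {
        "weekly_active_users": 41200,
        "churn_rate_30d": 0.071,
        "top_support_topics": ["export fails", "slow dashboard"],
    }

    prompt = (
        "Act as a product manager. Business metrics:\n"
        + json.dumps(metrics, indent=2)
        + "\nPropose three feature ideas, each tied to a metric it should "
        "move, ranked by expected impact."
    )

    print(call_llm(prompt))

The hard part is deciding which metrics and context go into the prompt; the generation step itself is the easy bit, which is exactly why it's a simpler automation target than an end-to-end engineering workflow.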
Agreed.
Hence, I'm heavily invested in compute and energy stocks. At the end of the day, the person who has more compute and energy will win.
Many people have pointed out that if AI gets better at writing code and stops generating slop, then programmers' roles will evolve toward project management. People with tech backgrounds will still be needed until AI can completely take over without any human involvement.
Nope, and I wholeheartedly agree with Pike's disgust at these companies, especially for what they are doing to the planet.
Very true... AI engineers are earning $100mn; I doubt Rob Pike ever earned that. Maybe $10mn.
This is the reality, and it is happening at a faster pace now. A junior engineer can produce something interesting quickly, without too much attitude.
Everybody in the company envies the developers and the respect they get, especially the sales people.
The golden era of devs as kings has started crumbling.
Producing something interesting has never been an issue for a junior engineer. I built lots of stuff that I still think is interesting when I was still a junior and I was neither unique nor special. Any idiot could always go to a book store and buy a book on C++ or JavaScript and write software to build something interesting. High-school me was one such idiot.
"Senior" is much more about making sure what you're working on is polished and works as expected and understanding edge cases. Getting the first 80% of a project was always the easy part; the last 20% is the part that ends up mattering the most, and also the part that AI tends to be especially bad at.
It will certainly get better, and I'm all for it honestly, but I do find it a little annoying that people will see a quick demo of AI doing something interesting really quickly and then conclude that that is the hard part; even before GenAI, we had hackathons where people would make cool demos in a day or two, but there's a reason most of those demos weren't immediately put onto store shelves without revision.
This is very true. And similarly for the recently passed era of googling, copying and pasting, and gluing together something that works. The easy 80% of turning specs into code.
Beyond this issue of translating product specs to actual features, there is the fundamental limit that most companies don't have a lot of good ideas. The delay and cost incurred by "old style" development was in a lot of cases a helpful limiter -- it gave more time to change course, and dumb and expensive ideas were killed or never prioritized.
With LLMs, the speed of development is increasing but the good ideas remain pretty limited. So we grind out the backlog of loudest-customer requests faster, while trying to keep the tech debt from growing out of control. All while dealing with shrinking staff, caused by layoffs prompted either by the 2020-22 overhiring or simply by peacocking from CEOs who want to demonstrate their company's AI prowess by cutting headcount.
At least in my company, none of this has actually increased revenue.
So part of me thinks this will mean a durable role for the best product designers -- those with a clear vision -- and the kinds of engineers that can keep the whole system working sanely. But maybe even that will not really be a niche since anything made public can be copied so much faster.
Honestly, I think a lot of companies have been grossly overhiring engineers, even well before generative AI; I think a lot of companies cannot actually justify engineering teams as large as they have, but they keep all these engineers because OtherBigCo has a lot of engineers, and if OtherBigCo has that many, it must be important.
Intentionally or not, generative AI might be an excuse to cut staff down to something that's actually more sustainable for the company.