I created an account and started reading this site primarily for programming news when r/programming took a precipitous dive in quality around 2020 or so. Before that, it was an example of one of the few good communities there, but it quickly became show and tell (ironically, this was against its unenforced rules). And any really interesting posts had no discussion. But then I noticed the "Other Communities" tab would show posts from a sub that tracked HN posts, and suddenly I was able to get great information. A post about CockroachDB that had 20 boorish comments complaining about its name over there would have its designer over here answering technical questions about its capabilities.
THAT SAID, I think this might be what gets me to go back to that place. I used to come here to read about new Python tooling, the latest database development news, interesting thinkpieces on development practices, etc. Now it's dominated by AI evangelism, "I'm Showing HN™ What I Used My Claude Tokens On :)", AI complaining, AI agent strategies, news about AI's impact on the industry, etc. There are some non-AI posts, but not as many good ones as there used to be, and a lot of the non-AI posts quickly turn out to be AI-written. Because the authors respect their own time as writers greatly and my time as a reader not at all. It's ClankerNews; the Hackers are in short supply.
But hasn't it gone down in quality with broader mainstream appeal, more AI slop, and just general self-promotion? I feel like a lot of niche communities have also lost their core or original user bases, which are not as active any more, or it could just be me? For example, off the top of my head without digging too deep, r/juststart used to be very high signal and strongly moderated, but now not so much. But, on the other hand, I did discover r/laundry recently, with some awesome content around "spa day", but again that's mainly one user responsible. I guess another big gripe is having to use the Reddit mobile app after they closed their APIs and shut down third-party apps, because now I can't browse; it's more feed-like. Sorry for the ramble, not sure what my point is, but hoping others can share their experiences and any advice too, I guess.
You think this place, the people in my circles infamously refer to as the "orange site", is considered a bastion of good conversation among the people that don't frequent it?
Reddit is doomed anyway. People are using AI to start threads, and other people are using AI to comment on these threads. You can never know what you're interacting with.
Worse, I am repeatedly being accused nowadays of being an LLM. It probably doesn’t help that I riff-write with only a rough outline of what I want to say, not how to say it.
If the accusation is that I am an inference engine pumping out words based on a trailing context window then I am guilty as charged. It’s just that I run on Fe + C6H12O6 + O2 (a bloodstream charged with lunch and air) instead of y/C/N2 -> Si+e- (sunlight, coal, and wind turned into silicon electrons.)
> If the accusation is that I am an inference engine pumping out words based on a trailing context window then I am guilty as charged. It’s just that I run on Fe + C6H12O6 + O2 (a bloodstream charged with lunch and air) instead of y/C/N2 -> Si+e- (sunlight, coal, and wind turned into silicon electrons.)
This sort of tells me that you are pro-LLM, and most pro-LLM people simply paste the contents of their ChatGPT output and try to pass it off as their own.
Given that you say you aren't, the most likely explanation might be that you are spending a lot of time reading LLM prose, and are starting to write like it now too.
I gave up on r/programming after an article I wrote (thoughtfully, without AI, even though the content might not have been super interesting) got mod-slapped with a stickied comment "This content is low quality, stolen, blogspam, or clearly AI generated".
Ironically, that comment was added three months after I posted the article, when it was nowhere near the front page anymore, in a clearly automated and AI-driven review.
Still salty about it.
I know this is snarky; I'm sorry ahead of time. But I don't know how else to make this point...
The fact that the people running r/programming didn't know to wait until April 2 to publish this tells me that they don't have real-world experience shipping software in a business environment.
We are SO past the point of software being developed without LLMs at _all_; the trend line is never going to reverse. I don't understand the people digging in as zero-LLM absolutists.
I use LLMs yet I don't care to read about them or their usage at all. I can certainly see the reason why a place called "/r/programming" wouldn't want to have discussion about agent usage either, since it's not programming, it's a different activity.
Yeah I totally get the rule. I use LLMs when developing. In fact, I've been out of Claude tokens for the week since Wednesday, but I use Claude specifically for the boring, simple stuff I don't really want to do, but that Claude can. I'm simply not interested in discussing anything LLMs are able to do, it's not interesting.
It makes sense that a programming subreddit first and foremost discusses programming (the skill). We can go complain about Claude somewhere else if we want to.
I think they just don't want every post to be about LLMs, vibe coding, harnesses, and whether Claude is down.
Some subreddits forbid memes, because otherwise they get flooded and the good content drowns in them.
Some subreddits only allow certain content on certain days to counter this.
What do you want the mods to do?
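For illustration, the blunt keyword screen a mod team might fall back on can be sketched like this (the keyword list and post titles are invented here; real subreddit automation such as AutoModerator is configured differently, this is just the logic):

```python
# Sketch of a blunt title-keyword filter, the kind of blanket rule a
# flooded subreddit might reach for. Keywords and queue are hypothetical.

LLM_KEYWORDS = {"llm", "claude", "gpt", "copilot", "vibe coding", "vibe-coding", "agent"}

def is_llm_post(title: str) -> bool:
    # Case-insensitive substring match against the banned-topic list.
    t = title.lower()
    return any(kw in t for kw in LLM_KEYWORDS)

queue = [
    "Show r/programming: my vibe-coding workflow with Claude",
    "Understanding B-tree page splits in Postgres",
    "Is Claude down again?",
]

# Keep only posts that pass the filter.
kept = [t for t in queue if not is_llm_post(t)]
print(kept)  # only the Postgres post survives
```

The obvious cost is exactly the one the r/wnba story below illustrates: a crude filter removes nearly everything on the topic, good posts included.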
It may not be an in-denial, heads-in-the-sand situation.
Sometimes when a topic gets too popular, it drowns out all the other topics. At that point, aren't they just a glorified version of r/llm?
I'll give you one personal example:
The year Caitlin Clark was drafted into the WNBA,
r/wnba went from a subreddit of 9,000 to eventually 200k subs.
We were bombarded with CC posts every hour.
- Some of it was trolls staging a race war (this was during US elections).
- Some of it was genuine CC fans, who wanted to talk about CC.
- Some of it was bball nerds, who you know... wanted to talk about a bball player in a bball forum (regardless of who that bball player happens to be).
So what happened was, on any given day, 80% of the front page was CC content.
At that point, we might as well have been r/caitlinclark.
So the mods did something drastic and controversial. They banned all "low effort" CC content.
WTF does "low effort" mean? It pretty much meant 99% of CC posts got removed.
The forum went back to something that resembled a bball forum: one that talked about other players and other teams, not just Caitlin Clark.
> I don't understand the people digging in as zero LLM absolutists.
Relevant read: https://en.wikipedia.org/wiki/Luddite
I feel like it’s easy to understand what’s motivating these individuals to take that stance.
Definitely not the same. Luddites were fighting for humane working conditions; breaking machines was just a means to an end. They weren’t doing it because machines were the problem.
Anti AI crowd on the other hand just doesn’t like AI. A modern equivalent of a Luddite would be someone going on strike to protest firings.
I hate AI video, I hate AI art, but if you are pretending that AI isn't going to be writing code for 99% of projects going forward, you are absolutely kidding yourself.
AI video and art is going to be increasingly used in advertising, news/reporting, games, etc. Therefore, you aren't allowed to hate it or even complain about it. Right?
I have yet to run into any serious project in the wild that is using LLMs for development. I have seen vibecoded intern prototypes that took half a day to vet and dismiss because they were completely useless.
I'm sure your experience is different, but you can't _seriously_ claim we're "past the point" of not using LLMs for programming.
Vibecoding is a fundamentally different kind of activity than actual programming. It's a pure, delusional dopamine rush, compared to the deliberate engineering required to build quality software.
It’s juvenile to consider all LLM assisted coding as vibecoding. I’m not going to expand here because this topic is about as much fun to discuss as politics, but coding assistant tools are just tools.
If you give a regular person a race car, they will crash it about as fast as their vibecoded app crashes. Give the same race car to a pro and it's a different story.
I still think this was the right decision by the programming mods there. Talking about tools is pretty boring, and you need to train to use something like an LLM assistant. No one who can't program in a language should be using an LLM to learn it unless they already know about 2-3 other languages, IMO.
Nah I think it really is more nuanced than that. It is true that a non-technical person's vibe-coded side-hustle is completely different than how a professional developer may ship genAI code, but we're willfully glossing over the real problem that professionals are pushing out TONS of genAI code that's closer to vibes than it is to the pre-AI expectations on pushing to prod.
For CRUD apps though, the intern closing the ticket literally 30 minutes after it's created is really hard to battle against. Especially when those tickets were created by suits.
I generally agree that while I think vibe-coding is here to stay, it's different from designing useful products and systems, and I don't know how to convince colleagues that we should uhh be careful about all this code we're pushing. I fear all they see is the guy aging out.
Good decision.
AI programming is fundamentally different from programming, and as such the discussions merit separate forums.
If r/programming wants to be the one solely focusing on programming then power to them. Discussing both in combination also makes sense, but the value of reddit is having a subreddit for anything and “just programming” should be on the list.
> AI programming is fundamentally different from programming
It's really not. Maybe vibecoding, in its original definition (not looking at generated code) is fundamentally different. But most people are not vibe coding outside of pet projects, at least yet.
There can't be any interesting discussion about AI programming. Every conversation boils down to what skill files you use, or how Opus 4.6 compares to Codex, or how well you can manage 16 parallel agents.
There genuinely is a lot of interesting discussion to be had about LLMs, and I know this is true because I discuss things with my coworkers daily and learn a lot. I do admit that conversation online about LLMs is frequently lacking. I think it's a bit like politics - everyone has an opinion about it, so unfortunately online discourse devolves to the lowest common denominator. Hey guys, have you noticed that if you use LLMs frequently it's possible you'll forget to think critically?
But "there can't be any interesting discussion about AI programming" is completely false.
I disagree; you could reduce basically anything to this: 'there can't be any interesting discussion about React. Every conversation boils down to which framework you use, or how you manage state, or whether you use TypeScript or JavaScript.'
All of those are opinions about programming. Which framework, which language, etc.
Conversations about which model to use aren’t conversations about programming.
A better analogy would be some topic that you can’t discuss without it boiling down to which text editor you should use. It’s related to programming, a little. But it’s not programming.
That is exactly why I left reddit. r/javascript had almost completely abandoned JavaScript discussions for React and Angular while r/programming was half filled with irrational JavaScript fear nonsense.
My pet peeve with all LLM discourse is whenever someone mentions any problem they experience with LLMs or any mistake they make, someone comments that humans make the same mistake.
You have not seen my recent WhatsApp chats. A pal and I are talking about what we're doing with Claude Code, and it's quite interesting!
Just like discussions about traditional programming were never only about syntax and type systems, AI discussions aren't only about prompts and harnesses. I find there's quite a bit of overlap, actually! "How do you approach this problem?" is a question that is valid in both discussions, for example.
This is far too negative and reductionist.
It's like saying there are no interesting discussions about programming: just whether OOP is overhyped, whether Python is slow, or how well you can convert a C codebase to Rust.
> or how well you can manage 16 parallel agents.
Claude does that for me. :)
That isn't why /r/programming banned it. They banned it because every discussion about LLMs inevitably devolves into discussions about AI slop in varying levels of civility, and the rare good LLM submissions/discussions do not offset it.
Other tech-adjacent subreddits such as /r/rust have banned LLM discussion for similar, more pragmatic reasons.
There’s something off about Reddit. Either I grew up or it became hollow from within. Just angry people scolding each other all the time.
There are some true gems however but usually in smaller focused subreddits.
Yeah, the smaller subreddits are good. The problem is it’s basically killed off alternative forums.
I never thought I’d miss vBulletin so much.
I think any platform becomes terrible over time once it hits a certain level of mass appeal. I loved Reddit and Quora in 2010.
They switched their "best" sorting algorithm to be engagement-based rather than upvote-based [1]. Upvotes are just one of many metrics; heavy comment interaction is another. It incentivizes rage bait and performing for the crowd with every comment and post. They also switched to an almost purely moderator-curated front page [2] rather than letting users vote.
1: https://www.reddit.com/r/blog/comments/o5tjcn/evolving_the_b...
2: https://news.ycombinator.com/item?id=36040282
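The shift the parent describes, from pure upvote ranking to engagement-weighted ranking, can be sketched with a toy scoring function (the weights and posts below are entirely made up; Reddit's actual scoring is not public):

```python
# Toy comparison of upvote-based vs engagement-weighted ranking.
# All weights and example posts are invented for illustration.

def upvote_score(post):
    # Classic ranking signal: net votes, nothing else.
    return post["ups"] - post["downs"]

def engagement_score(post):
    # Hypothetical blend: votes are just one signal among several,
    # and heavy comment activity can outrank raw approval.
    return (
        1.0 * (post["ups"] - post["downs"])
        + 2.0 * post["comments"]
        + 0.5 * post["shares"]
    )

posts = [
    {"id": "calm-technical-post", "ups": 500, "downs": 20, "comments": 40, "shares": 10},
    {"id": "rage-bait", "ups": 220, "downs": 180, "comments": 600, "shares": 90},
]

by_votes = sorted(posts, key=upvote_score, reverse=True)
by_engagement = sorted(posts, key=engagement_score, reverse=True)

print([p["id"] for p in by_votes])       # votes favor the technical post
print([p["id"] for p in by_engagement])  # engagement favors the rage bait
```

Under the vote-only score the well-received technical post wins; once comment volume is weighted in, the divisive post outranks it, which is exactly the incentive shift being complained about.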
I've wondered the same thing, but you growing up definitely has to be a factor.
> Just angry people scolding each other all the time.
This really does describe it perfectly. I don't know about others, but focusing on my career pulled me out of a relatively low-income and dysfunctional environment. Reddit too often reminds me of people I used to know in real life.
It's been so many years since then, and finding and living a better life was so intertwined with my young adulthood that I almost convinced myself people like that don't exist in real life anymore. I thought the whole world had moved on, but search results nowadays prioritize Reddit enough that I'm routinely proven wrong.
Contrary to popular belief, I don't think most of the stuff on there is fake. Those people probably really are like that. Certain ways of thinking can become so normalized that they don't even see what there is to be ashamed about. What I sense the most on there is a lot of stress and the resulting irrational fears that pour out of people when they feel too much pressure. People under a seemingly endless and vague threat will go a little nuts and start to swat at anything that disturbs their worldview.
Reddit is still a step above other alternatives.
A good test for any community is: try posting something that is factually incorrect but that supports the agenda of the community. Does the community call it out? On Reddit, it does happen.
In my experience, that kind of thing might only get called out by moderators or by the outliers who reply the most. They're the ones with the strongest interest in proving a point. Only then will the rest of the community dogpile. Otherwise, it goes ignored.
HN should also limit all these self-promoting AI posts.
Favourite genres of posts on HN in the past 2 years:
* “I am bullish about AI”
* “I am an AI skeptic, [long rambling], but overall, I am bullish about AI”
It’s amazing how even criticism of the technology somehow ends up being a hype post. At least there are still places on the Internet where we can have a serious discussion about the downsides.
As someone who recently wrote the latter kind of post (https://news.ycombinator.com/item?id=47183527), the more nuanced approach that "AI has good and bad aspects" is more reflective of the real world than an absolute "AI is good" or "AI is bad", and at the least it's more conducive to civil discussion.
See dang’s comments on https://news.ycombinator.com/item?id=47340079 . (That link itself is a submission about HN’s recent guidelines changes to include “Don't post generated comments or AI-edited comments. HN is for conversation between humans.”)
I interpreted the GP as "personal blog posts about AI/LLMs", not LLM-generated comments.
dang’s comments in the link above address “Show HN” submissions. (That was my interpretation of “self-promoting AI posts”… :)
I've been hiding them all. Makes the front page look a lot better.
That sounds absolutely amazing. I will reconsider creating a new account and using Reddit again after walking away about a decade ago.
I deleted my account a few years ago, I might actually create one now, it'll be preferable to HN if they stick with this new rule.
Maybe this was a genius move made precisely to be ambiguous on whether it was April Fools or not... so that the author can later read the room and clarify whether it was or was not April Fools, without much repercussion either way.
Nope:
> Timing just worked out this way. New month, ideal timing for testing a new rule.
If you enjoy comedy, you should check the status of subreddits like /r/selfhosted or /r/homelab, etc. I find them interesting because they are on the edge of computers pro-users and software developers. Used to be a nice community
Now it’s people sharing AI apps that look exactly like other AI apps that they have never heard of [1]
Project rise then implode hilariously in a month [2]
An ebook management project that grew over a year with pretty conservative feature set, then in 3 months implements every ebook feature under the sun, breaks every thing, then implodes. Funniest thing is when the “AI Slop” callout is itself AI written and no body notices. [3]
Like… amazing comedy. Then after the owner deletes the repo, 10 people have to role-play the hero who "has the code", as if clicking Fork on GitHub were the mark of a true hacker.
[1] https://old.reddit.com/r/selfhosted/comments/1r9s2rn/musicgr...
[2] https://old.reddit.com/r/selfhosted/comments/1rckopd/huntarr...
[3] https://old.reddit.com/r/selfhosted/comments/1rs275q/psa_thi...
A question to people here: what’s a smallish tech community with a slightly more serious level of discourse than this subreddit?
https://lobste.rs/
Can y’all give me an invite?
...Hacker News?
This is to be expected. There's a definite split in the engineering community between those who are embracing AI, and those who are rejecting it. It's now become political, like systemd and wayland.
The takes on LLM programming on reddit are hilarious and borderline sad. It's way past the point of denial, now into delusions.
They truly believe LLMs are close to useless and won't improve. They believe it's all just a bubble that will pop and people will go back to coding character by character.
/r/horsecarriage bans all discussion of cars
/r/assembly bans all discussion of 4GL
LLM programming isn't going away by not talking about it. It's time to move on, and eventually consider farming.
> /r/horsecarriage bans all discussion of cars
Makes sense. If I'm looking to read discussions about stables selection, feed prices, etc, why would discussions of spark plugs be relevant?
> /r/assembly bans all discussion of 4GL
Also makes sense; people wanting to discuss register allocation, bit twiddling, etc probably aren't interested in insurance claims taxonomies or similar.
> LLM programming isn't going away by not talking about it.
Right, but is the context still /r/programming? After all, there are tons of subreddits you can go to to discuss LLM programming. Why do you need to shove it into a space created for human thoughts on programming?
> It's time to move on, and eventually considering farming.
Okay, understood, but my question still stands - why conflate programming with vibe-coding?
/r/horsecarriages banning discussion of cars makes sense though. It's not a horse carriage. If you want to discuss cars, go to /r/cars.
OK, I see your point: the problem is more about being off-topic than about LLM programming itself. And that's correct, we are strict people, after all.
It's not about wishing it goes away, it's that people don't want to see JavaScript/Java/Swift blog articles when they visit r/assembly.
More like /r/cars bans all discussion of electric cars.
As others have noted in the thread, the timing is suspicious - it could be an April Fools' joke.
The original post was edited with "this is not April Fool's"
I created an account and started reading this site primarily for programming news when r/programming took a precipitous dive in quality around 2020 or so. Before that, it was one of the few good communities on Reddit, but it quickly became show and tell (ironically, this was against its unenforced rules), and any genuinely interesting posts had no discussion. Then I noticed the "Other Communities" tab would show posts from a sub that tracked HN posts, and suddenly I was able to get great information. A post about CockroachDB that had 20 boorish comments complaining about its name over there would have its designer over here answering technical questions about its capabilities.
THAT SAID, I think this might be what gets me to go back to that place. I used to come here to read about new Python tooling, the latest database development news, interesting thinkpieces on development practices, etc. Now it's dominated by AI evangelism, "I'm Showing HN™ What I Used My Claude Tokens On :)", AI complaining, AI agent strategies, AI's-impact-on-the-industry news, etc. There are some non-AI posts, but not as many good ones as there used to be, and a lot of the non-AI posts quickly turn out to be AI-written. Because the authors respect their time as writers greatly and my time as a reader not at all. It's ClankerNews, and the Hackers are in short supply.
> Please don't post comments saying that HN is turning into Reddit. It's a semi-noob illusion, as old as the hills.
If only, just this once, it were true. Sigh.
Sweet, so the LLMs can interact on topics that aren't about LLMs.
People still use Reddit?
What do you recommend instead? Reddit is like reading YouTube comments nowadays, I miss when discussions were literate and informed.
Ignorance isn't bliss. They've never yet had a year-over-year downturn in their userbase.
But hasn't it gone down in quality with broader mainstream appeal, more AI slop, and general self-promotion? I feel like a lot of niche communities have also lost their core or original user bases, which are not as active any more, or it could just be me? For example, off the top of my head without digging too deep, r/juststart used to be very high signal and strongly moderated, but now not so much. On the other hand, I did discover r/laundry recently with some awesome content around "spa day", but again that's mainly one user responsible. I guess another big gripe is having to use the Reddit mobile app after they closed their APIs and shut down third-party apps, because now I can't browse; it's more feed-like. Sorry for the ramble, I'm not sure what my point is, but I'm hoping others can share their experiences and any advice too, I guess.
You think this place, which people in my circles infamously refer to as the "orange site", is considered a bastion of good conversation among people who don't frequent it?
Not being able to discuss the biggest change to our job in living memory is such a reddit thing to do, just sticking their heads in the sand.
Reddit is doomed anyway. People are using AI to start threads, and other people are using AI to comment on these threads. You can never know what you're interacting with.
Do you think that this is not happening here?
Worse, I am repeatedly being accused nowadays of being an LLM. It probably doesn’t help that I riff-write with only a rough outline of what I want to say, not how to say it.
If the accusation is that I am an inference engine pumping out words based on a trailing context window then I am guilty as charged. It’s just that I run on Fe + C6H12O6 + O2 (a bloodstream charged with lunch and air) instead of y/C/N2 -> Si+e- (sunlight, coal, and wind turned into silicon electrons.)
> If the accusation is that I am an inference engine pumping out words based on a trailing context window then I am guilty as charged. It’s just that I run on Fe + C6H12O6 + O2 (a bloodstream charged with lunch and air) instead of y/C/N2 -> Si+e- (sunlight, coal, and wind turned into silicon electrons.)
This sort of tells me that you are pro-LLM, and most pro-LLM people mostly paste the contents of their ChatGPT output and try to pass it off as their own.
Given that you say you aren't, the most likely explanation might be that you are spending a lot of time reading LLM prose, and are starting to write like it now too.
Got any proof?
Check a larger thread. It is pretty clear since there are people doing nothing to hide the writing style.
You are absoawesomeamazingaffirmativeabundantauthenticabsolutely right!
> Check a larger thread. It is pretty clear
It tends to get downvoted and flagged.
If you email comment links to the mods that you believe are AI-assisted, they’ll review and act on that. Footer contact link. It’s not hopeless.
Clankers outta here! Wish there was an HN toggle to enable hiding all LLM programming submissions.