"Just show me the prompt."
If you don't have time, just write the damn issue as you normally would. I don't quite understand why one would waste so many resources and so much compute to expand some lazily conceived half-sentence into 10 paragraphs, as if it scores them some points.
If you don't have time to write an issue yourself or carefully proofread whatever the LLM makes up for you, whom are you trying to fool by making it look pretty? At least if it is visibly lazy, anyone knows to treat it with the appropriate grain of salt.
Even if you are one of those who likes to code by correcting LLMs all the time, surely you understand that if your LLM can make candy out of poo when you post an issue, then it can do the exact same thing when it processes the issue and makes a PR. Likely next month it will do a better job of parsing your quick writing, and having it immediately "upscaled" would only hinder future performance.
What would make sense for me is to use an AI to turn implicit context that is only there in the moment into explicit context that is stored in the ticket.
E.g. maybe you have your application open in a browser and are currently viewing a page with a very prominent red button. You hit that /issue command with "button should be yellow not red".
That half-sentence makes sense if you also have that open browser window as context, but would be completely cryptic without it.
An AI could use both the input and the browser window to generate a description like "The background color of the #submit_unsafe button widget in frontend/settings/advanced.tsx should be changed from red to yellow." or something.
Sort of like a semantic equivalent to realpath if you want.
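A minimal sketch of the plumbing I have in mind, purely to illustrate (every name here is made up, and the model call is a stand-in for whatever API you'd actually use):

    // Hypothetical only: resolve the implicit context (what the reporter is
    // looking at right now) into explicit context that lives in the ticket.
    interface PageContext {
      url: string;               // the page the reporter had open
      focusedSelector?: string;  // e.g. "#submit_unsafe", the element under the cursor
      componentFile?: string;    // e.g. "frontend/settings/advanced.tsx", via a source-map lookup
    }

    async function expandIssue(note: string, ctx: PageContext): Promise<string> {
      // The model is only asked to merge the note with the captured context,
      // not to inflate it into ten paragraphs.
      const prompt = [
        `Reporter note: ${note}`,
        `URL: ${ctx.url}`,
        ctx.focusedSelector ? `Element: ${ctx.focusedSelector}` : "",
        ctx.componentFile ? `Component file: ${ctx.componentFile}` : "",
        "Rewrite the note so it makes sense without the browser window open.",
        "Keep it to a few sentences. Do not invent requirements.",
      ].filter(Boolean).join("\n");
      return callSomeModel(prompt); // placeholder, not a real SDK function
    }

    declare function callSomeModel(prompt: string): Promise<string>;

The point being: the output should still be short, just no longer dependent on whatever happened to be on my screen at the time.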
I do see utility in that.
I think a URL and screenshot would be way more useful than a bunch of text for that use case.
But maybe the button only appears on that URL if you've first pressed something else, or if you're logged in/out, or maybe that URL has a different token each day that makes it seem like a completely different URL, or...
> You hit that /issue command with "button should be yellow not red".
Wouldn’t it be easier to just open the inspector, find the css class, grep the source code, and then edit the properties? It could be even easier in an SPA where you just have to find the component file.
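To be fair, once you've found the file the whole "fix" can be a one-token edit. A made-up React/Tailwind-style component, purely for illustration (real file, id, and class names will differ):

    // Hypothetical component; the entire change is bg-red-600 -> bg-yellow-400.
    import * as React from "react";

    export function SubmitButton() {
      return (
        <button id="submit_unsafe" className="btn bg-yellow-400">
          Submit
        </button>
      );
    }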
If you are a web programmer sure. I write embedded code, I know all the things you are talking about in the abstract, but I'm not good at them because none of them have ever been relevant for anything I do. Give me a few hours and I can figure it out (maybe minutes, maybe days? - if you know this area your guess might be better than mine), but it isn't something worth my time.
There are a lot of people who are not programmers at all. I can teach my plumber everything you said (learning it myself is the easy part), but it will take years. In the end they just know "that button I'm pointing my finger at should be yellow not red". How do we transfer that pointed finger to a ticket is the question here.
> How do we transfer that pointed finger to a ticket is the question here.
There’s a reason the Support and IT Technician role exists. They’re there for talking to the end user. And they in turn will write a proper report to Engineering.
If you want to wear both hats at once, that is fine. If you want an agent to be your support middleman, that is also fine. But most LLM proponents are acting like it’s a miracle solution to some engineering bottleneck.
> where you just have to find the component file.
This can be a substantial effort, especially if you're not familiar with the project.
On the Web, the github.com/*/*/issues namespace is home to the worst bugtracker behavior in the world. A bugtracker should be restricted to bug reports and (well-informed) proposals and discussion about the bug/bugfix. The bug report should contain, at minimum and at maximum:
1. Clear steps to reproduce (ideally, using the prepared testcase as input, if applicable)
2. A description of the behavior observed from the program
3. A description of the expected behavior
4. Optionally, your justification for why the program should be changed to behave the way described in #3 and not #2
Everything else belongs on a message board, mailing list, or social media.
But this is all totally foreign to, like, 80% of GitHub's userbase (including the majority of the project managers aka maintainers who are in charge of allowing/disallowing the sorts of things that people post as a way of shaping the tone and tenor of the space).
> Everything else belongs on a message board, mailing list, or social media.
There's a reason that "issues" on collaborative code platforms (not just GH but also GL) end up being used for much more than bugs:
- message boards suffer from the SSO friction issue. No thanks, I will not sign up at some phpBB board of questionable admin quality that will get 0wned sooner rather than later, or have the board owner bombard me with advertising for themselves.
- mailing lists are even worse usability-wise because these by design leak your email address; on top of that, their management UI often enough is Mailman, which means it probably still stores passwords in cleartext; and spam filters, attachment size limits and overeager virus scanners make it a living hell
- IRC suffers from context loss. Netsplit, go for a smoke and the laptop goes to sleep, whoops, you disconnected and don't see what happened in the meantime. Yes, there's bouncers, but honestly, the UX sucks hard. Also, no file transfers to a channel, no native screenshot/paste functionality.
- Discord, Slack etc. solve the pains of IRC but are walled gardens
- Social media... yikes. No, no, no. Eventually, people that follow both you and the author of some FOSS software get pissed off by your conversation spamming their feed. (Too) many are still only active on Twitter which excludes people who don't want to be on that hellsite. Bluesky, good luck finding non-commies there. Mastodon, good luck and pray that your instance operator and the instance operator of the project team didn't end up in some bxtchfight escalating in defederation. Facebook groups, not everyone wants to leak their real name.
- messenger groups (especially Telegram)... blergh. You will drown in spam.
GH/GL are the sweet spot between UX/SSO friction (because pretty much everyone who would want to file an issue has an account) and features, and on top of that both platforms have deals with email providers preventing their mail from getting blocked. That's why these two platforms are so far superior to everything else mentioned.
> message boards suffer from the SSO friction issue
GitHub is an SSO provider and has been for a long time. This criticism is ignorant.
Aside from that, there's nothing stopping anyone from using GitHub's dedicated message boards for message board stuff, or, before those existed, shunting it all off into the "issues" of a separate "$PROJECT/community-bullshit" "repo" instead of cluttering up the actual bugtracker.
> Social media... yikes. No, no, no.
I'm talking about the appropriate-for-social-media stuff people are already posting on GitHub issues. It's like you started writing your comment and lost the context. People today are already misusing GitHub issues for this. I'm saying keep the stuff best kept to social media and email... on social media and email. Don't clutter the bugtracker with it, and for project managers: don't let other users do it either. (You will lose contributors who know how to use a bugtracker efficaciously and are accustomed to it but have a fixed time budget and don't want to have to sift through junk for the privilege of doing free and thankless QA on your software.)
> You will drown in spam.
The irony. Help. It burns.
For emphasis: Everything that isn't a bug belongs on a message board, mailing list, or social media, and not on the bugtracker. Anyone who can't abide by this simple, totally reasonable request should be booted.
> GitHub is an SSO provider and has been for a long time. This criticism is ignorant.
The problem is, a "sign up to contribute" is a friction source. It will almost always leak my email. In contrast, I'm already logged in to Github.
What I'm hearing is, "Your Honor, I know that I shouldn't have blown through that stop sign, but hear me out: I wanted to. If I didn't, then I would have had to stop and wait for traffic to pass and respect other people's time. Does that strike you as reasonable? I think we can agree that it does not."
What do you think about Discourse in particular?
GH has a "Discussions" feature for message-boards attached to your project. Same sign-on as GH. Nobody turns it on.
> - mailing lists are even worse usability-wise because these by design leak your email address...
So do git and GitHub. Last I checked, authoring a git commit with an email address associated with your GitHub account is what makes GitHub attribute that commit to your account. I assume GitLab works in a very similar way.
"But 'git clone' is soooo much harder than reading through mailing list archives!" Nah.
If you don't want to expose your email address but you still want commits to be associated to your account, Github lets you use a noreply email address [1].
[1] https://docs.github.com/en/account-and-profile/reference/ema...
I actually was looking into this recently (exploring how much of a PITA changing my github email would be) and found it interesting that, while in principle your GH email is public to anyone interfacing with your commits via `git`, they have gone to some length to avoid displaying it anywhere in the web interface. The docs actually mention it being shown on your 'profile' page but I don't see it anywhere there.
> Github lets you use a noreply email address
Oh, I was unaware of that. I've not seen anyone use it, [0] but I've only paid any attention to the Big Corporate and Traditional Hacker populations.
Thanks much for the information.
[0] I'm certain that folks do use it, so folks shouldn't bother pointing out people that do.
It's the default on new accounts for commits made through the GitHub web interface.
If you set user.email to a real email address using git config on your machine and decide to author and publish commits with it, then GitHub will, of course, not be able to stop you (aside from maybe rejecting the commits when you try to push them). It can't just arbitrarily rewrite the email address in the commit. That would break Git's data model.
I don't quite understand why one would waste resources on restating the points of the article in a four-paragraph hn comment, as if it scores them some points.
> I don't quite understand why one would waste so many resources and so much compute to expand some lazily conceived half-sentence into 10 paragraphs, as if it scores them some points.
Because it does. The goal here isn't to create good code, it's to create an impression of a person who writes good code. Even now, when software careers are in freefall, for many people in poor countries it's still their only way out of poverty, so they'll try everything possible to build a portfolio and get a job, and the suffering of your little pet project isn't a part of the equation. Those people aren't trying to get Nobel prizes, they're trying to get any job that isn't farming with literal medieval-era technology.
My very radical personal opinion is that either we have small elitist circles of trust, or the internet will remain a global ghetto.
The context window before a prompt is often large and contains all sorts of information, though; it wouldn't be just a prompt in isolation.
I was going by this example:
> /issue you know that paint bucket in google docs i want that for tldraw so that I can copy styles from one shape and paste it to another, if those styles exist in the other shape. i want to like slurp up the styles
What kind of context may be there?
Also, the entire repository and issue tracker is context. Over time it gets only more complete.
I don't get it either. The LLM-generated issue from the above prompt is just the same information written more verbosely.
The entire chat log up until the user asks it to generate a summary. And maybe "memories" and custom system prompts for good measure. A lot of potentially private information, in other words. "Just the prompt" only works in a very particular case where you ask it for something out of the blue.
> AI changed all of that. My low-effort issues were becoming low-effort pull requests, with AI doing both sides of the work. My poor Claude had produced a nonsense issue causing the contributor's poor Claude to produce a nonsense solution. The thing is, my shitty AI issue was providing value.
Seems like the shitty AI issue did more harm than good?
As I understood it, the "issues" were more like todo list items of "look into whether this is an actual problem" than "this should be fixed".
Yea, I think the author is wrong as well. I have a similar skill, but the key difference is the instructions are just to fix typos. Why the author would not just use Claude as the plumbing and retain his old nonsense issues is beyond me.
This is the bit that struck me as odd. The author is creating issue slop but blames the contributor for treating it as genuine. The author wants to continue creating slop issues and decides that blocking all external contributions is the solution, rather than spending less time creating slop.
Their slop issues do not actually have value because the fixes based on the slop are equal in their sloppiness.
Author could instead create these slop issues in a place where external contributors can't see them instead of shitting on the contributors for not reading their mind.
Really bizarre lack of self awareness. How do the internal contributors deal with the slop? I wonder what they say about this person in private.
Yeah, I'm baffled that "CEO creates low-effort bug report" -> "open source contributor ignores the low quality of that report and nonetheless fixes the issue in his company's product" is what he apparently considered a healthy open source workflow prior.
That's not his previous workflow. The previous workflow was:
"CEO creates low-effort bug report" -> "CEO uses the low-effort bug report as a starting point to further refine the report and eventually fix the issue in his company's product"
The author's fixes based on the slop are good, because he knows the issue is slop and therefore can improve and fix the sloppiness.
Ignoring AI for a moment: I don't expect anyone to be able to write a design-doc from my own random notes about a problem. They are semi-formed, disconnected ideas that need a lot of refinement. I know that and I have plans around them and know much more context, but if some random person were to take them the outcome would be very bad, or at least require a lot more effort.
A random person has very little chance of being successful with that.
This issue is very similar, only with some AI tools intermediating the notes.
> A few years ago I submitted a full TypeScript rewrite of a text editor because I thought it would be fun. I hope the maintainers didn't read it. Sorry.
Love the transparency. To be fair, rewrites are almost impossible to review. Anything >5k lines of diff takes at least multiple review cycles. I don't know how some maintainers do it while also working on the codebase themselves.
> If writing the code is the easy part, why would I want someone else to write it?
Exactly my takeaway from current AI developments as well. I am also confused by corporate or management types who seem to think they are immune to AI developments. If AI ever does get to the point where it can write flawless code, what exactly makes them think they will do any better at composing these tools than the developers who've been working with this technology for years? Their job security rests precisely ON THE FACT that we are limited by time and need managed teams of humans to create larger projects. If this limitation falls, I feel like their jobs would be the first on the chopping block, long before mine as a developer. Competition from tech-savvy individuals would be massive overnight. Very weird horse to bet on unless you are part of a frontier AI company that does actually control the resources.
Ultimately, this would lead to a situation where only the customer-facing (if there are any) or "business-facing" (i.e. C-suite) roles remain. I'm not sure I like that.
I don't think this will be an issue, given history. COBOL was developed so that someone higher up could use more human language to write software. (BASIC too? I don't know, I wasn't around for either).
More recently, low/no-code platforms have been advertised as such... and yet, they don't replace software developers. (In fact, some of the projects my employer takes on are migrations away from low/no-code platforms in favor of code, because performance and other nonfunctionals are hidden away. We had a major outage as a result when traffic increased.)
Do you think any of them cares about the long term? Regardless of AI, your head is always on the chopping block. You always grab that promo in front of you, even if it means you’ll be axed in two years by your own decisions.
I mean I understand that you want your business to not fall behind right now, sure. But I don't understand people in management who are audibly _excited_ about the prospect of these developments even behind closed doors. I guess some of them imagine they are the next Steve Jobs only held back by their dev teams, but most of them are in for a rude awakening lol. And I guess a lot are just grifting. The amount of psychotic B2B SaaS rambling on Twitter is already unbearable as is.
How does the saying go?
>Socialism never took root in America because the poor see themselves not as an exploited proletariat, but as temporarily embarrassed millionaires.
Offtopic, but the funny part about that statement is that the only place the socialists ever got the poor's support was in China during its civil war, and then only because Mao was directly giving (bribing) them land in exchange for their support. Everywhere else, the main population of socialists was government bureaucrats. Ironically, all authoritarian ideologies (fascism too) happen this way, with low-level government officials being the most zealous and the core supporters. The whole "revolution of the people" narrative is just propaganda, like your quote.
> The question is more fundamental. In a world of AI coding assistants, is code from external contributors actually valuable at all?
Everything comes down to this. It's not just open source projects; companies are also slowly adjusting to this reality.
There are roughly two characteristics that humans need in this new environment: long-ranging technical leadership about how the system should be built (Lead+ Software Engineer), and deep product knowledge about how it's used (PM).
>As a high-powered tech CEO, I'm
cough linkedin cringe cough
That's interesting. I assumed this was self-deprecating. I wonder which of us is right.
(The web site doesn't make it very obvious, but it appears to be a UK company, and Ruiz seems to live in the UK. So not impossible!)
That's certainly plausible too. If so perhaps something to save for real life interactions - tone gets lost in blogs
I guarantee that his employees are wishing he would stop fucking up their issue list and get on with being the CEO
>Once or twice, I would begin fixing and cleaning up these PRs, often asking my own Claude to make fixes that benefited from my wider knowledge: use this helper, use our existing UI components, etc. All the while thinking that it would have been easier to vibe code this myself.
I had an odd experience a few weeks ago, when I spent a few minutes trying to find a small program I had written. It suddenly struck me that I could have asked for a new one, in less time than it took to find it.
Guy uses his project's GitHub issues as personal TODO list, realizes his one line GitHub issues look unprofessional, uses AI to hallucinate them into fake but realistic looking issues, and then complains when he gets AI slop PRs.
An alternative idea: Use a TODO list and stop using GitHub Issues as your personal dumping ground, whether you use AI to pad them or not. If the issue requires discussion or more detail and would warrant a proper issue, then make a proper issue.
Arguably, AI just accelerated a trend that was already happening and was already incorrect and unsustainable beforehand. The end of it just came a lot quicker.
The idea of pull requests by anyone everywhere at any time as the default was based on the assumption that we'd only ever encounter other hackers like us. For a time, public discourse acknowledged that this wasn't exactly true, but was very busy framing it as a good thing. Because something something new perspectives, viewpoints, whatever.
Some of that framing was actually true, of course, but often happened to exist in a vacuum, pretending that reality did not exist; downplaying (sometimes to the point of actual gaslighting) the many downsides that came with reduced friction.
Which leads us back to current day, where said reality got supercharged by AI and crashed their car (currently on fire) into your living room.
I feel like we could've avoided these extremes with a bit more modesty, honesty and time. But those values weren't really compatible with our culture in the last 15+ years.
Which leaves me wondering where we will find ourselves 15+ years from now.
I suppose this is banal/obvious to many, but I found this very interesting given the practical context.
>I write code with AI tools. I expect my team to use AI tools too. If you know the codebase and know what you're doing, writing great code has never been easier than with these tools.
This statement describes a transitional state. Contributors became qualified in this way before AI. New contributors using AI from day one will not be qualified in the same way.
>The question is more fundamental. In a world of AI coding assistants, is code from external contributors actually valuable at all?
...
>When code was hard to write and low-effort work was easy to identify, it was worth the cost to review the good stuff. If code is easy to write and bad work is virtually indistinguishable from good, then the value of external contribution is probably less than zero.
A negative net value for external contributions is enough to make the decision: end external contributions.
For the purpose of thinking up a new model, though, unpacking that net is the interesting part. I don't mean sorting between high- and low-effort contributions. I mean making productive use of low-effort one-shots.
AI tools have moved the old bottlenecks and we are trying to find where the new ones are going to settle down.
> Authors would solve a problem in a way that ignored existing patterns
If you’re not writing your code, why do you expect people to read it and follow your lead on whatever convention you prefer?
I get people who hand-write their code being fussy about this, but you start the article off devaluing coding entirely, then pivot to how the way your codebase is written has value that needs to be followed.
It’s either low value or it isn’t, but you can’t approach it as worthless and then complain when others view your code as worthless and not worth reading too.
You should never sign a CLA unless you're getting paid to.
It depends what you're getting out of it. I've signed CLAs because it was more convenient for me to have my PR upstreamed rather than maintaining a fork.
I should have been more nuanced: signing a CLA is the same as releasing your code under MIT license: if your code is worth anything, a large powerful entity will steal it and claim it as their own.
> If writing the code is the easy part, why would I want someone else to write it?
Arguably, because LLM tokens are expensive, LLM-generated code could be considered a donation? But then so is the labor involved, so it's kinda moot. I don't believe people pay software developers to write code for them to contribute to open source projects either (if that makes any sense).
Interesting point. To me, it seems more like those donations where you’re offered some money in exchange for taking an action which you know is going to take more time/cost way more than the donation amount. Tho to be completely fair, it’s similar with large non-LLM pull requests as well.
tldraw can afford to use the latest models without worrying about AI costs, but many open source projects can’t. In those projects, maintainers often know the code best, just like tldraw, and would benefit more from AI credits than from external contributions. I hope something like that gets implemented.
So...donations? Sponsors?
> Once we had the context we needed and the alignment on what we would do, the final implementation would have been almost ceremonial. Who wants to push the button?
> ...
> But if you ask me, the bigger threat to GitHub's model comes from the rapid devaluation of someone else's code. When code was hard to write and low-effort work was easy to identify, it was worth the cost to review the good stuff. If code is easy to write and bad work is virtually indistinguishable from good, then the value of external contribution is probably less than zero.
> If that's the case, which I'm starting to think it is, then it's better to limit community contribution to the places it still matters: reporting, discussion, perspective, and care. Don't worry about the code, I can push the button myself.
> If writing the code is the easy part, why would I want someone else to write it?
When was writing code ever the hard part?
If contributors aren't solving problems, what good are they? Code that doesn't solve a problem is cruft. And if a problem could be solved trivially, you probably wouldn't need contributions from others to solve it in the first place.
We need a chrome extension like SponsorBlock, which publicly tags slop contributors. Maintainers can just reject PRs from those users.
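Roughly the content-script half of that idea, as a sketch; the blocklist URL, its JSON format, and the DOM selector are all placeholders I'm inventing here, and keeping them in sync with GitHub's markup would be its own chore:

    // Hypothetical sketch: tag PR authors who appear on a shared blocklist.
    type Blocklist = Set<string>;

    async function loadBlocklist(): Promise<Blocklist> {
      // Placeholder endpoint for a community-maintained list of flagged usernames.
      const res = await fetch("https://example.com/slopblock/list.json");
      const names: string[] = await res.json();
      return new Set(names.map((n) => n.toLowerCase()));
    }

    function tagFlaggedAuthors(list: Blocklist): void {
      // Guessing at a selector for user links; a real extension would have to
      // track GitHub's markup as it changes.
      document
        .querySelectorAll<HTMLAnchorElement>('a[data-hovercard-type="user"]')
        .forEach((a) => {
          const user = a.textContent?.trim().toLowerCase();
          if (user && list.has(user)) {
            const badge = document.createElement("span");
            badge.textContent = " [flagged as slop]";
            a.after(badge);
          }
        });
    }

    loadBlocklist().then(tagFlaggedAuthors).catch(console.error);

The hard part isn't the extension, it's who maintains the list and why maintainers should trust it.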
there are 1000x SponsorBlock users per (public, large) YouTube channel; there are 1000x slop contributors per (public, large) repo
IMO, you’re not really an open source project if you’re not accepting contributions with reasonably low friction.
I’ll call this what it is: a commercial product (they have a pricing page) that uses open source as marketing to sell more licenses.
The only PRs they want are ones that offer free professional level labor.
They’re too uncaring about the benefits of an open community to come up with a workflow to adapt to AI.
It honestly doesn't give me confidence that they can maintain their own code quality standards with their own employees.
Think about it: when/if this company grows to a larger size, if they can’t handle AI slop from contributors how can they handle AI slop from a large employee base?
I've been writing open source code for 30 years. I can count on 1 hand the number of times a random 3rd party PR contributed value to the project. The main contributions of the community are usually debugging and feedback. While it does happen that a good contribution comes from the community, the main values from opening code are increased user trust and identifying bugs. Almost every single open source project has a small number of devs who write almost all the code. The reasons for this are always about code quality. The idea that "the community" writes any open source projects is just fantasy. So refusing AI slop is just continuing on with these same policies that have worked for decades.
tldraw used to be FOSS but they changed the license in 2023. https://tldraw.substack.com/p/license-updates-for-the-tldraw...
BigBlueButton had to fork tldraw because of this. https://docs.bigbluebutton.org/new-features/#we-have-forked-...