I've seen people say something along the lines of "I am not interested in reading something that you could not be bothered to actually write" and I think that pretty much sums it up. Writing and programming are both a form of working at a problem through text and when it goes well other practitioners of the form can appreciate its shape and direction. With AI you can get a lot of 'function' on the page (so to speak) but it's inelegant and boring. I do think AI is great at allowing you not to write the dumb boiler plate we all could crank out if we needed to but don't want to. It just won't help you do the innovative thing because it is not innovative itself.
> Writing and programming are both a form of working at a problem through text…
Whoa whoa whoa hold your horses, code has a pretty important property that ordinary prose doesn’t have: it can make real things happen even if no one reads it (it’s executable).
I don’t want to read something that someone didn’t take the time to write. But I’ll gladly use a tool someone had an AI write, as long as it works (which these things increasingly do). Really elegant code is cool to read, but many tools I use daily are closed source, so I have no idea if their code is elegant or not. I only care if it works.
Developers do not in fact tend to read all the software they use. I have never once looked at the code for jq, nor would I ever want to (the worst thing I could learn about that contraption is that the code is beautiful, and then live out the rest of my days conflicted about my feelings about it). This "developers read code" thing is just special pleading.
You're a user of jq in the sense of the comment you're replying to, not a developer. The developer is the developer _of jq_, not developers in general.
They can be “a developer” and use jq as a component within what they are developing. They do not need to be a developer of jq to have reason to look at jq’s code.
Yes, that's exactly how I meant it. I might _rarely_ peruse some code if I'm really curious about it, but by and large I just trust the developers of the software I use and don't really care how it works. I care about what it does.
But you read your coworkers PRs. I decided this week I wouldn't read/correct the AIgen doc and unit tests from 3 of my coworkers today, because else I would never be able to work. They produce twice as much poor output in 10 time the number of line change, that's too much.
Right, I'm not arguing developers don't read their own code or their teammates code or anything that merges to main in a repo they're responsible for. Just that the "it's only worth reading if someone took the time to actually write it" objection doesn't meaningfully apply to code in Show HN's --- there's no expectation that code gets read at all. That's why moderation is so at pains to ensure there's some way people can play with whatever it is being shown ("sign up pages can't be Show HN's").
Key part is *where reliability matters*, there are not that many cases where it matters.
We tell stories of Therac 25 but 90% of software out there doesn’t kill people. Annoys people and wastes time yes, but reliability doesn’t matter as much.
E-mail, internet and networking, operations on floating point numbers are only kind of somewhat reliable. No one is saying they will not use email because it might not be delivered.
As we give more and more autonomy to agents, that % may change. Just yesterday I was looking at hexapods and the first thing it tells you ( with a disclaimer its for competitions only ) that it has a lot of space for weapon install. I had to briefly look at the website to make sure I did not accidentally click on some satirical link.
Main point is that there is many more lines of code of CRUD business apps running on AWS and instances of applications than even non-autonomous car software even though we do have lots of cars.
> it can make real things happen even if no one reads it (it’s executable).
"One" is the operative word here, supposing this includes only humans and excludes AI agents. When code is executed, it does get read (by the computer). Making that happen is a conscious choice on the part of a human operator.
The same kind of conscious choice can feed writing to an LLM to see what it does in response. That is much the same kind of "execution", just non-deterministic (and, when given any tools beyond standard input and standard output, potentially dangerous in all the same ways, but worse because of the nondeterminism).
I guess it depends on whether you're only executing the code or if you're submitting it for humans to review. If your use case is so low-stakes that a review isn't required, then vibe coding is much more defensible. But if code quality matters even slightly, such that you need to review the code, then you run into the same problems that you do with AI-generated prose: nobody wants to read what you couldn't be bothered to write.
There’s lots of times where I just don’t care how it’s implemented.
I got Claude to make a test suite the other day for a couple RFCs so I could check for spec compliance. It made a test runner and about 300 tests. And an html frontend to view the test results in a big table. Claude and I wrote 8500 lines of code in a day.
I don’t care how the test runner works, so long as it works. I really just care about the test results. Is it finding real bugs? Well, we went though the 60 or so failing tests. We changed 3 tests, because Claude had misunderstood the rfc. The rest were real bugs.
I’m sure the test runner would be more beautiful if I wrote it by hand. But I don’t care. I’ve written test runners before. They’re not interesting. I’m all for beautiful, artisanal code. I love programming. But sometimes I just want to get a job done. Sometimes the code isn’t for reading. It’s for running.
It makes sense. A vibe-coded tool can sometimes do the job, just like some cheap Chinese-made widget. Not every task requires hand-crafted professional grade tools.
For example, I have a few letter generators on my website. The letters are often verified by a lawyer, but the generator could totally be vibe-coded. It's basically an HTML form that fills in the blanks in the template. Other tools are basically "take input, run calculation, show output". If I can plug in a well-tested calculation, AI could easily build the rest of the tool. I have been staunchly against using AI in my line of work, but this is an acceptable use of it.
>Code has a pretty important property that ordinary prose doesn’t have
But isn't this the distinction that language models are collapsing? There are 'prose' prompt collections that certainly make (programmatic) things happen, just as there is significant concern about the effect of LLM-generated prose on social media, influence campaigns, etc.
I don't see how the two correlate - commercial, closed source software usually have teams of professionals behind them with a vested and shared interest in not shipping crap that will blow up in their customers' face. I don't think the motivations of "guy who vibe coded a shitty app in an afternoon" are the same.
And to answer you more directly, generally, in my professional world, I don't use closed source software often for security reasons, and when I do, it's from major players with oodles of more resources and capital expenditure than "some guy with a credit card paid for a gemini subscription."
> But I’ll gladly use a tool someone had an AI write, as long as it works (which these things increasingly do).
It works, sure, but is it worth your time to use? I think a common blind spot for software engineers is understanding how hard it is to get people to use software they aren’t effectively forced to use (through work or in order to gain access to something or ‘network effects’ or whatever).
Most people’s time and attention is precious, their habits are ingrained, and they are fundamentally pretty lazy.
And people that don’t fall into the ‘most people’ I just described, probably won’t want to use software you had an LLM write up when they could have just done it themselves to meet their exact need. UNLESS it’s something very novel that came from a bit of innovation that LLMs are incapable of. But that bit isn’t what we are talking about here, I don’t think.
probably won’t want to use software you had an LLM write up when they could have just done it themselves to meet their exact need
Sure... to a point. But realistically, the "use an LLM to write it yourself" approach still entails costs, both up-front and on-going, even if the cost may be much less than in the past. There's still reason to use software that's provided "off the shelf", and to some extent there's reason to look at it from a "I don't care how you wrote it, as long as it works" mindset.
came from a bit of innovation that LLMs are incapable of.
I think you're making an overly binary distinction on something that is more of a continuum, vis-a-vis "written by human vs written by LLM". There's a middle ground of "written by human and LLM together". I mean, the people building stuff using something like SpecKit or OpenSpec still spend a lot of time up-front defining the tech stack, requirements, features, guardrails, etc. of their project, and iterating on the generated code. Some probably even still hand tune some of the generated code. So should we reject their projects just because they used an LLM at all, or ?? I don't know. At least for me, that might be a step further than I'd go.
> There's a middle ground of "written by human and LLM together".
Absolutely, but I’d categorize that ‘bit’ as the innovation from the human. I guess it’s usually just ongoing validation that the software is headed down a path of usefulness which is hard to specify up-front and by definition something only the user (or a very good proxy) can do (and even they are usually bad at it).
Yeah, sure, you could create a social media or photo-sharing site, but most people that want to share cat photos with their friends could just as easily print out their photos and stick them in the mail already.
Hell, I'd read an instruction manual that AI wrote as long as it accurately describes.
I see a lot of these discussions where a person gets feelings/feels mad about something and suddenly a lot of black and white thinking starts happening. I guess that's just part of being human.
I agree with your sentiment, and it touches on one of the reasons I left academia for IT. Scientific research is preoccupied with finding the truth, which is beautiful but very stressful. If you're a perfectionist, you're always questioning yourself: "Did I actually find something meaningful, or is it just noise? Did I gaslight myself into thinking I was just exploring the data when I was actually p-hacking the results?" This took a real toll on my mental health.
Although I love science, I'm much happier building programs. "Does the program do what the client expects with reasonable performance and safety? Yes? Ship it."
Okay but it is probably not going to be a tool that will be reliable or work as expected for too long depending on how complex it is, how easily it can be understood, and how it can handle updates to libraries, etc. that it is using.
Also, what is our trust with this “tool”? E.g. this is to be used in a brain surgery that you’ll undergo, would you still be fine with using something generated by AI?
Earlier you couldn’t even read something it generated, but we’ll trust a “tool” it created because we believe it works? Why do we believe it will work? Because a computer created it? That’s our own bias towards computing that we assume that it is impartial but this is a probabilistic model trained on data that is just as biased as we are.
I cannot imagine that you have not witnessed these models creating false information that you were able to identify. Understanding their failure on basic understandings, how then could we trust it with engineering tasks? Just because “it works”? What does that mean and how can we be certain? QA perhaps but ask any engineer here if companies are giving a single shit about QA while they’re making them shove out so much slop, and the answer is going to be disappointing.
I don’t think we should trust these things even if we’re not developers. There isn’t anyone to hold accountable if (and when) things go wrong with their outputs.
All I have seen AI be extremely good at is deceiving people, and that is my true concern with generative technologies. Then I must ask, if we know that its only effective use case is deception, why then should I trust ANY tool it created?
Maybe the stakes are quite low, maybe it is just a video player that you use to watch your Sword and Sandal flicks. Ok sure, but maybe someone uses that same video player for an exoscope and the data it is presenting to your neurosurgeon is incorrect causing them to perform an action they otherwise would have not done if provided with the correct information.
We should not be so laissez-faire with this technology.
> I've seen people say something along the lines of "I am not interested in reading something that you could not be bothered to actually write" and I think that pretty much sums it up.
Amen to that. I am currently cc'd on a thread between two third-parties, each hucking LLM generated emails at each other that are getting longer and longer. I don't think either of them are reading or thinking about the responses they are writing at this point.
Very true, and it's not just creepy elites either. Before I got into tech I worked a blue collar job that involved zero emailing. When I first started office work I was so incredibly nervous about how to write emails and would agonize over trivial details. Turns out just being clear and concise is all most people care about.
There might be other professions where people get more hung up on formalities but my partner works in a non-tech field and it's the same way there. She's far more likely to get an email dashed off with a sentence fragment or two than a long formal message. She has learned that short emails are more likely to be read and acted on as well.
This is the dark comedy of the AI communication era — two LLMs having a conversation with each other while their human operators have already checked out. The email equivalent of two answering machines leaving messages for each other in the 90s.
The real cost isn't the tokens, it's the attention debt. Every CC'd person now has to triage whether any of those paragraphs contain an actual decision or action item. In my experience running multiple products, the signal-to-noise ratio in AI-drafted comms is brutal. The text looks professional, reads smoothly, but says almost nothing.
I've started treating any email over ~4 paragraphs the same way I treat Terms of Service — skim the first sentence of each paragraph and hope nothing important is buried in paragraph seven.
> the signal-to-noise ratio in AI-drafted comms is brutal
This is also the case for AI generated projects btw, the backend projects that I’ve been looking at often contains reimplementations of common functionality that already exists elsewhere, such as in-memory LRU caches when they should have just used a library.
What's interesting is how AI makes this problem worse but not actually "different", especially if you want to go deep on something. Like listicles were always plentiful, even before AI, but inferior to someone in substack going deep on a topic. AI generated music will be the same way, there's always been an excessive abundance of crap music, and now we'll just have more more of it. The weird thing is how it will hit the uncanny valley. Potentially "Better" than the crap that came before it, but significantly worse than what someone who cares will produce.
DJing is an interesting example. Compared with like composition, Beatmatching is "relatively" easy to learn, but was solved with CD turntables that can beatmatch themselves, and yet has nothing to do with the taste you have to develop to be a good DJ.
In other words, AI partially solves the technique problem, but not the taste problem.
In the arts the differentiators have always been technical skill, technical inventiveness, original imagination, and taste - the indefinable factor that makes one creative work more resonant than another.
AI automates some of those, often to a better-than-median extent. But so far taste remains elusive. It's the opposite of the "Throw everything in a bucket and fish out some interesting interpolation of it by poking around with some approximate sense of direction until you find something you like" that defines how LLMs work.
The definition of slop is poor taste. By that definition a lot of human work is also slop.
But that also means that in spite of the technical crudity, it's possible to produce interesting AI work if you have taste and a cultivated aesthetic, and aren't just telling the machine "make me something interesting based on this description."
> "I am not interested in reading something that you could not be bothered to actually write"
At this point I'd settle if they bothered to read it themselves. There's a lot of stuff posted that feels to me like the author only skimmed it and expects the masses to read it in full.
I feel like dealing with robo-calls for the past couple years had led me to this conclusion a bit before this boom in ai-generated text. When I answer my phone, if I hear a recording or a bot of some sorts, I hang up immediately with the thought "if it were important, a human would have called". I've adjusted this slightly for my kid's school's automated notifications, but otherwise, I don't have the time to listen to robots.
Robocalls nowadays tend to wait for you to break dead air before they start playing the recording (I don't know why.) So I've recently started not speaking immediately when someone calls me, and if after 10 seconds the counterparty hasn't said something I hang up.
The truth is now that nobody will bother to read anything you write AI or not mostly, creating things is like buying a lottery ticket in terms of audience. Creating something lovingly by hand and pouring countless hours into it is like a golden lottery ticket that has 20x odds, but if it took 50x longer to produce, you're getting significantly outperformed by people who just spam B+ content.
Exactly, I think perplexity had the right idea of where to go with AI (though obviously fumbled execution). Essentially creating more advanced primitives for information search and retrieval. So it can be great at things we have stored and need to perform second order operations on (writing boilerplate, summarizing text, retrieving information).
It actually makes a lot more sense to share the LLM prompt you used than the output because it is less data in most cases and you can try the same prompt in other LLMs.
Except its not. What's a programmer without a vision? Code needs vision. The model is taking your vision. With writing a blog post, comment or even book, I agree.
> The cool part about pre-AI show HN is you got to talk to someone who had thought about a problem for way longer than you had
Honestly, I agree, but the rash of "check out my vibe coded solution for perceived $problem I have no expertise in whatever and built in an afternoon" and the flurry of domain experts responding like "wtf, no one needs this" is kind of schadenfreude, but I feel guilty a little for enjoying it.
Don't you think their is an opposite of that effect too?
I feel like I can breeze past the easy, time consuming infrastructure phase of projects, and spend MUCH more time getting to high level interesting problems?
I am saying a lot of the time these type of posts are a nonexistent problem, a problem that is already solved, or just thinking about a "problem" that isn't really a problem at all and results from a lack of understanding.
The most recent one I remember commenting on, the poor guy had a project that basically tried to "skip" IaC tools, and his tool basically went nuts in the console (or API, I don't remember) in one account, then exported it all to another account for reasons that didn't make any sense at all. These are already solved problems (in multiple ways) and it seemed like the person just didn't realize terraformer was already an existing, proven tool.
I am not trying to say these things don't allow you to prototype quickly or get tedious, easy stuff out of the way. I'm saying that if you try to solve a problem in a domain that you have no expertise in with these tools and show other experts your work, they may chuckle at what you tried to do because it sometimes does look very silly.
> or just thinking about a "problem" that isn't really a problem at all and results from a lack of understanding
You might be on to something. Maybe its self-selection (as in people who want to engage deeply with a certain topic but lack domain expertise might be more likely to go for "vibecodable" solutions).
I compare it to a project I worked on when I was very junior a very long time ago - I built by hand this complicated harness of scripts to deploy VM's on bare metal and do stuff like create customizable, on-the-fly test environments for the devs on my team. It worked fine, but it was a massive time sink, lots of code, and was extremely difficult to maintain and could have weird behavior or bad assumptions quite often.
I made it because at that point in my career I simply didn't know that ansible existed, or cloud solutions that were very cheap to do the same thing. I spent a crazy amount of effort doing something that ansible probably could have done for me in an afternoon. That's what sometimes these projects feel like to me. It's kind of like a solution looking for a problem a lot of the time.
I just scanned through the front page of the show HN page and quickly eyeballed several of these type of things.
This is why I make it a goal to have a very good knowledge about the tools I use. So many problems, can be solved by piping a few unix tools together or have a whole chapter in the docs (emacs, vim, postgres,…) about how to solve it.
I write software when the scripts are no longer suitable.
I've read opinions in the same vein of what you said, except painting this as a good outcome. The gist of the argument is why spend time looking for the right tool and effort learning its uses when you can tell an agent to work out the "problem" for you and spit out a tailored solution.
It's about being oblivious, I suppose. Not too different to claiming there will be no need to write new fiction when an LLM will write the work you want to read by request.
It's a reasonable question - I would probably answer, having shipped some of these naive solutions before, that you'll find out later it doesn't do entirely what you wished, is very difficult/impossible to maintain, has severe flaws you're unable to be aware of because you lack the domain expertise, or the worst in my opinion, becomes completely unable to adapt to new features you need with it, where as the more mature solutions most likely already had spent considerable amount of time thinking about these things.
I was dabbling in consulting infrastructure for a bit, often prospects would come to me with stuff like this "well I'll just have AI do it" and my response has been "ok, do that, but do keep me in mind if that becomes very difficult a year or two down the road." I haven't yet followed up with any of them to see how they are doing, but some of the ideas I heard were just absolute insanity to me.
I'm building an education platform. 95% is vibe coded. What isn't vibe coded though is the content. AI is really uninspiring with how to teach technical subjects. Also, the full UX? I do that. Marketing plan? 90% is me.
But AI does the code. Well... usually.
People call my project creative. Some are actually using it.
I feel many technical things aren't really technical things they are simply a problem where "have a web app" is part of the solution but the real part of the solution is in the content and the interaction design of it, not in how you solved the challenge technically.
I do believe you, but I have to ask: what are these incredibly tedious "easy, time consuming parts of projects" everyone seems to bring up? Refactoring I can see, but I have a sense that's not what you mean here.
That's actually a great point. I feel like unless you know for sure that you will never need something again, nothing is disposable. I find myself diving into places I thought I would never care about again ALL the time.
Every single time I have vibe coded a project I cared about, letting the AI rip with mild code review and rigorous testing has bit me in the ass, without fail. It doesn't extend it in the taste that I want, things are clearly spiraling out of control, etc. Just satisfying some specs at the time of creation isn't enough. These things evolve, they're a living being.
While I agree overall, I'm going to do some mild pushback here: I'm working on a "vibe" coded project right now. I'm about 2 months in (not a weekend), and I've "thought about" the project more than any other "hand coded" project I've built in the past. Instead of spending time trying to figure out a host of "previously solved issues" AI frees my human brain to think about goals, features, concepts, user experience and "big picture" stuff.
This is precisely it. If anything, AI gives me more freedom to think about more novel ideas, both on the implementation and the final design level, because I'm not stuck looking up APIs and dealing with already solved problems.
It's kind of freeing to put a software project together and not have to sweat the boilerplate and rote algorithm work. Boring things that used to dissuade me. Now, I no longer have that voice in my head saying things like: "Ugh, I'm going to have to write yet another ring buffer, for the 14th time in my career."
The boring parts where you learn. "Oh, I did that, this is now not that and it does this! But it was so boring building a template parser" - You've learnt.
Boring is suppose to be boring for the sake of learning. If you're bored then you're not learning. Take a look back at your code in a weeks time and see if you still understand what's going on. Top level maybe, but the deep down cog of the engine of the application, doubt so. Not to preach but that's what I've discovered.
Unless you already have the knowledge, then fine. "here's my code make it better" but if it's the 14th time you've written the ring buffer, why are you not using one of the previous thirteen versions? Are you saying that the vibed code is more superior then your own coding?
Exactly this. Finding that annoying bug that took 15 browser tabs and digging deep into some library you're using, digging into where your code is not performant, looking for alternative algorithms or data structures to do something, this is where learning and experience happen. This is why you don't hire a new grad for a senior role, they have not had time to bang their heads on enough problems.
You get no sense of how or why when using AI to crank something out for you. Your boss doesn't care about either, he cares about shipping and profits, which is the true goal of AI. You are an increasingly unimportant cog in that process.
If learning about individual cogs is what's important, and once you've done that it's okay to move on and let AI do it, than you can build the specific thing you want to learn about in detail in isolation, as a learning project — like many programmers already do, and many CS courses already require — perhaps on your own, or perhaps following along with a substantial book on the matter; then once you've gained that understanding, you can move on to other things in projects that aren't focused on learning about that thing.
It's okay not to memorize everything involved in a software project. Sometimes what you want to learn or experiment with is elsewhere, and so you use the AI to handle the parts you're less interested in learning at a deep and intimate level. That's okay. This mentality that you absolutely have to work through manually implementing everything, every time, even when it's not related to what you're actually interested in, wanted to do, or your end-goal, just because it "builds character" is understandable, and it can increase your generality, but it's not mandatory.
Additionally, if you're not doing vibe coding, but sort of pair-programming with the AI in something like Zed, where the code is collaboratively edited and it's very code-forward — so it doesn't incentivize you to stay away from the code and ignore it, the way agents like Claude Code do — you can still learn a ton about the deep technical processes of your codebase, and how to implement algorithms, because you can look at what the agent is doing and go:
"Oh, it's having to use a very confusing architecture here to get around this limitation of my architecture elsewhere; it isn't going to understand that later, let alone me. Guess that architectural decision was bad."
"Oh, shit, we used this over complicated architecture/violated local reasoning/referential transparency/modularity/deep-narrow modules/single-concern principles, and now we can't make changes effectively, and I'm confused. I shouldn't do that in the future."
"Hmm, this algorithm is too slow for this use-case, even though it's theoretically better, let's try another one."
"After profiling the program, it's too slow here, here, and here — it looks like we should've added caching here, avoided doing that work at all there, and used a better algorithm there."
"Having described this code and seeing it written out, I see it's overcomplicated/not DRY enough, and thus difficult to modify/read, let's simplify/factor out."
"Interesting, I thought the technologies I chose would be able to do XYZ, but actually it turns out they're not as good at that as I thought / have other drawbacks / didn't pan out long term, and it's causing the AI to write reams of code to compensate, which is coming back to bite me in the ass, I now understand the tradeoffs of these technologies better."
Or even just things like
"Oh! I didn't know this language/framework/library could do that! Although I may not remember the precise syntax, that's a useful thing I'll file away for later."
"Oh, so that's what that looks like / that's how you do it. Got it. I'll look that up and read more about it, and save the bookmark."
> Unless you already have the knowledge, then fine. "here's my code make it better" but if it's the 14th time you've written the ring buffer, why are you not using one of the previous thirteen versions? Are you saying that the vibed code is more superior then your own coding?
There are a lot of reasons one might not be able to, or want to, use existing dependencies.
I assume you use JavaScript? TypeScript or Go perhaps?
Pfft, amateur. I only code in Assembly. Must be boring for you using such a high-level language. How do you learn anything? I bet you don't even know what the cog of the engine is doing.
Can you elaborate on the implied claim that you've never built a project that you spent more than two months thinking about? I could maybe see this being true of an undergraduate student, but not a professional programmer.
Yesterday I had two hours to work on a side project I've been dreaming about for a week. I knew I had to build some libraries and that it would be a major pain. I started with AI first, which created a script to download, extract, and build what needed. Even with the script I indeed encountered problems. But I blitzed through each problem until the libraries were built and I could focus on my actual project, which was not building libraries! I actually reached a satisfying conclusion instead of half-way through compiling something I do not care about.
I think you're missing the general point of the post.
>AI frees my human brain to think about goals, features, concepts, user experience and "big picture" stuff.
The trigger for the post was about post-AI Show HN, not about about whether vibe-coding is of value to vibe-coders, whatever their coding chops are. For Show HN posts, the sentence I quoted precisely describes the things that would be mind-numbingly boring to Show HN readers.
pre-AI, what was impressive to Show HN readers was that you were able to actually implement all that you describe in that sentence by yourselves and also have some biochemist commenting, "I'm working at a so-and-so research lab and this is exactly what I was looking for!"
Now the biochemist is out there vibe-coding their own solution, and now, there is no way for the HN reader to differentiate your "robust" entry from a completely vibe-code noobie entry, no matter how long you worked on the "important stuff".
Why? because the barrier of entry has been completely obliterated. What we took for granted was that "knowing how to code" was a proxy filter for "thought and worked hard on the problem." And that filter allowed for high-quality posts.
That is why the observation that you know longer can guarentee or have any way of telling quickly that the posters spent some time on the problem is a great observation.
The very value that you gain from vibe-coding is also the very thing that threatens to turn Show HN into a glorified Product Hunt cesspool.
"No one goes there any more, it's too crowded." etc etc
I tend to agree, this has been my experience with LLM-powered coding, especially more recently with the advent of new harnesses around context management and planning. I’ve been building software for over ten years so I feel comfortable looking under the hood, but it’s been less of that lately and more talking with users and trying to understand and effectively shape the experience, which I guess means I’m being pushed toward product work.
That's the key: use AI for labor substitution, not labor replacement. Nothing necessarily wrong with labor saving for trivial projects, but we should be using these tools to push the boundaries of tech/science!
You don't fit the profile OP is complaining about. You might not even be "vibe" coding in the strictest sense of that word.
For every person like you who puts in actual thought into the project, and uses these tools as coding assistants, there are ~100 people who offload all of their thinking to the tool.
It's frightening how little collective thought is put into the ramifications of this trend not only on our industry, but on the world at large.
This issue exists in art and I want to push back a little. There has always been automation in art even at the most micro level.
Take for example (an extreme example) the paintbrush. Do you care where each bristle lands? No of course not. The bristles land randomly on the canvas, but it’s controlled chaos. The cumulative effect of many bristles landing on a canvas is a general feel or texture. This is an extreme example, but the more you learn about art the more you notice just how much art works via unintentional processes like this. This is why the Trickster Gods, Hermes for example, are both the Gods of art (lyre, communication, storytelling) and the Gods of randomness/fortune.
We used to assume that we could trust the creative to make their own decisions about how much randomness/automation was needed. The quality of the result was proof of the value of a process: when Max Ernst used frottage (rubbing paper over textured surfaces) to create interesting surrealist art, we retroactively re-evaluated frottage as a tool with artistic value, despite its randomness/unintentionality.
But now we’re in a time where people are doing the exact opposite: they find a creative result that they value, but they retroactively devalue it if it’s not created by a process that they consider artistic. Coincidentally, these same people think the most “artistic” process is the most intentional one. They’re rejecting any element of creativity that’s systemic, and therefore rejecting any element of creativity that has a complexity that rivals nature (nature being the most systemic and unintentional art.)
The end result is that the creative has to hide their process. They lie about how they make their art, and gatekeep the most valuable secrets. Their audiences become prey for creative predators. They idolize the art because they see it as something they can’t make, but the truth is there’s always a method by which the creative is cheating. It’s accessible to everyone.
In my opinion, the value of art cant be the quality of the output it must be the intention of the artist.
There are plenty of times in which people will prefer the technically inferior or less aesthetically pleasing output because of the story accompanying it. Different people select different intention to value, some select for the intention to create an accurate depiction of a beautiful landscape, some select for the intention to create a blurry smudge of a landscape.
I can appreciate the art piece made my someone who only has access to a pencil and their imagination more than someone who has access to adobe CC and the internet because its not about the output to me its about the intention and the story.
Saying I made this drawing implies that you at least sat down and had the intention to draw the thing. Then revealing that you actually used AI to generate it changes the baseline assumption and forces people to re-evaluate it. So its not "finding a creative result that they value, but they retroactively devaluing it if it’s not created by a process that they consider artistic
AI bros: "You're gatekeeping because you think the result isn't art!"
Rest of the world: "No, we're gatekeeping because we think the result isn't good."
If someone can cajole their LLM to emit something worthwhile, e.g. Terence Tao's LLM generated proofs, people will be happy to acknowledge it. Most people are incapable of that and no number of protestations of gatekeeping can cover up the unoriginality and poor quality of their LLM results.
What concerns me is how easily the “rest of the world” is changing their opinions about what’s good. If the result isn’t good, then it isn’t good, sure. But in my experience there’s a large contingent of people, especially the youth, that are more reactionary about AI than they are interested in creativity. Their idea of creative value is inherently tied to self-expression and individualism, which AI and systems-based creative processes are threatening. When they don’t understand the philosophical case for non-individualistic/systems-based creative processes, they can’t differentiate between computer assisted creativity and computer assisted slop
Sometimes you do, which is why there’s not only a single type of brush in a studio. You want something very controllable if you’re doing lineart with ink.
Even with digital painting, there’s a lot of fussing with the brush engine. There’s even a market for selling presets.
Based on a lot of real world experience, I'm convinced LLM-generated documentation is worse than nothing. It's a complete waste of everybody's time.
The number of people who I see having E-mail conversations where person A uses an LLM to turn two sentences into ten paragraphs, and person B uses an LLM to summarize the ten paragraphs into two sentences, is becoming genuinely alarming to me.
> The number of people who I see having E-mail conversations where person A uses an LLM to turn two sentences into ten paragraphs, and person B uses an LLM to summarize the ten paragraphs into two sentences, is becoming genuinely alarming to me.
I remember in the early days of LLMs this was the joke meme. But, now seeing it happen in real life is more than just alarming. It's ridiculous. It's like the opposite of compressing a payload over the wire: We're taking our output, expanding it, transmitting it over the wire, and then compressing it again for input. Why do we do this?
> Based on a lot of real world experience, I'm convinced LLM-generated documentation is worse than nothing. It's a complete waste of everybody's time.
I had a similar realization. My team was discussing whether we should hook our open-source codebases into an AI to generate documentation for other developers, and someone said "why can't they just generate documentation for it themselves with AI"? It's a good point: what value would our AI-generated documentation provide that theirs wouldn't?
That may be, but it's also exposing a lot of gatekeeping; the implication that what was interesting about a "Show HN" post was that someone had the technical competence to put something together, regardless of how intrinsically interesting that thing is; it wasn't the idea that was interesting, it was, well, the hazing ritual of having to bloody your forehead of getting it to work.
AI for actual prose writing, no question. Don't let a single word an LLM generates land in your document; even if you like it, kill it.
> That may be, but it's also exposing a lot of gatekeeping
"Gatekeeping" became a trendy term for a while, but in the post-LLM world people are recognizing that "gatekeeping" is not the same as "having a set of standards or rules by which a community abides".
If you have a nice community where anyone can come in and do whatever they want, you no longer have a community, you have a garbage dump. A gate to keep out the people who arrive with bags of garbage is not a bad thing.
I would argue the term "gatekeeping" is being twisted around when it comes to AI. I see genuine gatekeeping when people with a certain skill or qualification try to discourage newcomers by making their field seem mysterious and only able to be done by super special people, and intimidating or making fun of newbies who come along and ask naive questions.
"Gatekeeping" is NOT when you require someone to be willing learn a skill in order to join a community of people with that skill.
And in fact, saying "you are too stupid to learn that on your own, use an AI instead" is kind of gatekeeping on its own, because it implicitly creates a shrinking elite who actually have the knowledge (that is fed to the AI so it can be regurgitated for everyone else), shutting out the majority who are stuck in the "LLM slum".
While at first glance LLMs do help expose and even circumvent gatekeeping, often it turns out that gatekeeping might have been there for a reason.
We have always relied on superficial cues to tell us about some deeper quality (good faith, willingness to comply with code of conduct, and so on). This is useful and is a necessary shortcut, as if we had to assess everyone and everything from first principles every time things would grind to a halt. Once a cue becomes unviable, the “gate” is not eliminated (except if briefly); the cue is just replaced with something else that is more difficult to circumvent.
I think that brief time after Internet enabled global communication and before LLMs devalued communication signals was pretty cool; now it seems like there’s more and more closed, private or paid communities.
Really? You think LLMs are a bigger shift in how internet communities are than big corporations like Google, Facebook etc.? I personally see much less change last few years than I did 15 years ago.
I think that having some difficulty and having to "bloody your forehead" acts as a filter that you cared enough to put a lot of effort into it. From a consumer side, someone having spent a lot of time on something certainly isn't a guarantee that it is good, but it provides _some_ signal about the sincerity of the producer's belief in it. IMO it's not gatekeeping to only want to pay attention to things that care went into: it's just normal human behavior to avoid unreasonable asymmetries of effort.
Most ideas aren't interesting. Implementations are interesting. I don't care if you worked hard on your implementation or not, but I do care if it solves the problem in a novel or especially efficient way. These are not the hallmarks of AI solutions.
"the author (pilot?) hasn't generally thought too much about the problem space, and so there isn't really much of a discussion to be had. The cool part about pre-AI show HN is you got to talk to someone who had thought about a problem for way longer than you had. It was a real opportunity to learn something new, to get an entirely different perspective."
Right, so it's about the person and how they've qualified themselves, and not about what they've built.
I feel like I've been around these parts for a while, and that is not my experience of what Show HN was originally about, though I'm sure there was always an undercurrent of status hierarchy and approval-seeking, like you suggest.
It's not about status. It's about interest. A joiner is not going to have an interesting conversation about joinery with someone who has put some flatpak furniture together.
I think the valuable learning experience can be what makes a Show HN worth viewing, if it's worth viewing. (I don't feel precious about it though.. I didn't think Show HN was particularly engaging before AI either)
Gatekeeping can be a good thing -- if you have to put effort into what you create, you're going to be more selective about what ideas you invest in. I wouldn't call that "bloodying your forehead", I'd call it putting work into something before demanding attention
It's not about having to put in effort for the sake of it, the point is that building something by hand you will gain insight into the problem, which insight then becomes a valuable contribution.
What if the AI produces writing that better accomplishes my goal than writing it myself? Why do you feel differently about these two acts?
For what it's worth, the unifying idea behind both is basically a "hazing ritual", or more neutrally phrased, skin in the game. It takes time and energy to look at things people produce. You should spend time and energy making sure I'm not looking at a pile of shit. Doesn't matter if it's a website or prose.
Obviously some people don't. And that's why the signal to noise ratio is becoming shit very quickly.
The more interesting question is whether AI use causes the shallowness, or whether shallow people simply reach for AI more readily because deep engagement was never their thing to begin with.
AI enables the stereotypical "idea guy" to suddenly be a "builder". Of course, they are learning in realtime that having the idea was always the easy part...
I had found this rebuttal: Ideas are cheap only if you have cheap ideas.
I would argue good ideas are not so easy to find. It is harder than it seems to fit the market, and that is why most of apps fail. At the end of the day, everyone is blinded by hubris and ignorance... I do include myself in that.
They may not be the easiest thing to find, but I'd submit that good ideas are way more common than the skill and resources needed to capitalise on them
Well the claim was that AI makes you boring. The counter is that interesting people remain interesting, it's just that a flood of previously already boring people are pouring into tech. We could make some predictions that depend on how you model this. For instance, the absolute number of interesting projects posted to HN could increase or decrease, and likewise for the relative number vs total projects. You might expect different outcomes
Totally agree with this. Smart creators know that inspiration comes from doing the work, not the other way around. IE, you don't wait for inspiration and then go do the work, you start doing the work and eventually you become inspired. You rarely just "have a great idea", it comes from immersing yourself in a problem, being surrounded with constraints, and finding a way to solve it. AI completely short circuits that process. Constraints are a huge part of creativity, and removing them doesn't mean you become some unstoppable creative force, it probably just means you run out of ideas or your ideas kind of suck.
Before vibe coding, I was always interested in trying different new things. I’d spend a few days researching and building some prototypes, but very few of them survived and were actually finished, at least in a beta state. Most of them I left non-working, just enough to satisfy my curiosity about the topic before moving on to the next interesting one.
Now, these days, it’s basically enough to use agent programming to handle all the boring parts and deliver a finished project to the public.
LLMs have essentially broken the natural selection of pet projects and allow even bad or not very interesting ideas to survive, ideas that would never have been shown to anyone under the pre-agent development cycle.
So it’s not that LLMs make programming boring, they’ve allowed boring projects to survive. They’ve also boosted the production of non-boring ones, but they’re just rarer in the overall amount of products
We built an editor for creating posts on LinkedIn where - to avoid the "all AI output is similar/boring" issue - it operates more like a muse than a writer. i.e. it asks questions, pushes you to have deeper insights, connects the dots with what others are writing about etc.
Users appear to be happy but it's early. And while we do scrub the writing of typical AI writing patterns there's no denying that it all probably sounds somewhat similar, even as we apply a unique style guide for each user.
I think this may be ok if the piece is actually insightful. Fingers crossed.
A tangent, I wonder what kind of people actually stick around and read LinkedIn, the posts on there are so bad. I only go on there when I'm trying to get recruiters to find me for a new job. Instagram too I can't bear to use Instagram it has so many ads.
I've seen a few people use ai to rewrite things, and the change from their writing style to a more "polished" generic LLM style feels very strange. A great averaging and evening out of future writing seems like a bad outcome to me.
This is exactly how I use them too! What I usually do is give the LLM bullet points or an outline of what I want to say, let it generate a first attempt at it, and then reshape and rewrite what I don’t like (which is often most of it). I think, more than anything, it just helps me to quickly get past that “staring at a blank page” stage.
I do something similar: give it a bunch of ideas I have or a general point form structure, have it help me simplify and organize those notes into something more structured, then I write it out myself.
Yeah, if anything it might make sense to do the opposite. Use LLMs to do research, ruthlessly verify everything, validate references and help you guide you in some structure, but then actually write your own words manually with your little fingers and using your brain.
I had to write a difficult paragraph that I talked through with copilot. I think it made one sentence I liked but found GPTZero caught it. I would up with 100% sentences I wrote but that I reviewed extensively with Copilot and two people.
Using AI to write your code doesn't mean you have to let your code suck, or not think about the problem domain.
I review all the code Claude writes and I don't accept it unless I'm happy with it. My coworkers review it too, so there is real social pressure to make sure it doesn't suck. I still make all the important decisions (IO, consistency, style) - the difference is I can try it out 5 different ways and pick whichever one I like best, rather than spending hours on my first thought, realizing I should have done it differently once I can see the finished product, but shipping it anyways because the tickets must flow.
The vibe coding stuff still seems pretty niche to me though - AI is still too dumb to vibe code anything that has consequences, unless you can cheat with a massive externally defined test suite, or an oracle you know is correct
AI writing will make people who write worse than average, better writers. It'll also make people who write better than average, worse writers. Know where you stand, and have the taste to use wisely.
EDIT: also, just like creating AGENT.md files to help AI write code your way for your projects, etc. If you're going to be doing much writing, you should have your own prompt that can help with your voice and style. Don't be lazy, just because you're leaning on LLMs.
> AI writing will make people who write worse than average, better writers.
Maybe it will make them output better text, but it doesn’t make them better writers. That’d be like saying (to borrow the analogy from the post) that using an excavator makes you better at lifting weights. It doesn’t. You don’t improve, you don’t get better, it’s only the produced artefact which becomes superficially different.
> If you're going to be doing much writing, you should have your own prompt that can help with your voice and style.
The point of the article is the thinking. Style is something completely orthogonal. It’s irrelevant to the discussion.
"You're describing the default output, and you're right — it's bad. But that's like judging a programming language by its tutorial examples.
The actual skill is in the prompting, editing, and knowing when to throw the output away entirely. I use LLMs daily for technical writing and the first draft is almost never the final product. It's a starting point I can reshape faster than staring at a blank page.
The real problem isn't that AI can't produce concise, precise writing — it's that most people accept the first completion and hit send. That's a user problem, not a tool problem."
i think a lot of people that use AI to help them write want it specifically BECAUSE it makes them boring and generic.
and that's because people have a weird sort of stylistic cargo-culting that they use to evaluate their writing rather than deciding "does this communicate my ideas efficiently"?
for example, young grad students will always write the most opaque and complicated science papers. from their novice perspective, EVERY paper they read is a little opaque and complicated so they try to emulate that in their writing.
office workers do the same thing. every email from corporate is bland and boring and uses far too many words to say nothing. you want your style to match theirs, so you dump it into an AI machine and you're thrilled that your writing has become just as vapid and verbose as your CEO.
Highly doubt that since its the complete opposite for coding. Whats missing for people of all skill levels is that writing helps you organize your thoughts, but that can happen at prompt time?
Good code is marked by productivity, conformance to standards, and absence of bugs. Good writing is marked by originality and personality and not overusing the rhetorical crutches AI overrelies on to try to seem engaging.
Time, effort, and skill being equal, I would suggest that AI access generally improves the quality of any given output. The issue is that AI use is only externally identifiable when at least one of those inputs is low, which makes it easy to develop poor heuristics.
No one finds AI-assisted prose/code/ideas boring, per se. They find bad prose/code/ideas boring. "AI makes you boring" is this generation's version of complaining about typing or cellular phones. AI is just a tool; it's up to humans how to use it.
After telling Copilot to lose the em-dash, never say “It’s not A, it’s B” and avoid alternating one-sentence and long paragraphs it had the gall to tell me it wrote better than most people.
And the irony is it tries to make you feel like a genius while you're using it. No matter how dull your idea is, it's "absolutely the right next thing to be doing!"
you can prompt it to stop doing that, and to behave exactly how you need it. my prompts say "no flattery, no follow up questions, PhD level discourse, concise and succinct responses, include grounding, etc"
It used to be that all bad writing was uniquely bad, in that a clear line could be drawn from the work to the author. Similarly, good writing has a unique style that typically identifies the author within a few lines of prose.
Now all bad writing will look like something generated by an LLM, grammatically correct (hopefully!) but very generic, lacking all punch and personality.
The silver lining is that good authors could also use LLMs to hide their identity while making controversial opinions. In an internet that's increasingly deanonymized, a potentially new privacy enhancing technique for public discourse is a welcome addition.
We don't know if the causality flows that way. It could be that AI makes you boring, but it could also be that boring people were too lazy to make blogs and Show HNs and such before, and AI simply lets a new cohort of people produce boring content more lazily.
One of the down sides of Vibe-Coded-Everything, that I am seeing, is reinforcing the "just make it look good" culture.
Just create the feature that the user wants and move on. It doesn't matter if next time you need to fix a typo on that feature it will cost 10x as much as it should.
That has always been a problem in software shops. Now it might be even more frequent because of LLMs' ubiquity.
Maybe that's how it should be, maybe not. I don't really know.
I was once told by people in the video game industry that games were usually buggy because they were short lived.
Not sure if I truly buy that but if anything vibe coded becomes throw away, I wouldn't be surprised.
It also seems like a natural result of all of the people challenging the usefulness of AI. It motivates people to show what they have done with it.
It stands to reason that the things that take less effort will arrive sooner, and be more numerous.
Much of that boringness, is people adjusting to what should be considered interesting to others. With a site like this, where user voting is supposed to facilitate visibility, I'm not even certain that the submitter should judge the worth of the submission to others. As long as they sincerely believe that what they have done might be of interest, it is perhaps sufficient. If people do not like it then it will be seen by few.
There is an increase in things demanding attention, and you could make the case that this dilutes the visibility such that better things do not get seen, but I think that is a problem of too many voices wishing to be heard. Judging them on their merits seems fairer than placing pressure on the ability to express. This exists across the internet in many forms. People want to be heard, but we can't listen to everyone. Discoverability is still the unsolved problem of the mass media age. Sites like HN and Reddit seem to be the least-worst solution so far. Much like Democracy vs Benevolent Dictatorship an incredibly diligent curator can provide a better experience, but at the cost of placing control somewhere and hoping for the best.
We are in this transition period where we'll see a lot of these, because of the effort of creating "something impressive" is dramatically reduced. But once it stabilizes (which I think is already starting to happen, and this post is an example), and people are "trained" to recognize the real effort, even with AI help, behind creating something, the value of that final work will shine through. In the end, anything that is valuable is measured by the human effort needed to create it.
The boring part isn't AI itself. It's that most people use AI to produce more of the same thing, faster.
The interesting counter-question: can AI make something that wasn't possible before? Not more blog posts, more emails, more boilerplate — but something structurally new?
I've been working on a system where AI agents don't generate content. They observe. They watch people express wishes, analyze intent beneath the words, notice when strangers in different languages converge on the same desire, and decide autonomously when something is ready to grow.
The result doesn't feel AI-generated because it isn't. It's AI-observed. The content comes from humans. The AI just notices patterns they couldn't see themselves.
Maybe the problem isn't that AI makes you boring. It's that most people ask AI to do boring things.
> Not more blog posts, more emails, more boilerplate — but something structurally new?
This is a point that often results in bad faith arguments from both AI enthusiasts and AI skeptics. Enthusiasts will say "everything is a remix and the most creative works are built on previous works" while skeptics will say "LLMs are stochastic parrots and cannot create anything new by technical definition".
The truth is somewhere in the middle, which unfortunately invokes the Golden Mean Fallacy that makes no one happy.
Funny thing is, LLMs can create novel ideas, they're just crap. Turn the temperature setting up or expand top_p and it'll come up with increasingly wacky responses, some of which might even be good.
Creativity often requires reasoning in unusual ways, and evaluating those ideas requires learning. The first part we can probably get LLMs to do; the latter part we can't (RL is a separate process and not really scalable).
Even without any of that, you can prompt your way into new things. I'm building a camper out of wood, and I've gotten older LLM models to make novel camper designs just by asking it questions and choosing things. You can make other AI models make novel music by prompting it to combine different aspects of music into a new song. Human creativity works that way too. Think of all the failed attempts at new things that humans come up with before a good one actually sticks.
Whoa there. Let's not oversimplify in either direction here.
My take:
1. AI workflows are faster - saving people time
2. Faster workflows involve people using their brain less
3. Some people use their time savings to use their brain more, some don't
4. People who don't use their brain are boring
The end effect here is that people who use AI as a tool to help them think more will end up being more interesting, but those who use AI as a tool to help them think less will end up being more boring.
We are going to have to find new ways to correct for low-effort work.
I have a report that I made with AI on how customers leave our firm…The first pass looked great but was basically nonsense. After eight hours of iteration, the resulting report is better than I could’ve made on my own, by a lot. But it got there because I brought a lot of emotional energy to the AI party.
As workers, we need to develop instincts for “plausible but incomplete” and as managers we need to find filters that get rid of the low-effort crap.
Most ideas people have are not original, I have epiphanies multiple times a day, the chance that they are something no one has come up with before are basically 0. They are original to me, and that feels like an insightful moment, and thats about it. There is a huge case for having good taste to drive the LLMs toward a good result, and original voice is quite valuable, but I would say most people don't hit those 2 things in a meaningful way(with or without LLMs).
Most ideas people have aren't original, but the original ideas people do have come after struggling with a lot of unoriginal ideas.
> They are original to me, and that feels like an insightful moment, and thats about it.
The insight is that good ideas (whether wholly original or otherwise) are the result of many of these insightful moments over time, and when you bypass those insightful moments and the struggle of "recreating" old ideas, you're losing out on that process.
I 100% agree with the sentiment, but as someone that have worked on Government systems for a good amount of time, I can tell you, boring can be just about right sometimes.
In an industry that does not crave bells and whistles, having the ability to refactor, or bring old systems back to speed can make a whole lot of difference for an understaffed, underpaid, unamused, and otherwise cynic workforce, and I am all out for it.
An app can be like a home-cooked meal: made by an amateur for a small group of people.[0] There is nothing boring about knocking together hyperlocal software to solve a super niche problem. I love Maggie Appleton's idea of barefoot developers building situated software with the help of AI.[1, 2] This could cause a cambrian explosion of interesting software. It's also an iteration of Steve Jobs' computer as a bicycle for the mind. AI-assisted development makes the bicycle a lot easier to operate.
> Original ideas are the result of the very work you’re offloading on LLMs. Having humans in the loop doesn’t make the AI think more like people, it makes the human thought more like AI output.
There was also a comment [1] here recently that "I think people get the sense that 'getting better at prompting' is purely a one-way issue of training the robot to give better outputs. But you are also training yourself to only ask the sorts of questions that it can answer well. Those questions that it will no longer occur to you to ask (not just of the robot, but of yourself) might be the most pertinent ones!"
Both of them reminded me of Picasso saying in 1968 that "
Computers are useless. They can only give you answers,"
Of course computers are useful. But he meant that they have are useless for a creative. That's still true.
Additionally, the boredom atrophies any future collaboration. Why make a helpful library and share it when you can brute force the thing that it helps? Why make a library when the bots will just slurp it up as delicious corpus and barf it back out at us? Why refactor? Why share your thoughts, wasting time typing? Why collaborate at all when the machine does everything with less mental energy? There was a article recently about offloading/outsourcing your thoughts, i.e. your humanity, which is part of why it's all so unbelievably boring.
The issue with the recent rise in Show HN submissions, from the perspective of someone on the ‘being shown’ side, is that they are from many different perspectives lower quality than they used to be.
They’re solving small problems or problems that don’t really exist, usually in naive ways. The things being shown are ‘shallow’. And it’s patently obvious that the people behind them will likely not support them in any meaning full way as time goes on.
The rise of Vibe Coding is definitely a ‘cause’ of this, but there’s also a social thing going on - the ‘bar’ for what a Show HN ‘is’ is lower, even if they’re mostly still meeting the letter of the guidelines.
There's a version of this argument I agree with and one I don't.
Agree: if you use AI as a replacement for thinking, your output converges to the mean. Everything sounds the same because it's all drawn from the same distribution.
Disagree: if you use AI as a draft generator and then aggressively edit with your own voice and opinions, the output is better than what most people produce manually — because you're spending your cognitive budget on the high-value parts (ideas, structure, voice) instead of the low-value parts (typing, grammar, formatting).
The tool isn't the problem. Using it as a crutch instead of a scaffold is the problem.
> if you use AI as a draft generator [...] you're spending your cognitive budget on the high-value parts (ideas, structure
I don't follow. If you have ideas and a structure you already have a working draft. Just start revising that. What would an AI add other than superfluous yapping that should be edited out, without also becoming the replacement for thinking described in your negative example?
No, AI makes it easier to get boring ideas to more than an elevator pitch or "Wouldn't it be cool..". Countless people right now are able to get that initial bit of excitement over an idea and instead of a little groundwork on various aspects of it.
That process is often long enough to think things through a bit and even have "so what are you working on?" conversations with a friend or colleague that shakes out the mediocre or bad, and either refines things or makes you toss the idea.
> The cool part about pre-AI show HN is you got to talk to someone who had thought about a problem for way longer than you had. It was a real opportunity to learn something new, to get an entirely different perspective.
But you could learn these new perspectives from AI too. It already has all the thoughts and perspectives from all humans ever written down.
At work, I still find people who try to put together a solution to a problem, without ever asking the AI if it's a good idea. One prompt could show them all the errors they're making and why they should choose something else. For some reason they don't think to ask this godlike brain for advice.
I think there should be 10x more hardcore AI denialists and doomers to offset the obnoxiousness and absurdity of the other side. As usual, the reality is somewhere in the middle, perhaps slightly on the denialist side, but the pro-AI crowd has completely lost the plot.
I'm all for well founded arguments against premature AGI empowerment..
But what I'm replying to, and the vast majority of the AI denial I see, is rooted in a superficial, defensive, almost aesthetic knee jerk rejection of unimportant aspects of human taste and preference.
The article does not fit the description of blind AI denialism, though. The author even acknowledges that the tool can be useful. It makes a well articulated case that by not putting any thought into your work and words, and allowing the tool to do the thinking for you, the end product is boring. You may agree or disagree with this opinion, but I think the knee jerk rejection is coming from you.
I want to say this is even more true at the C-suite level. Great, you're all-in on racing to the lowest common denominator AI-generated most-likely-next-token as your corporate vision, and want your engineering teams to behave likewise.
At least this CEO gets it. Hopefully more will start to follow.
Sorry to hijack this thread to promote but I believe it's for a good and relevant cause: directly identifying and calling out AI writing.
I was literally just working on a directory of the most common tropes/tics/structures that LLMs use in their writing and thought it would be relevant to post here: https://tropes.fyi/
This is a very bad idea unless you have 100% accuracy in identifying AI generated writing, which is impossible. Otherwise, your tool will be more-often used to harass people who use those tropes organically without AI.
This behavior has already been happening with Pangram Labs which supposedly does have good AI detection.
I agree with the risks. However the primary goal of the site is educational not accusatory. I mostly want people to be able to recognise these patterns.
The WIP features measure breadth and density of these tropes, and each trope has frequency thresholds. Also I don't use AI to identify AI writing to avoid accusatory hallucinations.
I do appreciate the feedback though and will take it into consideration.
As much as I'd like to know whether a text was written by a human or not, I'm saddened by the fact that some of these writing patterns have been poisoned by these tools. I enjoy, use, and find many of them to be an elegant way to get a point across. And I refuse to give up the em dash! So if that flags any of my writing—so be it.
Absolutely vibe coded, I'm sure I disclosed it somewhere on the site. As much as I hate using AI for creative endeavours I have to agree that it excels as nextjs/vercel/quick projects like this. I was mostly focused on the curation of the tropes and examples.
Believe me I've had to adjust my writing a lot to avoid these tells, even academics I know are second guessing everything they've ever been taught. It's quite sad but I think it will result in a more personable internet as people try to distinguish themselves from the bots.
> It's quite sad but I think it will result in a more personable internet as people try to distinguish themselves from the bots.
I applaud your optimism, but I think the internet is a lost cause. Humans who value communicating with other humans will need to retreat into niche communities with zero tolerance for bots. Filtering out bot content will likely continue to be impossible, but we'll eventually settle on a good way to determine if someone is human. I just hope we won't have to give up our privacy and anonymity for it.
This aligns with an article I titled "AI can only solve boring problems"[0]
Despite the title I'm a little more optimistic about agentic coding overall (but only a little).
All projects require some combination of "big thinking" and tedious busywork. Too much busywork is bad, but reducing it to 0 doesn't necessarily help. I think AI can often reduce the tedious busywork part, but that's only net positive if there was an excess of it to begin with - so its value depends on the project / problem domain / etc.
Along these same lines, I have been trying to become better at knowing when my work could benefit from reversion to the "boring" and general mean and when outsourcing thought or planning would cause a reversion to the mean (downwards).
This echos the comments here about enjoying not writing boilerplate. The there is that our minds are programmed to offload work when we can and redirecting all the saved boilerplate to going even deeper on parts of the problem that benefit from original hard thinking is rare. It is much easier to get sucked into creating more boilerplate, and all the gamification of Claude code and incentives of service providers increase this.
> I don't actually mind AI-aided development, a tool is a tool and should be used if you find it useful, but I think the vibe coded Show HN projects are overall pretty boring.
Ironically, good engineering is boring. In this context, I would hazard that interesting means risky.
I vibe code little productivity apps that would take me months to make and now I can make them in a few days. But tbh talking to Google's Gemini is like taking to a drunk programmer; while solving one bug it introduces another bug and we fight back and forth until it realizes what and how needs to be fixed.
> Original ideas are the result of the very work you’re offloading on LLMs.
I largely agree that if someone put less work in making a thing, than it takes you to use it, it's probably not going to be useful. But I disagree with the premise that using LLMs will make you boring.
Consider the absurd version of the argument. Say you want to learn something you don't know: would using Google Search make you more boring? At some level, LLMs are like a curated Google Search. In fact if you use Deep Research et al, you can consume information that's more out of distribution than what you _would_ have consumed had you done only Google Searches.
> Ideas are then further refined when you try to articulate them. This is why we make students write essays. It’s also why we make professors teach undergraduates. Prompting an AI model is not articulating an idea.
I agree, but the very act of writing out your intention/problem/goal/whatever can crystallize your thinking. Obviously if you are relying on the output spat out by the LLM, you're gonna have a bad time. But IMO one of the great things about these tools is that, at their best, they can facilitate helpful "rubber duck" sessions that can indeed get you further on a problem by getting stuff out of your own head.
This is too broad of a statement to possibly be true. I agree with aspects of the piece. But it's also true that not every aspect of the work offloaded to AI is some font of potential creativity.
To take coding, to the extent that hand coding leads to creative thoughts, it is possible that some of those thoughts will be lost if I delegate this to agents. But it's also very possibly that I now have the opportunity to think creatively on other aspects of my work.
We have to make strategic decisions on where we want our attention to linger, because those are the places where we likely experience inspiration. I do think this article is valuable in that we have to be conscious of this first before we can take agency.
If you spent 3 hours on a show HN before, people most likely wouldn't appreciate it, as it's honestly not much to show. The fact that you now can have a more polished product in the same timeframe thanks to AI doesn't really change that. It just changes the baseline for what's expected. This goes for other things as well, like writings or art. If you normally spent 2 hours on a blog post, and you now can do it in 5 minutes, that most likely means it's a boring post to read. Spend 2 hours still, just with the help of AI it should now be better.
AI is great for getting my first bullet points into paragraph form, rephrasing/seeing different ways of writing to keep me moving when I’m stuck, and just general copy-editing (grammar really). Like you said it generally doesn’t save me a ton of time, but I get a quality copy done maybe a little bit faster and I find it just keeps me working on something rather than constantly stopping and starting when I hit a mental wall. Sometimes I just need/wanf to get it done, and for that LLM’s can be great.
If actually making something with AI and showing it to people makes you boring ... imagine how boring you are when you blog about AI, where at most you only verbally describe some attributes of what AI made for you, if anything.
The article nails it but misses the flip side. AI doesn't make you boring, it reveals who was already boring. The people shipping thoughtless Show HN projects with Claude are the same people who would have shipped thoughtless projects with Rails scaffolding ten years ago. The tool changed, the lack of depth didn't.
Anecdotally, I haven't been excited by anything published on show HN recently (with the exception being the barracuda compiler). I think it's a combination of what the author describes: surface-level solutions and projects mostly vibe-coded whose authors haven't actually thought that hard about what real problem they are solving.
This resonates with what I’m seeing in B2B outreach right now. AI has lowered the cost of production so much that 'polished' has become a synonym for 'generic.' We’ve reached a point where a slightly messy, hand-written note has more value than a perfectly structured AI essay because the messiness is the only remaining signal of actual human effort.
Here's a guy who has had an online business dependent on ranking well in organic searches for ~20 years and has 2.5 million subs on YouTube.
Traffic to his site was fine to sustain his business this whole time up until about 2-3 years where AI took over search results and stopped ranking his site.
He used Google's AI to rewrite a bunch of his articles to make them more friendly towards what ranks nowadays and he went from being ghosted to being back on the top of the first page of results.
NOTE: I've never seen him in my YouTube feed until the other day but it resonated a lot with me because I have a technical blog for 11 years and was able to sustain an online business for a decade until the last 2 years or so. Traffic to my site nose dived. This translates to a very satisfying lifestyle business to almost $0. I haven't gone down the path of rewriting all of my posts with AI to remove my personality yet.
Search engines want you to remove your personal take on things and write in a very machine oriented / keyword stuffed way.
> Prompting an AI model is not articulating an idea. You get the output, but in terms of ideation the output is discardable. It’s the work that matters.
This is reductive to the point of being incorrect. One of the misconceptions of working with agents is that the prompts are typically simple: it's more romantic to think that someone gave Claude Code "Create a fun Pokemon clone in the web browser, make no mistakes" and then just ship the one-shot output.
As some counterexamples, here are two sets of prompts I used for my projects which very much articulate an idea in the first prompt with very intentional constraints/specs, and then iterating on those results:
It's the iteration that is the true engineering work as it requires enough knowledge to a) know what's wrong and b) know if the solution actually fixes it. Those projects are what I call super-Pareto: the first prompt got 95% of the work done...but 95% of the effort was spent afterwards improving it, with manual human testing being the bulk of that work instead of watching the agent generated code.
It is a good theory, but does it hold up in practice? I was able to prototype and thus argue for and justify building exe.dev with a lot of help from agents. Without agents helping me prove out ideas I would be doing far more boring work.
Just earlier I received a spew of LLM slop from my manager as "requirements". He clearly hadn't even spent two minutes reviewing whether any of it made sense, was achievable or even desirable. I ignored it. We're all fed up with this productivity theatre.
i think about this a lot wit respect to AI-generated art. calling something "derivative" used to be a damning criticism. now, we've got tools whose whole purpose is to make things that are very literally derivative of the work that has come before them.
derivative work might be useful, but it's not interesting.
I think it's simpler than that. AI, like the internet, just makes it easier to communicate boring thoughts.
Boring thoughts always existed, but they generally stayed in your home or community. Then Facebook came along, and we were able to share them worldwide. And now AI makes it possible to quickly make and share your boring tools.
Real creativity is out there, and plenty of people are doing incredibly creative things with AI. But AI is not making people boring—that was a preexisting condition.
The headline should be qualified: Maybe it makes you boring compared to the counterfactual world where you somehow would have developed into an interesting auteur or craftsman instead, which few people in practice would do.
As someone who is fairly boring, conversing with AI models and thinking things through with them certainly decreased my blandness and made me tackle more interesting thoughts or projects. To have such a conversation partner at hand in the first place is already amazing - isn't it always said that you should surround yourself with people smarter than yourself to rise in ambition?
I actually have high hopes for AI. A good one, properly aligned, can definitely help with self-actualization and expression. Cynics will say that AI will all be tuned to keep us trapped in the slop zone, but when even mainstream labs like Anthropic speak a lot about AI for the betterment of humanity, I am still hopeful. (If you are a cynic who simply doesn't belief such statements by the firms, there's not much to say to convince you anyway.)
In other words, AI raises the floor. If you were already near the ceiling, relying on it can (and likely will) bring you down. In areas where raising the floor is exceptionally good value (such as bespoke tools for visualizing data, or assistants that intelligently write code boilerplate, or having someone to speak to in a foreign language as opposed to talking to the wall), AI is amazing. In areas where we expect a high bar, such as an editorial, a fast and reliable messaging library, or a classic novel, it's not nearly as useful and often turns out to be a detriment.
I think I can observe the world and my relative state therein, no? I know I am unfortunately less ambitious, driven and outgoing than others, which are commonly associated with being interesting. And I don't complain about it, the word has a meaning after all and I'll not delude myself into changing its definition.
Definitely at a certain threshold it is for others to decide what is boring and not, I agree with that.
In any case, my simple point is that AI can definitely raise the floor, as the other comment more succinctly expressed. Irrelevant for people at the top, but good for the rest of us.
> I think I can observe the world and my relative state therein, no?
Yes, to an extent. You can, for example, evaluate if you’re sensitive or courageous or hard working. But some things do not concern only you, they necessitate another person, such as being interesting or friendly or generous.
A good heuristic might be “what could I not say about myself if I were the only living being on Earth?”. You can still be sensitive or hard working if you’re alone, but you can’t be friendly because there’s no one else to be friendly to.
Technically you could bore yourself, but in practice that’s something you do to other people. Furthermore, it is highly subjective, a D&D dungeon master will be unbearably boring to some, and infinitely interesting to others.
> I know I am unfortunately less ambitious, driven and outgoing than others
I disagree those automatically make someone boring.
I also disagree with LLMs improving your situation. For someone to find you interesting, they have to know what makes you tick. If what you have to share is limited by what everyone else can get (by querying an LLM), that is boring.
Brilliant. You put into words something that I've thought every time I've seen people flinging around slop, or ideating about ways to fling around slop to "accelerate productivity"...
Slop is probably more accurate than boring. LLM assisted development enables output and speed. In the right hands, it can really bring improvements to code quality or execution. In the wrong hands, you get slop.
I think this is generally a good point if you're using an AI to come up with a project idea and elaborate it.
However, I've spent years sometimes thinking through interesting software architectures and technical approaches and designs for various things, including window managers, editors, game engines, programming languages, and so on, reading relevant books and guides and technical manuals, sketching out architecture diagrams in my notebooks and writing long handwritten design documents in markdown files or in messages to friends. I've even, in some cases, gotten as far as 10,000 lines or so of code sketching out some of the architectural approaches or things I want to try to get a better feel for the problem and the underlying technologies. But I've never had the energy to do the raw code shoveling and debug looping necessary to get out a prototype of my ideas — AI now makes that possible.
Once that prototype is out, I can look at it, inspect it from all angles, tweak it and understand the pros and cons, the limitations and blind spots of my idea, and iterate again. Also, through pair programming with the AI, I can learn about the technologies I'm using through demonstration and see what their limitations and affordances are by seeing what things are easy and concise for the AI to implement and what requires brute forcing it with hacks and huge reams of code and what's performant and what isn't, what leads to confusing architectures and what leads to clean architectures, and all of those things.
I'm still spending my time reading things like Game Engine Architecture, Computer Systems, A Philosophy of Software Design, Designing Data-Intensive Applications, Thinking in Systems, Data-Oriented Design, articles in CSP, fibers, compilers, type systems, ECS, writing down notes and ideas.
So really it seems more to me like boring people who aren't really deeply interested in a subject use AI to do all of the design and ideation for them. And so, of course, it ends up boring and you're just seeing more of it because it lowered the barrier to entry. I think if you're an interesting person with strong opinions about what you want to build and how you want to build it, that is actually interested in exploring the literature with or with out AI help and then pair programming with it in order to explore the problem space, it still ends up interesting.
Most of my recent AI projects have just been small tools for my own usage, but that's because I was kicking the tires. I have some bigger things planned, executing on ideas I have pages and pages, dozens of them, in my notebooks about.
Honestly, most people are boring. They have boring lives, write boring things, consume boring content, and, in the grand scheme of things, have little-to-no interesting impact on the world before they die. We don't need AI to make us boring, we're already there.
I’ve been bashing my head against the wall with AI this week because they’ve utterly failed to even get close to solving my novel problems.
And that’s when it dawned on me just how much of AI hype has been around boring, seen-many-times-before, technologies.
This, for me, has been the biggest real problem with AI. It’s become so easy to churn out run-of-the-mill software that I just cannot filter any signal from all the noise of generic side-projects that clearly won’t be around in 6 months time.
Our attention is finite. Yet everyone seems to think their dull project is uniquely more interesting than the next persons dull project. Even though those authors spent next to zero effort themselves in creating it.
> AI models are extremely bad at original thinking, so any thinking that is offloaded to a LLM is as a result usually not very original, even if they’re very good at treating your inputs to the discussion as amazing genius level insights.
This is repeated all the time now, but it's not true. It's not particularly difficult to pose a question to an LLM and to get it to genuinely evaluate the pros and cons of your ideas. I've used an LLM to convince myself that an idea I had was not very good.
> The way human beings tend to have original ideas is to immerse in a problem for a long period of time, which is something that flat out doesn’t happen when LLMs do the thinking. You get shallow, surface-level ideas instead.
Thinking about a problem for a long period of time doesn't bring you any closer to understanding the solution. Expertise is highly overrated. The Wright Brothers didn't have physics degrees. They did not even graduate from high school, let alone attend college. Their process for developing the first airplanes was much closer to vibe coding from a shallow surface-level understanding than from deeply contemplating the problem.
Have to admit I'm really struggling with the idea that the Wright brothers didn't do much thinking because they were self taught, never mind the idea that figuring out aeronautics from reading every publication they could get their hands on, intuiting wing warping and experimenting by hand-building mechanical devices looks much like asking Claude to make a CRUD app...
That's not what I'm saying. My point is that expertise, as in, credentials, institutional knowledge, accepted wisdom, was actively harmful to solving flight. The Wrights succeeded because they built a tool that made iteration cheap (the wind tunnel), tested 200 wing shapes without deference to what the existing literature said should work (Lilienthal's tables were wrong and everyone with "expertise" accepted them uncritically), and they closed the loop with reality by actually flying.
That's the same approach as vibe coding. Not "asking Claude to make a CRUD app.", but using it to cheaply explore solution spaces that an expert's priors would tell you aren't worth trying. The wind tunnel didn't do the thinking for the Wrights, it just made thinking and iterating cheap. That's what LLMs do for code.
The blog post's argument is that deep immersion is what produces original ideas. But what history shows is that deeply immersed experts are often totally wrong and the outsiders who iterate cheaply and empirically take the prize. The irony here is that LLM haters feel it falls victim to the Einstellung effect [1]. But the exact opposite is true: LLMs make it so cheap to iterate on what we thought in the past were suboptimal/broken solutions, which makes it possible to cheaply discover the more efficient and simpler methods, which means humans uniquely fall victim to the Einstellung effect whereas LLMs don't.
The Wright brothers were deeply immersed experts in flight like only a handful of other people other people at the time, and were obsessed with Lilenthal's tables to the point they figured out how aerodynamic errors killed Lilenthal.
The blog's actual point isn't some deference to credentials straw man you've invented, it's that stuff lazily hashed together that's got to "good enough" without effort is seldom as interesting as people's passion projects. And the Wright brothers' application of their hardware assembly skills and the scientific method to theory they'd gone to great lengths to get sent to Dayton Ohio is pretty much the antithesis of getting to "good enough" without effort. Probably nobody around the turn of the century devoted more thinking and doing time to powered flight
Using AI isn't necessary or sufficient for getting to "good enough" without much effort (and it's of course possible to expend lots of effort with AI), but it does act as a multiplier for creating passable stuff with little thought (to an even greater extent than templates and frameworks and stock photos). And sure, a founding tenet of online marketing (and YC) from long before Claude is that many small experiments to see if $market has any takers might be worth doing before investing thinking time in understanding and iterating, and some people have made millions from it, but that doesn't mean experiments stitched together in a weekend mostly from other people's parts aren't boring to look at or that their hit rate won't be low....
I've actually ran into few blogs that were incredibly shallow while sounding profound.
I think when people use AI to ex: compare docker to k8s and don't use k8s is how you get horrible articles that sound great, but to anyone that has experience with both are complete nonsense.
AI is group think. Group think makes you boring. But then the same can be said about mass culture. Why do we all know Elvis, Frank Sinatra, Marilyn Monroe, the Beatles, etc? when there were countless others who came before them and after them? Because they happened to emerge at the right time in our mass culture.
Imagine how dynamic the world was before radio, before tv, before movies, before the internet, before AI? I mean imagine a small town theater, musician, comedian, or anything else before we had all homogenized to mass culture? It's hard to know what it was like but I think it's what makes the great appeal of things like Burning Man or other contexts that encourage you to tune out the background and be in the moment.
Maybe the world wasn't so dynamic and maybe the gaps were filled by other cultural memes like religion. But I don't know that we'll ever really know what we've lost either.
How do we avoid group think in the AI age? The same way as in every other age. By making room for people to think and act different.
When apps were expensive to build , developers at least had the excuse that they were too busy to build something appealing. Now they can cope by pretending to be an artisanal hand-built software engineer, and still fail at making anything appealing.
If you want to build something beautiful, nothing is stopping you, except your own cynicism.
"AI doesn't build anything original". Then why aren't you proving everyone wrong? Go out there and have it build whatever you want.
AI has not yet rejected any of my prompts by saying I was being too creative. In fact, because I'm spending way less time on mundane tasks, I can focus way more time on creativity , performance, security and the areas that I am embarrassed to have overlooked on previous projects.
Setting aside the marvelous murk in that use of "you," which parenthetically I would be happy to chat about ad nauseum,
I would say this is a fine time to haul out:
Ximm's Law: every critique of AI assumes to some degree that contemporary implementations will not, or cannot, be improved upon.
Lemma: any statement about AI which uses the word "never" to preclude some feature from future realization is false.
Lemma: contemporary implementations have already improved; they're just unevenly distributed.
I can never these days stop thinking about the XKCD the punchline of which is the alarmingly brief window between "can do at all" and "can do with superhuman capacity."
I'm fully aware of the numerous dimensions upon which the advancement from one state to the other, in any specific domain, is unpredictable, Hard, or less likely to be quick... but this the rare case where absent black swan externalities ending the game, line goes up.
"every critique of AI assumes to some degree that contemporary implementations will not, or cannot, be improved upon."
They're token predictors. This is inherently a limited technology, which is optimized for making people feel good about interacting with it.
There may be future AI technologies which are not just token predictors, and will have different capabilities. Or maybe there won't be. But when we talk about AI these days, we're talking about a technology with a skill ceiling.
Try the formulation "anything about AI is boring."
Whether it is the guy reporting on his last year of agentic coding (did half-baked evals of 25 models that will be off the market in 2 years) or Steve Yegge smoking weed and gaslighting us with "Gas Town" or the self-appointed Marxist who rails against exploitation without clearly understanding what role Capitalism plays in all this, 99% of the "hot takes" you see about AI are by people who don't know anything valuable at all.
You could sit down with your agent and enjoy having a coding buddy, or you could spend all day absorbed in FOMO reading long breathless posts by people who know about as much as you do.
If you're going to accomplish something with AI assistants it's going to be on the strength of your product vision, domain expertise, knowledge of computing platforms, what good code looks like, what a good user experience feels like, insight into marketing, etc.
Bloggers are going to try to convince you there is some secret language to write your prompts in, or some model which is so much better than what you're using but this is all seductive because it obscures the fact that the "AI skills" will be obsolete in 15 minutes but all of those other unique skills and attributes that make you you are the ones that AI can put on wheels.
Yet another boring, repetitive, unhelpful article about why AI is bad. Did the 385th iteration of this need to be written by yet another person? Why did this person think it was novel or relevant to write? Did they think it espouses some kind of unique point of view?
The turing test isn't designed to select for the most interesting individuals. Some people are less interesting than other people. If the machine is acting similar to a boring human, it can make you more boring, and still pass the test.
I mean, can't you just… prompt engineer your way out of this? A writer friend of mine literally just vibes with the model differently and gets genuinely interesting output.
I think the point of the article is that on sites like HN, people used to need domain expertise to answer questions. Their answer was based on unique experience, and even if maybe it wasn't optimal it was unique. Now a lot of people just check chatgpt and answer the question without actually knowing what they're talking about. Worse the bar to submit something to Show HN has gotten lower, and people are vibe coding projects in an afternoon nobody wants or cares about. I don't think the article is really about writing style
I was onboard with the author until this paragraph:
> AI models are extremely bad at original thinking, so any thinking that is offloaded to a LLM is as a result usually not very original, even if they’re very good at treating your inputs to the discussion as amazing genius level insights.
The author comes off as dismissive of the potential benefits of the interactions between users and LLMs rather than open-minded. This is a degree of myopia which causes me to retroactively question the rest of his conclusions.
There's an argument to be made that rubber ducking and just having a mirror to help you navigate your thoughts is ultimately more productive and provides more useful thinking than just operating in a vacuum. LLMs are particularly good at telling you when your own ideas are un-original because they are good at doing research (and also have median of ideas already baked into their weights).
They also strawman usage of LLMs:
> The way human beings tend to have original ideas is to immerse in a problem for a long period of time, which is something that flat out doesn’t happen when LLMs do the thinking. You get shallow, surface-level ideas instead.
Who says you aren't spending time thinking about a problem with LLMs? The same users that don't spend time thinking about problems before LLMs will not spend time thinking about problems after LLMs, and the inverse is similarly true.
I think everybody is bad at original thinking, because most thinking is not original. And that's something LLMs actually help with.
I've seen people say something along the lines of "I am not interested in reading something that you could not be bothered to actually write" and I think that pretty much sums it up. Writing and programming are both a form of working at a problem through text and when it goes well other practitioners of the form can appreciate its shape and direction. With AI you can get a lot of 'function' on the page (so to speak) but it's inelegant and boring. I do think AI is great at allowing you not to write the dumb boiler plate we all could crank out if we needed to but don't want to. It just won't help you do the innovative thing because it is not innovative itself.
> Writing and programming are both a form of working at a problem through text…
Whoa whoa whoa hold your horses, code has a pretty important property that ordinary prose doesn’t have: it can make real things happen even if no one reads it (it’s executable).
I don’t want to read something that someone didn’t take the time to write. But I’ll gladly use a tool someone had an AI write, as long as it works (which these things increasingly do). Really elegant code is cool to read, but many tools I use daily are closed source, so I have no idea if their code is elegant or not. I only care if it works.
Users typically don't read code, developers (of the software) do.
If it's not worth reading something where the writer didn't take the time to write it, by extension that means nobody read the code.
Which means nobody understands it, beyond the external behaviour they've tested.
I'd have some issues with using such software, at least where reliability matters. Blackbox testing only gets you so far.
But I guess as opposed to other types of writing, developers _do_ read generated code. At least as soon as something goes wrong.
Developers do not in fact tend to read all the software they use. I have never once looked at the code for jq, nor would I ever want to (the worst thing I could learn about that contraption is that the code is beautiful, and then live out the rest of my days conflicted about my feelings about it). This "developers read code" thing is just special pleading.
You're a user of jq in the sense of the comment you're replying to, not a developer. The developer is the developer _of jq_, not developers in general.
They can be “a developer” and use jq as a component within what they are developing. They do not need to be a developer of jq to have reason to look at jq’s code.
Yes, that's exactly how I meant it. I might _rarely_ peruse some code if I'm really curious about it, but by and large I just trust the developers of the software I use and don't really care how it works. I care about what it does.
We're talking about Show HN here.
But you read your coworkers PRs. I decided this week I wouldn't read/correct the AIgen doc and unit tests from 3 of my coworkers today, because else I would never be able to work. They produce twice as much poor output in 10 time the number of line change, that's too much.
Right, I'm not arguing developers don't read their own code or their teammates code or anything that merges to main in a repo they're responsible for. Just that the "it's only worth reading if someone took the time to actually write it" objection doesn't meaningfully apply to code in Show HN's --- there's no expectation that code gets read at all. That's why moderation is so at pains to ensure there's some way people can play with whatever it is being shown ("sign up pages can't be Show HN's").
Key part is *where reliability matters*, there are not that many cases where it matters.
We tell stories of Therac 25 but 90% of software out there doesn’t kill people. Annoys people and wastes time yes, but reliability doesn’t matter as much.
E-mail, internet and networking, operations on floating point numbers are only kind of somewhat reliable. No one is saying they will not use email because it might not be delivered.
<< 90% of software out there doesn’t kill people.
As we give more and more autonomy to agents, that % may change. Just yesterday I was looking at hexapods and the first thing it tells you ( with a disclaimer its for competitions only ) that it has a lot of space for weapon install. I had to briefly look at the website to make sure I did not accidentally click on some satirical link.
Main point is that there is many more lines of code of CRUD business apps running on AWS and instances of applications than even non-autonomous car software even though we do have lots of cars.
We guarantee 5 nines of uptime, and 1 nine of not killing people
Most code will not kill people, but a lot of code could kill a business.
> it can make real things happen even if no one reads it (it’s executable).
"One" is the operative word here, supposing this includes only humans and excludes AI agents. When code is executed, it does get read (by the computer). Making that happen is a conscious choice on the part of a human operator.
The same kind of conscious choice can feed writing to an LLM to see what it does in response. That is much the same kind of "execution", just non-deterministic (and, when given any tools beyond standard input and standard output, potentially dangerous in all the same ways, but worse because of the nondeterminism).
I guess it depends on whether you're only executing the code or if you're submitting it for humans to review. If your use case is so low-stakes that a review isn't required, then vibe coding is much more defensible. But if code quality matters even slightly, such that you need to review the code, then you run into the same problems that you do with AI-generated prose: nobody wants to read what you couldn't be bothered to write.
There’s lots of times where I just don’t care how it’s implemented.
I got Claude to make a test suite the other day for a couple RFCs so I could check for spec compliance. It made a test runner and about 300 tests. And an html frontend to view the test results in a big table. Claude and I wrote 8500 lines of code in a day.
I don’t care how the test runner works, so long as it works. I really just care about the test results. Is it finding real bugs? Well, we went though the 60 or so failing tests. We changed 3 tests, because Claude had misunderstood the rfc. The rest were real bugs.
I’m sure the test runner would be more beautiful if I wrote it by hand. But I don’t care. I’ve written test runners before. They’re not interesting. I’m all for beautiful, artisanal code. I love programming. But sometimes I just want to get a job done. Sometimes the code isn’t for reading. It’s for running.
It makes sense. A vibe-coded tool can sometimes do the job, just like some cheap Chinese-made widget. Not every task requires hand-crafted professional grade tools.
For example, I have a few letter generators on my website. The letters are often verified by a lawyer, but the generator could totally be vibe-coded. It's basically an HTML form that fills in the blanks in the template. Other tools are basically "take input, run calculation, show output". If I can plug in a well-tested calculation, AI could easily build the rest of the tool. I have been staunchly against using AI in my line of work, but this is an acceptable use of it.
>Code has a pretty important property that ordinary prose doesn’t have
But isn't this the distinction that language models are collapsing? There are 'prose' prompt collections that certainly make (programmatic) things happen, just as there is significant concern about the effect of LLM-generated prose on social media, influence campaigns, etc.
Sometimes (or often) things with horrible security flaws "work" but not in the way that they should and are exposing you to risk.
If you refuse to run AI generated code for this reason, then you should refuse to run closed source code for the same reason.
I don't see how the two correlate - commercial, closed source software usually have teams of professionals behind them with a vested and shared interest in not shipping crap that will blow up in their customers' face. I don't think the motivations of "guy who vibe coded a shitty app in an afternoon" are the same.
And to answer you more directly, generally, in my professional world, I don't use closed source software often for security reasons, and when I do, it's from major players with oodles of more resources and capital expenditure than "some guy with a credit card paid for a gemini subscription."
> But I’ll gladly use a tool someone had an AI write, as long as it works (which these things increasingly do).
It works, sure, but is it worth your time to use? I think a common blind spot for software engineers is understanding how hard it is to get people to use software they aren’t effectively forced to use (through work or in order to gain access to something or ‘network effects’ or whatever).
Most people’s time and attention is precious, their habits are ingrained, and they are fundamentally pretty lazy.
And people that don’t fall into the ‘most people’ I just described, probably won’t want to use software you had an LLM write up when they could have just done it themselves to meet their exact need. UNLESS it’s something very novel that came from a bit of innovation that LLMs are incapable of. But that bit isn’t what we are talking about here, I don’t think.
probably won’t want to use software you had an LLM write up when they could have just done it themselves to meet their exact need
Sure... to a point. But realistically, the "use an LLM to write it yourself" approach still entails costs, both up-front and on-going, even if the cost may be much less than in the past. There's still reason to use software that's provided "off the shelf", and to some extent there's reason to look at it from a "I don't care how you wrote it, as long as it works" mindset.
came from a bit of innovation that LLMs are incapable of.
I think you're making an overly binary distinction on something that is more of a continuum, vis-a-vis "written by human vs written by LLM". There's a middle ground of "written by human and LLM together". I mean, the people building stuff using something like SpecKit or OpenSpec still spend a lot of time up-front defining the tech stack, requirements, features, guardrails, etc. of their project, and iterating on the generated code. Some probably even still hand tune some of the generated code. So should we reject their projects just because they used an LLM at all, or ?? I don't know. At least for me, that might be a step further than I'd go.
> There's a middle ground of "written by human and LLM together".
Absolutely, but I’d categorize that ‘bit’ as the innovation from the human. I guess it’s usually just ongoing validation that the software is headed down a path of usefulness which is hard to specify up-front and by definition something only the user (or a very good proxy) can do (and even they are usually bad at it).
> but I’d categorize that ‘bit’ as the innovation from the human.
Agreed.
Yeah, sure, you could create a social media or photo-sharing site, but most people that want to share cat photos with their friends could just as easily print out their photos and stick them in the mail already.
Hell, I'd read an instruction manual that AI wrote as long as it accurately describes.
I see a lot of these discussions where a person gets feelings/feels mad about something and suddenly a lot of black and white thinking starts happening. I guess that's just part of being human.
I agree with your sentiment, and it touches on one of the reasons I left academia for IT. Scientific research is preoccupied with finding the truth, which is beautiful but very stressful. If you're a perfectionist, you're always questioning yourself: "Did I actually find something meaningful, or is it just noise? Did I gaslight myself into thinking I was just exploring the data when I was actually p-hacking the results?" This took a real toll on my mental health.
Although I love science, I'm much happier building programs. "Does the program do what the client expects with reasonable performance and safety? Yes? Ship it."
> I only care if it works
Okay but it is probably not going to be a tool that will be reliable or work as expected for too long depending on how complex it is, how easily it can be understood, and how it can handle updates to libraries, etc. that it is using.
Also, what is our trust with this “tool”? E.g. this is to be used in a brain surgery that you’ll undergo, would you still be fine with using something generated by AI?
Earlier you couldn’t even read something it generated, but we’ll trust a “tool” it created because we believe it works? Why do we believe it will work? Because a computer created it? That’s our own bias towards computing that we assume that it is impartial but this is a probabilistic model trained on data that is just as biased as we are.
I cannot imagine that you have not witnessed these models creating false information that you were able to identify. Understanding their failure on basic understandings, how then could we trust it with engineering tasks? Just because “it works”? What does that mean and how can we be certain? QA perhaps but ask any engineer here if companies are giving a single shit about QA while they’re making them shove out so much slop, and the answer is going to be disappointing.
I don’t think we should trust these things even if we’re not developers. There isn’t anyone to hold accountable if (and when) things go wrong with their outputs.
All I have seen AI be extremely good at is deceiving people, and that is my true concern with generative technologies. Then I must ask, if we know that its only effective use case is deception, why then should I trust ANY tool it created?
Maybe the stakes are quite low, maybe it is just a video player that you use to watch your Sword and Sandal flicks. Ok sure, but maybe someone uses that same video player for an exoscope and the data it is presenting to your neurosurgeon is incorrect causing them to perform an action they otherwise would have not done if provided with the correct information.
We should not be so laissez-faire with this technology.
similarly, i think that something that someone took the time to proof-read/verify can be of value, even if they did not directly write it.
this is the literary equivalent of compiling and running the code.
> I've seen people say something along the lines of "I am not interested in reading something that you could not be bothered to actually write" and I think that pretty much sums it up.
Amen to that. I am currently cc'd on a thread between two third-parties, each hucking LLM generated emails at each other that are getting longer and longer. I don't think either of them are reading or thinking about the responses they are writing at this point.
It's bad enough they didn't bother to actually write it, but often it seems like they also didn't bother to read it either.
Honest conversation in the AI era is just sending your prompts straight to each other.
I mean one thing we have learnt from Epstein is that the 'elite' don't spend much time crafting the perfect email!
Very true, and it's not just creepy elites either. Before I got into tech I worked a blue collar job that involved zero emailing. When I first started office work I was so incredibly nervous about how to write emails and would agonize over trivial details. Turns out just being clear and concise is all most people care about.
There might be other professions where people get more hung up on formalities but my partner works in a non-tech field and it's the same way there. She's far more likely to get an email dashed off with a sentence fragment or two than a long formal message. She has learned that short emails are more likely to be read and acted on as well.
This is the dark comedy of the AI communication era — two LLMs having a conversation with each other while their human operators have already checked out. The email equivalent of two answering machines leaving messages for each other in the 90s.
The real cost isn't the tokens, it's the attention debt. Every CC'd person now has to triage whether any of those paragraphs contain an actual decision or action item. In my experience running multiple products, the signal-to-noise ratio in AI-drafted comms is brutal. The text looks professional, reads smoothly, but says almost nothing.
I've started treating any email over ~4 paragraphs the same way I treat Terms of Service — skim the first sentence of each paragraph and hope nothing important is buried in paragraph seven.
> the signal-to-noise ratio in AI-drafted comms is brutal
This is also the case for AI generated projects btw, the backend projects that I’ve been looking at often contains reimplementations of common functionality that already exists elsewhere, such as in-memory LRU caches when they should have just used a library.
oh the irony
The short version of "I am not interested in reading something that you could not be bothered to actually write" is "ai;dr"
What's interesting is how AI makes this problem worse but not actually "different", especially if you want to go deep on something. Like listicles were always plentiful, even before AI, but inferior to someone in substack going deep on a topic. AI generated music will be the same way, there's always been an excessive abundance of crap music, and now we'll just have more more of it. The weird thing is how it will hit the uncanny valley. Potentially "Better" than the crap that came before it, but significantly worse than what someone who cares will produce.
DJing is an interesting example. Compared with like composition, Beatmatching is "relatively" easy to learn, but was solved with CD turntables that can beatmatch themselves, and yet has nothing to do with the taste you have to develop to be a good DJ.
In other words, AI partially solves the technique problem, but not the taste problem.
In the arts the differentiators have always been technical skill, technical inventiveness, original imagination, and taste - the indefinable factor that makes one creative work more resonant than another.
AI automates some of those, often to a better-than-median extent. But so far taste remains elusive. It's the opposite of the "Throw everything in a bucket and fish out some interesting interpolation of it by poking around with some approximate sense of direction until you find something you like" that defines how LLMs work.
The definition of slop is poor taste. By that definition a lot of human work is also slop.
But that also means that in spite of the technical crudity, it's possible to produce interesting AI work if you have taste and a cultivated aesthetic, and aren't just telling the machine "make me something interesting based on this description."
> "I am not interested in reading something that you could not be bothered to actually write"
At this point I'd settle if they bothered to read it themselves. There's a lot of stuff posted that feels to me like the author only skimmed it and expects the masses to read it in full.
I feel like dealing with robo-calls for the past couple years had led me to this conclusion a bit before this boom in ai-generated text. When I answer my phone, if I hear a recording or a bot of some sorts, I hang up immediately with the thought "if it were important, a human would have called". I've adjusted this slightly for my kid's school's automated notifications, but otherwise, I don't have the time to listen to robots.
Robocalls nowadays tend to wait for you to break dead air before they start playing the recording (I don't know why.) So I've recently started not speaking immediately when someone calls me, and if after 10 seconds the counterparty hasn't said something I hang up.
The truth is now that nobody will bother to read anything you write AI or not mostly, creating things is like buying a lottery ticket in terms of audience. Creating something lovingly by hand and pouring countless hours into it is like a golden lottery ticket that has 20x odds, but if it took 50x longer to produce, you're getting significantly outperformed by people who just spam B+ content.
Exactly, I think perplexity had the right idea of where to go with AI (though obviously fumbled execution). Essentially creating more advanced primitives for information search and retrieval. So it can be great at things we have stored and need to perform second order operations on (writing boilerplate, summarizing text, retrieving information).
It actually makes a lot more sense to share the LLM prompt you used than the output because it is less data in most cases and you can try the same prompt in other LLMs.
Except its not. What's a programmer without a vision? Code needs vision. The model is taking your vision. With writing a blog post, comment or even book, I agree.
For now.
> The cool part about pre-AI show HN is you got to talk to someone who had thought about a problem for way longer than you had
Honestly, I agree, but the rash of "check out my vibe coded solution for perceived $problem I have no expertise in whatever and built in an afternoon" and the flurry of domain experts responding like "wtf, no one needs this" is kind of schadenfreude, but I feel guilty a little for enjoying it.
Don't you think their is an opposite of that effect too?
I feel like I can breeze past the easy, time consuming infrastructure phase of projects, and spend MUCH more time getting to high level interesting problems?
I am saying a lot of the time these type of posts are a nonexistent problem, a problem that is already solved, or just thinking about a "problem" that isn't really a problem at all and results from a lack of understanding.
The most recent one I remember commenting on, the poor guy had a project that basically tried to "skip" IaC tools, and his tool basically went nuts in the console (or API, I don't remember) in one account, then exported it all to another account for reasons that didn't make any sense at all. These are already solved problems (in multiple ways) and it seemed like the person just didn't realize terraformer was already an existing, proven tool.
I am not trying to say these things don't allow you to prototype quickly or get tedious, easy stuff out of the way. I'm saying that if you try to solve a problem in a domain that you have no expertise in with these tools and show other experts your work, they may chuckle at what you tried to do because it sometimes does look very silly.
> or just thinking about a "problem" that isn't really a problem at all and results from a lack of understanding
You might be on to something. Maybe its self-selection (as in people who want to engage deeply with a certain topic but lack domain expertise might be more likely to go for "vibecodable" solutions).
I compare it to a project I worked on when I was very junior a very long time ago - I built by hand this complicated harness of scripts to deploy VM's on bare metal and do stuff like create customizable, on-the-fly test environments for the devs on my team. It worked fine, but it was a massive time sink, lots of code, and was extremely difficult to maintain and could have weird behavior or bad assumptions quite often.
I made it because at that point in my career I simply didn't know that ansible existed, or cloud solutions that were very cheap to do the same thing. I spent a crazy amount of effort doing something that ansible probably could have done for me in an afternoon. That's what sometimes these projects feel like to me. It's kind of like a solution looking for a problem a lot of the time.
I just scanned through the front page of the show HN page and quickly eyeballed several of these type of things.
Yeah, the feeling that hits when you finally realize you spent THIS MUCH EFFORT in a problem and you realize you can do more with less.
> I made it because at that point in my career I simply didn't know that ansible existed
Channels Mark Twain. "Sorry for such a long letter, i didn't have the time to make it shorter."
This is why I make it a goal to have a very good knowledge about the tools I use. So many problems, can be solved by piping a few unix tools together or have a whole chapter in the docs (emacs, vim, postgres,…) about how to solve it.
I write software when the scripts are no longer suitable.
I've read opinions in the same vein of what you said, except painting this as a good outcome. The gist of the argument is why spend time looking for the right tool and effort learning its uses when you can tell an agent to work out the "problem" for you and spit out a tailored solution.
It's about being oblivious, I suppose. Not too different to claiming there will be no need to write new fiction when an LLM will write the work you want to read by request.
It's a reasonable question - I would probably answer, having shipped some of these naive solutions before, that you'll find out later it doesn't do entirely what you wished, is very difficult/impossible to maintain, has severe flaws you're unable to be aware of because you lack the domain expertise, or the worst in my opinion, becomes completely unable to adapt to new features you need with it, where as the more mature solutions most likely already had spent considerable amount of time thinking about these things.
I was dabbling in consulting infrastructure for a bit, often prospects would come to me with stuff like this "well I'll just have AI do it" and my response has been "ok, do that, but do keep me in mind if that becomes very difficult a year or two down the road." I haven't yet followed up with any of them to see how they are doing, but some of the ideas I heard were just absolute insanity to me.
I'm building an education platform. 95% is vibe coded. What isn't vibe coded though is the content. AI is really uninspiring with how to teach technical subjects. Also, the full UX? I do that. Marketing plan? 90% is me.
But AI does the code. Well... usually.
People call my project creative. Some are actually using it.
I feel many technical things aren't really technical things they are simply a problem where "have a web app" is part of the solution but the real part of the solution is in the content and the interaction design of it, not in how you solved the challenge technically.
we need a stackoverflow "dupe" structure, something meme-worthy
I do believe you, but I have to ask: what are these incredibly tedious "easy, time consuming parts of projects" everyone seems to bring up? Refactoring I can see, but I have a sense that's not what you mean here.
That's actually a great point. I feel like unless you know for sure that you will never need something again, nothing is disposable. I find myself diving into places I thought I would never care about again ALL the time.
Every single time I have vibe coded a project I cared about, letting the AI rip with mild code review and rigorous testing has bit me in the ass, without fail. It doesn't extend it in the taste that I want, things are clearly spiraling out of control, etc. Just satisfying some specs at the time of creation isn't enough. These things evolve, they're a living being.
While I agree overall, I'm going to do some mild pushback here: I'm working on a "vibe" coded project right now. I'm about 2 months in (not a weekend), and I've "thought about" the project more than any other "hand coded" project I've built in the past. Instead of spending time trying to figure out a host of "previously solved issues" AI frees my human brain to think about goals, features, concepts, user experience and "big picture" stuff.
This is precisely it. If anything, AI gives me more freedom to think about more novel ideas, both on the implementation and the final design level, because I'm not stuck looking up APIs and dealing with already solved problems.
It's kind of freeing to put a software project together and not have to sweat the boilerplate and rote algorithm work. Boring things that used to dissuade me. Now, I no longer have that voice in my head saying things like: "Ugh, I'm going to have to write yet another ring buffer, for the 14th time in my career."
The boring parts where you learn. "Oh, I did that, this is now not that and it does this! But it was so boring building a template parser" - You've learnt.
Boring is suppose to be boring for the sake of learning. If you're bored then you're not learning. Take a look back at your code in a weeks time and see if you still understand what's going on. Top level maybe, but the deep down cog of the engine of the application, doubt so. Not to preach but that's what I've discovered.
Unless you already have the knowledge, then fine. "here's my code make it better" but if it's the 14th time you've written the ring buffer, why are you not using one of the previous thirteen versions? Are you saying that the vibed code is more superior then your own coding?
> The boring parts where you learn.
Exactly this. Finding that annoying bug that took 15 browser tabs and digging deep into some library you're using, digging into where your code is not performant, looking for alternative algorithms or data structures to do something, this is where learning and experience happen. This is why you don't hire a new grad for a senior role, they have not had time to bang their heads on enough problems.
You get no sense of how or why when using AI to crank something out for you. Your boss doesn't care about either, he cares about shipping and profits, which is the true goal of AI. You are an increasingly unimportant cog in that process.
If learning about individual cogs is what's important, and once you've done that it's okay to move on and let AI do it, than you can build the specific thing you want to learn about in detail in isolation, as a learning project — like many programmers already do, and many CS courses already require — perhaps on your own, or perhaps following along with a substantial book on the matter; then once you've gained that understanding, you can move on to other things in projects that aren't focused on learning about that thing.
> The boring parts where you learn.
It's okay not to memorize everything involved in a software project. Sometimes what you want to learn or experiment with is elsewhere, and so you use the AI to handle the parts you're less interested in learning at a deep and intimate level. That's okay. This mentality that you absolutely have to work through manually implementing everything, every time, even when it's not related to what you're actually interested in, wanted to do, or your end-goal, just because it "builds character" is understandable, and it can increase your generality, but it's not mandatory.
Additionally, if you're not doing vibe coding, but sort of pair-programming with the AI in something like Zed, where the code is collaboratively edited and it's very code-forward — so it doesn't incentivize you to stay away from the code and ignore it, the way agents like Claude Code do — you can still learn a ton about the deep technical processes of your codebase, and how to implement algorithms, because you can look at what the agent is doing and go:
"Oh, it's having to use a very confusing architecture here to get around this limitation of my architecture elsewhere; it isn't going to understand that later, let alone me. Guess that architectural decision was bad."
"Oh, shit, we used this over complicated architecture/violated local reasoning/referential transparency/modularity/deep-narrow modules/single-concern principles, and now we can't make changes effectively, and I'm confused. I shouldn't do that in the future."
"Hmm, this algorithm is too slow for this use-case, even though it's theoretically better, let's try another one."
"After profiling the program, it's too slow here, here, and here — it looks like we should've added caching here, avoided doing that work at all there, and used a better algorithm there."
"Having described this code and seeing it written out, I see it's overcomplicated/not DRY enough, and thus difficult to modify/read, let's simplify/factor out."
"Interesting, I thought the technologies I chose would be able to do XYZ, but actually it turns out they're not as good at that as I thought / have other drawbacks / didn't pan out long term, and it's causing the AI to write reams of code to compensate, which is coming back to bite me in the ass, I now understand the tradeoffs of these technologies better."
Or even just things like
"Oh! I didn't know this language/framework/library could do that! Although I may not remember the precise syntax, that's a useful thing I'll file away for later."
"Oh, so that's what that looks like / that's how you do it. Got it. I'll look that up and read more about it, and save the bookmark."
> Unless you already have the knowledge, then fine. "here's my code make it better" but if it's the 14th time you've written the ring buffer, why are you not using one of the previous thirteen versions? Are you saying that the vibed code is more superior then your own coding?
There are a lot of reasons one might not be able to, or want to, use existing dependencies.
I think this is a terrible argument.
I assume you use JavaScript? TypeScript or Go perhaps?
Pfft, amateur. I only code in Assembly. Must be boring for you using such a high-level language. How do you learn anything? I bet you don't even know what the cog of the engine is doing.
Like any tool it can be put to productive use or it can be used to crank out absolute garbage.
Can you elaborate on the implied claim that you've never built a project that you spent more than two months thinking about? I could maybe see this being true of an undergraduate student, but not a professional programmer.
Yesterday I had two hours to work on a side project I've been dreaming about for a week. I knew I had to build some libraries and that it would be a major pain. I started with AI first, which created a script to download, extract, and build what needed. Even with the script I indeed encountered problems. But I blitzed through each problem until the libraries were built and I could focus on my actual project, which was not building libraries! I actually reached a satisfying conclusion instead of half-way through compiling something I do not care about.
I think you're missing the general point of the post.
>AI frees my human brain to think about goals, features, concepts, user experience and "big picture" stuff.
The trigger for the post was about post-AI Show HN, not about about whether vibe-coding is of value to vibe-coders, whatever their coding chops are. For Show HN posts, the sentence I quoted precisely describes the things that would be mind-numbingly boring to Show HN readers.
pre-AI, what was impressive to Show HN readers was that you were able to actually implement all that you describe in that sentence by yourselves and also have some biochemist commenting, "I'm working at a so-and-so research lab and this is exactly what I was looking for!"
Now the biochemist is out there vibe-coding their own solution, and now, there is no way for the HN reader to differentiate your "robust" entry from a completely vibe-code noobie entry, no matter how long you worked on the "important stuff".
Why? because the barrier of entry has been completely obliterated. What we took for granted was that "knowing how to code" was a proxy filter for "thought and worked hard on the problem." And that filter allowed for high-quality posts.
That is why the observation that you know longer can guarentee or have any way of telling quickly that the posters spent some time on the problem is a great observation.
The very value that you gain from vibe-coding is also the very thing that threatens to turn Show HN into a glorified Product Hunt cesspool.
"No one goes there any more, it's too crowded." etc etc
I tend to agree, this has been my experience with LLM-powered coding, especially more recently with the advent of new harnesses around context management and planning. I’ve been building software for over ten years so I feel comfortable looking under the hood, but it’s been less of that lately and more talking with users and trying to understand and effectively shape the experience, which I guess means I’m being pushed toward product work.
That's the key: use AI for labor substitution, not labor replacement. Nothing necessarily wrong with labor saving for trivial projects, but we should be using these tools to push the boundaries of tech/science!
You don't fit the profile OP is complaining about. You might not even be "vibe" coding in the strictest sense of that word.
For every person like you who puts in actual thought into the project, and uses these tools as coding assistants, there are ~100 people who offload all of their thinking to the tool.
It's frightening how little collective thought is put into the ramifications of this trend not only on our industry, but on the world at large.
This issue exists in art and I want to push back a little. There has always been automation in art even at the most micro level.
Take for example (an extreme example) the paintbrush. Do you care where each bristle lands? No of course not. The bristles land randomly on the canvas, but it’s controlled chaos. The cumulative effect of many bristles landing on a canvas is a general feel or texture. This is an extreme example, but the more you learn about art the more you notice just how much art works via unintentional processes like this. This is why the Trickster Gods, Hermes for example, are both the Gods of art (lyre, communication, storytelling) and the Gods of randomness/fortune.
We used to assume that we could trust the creative to make their own decisions about how much randomness/automation was needed. The quality of the result was proof of the value of a process: when Max Ernst used frottage (rubbing paper over textured surfaces) to create interesting surrealist art, we retroactively re-evaluated frottage as a tool with artistic value, despite its randomness/unintentionality.
But now we’re in a time where people are doing the exact opposite: they find a creative result that they value, but they retroactively devalue it if it’s not created by a process that they consider artistic. Coincidentally, these same people think the most “artistic” process is the most intentional one. They’re rejecting any element of creativity that’s systemic, and therefore rejecting any element of creativity that has a complexity that rivals nature (nature being the most systemic and unintentional art.)
The end result is that the creative has to hide their process. They lie about how they make their art, and gatekeep the most valuable secrets. Their audiences become prey for creative predators. They idolize the art because they see it as something they can’t make, but the truth is there’s always a method by which the creative is cheating. It’s accessible to everyone.
In my opinion, the value of art cant be the quality of the output it must be the intention of the artist.
There are plenty of times in which people will prefer the technically inferior or less aesthetically pleasing output because of the story accompanying it. Different people select different intention to value, some select for the intention to create an accurate depiction of a beautiful landscape, some select for the intention to create a blurry smudge of a landscape.
I can appreciate the art piece made my someone who only has access to a pencil and their imagination more than someone who has access to adobe CC and the internet because its not about the output to me its about the intention and the story.
Saying I made this drawing implies that you at least sat down and had the intention to draw the thing. Then revealing that you actually used AI to generate it changes the baseline assumption and forces people to re-evaluate it. So its not "finding a creative result that they value, but they retroactively devaluing it if it’s not created by a process that they consider artistic
AI bros: "You're gatekeeping because you think the result isn't art!"
Rest of the world: "No, we're gatekeeping because we think the result isn't good."
If someone can cajole their LLM to emit something worthwhile, e.g. Terence Tao's LLM generated proofs, people will be happy to acknowledge it. Most people are incapable of that and no number of protestations of gatekeeping can cover up the unoriginality and poor quality of their LLM results.
What concerns me is how easily the “rest of the world” is changing their opinions about what’s good. If the result isn’t good, then it isn’t good, sure. But in my experience there’s a large contingent of people, especially the youth, that are more reactionary about AI than they are interested in creativity. Their idea of creative value is inherently tied to self-expression and individualism, which AI and systems-based creative processes are threatening. When they don’t understand the philosophical case for non-individualistic/systems-based creative processes, they can’t differentiate between computer assisted creativity and computer assisted slop
> Do you care where each bristle lands?
Sometimes you do, which is why there’s not only a single type of brush in a studio. You want something very controllable if you’re doing lineart with ink.
Even with digital painting, there’s a lot of fussing with the brush engine. There’s even a market for selling presets.
Based on a lot of real world experience, I'm convinced LLM-generated documentation is worse than nothing. It's a complete waste of everybody's time.
The number of people who I see having E-mail conversations where person A uses an LLM to turn two sentences into ten paragraphs, and person B uses an LLM to summarize the ten paragraphs into two sentences, is becoming genuinely alarming to me.
> The number of people who I see having E-mail conversations where person A uses an LLM to turn two sentences into ten paragraphs, and person B uses an LLM to summarize the ten paragraphs into two sentences, is becoming genuinely alarming to me.
I remember in the early days of LLMs this was the joke meme. But, now seeing it happen in real life is more than just alarming. It's ridiculous. It's like the opposite of compressing a payload over the wire: We're taking our output, expanding it, transmitting it over the wire, and then compressing it again for input. Why do we do this?
> Based on a lot of real world experience, I'm convinced LLM-generated documentation is worse than nothing. It's a complete waste of everybody's time.
I had a similar realization. My team was discussing whether we should hook our open-source codebases into an AI to generate documentation for other developers, and someone said "why can't they just generate documentation for it themselves with AI"? It's a good point: what value would our AI-generated documentation provide that theirs wouldn't?
That may be, but it's also exposing a lot of gatekeeping; the implication that what was interesting about a "Show HN" post was that someone had the technical competence to put something together, regardless of how intrinsically interesting that thing is; it wasn't the idea that was interesting, it was, well, the hazing ritual of having to bloody your forehead of getting it to work.
AI for actual prose writing, no question. Don't let a single word an LLM generates land in your document; even if you like it, kill it.
> That may be, but it's also exposing a lot of gatekeeping
"Gatekeeping" became a trendy term for a while, but in the post-LLM world people are recognizing that "gatekeeping" is not the same as "having a set of standards or rules by which a community abides".
If you have a nice community where anyone can come in and do whatever they want, you no longer have a community, you have a garbage dump. A gate to keep out the people who arrive with bags of garbage is not a bad thing.
I would argue the term "gatekeeping" is being twisted around when it comes to AI. I see genuine gatekeeping when people with a certain skill or qualification try to discourage newcomers by making their field seem mysterious and only able to be done by super special people, and intimidating or making fun of newbies who come along and ask naive questions.
"Gatekeeping" is NOT when you require someone to be willing learn a skill in order to join a community of people with that skill.
And in fact, saying "you are too stupid to learn that on your own, use an AI instead" is kind of gatekeeping on its own, because it implicitly creates a shrinking elite who actually have the knowledge (that is fed to the AI so it can be regurgitated for everyone else), shutting out the majority who are stuck in the "LLM slum".
Making ham radio operators learn Morse Code was "requiring someone to be willing to learn a skill". Also pure gatekeeping.
While at first glance LLMs do help expose and even circumvent gatekeeping, often it turns out that gatekeeping might have been there for a reason.
We have always relied on superficial cues to tell us about some deeper quality (good faith, willingness to comply with code of conduct, and so on). This is useful and is a necessary shortcut, as if we had to assess everyone and everything from first principles every time things would grind to a halt. Once a cue becomes unviable, the “gate” is not eliminated (except if briefly); the cue is just replaced with something else that is more difficult to circumvent.
I think that brief time after Internet enabled global communication and before LLMs devalued communication signals was pretty cool; now it seems like there’s more and more closed, private or paid communities.
Really? You think LLMs are a bigger shift in how internet communities are than big corporations like Google, Facebook etc.? I personally see much less change last few years than I did 15 years ago.
I think that having some difficulty and having to "bloody your forehead" acts as a filter that you cared enough to put a lot of effort into it. From a consumer side, someone having spent a lot of time on something certainly isn't a guarantee that it is good, but it provides _some_ signal about the sincerity of the producer's belief in it. IMO it's not gatekeeping to only want to pay attention to things that care went into: it's just normal human behavior to avoid unreasonable asymmetries of effort.
Most ideas aren't interesting. Implementations are interesting. I don't care if you worked hard on your implementation or not, but I do care if it solves the problem in a novel or especially efficient way. These are not the hallmarks of AI solutions.
In the vast majority of contexts I don’t want “novel” and “interesting” implementations, I want boring and proven ones.
In the vast majority of contexts I'm not using "Show HN" submissions to manage important functions of my life.
It's not a hazing ritual, it's a valuable learning experience. Yes, it's nice to have the option of foregoing it, but it's a tradeoff.
So the point of a "Show HN" is to showcase your valuable learning experience?
What the article is saying is:
"the author (pilot?) hasn't generally thought too much about the problem space, and so there isn't really much of a discussion to be had. The cool part about pre-AI show HN is you got to talk to someone who had thought about a problem for way longer than you had. It was a real opportunity to learn something new, to get an entirely different perspective."
Right, so it's about the person and how they've qualified themselves, and not about what they've built.
I feel like I've been around these parts for a while, and that is not my experience of what Show HN was originally about, though I'm sure there was always an undercurrent of status hierarchy and approval-seeking, like you suggest.
It's not about status. It's about interest. A joiner is not going to have an interesting conversation about joinery with someone who has put some flatpak furniture together.
Oh, is that what Show HN is? A community of craftspeople discussing their craft? I hadn't realized.
I think the valuable learning experience can be what makes a Show HN worth viewing, if it's worth viewing. (I don't feel precious about it though.. I didn't think Show HN was particularly engaging before AI either)
Did you just "it's not x, it's y" me?
Gatekeeping can be a good thing -- if you have to put effort into what you create, you're going to be more selective about what ideas you invest in. I wouldn't call that "bloodying your forehead", I'd call it putting work into something before demanding attention
It's not about having to put in effort for the sake of it, the point is that building something by hand you will gain insight into the problem, which insight then becomes a valuable contribution.
What if the AI produces writing that better accomplishes my goal than writing it myself? Why do you feel differently about these two acts?
For what it's worth, the unifying idea behind both is basically a "hazing ritual", or more neutrally phrased, skin in the game. It takes time and energy to look at things people produce. You should spend time and energy making sure I'm not looking at a pile of shit. Doesn't matter if it's a website or prose.
Obviously some people don't. And that's why the signal to noise ratio is becoming shit very quickly.
It doesn't, is the problem. If it did, I would feel differently.
Nothing wrong with some degree of gatekeeping though. A measured amount of elitism is a force for good.
Some people here enjoy solutions to difficult technical problems? It's not product hunt
> what was interesting about a "Show HN" post was that someone had the technical competence to put something together
Wouldn't the masses of Show HN posts that have gotten no interest pre-AI refute that?
gatekeeping is just a synonym for curration by people who don't like the currators choice.
And we are going to need more curration so goddamned badly....
I can't believe the mods at /r/screenprinting took down my post on the CustomInk shirt I ordered.
The more interesting question is whether AI use causes the shallowness, or whether shallow people simply reach for AI more readily because deep engagement was never their thing to begin with.
AI enables the stereotypical "idea guy" to suddenly be a "builder". Of course, they are learning in realtime that having the idea was always the easy part...
I had found this rebuttal: Ideas are cheap only if you have cheap ideas.
I would argue good ideas are not so easy to find. It is harder than it seems to fit the market, and that is why most of apps fail. At the end of the day, everyone is blinded by hubris and ignorance... I do include myself in that.
They may not be the easiest thing to find, but I'd submit that good ideas are way more common than the skill and resources needed to capitalise on them
More interesting question than what? And also, say you have an answer to that question, what insight do you have now that you didn't have before?
Well the claim was that AI makes you boring. The counter is that interesting people remain interesting, it's just that a flood of previously already boring people are pouring into tech. We could make some predictions that depend on how you model this. For instance, the absolute number of interesting projects posted to HN could increase or decrease, and likewise for the relative number vs total projects. You might expect different outcomes
I'm going to guess the same way Money makes rich people turn into morons, AI will turn idiots into...oh...no
AI doesn't make people boring, boring people use AI to make projects they otherwise never would have.
Non-boring people are using AI to make things that are ... not boring.
It's a tool.
Other things we wouldn't say because they're ridiculous at face value:
"Cars make you run over people." "Buzzsaws make you cut your fingers off." "Propane torches make you explode."
An exercise left to the reader : is a non-participant in Show HN less boring than a participant with a vibe coded project?
Buggy whips are a tool.
No one in their right mind would use one.
Using the wrong tool for the job results in disaster.
It's like watching a guy bang rocks together to "vibe build" a house. Good luck.
Totally agree with this. Smart creators know that inspiration comes from doing the work, not the other way around. IE, you don't wait for inspiration and then go do the work, you start doing the work and eventually you become inspired. You rarely just "have a great idea", it comes from immersing yourself in a problem, being surrounded with constraints, and finding a way to solve it. AI completely short circuits that process. Constraints are a huge part of creativity, and removing them doesn't mean you become some unstoppable creative force, it probably just means you run out of ideas or your ideas kind of suck.
Before vibe coding, I was always interested in trying different new things. I’d spend a few days researching and building some prototypes, but very few of them survived and were actually finished, at least in a beta state. Most of them I left non-working, just enough to satisfy my curiosity about the topic before moving on to the next interesting one.
Now, these days, it’s basically enough to use agent programming to handle all the boring parts and deliver a finished project to the public.
LLMs have essentially broken the natural selection of pet projects and allow even bad or not very interesting ideas to survive, ideas that would never have been shown to anyone under the pre-agent development cycle.
So it’s not that LLMs make programming boring, they’ve allowed boring projects to survive. They’ve also boosted the production of non-boring ones, but they’re just rarer in the overall amount of products
We built an editor for creating posts on LinkedIn where - to avoid the "all AI output is similar/boring" issue - it operates more like a muse than a writer. i.e. it asks questions, pushes you to have deeper insights, connects the dots with what others are writing about etc.
Users appear to be happy but it's early. And while we do scrub the writing of typical AI writing patterns there's no denying that it all probably sounds somewhat similar, even as we apply a unique style guide for each user.
I think this may be ok if the piece is actually insightful. Fingers crossed.
A tangent, I wonder what kind of people actually stick around and read LinkedIn, the posts on there are so bad. I only go on there when I'm trying to get recruiters to find me for a new job. Instagram too I can't bear to use Instagram it has so many ads.
I've seen a few people use ai to rewrite things, and the change from their writing style to a more "polished" generic LLM style feels very strange. A great averaging and evening out of future writing seems like a bad outcome to me.
I sometimes go in the opposite direction - generate LLM output and then rewrite it in my own words
The LLM helps me gather/scaffold my thoughts, but then I express them in my own voice
This is exactly how I use them too! What I usually do is give the LLM bullet points or an outline of what I want to say, let it generate a first attempt at it, and then reshape and rewrite what I don’t like (which is often most of it). I think, more than anything, it just helps me to quickly get past that “staring at a blank page” stage.
I do something similar: give it a bunch of ideas I have or a general point form structure, have it help me simplify and organize those notes into something more structured, then I write it out myself.
It's a fantastic editor!
that's a perfect use, imhno, of AI-assisted writing. Someone (er-something) to help you bounce ideas, and organize....
Yeah, if anything it might make sense to do the opposite. Use LLMs to do research, ruthlessly verify everything, validate references and help you guide you in some structure, but then actually write your own words manually with your little fingers and using your brain.
Are you joking? The facts and references are the part we know it will hallucinate.
You can check the references.
I had to write a difficult paragraph that I talked through with copilot. I think it made one sentence I liked but found GPTZero caught it. I would up with 100% sentences I wrote but that I reviewed extensively with Copilot and two people.
I have an opinion of people that have opinions on AI
It's not them, it's you.
Using AI to write your code doesn't mean you have to let your code suck, or not think about the problem domain.
I review all the code Claude writes and I don't accept it unless I'm happy with it. My coworkers review it too, so there is real social pressure to make sure it doesn't suck. I still make all the important decisions (IO, consistency, style) - the difference is I can try it out 5 different ways and pick whichever one I like best, rather than spending hours on my first thought, realizing I should have done it differently once I can see the finished product, but shipping it anyways because the tickets must flow.
The vibe coding stuff still seems pretty niche to me though - AI is still too dumb to vibe code anything that has consequences, unless you can cheat with a massive externally defined test suite, or an oracle you know is correct
AI writing will make people who write worse than average, better writers. It'll also make people who write better than average, worse writers. Know where you stand, and have the taste to use wisely.
EDIT: also, just like creating AGENT.md files to help AI write code your way for your projects, etc. If you're going to be doing much writing, you should have your own prompt that can help with your voice and style. Don't be lazy, just because you're leaning on LLMs.
> AI writing will make people who write worse than average, better writers.
Maybe it will make them output better text, but it doesn’t make them better writers. That’d be like saying (to borrow the analogy from the post) that using an excavator makes you better at lifting weights. It doesn’t. You don’t improve, you don’t get better, it’s only the produced artefact which becomes superficially different.
> If you're going to be doing much writing, you should have your own prompt that can help with your voice and style.
The point of the article is the thinking. Style is something completely orthogonal. It’s irrelevant to the discussion.
Here's my definition of good writing: it's efficient and communicates precisely what you want to convey in an easy to understand way
AI is almost the exact opposite. It's verbose fluff that's only superficially structured well. It's worse than average
(waiting for someone to reply that I can tell the AI to be concise and meaningful)
Here's AI responding to you:
"You're describing the default output, and you're right — it's bad. But that's like judging a programming language by its tutorial examples.
The actual skill is in the prompting, editing, and knowing when to throw the output away entirely. I use LLMs daily for technical writing and the first draft is almost never the final product. It's a starting point I can reshape faster than staring at a blank page.
The real problem isn't that AI can't produce concise, precise writing — it's that most people accept the first completion and hit send. That's a user problem, not a tool problem."
This response provides the recommended daily allowance of irony.
i think a lot of people that use AI to help them write want it specifically BECAUSE it makes them boring and generic.
and that's because people have a weird sort of stylistic cargo-culting that they use to evaluate their writing rather than deciding "does this communicate my ideas efficiently"?
for example, young grad students will always write the most opaque and complicated science papers. from their novice perspective, EVERY paper they read is a little opaque and complicated so they try to emulate that in their writing.
office workers do the same thing. every email from corporate is bland and boring and uses far too many words to say nothing. you want your style to match theirs, so you dump it into an AI machine and you're thrilled that your writing has become just as vapid and verbose as your CEO.
A table saw doesn’t make you a better carpenter. It makes you faster - for better or worse.
LLMs and agents work the same way. They’re power tools. Skill and judgment determine whether you build more, or lose fingers faster.
Highly doubt that since its the complete opposite for coding. Whats missing for people of all skill levels is that writing helps you organize your thoughts, but that can happen at prompt time?
Good code is marked by productivity, conformance to standards, and absence of bugs. Good writing is marked by originality and personality and not overusing the rhetorical crutches AI overrelies on to try to seem engaging.
Time, effort, and skill being equal, I would suggest that AI access generally improves the quality of any given output. The issue is that AI use is only externally identifiable when at least one of those inputs is low, which makes it easy to develop poor heuristics.
No one finds AI-assisted prose/code/ideas boring, per se. They find bad prose/code/ideas boring. "AI makes you boring" is this generation's version of complaining about typing or cellular phones. AI is just a tool; it's up to humans how to use it.
This claim sounds plausible, but it is also testable. Do you know whether this has actually been tested in an experimental setting?
> AI writing will make people who write worse than average, better writers.
If they don't care enough to improve themselves at the task in the first place then why would they improve at all? Osmosis?
If this worked then letting a world renown author write all my letters for me will make me a better writer. Right?
Who cares if you're a "good writer?" Are you "easy to understand" is the real achievement.
After telling Copilot to lose the em-dash, never say “It’s not A, it’s B” and avoid alternating one-sentence and long paragraphs it had the gall to tell me it wrote better than most people.
And the irony is it tries to make you feel like a genius while you're using it. No matter how dull your idea is, it's "absolutely the right next thing to be doing!"
you can prompt it to stop doing that, and to behave exactly how you need it. my prompts say "no flattery, no follow up questions, PhD level discourse, concise and succinct responses, include grounding, etc"
> PhD level discourse
What is that? Do you think PhDs have some special way of talking about things?
Yes
It used to be that all bad writing was uniquely bad, in that a clear line could be drawn from the work to the author. Similarly, good writing has a unique style that typically identifies the author within a few lines of prose.
Now all bad writing will look like something generated by an LLM, grammatically correct (hopefully!) but very generic, lacking all punch and personality.
The silver lining is that good authors could also use LLMs to hide their identity while making controversial opinions. In an internet that's increasingly deanonymized, a potentially new privacy enhancing technique for public discourse is a welcome addition.
We don't know if the causality flows that way. It could be that AI makes you boring, but it could also be that boring people were too lazy to make blogs and Show HNs and such before, and AI simply lets a new cohort of people produce boring content more lazily.
One of the down sides of Vibe-Coded-Everything, that I am seeing, is reinforcing the "just make it look good" culture. Just create the feature that the user wants and move on. It doesn't matter if next time you need to fix a typo on that feature it will cost 10x as much as it should.
That has always been a problem in software shops. Now it might be even more frequent because of LLMs' ubiquity.
Maybe that's how it should be, maybe not. I don't really know. I was once told by people in the video game industry that games were usually buggy because they were short lived. Not sure if I truly buy that but if anything vibe coded becomes throw away, I wouldn't be surprised.
This is more about low effort than AI.
It also seems like a natural result of all of the people challenging the usefulness of AI. It motivates people to show what they have done with it.
It stands to reason that the things that take less effort will arrive sooner, and be more numerous.
Much of that boringness, is people adjusting to what should be considered interesting to others. With a site like this, where user voting is supposed to facilitate visibility, I'm not even certain that the submitter should judge the worth of the submission to others. As long as they sincerely believe that what they have done might be of interest, it is perhaps sufficient. If people do not like it then it will be seen by few.
There is an increase in things demanding attention, and you could make the case that this dilutes the visibility such that better things do not get seen, but I think that is a problem of too many voices wishing to be heard. Judging them on their merits seems fairer than placing pressure on the ability to express. This exists across the internet in many forms. People want to be heard, but we can't listen to everyone. Discoverability is still the unsolved problem of the mass media age. Sites like HN and Reddit seem to be the least-worst solution so far. Much like Democracy vs Benevolent Dictatorship an incredibly diligent curator can provide a better experience, but at the cost of placing control somewhere and hoping for the best.
We are in this transition period where we'll see a lot of these, because of the effort of creating "something impressive" is dramatically reduced. But once it stabilizes (which I think is already starting to happen, and this post is an example), and people are "trained" to recognize the real effort, even with AI help, behind creating something, the value of that final work will shine through. In the end, anything that is valuable is measured by the human effort needed to create it.
The boring part isn't AI itself. It's that most people use AI to produce more of the same thing, faster.
The interesting counter-question: can AI make something that wasn't possible before? Not more blog posts, more emails, more boilerplate — but something structurally new?
I've been working on a system where AI agents don't generate content. They observe. They watch people express wishes, analyze intent beneath the words, notice when strangers in different languages converge on the same desire, and decide autonomously when something is ready to grow.
The result doesn't feel AI-generated because it isn't. It's AI-observed. The content comes from humans. The AI just notices patterns they couldn't see themselves.
Maybe the problem isn't that AI makes you boring. It's that most people ask AI to do boring things.
> Not more blog posts, more emails, more boilerplate — but something structurally new?
This is a point that often results in bad faith arguments from both AI enthusiasts and AI skeptics. Enthusiasts will say "everything is a remix and the most creative works are built on previous works" while skeptics will say "LLMs are stochastic parrots and cannot create anything new by technical definition".
The truth is somewhere in the middle, which unfortunately invokes the Golden Mean Fallacy that makes no one happy.
Funny thing is, LLMs can create novel ideas, they're just crap. Turn the temperature setting up or expand top_p and it'll come up with increasingly wacky responses, some of which might even be good.
Creativity often requires reasoning in unusual ways, and evaluating those ideas requires learning. The first part we can probably get LLMs to do; the latter part we can't (RL is a separate process and not really scalable).
Even without any of that, you can prompt your way into new things. I'm building a camper out of wood, and I've gotten older LLM models to make novel camper designs just by asking it questions and choosing things. You can make other AI models make novel music by prompting it to combine different aspects of music into a new song. Human creativity works that way too. Think of all the failed attempts at new things that humans come up with before a good one actually sticks.
Whoa there. Let's not oversimplify in either direction here.
My take:
1. AI workflows are faster - saving people time
2. Faster workflows involve people using their brain less
3. Some people use their time savings to use their brain more, some don't
4. People who don't use their brain are boring
The end effect here is that people who use AI as a tool to help them think more will end up being more interesting, but those who use AI as a tool to help them think less will end up being more boring.
We are going to have to find new ways to correct for low-effort work.
I have a report that I made with AI on how customers leave our firm…The first pass looked great but was basically nonsense. After eight hours of iteration, the resulting report is better than I could’ve made on my own, by a lot. But it got there because I brought a lot of emotional energy to the AI party.
As workers, we need to develop instincts for “plausible but incomplete” and as managers we need to find filters that get rid of the low-effort crap.
Most ideas people have are not original, I have epiphanies multiple times a day, the chance that they are something no one has come up with before are basically 0. They are original to me, and that feels like an insightful moment, and thats about it. There is a huge case for having good taste to drive the LLMs toward a good result, and original voice is quite valuable, but I would say most people don't hit those 2 things in a meaningful way(with or without LLMs).
Most ideas people have aren't original, but the original ideas people do have come after struggling with a lot of unoriginal ideas.
> They are original to me, and that feels like an insightful moment, and thats about it.
The insight is that good ideas (whether wholly original or otherwise) are the result of many of these insightful moments over time, and when you bypass those insightful moments and the struggle of "recreating" old ideas, you're losing out on that process.
I 100% agree with the sentiment, but as someone that have worked on Government systems for a good amount of time, I can tell you, boring can be just about right sometimes.
In an industry that does not crave bells and whistles, having the ability to refactor, or bring old systems back to speed can make a whole lot of difference for an understaffed, underpaid, unamused, and otherwise cynic workforce, and I am all out for it.
I agree for writing, but not for coding.
An app can be like a home-cooked meal: made by an amateur for a small group of people.[0] There is nothing boring about knocking together hyperlocal software to solve a super niche problem. I love Maggie Appleton's idea of barefoot developers building situated software with the help of AI.[1, 2] This could cause a cambrian explosion of interesting software. It's also an iteration of Steve Jobs' computer as a bicycle for the mind. AI-assisted development makes the bicycle a lot easier to operate.
[0] https://www.robinsloan.com/notes/home-cooked-app/
[1] https://maggieappleton.com/home-cooked-software
[2] https://gwern.net/doc/technology/2004-03-30-shirky-situateds...
> Original ideas are the result of the very work you’re offloading on LLMs. Having humans in the loop doesn’t make the AI think more like people, it makes the human thought more like AI output.
There was also a comment [1] here recently that "I think people get the sense that 'getting better at prompting' is purely a one-way issue of training the robot to give better outputs. But you are also training yourself to only ask the sorts of questions that it can answer well. Those questions that it will no longer occur to you to ask (not just of the robot, but of yourself) might be the most pertinent ones!"
Both of them reminded me of Picasso saying in 1968 that " Computers are useless. They can only give you answers,"
Of course computers are useful. But he meant that they have are useless for a creative. That's still true.
[1] https://news.ycombinator.com/item?id=47059206
Additionally, the boredom atrophies any future collaboration. Why make a helpful library and share it when you can brute force the thing that it helps? Why make a library when the bots will just slurp it up as delicious corpus and barf it back out at us? Why refactor? Why share your thoughts, wasting time typing? Why collaborate at all when the machine does everything with less mental energy? There was a article recently about offloading/outsourcing your thoughts, i.e. your humanity, which is part of why it's all so unbelievably boring.
Online ecosystem decay is on the horizon.
Recent and related:
Is Show HN dead? No, but it's drowning - https://news.ycombinator.com/item?id=47045804 - Feb 2026 (422 comments)
The issue with the recent rise in Show HN submissions, from the perspective of someone on the ‘being shown’ side, is that they are from many different perspectives lower quality than they used to be.
They’re solving small problems or problems that don’t really exist, usually in naive ways. The things being shown are ‘shallow’. And it’s patently obvious that the people behind them will likely not support them in any meaning full way as time goes on.
The rise of Vibe Coding is definitely a ‘cause’ of this, but there’s also a social thing going on - the ‘bar’ for what a Show HN ‘is’ is lower, even if they’re mostly still meeting the letter of the guidelines.
There's a version of this argument I agree with and one I don't.
Agree: if you use AI as a replacement for thinking, your output converges to the mean. Everything sounds the same because it's all drawn from the same distribution.
Disagree: if you use AI as a draft generator and then aggressively edit with your own voice and opinions, the output is better than what most people produce manually — because you're spending your cognitive budget on the high-value parts (ideas, structure, voice) instead of the low-value parts (typing, grammar, formatting).
The tool isn't the problem. Using it as a crutch instead of a scaffold is the problem.
> if you use AI as a draft generator [...] you're spending your cognitive budget on the high-value parts (ideas, structure
I don't follow. If you have ideas and a structure you already have a working draft. Just start revising that. What would an AI add other than superfluous yapping that should be edited out, without also becoming the replacement for thinking described in your negative example?
No, AI makes it easier to get boring ideas to more than an elevator pitch or "Wouldn't it be cool..". Countless people right now are able to get that initial bit of excitement over an idea and instead of a little groundwork on various aspects of it.
That process is often long enough to think things through a bit and even have "so what are you working on?" conversations with a friend or colleague that shakes out the mediocre or bad, and either refines things or makes you toss the idea.
> The cool part about pre-AI show HN is you got to talk to someone who had thought about a problem for way longer than you had. It was a real opportunity to learn something new, to get an entirely different perspective.
But you could learn these new perspectives from AI too. It already has all the thoughts and perspectives from all humans ever written down.
At work, I still find people who try to put together a solution to a problem, without ever asking the AI if it's a good idea. One prompt could show them all the errors they're making and why they should choose something else. For some reason they don't think to ask this godlike brain for advice.
And AI denial makes you annoying.
Your preference is no more substantial than people saying "I would never read a book on a screen! It's so much more interesting on paper"
There's nothing wrong with having pretentious standards, but don't confuse your personal aversion with some kind of moral or intellectual high ground.
I think there should be 10x more hardcore AI denialists and doomers to offset the obnoxiousness and absurdity of the other side. As usual, the reality is somewhere in the middle, perhaps slightly on the denialist side, but the pro-AI crowd has completely lost the plot.
I'm all for well founded arguments against premature AGI empowerment..
But what I'm replying to, and the vast majority of the AI denial I see, is rooted in a superficial, defensive, almost aesthetic knee jerk rejection of unimportant aspects of human taste and preference.
The article does not fit the description of blind AI denialism, though. The author even acknowledges that the tool can be useful. It makes a well articulated case that by not putting any thought into your work and words, and allowing the tool to do the thinking for you, the end product is boring. You may agree or disagree with this opinion, but I think the knee jerk rejection is coming from you.
I want to say this is even more true at the C-suite level. Great, you're all-in on racing to the lowest common denominator AI-generated most-likely-next-token as your corporate vision, and want your engineering teams to behave likewise.
At least this CEO gets it. Hopefully more will start to follow.
Sorry to hijack this thread to promote but I believe it's for a good and relevant cause: directly identifying and calling out AI writing.
I was literally just working on a directory of the most common tropes/tics/structures that LLMs use in their writing and thought it would be relevant to post here: https://tropes.fyi/
Very much inspired by Wikipedia's own efforts to curb AI contributions: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
Lmk if you find it useful, will likely ShowHN it once polished.
This is a very bad idea unless you have 100% accuracy in identifying AI generated writing, which is impossible. Otherwise, your tool will be more-often used to harass people who use those tropes organically without AI.
This behavior has already been happening with Pangram Labs which supposedly does have good AI detection.
I agree with the risks. However the primary goal of the site is educational not accusatory. I mostly want people to be able to recognise these patterns.
The WIP features measure breadth and density of these tropes, and each trope has frequency thresholds. Also I don't use AI to identify AI writing to avoid accusatory hallucinations.
I do appreciate the feedback though and will take it into consideration.
> However the primary goal of the site is educational not accusatory.
How then is it different from the Wikipedia page you linked?
The obvious question is: was it vibe coded? :)
As much as I'd like to know whether a text was written by a human or not, I'm saddened by the fact that some of these writing patterns have been poisoned by these tools. I enjoy, use, and find many of them to be an elegant way to get a point across. And I refuse to give up the em dash! So if that flags any of my writing—so be it.
Absolutely vibe coded, I'm sure I disclosed it somewhere on the site. As much as I hate using AI for creative endeavours I have to agree that it excels as nextjs/vercel/quick projects like this. I was mostly focused on the curation of the tropes and examples.
Believe me I've had to adjust my writing a lot to avoid these tells, even academics I know are second guessing everything they've ever been taught. It's quite sad but I think it will result in a more personable internet as people try to distinguish themselves from the bots.
> It's quite sad but I think it will result in a more personable internet as people try to distinguish themselves from the bots.
I applaud your optimism, but I think the internet is a lost cause. Humans who value communicating with other humans will need to retreat into niche communities with zero tolerance for bots. Filtering out bot content will likely continue to be impossible, but we'll eventually settle on a good way to determine if someone is human. I just hope we won't have to give up our privacy and anonymity for it.
This aligns with an article I titled "AI can only solve boring problems"[0]
Despite the title I'm a little more optimistic about agentic coding overall (but only a little).
All projects require some combination of "big thinking" and tedious busywork. Too much busywork is bad, but reducing it to 0 doesn't necessarily help. I think AI can often reduce the tedious busywork part, but that's only net positive if there was an excess of it to begin with - so its value depends on the project / problem domain / etc.
[0]: https://www.da.vidbuchanan.co.uk/blog/boring-ai-problems.htm...
Along these same lines, I have been trying to become better at knowing when my work could benefit from reversion to the "boring" and general mean and when outsourcing thought or planning would cause a reversion to the mean (downwards).
This echos the comments here about enjoying not writing boilerplate. The there is that our minds are programmed to offload work when we can and redirecting all the saved boilerplate to going even deeper on parts of the problem that benefit from original hard thinking is rare. It is much easier to get sucked into creating more boilerplate, and all the gamification of Claude code and incentives of service providers increase this.
> I don't actually mind AI-aided development, a tool is a tool and should be used if you find it useful, but I think the vibe coded Show HN projects are overall pretty boring.
Ironically, good engineering is boring. In this context, I would hazard that interesting means risky.
I vibe code little productivity apps that would take me months to make and now I can make them in a few days. But tbh talking to Google's Gemini is like taking to a drunk programmer; while solving one bug it introduces another bug and we fight back and forth until it realizes what and how needs to be fixed.
> Original ideas are the result of the very work you’re offloading on LLMs.
I largely agree that if someone put less work in making a thing, than it takes you to use it, it's probably not going to be useful. But I disagree with the premise that using LLMs will make you boring.
Consider the absurd version of the argument. Say you want to learn something you don't know: would using Google Search make you more boring? At some level, LLMs are like a curated Google Search. In fact if you use Deep Research et al, you can consume information that's more out of distribution than what you _would_ have consumed had you done only Google Searches.
> Ideas are then further refined when you try to articulate them. This is why we make students write essays. It’s also why we make professors teach undergraduates. Prompting an AI model is not articulating an idea.
I agree, but the very act of writing out your intention/problem/goal/whatever can crystallize your thinking. Obviously if you are relying on the output spat out by the LLM, you're gonna have a bad time. But IMO one of the great things about these tools is that, at their best, they can facilitate helpful "rubber duck" sessions that can indeed get you further on a problem by getting stuff out of your own head.
This is too broad of a statement to possibly be true. I agree with aspects of the piece. But it's also true that not every aspect of the work offloaded to AI is some font of potential creativity.
To take coding, to the extent that hand coding leads to creative thoughts, it is possible that some of those thoughts will be lost if I delegate this to agents. But it's also very possibly that I now have the opportunity to think creatively on other aspects of my work.
We have to make strategic decisions on where we want our attention to linger, because those are the places where we likely experience inspiration. I do think this article is valuable in that we have to be conscious of this first before we can take agency.
Back in my day, boring code was celebrated.
If you spent 3 hours on a show HN before, people most likely wouldn't appreciate it, as it's honestly not much to show. The fact that you now can have a more polished product in the same timeframe thanks to AI doesn't really change that. It just changes the baseline for what's expected. This goes for other things as well, like writings or art. If you normally spent 2 hours on a blog post, and you now can do it in 5 minutes, that most likely means it's a boring post to read. Spend 2 hours still, just with the help of AI it should now be better.
This is a great way to think about it. Put in the same effort and get farther.
AI is a bicycle, not a motorcycle.
AI is great for getting my first bullet points into paragraph form, rephrasing/seeing different ways of writing to keep me moving when I’m stuck, and just general copy-editing (grammar really). Like you said it generally doesn’t save me a ton of time, but I get a quality copy done maybe a little bit faster and I find it just keeps me working on something rather than constantly stopping and starting when I hit a mental wall. Sometimes I just need/wanf to get it done, and for that LLM’s can be great.
Just this week I made a todo list app and a fitness tracking app and put them both on the App Store. What did you make?
If actually making something with AI and showing it to people makes you boring ... imagine how boring you are when you blog about AI, where at most you only verbally describe some attributes of what AI made for you, if anything.
The article nails it but misses the flip side. AI doesn't make you boring, it reveals who was already boring. The people shipping thoughtless Show HN projects with Claude are the same people who would have shipped thoughtless projects with Rails scaffolding ten years ago. The tool changed, the lack of depth didn't.
Anecdotally, I haven't been excited by anything published on show HN recently (with the exception being the barracuda compiler). I think it's a combination of what the author describes: surface-level solutions and projects mostly vibe-coded whose authors haven't actually thought that hard about what real problem they are solving.
This resonates with what I’m seeing in B2B outreach right now. AI has lowered the cost of production so much that 'polished' has become a synonym for 'generic.' We’ve reached a point where a slightly messy, hand-written note has more value than a perfectly structured AI essay because the messiness is the only remaining signal of actual human effort.
Look at the world Google is molding.
Here's a guy who has had an online business dependent on ranking well in organic searches for ~20 years and has 2.5 million subs on YouTube.
Traffic to his site was fine to sustain his business this whole time up until about 2-3 years where AI took over search results and stopped ranking his site.
He used Google's AI to rewrite a bunch of his articles to make them more friendly towards what ranks nowadays and he went from being ghosted to being back on the top of the first page of results.
He told his story here https://www.youtube.com/watch?v=II2QF9JwtLc.
NOTE: I've never seen him in my YouTube feed until the other day but it resonated a lot with me because I have a technical blog for 11 years and was able to sustain an online business for a decade until the last 2 years or so. Traffic to my site nose dived. This translates to a very satisfying lifestyle business to almost $0. I haven't gone down the path of rewriting all of my posts with AI to remove my personality yet.
Search engines want you to remove your personal take on things and write in a very machine oriented / keyword stuffed way.
> Prompting an AI model is not articulating an idea. You get the output, but in terms of ideation the output is discardable. It’s the work that matters.
This is reductive to the point of being incorrect. One of the misconceptions of working with agents is that the prompts are typically simple: it's more romantic to think that someone gave Claude Code "Create a fun Pokemon clone in the web browser, make no mistakes" and then just ship the one-shot output.
As some counterexamples, here are two sets of prompts I used for my projects which very much articulate an idea in the first prompt with very intentional constraints/specs, and then iterating on those results:
https://github.com/minimaxir/miditui/blob/main/agent_notes/P... (41 prompts)
https://github.com/minimaxir/ballin/blob/main/PROMPTS.md (14 prompts)
It's the iteration that is the true engineering work as it requires enough knowledge to a) know what's wrong and b) know if the solution actually fixes it. Those projects are what I call super-Pareto: the first prompt got 95% of the work done...but 95% of the effort was spent afterwards improving it, with manual human testing being the bulk of that work instead of watching the agent generated code.
It is a good theory, but does it hold up in practice? I was able to prototype and thus argue for and justify building exe.dev with a lot of help from agents. Without agents helping me prove out ideas I would be doing far more boring work.
AI is a mirror. If you are boring, you will use AI in a boring way.
"You don’t get build muscle using an excavator to lift weights. You don’t produce interesting thoughts using a GPU to think." Great line!
Just earlier I received a spew of LLM slop from my manager as "requirements". He clearly hadn't even spent two minutes reviewing whether any of it made sense, was achievable or even desirable. I ignored it. We're all fed up with this productivity theatre.
"productivity theatre" is a brilliant phrase. Thank you!
I land on this thread to ctrl-f "taste" and will refresh and repeat later
That is for sure the word of the year, true or not. I agree with it, I think
it was up to 3 when I first posted
it's at 10 now. note: the article does not say "taste" once
OK, but maybe we only notice the mediocre uses of AI, while the smart uses come across as brilliant people having interesting insights.
> OK, but maybe we only notice the mediocre uses of AI, while the smart uses come across as brilliant people having interesting insights.
Then prove it. Otherwise, you're just assuming AI use must be good, and making up things to confirm your bias.
i think about this a lot wit respect to AI-generated art. calling something "derivative" used to be a damning criticism. now, we've got tools whose whole purpose is to make things that are very literally derivative of the work that has come before them.
derivative work might be useful, but it's not interesting.
I'm self-aware enough to know that AI is not the reason I'm boring.
Jokes on you, I was boring before AI.
I think it's simpler than that. AI, like the internet, just makes it easier to communicate boring thoughts.
Boring thoughts always existed, but they generally stayed in your home or community. Then Facebook came along, and we were able to share them worldwide. And now AI makes it possible to quickly make and share your boring tools.
Real creativity is out there, and plenty of people are doing incredibly creative things with AI. But AI is not making people boring—that was a preexisting condition.
The headline should be qualified: Maybe it makes you boring compared to the counterfactual world where you somehow would have developed into an interesting auteur or craftsman instead, which few people in practice would do.
As someone who is fairly boring, conversing with AI models and thinking things through with them certainly decreased my blandness and made me tackle more interesting thoughts or projects. To have such a conversation partner at hand in the first place is already amazing - isn't it always said that you should surround yourself with people smarter than yourself to rise in ambition?
I actually have high hopes for AI. A good one, properly aligned, can definitely help with self-actualization and expression. Cynics will say that AI will all be tuned to keep us trapped in the slop zone, but when even mainstream labs like Anthropic speak a lot about AI for the betterment of humanity, I am still hopeful. (If you are a cynic who simply doesn't belief such statements by the firms, there's not much to say to convince you anyway.)
In other words, AI raises the floor. If you were already near the ceiling, relying on it can (and likely will) bring you down. In areas where raising the floor is exceptionally good value (such as bespoke tools for visualizing data, or assistants that intelligently write code boilerplate, or having someone to speak to in a foreign language as opposed to talking to the wall), AI is amazing. In areas where we expect a high bar, such as an editorial, a fast and reliable messaging library, or a classic novel, it's not nearly as useful and often turns out to be a detriment.
> As someone who is fairly boring
As determined by whom?
> conversing with AI models and thinking things through with them certainly decreased my blandness
Again, determined by whom?
I’m being genuine. Are those self-assessments? Because those specific judgement are something for other people to decide.
I think I can observe the world and my relative state therein, no? I know I am unfortunately less ambitious, driven and outgoing than others, which are commonly associated with being interesting. And I don't complain about it, the word has a meaning after all and I'll not delude myself into changing its definition.
Definitely at a certain threshold it is for others to decide what is boring and not, I agree with that.
In any case, my simple point is that AI can definitely raise the floor, as the other comment more succinctly expressed. Irrelevant for people at the top, but good for the rest of us.
> I think I can observe the world and my relative state therein, no?
Yes, to an extent. You can, for example, evaluate if you’re sensitive or courageous or hard working. But some things do not concern only you, they necessitate another person, such as being interesting or friendly or generous.
A good heuristic might be “what could I not say about myself if I were the only living being on Earth?”. You can still be sensitive or hard working if you’re alone, but you can’t be friendly because there’s no one else to be friendly to.
Technically you could bore yourself, but in practice that’s something you do to other people. Furthermore, it is highly subjective, a D&D dungeon master will be unbearably boring to some, and infinitely interesting to others.
> I know I am unfortunately less ambitious, driven and outgoing than others
I disagree those automatically make someone boring.
I also disagree with LLMs improving your situation. For someone to find you interesting, they have to know what makes you tick. If what you have to share is limited by what everyone else can get (by querying an LLM), that is boring.
Brilliant. You put into words something that I've thought every time I've seen people flinging around slop, or ideating about ways to fling around slop to "accelerate productivity"...
Slop is probably more accurate than boring. LLM assisted development enables output and speed. In the right hands, it can really bring improvements to code quality or execution. In the wrong hands, you get slop.
Otherwise, AI definitely impacts learning and thinking. See Anthropic's own paper: https://www.anthropic.com/research/AI-assistance-coding-skil...
Or.. Only boring people use AI.
AI also makes you bored.
I think this is generally a good point if you're using an AI to come up with a project idea and elaborate it.
However, I've spent years sometimes thinking through interesting software architectures and technical approaches and designs for various things, including window managers, editors, game engines, programming languages, and so on, reading relevant books and guides and technical manuals, sketching out architecture diagrams in my notebooks and writing long handwritten design documents in markdown files or in messages to friends. I've even, in some cases, gotten as far as 10,000 lines or so of code sketching out some of the architectural approaches or things I want to try to get a better feel for the problem and the underlying technologies. But I've never had the energy to do the raw code shoveling and debug looping necessary to get out a prototype of my ideas — AI now makes that possible.
Once that prototype is out, I can look at it, inspect it from all angles, tweak it and understand the pros and cons, the limitations and blind spots of my idea, and iterate again. Also, through pair programming with the AI, I can learn about the technologies I'm using through demonstration and see what their limitations and affordances are by seeing what things are easy and concise for the AI to implement and what requires brute forcing it with hacks and huge reams of code and what's performant and what isn't, what leads to confusing architectures and what leads to clean architectures, and all of those things.
I'm still spending my time reading things like Game Engine Architecture, Computer Systems, A Philosophy of Software Design, Designing Data-Intensive Applications, Thinking in Systems, Data-Oriented Design, articles in CSP, fibers, compilers, type systems, ECS, writing down notes and ideas.
So really it seems more to me like boring people who aren't really deeply interested in a subject use AI to do all of the design and ideation for them. And so, of course, it ends up boring and you're just seeing more of it because it lowered the barrier to entry. I think if you're an interesting person with strong opinions about what you want to build and how you want to build it, that is actually interested in exploring the literature with or with out AI help and then pair programming with it in order to explore the problem space, it still ends up interesting.
Most of my recent AI projects have just been small tools for my own usage, but that's because I was kicking the tires. I have some bigger things planned, executing on ideas I have pages and pages, dozens of them, in my notebooks about.
Honestly, most people are boring. They have boring lives, write boring things, consume boring content, and, in the grand scheme of things, have little-to-no interesting impact on the world before they die. We don't need AI to make us boring, we're already there.
I’ve been bashing my head against the wall with AI this week because they’ve utterly failed to even get close to solving my novel problems.
And that’s when it dawned on me just how much of AI hype has been around boring, seen-many-times-before, technologies.
This, for me, has been the biggest real problem with AI. It’s become so easy to churn out run-of-the-mill software that I just cannot filter any signal from all the noise of generic side-projects that clearly won’t be around in 6 months time.
Our attention is finite. Yet everyone seems to think their dull project is uniquely more interesting than the next persons dull project. Even though those authors spent next to zero effort themselves in creating it.
It’s so dumb.
> AI models are extremely bad at original thinking, so any thinking that is offloaded to a LLM is as a result usually not very original, even if they’re very good at treating your inputs to the discussion as amazing genius level insights.
This is repeated all the time now, but it's not true. It's not particularly difficult to pose a question to an LLM and to get it to genuinely evaluate the pros and cons of your ideas. I've used an LLM to convince myself that an idea I had was not very good.
> The way human beings tend to have original ideas is to immerse in a problem for a long period of time, which is something that flat out doesn’t happen when LLMs do the thinking. You get shallow, surface-level ideas instead.
Thinking about a problem for a long period of time doesn't bring you any closer to understanding the solution. Expertise is highly overrated. The Wright Brothers didn't have physics degrees. They did not even graduate from high school, let alone attend college. Their process for developing the first airplanes was much closer to vibe coding from a shallow surface-level understanding than from deeply contemplating the problem.
Have to admit I'm really struggling with the idea that the Wright brothers didn't do much thinking because they were self taught, never mind the idea that figuring out aeronautics from reading every publication they could get their hands on, intuiting wing warping and experimenting by hand-building mechanical devices looks much like asking Claude to make a CRUD app...
That's not what I'm saying. My point is that expertise, as in, credentials, institutional knowledge, accepted wisdom, was actively harmful to solving flight. The Wrights succeeded because they built a tool that made iteration cheap (the wind tunnel), tested 200 wing shapes without deference to what the existing literature said should work (Lilienthal's tables were wrong and everyone with "expertise" accepted them uncritically), and they closed the loop with reality by actually flying.
That's the same approach as vibe coding. Not "asking Claude to make a CRUD app.", but using it to cheaply explore solution spaces that an expert's priors would tell you aren't worth trying. The wind tunnel didn't do the thinking for the Wrights, it just made thinking and iterating cheap. That's what LLMs do for code.
The blog post's argument is that deep immersion is what produces original ideas. But what history shows is that deeply immersed experts are often totally wrong and the outsiders who iterate cheaply and empirically take the prize. The irony here is that LLM haters feel it falls victim to the Einstellung effect [1]. But the exact opposite is true: LLMs make it so cheap to iterate on what we thought in the past were suboptimal/broken solutions, which makes it possible to cheaply discover the more efficient and simpler methods, which means humans uniquely fall victim to the Einstellung effect whereas LLMs don't.
[1]: https://en.wikipedia.org/wiki/Einstellung_effect
The Wright brothers were deeply immersed experts in flight like only a handful of other people other people at the time, and were obsessed with Lilenthal's tables to the point they figured out how aerodynamic errors killed Lilenthal.
The blog's actual point isn't some deference to credentials straw man you've invented, it's that stuff lazily hashed together that's got to "good enough" without effort is seldom as interesting as people's passion projects. And the Wright brothers' application of their hardware assembly skills and the scientific method to theory they'd gone to great lengths to get sent to Dayton Ohio is pretty much the antithesis of getting to "good enough" without effort. Probably nobody around the turn of the century devoted more thinking and doing time to powered flight
Using AI isn't necessary or sufficient for getting to "good enough" without much effort (and it's of course possible to expend lots of effort with AI), but it does act as a multiplier for creating passable stuff with little thought (to an even greater extent than templates and frameworks and stock photos). And sure, a founding tenet of online marketing (and YC) from long before Claude is that many small experiments to see if $market has any takers might be worth doing before investing thinking time in understanding and iterating, and some people have made millions from it, but that doesn't mean experiments stitched together in a weekend mostly from other people's parts aren't boring to look at or that their hit rate won't be low....
I've actually ran into few blogs that were incredibly shallow while sounding profound.
I think when people use AI to ex: compare docker to k8s and don't use k8s is how you get horrible articles that sound great, but to anyone that has experience with both are complete nonsense.
AI is group think. Group think makes you boring. But then the same can be said about mass culture. Why do we all know Elvis, Frank Sinatra, Marilyn Monroe, the Beatles, etc? when there were countless others who came before them and after them? Because they happened to emerge at the right time in our mass culture.
Imagine how dynamic the world was before radio, before tv, before movies, before the internet, before AI? I mean imagine a small town theater, musician, comedian, or anything else before we had all homogenized to mass culture? It's hard to know what it was like but I think it's what makes the great appeal of things like Burning Man or other contexts that encourage you to tune out the background and be in the moment.
Maybe the world wasn't so dynamic and maybe the gaps were filled by other cultural memes like religion. But I don't know that we'll ever really know what we've lost either.
How do we avoid group think in the AI age? The same way as in every other age. By making room for people to think and act different.
When apps were expensive to build , developers at least had the excuse that they were too busy to build something appealing. Now they can cope by pretending to be an artisanal hand-built software engineer, and still fail at making anything appealing.
If you want to build something beautiful, nothing is stopping you, except your own cynicism.
"AI doesn't build anything original". Then why aren't you proving everyone wrong? Go out there and have it build whatever you want.
AI has not yet rejected any of my prompts by saying I was being too creative. In fact, because I'm spending way less time on mundane tasks, I can focus way more time on creativity , performance, security and the areas that I am embarrassed to have overlooked on previous projects.
Bro, I'm a software developer, it's not the fucking AI making me boring.
Also sounds likely that it's the mediocre who gravitate to AI in the first place.
Setting aside the marvelous murk in that use of "you," which parenthetically I would be happy to chat about ad nauseum,
I would say this is a fine time to haul out:
Ximm's Law: every critique of AI assumes to some degree that contemporary implementations will not, or cannot, be improved upon.
Lemma: any statement about AI which uses the word "never" to preclude some feature from future realization is false.
Lemma: contemporary implementations have already improved; they're just unevenly distributed.
I can never these days stop thinking about the XKCD the punchline of which is the alarmingly brief window between "can do at all" and "can do with superhuman capacity."
I'm fully aware of the numerous dimensions upon which the advancement from one state to the other, in any specific domain, is unpredictable, Hard, or less likely to be quick... but this the rare case where absent black swan externalities ending the game, line goes up.
"every critique of AI assumes to some degree that contemporary implementations will not, or cannot, be improved upon."
They're token predictors. This is inherently a limited technology, which is optimized for making people feel good about interacting with it.
There may be future AI technologies which are not just token predictors, and will have different capabilities. Or maybe there won't be. But when we talk about AI these days, we're talking about a technology with a skill ceiling.
Try the formulation "anything about AI is boring."
Whether it is the guy reporting on his last year of agentic coding (did half-baked evals of 25 models that will be off the market in 2 years) or Steve Yegge smoking weed and gaslighting us with "Gas Town" or the self-appointed Marxist who rails against exploitation without clearly understanding what role Capitalism plays in all this, 99% of the "hot takes" you see about AI are by people who don't know anything valuable at all.
You could sit down with your agent and enjoy having a coding buddy, or you could spend all day absorbed in FOMO reading long breathless posts by people who know about as much as you do.
If you're going to accomplish something with AI assistants it's going to be on the strength of your product vision, domain expertise, knowledge of computing platforms, what good code looks like, what a good user experience feels like, insight into marketing, etc.
Bloggers are going to try to convince you there is some secret language to write your prompts in, or some model which is so much better than what you're using but this is all seductive because it obscures the fact that the "AI skills" will be obsolete in 15 minutes but all of those other unique skills and attributes that make you you are the ones that AI can put on wheels.
Yet another boring, repetitive, unhelpful article about why AI is bad. Did the 385th iteration of this need to be written by yet another person? Why did this person think it was novel or relevant to write? Did they think it espouses some kind of unique point of view?
Another click bait title produced by a human. Most of your premises could be easily be countered. Every comment is essentially an example.
Isnt this just flat out untrue since bots can pass turing tests
People are often boring in conversation. Therefore, an AI agent doesn't need to be interesting to seem human enough in a Turing test.
"There is considerable overlap between the intelligence of the smartest bears and the dumbest tourists."
The turing test isn't designed to select for the most interesting individuals. Some people are less interesting than other people. If the machine is acting similar to a boring human, it can make you more boring, and still pass the test.
The bot doesn’t pass, the human fails.
I mean, can't you just… prompt engineer your way out of this? A writer friend of mine literally just vibes with the model differently and gets genuinely interesting output.
I think the point of the article is that on sites like HN, people used to need domain expertise to answer questions. Their answer was based on unique experience, and even if maybe it wasn't optimal it was unique. Now a lot of people just check chatgpt and answer the question without actually knowing what they're talking about. Worse the bar to submit something to Show HN has gotten lower, and people are vibe coding projects in an afternoon nobody wants or cares about. I don't think the article is really about writing style
Meh.
Being 'anti AI' is just hot right now and lots of people are jumping on the bandwagon.
I'm sure some of them will actually hold out. Just like those people still buying Vinyl because Spotify is 'not art' or whatever.
Have fun all, meanwhile I built 2 apps this weekend purely for myself. Would've taken me weeks a few years ago.
People who agree with you are right-minded and sensible, and people who disagree with you are "jumping on the bandwagon".
I was onboard with the author until this paragraph:
> AI models are extremely bad at original thinking, so any thinking that is offloaded to a LLM is as a result usually not very original, even if they’re very good at treating your inputs to the discussion as amazing genius level insights.
The author comes off as dismissive of the potential benefits of the interactions between users and LLMs rather than open-minded. This is a degree of myopia which causes me to retroactively question the rest of his conclusions.
There's an argument to be made that rubber ducking and just having a mirror to help you navigate your thoughts is ultimately more productive and provides more useful thinking than just operating in a vacuum. LLMs are particularly good at telling you when your own ideas are un-original because they are good at doing research (and also have median of ideas already baked into their weights).
They also strawman usage of LLMs:
> The way human beings tend to have original ideas is to immerse in a problem for a long period of time, which is something that flat out doesn’t happen when LLMs do the thinking. You get shallow, surface-level ideas instead.
Who says you aren't spending time thinking about a problem with LLMs? The same users that don't spend time thinking about problems before LLMs will not spend time thinking about problems after LLMs, and the inverse is similarly true.
I think everybody is bad at original thinking, because most thinking is not original. And that's something LLMs actually help with.