The gist ("product becomes a black box") applies to any abstraction. It could apply to high-level languages (including so-called "low-level" languages like C), and few people criticize those.
But LLMs are particularly insidious because they're a particularly leaky abstraction. If you ask an LLM to implement something:
- First, there's only a chance it will output something that works at all
- Then, it may fail on edge cases
- Then, unless it's very trivial, the code will be spaghetti, so neither you nor the LLM can extend it
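A toy illustration of the edge-case point (my own sketch, not from the thread): a plausible first draft of a median function, of the kind an LLM might produce. It works on the happy path but returns the wrong answer for even-length lists and crashes on empty input; the second version handles the edges explicitly.

```python
# A plausible "first draft": looks right, works on the demo input.
def median_naive(xs):
    xs = sorted(xs)
    return xs[len(xs) // 2]  # wrong for even-length lists, crashes on []

# A version that handles the edge cases explicitly.
def median(xs):
    if not xs:
        raise ValueError("median of empty sequence")
    xs = sorted(xs)
    mid = len(xs) // 2
    if len(xs) % 2:
        return xs[mid]
    return (xs[mid - 1] + xs[mid]) / 2

print(median_naive([3, 1, 2]))  # 2 -- looks fine
print(median_naive([4, 1, 3, 2]))  # 3 -- silently wrong; true median is 2.5
print(median([4, 1, 3, 2]))  # 2.5
```

The naive version passes a casual test and only betrays you later, which is exactly the failure mode that makes the abstraction leaky.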
Compare a language like C: you can't reconstruct the original source from the assembly, but the assembly is almost certainly correct. When GCC or Clang does fail, only an expert can figure out why, but that happens rarely enough that there's always an expert available to look at it.
Even if LLMs get better, English itself is a bad programming language, because it's imprecise and not modular. Tasks like "style a website exactly how I want" or "implement this complex algorithm" can't be described without inventing jargon (or being even more verbose), at which point you'd spend less effort, and write less, using a real programming language.
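A small illustration of that verbosity point (my own sketch, not from the comment): even a simple data transformation takes a full English sentence to pin down, while the code form is shorter and unambiguous.

```python
# The English spec: "from the inputs, keep the even numbers, square them,
# drop any square of 100 or more, and list the rest from largest to smallest."
# The same spec as code:
def spec(xs):
    return sorted((x * x for x in xs if x % 2 == 0 and x * x < 100),
                  reverse=True)

print(spec([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]))  # [64, 36, 16, 4]
```

Every ambiguity the English version leaves open (strictly under 100? duplicates? ordering?) is settled by the code for free.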
If people end up producing all code (or art) with AI, it won't be through prompts, but through fancy (perhaps project-specific) GUIs, if not brain interfaces.
I agree; there is a reason we settled on programming languages as an interface for instructing the machine. Ultimately a language is a tool for expressing our thoughts as precisely as possible in a certain domain.
People who don't understand the tools they use are doomed to reinvent them.
Perhaps the interface will evolve into pseudocode, where the AI fills in anything undefined, and the boilerplate, with best estimates.
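A speculative sketch of what that "pseudocode plus AI fill-in" idea might look like (entirely my own invention; the `Invoice` type, the `<AI-FILL>` marker, and the fill-in itself are hypothetical): the human pins down the interface and the invariants precisely, and only the marked hole would be left for a model to fill. Here the hole is filled by hand so the sketch runs.

```python
from dataclasses import dataclass

@dataclass
class Invoice:
    subtotal: float   # pre-tax amount, must be >= 0
    tax_rate: float   # e.g. 0.19 for 19%

def total(invoice: Invoice) -> float:
    """Return subtotal plus tax, rounded to cents."""
    # Invariant fixed by the human author, not left to the model:
    assert invoice.subtotal >= 0
    # <AI-FILL> -- boilerplate a model would generate; hand-written here:
    return round(invoice.subtotal * (1 + invoice.tax_rate), 2)

print(total(Invoice(subtotal=100.0, tax_rate=0.19)))  # 119.0
```

The point of the division of labor is that the precise parts (types, invariants, rounding rule) stay human-authored, while only the mechanical body is estimated.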
There are plenty of alternatives to programming through written language; it's just that commercialization took the first thing that was available and exploited it. The same thing is happening with AI. Many innovations are doomed to become cursed. Computers are cursed. Medicine is cursed. Science is cursed. Everything sucks, and that's just the way it is.
After the war when the dust settles, we'll start over. We might escape the death of the sun, but not the heat death of the universe. Therefore, none of the above matters and I'm rambling.
I love the framing here.
However, I think what a lot of people don't realize is that the reason many executives and business users are excited about AI, and don't mind developers getting replaced, is that the product is already a black box.
I really like the framing here (via Richard Sennett / Roland van der Vorst): craft is a relationship with the material. In software, that “material consciousness” is built by touching the system—writing code, feeling the resistance of constraints, refactoring, modeling the domain until it clicks.
If we outsource the whole “hands that think” loop to agents, we may ship faster… but we also risk losing the embodied understanding that lets us explain why something is hard, where the edges are, and how to invent a better architecture instead of accepting “computer says no.”
I hope we keep making room for “luxury software”: not in price, but in care—the Swiss-watch mentality. Clean mechanisms, legible invariants, debuggable behavior, and the joy of building something you can trust and maintain for years. Hacker News needs more of that energy.
> This is a great insight. For software engineers coding is the way to fully grasp the business context.
> By programming, they learn how the system fits together, where the limits are, and what is possible. From there they can discover new possibilities, but also assess whether new ideas are feasible.
Maybe I have a different understanding of "business context", but I would argue the opposite. AI tools let me spend much more time on the business impact of features: thinking through edge cases, talking with stakeholders, talking with the project/product owners. Often there are features that stakeholders had dismissed because they seemed complex and difficult in the past, but that are much easier now with faster coding.
Code was almost never the limiting factor before. It's the business that is the limit.
For the average developer this might be true. Excellent developers, though, spend a lot of time thinking about the edge cases and ensuring that the system/code is supportable: it explains what the problems are so that issues can be resolved quickly, the code is written in a style that protects against other parts of the system failing, and so on. This isn't done in average code, and I don't see AI doing it at all.
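A sketch of the "supportable code" habits described above (my own example, not from the comment): the error messages explain the problem so an issue can be resolved quickly, and the function defends itself against bad input arriving from elsewhere in the system instead of failing obscurely later.

```python
def load_port(config: dict) -> int:
    """Read a TCP port from a config dict, failing loudly and descriptively."""
    raw = config.get("port")
    if raw is None:
        raise KeyError("config is missing 'port'; check the deployment config")
    try:
        port = int(raw)
    except (TypeError, ValueError):
        raise ValueError(f"'port' must be an integer, got {raw!r}") from None
    if not (1 <= port <= 65535):
        raise ValueError(f"'port' out of range 1-65535: {port}")
    return port

print(load_port({"port": "8080"}))  # 8080
```

The happy path is one line; the rest exists so that whoever is on call at 3 a.m. reads a message that names the problem, not a bare `TypeError` three stack frames away.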
> feeling resistance
In a software context, I wonder what the impact of the language used is on the sense of "resistance"?
To be honest, the same applies when a developer gets promoted to team lead. I experienced this myself: I was no longer in touch with the code being written. The reasons are slightly different (for me it was a lack of time and documentation).
I am a scientist who rarely collaborates (unlike programmers and unlike most scientists).
When I wrote a paper in collaboration some time ago, it felt very weird to have large parts of the paper that I had only superficial knowledge of (incidentally, I had retyped everything my co-author did, but in my own notation), with no deep knowledge of how the results were obtained or of the difficulties encountered. I guess this is how people who started vibe coding must feel.
If your computer says No just ditch it and buy a non-Apple computer.
One with Windows 11?
Do you really think Windows 11 doesn't try to control the user, or say no to them?
Are you sure you can't think of a commonly used operating system which doesn't?
Name ends with "ux", or maybe "BSD"?
Chill, it was an obvious tongue-in-cheek comment
mine too, calm down
ITT: people who didn't see Little Britain
To an extent, AI can help with explaining what the code does. So the computer says why it says "no".
This isn't the right way to use AI to write code at work. It shouldn't become a black box: make the process iterative, be precise about architecture, and guide the AI carefully so the code stays in a state a human can instantly drop in on if needed. Use AI as a keyboard extension, not a blindfold.