Interestingly, I’ve learned more about the languages, systems, and tools I use in the last few years of working with agentic coding than I did in 35 years of artisanal programming. I am still vastly superior to the agentic tools at making decisions about systems, techniques, and approaches, but they are like a really, really well-read intern who knows a great deal of detail about errata but has very little experience. They enthusiastically make mistakes but take feedback - at least up front - even if they often forget, because they don’t totally understand it and haven’t internalized it.
The claim that you should know everything about everything you work on is an intensely naive one. If you’ve worked on a team of more than one, there’s a lot of stuff you don’t totally grok. If you work in an old code base, almost every bit of it is unfamiliar. If you work in a massive monorepo built over decades, you’re lucky if you even understand the parts everyone considers you an expert in.
I often get the impression that folks making these claims are either very junior themselves, work basically alone, or have worked on one project for 20 years. No one who works in a team or a larger org can claim they know everything in their code base. No one doing agentic programming can either. But I can at least ask the agent a question and it will be able to answer it. And after reading other people’s code for most of my adult life, I absolutely can read the LLM’s. The fact that a machine rather than a human wrote the crappy code bothers me not in the least, and at least the machine will take my feedback and act on it.
Agreed. I don't know anything about turning sand into transistors, or about assembly, but I do well anyway. So I don't know my full stack either.
What is important is not being afraid to learn the rest of your system and keeping an index.
Most importantly, it's about being able to spin up on anything quickly. That's how you have wide reach. Digging in when you have to, gliding high when you can. The appropriate level for the problem at hand.
When I was in college eons ago, they taught CS folks all of engineering. "When do I need to know chem-e or analog control systems?" we asked. "You won't. You just need to be able to spin up on it enough to code it and then forget it. We're providing you a strong base."
That holds even within just large code bases.
This post does not make the claim that "you should know everything about everything you work on" - it's making the claim that writing code and being able to read code effectively are intrinsically linked.
I think of it as the driver's seat vs. the back seat vs. the passenger seat. Always take the back seat and eventually you will forget how to drive. Insist on always being in the front seat and you will miss out on the occasions where the LLM happens to know the area very well, like working with an unfamiliar library or problem domain. If it is a place you are just passing through, it's great to let it take the wheel and see where it takes you. If it is a place you need to become familiar with, it's great to have a dependable navigator beside you.
My sense is that a decade from now, the people who generally see their place as the driver's seat, but recognize when it's not, are going to be writing the code that matters.
I tend to think of it like the two pilots on a commercial airliner: you always have one pilot flying and one pilot monitoring.
With agentic coding you can debate who is flying and who is monitoring, but if we assume the user is monitoring, what that means in practice, for me, is that I'm reading and making sure I understand all the changes the agent is proposing to make. That includes reading and understanding all the code.
This is how I feel about things. It's like someone is demanding that I become a manager when I was perfectly happy being an IC. And now I have to figure out how to be a manager of AI agents while not losing my ability to judge their work or plan effectively, even though I'm not supposed to be doing things "by hand" anymore. But doing things "by hand" is how I reasoned through problems and figured out the plan to begin with.
The funny thing is, once the LLMs got mostly good enough for me in November 2025, it was mind-boggling how much they helped me get stuff out of my head with ease.
It's easier for me to code now, because it's like I have a 24/7 insane intern who needs to be supervised via pair programming but also understands most topics enough to be useful/dangerous.
Ironically, I've been spending much of my time iterating on ways to improve model reasoning and reliability, and aside from the challenge of benchmark design, I've had some pretty good success!
My fork of omp, https://github.com/cartazio/oh-punkin-pi , has a bunch of my ideas layered on top. Ultimately it's just a bridge until I've finished building the proper 2nd-gen harness, with some other really cool stuff folded in. Not sure if there's a business opportunity in a hosted version of what I've got planned, but the changes in my forks have made enough of a difference that I can see the difference in per-model reasoning.
The thing is, the code quality is still ultimately up to you.
Nothing is stopping you from iterating with the agent until the code is the exact same quality you yourself would write.
Sure, but then it's not really saving you time, is it?
Comment deleted because it was backwards
But you’d have that if you were coding it yourself…
Actually, ignore my comment - I misunderstood the premise. I meant that not vibe coding is the way to save time with production issues, not the other way around!
IMO it still does save time generally, but it's not as huge a gain.
There are times, after iterating so much, that I'm not sure I've even saved time, because going from "it works" to "it's up to quality" takes so long.
It still is if the agent brings it up to quality fast.
And yeah, it usually does for me.
I mean you have to compare apples to apples.
If you are coding by hand like in the old days, you are probably not literally writing everything from scratch anyway; you are copy-pasting a bunch of shit off Google and Stack Overflow, or installing open source libraries.
I also reuse a lot of my own code, either from libraries I built or just by directly copy-pasting (like boilerplate for setting up the basics of something in my style).
"... iterating with the agent till the code is the exact same quality that you yourself would write"
I don't want my code quality, I want AGI code quality - that's what I was promised and jetpacks and flying cars too!
It's a fairly new way of doing things. I predict that in the future it will be more formalized and standardized, like Agile and Scrum and all that boring stuff.
The result of that, though, would be the establishment of development patterns that are good practices.
The rule of thumb is: An agent can write it, but a human has to understand it before it gets pushed to prod.
I'm still not convinced by the doom and gloom over developers being replaced. I'm not a dev as part of my main job function, but where I do use LLMs, it has been to do things I couldn't have done before because I just didn't have time and had to de-prioritize them. You can ship more and better features. LLMs being tools and all, I think there is too much focus on how the tool should be used without considering desired and actualized results.
If you just want an app shipped with little hassle and that's it, just let Claude do most of the work and get it over with. If you have other requirements, well, that's where the best practices and standards would come in the future (I hope), but for now we're all just reading random blog posts, seeing how others are faring, and experimenting.
The slot machine lever is my least favourite opinion on the subject.
Also, let’s not forget: the developer is rarely the person pitching the feature, and is normally given the constraints and the PRD…
Soooo people can keep tip-tapping on the keyboard, but eventually they need to open their minds to the possibility that “the old way” is actually dead.
I've been using AI tools to brainstorm approaches and sometimes generate code, but actually doing the typing myself. That way I'm less likely to forget the mechanics and programming language over time.
One approach you can use is to ask it never to write the code for you, which forces it to explain; then, once you try the idea by coding it yourself, you get a better understanding of it. I use this approach with code I am required to maintain. It still bites me sometimes, because the models still mix in a lot of incorrect information (usually just stuff that was correct in the past but is incorrect now). For throwaway and easy-to-verify scripts I do ask it to generate the code, but I ask it to avoid over-engineering and trying to catch all the corner cases, because in scripts I prefer just letting things error - they are better understood as a step that failed. I also avoid languages I find hard to read (like PowerShell) and prefer to generate things short enough to fit on the monitor so I can read and understand everything (Python, Bash, and batch are my go-to scripting languages).
Same. I've also configured the system prompt to never give me a full solution or write code for me. So whenever I ask it a question, it produces a short 10-line example or even pseudocode. This is far easier for me to reason about.
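Something along these lines works as the instruction block - a rough sketch, not my exact wording; tune it to taste:

    You are an explainer, not a code generator.
    - Never write a complete solution or drop-in code for me.
    - Explain the approach first, in prose.
    - If code is unavoidable, give at most a ~10 line illustrative
      example or pseudocode, never the full implementation.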
I still reject > 50% of AI suggestions, because they're too mediocre - like moving code around for no reason - or sometimes just plain wrong.
This is exactly what I do. I'm glad I'm not the only one.
Me too... more or less. I'm mostly still typing, sometimes copy-and-pasting with typed changes, and rarely copy-and-pasting verbatim - with the caveat that in some cases, like prototypes, proofs of concept, and porting code between languages, many lines may be copy-and-pasted verbatim.
Re the vendor lock-in point: this is really a harness issue. Sure, CC is restricted to Anthropic models, but it's not the only harness out there. So if one vendor has an outage or botches the quality of their models due to a compute shortage, you can switch to another vendor. LLMs are the easiest thing to switch. Of course, if hardware costs go up, so will all AI vendors' prices. The only way out for the employer would be to buy the hardware directly (or do a fixed-price deal with a cloud provider).
Re the understanding-code point: you can still use LLMs to understand code. If you write the spec without knowing anything about the code, of course the architecture might suck - maybe there is already a subsystem you could modify and extend instead of adding a completely new one for the feature you are building, etc.
I use LLMs for my daily workflows, and they understand code perfectly well, and much more quickly than if I read it myself.
I’ve built a configuration transpiler between Claude Code and Codex and found I can switch pretty quickly between the two and run both at once. At the moment Codex performs better; before that, CC did. There is no vendor lock-in - that’s an old canard in technology, and one that LLMs themselves make irrelevant. Once you’ve got an implementation that uses X, converting it to Y is almost trivial with an LLM, because the spec is canonical in the reference.
I would love to see that! Mind sharing it?
CC isn’t even limited to Anthropic models: there’s a post on the front page right now about using it with Deepseek V4, since Deepseek provides an Anthropic-compatible API and CC reads the API URL from environment variables, so you can override it.
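Roughly like this, if I remember the variable names right - a sketch only; check both vendors' docs before relying on it:

    # point Claude Code at any Anthropic-compatible endpoint
    export ANTHROPIC_BASE_URL="https://api.deepseek.com/anthropic"
    export ANTHROPIC_AUTH_TOKEN="your-api-key-here"
    claude    # CC now talks to the overridden endpoint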
I've come to the conclusion that if AI can do it, it's not hard. None of the complicated software I work on can be reliably written by AI yet.
I mean, that’s been my line every time someone makes impressed noises when I say I’m a programmer - it’s really not that hard; it’s really just a question of whether you like it enough to put the work in, like anything else. “Don’t you have to be a math whiz?” No, dude, 95% of the time whatever you’re trying to do already has a very well researched approach; a lot of the time you’re just picking which pre-vetted solution to adapt to your needs.
No, I mean the opposite: some programming is actually hard.
I kind of think this article misses the mark a little.
There is skill loss from heavy AI use.
But I want to acknowledge the awkward elephant in the room: AI is making people too fast. I don't mean that faster output is bad in itself. It's faster output of code without the full understanding and experience that come from producing the code. It's rewarding the people who talk about business value rather than the people who are building things and making safe decisions with deep knowledge.
AI: yes, it's good and it can produce some good solutions, but ultimately it doesn't know what it's doing, and in the best of cases it needs strong orchestrators.
We're in a cesspit of business-driven development, and the people responsible are not getting the harsh reputational punishment that bad decisions deserve.
Lars, we are on the same page. I use LLMs to help me scope a task and to get a second set of eyes on the high-level parts. Then I write the code. Often I automate boilerplate or boring objects, but sometimes it's faster/better for me to just write them. Then I will ask an LLM to, say, write some tests. Then I will focus on the cases they missed and write those myself.
I have been described as a decel and a Luddite, though, so be wary of my opinions.
This author assumes that workforce development is a first-order priority for businesses, or at least for the health of the industry.
Why make this assumption so confidently?
The arrival of the electronic computer did not turn human computers into programmers, it simply eliminated them en masse.
I try to make understanding the bottleneck and it seems to work out for me while still delivering solid productivity gains.
I would like to see a study of brain scans during flow-state manual programming compared with code review. If the conclusion is that different parts of the brain are activated, then orchestration is a separate activity entirely: reading code is not the same as writing code.
However, the code review arm of the study would need to distinguish surface scanning from reviewing long enough to get over a theoretical threshold of perspective: the point where you assume the coding chair, enter the author's frame, and the brain perhaps shifts into a different cognitive mode.
Otherwise, just stamping "Looks good to me" is likely to lead to the same atrophy. There's no critical thought - not even a self-summary of the change or active questioning.
Thoughtful, deliberate code review just plain takes longer. AI can help here a lot, although it also tends to take over the "get into review mode" process.
I absolutely feel like a "different" part of my mind is loaded when seriously engineering something myself vs vibecoding+reviewing. Even the reviewing is more annoying in the latter mental context.
Many firms are going to go bust because of the dangerous assumptions they made regarding expected LLM improvements.
And they will deserve it.
It is definitely not the same parts of the brain.
Code review alone is kind of like being able to understand a foreign language well enough to read it, but not really to follow it in flowing conversation or to speak it, much less to construct a complex piece of literature.
Retention also suffers, as you will quickly forget what you just reviewed. What is the last PR you remember?
There’s too much in this article to comment on it all, but if we zoom into the first claim:
> An increase in the complexity of the surrounding systems to mitigate the increased ambiguity of AI's non-determinism.
My question is: why isn’t there an effort from the author to mitigate the insane things that LLMs do? For example, I set up a hexagonal design pattern for our backend. Claude Code printed out directionally OK but actually nonsensical code when I asked it to riff off the canonical example.
Then I built linters specific to the conventions I want. For example, all hexagonal features share the same directory structure, and the port.py file has a Protocol class suffixed with “Port”.
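A stripped-down sketch of what one of those checks looks like - the paths and names here are illustrative, not our real layout:

    import ast, pathlib, sys

    FEATURES = pathlib.Path("src/features")  # hypothetical feature root

    def check_feature(feature: pathlib.Path) -> list[str]:
        port = feature / "port.py"
        if not port.exists():
            return [f"{feature.name}: missing port.py"]
        tree = ast.parse(port.read_text())
        # require at least one Protocol subclass whose name ends in "Port"
        has_port = any(
            isinstance(node, ast.ClassDef)
            and node.name.endswith("Port")
            and any(getattr(b, "id", getattr(b, "attr", None)) == "Protocol"
                    for b in node.bases)
            for node in ast.walk(tree)
        )
        return [] if has_port else [f"{feature.name}: no Protocol class suffixed 'Port'"]

    if __name__ == "__main__":
        errors = [e for d in sorted(FEATURES.iterdir()) if d.is_dir()
                  for e in check_feature(d)]
        print("\n".join(errors))
        sys.exit(1 if errors else 0)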
That was better, but there was a bunch of wheel-spinning, so I then built a scaffolder into the linter to print out templated code depending on what I want to do.
Then I was worried it was hallucinating data, so I wrote a fixture generator that reads from our DB and creates accurate fixtures for our adapters.
Since good code has never been “100% self-explanatory, without comments”, I employ BDD so the LLM can print out, in a human-readable way, what the expected logical flow is. And, for example, any disabling of a custom rule I wrote requires an explanation of why as a comment.
Meanwhile, I’m collecting feedback from the agents along the way about where they get tripped up and what could improve in the architecture, so we can extend more trust to the output. For instance, I only have a fixture printer because an agent called out that real data (redacted, yes) would be a better source of truth than any mocks I made.
Finally, code review is now less focused on the boilerplate and much more on the control flow in the use_case.
The stakes of having shitty code in these in-house tools are almost zero, since new rules and rule version bumps are enforced with a ratchet pattern. Let the world fail on the first pass.
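For anyone unfamiliar with the ratchet idea, a minimal sketch (the baseline file and wiring are illustrative): the violation count may never rise, and every improvement lowers the ceiling.

    import json, pathlib, sys

    BASELINE = pathlib.Path("lint_baseline.json")  # hypothetical baseline file

    def ratchet(current: int) -> int:
        # allowed ceiling = last recorded count; first run records the status quo
        allowed = (json.loads(BASELINE.read_text())["count"]
                   if BASELINE.exists() else current)
        if current > allowed:
            print(f"ratchet: {current} violations, allowed {allowed}")
            return 1  # fail the build: new violations were introduced
        # ratchet down: the new, lower count becomes the ceiling
        BASELINE.write_text(json.dumps({"count": current}))
        return 0

    if __name__ == "__main__":
        sys.exit(ratchet(int(sys.argv[1])))  # e.g. fed the linter's violation count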
Anyway, it seems to me that with some investment you can slap rails on your code and stay sharp along the way. I have a strong vision for what works, I am able to prove it deterministically with my homespun linters, and I am being challenged by the LLMs daily with new ideas to bolt on.
So I don’t know - it seems like the issue comes down to choosing to mistrust instead of slapping on rails.
Edit: I wanted to ask if anyone is taking this approach or something similar, or has thought about things like writing linters for popular packages that would encourage a canonical implementation (I have seen some crazy, crazy modeling with ORMs just from folks not reading the docs). HMU, would love to chat: youngii.jc@gmail
Agents are a first-generation technology. They propose and act at the same time. I recommend you read https://safebots.ai/agents.html
The only way I've found to cope with this is to grind LeetCode or Advent of Code. It's kinda funny how fast this all changed. The less funny part is that I'm now kinda fearful for my job down the line.
"Don't vibe code" but here's a deadline that's impossible without it. classic
Software engineers and their leadership have been pushing back on terrible product managers for decades. AI isn't a reason to stop our time-honored tradition. If anything, we can write the emails faster now.
This is exactly the same problem of "mechanical engineers' job is to design parts, not machine them, so we'll take training on machines out of the mech eng curriculum." Result: fresh mech eng grads do not know how to properly design parts because they have no idea how they are machined.
How do they solve this for mechanical engineering? Or is it an ongoing problem?
Vertical integration is valuable at many different scales
Surely they're made on CNC machines now? (well, since the 1970s)
Doesn’t matter: asking for physically impossible features to cut is something you can still technically do. Or you ask for a feature that adds multiple setups to an otherwise simple part and makes it wildly expensive.
The CNC doesn't know either. What usually happens is that an engineer at the CNC shop figures it out for you.
Knowing some machining still lets you design parts and assemblies that are some combination of cheaper, better, etc. This is noticeable with precision or high-performance assemblies, and in how many revisions are needed.
How can we solve this at a more fundamental level?
I think many people already recognize the problem:
- “Our ability to write code is being damaged.”
- “If our ability to write code declines, our ability to recognize good code also declines.”
But the problem is that the market no longer works without LLMs.
Freelance rates and deadlines are now calibrated around LLM-assisted output. Even clients who write “do not vibe code” often set deadlines that are impossible to meet unless you use something like vibe coding. The client’s expectations themselves are becoming abnormal.
That is the irony of the market.
I honestly do not know what to do.
Recent Hacker News discussions are mostly a negative echo chamber about AI use. In other places it is often the opposite: only positive echoes. But almost nobody discusses actual solutions.
The main topics I keep seeing are roughly these:
1. Is the large-repository PR system failing a fundamental stress test? Or should AI-generated code simply not be merged? If PR review is moving from handmade production to mass production, how should the PR system change? Or should it remain the same?
2. As vendor lock-in continues, can we move toward local LLMs to escape it? Are cost and harness design manageable? What level of local model is required to reach a similar coding speed?
3. If we are forced to use agentic coding, how do we avoid damaging our own ability to code?

There is a passage from Christopher Alexander that I keep thinking about:
“A whole academic field has grown up around the idea of ‘design methods’—and I have been hailed as one of the leading exponents of these so-called design methods. I am very sorry that this has happened, and want to state, publicly, that I reject the whole idea of design methods as a subject of study, since I think it is absurd to separate the study of designing from the practice of design. In fact, people who study design methods without also practicing design are almost always frustrated designers who have no sap in them, who have lost, or never had, the urge to shape things.”
— Christopher Alexander, 1971
This quote feels relevant to programming now. If we separate the study and supervision of programming from the actual practice of making, something important may be lost.
In architecture, there is this idea that without practice, the architect loses meaning. But now the market is forcing the separation.
People with enough symbolic capital and high status have the freedom not to use AI. But people lower in the market are under pressure to use it.
So I think the discussion now needs to move beyond whether AI coding is good or bad.
The real question is: how do we keep using AI, because the market demands it, while still preserving the human practice that makes programming meaningful and keeps our judgment alive?
I think these are the important questions.
How do you maintain market value without using AI?
Or, if you do use AI, how do you avoid being treated as low-quality?
If you do not use AI, how can you remain more competitive than people who do use it?
If you do use AI, what advantage do you have over people who do not use it, and how should you position yourself?
I know that agentic coding can cause skill degradation. I can feel it happening to me already. But for someone like me, who does not have strong status, credentials, or symbolic capital, social and market pressure makes AI almost unavoidable.
What frustrates me is that I do not see practical answers anywhere.
"How can we solve this at a more fundamental level?"
Stop using AI for coding. Period... there is no other solution. You can't make it work, and nobody else can either. Without determinism, the entire process is useless. We all know this is true; we need to stop pretending otherwise. We have given it a chance, it failed, and it is time to move on to something else, no matter how much the VCs and execs don't want to. Those who do move on have a chance; the others have no future in software.
The issue is that you will end up without a job if the trend continues. It's similar to many cases of technical innovation: you can still have a few workers who do handcrafted work, but most of them have to use the machines, which may produce work of inferior quality but at much higher speed.
The market realigns, and unless you handwrite the highest possible quality at a quick pace, you won't be competitive with the vibe-coders who can fix a hundred issues a month.
It was the same with GPS-assisted driving: now most people can't orient themselves on their own. Worse, road signs with directions are no longer being installed, meaning you are stuck using the GPS.
I agree with what you are saying, but if I cannot get work, I may literally have nothing to eat tomorrow.
So while I agree with your point, it does not feel like a practical answer for my situation. For someone who is already well known and has enough reputation, refusing to use AI may be a matter of principle. But I am dealing with survival.
I do not think your answer is bad. But because this is a survival problem, it is difficult for me to risk everything on principle.
In other words, I know that your answer may be the morally correct one. If everyone boycotted this, perhaps it would not be adopted so aggressively.
But I cannot do that.
What I need is a way to use AI while degrading my own ability as little as possible, and while still preserving my skills.
I am not saying you are wrong. I am saying that your answer is too idealistic for someone in my position.