I think the existing comments already cover most of it. Also, I would argue that we are seeing a new emerging group of coders come into the realm of programming, and we are judging them at their worst and comparing them to our best. It is quite insane to me to expect someone who just started to fully build google.com and all of its infra, security, etc.
> I would argue that we are seeing a new emerging group of coders come into the realm of programming and we are judging them at their worst and comparing them to our best.
Maybe, but the world seems to be inviting this comparison by acting as though they are going to disrupt and replace the established, experienced coders.
> I would argue that we are seeing a new emerging group of coders come into the realm of programming and we are judging them at their worst and comparing them to our best.
Nyes. I think what's happening is that these new guys are coming in, using AI, and telling us how super awesome and powerful they are because of AI, and that nothing could ever go wrong.
> It is quite insane to me to expect someone who just started to fully build google.com and all of its infra, security, etc.
But it's not us expecting them to do it. It's them telling us they can do it because they have AI.
Look, I've been using Claude and Codex agents for about six months now, essentially full time for coding (when I code). I can't ask my people to use a tool I have no experience with myself, so I purposely forced myself to use the agents, and only the agents, as much as I could bear, resorting to manual changes in very few instances. And there have been many, many frustrations, believe me.
The number of times that even Seniors have pasted Claude analyses verbatim to me as truth, when it was apparent after a first read-through that the output wasn't true, is amazing. How we expect juniors, who have far less developed "spidey senses," to successfully navigate that is beyond me. Most people are trusting by default. They shouldn't be, but it's human nature for most of us. For some it isn't, like myself: I'm already the dude who asks too many questions of humans when they're not clear on what they assumed vs. what they verified.
For example: I recently showed a full-page analysis from a Slack thread (made by some other Senior) to one of my Seniors and asked him to tell me where it showed that it was BS and not true. He couldn't do it. He tried over and over and was unable to. I read it, and the second paragraph out of many was BS and just not true. Easy to verify. Claude didn't have access to the actual information (because of various circumstances) and just made something up: it said the relevant code was deployed, therefore XYZ was true. It then listed lots of extra analysis, which sounded reasonable and probably was, if the premise had been correct. It wasn't. The code had never been released at that point.
I've been making the same kind of "spidey senses are tingling" comments and questions back to people for many, many years, and others are usually not good with that sort of thing (exceptions prove the rule). Because people do the same kind of BS-ing that Claude et al. do. Claude is actually generally "better" than people about questioning its own judgement; people in many cases have feelings attached to their investigations (even when, pre-AI, they very blatantly didn't check something and just assumed it all by themselves).
I think definitionally "vibe coding" means you feel out of control; in fact, I would say Karpathy is deliberately trying to bring those feelings out in people.
If you are using an AI assistant with your feet on the ground, like a coding buddy that you pair with, you're not "vibe coding".
> If you keep vibe-adding features, and somehow keep getting customers to pay for this thing, what happens once the codebase becomes so complex that an LLM cannot fit it inside its “brain”?
You realize this point is well, well beyond what a human can "fit" in their brain too? You start making shorthands and assumptions about your systems once they get too large.
One of the main weaknesses of current AI is that it doesn't modularize unless you explicitly say so in the prompt; or it will modularize but "forget" that it already included a feature in file B, so it redundantly retypes it in file A, causing features to break further down the line.
Modularizing code is important, and a lot of devs will learn this. I had 2k-line files at the beginning of my career (before AI); I now usually keep files between 100 and 500 lines (and not just because of AI).
While I rarely use AI on my code, if I want to feed my program into a local LLM that only has an 8-32k-token context (depending on the model), I need to keep files small to leave room for my prompt and other things.
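As a rough back-of-the-envelope sketch of that budgeting (the ~4-characters-per-token heuristic and the window/reserve numbers below are illustrative assumptions, not exact figures for any particular model):

```python
# Back-of-the-envelope check: will this file fit in a small local LLM's
# context window, with room left over for the prompt and the reply?

def rough_token_estimate(text: str) -> int:
    # Common rule of thumb: roughly 4 characters per token for English/code.
    return len(text) // 4

def fits_in_context(file_text: str,
                    context_tokens: int = 8192,
                    reserved_tokens: int = 2048) -> bool:
    # Reserve part of the window for the prompt and the model's answer.
    return rough_token_estimate(file_text) <= context_tokens - reserved_tokens

if __name__ == "__main__":
    source = open(__file__).read()
    print(f"~{rough_token_estimate(source)} tokens;",
          "fits" if fits_in_context(source) else "too big")
```

A 500-line file at ~40 characters per line comes out around 5k tokens, which is exactly the scale where this starts to matter.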
Even as a human, it's much easier to edit code when it's modular. I used to like having everything in one file, but not anymore: with a modular codebase you can import one function into two different files, so changing it in one place changes it everywhere.
TLDR: Modularizing your code makes it easier for both you (as a human) and an AI assistant to review your codebase, and it reduces the risk of redundant development, which AI frequently produces without realizing it.
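To make the "one definition, many importers" point concrete, here's a minimal sketch (the module and function names are made up for illustration):

```python
# pricing.py: the single source of truth for this bit of logic.

def apply_discount(price: float, pct: float) -> float:
    """Defined once; every caller imports it instead of retyping it."""
    return round(price * (1 - pct / 100), 2)

# checkout.py and invoices.py would each start with
#     from pricing import apply_discount
# rather than duplicating the function (the "file A / file B" drift
# described above), so a fix to the rounding rule lands everywhere at once.

if __name__ == "__main__":
    print(apply_discount(199.99, 10))
```

If the function were instead retyped in both files, a bug fix applied to one copy would silently leave the other copy broken, which is exactly the failure mode AI-generated duplication produces.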
The judgement and pushback are pretty warranted.
Time will tell. You can set reasonable constraints and review the code, unless you are disqualifying that as vibe coding.