Outsourcing thinking to AI is not healthy, and certainly if everyone used AI like this we're doomed.
I still think it's true that those who don't use AI will be left behind, but it's a bit tautological, because the thing they're left behind on is AI. A lot of the biggest companies on earth are putting a lot of money into AI, but if you're OK with working for a company that is not putting all their money into AI, that's perfectly fine.
Just like blockchain was everywhere ten years ago and now is just kinda _there_. If you got in before the hype you could have made a lot of money. If you didn't, you were left behind. I was left behind and I'm OK with that.
Any engineer (any person, actually) can “learn to use AI” in a couple of days. It’s not rocket science; there’s no real chance of being left behind. If you haven’t used LLMs at all, a weekend would be enough to be on par with everyone else in the industry.
The better you are at architecting, or even at directing a junior developer, the better your output too. Don't let AI make decisions; it's supposed to take your decisions and turn those into code. When AI makes decisions, the unexpected outcome is always on you.
> Don't let AI make decisions; it's supposed to take your decisions and turn those into code.
I let the AI make decisions all the time. I often approve them, and I sometimes revert them. Most of the time they’re really good decisions based on my initial intent, followed by analysis I didn’t make myself but agree with.
I always found it easier to write code myself than to direct a junior developer.
The level of teaching involved would always mean the overall velocity of work slowed down.
Some people say you can throw them the drudge work, but I find that if you're doing coding right (e.g. you don't let your code base degenerate into a mess of boilerplate), there is barely any drudge work to do.
A weekend is enough to get going, but not nearly enough to 'be on par' with everyone else.
That said - what we have learned in the last year could be compressed quite a lot - there are a lot of steps we could skip, and a lot of 'learn by failure' that need not be repeated.
It takes a while to get the subtleties of it, it's among the most highly nuanced things we've ever encountered.
Just learn, sure. But the difference between my efficiency of using it on my day 2 and month 6 is significant. Yet I feel I am barely scratching the surface of it.
> a weekend would be enough to be on par with everyone else in the industry
I kind of agree in general that it is a learned skill, but considering how unclear people generally are when they communicate, I'm guessing it'll take longer than a weekend to catch up, especially to catch up to people who've been working on precise and careful communication and language for years already in a professional environment.
If one has been reading a wide variety of books/papers/articles/whatever their whole life, and one has been mindful of how to communicate with the "written word" as it were, it takes about 3 hours to be wildly effective with this technology. I think it took longer to learn google-fu than it did to learn how to use this technology effectively.
The statement is absurd because the skill curve for AI tooling is so shallow you can mess around for a day or two and get "caught up" with the zeitgeist. And what you need to know to get started is actually far less these days than it was 1.5 years ago, thanks to all the product refinement that took place in the space.
The only real risk is that today there's an expectation from employers that you've got some AI experience under your belt that you can articulate. But you can get that experience today.
Most good programmers are good at writing. If you're capable of simultaneously writing instructions for a dumb abstract machine and making those instructions understandable to humans, you're clearly good at expressing at least technical ideas.
6-12 months ago I felt like I was constantly behind the curve with all the different things people were doing to get more out of their Claude Code. As the year has progressed, though, all of those features keep making their way into vanilla Claude Code, at a faster and faster rate. Now someone working on the bleeding edge is using things that I'll be using without having to think about them a month from now. It has really reduced my anxiety about being left behind.
That's the thing: any "advancement" you might discover will be integrated into the main tools soon enough. In fact, I'd say you probably shouldn't even learn them before they are integrated; it helps you filter through all the noise and avoid wasting time on learning something that isn't going to take off.
I feel like I use AI this way, but a majority of my peers lean too much into it. There used to be the saying "we don't think, we google", and I see the same with AI usage. As soon as a roadblock appears, the situation is pasted into GPT without further engaging with it, and then they pick up the phone and open an app while GPT does its thing 0_o
I have a coworker like that too, my pet theory is that they're not passionate about their job to begin with. It's just something that can pay their bills.
While waiting for Claude to finish, we talked about our hobbies outside of work, and the same guy would go into deep detail on how steroids and the HPG axis work, and even gave me a spreadsheet with several NCBI PubMed links on the topic.
I think we are all naturally more creative and opinionated in things we are interested in.
An orthogonal observation: Bearblog seems to have become an anti-AI echo chamber. Their community responds very positively to posts exactly like this one [1] [2] [3]
I think it's just important context to keep in mind that these sorts of takes are very typical to top https://bearblog.dev/discover/ in the same way that certain types of posts are designed to rank well here. I considered migrating my blog there earlier this year and ended up deciding that, while I loved the product, the community was not healthy.
People are worse at mental arithmetic than they were in the recent past, so it's not clear that they aren't "dumber" in the sense people meant at the time.
And did our thinking about the importance of being good at arithmetic change in response? I think so.
We also used to be much better at remembering things when we relied on oral histories; our memory skills have degraded quite a bit. And there's a quote from Socrates criticizing writing as a crutch that degrades our skill (https://www.perseus.tufts.edu/hopper/text?doc=Perseus:text:1... , the last bit). Over time, we've just moved to valuing other things more.
Well, with anything, practice is key. When I was in school, I was in a math competition where you had to do everything in your head. There was no scratch paper, you could not modify your answer once written, and erasing was obviously not allowed either. I wasn't the greatest at it, but I didn't suck at it either. That was decades ago, and I no longer do math in my head that way. What used to take me seconds now takes a couple of seconds just to think about what needs to be done, and then more time still to come up with the result.
Sounds like you haven't used it much. It starts small with you forgetting the arcane params to commonly used tools that you don't need to type anymore. Where it will stop nobody knows.
Students score lower on standardized tests in the 2020s than those in the 1990s did, so your stance feels misguided. Although I don't think Google and calculators are the main culprits, I do think it's due to the larger technology/internet landscape.
I once worked for a guy who typed 7 + 4 into a calculator, after freezing for 1.5 secs trying to work it out in his head. It was in a "stressful" situation (not something extreme, we just were in a hurry), and I'm sure the guy could add those numbers in his head, generally... he owns his own business, after all. It took so much out of me to not move a face muscle.
Once you write something and publish it, it's just out there; it doesn't really get healthy or unhealthy. I do not think all writing is meant to be, or needs to be, the representation of someone's mind and its health. We write to have the opinion exist outside of ourselves. Why would we even read things if what we read didn't have strong beliefs or opinions? It sounds so boring!
In my practice, I've found AI more useful in adversarial mode ("criticize this concept", "find a possible bug in this code", "challenge me", "quiz me on the knowledge"), because the knowledge found adds to your own skills.
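For example, a minimal sketch of that adversarial mode in code (assuming the OpenAI Python SDK; the model name, prompts, and "parser.py" file are illustrative stand-ins, not a recommendation):

    # Use the LLM as a hostile reviewer instead of an oracle.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    code = open("parser.py").read()  # hypothetical file under review
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": (
                "You are an adversarial code reviewer. Find possible bugs, "
                "edge cases, and weak assumptions. Do not rewrite the code."
            )},
            {"role": "user", "content": code},
        ],
    )
    print(resp.choices[0].message.content)

The point is that you still do the fixing yourself; the model only supplies the challenge.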
You don't need super big brain IQ to be creative and expressive, all you need is simply a strong opinion on something, and you don't let AI (or other people) dictate otherwise.
Now the skill issue lies in whether your opinion is a good idea or not lol.
Some people who don't use AI will be left behind - those who work on things where LLMs are capable of a substantial share of the tasks, if they just refuse to leverage the superhuman properties that LLMs have.
I don't think it's hard to catch up if such a person changes their mind, though.
Some people who do use AI will also be left behind - those who use it to replace their skills without developing new ones, and those who use it to do the same or worse work more cheaply. They will be left behind in a competitive world where others will work out how to use it to do more or better work with no reduction in effort.
If LLMs mean I never have to open a PowerPoint from a client to pull out their "data" again, that'd be great. I gain nothing from being a manual data entry monkey for people who don't understand the concept that presentation-ready output formats are not data transmission formats.
But if I'm to be expected to employ vibecoding in my day to day job as a software engineer, I'll dismantle my house and go live off grid somewhere in Alaska. I have enough power tools and knowledge to do it. Probably massively healthier for my kids.
For now at least, I think it really depends on what type of coding that is.
I don't have any particular predictions going forward about it, but something I think about right now is, do I want to focus my time where the interesting decisions, the valuable contributions I make, are product-level thinking about what to build and what problems to solve? Or do I want to focus my time where the interesting decisions are technical ones, fully wrapping my head around a technical problem and coming up with a solution?
I do think both options are still available, and personally I love them both. But I don't know what types of coding would involve significant amounts of both activities anymore.
There's still a lot of room for both, because they are just shifts of perspective around the same thing: solving a problem for someone.
Product is when you're seeing things as the one who has the problem and designing the solution in a way that is usable. Technical is when you shift to seeing how the solution can be implemented, and then balancing tradeoffs (mostly costs in time and monetary resources).
While the code is valuable (as it is the solution), building it is quite easy once you have good knowledge of both sides.
The issue with AI is not in its capabilities, but in people rushing to accept the first version when there are still unknowns in the project. And then changes cost almost as much as redoing the project properly.
I have a feeling that a big risk of using AI all the time is that our own neurological capacity starts to dwindle.
Just as many people leading sedentary lifestyles have to make a deliberate effort to exercise, because inactivity is really bad for our bodies, I think we're going to realise that a similar process is necessary for our minds.
You really want to be spending a bit of time every day operating at your cognitive limits - trying to fully engage your System 2 - if you want to avoid brain atrophy. Coding used to kind of give you this exercise for free, but you can go really far with just your System 1 nowadays - literally get things done while scrolling Reddit.
I'm trying to allocate 30-60 minutes a day to doing something difficult, like writing code by hand for an unfamiliar problem or reading and summarising difficult papers without AI.
I agree with OP that it's the other way around: while some will gradually lose basic skills by relying more and more on AI for productivity's sake and out of laziness, the value of those "people who don't use AI" will go up, because they chose to simply keep "learning the hard way".
People who can only use AI will be left behind. It is easy to shut off your brain when using AI and then get overwhelmed by the amount of code it produces. Worse, though, is when people replace programming experience with AI. I have seen a lot of really bad AI code. I can spot and repair it. Others cannot. And that is a problem. And I am not talking about purist principles. I am talking about bad, unoptimized code that I can spot with just one look.
It is a tool just like syntax highlighting, code completion and refactoring tools before it. You need to know how to use them, where their usefulness ends and you should probably have an idea how to do it yourself without the tool. It is okay if you will be less efficient, but it's bad if you just can't.
I think what we're seeing is it's just an amplification of whatever intrinsic motivations people have; the whole mirror to the self thing, on steroids.
Obviously people who are motivated by curiosity will have a different view, and those who value creativity will end up thinking otherwise.
Also, it's basically impossible to separate the technical capabilities from the big money fascists pushing it.
I think it's pretty obvious that people who offload their thinking to an LLM will eventually get used to not thinking hard about things. Anything you do that you stop doing regularly eventually atrophies. Thinking hard about things and performing on work is as much a skill as it is an innate property of being smart, as evidenced by the many "prodigy" sort of folk who languish in obscurity later in life.
A lot of people do outsource their thinking to AI, so it's not that weird to bring up. That's effectively how many AI companies are marketing the technology.
But it's definitely possible to use AI without letting it think for you. OP should at least acknowledge that.
Those who dogmatically refuse AI outright may be disadvantaged for some things in the future. But it's also probably hyperbolic to say they will be "left behind".
This energy would be better directed anywhere else.
The author chose to take offense at a false dichotomy, presented vaguely, in a way that serves no purpose other than dividing and poorly labeling everyone in an area where much nuance applies.
I think this is perhaps a side effect of consuming too much content and feeling overwhelmed with it.
Engaging with stuff like this only amplifies its effects. How about doing anything else instead? Maybe learn something new, like how to channel your anger.
My take on it is I would rather code than ask the machine to code. It's frustrating though how many open source projects now are overrun with massive PRs and nobody to code review them. This feels like fallout from too much reliance on AI.
This is an abacus-to-calculator situation. Some people still use an abacus. The vast majority do not. It's wild living through one of these technological transitions. People just eschew all common sense and critical thinking as it relates to the adoption of new technologies.
If it's good, lots of people will use it commercially. If it's generationally good, everybody will use it commercially because commercial use is about competition. It either gets banned outright, like steroids, or — if it doesn't get banned — those who use it will have a clear advantage and that will lead to a very small number of people who don't use it (in business).
This is not really something that opinions are required for because if you think LLMs are going away, your opinion is historically incorrect. Things that reduce toil and increase output do not go away.
I think of AI as just another abstraction layer, somewhat similar to what high-level programming languages provide compared to writing machine code. Deciding how deep to understand the abstraction layers is a choice the user has to make, which could be optional if they don't really need to.
Nevertheless, the responsibility of whatever a human produces with AI is still on the human.
With that said, knowing how to use AI the way it's right for you can give you a huge advantage. You don't have to though. And there is not a standard way of doing it.
What I recommend to everyone is to give it a try and see if and how it could help you. In the end, you have to make the decision based on your constraints, what you're aiming for, and what you can sacrifice, including but not limited to speed, accuracy, learning, etc.
Or a manifestation of having risk aversion that isn't easily swayed by peer pressure...
Everyone seems to know you can't trust the AI output, and that it is on you to review it. But whenever I talk to people who claim to be getting big benefits, there is always a moment they reveal that they are not really reviewing the output. They are just going with it.
Similarly, so many who claim to use AI as a search index eventually seem to just trust the summary instead of checking the references to figure out whether it is regurgitating fact or fiction.
I don't really know if these users always had low quality standards or low diligence, or whether the tool usage degrades them. But I see the correlation among the friends-of-friends network I can observe.
On the contrary, using AI is like outsourcing your DIY to a professional joiner.
Sure, he'll get it done twice as fast and you might notice some tricks as you look over his shoulder. But when you need a second door hung, you'll either have to start learning from scratch or call him again.
Yes, but it goes both ways. Using AI can be a great way to be productive while purposefully NOT learning how the sausage is made—say, boilerplate code in some devops system that you don't care about—allowing your attention to be focused on the part of the stack you actually care about.
Trading practice of primary skills for indirect skills like AI is like a writer deciding they should stop writing directly and get really good at Microsoft Word.
A good job is one that brings you joy and improves your creativity; by definition that can't be in hell. If you mean well paying, that's a different thing entirely: ditch the fancy car and adjust your lifestyle.
I have a fancy car? News to me. I'm just trying to pay my bills and live a sensible and reasonably comfortable life. These days that requires a lot of money.
Just like every single trend that came before, they said you would be left behind:
If you didn't embrace OOP
Test driven development
Behavior driven development
Event driven development
Pants in head driven development
SOLID
DRY
Cloud first
Virtualization everything
Microservices
Serverless
Everything js
Everything ts
Everything Microsoft
This will never stop.
You either let someone be in the middle of you and what you want to accomplish, or you will be left behind.
Think about the most mediocre person you know. Now remember that 50% of the people around you are dumber than that.
> People who rely on AI are the ones who will be left behind. They'll forget how to think, how to write, how to do a simple reliable search, how to tell fact from fiction... they'll forget how to fucking LEARN. I think that's the part that makes me the saddest. What a beautiful thing it is just to learn stuff.
We could replace "AI" here with many different terms and the argument would remain unchanged.
Sadly I think the author is wrong though I agree with the spirit of what they're saying.
Economic pressures will force workers to use AI as part of their work. Categorically refusing to use AI under any circumstances will guarantee being left behind.
Like others are pointing out, if we define "using AI" as "outsourcing all your thinking to AI" then yeah, those people will perhaps not do well...or will they?
Most people consider quality work a hassle. It takes a long time and it gets in the way of shipping. I've worked with quite a few people who were lousy engineers but boy could they ship. They were universally beloved by the business and tolerated/loathed by the engineering side. But they're the ones who get promoted and get ahead.
Life is hard, but at least on the other hand, it's also unfair.
Friendly reminder that we're still in the hype phase, even if it's the late stages.
To me the idea that a GPU which costs as much as a car must read its entire VRAM just to output a word sounds incredibly wasteful. I'm exaggerating here, but it is literally reading gigabytes of data and processing it to produce relatively little information.
Some data is truly worth the effort, but the majority won't be able to afford this long term - especially when those who capture the market increase prices.
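To put rough numbers on that (a back-of-envelope sketch; the parameter count and bandwidth figures are assumptions for illustration, not measurements of any particular GPU):

    # Memory-bandwidth ceiling on LLM decode speed: every generated token
    # requires streaming (roughly) all model weights through the GPU.
    params = 70e9              # assumed model size: 70B parameters
    bytes_per_param = 1        # assumed 8-bit quantized weights
    weight_bytes = params * bytes_per_param    # ~70 GB read per token
    bandwidth_bytes_per_s = 3e12               # assumed ~3 TB/s HBM bandwidth
    tokens_per_s = bandwidth_bytes_per_s / weight_bytes
    print(f"upper bound: ~{tokens_per_s:.0f} tokens/s")   # ~43 tokens/s

So tens of gigabytes get moved per token of output; batching many requests together is the main way providers amortize that cost.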
Well of course too much is bad for you, that's what "too much" means you blithering twat. If you had too much water it would be bad for you, wouldn't it? "Too much" precisely means that quantity which is excessive, that's what it means. Could you ever say "too much water is good for you"? I mean if it's too much it's too much. Too much of anything is too much. Obviously. Jesus.
People not using AI will 100% get left behind, as surely as those who refused 'cars' or 'computers'.
There is absolutely no doubt; it will be as impossible to avoid as 'plastic' or 'electricity'.
The narrow challenges of 'AI aided development' or 'AI aided creative work' are legitimate - that part is real and fair, but it'd be an over-statement to contemplate 'not using it'.
The cyclists who keep their muscles strong the 'hard way' ... will win the delivery war vs. cars!?
The carpenter who hammers every nail and saws every plank by hand 'the hard way' ... will win over the guys using power saws and nail guns!?
No - AI is changing the landscape.
What is 'hard and easy' are changing.
We won't need some skills, we will need others.
It may be harder to maintain some critical skills, but the upside is obvious.
What is fundamentally missing from this treatise is that 'there is always a hard way'.
Personally - I have never been more 'cognitively overloaded' than I am now. The AI 'amplifies' the depth of complexity one can reach; it's just at 1/2 a layer of abstraction above the code.
Driving a 'race car' at the highest speeds - is as challenging - and perhaps more so - than riding a horse.
The 'instinct to push back' is fair and there are innumerable legit criticisms ...
... but AI is just a new part of the stack and it will be as horizontally applied as 'software or the transistor' - it's not reasonable to think one could or should avoid it entirely.
With AI agents, you're obtaining a mildly lossy perspective on the code itself, whereas if you wrote it by hand, you'd have a more concrete understanding.
This is not too different from an engineering manager directing junior developers.
The stereotype of the engineering manager who forgot how to write a line of code is not wrong.
That's a fair point, but you're i) radically underrepresenting the broader impact of AI, ii) underestimating the power it will have over the short horizon, and iii) missing the fact that 'abstractions are real'.
i - AI is going to interject in so many things and so many ways beyond 'helping you write some modules' so consider that.
ii - AI 2 years ago was useless for code; you can see how well it works now, and this progress is still very real. By this time next year, the power will be more evident, making the position harder to take.
iii - to your point - the real answer is 'abstractions'. We used to write machine code by hand as well, until someone came up with FORTRAN and C etc. Now, people have 'forgotten' how to do that, largely, because we don't need people to do it.
AI is crudely that abstraction. You don't have to know a lot about some things.
Now - it's very fair to highlight the fact that the abstraction isn't very clean (!!!) but that will come over time.
So yes - for writing software today, we're '1/2 a layer abstraction up' - and it's 100% essential to keep an eye on the code, the architecture etc. - it's 'not fully there' but it's better to look at this through the lens of growing capabilities because over the horizon, the argument starts to tilt.
I made very concrete claim: that AI will be universal and widespread - embedded within all of the technology and systems we use.
It's so completely obvious that anyone denying it has to be living in some kind of rhetorical bubble.
It's truly a feature of 'online rhetoric' like HN/Reddit where people can consider these asymptotic postures and take themselves seriously.
We will use AI like you use plastics, cars, electricity, computers etc..
That's it.
I'm sure there were a few people who thought that 'hand writing machine instructions' was the 'one true way' of writing software, but hey, what would we call them in hindsight?
There are so many legitimate ways to be curmudgeon or wary of AI, but this reactionary stuff is anti-reason. It's not an argument, it's guttural, we should just ignore it.
The author makes a great point about learning. Learning is what increases your intelligence, and if we substitute AI lookup for learning, we will literally get dumber. That said, AI models hold a lot of information and can assist in learning. It's a tool; how will people use it? My fear is they won't use it to help them learn.
I'm no AI-hypeman (nor the opposite, I guess), and I agree that replacing critical thinking and writing with AI will only turn out badly in the long term.
But "your dignity"? You mean like "I feel shameful over that people saw that my writing was actually AI?" or something else?
I suppose. So I still don't understand why users of AI should feel like they've lost their dignity. Does it matter where the AI runs, or is using AI just shameful regardless?
And I'm sorry to nitpick - but "People who rely on AI are the ones who will be left behind" is NOT the opposite of "People who don't use AI will be left behind".
I sympathise with the author and the argument. I know the text is a rant. As such, I can understand that the proposed consequences might not make sense. Yet still, there is a fun game you can play, where you replace AI by "chess engine" and you get a text that would be fitting for a late 90s chess grandmaster but seen as totally anachronistic today:
"Chess players who don't use engines will be left behind", they say. I can't emphasize enough how much I hate it when I hear/read shit like that because I'm pretty sure, in fact, that what will happen is the exact opposite.
People who rely on engines are the ones who will be left behind. They'll forget how to think, how to move the pieces, how to solve a simple straightforward mate in 3, how to tell victory from stalemate... they'll forget how to fucking LEARN. I think that's the part that makes me the saddest. What a beautiful thing it is just to play chess.
If you think Deep Blue can do better than you, why would you just let it? Why wouldn't you aim to be better, to learn how to be or do something that a chess computer would never do?
The problem is AI is being pushed and used as the equivalent of using a chess engine to tell you the best move during a match.
Maybe there's a way AI can be used to make developers better but it mostly just seems to be the equivalent of grand masters saying how great vibe playing is because now they can play 1000x more games every day. But don't worry, they're still steering the games.
Sounds like something Magnus Carlsen might say. I hear he's doing quite well out of the game of chess, and pointedly not playing how a computer would play, even though Deep Blue is clearly capable of winning more than he is and from more difficult positions.
Also, the world isn't as trivially solved by computation as a game of chess, so maybe delegating your job or how to be a better human to ChatGPT isn't as much of a winning strategy as getting the computer to suggest chess moves.
Deeper reasoning, longer term planning, and more efficient solutions have always separated amateurs from experts. That experience cannot be applied asynchronously or reduced to supervision. It has to be "in the loop" and there is always a lot of ignored out-of-band information you can't train into a model.
It's always been obvious that LLMs are bullshit. It's blockchain, but far, far worse. The US is investing too much in it, and the collapse has already begun. Half of planned data center builds have been delayed or canceled across the country.
What's with all the anti-AI sentiment here? Is it a bunch of unemployed devs?
I think it's the greatest development in my lifetime, and I don't really worry about my skills atrophying. I worry about getting things done that are valuable.
I thought people here got excited about technology. Now it's just doomer spam. sigh
Well, some folks probably do, which is why they seem "anti-AI" to you (I certainly do care about my skills atrophying, and it's the reason I don't use "AI")
> excited about technology
there is a difference between being excited about technology and falling into marketing traps
I know, it's weird. I'm really excited about it, but somehow there's a bunch of people on here who are largely negative about it all and make weirdly misinformed statements. It's like they haven't really tried it; they think they have, but it's often obvious they tried it early last year or didn't really get it.

I think at minimum you need to vibe code a piece of software of some sort. Don't write a single line of code, just prompt your way to a finished product. This is going to tell you a lot. The first thing you start realizing is that your approach to prompting makes a big difference to what you get. You really need to think about the design/feature set of the program you are making. For me, it just became plainly obvious coding was more of a hindrance, and I was too busy thinking about the application itself, its features, and how they should work together.

Not that it's all happy days; problems still occur at the code level. But once you get to this stage you start having a really clear idea of its strengths and weaknesses. For me, I actually find myself thinking a lot more, having a lot more ideas, experimenting more, and iterating a lot faster.
> I thought people here got excited about technology.
I am passionately excited about technology that serves people, but the current hype around AI is not that.
LLMs are, fundamentally, a super cool development. That we can now generate large regions of text that are statistically likely to be perceived as accurate is phenomenally neat.
But that's all they are: statistical text prediction engines. They do not think or reason; they generate next tokens based on prior context[1]. Unfortunately, because they predict really convincing text, a lot of people are willing to believe them to be more capable than they are. As a consequence, the companies developing these technologies have chosen to lean into rhetoric that exacerbates these beliefs because it means more money for them. The people running these companies fundamentally do not care about the human experience when they could instead care about profits[2].
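For concreteness, "next-token generation" mechanically looks like this toy loop (the scoring function here is a random stand-in; a real model replaces it with a trained neural network):

    import math, random

    def logits(context, vocab):
        # Stand-in for a trained model's scores over the vocabulary.
        return [random.uniform(0.0, 1.0) for _ in vocab]

    def sample_next(context, vocab):
        ls = logits(context, vocab)
        exps = [math.exp(l) for l in ls]
        probs = [e / sum(exps) for e in exps]   # softmax
        return random.choices(vocab, weights=probs)[0]

    vocab = ["the", "cat", "sat", "on", "mat", "."]
    context = ["the"]
    for _ in range(5):
        context.append(sample_next(context, vocab))   # autoregression
    print(" ".join(context))

Everything impressive about the output comes from how well the scoring function has been trained, not from the loop itself.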
AI, as an industry, is mostly hype — and it's a particularly insidious form of hype that preys on people's desire for cool, new things at the expense of our collective long-term well-being. The exponential ramp-up of AI data centers is wreaking havoc on natural ecosystems and local communities; the humans "employed" for RLHF are severely underpaid and exploited, and are mostly powerless to make other choices; the damage to our digital infrastructure and community is visible daily (how many "are you a human?" captchas do you get today versus ten years ago? how many useless pull requests do repositories big and small receive on a daily basis? how many "look I vibe coded a shitty app in three days and it's riddled with bugs, let me post about it" posts do we see now?).

And this is all to say nothing of the rapidity with which huge swaths of people have just decided they don't care about other people fundamentally; they'd rather AI-generate some garbage company logo than employ a graphic designer to do a better job; they'd rather AI-generate copytext rather than hire an editor; they'd rather reach for the cheap, built-from-the-labor-of-others-without-respect-to-them tool that outsources all creativity and effort and gives them an immediately available "eh, it's good enough" solution. Before long, we will be inundated with "good enough", and we will forget what it was like to have good.
I'm excited about technology. I am not excited about the current incarnation of this technology.
[1] I am fundamentally not interested in sophistic arguments that "this is how humans work!" We don't know how humans work, so I make a choice to maintain a belief — based on my own experiences and learning — that LLMs do not accurately reflect the workings of a human brain.
> But that's all they are: statistical text prediction engines. They do not think or reason; they generate next tokens based on prior context[1]. Unfortunately, because they predict really convincing text, a lot of people are willing to believe them to be more capable than they are. As a consequence, the companies developing these technologies have chosen to lean into rhetoric that exacerbates these beliefs because it means more money for them. The people running these companies fundamentally do not care about the human experience when they could instead care about profits[2].
The reductionist, mechanical explanation of what AIs do is not the full picture, and it almost belies first-hand experience with frontier models. AIs know more and can reason better than most humans in increasingly many contexts.
Yes, this means they produce "convincing text." But there's more than one way LLM output can be convincing. The easiest way isn't with rhetorical tricks or sycophancy—it's arguing compellingly, solving difficult problems, and producing good code. The frontier models have all improved dramatically in these respects over the past 1.5 years.
I find that the people who go "it's just statistical", "it's just picking the next word" have probably not really understood what the actual tool can do. Ultimately it's arguable whether humans are just statistical also; our brains are pattern matching machines. It's just not sensible to boil complex behavior down to a fundamental building block. It's not hype (though there is hype): vast amounts of people are getting real value out of it. I've been coding 40+ years; the utility of AI tools is super obvious to me.
I don't think anywhere I said they weren't useful tools; I said that a lot of people fail to recognize the limitations, and that even aside from limitations/utility there is a discussion to be had about broader impact.
LLMs are not hype, but "AI" is. AI is a marketing term, and always has been.
>What's with all the anti-AI sentiment here? Is it a bunch of unemployed devs?
So what? If you aren't part of inflating the hype beast, you're a victim of it. Eventually no one will be left to hype it, because we'll all have lost the battle.
> "People who don't use AI will be left behind", they say. I can't emphasize enough how much I hate it when I hear/read shit like that because I'm pretty sure, in fact, that what will happen is the exact opposite.
> [...] they'll forget how to fucking LEARN. I think that's the part that makes me the saddest. What a beautiful thing it is just to learn stuff.
I love learning. My life of self-education is so much richer with LLMs to help me.
There are dozens of other arguments for not engaging with AI. If your reason is "I love learning" I recommend at least dipping your toes in before you declare that AI is a hindrance, not a help, to people who love to learn new things.
Seriously, these are an autodidact's dream. I've been having an absolute blast learning about stuff from government structures and the different approaches to fusion power to what types of electrical conduit are used for what applications and appropriate connectors, heat pump sizing, etc. It's so ridiculously empowering. All this info that you had to use an enormous amount of time synthesizing and studying is now available at everyone's fingertips. I think we're going to see an explosion in productivity on all sorts of fronts, not just writing code.
This discussion is so stupid. No one who isn't a moron is offloading all work and thought to LLMs. No one who isn't a moron is seriously afraid of their thinking and learning skills "atrophying", whatever tf that means.
It's clear that LLMs are unique in that you actually do have the capability to turn your brain off and blindly trust whatever they do for you. But it should be equally clear that that's a stupid approach. People will still use their minds, and this use gets empowered with proper use of LLMs. It's that simple. ffs, we take the fact that they pass the Turing Test routinely for granted now. Let's not forget that this technology is legitimately incredible. It stands to reason that you are seriously handicapping yourself by not trying to use it.
Maybe it's a generational thing, but I'm old enough to remember when personal and office computers were really hitting the mainstream in the late 70s and 80s, and the messaging was a lot friendlier: how they would save you time, help you, etc. Even though, practically speaking, they reduced a lot of manual jobs.
This AI/LLM push from leadership is so damn tone deaf, like "you better do this", "ai layoffs", etc. I feel like they are jumping way too hard and fast into the "post-employee" thinking and deserve every bit of scorn from laymen.
I just don't get even the presumed risk here. How can something be so revolutionary in its capacity to increase productivity but still so esoteric or specialized that there is a risk of being "left behind"? Like all these things people talk about are, at the end of the day, products that want you to use them; they aren't gonna make it hard for someone to onboard in the future. Sure if all coding became ecommerce overnight and I'd never "learned" Salesforce, there might be brief friction there, but I could still just, like, learn Salesforce. It's gonna be a lot easier than learning good software engineering in general.
Why spend your life "learning" something whose whole deal is about not needing to learn? Even if you gamble incorrectly, it's not going to be hard to get into!
Like, what, if I don't start practicing now I am not going to be able to... express concepts with natural language as well?
In the late 1950s, COBOL was introduced with the idea that programming could be written almost as if one were speaking English. But eventually people realized that writing COBOL well, in a style that resembles English conversation, was itself difficult.
Today, we are hearing a similar claim: “If you can describe the program in natural language, programming is basically finished.” But the industry is now discovering that describing the program well is the hard part.
This is also why ideas like harness engineering are appearing: methods for controlling the range of outputs, from poor to excellent, that can emerge from minimal input.
And honestly, I do not think the “vibe coding” phenomenon is entirely bad. The essence of programming is automation. Many people were previously limited because they did not know programming languages. Now, through AI, they can express themselves and turn that expression into working apps. Seeing this, I understand how deeply people have wanted to create.
I write industrial software that runs in large factory environments, and because of the nature of my work, it is difficult for me to use AI directly. These environments are usually closed networks, so AI does not really benefit my own production work. Even so, I still defend AI, because it functions as a new kind of voice that allows more people to express themselves.
Of course, capitalism distorts this. Many people use AI to chase money and capital, and as a result, a lot of low-quality apps are being produced. But on the other hand, what is wrong with the motivation of wanting to make something one wants to make?
I have been studying the history of programming, and I like Dijkstra’s famous line:
> Computer science is no more about computers than astronomy is about telescopes.
To me, this means that computing is fundamentally about automation.
AI has existed as a research topic almost since the birth of computers. We tend to think of it as recent, but it is a field with a history of more than sixty years. Starting from early work such as the Perceptron, there have always been people claiming that AI was a fraud or an illusion.
But now a new seed has germinated. The amount of complexity that a single human can handle has increased. Historically, the techniques for managing that complexity were things like programming patterns and software architecture. And even people who strongly argued for software architecture also warned that if architecture becomes detached from code, then something has gone wrong.
Memes always damage the essence of ideas. As information circulates, it degrades, and eventually the original meaning disappears.
The Dunning-Kruger effect is a good example. The original paper was not simply saying, “ignorant people show off, while knowledgeable people do not.” It was more about how both less competent and more competent people can have difficulty accurately assessing their own metacognition. But the idea became distorted.
The same thing happens to many famous ideas in programming. Knuth’s statement about premature optimization is also constantly distorted as it circulates.
In that situation, can we really say it is always bad to step away from online communities and learn through AI while cross-checking against books?
When I see people making extreme claims about this, I sometimes find it absurd. Of course, many people may flag or downvote my comment. But this is how I see it.
Thank ghod I'm retiring in six months.
I'm very thankful I came of age during the golden age of personal computing. I was able to own my own computer(s) and earn a living writing software on them and for them. Fifty years was a good run, and I consider myself lucky to have participated in it.
IMO we've gone full circle: from dumb terminals chained to mainframes and the whimsy of someone else's rules, restrictions, and rent-seeking; to my own bought-and-paid-for computer sitting on my desk that did exactly what I told it to do, using software that never changed unless I wanted it to change; and now we're back to dumb terminals (browsers) that talk to mainframes (the cloud) that not only harvest and sell my personal information to the highest bidder but constantly change the rules and restrictions on my software, have gone back to renting me the software, and push changes that I never asked for and never wanted in the first place.
I will never use spicy autocomplete for anything, and I find it depressing that people are being forced to use it in order to keep their job. I see a very dark future for computing if real skills are all replaced with garbage being vomited out by rules engines that harvested their "guess the next word" results from today's internet.
Going full circle is what we do; it's everywhere throughout human history. Actually, one could argue it's how life works. Nature has seasons to help life grow and be balanced. We're only starting to understand how this affects us in the larger scheme of things. Who knows, maybe we will wipe ourselves to dust and be discovered by the next iteration, until we reach v1.0.0.
I mean, that works for you since you're retiring. But for people still working in the industry, you adapt or die. As it's always been.
The fact of the matter is, a person working with a bunch of agents is a lot more productive than just a person. It makes research faster. It makes experimentation faster. It makes output cleaner. And this is true across many disciplines, not just tech.
Also, it is a skill. Yes, anyone can chat with an LLM. But understanding the optimal workflow for what to delegate and what to do yourself is difficult. Understanding the need for precision in the language used, and learning how to elegantly phrase things that were previously just abstract thoughts, is absolutely a talent that can be refined.
If I had to guess, I'd say we'll probably see major breakthroughs across multiple disciplines within the next decade, largely because researchers and engineers can cover much more ground individually now, freed from the slow-moving coordination mechanisms that team dynamics require. Pretty good for "spicy autocomplete", as you put it.
Take me with you, please.
>my generation was the best, the smartest, most hard working with computers. We used to do "real programing", these youngsters aren't gonna achieve anything with their "spicy autocomplete"
If there's an ethos that emphasizes the boomer mindset it's this one.
>and I find it depressing that people are being forced to use it in order to keep their job
You know what, when numerical control systems started arriving in machine shops in the 1960s, that's exactly what machinists were saying. Are CNC operators today worse off than machinists in the 1950s?
I don't blame you for not seeing history repeating itself; just because you're old doesn't mean you're also wise (no offense). We'll adapt and survive, like we always do.
It's not about us older folk, but the computing environment itself. We're heading into a world of centralized control where your personal computer is mostly a "thin client" for a bunch of online services. Combined with omnipresent age/identity verification and you will basically need permission from someone to do anything interesting with a computer. Especially on the internet. This is in contrast to the 1990-2010 era where software was generally "buy once use forever" (plus kept working regardless of what politician you support online) and general purpose, open hardware was the norm. You could hook your homemade server up to the internet with a minimum of fuss and start running a service or forum or website or whatever.
There are plenty of bright kids out there, but they're going to be operating from a position of dependence on the OpenAIs, Googles, and Apples of the world if they want to ship a product.
>It's not about us older folk, but the computing environment itself.
Whether you like it or not, the computing environment of today is a product of the labor and financial participation of the older folk of the past. Facebook and Microsoft weren't built by Zoomers. Everyone contributed to the current state of things, either directly through thought labor and finance, or indirectly by just using the products.
>We're heading into a world of centralized control where your personal computer is mostly a "thin client" for a bunch of online services
And who built those online services?
People make it sound like Azure, AWS, Facebook, X, etc were just snapped into existence one day by Zuckerberg, Bezos, and Musk, and aren't decades of labor by hundreds of thousands of workers who voluntarily did this in exchange for cash.
> This is in contrast to the 1990-2010 era where software was generally "buy once use forever" and general purpose, open hardware was the norm. You could hook your homemade server up to the internet with a minimum of fuss and start running a service or forum or website or whatever.
I know, but how does reminiscing help here? You can't put the toothpaste back in the tube, the same way you can't turn housing affordability back to how it was in 1995, or bring back those lucrative union manufacturing jobs that could support a family just from bolting bumpers onto a Chevy on an assembly line.
Those are all one-time things of the past, never to return in the same form. You have to work with the cards you've been dealt today, not moan about how much better the past was, since that doesn't help anyone.
That is not what the GP said at all. I am definitely not a Boomer, and I fully agree with them. They are lamenting the loss of control over your own general purpose computing system, and how our capitalistic society run by oligarchs is enforcing that through economic pressures. There is nothing in their comment that praises their hard work, intelligence, or claims younger people are incapable.
> our capitalistic society run by oligarchs is enforcing that through economic pressures.
Which the people retiring soon, like the one I'm replying to, helped build with their labor and funded with their investments to get rich and enjoy their 401ks, leaving the current generations holding the bag.
Nobody's innocent here. When Zuckerberg brought out his checkbook to poach engineers to build the Spybook 9000 social network, everyone flocked there without thinking, "hey, are we fucking up the future of the world?".
When people here were flaunting Crypto as the second coming of Jesus, they weren't thinking "hey, are we maybe helping people get scammed?"
Every past generation of workers bears its own share of guilt for how we got to the present situation.
Oh, I fully agree with you that the Boomers completely fucked the world to advantage themselves and then pulled the ladder up for their kids and grandkids. But I don't think it's fair or reasonable to reply to a single individual by trying to foist the responsibility of an entire generation onto their shoulders, especially when they were not expressing any of the words that you were trying to put into their mouth.
It's a bit arrogant and borderline Luddite to suggest that 'your era was legitimate' and that these new things which you don't understand are somehow 'lesser' or illegitimate.
In the long arc of history, I'm doubtful we'll see 'the last 50 years' as 'the Golden Age' - that's just a personal, contemporary romanticization. More than likely, the advent of computers -> web -> AI etc. will be one block of the 'informational industrial revolution'.
The people who made the ostensible 'Golden Era' were pioneers, just as those breaking new ground are pioneers today. It's honestly 'depressing' that people who consider themselves 'Engineers' wouldn't see that as clear as day and be hopeful for the future on some level.
AI is a very real phenomenon, obviously vastly over-hyped in many ways, and it doesn't feel nice to get caught up in a tectonic shift against one's will, but it is bringing about legitimate progress in every sense that the Engineers and Creators before us did.
In the exact same spirit as DaVinci or Babbage.
If one wants to keep a horse in the stable, or a typewriter around for posterity or any other reason that's fine, but not under the notion that somehow they are better or more useful.
The Luddites were rational. It's immature to use that word as an insult.
Nothing the parent said was arrogant.
> Why wouldn't you aim to be better, to learn how to be or do something that AI would never?
Because it doesn't make sense to be better than a tool. A woodworker could use a hand saw and take an hour to cut wood... Or he could use a buzz saw and cut it in a few minutes. Is the woodworker any less of a woodworker when he uses a buzz saw instead of a hand saw?
Outsourcing thinking to AI is not healthy, and certainly if everyone used AI like this we're doomed.
I still think it's true that those who don't use AI will be left behind, but it's a bit tautological, because the thing they're left behind on is AI. A lot of the biggest companies on earth are putting a lot of money into AI, but if you're OK with working for a company that is not putting all its money into AI, that's perfectly fine.
Just like blockchain was everywhere ten years ago and now is just kinda _there_. If you got in before the hype, you could have made a lot of money. If you didn't, you were left behind. I was left behind, and I'm OK with that.
Any engineer (any person, actually) can “learn to use AI” in a couple of days. It's not rocket science; there's no chance of being left behind. If you haven't used LLMs at all, a weekend would be enough to be on par with everyone else in the industry.
The better you are at architecting, or even at directing a junior developer, the better your output too. Don't let AI make decisions; it's supposed to take your decisions and turn them into code. When AI makes decisions, well, the unexpected outcome is always on you.
> Don't let AI make decisions; it's supposed to take your decisions and turn them into code.
I let the AI make decisions all the time. I often approve them, and I sometimes revert them. Most of the time they’re really good decisions based on my initial intent, but followed by analysis I didn’t make but agree with.
I always found it easier to write code myself than to direct a junior developer.
The level of teaching involved would always mean the overall velocity of work slowed down.
Some people say you can throw them the drudge work, but I find that if you're doing coding right (e.g. you don't let your code base degenerate into a mess of boilerplate), there is barely any drudge work to do.
> I always found it easier to write code myself than to direct a junior developer.
Me, too. But that doesn't mean I'm a great developer, just a shitty manager.
A weekend is enough to get going, but not nearly enough to 'be on par' with everyone else.
That said - what we have learned in the last year could be compressed quite a lot; there are a lot of steps we could skip, and 'learn by failure' that need not be repeated.
It takes a while to get the subtleties of it; it's among the most highly nuanced things we've ever encountered.
Just learn, sure. But the difference in my efficiency between day 2 and month 6 of using it is significant. And I feel I am barely scratching the surface of it.
> a weekend would be enough to be on par with everyone else in the industry
I kind of agree in general that it is a learned skill, but considering how unclear people generally are when they communicate, I'm guessing it'll take longer than a weekend to be able to catch up, especially catch up to people who've been working on precise and careful communication and language for years already in a professional environment.
/thread
If one has been reading a wide variety of books/papers/articles/whatever their whole life, and one has been mindful of how to communicate with the "written word" as it were, it takes about 3 hours to be wildly effective with this technology. I think it took longer to learn google-fu than it did to learn how to use this technology effectively.
The statement is absurd because the skill curve for AI tooling is so shallow you can mess around for a day or two and get "caught up" with the zeitgeist. And what you need to know to get started is actually far less these days than it was 1.5 years ago, thanks to all the product refinement that took place in the space.
The only real risk is that today there's an expectation from employers that you've got some AI experience under your belt you can articulate. But you can get that experience today.
You're discounting the "being able to write properly and put ideas into intelligible text" skill piece here.
Most people who have been programming for a while should have those skills. If they don't, then learning AI is not the issue; communication is.
Most good programmers are good at writing. If you're capable of simultaneously writing instructions for a dumb abstract machine and having those instructions be understandable to humans, you're clearly good at expressing at least technical ideas.
Yeah, never had a problem with explaining to AI what I want from it. That doesn't mean AI always follows what I tell it to do ...
6-12 months ago I felt like I was constantly behind the curve with all the different things people were doing to get more out of their Claude Code. As the year has progressed, though, all of those features keep making their way into vanilla Claude Code at a faster and faster rate. Now someone working on the bleeding edge is using things that I'll be using, without having to think about them, a month from now. It has really reduced my anxiety about being left behind.
That's the thing: any "advancement" you might discover will be integrated into the mainstream tools soon enough. In fact, I'd say you probably shouldn't even learn these tricks before they're integrated. That helps you filter through all the noise and avoid wasting time on learning something that isn't going to take off.
Black-and-white thinking like this is not healthy.
You can still do creative thinking while using AI as a powerful tool at your disposal.
Some mathematicians like Terence Tao are comfortable doing this, for example.
I feel like I use AI this way, but a majority of my peers lean too far into it. There used to be the saying "we don't think, we google", and I see the same with AI usage. As soon as a roadblock appears, the situation is pasted into GPT without further engaging with it; then they pick up the phone and open an app while GPT does its thing 0_o
I have a coworker like that too, my pet theory is that they're not passionate about their job to begin with. It's just something that can pay their bills.
While waiting for Claude to finish, we talked about our hobbies outside of work, and the same guy went into deep detail on how steroids and the HPG axis work, and even gave me a spreadsheet with several NCBI PubMed links on the topic.
I think we are all naturally more creative and opinionated about things we are interested in.
An orthogonal observation: Bearblog seems to have become an anti-AI echo chamber. Their community responds very positively to posts exactly like this one [1] [2] [3]
I think it's just important context to keep in mind that these sorts of takes typically top https://bearblog.dev/discover/ in the same way that certain types of posts are designed to rank well here. I considered migrating my blog there earlier this year and ended up deciding that, while I loved the product, the community was not healthy.
[1] https://forkingmad.blog/ai-summary-blog-post/
[2] https://blog.spu.io/you-dont-want-to-make-things-you-want-to...
[3] https://blog.happyfellow.dev/simulacrum-of-knowledge-work/
People also used to say that Google or calculators will make you dumber. Neither happened. Won't happen with this either.
People are worse at mental arithmetic than they were in the recent past, so it's not clear that they aren't "dumber" in the sense people meant at the time.
And did our thinking about the importance of being good at arithmetic change in response? I think so.
We also used to be much better at remembering things when we relied on oral histories; our memory skills have degraded quite a bit. And there's a quote from Socrates criticizing writing as a crutch that degrades our memory (https://www.perseus.tufts.edu/hopper/text?doc=Perseus:text:1... , the last bit). Over time, we've just moved to valuing other things more.
Well, as with anything, practice is key. When I was in school, I was in a math competition where you had to do everything in your head. There was no scratch paper, you could not modify your answer once written, and erasing was obviously not allowed either. I wasn't the greatest at it, but I didn't suck at it either. That was decades ago, and I no longer do math in my head that way. What I used to do in seconds now takes a couple of seconds just to work out what needs to be done, plus the time to come up with the result.
Sounds like you haven't used it much. It starts small with you forgetting the arcane params to commonly used tools that you don't need to type anymore. Where it will stop nobody knows.
Well, I forget arcane params all the time before AIs too. I rely on terminal history search and google.
Students score lower on standardized tests in the 2020s than they did in the 1990s, so your stance feels misguided. Although I don't think Google and calculators are the main culprits, I do think it's due to the larger technology/internet landscape.
> Although I don't think Google and calculators are the main culprits, I do think it's due to the larger technology/internet landscape.
That's extremely speculative, especially given there was a major event in 2020 which massively disrupted education worldwide.
I once worked for a guy who typed 7 + 4 into a calculator, after freezing for 1.5 secs trying to work it out in his head. It was in a "stressful" situation (not something extreme, we just were in a hurry), and I'm sure the guy could add those numbers in his head, generally... he owns his own business, after all. It took so much out of me to not move a face muscle.
it clearly did make "people" dumber because now "people" believe in AI ;)
Once you write something and publish it, it's just out there; it doesn't really get healthy or unhealthy. I do not think all writing is meant to be, or needs to be, a representation of someone's mind and its health. We write to have the opinion exist outside of ourselves. Why would we even read things if what we read didn't have strong beliefs or opinions? It sounds so boring!
Well I'm not saying that the blog writer shouldn't have written the article.
I've read the article and to me it reads like a very angry rant, which is why I commented with something akin to "bro calm down"
> You can still do creative thinking while using AI as a powerful tool at your disposal.
It remains the case that AI _erodes_ your ability to do that.
So, eventually, after a few years, no, you can't.
Edit: meanwhile you're making yourself disposable. So, have fun with that.
Most people aren't anywhere close to Terence Tao on the intelligence scale. Even most HN commenters aren't that close to Terence Tao.
I don't think balancing AI use with creativity and thought is a matter of IQ. It comes down to how you use the tool.
In my practice, I've found AI more useful in adversarial mode ("criticize this concept", "find a possible bug in this code", "challenge me", "quiz me on the knowledge"), because the knowledge found adds to your own skills.
You don't need super big brain IQ to be creative and expressive, all you need is simply a strong opinion on something, and you don't let AI (or other people) dictate otherwise.
Now the skill issue lies in whether your opinion is a good idea or not lol.
Some people who don't use AI will be left behind - those who work on things where LLMs are capable of a substantial amount of the tasks will be left behind if they just refuse to leverage the superhuman properties that LLMs have.
I don't think it's hard to catch up if such a person changes their mind, though.
Some people who do use AI will also be left behind - those who use it to replace their skills without developing new ones, and those who use it to do the same or worse work more cheaply. They will be left behind in a competitive world where others work out how to use it to do more or better work with no reduction in effort.
>those who work on things where LLMs are capable of a substantial amount of the tasks will be left behind
It sounds more like there is no chance that most of those people will stay employed, regardless of how "ahead" they try to stay.
If LLMs mean I never have to open a PowerPoint from a client to pull out their "data" again, that'd be great. I gain nothing from being a manual data entry monkey for people who don't understand the concept that presentation-ready output formats are not data transmission formats.
But if I'm to be expected to employ vibecoding in my day to day job as a software engineer, I'll dismantle my house and go live off grid somewhere in Alaska. I have enough power tools and knowledge to do it. Probably massively healthier for my kids.
For now at least, I think it really depends on what type of coding that is.
I don't have any particular predictions going forward about it, but something I think about right now is, do I want to focus my time where the interesting decisions, the valuable contributions I make, are product-level thinking about what to build and what problems to solve? Or do I want to focus my time where the interesting decisions are technical ones, fully wrapping my head around a technical problem and coming up with a solution?
I do think both options are still available, and personally I love them both. But I don't know what types of coding would involve significant amounts of both activities anymore.
There's still a lot of room for both, because they are just shifts of perspective around the same thing: solving a problem for someone.
Product is when you see things as the one who has the problem and design the solution in a way that is usable. Technical is when you shift to how the solution can be implemented, and then balance tradeoffs (mostly costs in time and monetary resources).
While the code is valuable (it is the solution), building it is quite easy once you have good knowledge of both sides.
The issue with AI is not in its capabilities, but in people rushing to accept the first version when there are still unknowns in the project. And then changes cost almost as much as redoing the project properly.
If your job can't easily be done by AI, then you can pick it up and get "up to speed" any time you like.
If it can be done by AI, then you have no hope of competing with the quantity of AI output that anyone can trigger in very little time.
As my job seems pretty secure, I can ignore AI for as long as I like.
I have a feeling that a big risk of using AI all the time is that our own neurological capacity starts to dwindle.
Just as many people leading sedentary lifestyles have to make a deliberate effort to exercise, because inactivity is really bad for our bodies, I think we're going to realise that a similar process is necessary for our minds.
You really want to be spending a bit of time every day operating at your cognitive limits - trying to fully engage your System 2 - if you want to avoid brain atrophy. Coding used to kind of give you this exercise for free, but you can go really far with just your System 1 nowadays - literally get things done while scrolling Reddit.
I'm trying to allocate 30-60 minutes a day to doing something difficult, like writing code by hand for an unfamiliar problem or reading and summarising difficult papers without AI.
I agree with OP that it's the other way around: while some will gradually lose basic skills by relying more and more on AI for productivity's sake and out of laziness, the value of those "people who don't use AI" will go up, because they chose to simply keep "learning the hard way".
People who can only use AI will be left behind. It is easy to shut off your brain when using AI and then get overwhelmed by the amount of code it produces. Worse, though, is when people replace programming experience with AI. I have seen a lot of really bad AI code. I can spot and repair it. Others cannot. And that is a problem. And I am not talking about purist principles; I am talking about bad, unoptimized code that I can spot with just one look.
It is a tool just like syntax highlighting, code completion and refactoring tools before it. You need to know how to use them, where their usefulness ends and you should probably have an idea how to do it yourself without the tool. It is okay if you will be less efficient, but it's bad if you just can't.
I find that good people get better with AI, but I'm not sure more average people really do.
I've seen some produce stuff without really understanding it, barely review anything, and pretty much suffer from imposter syndrome.
I think what we're seeing is it's just an amplification of whatever intrinsic motivations people have; the whole mirror to the self thing, on steroids.
Obviously people who are motivated by curiosity will have a different view, and those who value creativity will end up thinking otherwise.
Also, it's basically impossible to separate the technical capabilities from the big-money fascists pushing it.
Weird fallacy that if you use a tool you can't use your brain anymore
This is not what a fallacy is.
I think it's pretty obvious that people who offload their thinking to an LLM will eventually get used to not thinking hard about things. Anything you do that you stop doing regularly eventually atrophies. Thinking hard about things and performing on work is as much a skill as it is an innate property of being smart, as evidenced by the many "prodigy" sort of folk who languish in obscurity later in life.
- https://publichealthpolicyjournal.com/mit-study-finds-artifi...
If you use a tool that replaces your brain, then you won't use your brain.
A lot of people do outsource their thinking to AI, so it's not that weird to bring up. That's effectively how many AI companies are marketing the technology.
But it's definitely possible to use AI without letting it think for you. OP should at least acknowledge that.
Those who dogmatically refuse AI outright may be disadvantaged for some things in the future. But it's also probably hyperbolic to say they will be "left behind".
If that's the case, so be it.
This energy would be better directed anywhere else.
The author chose to take offense at a vaguely presented false dichotomy that serves no purpose other than dividing and poorly labeling everyone in an area where much nuance applies.
I think this is perhaps a side effect of consuming too much content and feeling overwhelmed with it.
Engaging with stuff like this only amplifies its effects, how about do anything else instead? Maybe learn something new, like how to channel your anger.
My take on it is I would rather code than ask the machine to code. It's frustrating though how many open source projects now are overrun with massive PRs and nobody to code review them. This feels like fallout from too much reliance on AI.
This is an abacus-to-calculator situation. Some people still use an abacus. The vast majority do not. It's wild living through one of these technological transitions. People just eschew all common sense and critical thinking as it relates to the adoption of new technologies.
If it's good, lots of people will use it commercially. If it's generationally good, everybody will use it commercially because commercial use is about competition. It either gets banned outright, like steroids, or — if it doesn't get banned — those who use it will have a clear advantage and that will lead to a very small number of people who don't use it (in business).
This is not really something that opinions are required for because if you think LLMs are going away, your opinion is historically incorrect. Things that reduce toil and increase output do not go away.
I think of AI as just another abstraction layer, somewhat similar to what high-level programming languages provide compared to writing machine code. Deciding how deep to understand the abstraction layers is a choice the user has to make, which could be optional if they don't really need to.
Nevertheless, the responsibility of whatever a human produces with AI is still on the human.
With that said, knowing how to use AI the way it's right for you can give you a huge advantage. You don't have to though. And there is not a standard way of doing it.
What I recommend to everyone is to give it a try and see if and how it could help you. In the end, you have to make the decision based on your constraints, what you're aiming for, and what you can sacrifice, including but not limited to speed, accuracy, learning, etc.
If the crowd is running towards a cliff, I'd rather be left behind.
I think not using AI is a manifestation of one's inability or unwillingness to LEARN. To your point, if you can't learn, you will fall behind.
Or a manifestation of having risk aversion that isn't easily swayed by peer pressure...
Everyone seems to know you can't trust the AI output, and that it is on you to review it. But whenever I talk to people who claim to be getting big benefits, there is always a moment they reveal that they are not really reviewing the output. They are just going with it.
Similarly, so many who claim to use AI as a search index eventually seem to just trust the summary instead of checking the references to figure out whether it is regurgitating fact or fiction.
I don't really know if these users always had low quality standards or low diligence, or whether the tool usage degrades them. But I see the correlation among the friends-of-friends network I can observe.
On the contrary, using AI is like outsourcing your DIY to a professional joiner.
Sure, he'll get it done twice as fast and you might notice some tricks as you look over his shoulder. But when you need a second door hung, you'll either have to start learning from scratch or call him again.
Yes, but it goes both ways. Using AI can be a great way to be productive while purposefully NOT learning how the sausage is made—say, boilerplate code in some devops system that you don't care about—allowing your attention to be focused on the part of the stack you actually care about.
Trading practice of primary skills for indirect skills like AI is like a writer deciding they should stop writing directly and get really good at Microsoft Word.
That's why I write my own assembly language. Compilers just atrophy your skills!
When on the road to hell, it's OK to be left behind.
But what if all the good jobs are only in hell?
A job from hell is a bad job by definition.
A good job is one that brings you joy and improves your creativity; by definition it can't be in hell. If you mean well paying, that's a different thing entirely. Ditch the fancy car and adjust your lifestyle.
I have a fancy car? News to me. I'm just trying to pay my bills and live a sensible and reasonably comfortable life. These days that requires a lot of money.
Just like every single trend that came before, they said you would be left behind:
If you didn't embrace:
OOP
Test-driven development
Behavior-driven development
Event-driven development
Pants-on-head-driven development
SOLID
DRY
Cloud first
Virtualize everything
Microservices
Serverless
Everything JS
Everything TS
Everything Microsoft
This will never stop.
You either let someone sit between you and what you want to accomplish, or you will be left behind.
Think about the most mediocre person you know. Now remember: 50% of the people around you are dumber than that.
AI was always slowing me down, only recently it has become somewhat useful.
> People who rely on AI are the ones who will be left behind. They'll forget how to think, how to write, how to do a simple reliable search, how to tell fact from fiction... they'll forget how to fucking LEARN. I think that's the part that makes me the saddest. What a beautiful thing it is just to learn stuff.
We could replace "AI" here with many different terms and the argument would remain unchanged.
Sadly I think the author is wrong though I agree with the spirit of what they're saying.
Economic pressures will force workers to use AI as part of their work. Categorically refusing to use AI under any circumstances will guarantee being left behind.
Like others are pointing out, if we define "using AI" as "outsourcing all your thinking to AI" then yeah, those people will perhaps not do well...or will they?
Most people consider quality work a hassle. It takes a long time and it gets in the way of shipping. I've worked with quite a few people who were lousy engineers but boy could they ship. They were universally beloved by the business and tolerated/loathed by the engineering side. But they're the ones who get promoted and get ahead.
Life is hard, but at least on the other hand, it's also unfair.
This is a HN comment reply masquerading as a novel submission.
Friendly reminder that we're still in the hype phase, even if it's the late stages.
To me the idea that a GPU which costs as much as a car must read its entire VRAM just to output a word sounds incredibly wasteful. I'm exaggerating here, but it is literally reading gigabytes of data and processing it to produce relatively little information.
Some data is truly worth the effort, but the majority won't be able to afford this long term - especially when those who capture the market increase prices.
That reminds me of an old Fry and Laurie sketch.
Well of course too much is bad for you, that's what "too much" means you blithering twat. If you had too much water it would be bad for you, wouldn't it? "Too much" precisely means that quantity which is excessive, that's what it means. Could you ever say "too much water is good for you"? I mean if it's too much it's too much. Too much of anything is too much. Obviously. Jesus.
Doing fine so far, thanks!
People not using AI will 100% get left behind, as surely as those who refused 'cars' or 'computers'.
There is absolutely no doubt, and it will be as impossible to avoid as 'plastic' or 'electricity'.
The narrow challenges of 'AI aided development' or 'AI aided creative work' are legitimate - that part is real and fair - but it'd be an overstatement to contemplate 'not using it'.
The cyclists who keep their muscles strong the 'hard way' ... will win the delivery war vs. cars!?
The carpenter who hammers every nail and saws every plank by hand 'the hard way' ... will win over the guys using power saws and nail guns!?
No - AI is changing the landscape.
What is 'hard and easy' are changing.
We won't need some skills, we will need others.
It may be harder to maintain some critical skills, but the upside is obvious.
What is fundamentally missing from this treatise is that 'there is always a hard way'.
Personally - I have never been more 'cognitively overloaded' than I am now. The AI 'amplifies' the depth of complexity one can reach; it's just at 1/2 a layer of abstraction above the code.
Driving a 'race car' at the highest speeds - is as challenging - and perhaps more so - than riding a horse.
The 'instinct to push back' is fair and there are innumerable legit criticisms ...
... but AI is just a new part of the stack and it will be as horizontally applied as 'software or the transistor' - it's not reasonable to think one could or should avoid it entirely.
This is definitely not true.
With AI agents, you're getting a mildly lossy perspective on the code itself, whereas if you wrote it by hand, you'd have a more concrete understanding.
This is not too different from an engineering manager directing junior developers.
The stereotype of the engineering manager who forgot to write a line of code is not wrong.
That's a fair point, but you're i) radically underrepresenting the broader impact of AI, ii) underestimating the power it will have over the short horizon, and iii) missing the fact that 'abstractions are real'.
i - AI is going to insert itself into so many things, in so many ways beyond 'helping you write some modules', so consider that.
ii - AI 2 years ago was useless for code; you can see how well it works now, and this progress is still very real. By this time next year, the power will be more evident, making the position harder to take.
iii - to your point - the real answer is 'abstractions'. We used to write machine code by hand as well, until someone came up with FORTRAN and C etc. Now, people have 'forgotten' how to do that, largely, because we don't need people to do it.
AI is crudely that abstraction. You don't have to know a lot about some things.
Now - it's very fair to highlight the fact that the abstraction isn't very clean (!!!) but that will come over time.
So yes - for writing software today, we're '1/2 a layer of abstraction up' - and it's 100% essential to keep an eye on the code, the architecture etc. It's 'not fully there', but it's better to look at this through the lens of growing capabilities, because over the horizon the argument starts to tilt.
All those words and not one concrete claim made. In other words, FUD - the whole point of the article.
I made a very concrete claim: that AI will be universal and widespread - embedded within all of the technology and systems we use.
It's so completely obvious that anyone denying it has to be living in some kind of rhetorical bubble.
It's truly a feature of 'online rhetoric' like HN/Reddit where people can consider these asymptotic postures and take themselves seriously.
We will use AI like you use plastics, cars, electricity, computers etc..
That's it.
I'm sure there were a few people who thought that 'hand writing machine instructions' was the 'one true way' of writing software, but hey, what would we call them in hindsight?
There are so many legitimate ways to be curmudgeon or wary of AI, but this reactionary stuff is anti-reason. It's not an argument, it's guttural, we should just ignore it.
The author makes a great point about learning. Learning is what increases your intelligence, and if we substitute AI lookup for learning, we will literally get dumber. That said, AI models hold a lot of information and can assist in learning. It's a tool; how will people use it? My fear is they won't use it to help learn.
Yes, I will be left behind. Left behind with my copyrights,
https://news.ycombinator.com/item?id=47932937
Left behind with my money,
https://news.ycombinator.com/item?id=47933355
Left behind with my intact data,
https://news.ycombinator.com/item?id=47911524
Oh, the horror. I am being left behind.
Don't forget your critical thinking skills, the unique voice you painstakingly developed over your entire life, and your dignity.
I'm no AI hype-man (nor the opposite, I guess), and I agree that replacing critical thinking and writing with AI will only turn out badly in the long term.
But "your dignity"? You mean like "I feel ashamed that people saw my writing was actually AI", or something else?
> You mean like "I feel ashamed that people saw my writing was actually AI", or something else?
Well, if you don't have dignity in the first place, it's hard to have any shame over losing it.
I suppose. But I still don't understand why users of AI should feel like they've lost their dignity. Does it matter where the AI runs, or is using AI just shameful regardless?
And your ability to stop screwing around, sit down, and actually get into something in depth instead of half-assing everything.
Both of these can be true.
And I'm sorry to nitpick - but "People who rely on AI are the ones who will be left behind" is NOT the opposite of "People who don't use AI will be left behind".
I sympathise with the author and the argument. I know the text is a rant. As such, I can understand that the proposed consequences might not make sense. Yet still, there is a fun game you can play, where you replace AI by "chess engine" and you get a text that would be fitting for a late 90s chess grandmaster but seen as totally anachronistic today:
"Chess players who don't use engines will be left behind", they say. I can't emphasize enough how much I hate it when I hear/read shit like that because I'm pretty sure, in fact, that what will happen is the exact opposite.
People who rely on engines are the ones who will be left behind. They'll forget how to think, how to move the pieces, how to solve a simple straightforward mate in 3, how to tell victory from stalemate... they'll forget how to fucking LEARN. I think that's the part that makes me the saddest. What a beautiful thing it is just to play chess.
If you think Deep Blue can do better than you, why would you just let it? Why wouldn't you aim to be better, to learn how to be or do something that a chess computer would never do?
The problem is AI is being pushed and used as the equivalent of using a chess engine to tell you the best move during a match.
Maybe there's a way AI can be used to make developers better but it mostly just seems to be the equivalent of grand masters saying how great vibe playing is because now they can play 1000x more games every day. But don't worry, they're still steering the games.
> "Chess players who don't use engines will be left behind"
Unfortunately this is absolutely true for classical chess at the professional level, w.r.t. preparation.
Not detracting from your point though, for the other 99.9% of chess players.
Sounds like something Magnus Carlsen might say. I hear he's doing quite well out of the game of chess, and pointedly not playing how a computer would play, even though Deep Blue is clearly capable of winning more than he is and from more difficult positions.
Also, the world isn't as trivially solved by computation as a game of chess, so maybe delegating your job or how to be a better human to ChatGPT isn't as much of a winning strategy as getting the computer to suggest chess moves.
That is a really accurate analogy.
Deeper reasoning, longer term planning, and more efficient solutions have always separated amateurs from experts. That experience cannot be applied asynchronously or reduced to supervision. It has to be "in the loop" and there is always a lot of ignored out-of-band information you can't train into a model.
It has always been obvious that LLMs are bullshit. It's blockchain, but far, far worse. The US is investing too much in it, and the collapse has already begun: half of planned data center builds across the country have been delayed or canceled.
What's with all the anti-AI sentiment here? Is it a bunch of unemployed devs?
I think it's the greatest development in my lifetime, and I don't really worry about my skills atrophying. I worry about getting things done that are valuable.
I thought people here got excited about technology. Now it's just doomer spam. sigh
> Is it a bunch of unemployed devs?
probably quite the opposite
> I don't really worry about my skills atrophying
well some folks probably do, which is why they seem "anti-AI" to you (I certainly do care about my skills atrophying, and it's the reason I don't use "AI")
> excited about technology
there is a difference between being excited about technology and falling into marketing traps
I know, it's weird. I'm really excited about it, but somehow there's a bunch of people on here who are largely negative about it all and make weirdly misinformed statements. It's like they haven't really tried it - they think they have, but it's often obvious they tried it early last year or didn't really get it.
I think at minimum you need to vibe code a piece of software of some sort. Don't write a single line of code; just prompt your way to a finished product. This is going to tell you a lot. The first thing you start realizing is that your approach to prompting makes a big difference to what you get. You really need to think about the design/feature set of the program you are making. For me, it just became plainly obvious that coding was more of a hindrance, and I was too busy thinking about the application itself, its features, and how they should work together.
Not that it's all happy days; problems still occur at the code level. But once you get to this stage you start having a really clear idea of its strengths and weaknesses. For me, I actually find myself thinking a lot more, having a lot more ideas, experimenting more, and iterating a lot faster.
This site slid into pessimism like ten years ago
> I thought people here got excited about technology.
I am passionately excited about technology that serves people, but the current hype around AI is not that.
LLMs are, fundamentally, a super cool development. That we can now generate large regions of text that are statistically likely to be perceived as accurate is phenomenally neat.
But that's all they are: statistical text prediction engines. They do not think or reason; they generate next tokens based on prior context[1]. Unfortunately, because they predict really convincing text, a lot of people are willing to believe them to be more capable than they are. As a consequence, the companies developing these technologies have chosen to lean into rhetoric that exacerbates these beliefs because it means more money for them. The people running these companies fundamentally do not care about the human experience when they could instead care about profits[2].
AI, as an industry, is mostly hype — and it's a particularly insidious form of hype that preys on people's desire for cool, new things at the expense of our collective long-term well-being.
The exponential ramp-up of AI data centers is wreaking havoc on natural ecosystems and local communities; the humans "employed" for RLHF are severely underpaid and exploited, and are mostly powerless to make other choices; the damage to our digital infrastructure and community is visible daily (how many "are you a human?" captchas do you get today versus ten years ago? how many useless pull requests do repositories big and small receive on a daily basis? how many "look I vibe coded a shitty app in three days and it's riddled with bugs, let me post about it" posts do we see now?).
And this is all to say nothing of the rapidity with which huge swaths of people have just decided they don't care about other people fundamentally; they'd rather AI-generate some garbage company logo than employ a graphic designer to do a better job; they'd rather AI-generate copytext than hire an editor; they'd rather reach for the cheap, built-from-the-labor-of-others-without-respect-to-them tool that outsources all creativity and effort and gives them an immediately available "eh, it's good enough" solution. Before long, we will be inundated with "good enough", and we will forget what it was like to have good.
I'm excited about technology. I am not excited about the current incarnation of this technology.
[1] I am fundamentally not interested in sophistic arguments that "this is how humans work!" We don't know how humans work, so I make a choice to maintain a belief — based on my own experiences and learning — that LLMs do not accurately reflect the workings of a human brain.
[2] See "Empire of AI" by Karen Hao.
> But that's all they are: statistical text prediction engines. They do not think or reason; they generate next tokens based on prior context[1]. Unfortunately, because they predict really convincing text, a lot of people are willing to believe them to be more capable than they are. As a consequence, the companies developing these technologies have chosen to lean into rhetoric that exacerbates these beliefs because it means more money for them. The people running these companies fundamentally do not care about the human experience when they could instead care about profits[2].
The reductionist, mechanical explanation of what AIs do is not the full picture, and it almost belies first-hand experience with frontier models. AIs know more and can reason better than most humans in increasingly many contexts.
Yes, this means they produce "convincing text." But there's more than one way LLM output can be convincing. The easiest way isn't with rhetorical tricks or sycophancy—it's arguing compellingly, solving difficult problems, and producing good code. The frontier models have all improved dramatically in these respects over the past 1.5 years.
I find that the people who go "it's just statistical", "it's just picking the next word" have probably not really understood what the actual tool can do. Ultimately it's arguable whether humans are just statistical too - our brains are pattern-matching machines. It's just not sensible to boil complex behavior down to a fundamental building block. It's not hype (though there is hype); vast numbers of people are getting real value out of it. I've been coding 40+ years, and the utility of AI tools is super obvious to me.
I don't think anywhere I said they weren't useful tools; I said that a lot of people fail to recognize the limitations, and that even aside from limitations/utility there is a discussion to be had about broader impact.
LLMs are not hype, but "AI" is. AI is a marketing term, and always has been.
>What's with all the anti-AI sentiment here? Is it a bunch of unemployed devs?
So what? If you aren't part of inflating the hype beast, you're a victim of it. Eventually no one will be left to hype it, because we'll all have lost the battle.
I disagree. People who use too much AI will not learn anything and will not contribute significantly to new developments.
You clearly missed that the statement was in quotes, and didn't actually read the linked post.
> "People who don't use AI will be left behind", they say. I can't emphasize enough how much I hate it when I hear/read shit like that because I'm pretty sure, in fact, that what will happen is the exact opposite.
> [...] they'll forget how to fucking LEARN. I think that's the part that makes me the saddest. What a beautiful thing it is just to learn stuff.
I love learning. My life of self-education is so much richer with LLMs to help me.
There are dozens of other arguments for not engaging with AI. If your reason is "I love learning" I recommend at least dipping your toes in before you declare that AI is a hindrance, not a help, to people who love to learn new things.
Seriously, these are an autodidact's dream. I've been having an absolute blast learning about stuff from government structures and the different approaches to fusion power to what types of electrical conduit are used for what applications and appropriate connectors, heat pump sizing, etc. It's so ridiculously empowering. All this info that you had to use an enormous amount of time synthesizing and studying is now available at everyone's fingertips. I think we're going to see an explosion in productivity on all sorts of fronts, not just writing code.
This discussion is so stupid. No one who isn't a moron is offloading all work and thought to LLMs. No one who isn't a moron is seriously afraid of their thinking and learning skills "atrophying", whatever tf that means.
It's clear that LLMs are unique in that you actually do have the capability to turn your brain off and blindly trust whatever they do for you. But it should be equally clear that that's a stupid approach. People will still use their minds, and that use gets empowered with proper use of LLMs. It's that simple. FFS, we take the fact that they routinely pass the Turing Test for granted now. Let's not forget that this technology is legitimately incredible. It stands to reason that you are seriously handicapping yourself by not trying to use it.
Came here to write something like this. It's so sad that we still have these stupid conversations, with people thinking in black and white, in 2026.
"People who use a calculator will forget how to think"
Maybe it's a generational thing, but I'm old enough to remember when personal and office computers were really hitting the mainstream in the late '70s and '80s; the messaging was a lot friendlier, about how they would save you time, help you, etc. Even though, practically speaking, they eliminated a lot of manual jobs.
This AI/LLM push from leadership is so damn tone deaf, like "you better do this", "ai layoffs", etc. I feel like they are jumping way too hard and fast into the "post-employee" thinking and deserve every bit of scorn from laymen.
I just don't get even the presumed risk here. How can something be so revolutionary in its capacity to increase productivity but still so esoteric or specialized that there is a risk of being "left behind"? Like all these things people talk about are, at the end of the day, products that want you to use them; they aren't gonna make it hard for someone to onboard in the future. Sure if all coding became ecommerce overnight and I'd never "learned" Salesforce, there might be brief friction there, but I could still just, like, learn Salesforce. It's gonna be a lot easier than learning good software engineering in general.
Why spend your life "learning" something whose whole deal is about not needing to learn? Even if you gamble incorrectly, it's not going to be hard to get into!
Like, what, if I don't start practicing now I am not going to be able to... express concepts with natural language as well?
In the 1950s, COBOL was introduced with the idea that programming could be written almost as if one were speaking English. But eventually people realized that writing COBOL well, in a style that resembles English conversation, was itself difficult.
Today, we are hearing a similar claim: “If you can describe the program in natural language, programming is basically finished.” But the industry is now discovering that describing the program well is the hard part.
This is also why ideas like harness engineering are appearing: methods for controlling the range of outputs, from poor to excellent, that can emerge from minimal input.
And honestly, I do not think the “vibe coding” phenomenon is entirely bad. The essence of programming is automation. Many people were previously limited because they did not know programming languages. Now, through AI, they can express themselves and turn that expression into working apps. Seeing this, I understand how deeply people have wanted to create.
I write industrial software that runs in large factory environments, and because of the nature of my work, it is difficult for me to use AI directly. These environments are usually closed networks, so AI does not really benefit my own production work. Even so, I still defend AI, because it functions as a new kind of voice that allows more people to express themselves.
Of course, capitalism distorts this. Many people use AI to chase money and capital, and as a result, a lot of low-quality apps are being produced. But on the other hand, what is wrong with the motivation of wanting to make something one wants to make?
I have been studying the history of programming, and I like Dijkstra’s famous line:
> Computer science is no more about computers than astronomy is about telescopes.
To me, this means that computing is fundamentally about automation.
AI has existed as a research topic almost since the birth of computers. We tend to think of it as recent, but it is a field with a history of more than sixty years. Starting from early work such as the Perceptron, there have always been people claiming that AI was a fraud or an illusion.
But now a new seed has germinated. The amount of complexity that a single human can handle has increased. Historically, the techniques for managing that complexity were things like programming patterns and software architecture. And even people who strongly argued for software architecture also warned that if architecture becomes detached from code, then something has gone wrong.
Memes always damage the essence of ideas. As information circulates, it degrades, and eventually the original meaning disappears.
The Dunning-Kruger effect is a good example. The original paper was not simply saying, “ignorant people show off, while knowledgeable people do not.” It was more about how both less competent and more competent people can have difficulty accurately assessing their own ability - a limit of metacognition. But the idea became distorted.
The same thing happens to many famous ideas in programming. Knuth’s statement about premature optimization is also constantly distorted as it circulates.
In that situation, can we really say it is always bad to step away from online communities and learn through AI while cross-checking against books?
When I see people making extreme claims about this, I sometimes find it absurd. Of course, many people may flag or downvote my comment. But this is how I see it.
"People who drive cars will forget how to walk".
More like they'll slowly lose muscle, and the will to walk, if they rely entirely on cars to move themselves around.
No, it's more like they'll just get really, really fat and die early of preventable diseases.
If you learn how to walk, then you'll forget how to crawl