Improving developer skills is not directly valuable to your company. They don't tell a customer how many person-hours of engineering-talent improvement their contract paid for. They just want a solved problem. Some companies recognize how short-sighted that attitude is and invest in professional development in one way or another. They want better engineers so that their operations run better. It's an investment, and arguably a smart one.
Adoption of AI at a FOMO corporate pace doesn't seem to include this consideration. They largely want your skills to atrophy as you instead beep boop the AI machine to do the job (arguably) faster. I think they're wrong and silly and any time they try to justify it, the words don't reconcile into a rational series of statements. But they're the boss and they can do the thing if they want to. At work I either do what they want in exchange for money or I say no thank you and walk away.
Which led me to the conclusion I'm currently at: I think I'm mostly just mourning the fact that I got to do my hobby as a career for the past 15 years, but that’s ending. I can still code at home.
This is going to catch some heat, but what if the most important professional “developer skill” to learn or improve is how to effectively use coding agents?
I saw something similar in ML when neural nets came around. The whole “stack moar layerz” thing is a meme, but it was a real sentiment about newer entrants into the field not learning anything about ML theory or best practices. As it turns out, neural nets “won” and using them effectively required development and acquisition of some new domain knowledge and best practices. And the kids are ok. The people who scoffed at neural nets and never got up to speed, not so much.
Edit: as an aside, I have learned plenty from reviewing coding agent generated implementations of various algorithms or methods.
I think your argument is predicated on LLM coding tools providing significant benefit when used effectively. Personally I still think the answer is "not really" if you're doing any kind of interesting work that's not mostly boilerplate code writing all day.
I've worked for 35ish companies (contract and fulltime), largely on the west coast of the US. I have experienced the lip service, from the vast majority. I have experienced maybe 2 or 3 earnest attempts at growing engineer skills through subsidized admission/travel to talks, tools, or invited instructors.
These two statements go hand in hand though. While I do believe companies could take the altruistic take of training people whether or not they stay, and some places do, they're certainly not going to make the effort for someone who has clear markers of being someone who will leave anyway.
What exactly do you have in mind? The large companies I've worked at had book subscriptions, internal training courses, and would pay for school. Personally I don't see the point of any of it. For software engineering, the info you need is all online for free. You can go download e.g. graduate level CS courses on youtube. IME no one's going to stop you from spending a couple hours a week of work time watching lectures (at least if you're fulltime). I also don't really see how attending conferences relates to skill improvement. Meanwhile, I've been explicitly told by managers that spending half my time mentoring people sounds reasonable.
Every company I worked for didn’t give a shit about my skills. They just wanted to solve the problem in front of them and if they couldn’t then they would hire someone in with the right skills. Improving my skills was seen as a risk as I might leave.
The opposite is true in my case - though there was one organization that had a small budget for things like AWS certs. I remember almost everyone who got those certificates never really learned anything from it either. They would just take the exams.
Maybe I’m just getting extremely lucky, but I don’t use AI to code at work and I’m still keeping up with my peers who are all Clauded up. I do a lot of green-field network appliance design and implementation and have not really felt the pressure in that space.
I do use Claude code at home maybe a couple hours a week, mostly for code base exploration. Still haven’t figured out how to fully vibe code: the generated code just annoys me and the agents are too chatty. (Insert old man shaking fist at cloud).
The irony is that the vast deskilling that's happening because of this means that most "software engineers" will become incapable of understanding, let alone fixing or even building new versions of the systems that they are utterly dependent on.
There should be thousands or tens of thousands of people worldwide who can build the operating systems, virtual machines, libraries, containers, and applications that AI is built on. But the number will dwindle and we'll ironically be unable to build what our ancestors did, utterly dependent on the AI artifacts to do it for us.
How many kernel devs does the world need? A dozen or two?
It will be the same with software. AI will be writing and consuming most software. We will be utilizing experiences built on top of that, probably generated in real time for hyper personalization. Every app on your phone will be replaced by one app. (Except maybe games, at least for a short while longer).
Everyone's treating writing code as this reverent thing. No one wrote code 100 years ago. Very few today write assembly. It will become lost because the economic necessity is gone.
It's the end of an era, but also the beginning of a new one. Building agentic systems is really hard, a hard enough problem that we need a ton of people building those systems. AI hardware devices have barely registered yet, and we need engineers who can build and integrate all sorts of systems.
Engineering as a discipline will be the last job to be automated, since who do you think is going to build all the world's automation?
There is a deadly game of chicken going on. Junior recruiting already stopped for the most part. Only way this doesn’t end in a catastrophe is if AI becomes genuinely as good as the most skilled developers before we run out of them. Which I doubt very much but don’t find completely impossible.
And the irony is that AI usage should make onboarding juniors easier.
Before it was "hey $senior_programmer where's the $thing defined in this project?", which either required a dedicated person onboarding or someone's flow was interrupted - an expected cost of bringing up juniors.
Now a properly configured AI Agent can answer that question in 60 seconds, unblocking the Junior to work on something.
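For illustration, the kind of lookup an agent does under the hood can be sketched as a tiny tool the harness exposes to it. This is a hypothetical, naive version (invented name, regex scan over Python files only), not how any particular agent actually implements it:

```python
import re
from pathlib import Path

def find_definitions(root, name):
    """Return (file, line_number) pairs where `name` is defined.

    Naive sketch: only matches top-level-style `def`/`class` lines in
    .py files, which is still enough to answer most "where is $thing
    defined?" onboarding questions without interrupting a senior.
    """
    pattern = re.compile(rf"^\s*(?:def|class)\s+{re.escape(name)}\b")
    hits = []
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), 1):
            if pattern.match(line):
                hits.append((str(path), lineno))
    return hits
```

A real agent wraps something sharper (ripgrep, an LSP, ctags) behind the same kind of tool call, but the shape of the answer is the same: a file and a line number in seconds.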
And no, it doesn't mean Juniors or anyone else get to make 10k line PRs of code they haven't read nor understand. That's a very different issue that can be solved by slapping people over the head.
Or if code quality stops mattering, in a kind of "ok, the old codebase is irretrievably spaghettified. Let's just have the chatbot extract all the requirements from it, and build a clean room version" kind of way. It's also not impossible we go that route.
I feel I've upskilled in so many directions (not just "ability to prompt LLMs") since going all in on LLM coding. So many tools, techniques, systems, and new areas of research I'd never have had the time to fully learn in the past.
I have a hard time believing any tenured developer is not actually learning things when using LLMs to build. They make interesting choices that are repeatable (new CLIs I didn't even know existed, writing scripts to churn through tricky data, using specific languages for specific tasks, like Go for working through numerous large tasks concurrently, etc.)
Anyone not learning things via LLM coding right now either doesn't care at all about the underlying code/systems, or they had no foundational knowledge or interest in programming to begin with (which is also a valid way to use these tools, but they don't work very well without guidance for too long [yet]).
I vividly remember understanding how calculus works after watching some 3blue1brown videos on youtube, but once I looked at some exercises I quickly realized I was not able to solve them.
Similar thing happens with LLMs and programming. Sure I understand the code but I'm not intimately familiar with it like if I programmed it "old school".
So yes, I do learn more, but I can't shake the feeling that there is some Dunning-Kruger effect going on. In essence I think that "banging my head against the wall" while learning is a key part of the learning process. Or maybe it's just me :D
What do you mean by "LLM coding"? That's not a very meaningful term, it covers everything from 100% vibe coded projects, to using the LLM to gradually flesh out a careful initial design and then verifying that the implementation is done correctly at every step with meticulous human review and checking.
This. I never had the patience to figure out how to build a from-scratch iOS app because it required too much boilerplate work. Now I do, and I got to enjoy Swift as a language, and learned a lot of iOS (and Mac) APIs.
>But the number will dwindle and we'll ironically be unable to build what our ancestors did, utterly dependent on the AI artifacts to do it for us.
That's only a brief moment in time. We learned it once, we can learn it again if we have to. People will tinker with those things as hobbies and they'll broadcast that out too. Worst case we hobble along until we get better at it. And if we have to hobble along and it's important, someone's going to be paying well for learning all of that stuff from zero, so the motivation will be there.
Why do people worry about a potential, temporary loss of skill?
Because they may have studied history... There are countless examples of eras of lost technology due to a stumble in society, where those societies were never able to recover the lost "secrets" of the past. Ultimately, yes, humans can rediscover/reinvent how to do things we know are possible. But it is a very real and understandable concern that we could build a society that slowly crumbles, unable to relearn how to maintain the systems it relies upon fast enough to stop the degradation.
Like, yeah, you have the resources right now to bootstrap your knowledge of most coding languages. But that is predicated on so many previous skills learned throughout your life, adulthood and childhood, many of which we take for granted. And ultimately AI/LLMs aren't just affecting developers, they are infecting all strata of education. So it is quite possible that we build a society that is entirely dependent on these LLMs to function, because we have offloaded the knowledge from society's collective mind... And getting it back is not as simple as sitting down with a book.
The COBOL thing seems to be working out just fine last I heard. Today a small number of people get paid well to know COBOL's depths and legacy platforms/software. The world moved on, where possible, to lower cost labor and tools.
Arguably, that outcome was the right creative destruction. Market economics doesn't incentivize any other outcome long-term. We'll see the arc of COBOL play out again with LLM coding.
>"That's only a brief moment in time. We learned it once, we can learn it again if we have to. "
Yes we can, but there is a big problem here. We will "learn it again" only after something breaks. And the way the world currently functions, there might not be time to react. It is like growing food at industrial scale. We have slowly learned it over time. If it breaks now, with the knowledge gone, having to learn it again would end civilization as we know it.
How many people do you think know how to do that today? It's in the millions (probably tens to hundreds of millions), scattered all across the globe, because we all need to eat. Not to mention all of the publications on the topic in many different languages. The only credible case for everyone forgetting how to farm is nuclear doomsday, and at that point we'll all be dead anyway.
>If it breaks now with the knowledge gone and we have to learn it again it will end the civilization as we know it.
I don't think there is a single piece of technology so critical to civilization that everyone alive could simply forget how to do it while there is also zero documentation on how it works.
These vague doomsday scenarios around losing knowledge and crashing civilization just have zero plausibility to me.
If a catastrophic failure occurs we will have to return to first principles and re-derive the solutions. Not so bad, probably enlivening even to get to spin up the mind again after a break.
> I got to do my hobby as a career for the past 15 years, but that’s ending.
Frankly I don't think so. AI built on LLMs is the perpetual motion machine scam of our time. But it is cloaked in unimaginable complexity, and thus it is the perfect scam. Even the most elaborately hidden power source in a perpetual motion machine cannot fool nature, though, and the machine comes to a complete stop as that source runs out.
I love the perpetual motion machine / thermodynamics analogy.
It kind of feels like companies are being fooled into outsourcing/offshoring their jr. developer level work. Then the companies depend on it because operational inertia is powerful, and will pay as the price keeps going up to cover the perpetual motion lie. Then they look back and realize they're just paying Microsoft for 20 jr. developers but are getting zero benefit from in-house skill development.
This is silly. I can build products in a weekend that would take me a year by myself. I am still necessary 1% of the time for debugging, design, and direction, and those are not at all shallow skills. I have some graduate algebra texts on the way that my math friend is guiding me through, because I have found a publishable result and need to shore up my background before writing the paper...
It's not perpetual motion, it's very real capability, you just have to be able to learn how to use it.
No one is saying that it cannot do what you say now.
What I am saying is that once the high-quality training data runs out, its capabilities will drop pretty fast. That is how I compare it to perpetual motion machine scams. A perpetual motion machine appears as if it will continue to run indefinitely. That is analogous to the impression you have now: you feel that this will go on and on forever, and that is the scam you are falling for.
People yeeting a (shitty) GitHub clone with Claude in a week apparently can't imagine it, but if you know the shit out of Rails, start with a good boilerplate, and have a good git library, a solo dev can also build a (shitty) GitHub clone in a week. And they'll be able to take it somewhere, unlike the LLM rat's nest that will require increasingly expensive tokens to (frustratingly) modify.
You're fooling yourself. It's very easy to get demonstrably working results in an afternoon that would take weeks at least without coding agents. Demonstrably working, as in you can prove the code actually works by then putting it to use. I had a coding agent write an entire declarative GUI library for mpv userscripts, rendering all widgets with ASS subtitles, then proceeded to prove to my satisfaction that it does in fact work by using it to make a node editor for constructing ffmpeg filter graphs and an in-mpv nonlinear video editor. All of this is stuff I already knew how to do in practice, had intended to do one day for years now, but never bit the bullet because I knew it would turn into weeks of me poring over auto-generated ASS doing things it was never intended to do to figure out why something is rendering subtly wrong. Fairly straightforward but a ton of bitch work. The LLM blasted through it like it was nothing. Fooling myself? The code works, I'm using it, you're fooling yourself.
Funnily enough, I saw this post as I was placing my HN account on hiatus, because I'm tired of pretending that the quality of discourse is on par with what I've been used to reading and participating in.
We're obviously in an era where "good enough" is taken so far that what used to be the middle of the fictional line is not the middle point anymore but a new extreme. You're either someone who cares only about the output or someone who cares how readable and easy to extend the code is.
I can only assume this is done on purpose, with the hope that LLMs will "only keep improving linearly" to the point where readability and extensibility are not my problem but "tomorrow's LLM's" problem.
Ok but if you're a person that likes HN discourse but thinks "eternal september" has happened ... what's your plan?
You'll still come here, read the comments, see something engaging and want to reply and... feel sad because shakes fist at [datacenter] clouds it's all just bots talking to each other anyway.
Picking out my favorite idea out of many: we do need ways to stay mentally sharp in the age of AI. Writing and publishing is a good one. I also recommend stimulating human conversations and long-form reading.
More and more, the bar is being lowered. Don't fall into brain rot. Don't quiet quit. Stay active and engaged, and you'll begin to stand out among your peers.
> we do need ways to stay mentally sharp in the age of AI.
Here's my advice: if there's someone around you who can teach you, learn from them. But if there isn't anyone around you who can teach you, find someone around you who can learn from you. You'll actually grow more from the latter than from the former, if you can believe that.
I think there's a broad blindness in industry to the benefits of mentorship for the mentors. Mentoring has sharpened my thinking and pushed me to articulate why things are true in a way I never would have gone to the effort of otherwise.
If there are no juniors around to teach, seniors will forever be less senior than they might have been had they been getting reps at mentorship along the way.
I haven't heard this benefit for mentors clearly articulated before (probably just missed it), but definitely felt it - I guess it's a deeper version of how writing/other communication forces clarity/organization of thoughts because mentorship conversations are so focused on extracting the why as well as the what.
I can confidently say that, yes, reading helps a lot. My mental model has shifted a bit: words are cheap now (printing -> writing -> typing -> generating), and we should accept that there is still such a thing as high-quality text.
I haven't really been a reader, but I can definitely notice when a book/text is "hard". I'm currently reading the Old Testament, and I understand very little (even the Oxford one that has a lot of annotations is hard for me). I like this, because it's a measurement of what I don't know (if that makes sense).
Do you want a Stairmaster with that elevator? Life is for living, ostensibly. This Inevitabilism drone choir[1] may be correct that it will take my current job, and after that maybe there will be nothing fruitful left in that department. But I can’t imagine a life situation where I’m both surviving and using thinking-with-my-brain as some retirement home pastime + “brainrot”-preventer.
> Stay active and engaged, and you’ll begin to stand out among your peers.
Here’s how the rat race looks in the age of AI and how you can stay ahead.
I'm pretty sure all this AI is built on top of Silicon valley's technobabble of "permanent underclass" which seems to have zero introspection as to why we're just going to accept the feudal overlords of technology.
But besides that, it's interesting so many people are willing to tailor their entire workflow and product to indeterminate machines and business culture.
I recommend everyone stop using these infernal cloud devices and start with a nice local model that doesn't instantly give you everything, but is quite capable of removing a select amount of drudgery, which is rather relaxing. And as soon as you get too lazy to do enough specifying or real coding, it fucks up your dev environment and you slap yourself a hundred times wondering why you ever trusted someone else to properly build your artifacts.
There's definitely some philosophy being edged into our spaces that needs to be combatted.
True, but the tools make the default behavior so tempting.
I have a friend who uses Google Maps to find places, then memorizes the route there and closes the app to navigate because he wants to build a better mental map of our city. Meanwhile, I just check the app every five seconds like a dummy, and my hippocampus stays small.
This is a good parallel. In the 90s, when I learned to drive, I was quite good at navigating. Now Google Maps is on a screen in my car telling me where to go whenever I drive beyond my most common routes.
Really, all the research telling us about AI skill atrophy... we should have guessed it from previous experience.
I'm pretty sure the -as-a-service stage is only temporary.
The local models are only going to get better, and the improvement curve has to top out eventually. Maybe the cloud models will still give you a few extra percentage points of performance, especially if they're based on data sets that aren't available to the public, but it won't make much difference on most tasks and the local models will have a lot of advantages too.
I do find it hard to tolerate the feeling of being watched online. The second-most trending dataset on huggingface right now is a snapshot of HN updating at a 5 minute interval. It makes me not want to really comment at all, just like how I don’t really publish any software I write anymore.
Turns out it sucks to produce original works when you know that, whereas previously a few people at best might see your work, now it’s a bunch of omniscient robots and maybe half of those original people are using the robots instead.
This sounds like a nice principled stance, but you won't get any traffic with this approach. That's demotivating - to me blogging is a tight balance of exploration, learning, improving and feedback. I'm not able to write without considering how this impacts the reader - removing all readers breaks the process for me.
> The giant plagiarism machines have already stolen everything. Copyright is dead. Licenses are washed away in clean rooms.
Isn't this what the free software movement wanted? Code available to all?
Yes, code is cheap now. That's the new reality. Your value lies elsewhere.
You can lament the loss of your usefulness as a horse buggy mechanic, or you can adapt your knowledge and experience and use it towards those newfangled automobiles.
> Isn't this what the free software movement wanted? Code available to all?
Available to all yes. Not available to the giant corpos while the lone hobbyist still fears getting sued to oblivion. In fact that's pretty much the opposite of what the free software movement wanted.
Also the other thing the free software movement wanted was to be able to fix bugs in the code they had to use, which AI is pulling us further and further away from.
No, the free software movement wants that the source code of the software you use be available to you to modify it if you wish. AI does not necessarily do that.
AI makes the entirety of the software engineering profession available to you. All you have to do is ask the right way, and you can build in days what once took months or years.
Decompiling and re-engineering proprietary code has never been easier. You almost don't even need the source code anymore. The object code can be examined by your LLM, and binary patches applied.
Closed source is no longer the moat it was, and so keeping the source code to yourself is only going to hurt you as people pass you over for companies who realize this, and strive to make it easier for your LLM to figure their systems out.
> Decompiling and re-engineering proprietary code has never been easier. You almost don't even need the source code anymore. The object code can be examined by your LLM, and binary patches applied.
Jesus christ.
"The people who wanted everyone to have a home should be happy with the invention of the lockpick. You can just find a nice house, open the lock, and move in. Ignore the lockpick company charging essentially whatever they want for lockpicks, or how they got access to everyone's keyfob, or the danger of someone breaking into your house."
That is basically your argument. AI is a copyright-theft machine, with companies owning the entire stack and being able to take it away at will, and committing crimes like decompiling source code instead of clean-room reimplementation is not a selling point either...
The open source community wants people to upskill, people become tech literate, free solutions that grow organically out of people who care, features the community needs and wants and people having the freedom to modify that code to solve their own circumstances.
> That is basically your argument. AI is a copyright-theft machine, with companies owning the entire stack and being able to take it away at will, and committing crimes like decompiling source code instead of clean-room reimplementation is not a selling point either...
Stop trying to make this into some abstract argument. It's not an argument anymore. It's already happened.
How one might choose to characterize the reality is irrelevant. A vast (and growing) amount of source code is effectively more open, for better or worse. Granted, this is to the chagrin of subgroups that had been pushing different strategies.
> The AI industry is 99% hype; a billion dollar industrial complex to put a price tag on creation. At this point if you believe AI is ‘just a tool’ you’re wilfully ignoring the harm.
> (Regardless, why do I keep being told it’s an ‘extreme’ stance if I decide not to buy something?)
> The 1% utility AI has is overshadowed by the overwhelming mediocrity it regurgitates.
This sort of reasoning is why you might have been called extreme.
It's less extreme to say "many people see and/or get lots of benefit, but it's wrong to use the tool due to the harms it has".
There's nothing wrong with extreme, but since you asked.
Yes, declaring AI to be 99% hype just turns away people like me from what the author has to say.
I was an AI sceptic for a long time, until toward the end of last year, when I seriously evaluated these tools and came to realise they could add tremendous value.
When someone comes along and declares that it's all hype, it goes against my experience that it's getting things done.
I can also see the harm it does, and I hope the tooling improves to reduce that harm. For example, there's a significant lack of caching in the tooling. It's constantly re-reading the same files every day and, more harmfully, constantly fetching the same help pages and blog posts from the web.
If it had a generous built-in HTTP cache, and instructions to maximise use of that cache, it could avoid a lot of re-fetching of content, which would help reduce the harms.
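A minimal sketch of what such a cache could look like, for the sake of concreteness. Everything here is hypothetical (the `FetchCache` name, the TTL policy, the on-disk JSON format); it is not how any existing agent tooling works, just the shape of the idea:

```python
import hashlib
import json
import time
from pathlib import Path

class FetchCache:
    """Disk-backed cache for pages an agent fetches repeatedly.

    `fetch` is any callable taking a URL and returning text, so this
    sketch stays independent of whatever HTTP client the tooling uses.
    """

    def __init__(self, directory, fetch, ttl_seconds=24 * 3600):
        self.dir = Path(directory)
        self.dir.mkdir(parents=True, exist_ok=True)
        self.fetch = fetch
        self.ttl = ttl_seconds

    def _path(self, url):
        # One file per URL, keyed by a hash so any URL is a safe filename.
        return self.dir / hashlib.sha256(url.encode()).hexdigest()

    def get(self, url):
        path = self._path(url)
        if path.exists():
            entry = json.loads(path.read_text())
            if time.time() - entry["fetched_at"] < self.ttl:
                return entry["body"]  # fresh hit: no network round-trip
        body = self.fetch(url)  # miss or stale: fetch once and store
        path.write_text(json.dumps({"fetched_at": time.time(), "body": body}))
        return body
```

A production version would honour the server's own `Cache-Control`/`ETag` headers rather than a flat TTL, but even something this crude would eliminate most of the repeated fetches of the same help pages.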
Declaring my experience to be invalid and based on nothing but hype doesn't engage people like me at all.
And it's the people like me, the middle-of-the-road developer working on enterprise software, that either need convincing to not use the tools, or for our habits to change to minimise the harm.
Because otherwise we're quietly getting on with using it, potentially destroying forests and lakes as we do.
It’s worse than that: in the linked “I’ve done my research” they make the tired claim that AI hallucinates API calls. Which, while true, has not been a practical problem since tool calling was added.
I think the position that AI is morally troubling enough that the downsides outweigh the positives is perfectly defensible. But the entire argument becomes a joke when you can’t accurately catalog the positives.
At this point, I’m pretty sure saying “I’ve done my research” is more of an indicator that someone hasn’t done their research but would like to be taken seriously anyway by pretending they did. The kind of person who’s both smart enough to realize that an issue might be more nuanced than they present it, as well as intellectually dishonest enough to… not care.
Anti-AI articles like this seem to be the new "Doing my part to resist big tech: Why I'm switching back from Chrome to Firefox" genre that popped up on HN for a decade or so. If it makes you feel better, great, but don't kid yourself that your actions will make any difference whatsoever to the overall trajectory of AI adoption in IT or society.
Can’t we just sabotage AI? We have the means for sure (speed-of-light communication across the globe). Like, at least for once in the history of software engineering we should get together like other professionals do. Sadly, our high salaries and perks won’t make the task easy for many.
- spend tons of tokens on useless stuff at work (so your boss knows it’s not worth it)
- be very picky about AI generated PRs: add tons of comments, slow down the merge, etc.
I think you should be very picky about generated PRs not as an act of sabotage but because very obviously generated ones tend to balloon complexity of the code in ways that makes it difficult for both humans and agents, and because superficial plausibility is really good at masking problems. It's a rational thing to do.
Eventually you are faced with company culture that sees review as a bottleneck stopping you from going 100x faster rather than a process of quality assurance and knowledge sharing, and I worry we'll just be mandated to stop doing them.
> be very picky about AI generated PRs: add tons of comments, slow down the merge, etc.
But that's the opposite of sabotage, you're actually helping your boss use AI effectively!
> spend tons of tokens on useless stuff at work (so your boss knows it’s not worth it)
Yes, but the "useless" stuff should be things like "carefully document how this codebase works" or "ruthlessly critique this 10k-lines AI slop pull request, and propose ways to improve it". So that you at least get something nice out of it long-term, even if it's "useless" to a clueless AI-pilled PHB.
One problem writing does have: we grew up in a massively changing and progressing software-writing era. A golden era.
Now I still show Clean Code videos from Bob and other old things to new hires and young colleagues.
Java got more features, granted, but the golden era of discovery is over.
The new big thing is AI, and I'm curious to see how it will feel to write real agents for my company-specific use cases.
But I'm also seeing people so bad at their daily jobs that I wish I could get their salary as tokens to use. It will change, and it changes our field.
Btw, "Is there anything, in the entire recorded history of human creation, that could have possibly mattered less than the flatulence Sora produced? NFTs had more value." - I disagree; video generation has a massive impact on the industry for a lot of people. Don't downplay this. NFTs, btw, never had any impact besides moving money from A to B.
I don't see any proof that software development is not dead. Software engineering is not, and it's much more than writing code, and it can be fun. But writing code is dead; there is no point in doing it if an LLM can output the same code 100x faster. Of course, architecture and operations stay in our hands (for now?).
Initially I was very sceptical; the first versions of ChatGPT or Claude were rather bad. I kept holding on to the thought that it couldn't get good. Then I spent a few months evaluating them: if you know how to code, there is no point in coding anymore, just instruct an LLM to do something, verify, merge, repeat. It's an editor of sorts, an editor where you enter a thought and get code as output. Changes the whole scene.
LLM's don't really output the same code quality as a human, even on the smallest scale. It's not even close. Maybe you can guide them to refactor their slop up to human-written quality, but then you're still coding. You're just doing it by asking the computer to write something, instead of physically typing the whole thing out with a keyboard.
Yeah I also keep thinking this. I don't see LLMs reliably producing code that is up to my standards. Granted I have high standards because I do take pride in producing high quality code (in all manner of metrics). A lot of the time the code works, unfortunately only for the most naive, mechanical definition of "works".
Useful tool, and if you're just scratching a small itch it's great.
For any serious system you still need to understand and guide the code, and unless you do some of the coding yourself, you won't. The novelty right now is just skewing our reasoning.
Alas, I think the tech crowd has collectively painted humanity into a corner where not playing is no longer an option.
The combination of having subverted copyright and enabled cheap machine replication kills large swaths of creativity, at least as a viable living. One can still do many things on an artisanal level, certainly, and as excited as I am about AI, it's hard not to see it as a big L for humanity's creative output.
Old web stuff is still around. RSS feeds are out there. Some parts of masto are generally chill and filled with people having interesting convos.
You don't have to give up on everything to participate, but it can be a space to go to if you're tired of every social interaction being mediated by (I'm being glib) hustlers
Good lord I’m going to have to figure out some way to filter Hacker News. I’m so tired of this same sort of article (and the opposite) being posted every day. AI isn’t going away. AI is better than you think it is. AI is probably also worse than you think it is. The world has nuance, so can we please all chill?
Sort of hard to do, because AI is shoved down your throat in one form or another virtually everywhere you go. I also think a lot of us hackers are mourning the fact that we spent many years mastering machines and programming just to have the skill devalued (at least from the public's perspective) nearly overnight. I personally think it is more important now than ever to understand technology. To be able to write code, understand how a CPU works, etc. Tech literacy will help prevent doom scenarios. A future where virtually everyone depends on AI and computers but lacks people who actually understand them at a low level seems bleak. I know thinking itself seems to have gone out of fashion, and it's given rise to misinformation and/or political nonsense like the rise of fascism, etc... I think a lot of us just feel "empty" and are trying to express it.
I get it. I’ve been doing this for 11 years. I use agents everyday at work now and deal with all the benefits and problems of that. The craft is certainly changing and it will take years for everything to shake out and settle. I understand the desire to publicly wax poetic, but nobody actually knows shit about where we will land, so it gets a bit tiresome to see over and over.
I laugh jollily in the face of AI. I know the coming shit pile, its nature isn't going to be surprising, only the speed and utter surrender of the vast majority of humanity to mediocrity.
What AI represents to me is a teacher! I have so long lacked a music teacher and musical tools. I spent my entire career doing invisible software at the lowest levels, and now I can finally build cool tools that help me learn, practice, and enjoy playing music! Screw all the haters; if you're curious about a wide range of topics and already have some knowledge, you can gallivant across a vast space and learn a lot along the way.
AI is a bit of a bullshitter, so don't take its bullshit as truth, just like you should never take anything your teacher says as gospel. How do we know what's true? The truth of the universe and the world is that underneath it all, it is self-consistent, and we keep making measurement errors. The AI is an enormous pot of magic that it's up to you to organize with... your own skills.
You have to actively resist deskilling by doing things. AI should challenge you and reward you, not make you passive.
Use AI to teach yourself by asking lots of questions and constantly testing the results against reality.
This feels weird somehow. It feels like: damn, we can't train our AI any better, as everything is regurgitated slop now.
How can we get people to create new content for us, hopefully with new ideas …
Might be just me though, but I definitely don’t get why blogging should be the solution.
"Generative AI is art. It’s irredeemably shit art; end of conversation."
I think most people cannot distinguish between "genuine" creativity and an artificial amalgamation of training data and human-provided context. For one, I do not know what already exists. Some work created by AI may be an obvious rip-off of the style of a particular artist, but I wouldn't know. To me it might look awesome and fresh.
I think many of the more human-centric thinkers will be disappointed at how many people just won't care.
Further I'd argue we KNOW people don't care if you look at the music industry.
Pop music is often composed by dozens of people who specialize in a thin sliver of the track - lyrics, vocals, drums, &c. - and then it's given a pretty face and makes the charts. That's really no different than something like Suno.
I think AI is forcing people who thought that THEIR thing was too nuanced or too complex to be replaced by technology to reckon with what makes them special.
The question is how subtle AI can be. I feel like art sometimes seems to communicate A, and the artist intended to communicate A and perhaps some B, but clearly, it also hints at another C (and maybe also D, E, ..), which was not intended by the artist or recognised by many viewers, while to some people it's clearly there. Now where did that come from?
Most people are just utilitarian and do not care for "art" (in the high-art sense).
AI is perfect for that. It reveals, perhaps to the dismay of those who revel in high art, that it might be an illusion that art has genuine creativity, if most people find AI-produced output acceptable.
People have been having this debate with popular art forever. Some people do not even believe in taste, and that everyone's artistic opinions have equal merit.
Blog posts are an interesting case; they are a very good example of something where the abundance of supply outstrips demand so much that it cannot be realistic to expect a median-level contribution to receive any significant attention.
Setting aside the self-delusion that leads a considerable number to erroneously rate themselves above average, the reason you create blog posts should not be the attention you might gain; there simply are not the eyeballs. You create as a form of self-expression, to organise your thoughts, to create a record of them.
AI can never compete in those areas because, as it has always been, the act of creation is the goal.
I’ve decided the only way I’ll adopt a full automated agentic AI workflow the way companies want, is if I am allowed to hold multiple jobs at multiple companies.
Imagine having 6 software engineering jobs, each paying maybe $150k a year, all being done by agents.
Hell, I might even do this secretly without their consent. If I can hold just 10 jobs for about 3 or 4 years, I can retire and leave the industry before it all comes crumbling down in 2030.
The problem of course, is securing that many jobs. But maybe agents can help with applying for jobs.
When they invented cars (and cars became popular and affordable), people did stop walking everywhere. Jogging wasn’t popularized until the 1970s, when we all realized we needed to be intentional with fitness in our car-based society.
If it's a crime to jog on railroad tracks, and the availability of rail makes it so that everything you need is only accessible by rail, I conclude that rail prevents you from jogging.
I really like the sentiment and will quote this in the future! My own thoughts line up a bit closer to the article though, with this quote being a good summary of it:
> The 1% utility AI has is overshadowed by the overwhelming mediocracy it regurgitates.
You have to write for yourself. People have said this for years, decades, millennia even - but nobody really believes that writing to an audience of zero (or one, if Mom is still around) is worth it.
Everyone wants to be a famous author, or at least a published/somewhat acknowledged one; few are willing to write their novel and be satisfied with zero or near-zero sales/readings.
But that is exactly what you need to do, especially in the age of AI. Everyone who was "in it to win it" (think linkedinslop which existed before AI) is going to certainly use AI - because they do not give a shit about the quality of themselves - they just want the result.
And you can only become a writer (unpublished, unread, or no) by doing the writing - it takes time (10,000 hours?) that cannot be replaced by AI, just like you can't have the body of a marathon runner without running (yes, yes, the joke). You may be able to get 26 miles and change away, even very fast, but unless you personally do the running of that distance without cheating, you will not get the inherent benefits.
And if you instruct an AI, or another human even, to write for you, you may get some of the results you want, but you won't have changed to become a writer.
We shouldn't celebrate the successful blogs; they're already rewarded enough. It's celebrating the unsuccessful blogs that is needed; even if, frankly, the vast majority of them are sub-AI levels of crap, there is still a human growing and progressing behind them.
Babies fall over a lot but unless you take them out of the stroller and let them do so, they'll never progress to crawling, walking, running.
> I’m not protective over the word “art”. Generative AI is art. It’s irredeemably shit art; end of conversation. A child’s crayon doodle is also lacking refined artistry but we hang it on our fridge because a human made it and that matters.
More pretentious gatekeeping from luddites who like to yell at clouds. This is someone who would love a piece of artwork created using AI tools right up until someone told them it was created using AI tools.
Rants about AI from people who have already decided up front to never actually attempt to use the tools (which seems to be the case here, from the post and the other one it links) are not really providing any value to the discourse.
There is nothing new about using machinery to automate boring / repetitive tasks, including the wall of resistance that comes up. But it should be clear that genuinely useful tooling and automation tends to become a normal part of life, from the plow, to the printing press, to the dishwasher, to digital video editing, to autocorrect, and now to large language models.
There's a lot that has to be worked out with LLMs in particular, as they are now encroaching heavily upon human creativity and thought. This is an extremely important topic. But rants like these, with terms like "the plagiarism machine" and conclusions like "the solution is that we all must vow to never use AI in any shape or form", are not really contributing.
"No AI" right above a robot voice playback button.
Mixed messages fr
Hot take: folks packing it in because of AI probably were not difference makers before AI, and wouldn't be difference makers after it either.
I agree with the author, keep writing. It helps hone your ability to communicate effectively which we all need for some time to come (at least until we become batteries).
> folks packing it in because of AI probably were not difference makers before AI
Anecdotal but I’ve been seeing a lot of the opposite. Some of those leaning in strongly are being propped up by the tools. Holding onto them like a lifeboat when they would have fallen off earlier.
> Improving developer skills is not valuable to your company
Every company I've ever worked at has genuinely believed in and invested in improving developer skills.
I've worked for 35ish companies (contract and fulltime), largely on the west coast of the US. I have experienced the lip service, from the vast majority. I have experienced maybe 2 or 3 earnest attempts at growing engineer skills through subsidized admission/travel to talks, tools, or invited instructors.
These two statements go hand in hand though. While I do believe companies could take the altruistic take of training people whether or not they stay, and some places do, they're certainly not going to make the effort for someone who has clear markers of being someone who will leave anyway.
> I've worked for 35ish companies
It seems they were correct not to invest in your skills.
I've worked for six companies over almost 20 years. The majority invested in my skills, and I hope that investment has paid off for them!
What exactly do you have in mind? The large companies I've worked at had book subscriptions, internal training courses, and would pay for school. Personally I don't see the point of any of it. For software engineering, the info you need is all online for free. You can go download e.g. graduate level CS courses on youtube. IME no one's going to stop you from spending a couple hours a week of work time watching lectures (at least if you're fulltime). I also don't really see how attending conferences relates to skill improvement. Meanwhile, I've been explicitly told by managers that spending half my time mentoring people sounds reasonable.
Hard same over 20 years
This percentage is probably right on the money!
You must be lucky then.
Every company I worked for didn’t give a shit about my skills. They just wanted to solve the problem in front of them and if they couldn’t then they would hire someone in with the right skills. Improving my skills was seen as a risk as I might leave.
I’ve had both experiences, sometimes at the exact same company.
That’s been my experience, too. But now I get a sort of, “I dunno. Maybe don’t use AI on Fridays?”
There doesn’t seem to be a plan for maintaining that culture.
The opposite is true in my case, though one organization had a small budget for things like AWS certs. I remember almost everyone who got those certificates never really learned anything from them either. They would just take the exams.
> Improving developer skills is not valuable to your company.
Yet every company does it, except the worst sweatshops.
Maybe I'm just getting extremely lucky, but I don't use AI to code at work and I'm still keeping up with my peers who are all Clauded up. I do a lot of greenfield network appliance design and implementation and have not really felt the pressure in that space.
I do use Claude Code at home maybe a couple hours a week, mostly for codebase exploration. Still haven't figured out how to fully vibe code: the generated code just annoys me and the agents are too chatty. (Insert old man shaking fist at cloud.)
The irony is that the vast deskilling that's happening because of this means that most "software engineers" will become incapable of understanding, let alone fixing or even building new versions of the systems that they are utterly dependent on.
There should be thousands or tens of thousands people worldwide that can build the operating systems, virtual machines, libraries, containers, and applications that AI is built on. But the number will dwindle and we'll ironically be unable to build what our ancestors did, utterly dependent on the AI artifacts to do it for us.
God I hope it doesn't all crash at once.
How many kernel devs does the world need? A dozen or two?
It will be the same with software. AI will be writing and consuming most software. We will be utilizing experiences built on top of that, probably generated in real time for hyper-personalization. Every app on your phone will be replaced by one app. (Except maybe games, at least for a short while longer.)
Everyone's treating writing code as this reverent thing. No one wrote code 100 years ago. Very few today write assembly. It will become lost because the economic necessity is gone.
It's the end of an era, but also the beginning of a new one. Building agentic systems is really hard, a hard enough problem that we need a ton of people building those systems. AI hardware devices have barely registered yet; we need engineers who can build and integrate all sorts of systems.
Engineering as a discipline will be the last job to be automated, since who do you think is going to build all the world's automation?
There is a deadly game of chicken going on. Junior recruiting already stopped for the most part. Only way this doesn’t end in a catastrophe is if AI becomes genuinely as good as the most skilled developers before we run out of them. Which I doubt very much but don’t find completely impossible.
And the irony is that AI usage should make onboarding juniors easier.
Before it was "hey $senior_programmer where's the $thing defined in this project?", which either required a dedicated person onboarding or someone's flow was interrupted - an expected cost of bringing up juniors.
Now a properly configured AI Agent can answer that question in 60 seconds, unblocking the Junior to work on something.
And no, it doesn't mean Juniors or anyone else get to make 10k line PRs of code they haven't read nor understand. That's a very different issue that can be solved by slapping people over the head.
Or if code quality stops mattering, in a kind of "ok, the old codebase is irretrievably spaghettified. Let's just have the chatbot extract all the requirements from it and build a clean-room version" kind of way. It's also not impossible we go that route.
I feel I've upskilled in so many directions (not just "ability to prompt LLMs") since going all in on LLM coding. So many tools, techniques, systems, and new areas of research I'd never have had the time to fully learn in the past.
I have a hard time believing any tenured developer is not actually learning things when using LLMs to build. They make interesting choices that are repeatable (new CLIs I didn't even know existed, writing scripts to churn through tricky data, using specific languages for specific tasks, like Go for concurrently working through large numbers of tasks, etc.).
Anyone not learning things via LLM coding right now either doesn't care at all about the underlying code/systems, or they had no foundational knowledge or interest in programming to begin with (which is also a valid way to use these tools, but they don't work very well without guidance for too long [yet]).
For me both are true at the same time.
I vividly remember understanding how calculus works after watching some 3blue1brown videos on youtube, but once I looked at some exercises I quickly realized I was not able to solve them.
Similar thing happens with LLMs and programming. Sure, I understand the code, but I'm not intimately familiar with it as I would be if I had programmed it "old school".
So yes, I do learn more, but I can't shake the feeling that there is some Dunning-Kruger effect going on. In essence, I think that "banging my head against the wall" while learning is a key part of the learning process. Or maybe it's just me :D
I think there's a considerable difference in its ability to help with breadth vs. depth of expertise.
What do you mean by "LLM coding"? That's not a very meaningful term, it covers everything from 100% vibe coded projects, to using the LLM to gradually flesh out a careful initial design and then verifying that the implementation is done correctly at every step with meticulous human review and checking.
This. I never had the patience to figure out how to build a from-scratch iOS app because it required too much boilerplate work. Now I do, and I got to enjoy Swift as a language, and learned a lot of iOS (and Mac) APIs.
>But the number will dwindle and we'll ironically be unable to build what our ancestors did, utterly dependent on the AI artifacts to do it for us.
That's only a brief moment in time. We learned it once, we can learn it again if we have to. People will tinker with those things as hobbies and they'll broadcast that out too. Worst case we hobble along until we get better at it. And if we have to hobble along and it's important, someone's going to be paying well for learning all of that stuff from zero, so the motivation will be there.
Why do people worry about a potential, temporary loss of skill?
Because they may have studied history... There are countless examples of eras of lost technology due to a stumble in society, where those societies were never able to recover the lost "secrets" of the past. Ultimately, yes, humans can rediscover/reinvent how to do things we know are possible. But it is a very real and understandable concern that we could build a society that slowly crumbles without the ability to relearn how to maintain the systems it relies upon fast enough to stop the continued degradation.
Like, yeah, you have the resources right now to bootstrap your knowledge of most coding languages. But that is predicated on so many previous skills learned throughout your life, adulthood and childhood, many of which we take for granted. And ultimately AI/LLMs aren't just affecting developers; they are infecting all strata of education. So it is quite possible that we build a society that is entirely dependent on these LLMs to function, because we have offloaded the knowledge from society's collective mind... And getting it back is not as simple as sitting down with a book.
I imagine it being a "does anybody know COBOL?!" moment, but much sooner than sixty years from now.
COBOL also came to mind.
The COBOL thing seems to be working out just fine last I heard. Today a small number of people get paid well to know COBOL's depths and legacy platforms/software. The world moved on, where possible, to lower cost labor and tools.
Arguably, that outcome was the right creative destruction. Market economics doesn't long-term incentivize any other outcomes. We'll see the arc of COBOL play out again with LLM coding.
>"That's only a brief moment in time. We learned it once, we can learn it again if we have to. "
Yes we can, but there is a big problem here. We will "learn it again" after something breaks, and the way the world currently functions, there might not be time to react. It is like growing food on an industrial scale. We slowly learned it over time. If it breaks now, with the knowledge gone, and we have to learn it again, it will end civilization as we know it.
>It is like growing food on industrial scale.
How many people do you think know how to do that today? It's in the millions (probably tens to hundreds of millions), scattered all across the globe, because we all need to eat. Not to mention all of the publications on the topic in many different languages. The only credible case for everyone forgetting how to farm is nuclear doomsday, and at that point we'll all be dead anyway.
>If it breaks now with the knowledge gone and we have to learn it again it will end the civilization as we know it.
I don't think there is a single piece of technology so critical to civilization that everyone alive could easily forget how it works while zero documentation on it exists.
These vague doomsday scenarios around losing knowledge and crashing civilization just have zero plausibility to me.
If a catastrophic failure occurs we will have to return to first principles and re-derive the solutions. Not so bad, probably enlivening even to get to spin up the mind again after a break.
> I got to do my hobby as a career for the past 15 years, but that’s ending.
Frankly, I don't think so. AI built on LLMs is the perpetual-motion-machine scam of our time. But it is cloaked in unimaginable complexity, and thus it is the perfect scam. Even the most elaborately hidden power source in a perpetual motion machine cannot fool nature, though, and the machine comes to a complete stop when that source runs out.
I love the perpetual motion machine / thermodynamics analogy.
It kind of feels like companies are being fooled into outsourcing/offshoring their jr. developer level work. Then the companies depend on it because operational inertia is powerful, and will pay as the price keeps going up to cover the perpetual motion lie. Then they look back and realize they're just paying Microsoft for 20 jr. developers but are getting zero benefit from in-house skill development.
This is silly. I can build products in a weekend that would take me a year by myself. I am still necessary 1% of the time for debugging, design, and direction, and those are not at all shallow skills. I have some graduate algebra texts on the way that my math friend is guiding me through, because I have found a publishable result and need to shore up my background before writing the paper...
It's not perpetual motion; it's very real capability. You just have to learn how to use it.
No one is saying that it cannot do what you say now.
What I am saying is that once the high-quality training data runs out, its capabilities will drop pretty fast. That is how I compare it to perpetual-motion-machine scams. A perpetual motion machine appears as if it will run indefinitely; that is analogous to the impression you have now. You feel that this will go on forever, and that is the scam you are falling for.
You're fooling yourself.
People yeeting a (shitty) GitHub clone with Claude in a week apparently can't imagine it, but if you know the shit out of Rails, start with a good boilerplate, and have a good git library, a solo dev can also build a (shitty) GitHub clone in a week. And they'll be able to take it somewhere, unlike the LLM rat's nest that will require increasingly expensive tokens to (frustratingly) modify.
You're fooling yourself. It's very easy to get demonstrably working results in an afternoon that would take weeks at least without coding agents. Demonstrably working, as in you can prove the code actually works by then putting it to use. I had a coding agent write an entire declarative GUI library for mpv userscripts, rendering all widgets with ASS subtitles, then proceeded to prove to my satisfaction that it does in fact work by using it to make a node editor for constructing ffmpeg filter graphs and an in-mpv nonlinear video editor. All of this is stuff I already knew how to do in practice, and had intended to do one day for years now, but I never bit the bullet because I knew it would turn into weeks of me poring over auto-generated ASS doing things it was never intended to do to figure out why something was rendering subtly wrong. Fairly straightforward, but a ton of bitch work. The LLM blasted through it like it was nothing. Fooling myself? The code works, I'm using it; you're fooling yourself.
You can see their ego trying to protect itself.
Funnily enough, I saw this post as I was placing my HN account on hiatus, because I'm tired of pretending that the quality of discourse is on par with what I've been used to reading and participating in.
We're obviously in an era where "good enough" is taken so far that what used to be the middle of the fictional line is not the middle point anymore but a new extreme. You're either someone who cares only about the output or someone who cares how readable and easy to extend the code is.
I can only assume this is done on hopeful purpose, with the hope that LLMs will "only keep improving linearly" to the point where readability and extensibility is not my problem but tomorrow's LLM's problem.
Ok but if you're a person that likes HN discourse but thinks "eternal september" has happened ... what's your plan?
You'll still come here, read the comments, see something engaging and want to reply and... feel sad because shakes fist at [datacenter] clouds it's all just bots talking to each other anyway.
Seems lame. Keep talking anyway.
Picking out my favorite idea out of many: we do need ways to stay mentally sharp in the age of AI. Writing and publishing is a good one. I also recommend stimulating human conversations and long-form reading.
More and more, the bar is being lowered. Don't fall into brain rot. Don't quiet quit. Stay active and engaged, and you'll begin to stand out among your peers.
> we do need ways to stay mentally sharp in the age of AI.
Here's my advice: if there's someone around you who can teach you, learn from them. But if there isn't anyone around you who can teach you, find someone around you who can learn from you. You'll actually grow more from the latter than from the former, if you can believe that.
I think there's a broad blindness in industry to the benefits of mentorship for the mentors. Mentoring has sharpened my thinking and pushed me to articulate why things are true in a way I never would have gone to the effort of otherwise.
If there are no juniors around to teach, seniors will forever be less senior than they might have been had they been getting reps at mentorship along the way.
[delayed]
I haven't heard this benefit for mentors clearly articulated before (probably just missed it), but I've definitely felt it. I guess it's a deeper version of how writing and other communication force clarity and organization of thought, because mentorship conversations are so focused on extracting the why as well as the what.
I can confidently say that, yes, reading helps a lot. My mental model has shifted a bit: words are cheap (printing -> writing -> typing -> generating), and we should accept that there is such a thing as high-quality text.
I haven't really been a reader, but I can definitely notice when a book/text is "hard". I'm currently reading the Old Testament, and I understand very little (even the Oxford one that has a lot of annotations is hard for me). I like this, because it's a measurement of what I don't know (if that makes sense).
For the first time in quite a while, I've started reading a challenging, non-computer book ("The New Testament in its World").
I'm trying to decide if my attention span has atrophied, or if I'm just more aware now of my ADD.
Either way, I'm hopeful that my attention span for this kind of reading will grow with practice.
I too have noticed that my attention span has atrophied. It started pre-AI, at least for me. Post-internet, though.
I think browser tabs and screen (the terminal multiplexer) did it for me.
If you haven't read a book in a while, I've noticed it's a thing you need to practice.
There are many things the AI can't do.
Do you want a Stairmaster with that elevator? Life is for living, ostensibly. This Inevitabilism drone choir[1] may be correct that it will take my current job, and after that maybe there will be nothing fruitful left in that department. But I can’t imagine a life situation where I’m both surviving and using thinking-with-my-brain as some retirement-home pastime plus “brainrot” preventer.
> Stay active and engaged, and you’ll begin to stand out among your peers.
Here’s how the rat race looks in the age of AI and how you can stay ahead.
[1] https://news.ycombinator.com/item?id=47487774
hoped for something useful in your link, found drivel.
its drivel all the way down, act accordingly
Or, you know, writing some code every day.
I'm pretty sure all this AI is built on top of Silicon valley's technobabble of "permanent underclass" which seems to have zero introspection as to why we're just going to accept the feudal overlords of technology.
But besides that, it's interesting so many people are willing to tailor their entire workflow and product to indeterminate machines and business culture.
I recommend everyone stop using these infernal cloud devices and start with a nice local model that doesn't instantly give you everything, but is quite capable of removing a select amount of drudgery, which is rather relaxing. And as soon as you get too lazy to do enough specifying or real coding, it fucks up your dev environment and you slap yourself a hundred times wondering why you ever trusted someone else to properly build your artifacts.
There's definitely some philosophy being edged into our spaces that needs to be combated.
I agree on the over-reliance part, but I don’t think it’s AI itself. It’s how people choose to use it.
Most people are outsourcing thinking instead of using it to go deeper. The tools aren’t the problem, the default behavior is.
True, but the tools make the default behavior so tempting.
I have a friend who uses Google Maps to find places, then memorizes the route there and closes the app to navigate because he wants to build a better mental map of our city. Meanwhile, I just check the app every five seconds like a dummy, and my hippocampus stays small.
This is a good parallel. In the 90s when I learned to drive I was quite good at navigating. Now google maps is on a screen in my car telling me where to go whenever I drive beyond my most common routes.
Really, all the research telling us about AI skill atrophy... we should have guessed it from previous experience.
Old people my entire life have made fun of younger people for “not being able to read maps” or something.
But I’ve never seen anyone follow a GPS so religiously into so many obvious dead ends than elderly Uber drivers.
Your friend uses Google Maps, while Google Maps uses you.
I'm pretty sure the -as-a-service stage is only temporary.
The local models are only going to get better, and the improvement curve has to top out eventually. Maybe the cloud models will still give you a few extra percentage points of performance, especially if they're based on data sets that aren't available to the public, but it won't make much difference on most tasks and the local models will have a lot of advantages too.
> which seems to have zero introspection as to why we're just going to accept the feudal overlords of technology.
You’ve let them in and given them power in many aspects of your life without even a whimper of resistance. Of course you’ll accept them as your lords.
I do find it hard to tolerate the feeling of being watched online. The second-most trending dataset on huggingface right now is a snapshot of HN updating at a 5 minute interval. It makes me not want to really comment at all, just like how I don’t really publish any software I write anymore.
Turns out it sucks to produce original works when you know that, whereas previously a few people at best might see your work, now it’s a bunch of omniscient robots and maybe half of those original people are using the robots instead.
I think the immediate term action is to viciously block all crawlers.
Writing a blog yes, feeding the beast no.
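For what it's worth, blocking the AI crawlers that respect robots.txt is mostly a config exercise. A minimal sketch (these user-agent strings are the ones the major vendors publish as of this writing; the list is incomplete, and compliance is voluntary, so treat this as a first line only):

```
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```

Determined scrapers ignore robots.txt entirely, so the "vicious" version means also matching these user agents at the CDN or web-server level.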
This sounds like a nice principled stance, but you won't get any traffic with this approach. That's demotivating - to me blogging is a tight balance of exploration, learning, improving and feedback. I'm not able to write without considering how this impacts the reader - removing all readers breaks the process for me.
Yeah, well, I just don’t care about the "AI dark forest".
You seriously need to go outside if you are so defeated by another chess-winning machine.
Nobody wants to watch AI play chess; nobody wants to read AI blog posts.
> The giant plagiarism machines have already stolen everything. Copyright is dead. Licenses are washed away in clean rooms.
Isn't this what the free software movement wanted? Code available to all?
Yes, code is cheap now. That's the new reality. Your value lies elsewhere.
You can lament the loss of your usefulness as a horse buggy mechanic, or you can adapt your knowledge and experience and use it towards those newfangled automobiles.
> Isn't this what the free software movement wanted? Code available to all?
Available to all yes. Not available to the giant corpos while the lone hobbyist still fears getting sued to oblivion. In fact that's pretty much the opposite of what the free software movement wanted.
Also the other thing the free software movement wanted was to be able to fix bugs in the code they had to use, which AI is pulling us further and further away from.
Progress is good. But why on earth should we support Anthropic/OpenAI/etc.? What the planet needs is fewer multibillion-dollar corporations, not more.
You don't have to. Just like you don't have to support Amazon for web services and file stores.
Or Oracle for databases.
Or Microsoft for operating systems.
Or DEC for computers.
There are perfectly good open source LLMs and agents out there, which are getting better by the day (especially after the recent leak!)
I want to support local models and compute over SaaS models.
I want to support RISC V over Intel.
I want other things too, and on balance, Intel+Anthropic is most compliant with my various preferences, even if they're not perfect.
No, the free software movement wants the source code of the software you use to be available to you, so that you can modify it if you wish. AI does not necessarily give you that.
AI makes the entirety of the software engineering profession available to you. All you have to do is ask the right way, and you can build in days what once took months or years.
Decompiling and re-engineering proprietary code has never been easier. You almost don't even need the source code anymore. The object code can be examined by your LLM, and binary patches applied.
Closed source is no longer the moat it was, and so keeping the source code to yourself is only going to hurt you as people pass you over for companies who realize this, and strive to make it easier for your LLM to figure their systems out.
But I can't have the weights of the LLM model I'm using for this.
> Decompiling and re-engineering proprietary code has never been easier. You almost don't even need the source code anymore. The object code can be examined by your LLM, and binary patches applied.
Jesus christ.
"The people who wanted everyone to have a home should be happy with the invention of the lockpick. You can just find a nice house, open the lock, and move in. Ignore the lockpick company charging essentially whatever they want for lockpicks, or how they got access to everyone's keyfob, or the danger of someone breaking into your house."
That is basically your argument. AI is a copyright-theft machine, with companies owning the entire stack and being able to take it away at will, and committing crimes like decompiling source code instead of clean-room reverse engineering is not a selling point either...
The open source community wants people to upskill, people become tech literate, free solutions that grow organically out of people who care, features the community needs and wants and people having the freedom to modify that code to solve their own circumstances.
> That is basically your argument. AI is a copyright-theft machine, with companies owning the entire stack and being able to take it away at will, and committing crimes like decompiling source code instead of clean-room reverse engineering is not a selling point either...
Stop trying to make this into some abstract argument. It's not an argument anymore. It's already happened.
How one might choose to characterize the reality is irrelevant. A vast (and growing) amount of source code is more open, for better or worse. Granted, this is to the chagrin of subgroups that had been pushing different strategies.
> The AI industry is 99% hype; a billion dollar industrial complex to put a price tag on creation. At this point if you believe AI is ‘just a tool’ you’re wilfully ignoring the harm.
> (Regardless, why do I keep being told it’s an ‘extreme’ stance if I decide not to buy something?)
> The 1% utility AI has is overshadowed by the overwhelming mediocracy it regurgitates.
This sort of reasoning is why you might have been called extreme.
It's less extreme to say "many people see and/or get lots of benefit, but it's wrong to use the tool due to the harms it has".
There's nothing wrong with extreme, but since you asked.
Yes, declaring AI to be 99% hype just turns away people like me from what the author has to say.
I was an AI sceptic for a long time, until toward the end of last year, when I seriously evaluated the tools and came to realise they could add tremendous value.
When someone comes along and declares that it's all hype, it goes against my experience that it's getting things done.
I can also see the harm it does, and I hope the tooling improves to reduce that harm. For example, there's a significant lack of caching in the tooling. It's constantly re-reading the same files every day, and more harmfully, constantly fetching the same help pages and blog-posts from the web.
If the tooling had a generous built-in HTTP cache, and an instruction to maximise use of that cache, it could avoid a lot of re-fetching of content, which would help reduce the harm.
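The cache wouldn't need to be fancy. Here's a minimal sketch of the idea in Python, assuming a time-to-live policy; the names (`cached_fetch`, `CACHE_DIR`) and the on-disk layout are hypothetical, not anything the existing tools actually implement:

```python
import hashlib
import time
from pathlib import Path
from urllib.request import urlopen

CACHE_DIR = Path(".http_cache")   # hypothetical cache location
DEFAULT_TTL = 24 * 3600           # re-fetch each URL at most once a day

def cached_fetch(url, ttl=DEFAULT_TTL, _fetch=lambda u: urlopen(u).read()):
    """Return the body for `url`, hitting the network only when the
    cached copy is missing or older than `ttl` seconds."""
    CACHE_DIR.mkdir(exist_ok=True)
    entry = CACHE_DIR / hashlib.sha256(url.encode()).hexdigest()
    if entry.exists() and time.time() - entry.stat().st_mtime < ttl:
        return entry.read_bytes()  # cache hit: no request made
    body = _fetch(url)             # cache miss: fetch and store
    entry.write_bytes(body)
    return body
```

A real implementation would want to honour `Cache-Control` headers and conditional requests (`ETag`/`If-Modified-Since`) rather than a blunt TTL, but even something this crude would stop an agent from hammering the same docs page dozens of times a day.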
Declaring my experience to be invalid and based on nothing but hype doesn't engage people like me at all.
And it's the people like me, the middle-of-the-road developer working on enterprise software, that either need convincing to not use the tools, or for our habits to change to minimise the harm.
Because otherwise we're quietly getting on with using it, potentially destroying forests and lakes as we do.
It’s worse than that: in the linked “I’ve done my research” post they make the tired claim that AI hallucinates API calls, which, while true, has not been a practical problem since tool calling was added.
I think the position that AI is morally troubling enough that the downsides outweigh the positives is perfectly defensible. But the entire argument becomes a joke when you can’t accurately catalogue the positives.
I think the fact that you need tool calling to stop it doing that shows the underlying issue with trusting it to do anything without a human.
At this point, I’m pretty sure saying “I’ve done my research” is more of an indicator that someone hasn’t done their research but would like to be taken seriously anyway by pretending they did. The kind of person who’s both smart enough to realize that an issue might be more nuanced than they present it, as well as intellectually dishonest enough to… not care.
It's funny -- when LLMs do this, it's usually a sign that their confidence is also misplaced.
Amara’s law: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”
Anti-AI articles like this seem to be the new "Doing my part to resist big tech: Why I'm switching back from Chrome to Firefox" genre that popped up on HN for a decade or so. If it makes you feel better, great, but don't kid yourself that your actions will make any difference whatsoever to the overall trajectory of AI adoption in IT or society.
A non-sequitur, but I really like the style of the blog. Good job.
Can’t we just sabotage AI? We have the means, for sure (speed-of-light communication across the globe). For once in the history of software engineering we could get together like other professionals do. Sadly, our high salaries and perks won’t make the task easy for many:
- spend tons of tokens on useless stuff at work (so your boss knows it’s not worth it)
- be very picky about AI generated PRs: add tons of comments, slow down the merge, etc.
I think you should be very picky about generated PRs not as an act of sabotage but because very obviously generated ones tend to balloon complexity of the code in ways that makes it difficult for both humans and agents, and because superficial plausibility is really good at masking problems. It's a rational thing to do.
Eventually you are faced with company culture that sees review as a bottleneck stopping you from going 100x faster rather than a process of quality assurance and knowledge sharing, and I worry we'll just be mandated to stop doing them.
> be very picky about AI generated PRs: add tons of comments, slow down the merge, etc.
But that's the opposite of sabotage, you're actually helping your boss use AI effectively!
> spend tons of tokens on useless stuff at work (so your boss knows it’s not worth it)
Yes, but the "useless" stuff should be things like "carefully document how this codebase works" or "ruthlessly critique this 10k-lines AI slop pull request, and propose ways to improve it". So that you at least get something nice out of it long-term, even if it's "useless" to a clueless AI-pilled PHB.
One problem writing does have: we grew up in a massively changing and progressing software-writing era. A golden era.
Now I still show Clean Code videos from Bob and other old things to new hires and young colleagues.
Java got more features, granted, but the golden era of discovery is over.
The new big thing is AI, and I'm curious to see how it will feel to write real agents for my company-specific use cases.
But I'm also seeing people so bad at their daily jobs that I wish I could get their salary as tokens to use. It will change, and it is changing our field.
Btw., regarding "Is there anything, in the entire recorded history of human creation, that could have possibly mattered less than the flatulence Sora produced? NFTs had more value." I disagree: video generation has a massive impact on the industry for a lot of people. Don't downplay this. NFTs, btw., never had any impact besides moving money from A to B.
> But i'm also seeing people so bad in their daily jobs, that I wish to get their salary as tokens to use.
Oof. The modern "Go away or I will replace you with a very small shell script"
At least have the grace to show new hires Rich Hickey lectures or something. Uncle Bob is nonsense.
I'm giving them a lot more, but I assumed people know Uncle Bob. Things like the open source architecture books, Google's SRE books, 1:1 mentoring every week.
But yeah, there's always one person made of Teflon. Nothing sticks. And I could point out that Teflon person in every company I've worked at so far.
I quit. The clankers won.
I don't see any proof that software development is not dead. Software engineering is not: it's much more than writing code, and it can be fun. But writing code is dead; there is no point in doing it if an LLM can output the same code 100x faster. Of course, architecture and operations stay in our hands (for now?).
Initially I was very sceptical; the first versions of ChatGPT or Claude were rather bad. I kept holding on to the thought that it couldn't get good. Then I spent a few months evaluating them: if you know how to code, there is no point in coding by hand anymore. Just instruct an LLM to do something, verify, merge, repeat. It's an editor of sorts, an editor where you enter a thought and get code as output. Changes the whole scene.
LLMs don't really output the same code quality as a human, even on the smallest scale. It's not even close. Maybe you can guide them to refactor their slop up to human-written quality, but then you're still coding. You're just doing it by asking the computer to write something, instead of physically typing the whole thing out with a keyboard.
Yeah I also keep thinking this. I don't see LLMs reliably producing code that is up to my standards. Granted I have high standards because I do take pride in producing high quality code (in all manner of metrics). A lot of the time the code works, unfortunately only for the most naive, mechanical definition of "works".
Useful tool, and if you're just scratching a small itch it's great.
For any serious system you still need to understand and guide the code, and unless you do some of the coding yourself, you won't. It's just that the novelty right now is skewing our reasoning.
Paha, I thought this domain was 'D-Bus Hell' until I clicked in. (It's D. Bushell's blog.)
> The only winning move is not to play.
Alas, I think the tech crowd has collectively painted humanity into a corner where not playing is no longer an option.
The combination of having subverted copyright and enabled cheap machine replication kills large swaths of creativity, at least as a viable living. One can still do many things at an artisanal level, certainly, and as excited as I am about AI, it's hard not to see it as a big L for humanity's creative output.
Man I love the design of your site, and that goldfish made my day.
For the article it was nice, but the font is really what got me.
Old web stuff is still around. RSS feeds are out there. Some parts of masto are generally chill and filled with people having interesting convos.
You don't have to give up on everything to participate, but it can be a space to go to if you're tired of every social interaction being mediated by (I'm being glib) hustlers
Good lord I’m going to have to figure out some way to filter Hacker News. I’m so tired of this same sort of article (and the opposite) being posted every day. AI isn’t going away. AI is better than you think it is. AI is probably also worse than you think it is. The world has nuance, so can we please all chill?
Sort of hard to do, because AI is shoved down your throat in one form or another virtually everywhere you go. I also think a lot of us hackers are mourning the fact that we spent many years mastering machines and programming, just to have the skill devalued (at least from the public's perspective) nearly overnight.

I personally think it is more important now than ever to understand technology: to be able to write code, to understand how a CPU works, etc. Tech literacy will help prevent doom scenarios. A future where virtually everyone depends on AI and computers but lacks people who actually understand them at a low level seems bleak.

I know thinking itself seems to have gone out of fashion, and that's given rise to misinformation and/or political nonsense like the rise of fascism etc... I think a lot of us just feel "empty" and are trying to express it.
I get it. I’ve been doing this for 11 years. I use agents everyday at work now and deal with all the benefits and problems of that. The craft is certainly changing and it will take years for everything to shake out and settle. I understand the desire to publicly wax poetic, but nobody actually knows shit about where we will land, so it gets a bit tiresome to see over and over.
I laugh jollily in the face of AI. I know the coming shit pile; its nature isn't going to be surprising, only the speed, and the utter surrender of the vast majority of humanity to mediocrity.
What AI represents to me is a teacher! I have so long lacked a music teacher and musical tools. I spent my entire career doing invisible software at the lowest levels, and now I can finally build cool tools that help me learn, practice, and enjoy playing music! Screw all the haters; if you're curious about a wide range of topics and already have some knowledge, you can gallivant across a vast space and learn a lot along the way.
AI is a bit of a bullshitter, but don't take its bullshit as truth, just as you should never take anything your teacher says as gospel. How do we know what's true? The truth of the universe and the world is that underneath it all, it is self-consistent, and we keep making measurement errors. The AI is an enormous pot of magic that it's up to you to organize with... your own skills.
You have to actively resist deskilling by doing things. AI should challenge you and reward you, not make you passive.
Use AI to teach yourself by asking lots of questions and constantly testing the results against reality.
For me right now, that's the fretboard.
This feels weird somehow. It feels like: damn, we can’t train our AI any better because everything is regurgitated slop now. How can we get people to create new content for us, hopefully with new ideas...
Might be just me though, but I definitely don’t get why blogging should be the solution.
"Generative AI is art. It’s irredeemably shit art; end of conversation."
I think most people cannot distinguish between "genuine" creativity and an artificial amalgamation of training data and human-provided context. For one, I do not know what already exists. Some work created by AI may be an obvious rip-off of the style of a particular artist, but I wouldn't know. To me it might look awesome and fresh.
I think many of the more human-centric thinkers will be disappointed at how many people just won't care.
Further I'd argue we KNOW people don't care if you look at the music industry.
Pop music is often composed by dozens of people who specialize in a thin sliver of the track - lyrics, vocals, drums, &c. - and then it's given a pretty face and makes the charts. That's really no different than something like Suno.
I think AI is forcing people who thought that THEIR thing was too nuanced or too complex to be replaced by technology to reckon with what makes them special.
The question is how subtle AI can be. I feel like art sometimes seems to communicate A, and the artist intended to communicate A and perhaps some B, but clearly, it also hints at another C (and maybe also D, E, ..), which was not intended by the artist or recognised by many viewers, while to some people it's clearly there. Now where did that come from?
And can or will AI create it?
Most people are just utilitarian and do not care for "art" (in the high-art sense).
AI is perfect for that. It reveals, perhaps to the dismay of those who revel in high art, that the genuine creativity in art might be an illusion, if most people find AI output acceptable.
People have been having this debate with popular art forever. Some people do not even believe in taste, and that everyone's artistic opinions have equal merit.
Blog posts are an interesting case, they are a very good example of something where abundance of supply outstrips any demand so much that it cannot be realistic to expect a median level contribution to receive any significant attention.
Setting aside the self-delusion that makes a considerable number erroneously rate themselves above average, the reason you create blog posts should not be the attention you might gain; there simply are not the eyeballs. You create as a form of self-expression, to organise your thoughts, to create a record of them.
AI can never compete in those areas because, as it has always been, the act of creation itself is the goal.
I’ve decided the only way I’ll adopt a full automated agentic AI workflow the way companies want, is if I am allowed to hold multiple jobs at multiple companies.
Imagine having 6 software engineering jobs, each paying maybe $150k a year, all being done by agents.
Hell, I might even do this secretly without their consent. If I can hold just 10 jobs for about 3 or 4 years, I can retire and leave the industry before it all comes crumbling down in 2030.
The problem of course, is securing that many jobs. But maybe agents can help with applying for jobs.
Just because they invented cars doesn't mean you stop jogging.
When they invented cars (and cars became popular and affordable), people did stop walking everywhere. Jogging wasn’t popularized until the 1970s, when we all realized we needed to be intentional with fitness in our car-based society.
They did make it very hard for people to do anything else but use a car in many, many places though...
In the US, perhaps, which has had perhaps the bulk of its growth post-automobiles.
> Just because they invented cars doesn't mean you stop jogging.
They literally made it a crime to walk down the street.
across the street, no?
It's also a crime to jog on the railroad tracks.
If it's a crime to jog on railroad tracks, and the availability of rail makes it so that everything you need is only accessible by rail, I conclude that rail prevents you from jogging.
I'm sorry for all the people who lived in my original SimCity towns. They must have been nearly spherical.
The one I like better is: software is great at playing chess; that doesn't mean you cannot play too.
Read the post, it's a gotcha ;P I was scared too
No but everyone has gotten real fat since then.
This. And you can mog others with your toned body too.
I really like the sentiment and will quote this in the future! My own thoughts line up a bit closer to the article though, with this quote being a good summary of it:
> The 1% utility AI has is overshadowed by the overwhelming mediocracy it regurgitates.
This might be the coolest personal website theme I've ever seen.
You have to write for yourself. People have said this for years, decades, millennia even - but nobody really believes that writing to an audience of zero (or one, if Mom is still around) is worth it.
Everyone wants to be a famous author, or at least a published/somewhat acknowledged one; few are willing to write their novel and be satisfied with zero or near-zero sales/readings.
But that is exactly what you need to do, especially in the age of AI. Everyone who was "in it to win it" (think linkedinslop which existed before AI) is going to certainly use AI - because they do not give a shit about the quality of themselves - they just want the result.
And you can only become a writer (unpublished, unread, or no) by doing the writing - it takes time (10,000 hours?) that cannot be replaced by AI, just like you can't have the body of a marathon runner without running (yes, yes, the joke). You may be able to get 26 miles and change away, even very fast, but unless you personally do the running of that distance without cheating, you will not get the inherent benefits.
And if you instruct an AI, or another human even, to write for you, you may get some of the results you want, but you won't have changed to become a writer.
We shouldn't celebrate the successful blogs; they're already rewarded enough. It's celebrating the unsuccessful blogs that is needed. Even if, frankly, the vast majority of them are sub-AI levels of crap, there is still a human changing and progressing behind them.
Babies fall over a lot but unless you take them out of the stroller and let them do so, they'll never progress to crawling, walking, running.
Do people who journal exist in your world view?
> I’m not protective over the word “art”. Generative AI is art. It’s irredeemably shit art; end of conversation. A child’s crayon doodle is also lacking refined artistry but we hang it on our fridge because a human made it and that matters.
More pretentious gatekeeping from Luddites who like to yell at clouds. This is someone who would love a piece of artwork created using AI tools right up until someone told them it was created using AI tools.
"You can just blog things"
"Let them write blogs!"
Fucking hilarious domain name. David is unfortunately not announcing a rewrite of the Linux IPC stack!
Ha I read it as the DBU Shell but I guess dbus hell is more natural.
How did I miss DBUS Hell haha
Rants about AI from people who have already decided up front never to actually attempt to use the tools (which seems to be the case here, from this post and the other one it links) are not really providing any value to the discourse.
There is nothing new about using machinery to automate boring / repetitive tasks, including the wall of resistance that comes up. But it should be clear that genuinely useful tooling and automation tends to become a normal part of life, from the plow, to the printing press, to the dishwasher, to digital video editing, to autocorrect, and now to large language models.
There's a lot that has to be worked out with LLMs in particular, as they are now encroaching heavily upon human creativity and thought. This is an extremely important topic. But rants like these, with terms like "the plagiarism machine" and "the solution is that we all must vow to never use AI in any shape or form", are not really contributing.
Trying to understand why it would matter whether their hosting provider used AI or not. Genuine question, so I can understand your take.
You are a good employee! Python people always shill for their employer's opinions.
"No AI" right above a robot voice playback button.
Mixed messages fr
Hot take, folks packing it in because of AI probably were not difference makers before AI, and wouldn't be difference makers after it either.
I agree with the author, keep writing. It helps hone your ability to communicate effectively which we all need for some time to come (at least until we become batteries).
> folks packing it in because of AI probably were not difference makers before AI
Anecdotal but I’ve been seeing a lot of the opposite. Some of those leaning in strongly are being propped up by the tools. Holding onto them like a lifeboat when they would have fallen off earlier.
What does a synthesized audio playback button have to do with AI as commonly and hotly discussed?
David Bushell, your writing was boring before AI. The bar can't be lowered any further.