This might sound like snark, but I truly don’t mean it that way.
I think what’s interesting about AI, and why there’s so much conversation, is that in order to be a good user of AI, you have to really understand software development. All the people I work with who are getting the most value out of using AI to deliver software are people who are already very high-skilled engineers, and the more years of real experience they have, the better.
I know some guys who were road warriors for many years: everything from racking and cabling servers, setting up infrastructure, and getting huge cloud deployments going all the way to embedded software, video game backends, etc. These guys were already really good at automation, seeing the whole life cycle of software, and understanding all the pressure points. For them, AI is the ultimate power tool. They’re just flying with it right now. (All of them also are aware that the AI vampire is very real.)
There’s still a lot to learn, and the tools are still very, very early on, but the value is clear.
I think for quite a few people, engaging with AI is maybe the first time in their entire career that they are having to engage with systems thinking in a very concrete and directed way. This is why so many software engineers are having an identity crisis: they’ve spent most of their career focusing on one very small section of the overall SDLC, while believing that was mostly all they needed to know.
So I think we’re going to keep talking for quite a while, and the conversation will continue to be very unevenly distributed. Paradoxically, I’m not bored of it, because I’m learning so much listening to intelligent people share their learnings.
Hey, I don't think this sounded like snark at all. Super grounded take.
> I think what’s interesting about AI, and why there’s so much conversation, is that in order to be a good user of AI, you have to really understand software development.
This I agree with completely. You can see it in the difference between a prompt where you know exactly what you want and one where things are a little woolly. A tool in the hands of a well-trained craftsperson is always put to better use.
> So I think we’re going to keep talking for quite a while
Me neither, and to be clear I'm okay with that. This was mostly a rant at the lack of diversity of discourse.
This is really not true. There are stories of people who had no background in software engineering who now write entire applications using AI. And I have personally seen this happen.
Smart people can hit the ground running if they're freed from the need to first learn the intricacies of a new language. We're going to see an explosion in the number of people writing software as clever people who invested their time in something other than learning to program are now able to write software for themselves.
Absolutely. As an early/mid-level SDET/SRE, I can move so fast on prototyping full, good apps now. That style of thinking is serving me well; even knowing about queues, basic infra, etc. is plenty to produce decent code. Interesting time to be laid off.
AI makes a ton of bad decisions too, and it's up to you to work with it. If I had deeper knowledge of the dangers hidden in the things I'm developing, I'd move even faster.
I was able to make a great full web app, which I think is now hardened for prod, though it had to be refactored to get there. Which the AI happily did.
It's really about asking the right questions, breaking down tasks, and planning now. I'm going to tackle a huge project, hoping to share it here.
Spot on take. The people I’ve noticed that say things like “it’s not useful” are the ones who are doing so little they can’t see the value.
This isn’t to say there’s not hype. Just that if you’re not seeing big productivity gains you need to make sure you really are an outlier and not just surplus to requirements.
Isn't that scary though? A bunch of people are going to be forced to use a tool that keeps them ignorant, and they absolutely won't know if it's doing correct things, to the point that as you retire, the next crop is going to be much less involved in knowing what's going on.
It's what happened with the internet and computer usage. As Apple made it easier to get online with zero computer knowledge, suddenly we're electing people like Donald Trump.
This is bad in tech. But at least we are (relatively) well equipped to deal with it.
My partner teaches at a small college. These people are absolutely lost, with administration totally sold on the idea that "AI is the future" while lacking any kind of coherent theory about how to apply it to pedagogy.
Administrators are typically uncritically buying into the hype, professors are a mix of compliant and (understandably) completely belligerent to the idea.
Students are being told conflicting information -- in one class that "ChatGPT is cheating" and in the very next class that using AI is mandatory for a good grade.
The wild part is they’re having this reaction while using the most rigid and limited interfaces to the LLMs. Imagine when the capabilities of coding agents surface up to these professions. It’s already starting to happen with Claude Cowork. I swear if I see another presentation with that default theme…
This. As annoying as all sorts of 'safety features' are, the sheer amount of effort that goes into further restricting that on the corporate wrapper side makes the LLMs nigh unusable. How can those kids even begin to get an idea of what it can do when it seems so severely locked down?
This is really interesting. I've been out of education for a long time, but I was wondering how they were dealing with the advent of AI. Are exams still a thing? Do people do coursework now that you can spew out competent sounding stuff in seconds?
I’m bored of using the AI for anything other than my work. Because with my work I can give very detailed and structured prompts and get the best results, while also being able to evaluate the answer. For everything else I’m kinda worn out by second guessing all the time or having to enter a long thread until I get a decent response.
How do I answer this without spamming: Yes, very much.
Everyone is in their own place adapting (or not) to AI. The disconnect between even folks on the same team is just crazy. At least it's gotten more concrete (here's what works for me, what do you do) vs catastrophizing about the jobpocalypse or "teh singularity", at least in day-to-day conversations.
I'm sure as hell bored of the current conversations people are having about ai.
> here's what works for me, what do you do
This is at least progress... but many want to remain in denial, and can't even contemplate this portion of the conversation.
We're also ignoring the light AI shines on our industry, and how (badly) we have been practicing our craft. As an example, there is a lot of gnashing of teeth right now about the VOLUME of code generated and how to deal with it... how were you dealing with code reviews? How were you reviewing the dependencies in your package manager? (Another supply chain attack today, so someone is looking, but maybe not you.) Do you look at your DB or OS? Do two decades of LeetCode, brain-teaser, FAANG-style interviews qualify candidates who are skilled at reading code? What is good code? Because after close to 30 years working in the industry, let me tell you, the sins of the LLM have nothing on what I have seen people do...
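For what it's worth, that kind of dependency review doesn't have to be heavyweight either. Here's a minimal sketch that checks pinned Python dependencies against the public OSV vulnerability database; the simple "name==version" requirements.txt format and the response handling are my assumptions for the example, not anything from the comment above:

```python
# Minimal sketch: check pinned Python dependencies against the public OSV
# vulnerability database. Assumes a simple "name==version" requirements.txt.
import json
import urllib.request

OSV_URL = "https://api.osv.dev/v1/query"

def known_vulns(name: str, version: str) -> list:
    payload = json.dumps({
        "package": {"name": name, "ecosystem": "PyPI"},
        "version": version,
    }).encode()
    req = urllib.request.Request(
        OSV_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])

if __name__ == "__main__":
    with open("requirements.txt") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "==" not in line:
                continue  # skip comments and unpinned entries in this sketch
            name, version = line.split("==", 1)
            hits = known_vulns(name, version)
            if hits:
                print(f"{name}=={version}: {', '.join(v['id'] for v in hits)}")
```

Tools like pip-audit do this properly; the point is only that the review being asked about is cheap to start doing.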
What I miss is people showing off their hand-crafted libraries or frameworks. That’s become way less common now that everyone is building a layer up the stack. I fear we’ll be stuck in a permanent state of using Tailwind and React and all the LLM-favored libraries as they were frozen in time at the beginning of 2025. Then again, that’ll be the agent’s problem, not mine…
All that said, it’s extremely exciting. I’ve been in tech, in one way or another, for 25 years. This is the most energizing (and simultaneously exhausting) atmosphere I’ve ever felt. The 2006-2011 years of early Facebook, Uber, etc. were exciting but nothing like this. The future is developing faster than we can process it.
Perhaps we're in an AI summer and a tech winter. Winter is always the time when people hole up, dream, and work on whatever big thing is next.
We're about due for some new computing abstractions to shake things up I think. Those won't be conceived by LLMs, though they may aid in implementing them.
The stacks of turtles that we use to run everything are starting to show their bloat.
The other day someone was lamenting an onslaught of bot traffic and having to deal with blocking it. Maybe we need to get back to good old-fashioned engineering and optimization. There was a thread on here the other day about PC Gamer recommending RSS readers and having a 36 GB webpage ( https://news.ycombinator.com/item?id=47480507 )
Among non-programmers, you always hear about some fool that fell in love with an AI girlfriend or whatever, but you never hear about the people who opened ChatGPT once, tried some things with it, said to themselves "huh, that's kind of neat," and then lost interest a day or two later, having conceived of no further items to which AI could provide assistance.
I actually hear about this fairly often. In quite a few of my college classes, there's a large focus on AI (even outside the computer science department). I find it surprising the number of non-technical people who don't even think to use it, or otherwise haven't interacted with it except when required.
“Everything has already been said, but not yet by everyone.”
— Karl Valentin
---
Personally, I'm still very interested in the topic.
But since the tech is moving very fast, the discussion is just very, very unevenly distributed: there are lots of interesting things to say, but a lot of takes that were relevant six months ago are still being digested by most.
This is a great saying, thank you for sharing it. Out of curiosity, do you have any links to interesting AI articles you've read recently? Maybe I'll change my mind.
Not limited to here, of course. Net-new publications to ArXiv for some (most?) CS subcategories are >=90% about models, transformers, training, quantization, or some other directly related field, or how to apply these towards a different specialty.
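If you'd rather eyeball that proportion than take the >=90% figure on faith, a rough script against the public arXiv API is enough; the keyword list below is arbitrary, so treat the output as a sanity check rather than a measurement:

```python
# Rough way to eyeball that ratio: pull recent cs.CL submissions from the
# public arXiv API and count titles matching model-centric keywords.
import re
import urllib.request

URL = ("http://export.arxiv.org/api/query?search_query=cat:cs.CL"
       "&sortBy=submittedDate&sortOrder=descending&max_results=200")
KEYWORDS = re.compile(r"\b(llm|language model|transformer|quantization|fine-?tun)", re.I)

feed = urllib.request.urlopen(URL).read().decode()
titles = re.findall(r"<title>(.*?)</title>", feed, re.S)[1:]  # first <title> is the feed itself
hits = sum(bool(KEYWORDS.search(t)) for t in titles)
print(f"{hits}/{len(titles)} recent cs.CL titles match model-centric keywords")
```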
It's a black box that thinks for me, sometimes it's good, sometimes it's bad, sometimes it times out.
I am extremely skeptical of AI products anyone builds. It's just using one black box to build scaffolding around another black box and then typically want to charge money for it. I don't see any value there.
AI is starting to look like a net negative for humanity. I remember the early days of OpenAI. I was super excited about it. There was a new space to uncover and learn about. I was hopeful.
Now I have this love/hate relationship with it. Claude Code is amazing. I use it everyday because it makes me so much more efficient at my job. But I also know that by using it I’m contributing to making my job redundant one day.
At the same time I see how much resources we are wasting on AI. And to what end? Does anybody really buy the BS that this will all make the world a better place one day? So many people we could shelter and feed, but instead we are spending it on trying to make your computer check and answer your emails for you. At what point do we just look up and ask… what is the damn purpose of all of this? I guess money.
Well, on the other hand, software isn’t all about checking emails.
I know someone who worked for a nonprofit that made pregnancy health software that worked over text messaging. Its clients were women in Africa who didn’t have much, but they had a cell phone, so they could get reminders, track vitals, and so forth.
They had to find enough funding to pay several software engineers to build and maintain that system. If AI allows a single person to do it, at much lower cost, is that bad?
> But I also know that by using it I’m contributing to making my job redundant one day.
I don't see how this is the case if you're anything more than a junior engineer... it unlocks so many possibilities. You can do so much more now. We are more limited by our ideas at this point than anything else.
Why is the reaction of so many people, once their menial work gets automated, "oh no, my menial work is automated." Why is it not "sweet, now I can do bigger/better/more ambitious things?"
(You can go on about corporate culture as the cause, but I've worked at regular corporations and most of FAANG. Initiative is rewarded almost everywhere.)
> Does anybody really buy the BS that this will all make the world a better place one day?
Why is it BS? I'm shocked that anyone with a love and passion for technology can feel this way. Have you not seen the long history of automation and what it has brought humanity?
There is a reason that we aren't dying of dysentery at the ripe age of 45 on some peasant field after a hard winter day's worth of hard labor. The march of automation and technology has already "made the world a better place."
Yes. Go to Mastodon. I accidentally stumbled on Mastodon last night (I knew about it of course but largely ignored it). Of the 100 or so posts, they were all cool stuff. Only one was AI related and it was more a researchy geeky thing than the brainrot "I fired all my staff an hour ago. They were not happy. CRLF. CRLF. I have an agentic circus and I am the ringmaster of 666 agents. CRLF....." crap you get on LinkedIn.
I've been meaning to try Mastodon for a long time (I was never really a Twitter user). As others have said elsewhere though, I'm not sure where to start. Did you just download the app and join mastodon.social?
I'm old enough to remember being fatigued with so many people talking about making "apps". Programs that run on a phone. Before that everyone was excited about blogging. Web 2.0 ugh.
Before that we were excited about the wheel and the creation of fire. All capital drained into those ephemeral fancies.
Yeah. I don't mind AI, but I'm waiting for it to stabilize and for a good workflow to become replicable for non-toy problems, one that will survive and evolve for a long time. I don't think I lose out much by not having 10 agents doing my work for me right now. In 6 months or some years or whatever I can just learn the new way of doing it. It's just exhausting with how much it changes month to month. Do I use it? Yes. Probably suboptimally. I'll learn later, though.
Like the new frontend frameworks coming out every week from sometime after 2010. Not jumping on every single one and waiting until React was declared the winner before learning it worked well. Sure, someone who used it from day 1 had more experience, but one quickly catches up.
Gosh, how I miss the old HN days… where one would actually code, read docs, develop stuff, and feel happy about it. Not write a prompt and watch a chatbox do all the work in a matter of seconds. It's like we're losing the meaning of building something… don't know how to explain it more. But yeah, it's tech! Nothing stays the same.
I'm like 99% convinced that most of the AI conversation upvotes at this point is astroturfing. I just don't see the correlation with the sentiment I get from talking to people in the real world (mostly negative AI sentiment) vs what I see here
There are definitely some people working overtime to overhype AI on here. Like 50% of the comments on this are from simianwords, who only posts when people express negative AI sentiments.
I think the advancements around models and such are still somewhat interesting, but it's all the hype around peripheral things like OpenClaw, agentic workflows, and other hyped-up AI-adjacent news that is getting pretty old.
I think the workflows can be really interesting to read about. The other week I read a Reddit post about how someone got Qwen3.5 35B-A3B to go from 22.2% on the 45 hard problems of swebench-verified to 37.8% (Opus 4.6 gets 40%).
All they essentially did was tell the LLM to test and verify whether the answer is correct with a prompt like the following:
>"You just edited X. Before moving on, verify the change is correct: write a short inline python -c or a /tmp test script that exercises the changed code path, run it with bash, and confirm the output is as expected."
Now whether this is true, I don't know, but I think talking about this kind of stuff is cool!
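For anyone who wants to try the idea, the loop is simple enough to sketch. This is only a skeleton: call_model() and run_bash() stand in for whatever agent framework or API client you actually use, and nothing here is specific to one vendor.

```python
# Sketch of the "verify before moving on" loop described above.
import subprocess

VERIFY_PROMPT = (
    "You just edited {path}. Before moving on, verify the change is correct: "
    "write a short `python -c` one-liner or a /tmp test script that exercises "
    "the changed code path, and output only the bash command to run it."
)

def run_bash(cmd: str) -> str:
    """Run the model's verification command and capture its output."""
    done = subprocess.run(cmd, shell=True, capture_output=True, text=True, timeout=120)
    return done.stdout + done.stderr

def edit_then_verify(edit_task: str, path: str, call_model) -> str:
    call_model(edit_task)                                   # 1) make the edit
    test_cmd = call_model(VERIFY_PROMPT.format(path=path))  # 2) ask for a verification command
    return run_bash(test_cmd)                               # 3) run it; feed the output back on failure
```

The whole trick is step 3: the model only sees real execution output instead of being allowed to declare its own edit correct.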
I'm becoming more bullish on AI, but it's still frustrating how much of the metaphorical oxygen it's taking. I feel like I'm hearing less about developments in software tech outside of AI fields.
I’m confused why the hype and the investment got so high. And why everyone treats it like a race. Why can’t we gradually develop it like DNA sequencing?
To be fair, DNA sequencing was very hyped up (although not nearly as much as AI). The HGP finished two years ahead of schedule, which is sort of unheard of for something in its domain, and was mainly a result of massive public interest about personalized medicine and the like. I will admit that a ton of foundational DNA sequencing stuff evolved over decades, but the massive leap forward in the early 2000s is comparable to the LLM hype now.
I assumed it was obvious. Being first is all that matters. Investors don't want to invest in second place. Obviously, first is achieving AGI and not some GPT bot. That's why so many people keep saying AGI is _____ weeks away, with some even being preposterous enough to state that AGI might have already happened. They need to keep attracting investors. Same as Musk constantly saying FSD is ____ weeks away.
I think what's crazy is the desire to replicate current-day corporate structures. Look at this multi-agent Jira-story-reading bot that builds stuff because we let it churn overnight. The whole idea is that you don't need that nonsense to build something amazing.
Pretty funny boasting about accelerated results, when his public contributions are only in two repositories (gstack itself and a rails bundle with 14 commits).
Endlessly grooming the Agent reminds me of Gastown.
Curious to see what he'll present, if anything, from his 700+ contributions in private repositories.
It's the most transformative technology I've clocked in my lifetime (and that includes home computers and the Internet).
Large organizations are making major decisions on the basis of it. Startups new and old will live and die by the shift that it's creating (is SaaS dead? Well, investors will make it so). Mass engineering layoffs could be inevitable.
Sure, "I vibe coded a thing" is getting pretty tired. The rest? If anything we're not talking about it enough.
I'm largely bored of wrappers; what still interests me are the new modalities being released and progressed on, like small local VLMs, voice-to-voice, and TTS.
No, well, I still enjoy the articles. The thing that always surprises me is the negativity in comment threads. I'm genuinely quite excited about AI based development. Yesterday I was playing around with developing a marketing plan for a market gap where we could leverage our product and finding what features in our product would need changing/adding to improve our offering. Quite interesting results!
I think in most places on the internet the negative comments are the ones that will win out. Same for AI I suppose. I tried not to bemoan the whole concept here, just the amount of 'airtime' it gets. Sort of like when something happens in the news (lately it's been the Epstein files for me), and you wish you could see a more balanced picture of world events.
No, but definitely tired of the "influencer" takes. You would think that this AI thing has been all but figured out, when really, even with the biggest, openest claws, we are still barely scratching the surface of a new era of human-computer interaction.
The worst part in all that noise: ask your customers what they need; they will tell you "AI features". No matter what it is, or even how it compares to more traditional approaches when it comes to solving their pains. These two letters have gone beyond obsession.
The analogy is someone from the 19th century talking about their slaves all day which is of course nonsense because they had other things to talk about.
I think it's kind of a double whammy: on the one hand, working with AI leaves a lot of 5-15 minute breaks perfect for squeezing in a comment on an HN thread, while it also supplants the sort of work that would typically lead to interesting ideas or projects, substituting it with work that isn't that interesting to talk about (or at least hasn't been thought about for long enough to have interesting things to say).
This resonates. I build products on top of LLMs, and the most interesting work I do has nothing to do with AI; it's designing structured methodologies, figuring out what data to feed in before a conversation starts, deciding what to do when the model gives a weak answer. The AI is plumbing.
But nobody wants to hear about prompt calibration or pipeline architecture. They want to hear "I replaced my whole team with agents." The boring, useful work is invisible, and the flashy stuff gets all the oxygen.
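For a flavour of what that boring, useful work looks like, here's a toy sketch of a weak-answer fallback. Everything in it (the heuristic, complete_fn, the retrieval stub) is a placeholder for illustration, not any real product's pipeline:

```python
# Toy sketch: pre-load context, call the model, and retry once with more
# context when the reply looks thin.
from typing import Callable, List

def looks_weak(reply: str) -> bool:
    # Crude stand-in for real calibration: too short, or openly hedging.
    return len(reply) < 80 or "I'm not sure" in reply

def retrieve_more(question: str) -> List[str]:
    # Placeholder for a second, broader retrieval pass.
    return []

def answer(question: str, context: List[str], complete_fn: Callable[[str], str]) -> str:
    prompt = "\n\n".join(context) + "\n\nQuestion: " + question
    reply = complete_fn(prompt)
    if looks_weak(reply):
        richer = context + retrieve_more(question)
        reply = complete_fn("\n\n".join(richer) + "\n\nQuestion: " + question)
    return reply
```

Deciding what counts as "weak" and what extra data to fetch is where the actual design work lives; the LLM call itself is the least interesting line.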
AI is fine. The hype is annoying. What's even worse though are the incredible amounts of money and energy that are being thrown at it, with no regard for the consequences, in times of record inequality and looming climate apocalypse.
AI is the red herring that'll waste all our attention until it's too late.
I'm not sure I follow. AI barely consumes energy compared to other industries, and instead of focusing on the heavy hitters first, wasting time on the climate impact of AI doesn't seem useful.
Compare that to ~30% of all energy use for transportation. So approximately 40% * 4% = 1.6% vs 30%. I find your correction to be more wrong than the initial statement.
> And most of that new capacity will be natural gas. That increase would basically wipe out the reduction in CO2 emissions the USA has had since 2018.
Emissions in 2018 were ~5250M metric tons and in 2024 they were ~4750M. That is a reduction of about 10% of total emissions. Without going into calculations of green electricity and such, it's still safe to say AI using 10% of the grid would not completely wipe out the reduction.
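For anyone who wants to check the arithmetic in this subthread, here it is spelled out; all of the input figures are the ones claimed above, not independently sourced:

```python
# Back-of-the-envelope check of the figures quoted in this subthread.
datacenter_share_of_electricity = 0.04  # ~4% of electricity (claimed above)
electricity_share_of_energy = 0.40      # ~40% of total energy use (claimed above)
transport_share_of_energy = 0.30        # ~30% of total energy use (claimed above)

dc_share_of_energy = datacenter_share_of_electricity * electricity_share_of_energy
print(f"data centers: {dc_share_of_energy:.1%} of energy vs transport: {transport_share_of_energy:.0%}")
# -> data centers: 1.6% of energy vs transport: 30%

emissions_2018, emissions_2024 = 5250, 4750  # US CO2, million metric tons (as quoted)
print(f"reduction since 2018: {(emissions_2018 - emissions_2024) / emissions_2018:.1%}")
# -> reduction since 2018: 9.5%
```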
> Compare that to ~30% of all energy use for transportation
Transportation, especially ALL transportation, does a LOT. You're looking for ROI not the absolute values. I think it's undeniable that the positive economic effect of every car, truck, train, and plane is unfathomably huge. That's trains moving minerals, planes moving people, trucks transporting goods, and hundreds of combinations thereof, all interconnected.
I think it might be more emissions-efficient at generating value than AI by a factor exceeding the +26% energy use gap. Moving rocks from (place with rocks) to (place that needs rocks) continues to be just an insanely good thing for humanity.
People sure don't care about it anymore, and it coincided with the rise of AI. There's barely any mention of climate change compared to 5+ years ago. I really think this is all about how to keep the capitalist system from imploding because of so much debt (so the next big thing needs to happen to keep the growth going).
I'm finding the detractors worse than the hype, because it seems like a certain subset of detractors [0] formed their opinion on AI in late 2022/early 2023 when ChatGPT came out (REALLY!? Over 3 years ago!?) and then never updated their opinions since then. They'll say things like "why would I want to consume X amount of energy and Y amount of water just to get a wrong answer?"
In other words, the people who think generative AI is an absolutely worthless and useless product are more annoying than the ones who think it's going to solve all the world's problems. They have no idea how much AI has improved since it reached center stage 3 years ago. Hallucinations are exceptionally rare now, since the models now rely on searching for answers rather than on what was in their training data.
We got Claude Desktop at work and it's been a godsend. It works so much better to find information from Confluence and present it to me in a digestible format than having to search by hand and combing through a dozen irrelevant results to find the one bit of information I need.
[0] For the purpose of this comment, this subset is meant to be detraction based on the quality of the product, not the other criticisms like copyright/content theft concerns, water/energy usage, whether or not Sam Altman is a good person, etc.
Follow closely on what the detractors say. Most of them are using AI themselves and are just pushing back on the hype or other ludicrous claims and that's a good thing. Is the current crop of Gen AI anything near AGI? Is it worth the current valuation? Can a company fire most staff and run on gen AI? We may see the economy completely crash and not because AI takes over but because of bad investments, hype and greed.
On Reddit there are two subreddits that are mirrors, /accelerate and /betteroffline. The people in the subs go there for dopamine hits: one for how AI is going to transform their lives and lead to a work-free future, the other for how AI is worthless and how everyone (except them) is being fooled. They are the same people with opposite views. The people in either sub don't recognize this.
> a certain subset of detractors [0] formed their opinion on AI in late 2022/early 2023 when ChatGPT came out (REALLY!? Over 3 years ago!?) and then never updated their opinions since then.
On the contrary. I update my opinion all the time, but every time I try the latest LLM it still sucks just as much. That is why it sounds like my opinion hasn't changed.
You do realize though that using Claude Desktop to "search" through Confluence is like paying a world-class architect by the hour just to give you some tips on how to lay out your small loft to maximize sunlight.
This is such a perfect example of the mania behind this rollout.
There's no way you can make the financials work here compared to Atlassian taking the same millions spent on AI infrastructure and instead building better search in Confluence. Confluence search SUCKS, but that's just a lack of focus (or resources) on building a more complex, more robust solution. It's a wiki.
Either way, making a more robust search is a one-time cost that benefits everyone. Instead, you're running a piece of software that funnels money directly to Anthropic's bank account, to the data centers, and to the hyperscalers. Every single query must be re-run from scratch, costing your company a fortune that, if not managed properly, will come out of money that could be spent elsewhere.
> You do realize though that using Claude Desktop to "search" through Confluence is like paying a world-class architect by the hour just to give you some tips on how to lay out your small loft to maximize sunlight.
If I could pay a world class architect $1.50 to give me tips on how to maximize sunlight in my loft I would.
Would it be nice if confluence just had a robust search that had a one time cost and then benefited everyone thereafter? Sure, but that's not the current reality, and I do not have control over their actions. I can only control mine.
And what about using Confluence in the first place? Your MacBook Pro is faster than a supercomputer from 20 years ago. As we make compute cheaper, we find ways to use it that are less efficient in an absolute sense but more efficient for the end user. A graphical docs portal like Confluence is a hell of a lot easier to use than Emacs and SSH to edit plain text files on an 80-character terminal. But it uses thousands of times more compute.
It seems ridiculous right now because we don’t have hardware to accelerate the LLMs, but in 5 years this will be trivial to run.
The way we talk about "hallucinations" is extremely unproductive. Everything an LLM outputs is a hallucination. Just like how human perception is hallucination. These days I pretty much only hear this word come up among people that are ignorant of how LLMs work or what they're used for.
I've been asked why LLMs hallucinate. As if omniscient computer programs are some achievable goal and we just need to hammer out a few kinks to make our current crop of english-speaking computers perfect.
> certain subset of detractors [0] formed their opinion on AI in late 2022/early 2023 when ChatGPT came out (REALLY!? Over 3 years ago!?)
I mean, you can get mad at people you made up in your head, that's a thing people do, but this caricature falls in the same comforting bucket as "anyone who doesn't like <thing I like> is just ignorant/stupid" and "if you don't like me you're just jealous".
Maybe non-straw people have criticisms that aren't all butterflies and rainbows for good reasons, but you won't get to engage with them honestly and critically if you're telling yourself they're just ignorant from the start.
For example, I will bet that non-straw people will take issue with this, and for good reasons:
I use the latest Codex with GPT-5.4 and Claude Opus every day. They hallucinate every day. If you think they don't, you are probably being gaslit by the models.
This very comment is measurably more harmful than any AI criticism that annoys you - someone will read this and assume it's appropriate to accept whatever bullshit Claude generates at face value, with terrible consequences.
In contrast, what harm do those detractors cause? They don't generate as much code per hour?
By that logic we should all live in air-filtered bubbles. Anyone denying this is causing harm. After all, people might die if you let them out of their air-filtered bubble!
The "harm" (if you can call it that) is clear, detractors slow the pace of progress with meaningless and incorrect hand-wringing. A lack of progress harms everyone (as evidenced our amazing QoL today compared to any historical lens.)
Considering our climate, political and economic situation, I'd say not only is slowing the pace of progress not harmful, it's actually imperative for our long-term survival.
That's a pretty poor straw man - the issue is the amount of harm caused, not that there is a potential for some minuscule amount.
Also we need detractors because if we race into any technological advance too quickly we may cause unnecessary harm. Not all progress is without harms, and we need to be responsible about implementing it as risk-free as possible.
Can vouch for this, plus, when it does work, stuff can take forever. Then, if I let it unsupervised, higher risk of doing the wrong thing. If I supervise it, then I become agent nanny.
The detractors are a lot less numerous and certainly a lot less preachy than the ones on the hype train.
AI is alright. It's moderately useful, in certain contexts it speeds me up a lot, in other contexts not so much.
I also think that the economics of it make no sense and that it is, generally, a destructive technology. But it's not up to me to fix anything, I just try to keep on top of best practices while I need to pay bills.
The economics bit is not my problem though. If all AI companies go bust and AI services disappear I can 100% manage without it.
On the flip side, if all this slop is floating around, and AI services do become untenable, think of all the immediate jobs that will open up to fix and maintain all the slop that's being thrown around right now. The millions of dollars of contracts spent to use these LLMs will be redirected back to hiring.
Though, my cynical take is that the investor class seemed dead-set on forcing us all to weave LLMs deep into our corporate infrastructures in a way that I'm not too sure it will ever "disappear" now. It'll cost just as much to detangle it as it was to adopt it.
It's a Hail Mary dash towards AGI. If we get computers to think for us, we can solve a lot of our most pressing issues. If not, well, we've accelerated a lot of our worst problems (global warming, big tech, wealth inequality, surveillance state, post-truth culture, etc.).
> If we get computers to think for us, we can solve a lot of our most pressing issues
If AGI is born from these efforts, it will likely be controlled by people who stand to lose the most from solving those issues. If an OpenAI-built AGI told Sam Altman that reducing wealth inequality requires taxing his own wealth, would he actually accept that? Would systems like that get even close to being in charge?
This sounds just like the idea that quantum computing will solve a lot of computational issues, which we know isn’t true. Why would AGI be any different?
> If we get computers to think for us, we can solve a lot of our most pressing issues
How, exactly, does more and better tech help with the fundamentally sociological issues of power distribution, wealth inequality, surveillance, etc? Are you operating on the assumption that a machine superintelligence will ignore the selfish orders of whoever makes it and immediately work to establish post-scarcity luxury space communism?
In 2-3 decades, 30% of the world population will be over 60 years old (~3 BILLION seniors). We don't have an economic model for it, nor does Gen Z all want to be Personal Support Workers while paying rent.
Nvidia only makes 6 million data center GPUs a year. Huawei makes 900k. We need 10 to 100x more to be able to automate enough just to hold civilization together. Amazon built data centers with near-zero water use, but they used 35% more electricity overall. So that problem can be solved; however, we need to move out of the whole scarcity mentality if we're going to actually make the planet nice.
So tired of seeing this trope. Data center energy expenditure is like less than 1% of worldwide energy expenditure[1]. Have you heard of mining? Or agriculture? Or cars/airplanes/ships? It's just factually wrong and alarmist to spread the fake news that AI has any measurable effect on climate change.
What do LLMs replace, pray tell? More like moving from a screwdriver to a drill, rather than replacing the carpenter all together.
Also note that there are inventions that may “replace” some part of a process, but actually induce a greater demand for labor in that process. Take the cotton gin, for example, which exploded the number of slaves required to pick cotton.
This isn't even our first AI hype cycle. That happened in the late 70s-80s. Every lab and agency needed Lisp machines to teach computers how to identify Russian missiles—or targets. The "GOFAI" techniques did not live up to the expectations of them, but they settled into niches where they were tremendously useful, and life went on. The same will happen with today's matmul-as-a-service AI.
I don't see the threat from AI as capitalist at all, but more so feudalist. I mean, if things go in the direction of the worst-case scenario. It seems like the power potential transcends the problems of capitalism entirely.
But for now it's strictly hypothetical. Nothing I'm doing with AI matters enough to really make any statements about a broader scale in my field, let alone in entire economies.
If we wanna go full-on Marxist analysis it is an attempt of the capitalist class to finally rid themselves of their dependence on labor and their pesky demands like sick leave and fair wages.
Through that analysis, one can also explain why the managerial caste is so obsessed with it - it is nothing less than an ideological device. One can also see this in the actual deification happening in some VC circles and their belief in AGI as some sort of capitalist savior figure.
I see the point and don't disagree with it, but I find that framing is not the most compelling to the audience here...
Yeah. Oftentimes get crickets here when I talk along those lines. Can't tell if apathy, learned helplessness, or obliviousness. Regardless, devs seem like an extremely docile labor group based on how they react to this and other economic pressures.
This is correct at the firm level and breaks down at the aggregate level, which is where it gets interesting.
At the firm level, automating away labor costs is obviously rational. But capital in aggregate can't actually rid itself of labor, since labor is where surplus value comes from. A fully automated economy would be insanely productive and generate basically no profit. So the capitalist class pursuing this logic collectively is, without knowing it, pursuing the dissolution of the system that makes them the capitalist class.
You don't have to buy any of that to notice the more immediate mechanism though: AI doesn't need to actually replace workers to discipline them. The credible threat of replacement is enough to suppress wages, justify restructuring, and extract more from whoever's left. That's already happening and requires no AGI.
Because AI is attacking, plagiarizing, competing with, and destroying the most common industry of people here on HN, so suddenly it mattered more to people who were previously unaffected.
Some people have been concerned with this kind of politics all along. Some people are realizing they should be now, because of AI. And that's okay; both groups can still work together.
Modern AI is a miracle. The math that makes it work is beautiful and really impressive. For example, if you wanted to map all knowledge on earth, how would you do it? AI answers that question by building a high dimensional vector space of embeddings, and traversing that space moves you through a topology of basically every concept that humans have.
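A toy illustration of that point, with made-up 3-dimensional vectors standing in for real embeddings (which have hundreds or thousands of dimensions, but the geometry works the same way):

```python
# Toy illustration of "traversing a topology of concepts" with embeddings.
# The 3-d vectors below are invented for the example.
import numpy as np

emb = {
    "king":   np.array([0.9, 0.8, 0.1]),
    "queen":  np.array([0.9, 0.2, 0.1]),
    "man":    np.array([0.1, 0.8, 0.0]),
    "woman":  np.array([0.1, 0.2, 0.0]),
    "banana": np.array([0.0, 0.1, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Nearby vectors correspond to related concepts.
print(cosine(emb["king"], emb["queen"]), cosine(emb["king"], emb["banana"]))

# Directions carry meaning: king - man + woman lands next to queen.
target = emb["king"] - emb["man"] + emb["woman"]
print(max(emb, key=lambda w: cosine(emb[w], target)))  # -> queen
```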
Or another thought; why is it that a stochastic parrot can solve logic puzzles consistently and accurately? It might not be 100%, but it’s still much better than what you might expect from a markov model of ngrams.
Openclaw is only sort of interesting. How to vibe code your first product is uninteresting. Claims about productivity increase from model usage are speculative and uninteresting. Endless think pieces on the effects of AI slop are uninteresting. There’s a lot of hype and grift and bullshit that is downstream of this very interesting technology, and basically none of that is interesting. The cool parts are when you actually open the models up and try to figure out what’s going on.
So no, I’m not bored of talking about AI. I’m not sure I ever will be. My suspicion is that those who are bored of it aren’t digging deep enough. With that said, that will likely only be interesting to people who think math is fun and cool. On the whole, AI is unlikely to affect our lives in proportion to the ink spilled by influencers.
This is a really interesting take, and maybe shows that I haven't been thorough enough with my reading. My guess is that the deep technical articles are few and far between and the higher level 'hot takes' are what fills the room. Do you have any recommendations for interesting places to start?
Yes it feels like a full time job just to try to keep up. And I’ve been in AI for close to 10 years so I feel like I have to keep up at least a minimum.
Another thing for me is that it has gotten a lot harder for small teams with few resources, let alone one person, to release anything that can really compete with what the big players put out.
Quite a few years back I was working on word2vec models / embeddings. With enough time and limited resources I was able, through careful data collection and preparation, to produce models that outperformed existing embeddings for our fairly generic data retrieval tasks. You could download models from Facebook (fastText) or other models available through gensim and other tools, and they were often larger embeddings (e.g. 1000 dimensions vs 300 for mine), but they would really underperform. And when evaluating on the general benchmarks that existed back then, we were basically equivalent to the best models in English and French, if not a little better at times. Similarly, later, some colleagues built a new architecture inspired by BERT after it came out that again outperformed any existing models we could find.
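For flavour, that kind of experiment really did fit on a laptop. A rough sketch using gensim's 4.x API; the corpus file and probe words are placeholders, and the pretrained model name is just one of the options in gensim's downloader catalogue:

```python
# Rough sketch: train a domain word2vec model and compare its neighbours
# against a downloaded general-purpose model. Probe words must occur in the
# corpus or most_similar() will raise a KeyError.
from gensim.models import Word2Vec
import gensim.downloader as api

with open("my_corpus.txt") as f:
    sentences = [line.lower().split() for line in f]

own = Word2Vec(sentences, vector_size=300, window=5, min_count=5, workers=4)
generic = api.load("fasttext-wiki-news-subwords-300")  # large general-purpose vectors

for word in ["invoice", "contract", "refund"]:
    print(word)
    print("  domain model: ", own.wv.most_similar(word, topn=3))
    print("  generic model:", generic.most_similar(word, topn=3))
```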
But these days I feel like there is nothing much I can do in NLP. Even to fine-tune or distill the larger models, you need a very beefy setup.
Yes — talking and hearing/reading about it. I don't fault folks for being excited when first getting into it, but it's rare to hear anything new said. And what is new is increasingly niche and unlikely to have any application to what I do.
It's just a buzzword that draws more attention and more clicks. I also use AI for some projects, but it can be annoying when companies try to incorporate it in places it doesn't belong.
Everything is fandom now. I grew up around people obsessed with Nascar and NFL. So much of the discourse sounds exactly the same. It beats listening to people talk about their dogs though.
The debate around "AGI" is the thing that gets me. People just moving goalposts and arbitrarily applying their own standards makes for a lot of wheel spinning
It's easily abused by both sides of the debate because there's no strict widely accepted definition. I find it tiring because it's a largely inconsequential benchmark anyways (outside of Microsoft-OpenAI contract disputes).
So then...don't talk about it? Do your job. Go home. Spend time with family. Find some non-tech hobbies. The solution isn't to change the world but to break your social media addiction (and yes, HN/Linkedin/X are included).
Never tired of talking about AI. There are so many fascinating aspects to explore and papers delivering new ideas. It's a bit tiring keeping up with the new stuff but talking about what we've found is one of the things that makes it easier to keep up.
I'm somewhat tired of seeing the same rehashed claims of future ability, non-ability, profit, loss.
I actually like talking about the implications, future risks and challenges of AI. I have made submissions on ways AI should be regulated to benefit society. The problem is the assumption of what is happening and what will happen.
Too many people seem to enter the conversation feeling that the absence of doubt is the same thing as being informed.
And especially people making claims based on premises that they seem to believe will become true if they just build big enough towers on them.
The number one thing that bothers me in all this, is people assuming the contents of the minds of others.
I find the pathologising of Sam Altman to be the most egregious form of this. It is one thing to disagree with someone's decisions, another thing to disagree with their stated opinions, but to decide upon a person's character based upon what you believe they are thinking in their private thoughts is simply projection.
I know this is an opinion of little worth to many, but my impression of Sam Altman is just a person who has different perspectives to me. The capitalist tech world he lives in would inevitably shape different values to me. What I have seen of him is consistent with a sincere expression of values. I can accept that a person might do something different to what I would, even the opposite of what I want while believing that they can be doing so for reasons that seem to be morally the right thing to do.
This also happened with cryptocurrency. Crypto advocates believe that it is a good thing for the world. Too many consider those who believe that crypto could benefit society to be evil. There is a difference between being wrong and being evil. No matter how certain you are, you can still be wrong; in fact, beyond a point I would say increased certainty indicates a higher likelihood of being wrong.
So I'm happy to talk about AI. I have plenty to learn. I wonder whether, if others went in with the goal of learning, they would find it less tiring.
I deeply wish to hear about other tech trends; I get enough of "use more AI", "do more with less", and "ship faster" at work. I'd rather hear about new tools and techniques here.
Nope. It remains the most dynamic and impactful area in software today. I'm sure it will fade into common practice over the next few years and become less talked about. I find it infinitely more interesting than yet another article talking about the wonders/horrors of the Rust borrow checker.
Management spins up something on Lovable and believes that building any software is as easy as typing a few prompts.
It's worse when there's a colleague of yours encouraging that by using AI blindly, piling up technical debt just to move at the pace that Management expects after signing you all up on some AI tool.
At the end of the day, everyone is talking about AI. For AI or against AI, it doesn't really matter.
Bored of hearing about it, bored of reading about it.
I love using these LLM tools, but honestly, it feels like every man and his dog has something to say about it, and is angling to make a quick buck or two from it.
And the slop, oh my goodness, it's never-ending on every site and service.
I definitely get the comment about HN and seeing a billion posts about OpenClaw, Claude, or yet another post on an industry being disrupted by AI.
Tack on to that the increasing amount of political stuff on here as well, and it just makes it a less and less interesting place to visit.
Don't agree with the angry mob on the political stuff especially and you get downvoted/flagged into oblivion.
Just another echo chamber looking to have viewpoints confirmed in yet another one of the disappearing places online that foster any level of intellectual curiosity.
It feels like during the previous hype cycle of bitcoins, blockchains and NFTs. People are trying to find uses for new technologies but it seems like a lot of the conversations come from people (at this point I guess it's still people?) trying to increase the hype. Maybe they are trying to be thought leaders or maybe they are trying to boost some stock valuations.
The amount of questions I fielded about web3, coins, ledgers, etc as an IC speaking with customers or internal leadership was around an order of magnitude lower, and well-known brands weren't trying to sell me any of them. It was much rarer for it to get shoved into a product it wasn't helpful for, too.
Never thought I'd feel nostalgic about that era...
Oh great, we're at the stage of constantly talking about it, AND talking about how we're sick of talking about it. Now every article will be as long as before + a prefix paragraph explaining how they know we're all sick of talking about it, but...
I'm bloody sick of it, but more exhausted than bored.
My workflow that was pretty stable for years, keeps changing massively on an almost monthly basis and that means I'm already skipping the fads of the week.
What's more annoying is that it feels actually worth it and thus keeps me churning.
Over the last couple of years I've realized how shitty and tiring it is to do anything at all on the computer. Reading something like Reddit was tiring before, because of spam, submarine advertising, etc. But it was still worth it because the signal to noise ratio was still there. Now? No way. Easily 50% of comments are AI generated.
I used to have this idea that if I built something cool it would be valuable to donate it to the world for free. But now increasingly I'd be just making a donation to the training data, and on top of this I'm in competition with AI slop. Most people won't tell the difference and won't care. The noise floor for doing absolutely anything collaboratively on the computer is now 10x higher than it was before, and I'm basically checked out at this point. Even HN is becoming tiring to read since I think around 10-15% of comments that I read are AI generated. When that number reaches 30% I'm done forever, gone. My life is too short to waste time on this shit.
I think that they mean that "routine" work like AI agent prompting and config is repetitive, predictable and somewhat thoughtless work. Human employees that perform repetitive, predictable, thoughtless work are easy to replace with AI
I'm not sure if this is a joke, but the field is advancing so dramatically it's hard to stop talking about it. Every week at work I have to show a new AI feature to an executive, about how we can now write thousands of lines of code in minutes at a higher quality than the greatest engineers. This necessitates new tools and new purchases, as well as team and org shifts.
If you're reading this and your life hasn't been thrown into disarray, you're likely just behind the times. There are a lot of people who are deep in tech who still don't understand what agents and LLMs can do.
> If you're reading this and your life hasn't been thrown into disarray you're likely just behind the times.
I'd love for discussions of the tech to stop with the genAI version of the cryptobro cry "have fun being poor". It's mildly insulting and adds literally nothing to the conversations.
(Not meaning to single you out, just using it as an example. This is a very common rhetorical problem with most of the evangelism.)
My only hope is that it is such a disaster that it is effectively an extinction level event for this current technoscene (along the lines of the Permian–Triassic extinction event and others).
Then we can get back to the unglamorous, boring, thankless task of delivering business value to paying clients, and the public discourse will no longer be polluted by our inane witterings.
> At serious risk of sounding like a heretic here, but I’m kinda bored of talking about AI.
Umm.
> I get it, AI is incredible. I use it every day, it’s completely changed my workflow. I recently started a new role in a tricky domain working at web scale (hey, remember web scale?) and it’s allowed me to go from 0-1 in terms of productivity in a matter of weeks.
It’s all positives. So what’s the problem?
There isn’t a problem with AI. Of course. It’s just the discourse around it is “boring”. And the managers are lame about it.
And what has been the AI discourse for the last few years? The same formula.
- AI is either good
- ... or it is the best thing to have happened to Me
- But I have feelings[1] or concerns about everything around AI, like the discourse, or people having two-hundred concurrent AI agents mania
It’s all just grease for the AI Inevitabilism bytemill.
> And this one will be different?
I think you're talking about my blog post here, in which case no, I'm afraid not. Hence the admission at the bottom.
>Umm.
??
> It’s all positives. So what’s the problem?
The article is trying to say that these things are great, but the level of conversation leads to a lack of novelty.
> It’s just the discourse around it is “boring”. And the managers are lame about it.
Exactly.
> OP can’t even resign himself to being a Type. Sigh. “I know what I just did hehe”
Very self-aware.
Why on earth is the parent comment downvoted?
The title of TFA asks a question. This statement directly answers that question. Seems very on-topic.
At least I'm not tired of talking about how it's killing websites and filling everything with spam. I have spent most of a decade building a useful resource, and Google AI overviews has killed my traffic. It killed everyone's traffic. This thing gave me purpose, and I'm watching AI slowly strangle it.
I mourn the death of the independent web, and it frightens me that this is still the happy stage. We haven't yet felt the effect of stiffing content creators, and the LLM tools haven't yet begun to enshittify.
I am tired of discussions about agentic coding, but I would feel a lot better if we acknowledged all the harm being caused. Big tech went all in on this, stealing everything, putting everyone out of work, using up all resources with no regards for consequences, and they threaten to kill the economy if we don't let them have their way.
I feel like we are heading for a much worse place as a society, and all we can talk about is how to 10x our bullshit jobs, because we're afraid of falling behind.
> I’m learning so much listening to intelligent people share their learnings.
Me too. A key purpose of HN, and a bright time for that.
It's silly to say, but one such person with no software background who now builds entire applications with AI is PewDiePie.
Agreed on the "AI vampire", though I prefer "Fae Folk" to vampires.
It's an absolute disaster.
The wild part is they're having this reaction while using the most rigid and limited interfaces to the LLMs. Imagine when the capabilities of coding agents surface in these professions. It's already starting to happen with Claude Cowork. I swear, if I see another presentation with that default theme…
This. As annoying as all sorts of 'safety features' are, the sheer amount of effort that goes into further restricting things on the corporate wrapper side makes the LLMs nigh unusable. How can those kids even begin to get an idea of what it can do when it seems so severely locked down?
Could you provide an example of such a thing that is prevented?
When industrialization was taking root, yes, indeed the factory jobs sucked AND they were the future. Two things can be true.
This is really interesting. I've been out of education for a long time, but I was wondering how they were dealing with the advent of AI. Are exams still a thing? Do people do coursework now that you can spew out competent sounding stuff in seconds?
> These people are absolutely lost, with administration totally sold on the idea that "AI is the future" ...
Doesn't sound that different from my tech job
I need a job where using AI is mandatory, as I am quite well-equipped to do so! :D
I'm bored of using AI for anything other than my work, because with my work I can give very detailed and structured prompts and get the best results, while also being able to evaluate the answer. For everything else I'm kinda worn out by second-guessing all the time, or having to go down a long thread until I get a decent response.
How do I answer this without spamming: Yes, very much.
Everyone is in their own place adapting (or not) to AI. The disconnect between even folks on the same team is just crazy. At least it's gotten more concrete (here's what works for me, what do you do) vs. catastrophizing about the jobpocalypse or "the singularity", at least in day-to-day conversations.
I'm sure as hell bored of the current conversations people are having about ai.
> here's what works for me, what do you do
This is at least progress... but many want to remain in denial, and can't even contemplate this portion of the conversation.
We're also ignoring the light AI shines on our industry, and how (badly) we have been practicing our craft. As an example, there is a lot of gnashing of teeth right now about the VOLUME of code generated and how to deal with it... how were you dealing with code reviews before? How were you reviewing the dependencies in your package manager? (Another supply chain attack today, so someone is looking, but maybe not you.) Do you look at your DB or OS? Do two decades of leetcode, brain-teaser, FAANG-style interviews qualify candidates who are skilled at reading code? What is good code? Because after close to 30 years working in the industry, let me tell you: the sins of the LLM have nothing on what I have seen people do...
What I miss is people showing off their hand-crafted libraries or frameworks. That’s become way less common now that everyone is building a layer up the stack. I fear we’ll be stuck in a permanent state of using Tailwind and React and all the LLM-favored libraries as they were frozen in time at the beginning of 2025. Then again, that’ll be the agent’s problem, not mine…
All that said, it’s extremely exciting. I’ve been in tech, in one way or another, for 25 years. This is the most energizing (and simultaneously exhausting) atmosphere I’ve ever felt. The 2006-2011 years of early Facebook, Uber, etc. were exciting but nothing like this. The future is developing faster than we can process it.
> What I miss is people showing off their hand-crafted libraries or frameworks.
Same. I wonder if the use of AI will lead to less invention and adoption of new ideas in favour of ideas with lots of training data.
Perhaps we're in an AI summer and a tech winter. Winter is always the time when people hole up, dream, and work on whatever big thing is next.
We're about due for some new computing abstractions to shake things up I think. Those won't be conceived by LLMs, though they may aid in implementing them.
We have 2 decades of abstraction.
The stacks of turtles that we use to run everything are starting to show their bloat.
The other day someone was lamenting dealing with an onslaught of bot traffic and having to block it. Maybe we need to get back to good old-fashioned engineering and optimization. There was a thread on here the other day about PC Gamer recommending RSS readers and having a 36 GB webpage ( https://news.ycombinator.com/item?id=47480507 )
If it helps, I've mostly been using AI to implement things in the craziest languages I can justify.
I write Typescript and SQL by day, my last two personal projects were Rust and Perl.
I do worry that I'm not learning them as deeply, but I am learning them and without AI as an accelerant I probably wouldn't be trying them at all.
Among non-programmers, you always hear about some fool who fell in love with an AI girlfriend or whatever, but you never hear about the people who opened ChatGPT once, tried some things with it, said to themselves "huh, that's kind of neat," and then lost interest a day or two later, having conceived of no further tasks to which AI could provide assistance.
I actually hear about this fairly often. In quite a few of my college classes, there's a large focus on AI (even outside the computer science department). I find it surprising how many non-technical people don't even think to use it, or otherwise haven't interacted with it except when required.
“Everything has already been said, but not yet by everyone.” — Karl Valentin
---
Personally, I'm still very interested in the topic.
But since the tech is moving very fast, the discussion is just very very unevenly distributed: There's lots of interesting things to say. But a lot of takes that were relevant 6 months ago are still being digested by most.
> “Everything has already been said, but not yet by everyone.” — Karl Valentin
Never heard this and I like it very much. This is just an off-topic comment to say thanks!
This is a great saying, thank you for sharing it. Out of curiosity, do you have any links to interesting AI articles you've read recently? Maybe I'll change my mind.
I’m sad that it’s crowded out all the interesting stuff I used to love learning about on HN.
I'm sad that it's crowding some of those things out of existence, not just out of being talked about.
Not limited to here, of course. Net-new publications to ArXiv for some (most?) CS subcategories are >=90% about models, transformers, training, quantization, or some other directly related field, or how to apply these towards a different specialty.
It's a black box that thinks for me, sometimes it's good, sometimes it's bad, sometimes it times out.
I am extremely skeptical of AI products anyone builds. It's just using one black box to build scaffolding around another black box, and then they typically want to charge money for it. I don't see any value there.
depends on if they're selling you an AI wrapper or if they built something useful.
Also, it depends on who the target user is.
AI can be used to build deterministic software
AI is starting to look like a net negative for humanity. I remember the early days of OpenAI. I was super excited about it. There was a new space to uncover and learn about. I was hopeful.
Now I have this love/hate relationship with it. Claude Code is amazing. I use it everyday because it makes me so much more efficient at my job. But I also know that by using it I’m contributing to making my job redundant one day.
At the same time I see how much resources we are wasting on AI. And to what end? Does anybody really buy the BS that this will all make the world a better place one day? So many people we could shelter and feed, but instead we are spending it on trying to make your computer check and answer your emails for you. At what point do we just look up and ask… what is the damn purpose of all of this? I guess money.
Well, on the other hand, software isn’t all about checking emails.
I know someone who worked for a nonprofit that made pregnancy health software that worked over text messaging. Its clients were women in Africa who didn’t have much, but they had a cell phone, so they could get reminders, track vitals, and so forth.
They had to find enough funding to pay several software engineers to build and maintain that system. If AI allows a single person to do it, at much lower cost, is that bad?
> But I also know that by using it I’m contributing to making my job redundant one day.
I don't see how this is the case if you're anything more than a junior engineer... it unlocks so many possibilities. You can do so much more now. We are more limited by our ideas at this point than anything else.
Why is the reaction of so many people, once their menial work gets automated, "oh no, my menial work is automated." Why is it not "sweet, now I can do bigger/better/more ambitious things?"
(You can go on about corporate culture as the cause, but I've worked at regular corporations and most of FAANG. Initiative is rewarded almost everywhere.)
> Does anybody really buy the BS that this will all make the world a better place one day?
Why is it BS? I'm shocked that anyone with a love and passion for technology can feel this way. Have you not seen the long history of automation and what it has brought humanity?
There is a reason that we aren't dying of dysentery at the ripe age of 45 on some peasant field after a hard winter day's worth of hard labor. The march of automation and technology has already "made the world a better place."
Keep marching that automation and technology toward an acidified ocean. But hey, at least now we can code faster than we can review!
Yes. Go to Mastodon. I accidentally stumbled on Mastodon last night (I knew about it of course but largely ignored it). Of the 100 or so posts, they were all cool stuff. Only one was AI-related, and it was more a researchy, geeky thing than the brainrot "I fired all my staff an hour ago. They were not happy. CRLF. CRLF. I have an agentic circus and I am the ringmaster of 666 agents. CRLF....." crap you get on LinkedIn.
I've been meaning to try Mastodon for a long time (I was never really a Twitter user). As others have said elsewhere though, I'm not sure where to start. Did you just download the app and join mastodon.social?
I think you meant Mastodon[1]
[1]: https://en.wikipedia.org/wiki/Mastodon_(social_network)
Thanks. Edited. I have a mental block for some reason spelling it!
I'm old enough to remember being fatigued with so many people talking about making "apps". Programs that run on a phone. Before that everyone was excited about blogging. Web 2.0 ugh.
Before that we were excited about the wheel and the creation of fire. All capital drained into those ephemeral fancies.
The cycles cycle on.
Yeah. I don't mind AI, but I'm waiting for it to stabilize and a good work flow being replicable for non-toy problems that should survive and evolve for a long time. I don't think I lose out much by not having 10 agents doing my work for me right now. In 6 months or some years or whatever I can just learn the new way of doing it. It's just exhausting with how much it changes month to month. Do I use it? Yes. Probably suboptimally. I'll learn later, though.
Like the new frontend frameworks that came out every week from sometime after 2010. Not jumping on every single one, and waiting until React was declared the winner before learning it, worked well. Sure, someone who used it from day 1 had more experience, but one can quickly catch up.
Big Data, The Cloud, Quantum Computing, Web 3.0, and maybe a few I've forgotten about.
Only thing that stuck thus far is the cloud. Though not for infinite scalability and resiliency, cause that just dumps big invoices in your lap.
Gosh, how I miss the old HN days… where one would actually code, read docs, and develop stuff and feel happy about it. Not write a prompt and watch a chatbot do all the work in a matter of seconds. It's like we're losing the meaning of building something… don't know how to explain it more. But yeah, it's tech! Nothing stays the same.
I'm like 99% convinced that most of the AI conversation upvotes at this point is astroturfing. I just don't see the correlation with the sentiment I get from talking to people in the real world (mostly negative AI sentiment) vs what I see here
There's definitely some people working overtime to overhype AI on here. like 50% of the comments on this are from simianwords who only posts when people say negative AI sentiments.
I think the advancements around models and such are still somewhat interesting but its all the hype around peripheral things like OpenClaw, agentic workflows and other hyped up AI-adjacent news that are getting pretty old.
I think the workflows can be really interesting to read about. The other week I read a Reddit post about how someone got Qwen3.5 35B-A3B to go from 22.2% on the 45 hard problems of SWE-bench Verified to 37.8% (Opus 4.6 gets 40%).
All they essentially did was tell the LLM to test and verify whether the answer is correct with a prompt like the following:
>"You just edited X. Before moving on, verify the change is correct: write a short inline python -c or a /tmp test script that exercises the changed code path, run it with bash, and confirm the output is as expected."
Now whether this is true, I don't know, but I think talking about this kind of stuff is cool!
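For what it's worth, here's a minimal sketch of what that verify-before-moving-on loop could look like mechanically. It is not the setup from that post; ask_model is a hypothetical stand-in for whatever LLM client you use, and the prompt wording is just an illustration.

    import subprocess

    VERIFY_PROMPT = (
        "You just edited {path}. Before moving on, verify the change is correct: "
        "write a short /tmp test script that exercises the changed code path, "
        "then reply with ONLY the bash command that runs it."
    )

    def ask_model(prompt: str) -> str:
        # Hypothetical placeholder for an LLM call (OpenAI, Anthropic, local, etc.).
        raise NotImplementedError

    def verify_edit(path: str) -> bool:
        # Ask the model for a self-check command, run it, and feed the result back.
        command = ask_model(VERIFY_PROMPT.format(path=path)).strip()
        result = subprocess.run(
            command, shell=True, capture_output=True, text=True, timeout=60
        )
        verdict = ask_model(
            f"The verification command exited with code {result.returncode}.\n"
            f"stdout:\n{result.stdout}\nstderr:\n{result.stderr}\n"
            "Answer PASS or FAIL."
        )
        return verdict.strip().upper().startswith("PASS")

The whole trick is just forcing an explicit check-your-work step between edits instead of letting the agent barrel ahead.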
I'm becoming more bullish on AI, but it's still frustrating how much of the metaphorical oxygen it's taking. I feel like I'm hearing less about developments in software tech outside of AI fields.
I’m confused why the hype and the investment got so high. And why everyone treats it like a race. Why can’t we gradually develop it like dna sequencing.
To be fair, DNA sequencing was very hyped up (although not nearly as much as AI). The HGP finished two years ahead of schedule, which is sort of unheard of for something in its domain, and that was mainly a result of massive public interest in personalized medicine and the like. I will admit that a ton of foundational DNA sequencing work evolved over decades, but the massive leap forward in the early 2000s is comparable to the LLM hype now.
I assumed it was obvious. Being first is all that matters. Investors don't want to invest in second place. Obviously, "first" means achieving AGI, not some GPT bot. That's why so many people keep saying AGI is _____ weeks away, with some even preposterously stating that AGI might have already happened. They need to keep attracting investors. Same as Musk constantly saying FSD is ____ weeks away.
I think what's crazy is the desire to replicate current-day corporate structures. Look at this multi-agent, Jira-story-reading bot that builds stuff because we let it churn overnight. As if the whole point weren't that you don't need that nonsense to build something amazing.
And the desire to not want to understand things.
See also this insanity: https://github.com/garrytan/gstack/
Pretty funny boasting about accelerated results, when his public contributions are only in two repositories (gstack itself and a rails bundle with 14 commits).
Endlessly grooming the Agent reminds me of Gastown.
Curious to see what he'll present, if anything, from his 700+ contributions in private repositories.
Yes, my wife asks me to shut up when I mention AI. Hah
It's the most transformative technology I've clocked in my lifetime (and that includes home computers and the Internet).
Large organizations are making major decisions on the basis of it. Startups new and old will live and die by the shift it's creating (is SaaS dead? Well, investors will duly make it so). Mass engineering layoffs may be inevitable.
Sure, "I vibe coded a thing" is getting pretty tired. The rest? If anything, we're not talking about it enough.
I'm largely bored of wrappers; what still interests me are the new modalities of models being released and progressed on, like small local VLMs, voice-to-voice, and TTS.
No, well, I still enjoy the articles. The thing that always surprises me is the negativity in comment threads. I'm genuinely quite excited about AI based development. Yesterday I was playing around with developing a marketing plan for a market gap where we could leverage our product and finding what features in our product would need changing/adding to improve our offering. Quite interesting results!
I think in most places on the internet the negative comments are the ones that will win out. Same for AI I suppose. I tried not to bemoan the whole concept here, just the amount of 'airtime' it gets. Sort of like when something happens in the news (lately it's been the Epstein files for me), and you wish you could see a more balanced picture of world events.
No, but definitely tired of the "influencer" takes. You would think that this AI thing has been all but figured out, when really, even with the biggest, openest claws, we are still barely scratching the surface of a new era of human-computer interaction.
Agreed, LinkedIn is a cesspit of this. But then it always has been so nothing new there.
The worst part in all that noise: ask your customers what they need; they will tell you "AI features". No matter what it is, or how it compares to more traditional approaches to solving their pains. Those two letters have become an obsession.
I wish there were a filter on Hacker News to hide all AI related posts.
This is hacker news. Somebody made that and uses it so they don't see this post to tell you about that but it exists.
a Kafkaesque loop
https://news.ycombinator.com/item?id=35654401
Of course talking about AI is boring.
The analogy is someone from the 19th century talking about their slaves all day which is of course nonsense because they had other things to talk about.
I think it's kind of a double whammy: on the one hand, working with AI leaves a lot of 5-15 minute breaks perfect for squeezing in a comment on an HN thread, while also supplanting the sort of work that would typically lead to interesting ideas or projects, substituting it with work that isn't that interesting to talk about (or at least hasn't been thought about long enough to have interesting things to say).
This resonates. I build products on top of LLMs, and the most interesting work I do has nothing to do with AI; it's designing structured methodologies, figuring out what data to feed in before a conversation starts, deciding what to do when the model gives a weak answer. The AI is plumbing.
But nobody wants to hear about prompt calibration or pipeline architecture. They want to hear "I replaced my whole team with agents." The boring, useful work is invisible, and the flashy stuff gets all the oxygen.
AI is fine. The hype is annoying. What's even worse though are the incredible amounts of money and energy that are being thrown at it, with no regard for the consequences, in times of record inequality and looming climate apocalypse.
AI is the red herring that'll waste all our attention until it's too late.
AI is one of the causes that climate change is accelerating, which is another in a long list of reasons to hate it.
I'm not sure I follow. AI barely consumes energy compared to other industries, and instead of focusing on the heavy hitters first, wasting time on the climate impact of AI doesn't seem useful.
This is wrong. AI uses ~4% of the US grid, and projections are that it will grow to 10%+ in the next 6 years.
And most of that new capacity will be natural gas. That increase would basically wipe out the reduction in CO2 emissions the USA has had since 2018.
Compare that to ~30% of all energy use for transportation. Electricity is roughly 40% of total US energy use, so 4% of the grid works out to approximately 40% * 4% = 1.6%, vs. 30%. I find your correction to be more wrong than the initial statement.
> And most of that new capacity will be natural gas. That increase would basically wipe out the reduction in CO2 emissions the USA has had since 2018.
Emissions in 2018 were ~5,250M metric tons and in 2024 they were ~4,750M. That is a reduction of about 10% of total emissions. Without going into calculations of green electricity and such, it's still safe to say AI using 10% of the grid would not completely wipe out that reduction.
[0]: https://www.statista.com/statistics/183943/us-carbon-dioxide...
> Compare that to ~30% of all energy use for transportation
Transportation, especially ALL transportation, does a LOT. You're looking for ROI not the absolute values. I think it's undeniable that the positive economic effect of every car, truck, train, and plane is unfathomably huge. That's trains moving minerals, planes moving people, trucks transporting goods, and hundreds of combinations thereof, all interconnected.
I think it might be more emissions-efficient at generating value than AI by a factor exceeding the +26% energy use gap. Moving rocks from (place with rocks) to (place that needs rocks) continues to be just an insanely good thing for humanity.
Pretty large amounts of energy go towards training large language models. Running them is also a non-negligible energy cost at scale.
But yeah, there's way worse industries out there when it comes to climate change impact.
? Am I misunderstanding the push for nuclear energy and record energy prices in locales with new “data centers”?
Before large models, things were starting to move to micro-VMs, lean hardware, and Firecracker cloud platforms running thin containers.
Then came the AI buzz, and now we are building gigafactories. The "giga" stands for gigawatt usage, no lesser target.
Which is why talk about AI datacenters typically involve energy supply constraints, and possibly the need to build power plants along with it.
It is, of course, because it barely uses any energy.
> AI is one of the causes that climate change is accelerating, which is another in a long list of reasons to hate it.
If you want to point at causes of climate change, look no further than adtech. It's the driving force behind our overconsumption.
And it has perhaps an even longer list of reasons to hate it.
The EPA repealed its 2009 conclusion that greenhouse gases warm the Earth and endanger human health and well-being.
So this is not a good reason to oppose AI. Now the sheer energy it requires does mean we might want to go nuclear though.
Natural gas is nice though because it does pollute the air far less than coal.
You might argue the EPA only repealed that because of political agendas, but the same argument could be made for why it was passed.
A lot of people got very rich off the fear mongering from climate alarmists.
People sure don't care about it anymore, and that coincided with the rise of AI. There's barely any mention of climate change compared to 5+ years ago. I really think this is all about how to keep the capitalist system from imploding under so much debt (the next big thing needs to happen to keep the growth going).
climate change was an important issue when they were trying to peddle EVs and solar.
They == the lizard people, I assume?
Seeing this kind of populist misinformation/bikeshedding on HN is particularly disappointing.
So then explain to me where I wrote misinformation?
> AI is fine. The hype is annoying.
I'm finding the detractors worse than the hype, because it seems like a certain subset of detractors [0] formed their opinion on AI in late 2022/early 2023 when ChatGPT came out (REALLY!? Over 3 years ago!?) and then never updated their opinions since then. They'll say things like "why would I want to consume X amount of energy and Y amount of water just to get a wrong answer?"
In other words, the people who think generative AI is an absolutely worthless and useless product are more annoying than the ones who think it's going to solve all the world's problems. They have no idea how much AI has improved since it reached center stage 3 years ago. Hallucinations are exceptionally rare now, since models rely on searching for answers rather than on what was in their training data.
We got Claude Desktop at work and it's been a godsend. It works so much better to find information from Confluence and present it to me in a digestible format than having to search by hand and combing through a dozen irrelevant results to find the one bit of information I need.
[0] For the purpose of this comment, this subset is meant to be detraction based on the quality of the product, not the other criticisms like copyright/content theft concerns, water/energy usage, whether or not Sam Altman is a good person, etc.
Follow closely on what the detractors say. Most of them are using AI themselves and are just pushing back on the hype or other ludicrous claims and that's a good thing. Is the current crop of Gen AI anything near AGI? Is it worth the current valuation? Can a company fire most staff and run on gen AI? We may see the economy completely crash and not because AI takes over but because of bad investments, hype and greed.
I don't think it's worthless. It can greatly speed up coding. And learning foreign languages. And many other things.
But I do think humanity is worse off because of it. So I'm a detractor in that way. :)
On Reddit there are two subreddits that are mirrors of each other, /r/accelerate and /r/betteroffline. The people in those subs go there for dopamine hits: one for how AI is going to transform their lives and lead to a work-free future, the other for how AI is worthless and how everyone (except them) is being fooled. They are the same people with opposite views. The people in either sub don't recognize this.
> a certain subset of detractors [0] formed their opinion on AI in late 2022/early 2023 when ChatGPT came out (REALLY!? Over 3 years ago!?) and then never updated their opinions since then.
On the contrary. I update my opinion all the time, but every time I try the latest LLM it still sucks just as much. That is why it sounds like my opinion hasn't changed.
You do realize though that using Claude Desktop to "search" through confluence is like paying a world class architect on the hour just to give you some tips on how to layout your small loft to maximize sunlight.
This is such a perfect example of the mania behind this rollout.
There's no way you can make the financials work here compared to Atlassian spending the same millions spent on AI infrastructure on building better search in Confluence instead. Confluence search SUCKS, but that's just a lack of focus (or resources) on building a more complex, more robust solution. It's a wiki.
Either way, making a more robust search is a one-time cost that benefits everyone. Instead, you're running a piece of software whose cost goes directly to Anthropic's bank account, and to the data centers and hyperscalers. Every single query must be re-run from scratch, costing your company a fortune that, if not managed properly, will come out of money it would have spent elsewhere.
> You do realize though that using Claude Desktop to "search" through confluence is like paying a world class architect on the hour just to give you some tips on how to layout your small loft to maximize sunlight.
If I could pay a world class architect $1.50 to give me tips on how to maximize sunlight in my loft I would.
Would it be nice if confluence just had a robust search that had a one time cost and then benefited everyone thereafter? Sure, but that's not the current reality, and I do not have control over their actions. I can only control mine.
And what about using Confluence in the first place? Your MacBook Pro is faster than a supercomputer from 20 years ago. As we make compute cheaper, we find ways to use it that are less efficient in an absolute sense but more efficient for the end user. A graphical docs portal like Confluence is a hell of a lot easier to use than Emacs and SSH to edit plain text files on an 80-character terminal. But it uses thousands of times more compute.
It seems ridiculous right now because we don’t have hardware to accelerate the LLMs, but in 5 years this will be trivial to run.
> Hallucinations are exceptionally rare now
The way we talk about "hallucinations" is extremely unproductive. Everything an LLM outputs is a hallucination. Just like how human perception is hallucination. These days I pretty much only hear this word come up among people that are ignorant of how LLMs work or what they're used for.
I've been asked why LLMs hallucinate. As if omniscient computer programs are some achievable goal and we just need to hammer out a few kinks to make our current crop of english-speaking computers perfect.
> certain subset of detractors [0] formed their opinion on AI in late 2022/early 2023 when ChatGPT came out (REALLY!? Over 3 years ago!?)
I mean, you can get mad at people you made up in your head, that's a thing people do, but this caricature falls in the same comforting bucket as "anyone who doesn't like <thing I like> is just ignorant/stupid" and "if you don't like me you're just jealous".
Maybe non-straw people have criticisms that aren't all butterflies and rainbows for good reasons, but you won't get to engage with them honestly and critically if you're telling yourself they're just ignorant from the start.
For example, I will bet that non-straw people will take issue with this, and for good reasons:
> Hallucinations are exceptionally rare now
I use the latest Codex with GPT-5.4 and Claude Opus every day. They hallucinate every day. If you think they don't, you are probably being gaslit by the models.
This very comment is measurably more harmful than any AI criticism that annoys you - someone will read this and assume it's appropriate to accept whatever bullshit Claude generates at face value, with terrible consequences.
In contrast, what harm do those detractors cause? They don't generate as much code per hour?
By that logic we should all live in air-filtered bubbles. Anyone denying this is causing harm. After all, people might die if you let them out of their air-filtered bubble!
The "harm" (if you can call it that) is clear, detractors slow the pace of progress with meaningless and incorrect hand-wringing. A lack of progress harms everyone (as evidenced our amazing QoL today compared to any historical lens.)
> detractors slow the pace of progress
Considering our climate, political and economic situation, I'd say not only is slowing the pace of progress not harmful, it's actually imperative for our long-term survival.
That's a stretch, and taking a measured approach to change is valid.
That's a pretty poor straw man - the issue is the amount of harm caused, not that there is a potential for some minuscule amount.
Also we need detractors because if we race into any technological advance too quickly we may cause unnecessary harm. Not all progress is without harms, and we need to be responsible about implementing it as risk-free as possible.
Claude Opus 4.6 regularly makes up shit and hallucinates. I'm not a detractor by any means but "exceptionally rare" is fantasyland.
Can vouch for this. Plus, when it does work, stuff can take forever. Then, if I leave it unsupervised, there's a higher risk of it doing the wrong thing. If I supervise it, then I become an agent nanny.
I have been experiencing it too.
I honestly am finding Codex considerably better, as much as I despise OpenAI.
This is going to sound flippant, but truly, I imagine most people find the group that disagrees with their take annoying as well.
The detractors are a lot less numerous and certainly a lot less preachy than the ones on the hype train.
AI is alright. It's moderately useful, in certain contexts it speeds me up a lot, in other contexts not so much.
I also think that the economics of it make no sense and that it is, generally, a destructive technology. But it's not up to me to fix anything, I just try to keep on top of best practices while I need to pay bills.
The economics bit is not my problem though. If all AI companies go bust and AI services disappear I can 100% manage without it.
On the flip side, if all this slop is floating around, and AI services do become untenable, think of all the immediate jobs that will open up to fix and maintain all the slop that's being thrown around right now. The millions of dollars of contracts spent to use these LLMs will be redirected back to hiring.
Though, my cynical take is that the investor class seemed dead-set on forcing us all to weave LLMs deep into our corporate infrastructures in a way that I'm not too sure it will ever "disappear" now. It'll cost just as much to detangle it as it was to adopt it.
It's a hail mary dash towards AGI. If we get computers to think for us, we can solve a lot of our most pressing issues. If not, well we've accelerated a lot of our worst problems(global warming, big tech, wealth inequality, surveillance state, post-truth culture, etc).
> If we get computers to think for us, we can solve a lot of our most pressing issues
If AGI is born from these efforts, it will likely be controlled by people who stand to lose the most from solving those issues. If an OpenAI-built AGI told Sam Altman that reducing wealth inequality requires taxing his own wealth, would he actually accept that? Would systems like that get even close to being in charge?
> It's a hail mary dash towards AGI. If we get computers to think for us, we can solve a lot of our most pressing issues.
All but one of them simultaneously, in fact. The one being left out: wanting to keep existing.
What are you talking about? AGI is practically a prerequisite for transhumanism, and, well, not dying.
If you want to "keep existing" AGI happening is probably your only hope.
I highly doubt OP was talking about immortality
This sounds just like the idea that quantum computing will solve a lot of computational issues, which we know isn’t true. Why would AGI be any different?
> If we get computers to think for us, we can solve a lot of our most pressing issues
How, exactly, does more and better tech help with the fundamentally sociological issues of power distribution, wealth inequality, surveillance, etc? Are you operating on the assumption that a machine superintelligence will ignore the selfish orders of whoever makes it and immediately work to establish post-scarcity luxury space communism?
In 2-3 decades, 30% of the world population will be over 60 years old (~3 BILLION seniors). We don't have an economic model for that, nor does Gen Z want to all be Personal Support Workers while paying rent. Nvidia only makes 6 million data center GPUs a year. Huawei makes 900k. We need 10 to 100x more to be able to automate enough just to hold civilization together. Amazon built data centers with near-zero water use, but they used 35% more electricity overall. So that problem can be solved; however, we need to move out of the whole scarcity mentality if we're going to actually make the planet nice.
> incredible amounts of ... energy
So tired of seeing this trope. Data center energy expenditure is like less than 1% of worldwide energy expenditure[1]. Have you heard of mining? Or agriculture? Or cars/airplanes/ships? It's just factually wrong and alarmist to spread the fake news that AI has any measurable effect on climate change.
[1] https://www.iea.org/reports/energy-and-ai/energy-supply-for-...
It's not just the absolute expenditure. It's the type of expenditure.
https://www.selc.org/news/resistance-against-elon-musks-xai-...
1% of worldwide energy expenditure is massive, incredible amounts of energy in fact.
It's not fine at all.
It's a capitalistic device, which in its current form is going to increase inequalities even more.
We should fight against capitalism before it ruins our planet
No, it’s… fine. Useful in a limited capacity. Not the machine god, but not machine Satan either. The reality is kind of boring.
This summarizes mostly how I feel about it. It's a tool like any other tool we have developed since the beginning of human civilization.
Machine tools replaced blacksmiths
CNC machines replaced manual machines.
Robots replaced CNC machine tenders
CAD replaced draftsman (and also pushed that job onto engineers (grr))
P&P robots replaced human production lines.
The steam train replaced the horse and cart
This is a tale as old as time itself
What do LLMs replace, pray tell? It's more like moving from a screwdriver to a drill, rather than replacing the carpenter altogether.
Also note that there are inventions that may “replace” some part of a process, but actually induce a greater demand for labor in that process. Take the cotton gin, for example, which exploded the number of slaves required to pick cotton.
Those were deterministic rather than stochastic
This isn't even our first AI hype cycle. That happened in the late 70s-80s. Every lab and agency needed Lisp machines to teach computers how to identify Russian missiles—or targets. The "GOFAI" techniques did not live up to the expectations of them, but they settled into niches where they were tremendously useful, and life went on. The same will happen with today's matmul-as-a-service AI.
I don't see the threat from AI as capitalist at all, but more so feudalist. I mean, if things go in the direction of the worst-case scenario. It seems like the power potential transcends the problems of capitalism entirely.
But for now it's strictly hypothetical. Nothing I'm doing with AI matters enough to really make any statements about a broader scale in my field, let alone in entire economies.
Capitalism is feudalism but with raw generational wealth instead of generational wealth with divine right characteristics.
Capitalism is just feudalism that works for the merchant class
If we wanna go full-on Marxist analysis it is an attempt of the capitalist class to finally rid themselves of their dependence on labor and their pesky demands like sick leave and fair wages.
Through that analysis, one can also explain why the managerial caste is so obsessed with it: it is nothing less than an ideological device. One can also see this in the actual deification happening in some VC circles and their belief in AGI as some sort of capitalist savior figure.
I see the point and don't disagree with it, but I find that framing is not the most compelling to the audience here...
Yeah. Oftentimes get crickets here when I talk along those lines. Can't tell if apathy, learned helplessness, or obliviousness. Regardless, devs seem like an extremely docile labor group based on how they react to this and other economic pressures.
We will all be shocked at the rug pull after it has finished training on all our high-quality feedback for code it has written.
This is correct at the firm level and breaks down at the aggregate level, which is where it gets interesting.
At the firm level, automating away labor costs is obviously rational. But capital in aggregate can't actually rid itself of labor, since labor is where surplus value comes from. A fully automated economy would be insanely productive and generate basically no profit. So the capitalist class pursuing this logic collectively is, without knowing it, pursuing the dissolution of the system that makes them the capitalist class.
You don't have to buy any of that to notice the more immediate mechanism though: AI doesn't need to actually replace workers to discipline them. The credible threat of replacement is enough to suppress wages, justify restructuring, and extract more from whoever's left. That's already happening and requires no AGI.
AI is more likely to destroy capitalism than it is to increase inequality.
Ten years ago, what would it have cost you to build a Jira clone / competitor? Today one person can do it in a week, at least for the core tech.
In a year, only the very largest companies will pay for that kind of infrastructure tooling.
We’ve just started seeing the democratization of software and the capitalists are terrified.
How did HN become this kind of website?
Because AI is attacking, plagiarizing, competing with, and destroying the most common industry of people here on HN, so suddenly it mattered more to people who were previously unaffected.
Some people have been concerned with this kind of politics all along. Some people are realizing they should be now, because of AI. And that's okay; both groups can still work together.
The parent comment is a pretty measured take. What’s your problem with it?
I went to a conference and people were suggesting nationalizing AI companies so it's basically everywhere.
It is getting stuffy in the tech sector lately with all these AI postings but it's still a very new and very disruptive technology.
I also have to say that I don't use AI in my personal or professional life. And that is simply because I haven't felt any need to use it.
Very much so. I wouldn't mind some interesting projects or results. But it's very basic opinions or parables all over again.
Modern AI is a miracle. The math that makes it work is beautiful and really impressive. For example, if you wanted to map all knowledge on earth, how would you do it? AI answers that question by building a high dimensional vector space of embeddings, and traversing that space moves you through a topology of basically every concept that humans have.
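To make that concrete, here's a toy sketch of what "nearby in concept space" means. It's not how any particular frontier model works internally, and the library and model name (sentence-transformers, all-MiniLM-L6-v2) are just one common, assumed choice for getting embeddings:

    from sentence_transformers import SentenceTransformer
    import numpy as np

    # Any sentence-embedding model will do; this one is small and widely used.
    model = SentenceTransformer("all-MiniLM-L6-v2")

    concepts = [
        "a cat sleeping in the sun",
        "a kitten napping",
        "quarterly tax filings",
    ]
    vecs = model.encode(concepts)  # one high-dimensional vector per concept

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(cosine(vecs[0], vecs[1]))  # high: the two cat sentences sit close together
    print(cosine(vecs[0], vecs[2]))  # low: taxes live in a distant region of the space

Related concepts end up close together, and moving through the space really does feel like moving through a map of meaning.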
Or another thought; why is it that a stochastic parrot can solve logic puzzles consistently and accurately? It might not be 100%, but it’s still much better than what you might expect from a markov model of ngrams.
Openclaw is only sort of interesting. How to vibe code your first product is uninteresting. Claims about productivity increase from model usage are speculative and uninteresting. Endless think pieces on the effects of AI slop are uninteresting. There’s a lot of hype and grift and bullshit that is downstream of this very interesting technology, and basically none of that is interesting. The cool parts are when you actually open the models up and try to figure out what’s going on.
So no, I’m not bored of talking about AI. I’m not sure I ever will be. My suspicion is that those who are bored of it aren’t digging deep enough. With that said, that will likely only be interesting to people who think math is fun and cool. On the whole, AI is unlikely to affect our lives in proportion to the ink spilled by influencers.
This is a really interesting take, and maybe it shows that I haven't been thorough enough with my reading. My guess is that the deep technical articles are few and far between and the higher-level 'hot takes' are what fills the room. Do you have any recommendations for interesting places to start?
Why is it that a stochastic parrot can solve logic puzzles consistently and accurately?
I don't think I'm quite bored. I'm exhausted/fatigued with the pace.
Yes it feels like a full time job just to try to keep up. And I’ve been in AI for close to 10 years so I feel like I have to keep up at least a minimum.
Another thing for me is that it has gotten a lot harder for small teams with few resources, let alone one person, to release anything that can really compete with what the big players put out.
Quite a few years back I was working on word2vec models / embeddings. With enough time and limited resources I was able, through careful data collection and preparation, to produce models that outperformed existing embeddings for our fairly generic data retrieval tasks. You could download models from Facebook (fastText) or others available through gensim and similar tools, and they were often larger embeddings (e.g., 1000 dimensions vs. 300 for mine), but they would really underperform. And when evaluating on the general benchmarks that existed back then, we were basically equivalent to the best models in English and French, if not a little better at times. Similarly, some colleagues later built a new architecture inspired by BERT after it came out that again outperformed any existing models we could find.
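For anyone who never touched that era of NLP, the model training itself really was that approachable; a minimal gensim 4.x sketch (the parameters and toy corpus here are purely illustrative) looks like this:

    from gensim.models import Word2Vec

    # corpus: an iterable of tokenized sentences (lists of string tokens)
    sentences = [
        ["careful", "data", "collection", "matters"],
        ["domain", "specific", "text", "beats", "generic", "embeddings"],
    ]

    model = Word2Vec(
        sentences,
        vector_size=300,  # the ~300-dimension setup mentioned above
        window=5,
        min_count=1,      # keep rare tokens; only sensible for a toy corpus
        workers=4,
        epochs=10,
    )

    vec = model.wv["data"]                      # a 300-dim vector for one token
    neighbours = model.wv.most_similar("data")  # nearest tokens in the space

Swap the toy corpus for a few gigabytes of carefully cleaned domain text and that was most of the job; the hard part was always the data work, not the training call.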
But these days I feel like there is not much I can do in NLP. Even to fine-tune or distill the larger models, you need a very beefy setup.
This.
I don't know how I'm burnt out from making this thing do work for me. But I am.
Yes — talking and hearing/reading about it. I don't fault folks for being excited when first getting into it, but it's rare to hear anything new said. And what is new is increasingly niche and unlikely to have any application to what I do.
It's just a buzzword that draws more attention and more clicks. I also use AI for some projects, but it can be annoying when companies try to incorporate it in places it doesn't belong.
You may be bored of AI, but because AI is not yet bored of us, turning away may be dangerous.
yes. it's like a giant finger pointing at the moon, and everyone's talking about the finger.
I'm kind of bored by AI promotion posts that pretend to be about something else.
I'm bored of the everyday Claude spam. I've used Claude extensively and it was very subpar.
Everything is fandom now. I grew up around people obsessed with Nascar and NFL. So much of the discourse sounds exactly the same. It beats listening to people talk about their dogs though.
The debate around "AGI" is the thing that gets me. People just moving goalposts and arbitrarily applying their own standards makes for a lot of wheel spinning
AI enthusiasts love to misuse and abuse the goalpost metaphor. It’s practically always an attempt to silence opponents.
It's easily abused by both sides of the debate because there's no strict widely accepted definition. I find it tiring because it's a largely inconsequential benchmark anyways (outside of Microsoft-OpenAI contract disputes).
So then...don't talk about it? Do your job. Go home. Spend time with family. Find some non-tech hobbies. The solution isn't to change the world but to break your social media addiction (and yes, HN/Linkedin/X are included).
Never tired of talking about AI. There are so many fascinating aspects to explore and papers delivering new ideas. It's a bit tiring keeping up with the new stuff but talking about what we've found is one of the things that makes it easier to keep up.
I'm somewhat tired of seeing the same rehashed claims of future ability, non-ability, profit, loss.
I actually like talking about the implications, future risks and challenges of AI. I have made submissions on ways AI should be regulated to benefit society. The problem is the assumption of what is happening and what will happen.
Too many people seem to enter the conversation feeling that the absence of doubt is the same thing as being informed.
And especially people making claims based on premises that they seem to believe will become true if they just build big enough towers on top of them.
The number one thing that bothers me in all this, is people assuming the contents of the minds of others.
I find the pathologising of Sam Altman to be the most egregious form of this. It is one thing to disagree with someone's decisions, another thing to disagree with their stated opinions, but to decide upon a person's character based upon what you believe they are thinking in their private thoughts is simply projection.
I know this is an opinion of little worth to many, but my impression of Sam Altman is just of a person who has different perspectives than me. The capitalist tech world he lives in would inevitably shape values different from mine. What I have seen of him is consistent with a sincere expression of those values. I can accept that a person might do something different from what I would, even the opposite of what I want, while believing that they are doing so for reasons that seem to them morally right.
This also happened with cryptocurrency. Crypto advocates believe that it is a good thing for the world. Too many consider those who believe that crypto could benefit society to be evil. There is a difference between being wrong and being evil. No matter how certain you are, you can still be wrong; in fact, beyond a point I would say increased certainty indicates a higher likelihood of being wrong.
So I'm happy to talk about AI. I have plenty to learn. I wonder if others went in with the goal to learn whether they would find it less tiring.
Talking about AI being boring is boring as hell for sure.
only of people constantly complaining about it like they have some special insight
I deeply wish to hear about other tech trends; I get enough of use more ai, do more with less, and ship faster at work. I'd rather hear about new tools and techniques here
It's ruined the sparkle emoji for everyone.
Yes!
Sounds very much like this blog I read too… he laughs at AI in his workplace a lot Www.sometimesworking.com
To answer the OP's question, apparently not! :)
Nope. It remains the most dynamic and impactful area in software today. I'm sure it will fade in to common practice over the next few years and become less talked about. I find it infinitely more interesting than yet another article talking about the wonders/horrors of the Rust borrow checker.
Let's get back to filling the front page with Web3, DeFi, NFTs. Oh the good ol' days.
It’s been almost two days since someone posted /e/OS, so those good ol’ days aren’t entirely gone :)
Yes, so bored. Yada yada... I've been 'obsolete' for 36 years and counting.
Management spins up something on Lovable and believes that building any software is as easy as typing a few prompts.
It's worse when there's a colleague of yours encouraging that by using AI blindly, piling up technical debt just to move at the pace that Management expects after signing you all up on some AI tool.
At the end of the day, everyone is talking about AI. For AI or against AI, it doesn't really matter.
It seems some people in this thread are not :)
I am. To keep talking about it I might just deploy a chatbot to do that for me.
I've been sick of it since 2022.
I wish there was an option to hide AI stories on HN, and AI-related repos on Github's trending page.
You could use AI to do it! Fight fire with fire.
I'm neutral on AI - so far it seems useful but flawed. But I don't want to hear about it constantly.
100% this; GitHub is littered with a bunch of AI shit projects that pollute the trending page.
Yup.
Bored of hearing about it, bored of reading about it.
I love using these LLM tools, but honestly, it feels like every man and his dog has something to say about it, and is angling to make a quick buck or two from it.
And the slop, oh my goodness, it's never-ending on every site and service.
No, because what most people miss about AI is that…
I’m just kidding. LinkedIn feed became so unbearable, that I had to install an extension to turn it off.
Yes!
There are other interesting things in the world today, and HN is overwhelmed with pretend intelligence.
Hype, detractors, ALL OF IT!
Maybe a separate web page or RSS feed could be created that is dedicated to the subject...
I definitely get the comment about HN and seeing a billion posts about OpenClaw, Claude, or yet another post on an industry being disrupted by AI.
Tack on to that the increasing number of political stuff on here as well just makes it less and less an interesting place to visit.
Don't agree with the angry mob on the political stuff especially and you get downvoted/flagged into oblivion.
Just another echo chamber looking to have viewpoints confirmed in yet another one of the disappearing places online that foster any level of intellectual curiosity.
It feels like the previous hype cycle of bitcoin, blockchains, and NFTs. People are trying to find uses for new technologies, but it seems like a lot of the conversations come from people (at this point I guess it's still people?) trying to increase the hype. Maybe they are trying to be thought leaders, or maybe they are trying to boost some stock valuations.
The number of questions I fielded about web3, coins, ledgers, etc. as an IC speaking with customers or internal leadership was around an order of magnitude lower, and well-known brands weren't trying to sell me any of them. It was much rarer for it to get shoved into a product it wasn't helpful for, too.
Never thought I'd feel nostalgic about that era...
Oh great, we're at the stage of constantly talking about it, AND talking about how we're sick of talking about it. Now every article will be as long as before + a prefix paragraph explaining how they know we're all sick of talking about it, but...
I'm bloody sick of it, but more exhausted than bored. My workflow, which was pretty stable for years, keeps changing massively on an almost monthly basis, which means I'm already skipping the fads of the week. What's more annoying is that it actually feels worth it and thus keeps me churning.
Honestly, a bit but only because the hype cycle is louder than the genuinely interesting work.
AI is an ok tool.
Over the last couple of years I've realized how shitty and tiring it is to do anything at all on the computer. Reading something like Reddit was tiring before, because of spam, submarine advertising, etc. But it was still worth it because the signal to noise ratio was still there. Now? No way. Easily 50% of comments are AI generated.
I used to have this idea that if I built something cool it would be valuable to donate it to the world for free. But now increasingly I'd be just making a donation to the training data, and on top of this I'm in competition with AI slop. Most people won't tell the difference and won't care. The noise floor for doing absolutely anything collaboratively on the computer is now 10x higher than it was before, and I'm basically checked out at this point. Even HN is becoming tiring to read since I think around 10-15% of comments that I read are AI generated. When that number reaches 30% I'm done forever, gone. My life is too short to waste time on this shit.
Kinda tired of being inundated with low quality AI slop absolutely everywhere.
You can call an engineer a "product manager" but that does not make them one.
Yepp.
Anything to distract from the real war, billionaires vs everyone else.
“it’s all starting to feel a bit… routine” and that’s how I know it’s going to replace me if I stay an employee
I don't follow your logic, it was always either going to disappear or become routine much like any other tool you would use at work.
I think that they mean that "routine" work like AI agent prompting and config is repetitive, predictable and somewhat thoughtless work. Human employees that perform repetitive, predictable, thoughtless work are easy to replace with AI
AI has become a commodity, for better or worse. And yes, we should treat it as such; especially, no more big ideas from (C-level) managers, please.
I'm just getting started ;)
Not me
I'm not sure if this is a joke, but the field is advancing so dramatically it's hard to stop talking about it. Every week at work I have to show a new AI feature to an executive, about how we can now write thousands of lines of code in minutes at a higher quality than the greatest engineers. This necessitates new tools and new purchases, as well as team and org shifts.
If you're reading this and your life hasn't been thrown into disarray, you're likely just behind the times. There are a lot of people who are deep in tech who still don't understand what agents and LLMs can do.
> If you're reading this and your life hasn't been thrown into disarray you're likely just behind the times.
I'd love for discussions of the tech to stop with the genAI version of the cryptobro cry "have fun being poor". It's mildly insulting and adds literally nothing to the conversations.
(Not meaning to single you out, just using it as an example. This is a very common rhetorical problem with most of the evangelism.)
"higher quality than the greatest engineers". right...
And why do so many articles or comments take the general approach of "It's great, and if you don't think it is, it's because you don't understand it"?
Bored is a nice way to say it. Never has a technology so odious also been so ubiquitous.
My only hope is that it is such a disaster that it is effectively an extinction level event for this current technoscene (along the lines of the Permian–Triassic extinction event and others).
Then we can get back to the unglamorous, boring, thankless task of delivering business value to paying clients, and the public discourse will no longer be polluted by our inane witterings.
The replies lol.
"Yes" Proceeds to talk about AI.
And this one will be different?
> At serious risk of sounding like a heretic here, but I’m kinda bored of talking about AI.
Umm.
> I get it, AI is incredible. I use it every day, it’s completely changed my workflow. I recently started a new role in a tricky domain working at web scale (hey, remember web scale?) and it’s allowed me to go from 0-1 in terms of productivity in a matter of weeks.
It’s all positives. So what’s the problem?
There isn’t a problem with AI. Of course. It’s just the discourse around it is “boring”. And the managers are lame about it.
And what has been the AI discourse for the last few years. The same formula.
- AI is either good
- ... or it is the best thing to have happened to Me
- But I have feelings[1] or concerns about everything around AI, like the discourse, or people having two-hundred concurrent AI agents mania
It’s all just grease for the AI Inevitabilism bytemill.
[1] https://news.ycombinator.com/item?id=47487774
> … And yes, I’m painfully aware of the irony of a post about moaning about posts about AI. Sorry.
OP can’t even resign himself to being a Type. Sigh. “I know what I just did hehe”
Very self-aware.
And now 117 points and 53 comments in 23 minutes.
Hey :)
> And this one will be different?
I think you're talking about my blog post here, in which case no, I'm afraid not. Hence the admission at the bottom.
> Umm.
??
> It’s all positives. So what’s the problem?
The article is trying to say that these things are great, but the level of conversation leads to a lack of novelty.
> It’s just the discourse around it is “boring”. And the managers are lame about it.
Exactly.
> OP can’t even resign himself to being a Type. Sigh. “I know what I just did hehe” Very self-aware.
Is this sarcasm?
Not really? It's kind of a big deal.
> Not really? It's kind of a big deal.
Why on earth is the parent comment downvoted? The title of TFA asks a question. This statement directly answers that question. Seems very on-topic.
Talking about how you are bored about talking about the thing is still talking about the thing.
I'm not.
At least I'm not tired of talking about how it's killing websites and filling everything with spam. I have spent most of a decade building a useful resource, and Google AI overviews has killed my traffic. It killed everyone's traffic. This thing gave me purpose, and I'm watching AI slowly strangle it.
I mourn the death of the independent web, and it frightens me that this is still the happy stage. We haven't yet felt the effect of stiffing content creators, and the LLM tools haven't yet begun to enshittify.
I am tired of discussions about agentic coding, but I would feel a lot better if we acknowledged all the harm being caused. Big tech went all in on this, stealing everything, putting everyone out of work, using up all resources with no regards for consequences, and they threaten to kill the economy if we don't let them have their way.
I feel like we are heading for a much worse place as a society, and all we can talk about is how to 10x our bullshit jobs, because we're afraid of falling behind.