I think people are just getting lost in the sauce. Forget all the "singularity" or "AGI" nonsense. LLMs are genuinely useful automation machines. They're fantastic for going from semi-structured data to structured data. They're great for going from text blob to decision points. They're great for going from vague instructions to step-by-step inference.
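To make the semi-structured-to-structured point concrete, here is a minimal sketch of the kind of extraction I mean. It assumes the OpenAI Python SDK; the model name, the ticket text, and the field names are made-up placeholders, not anything from this thread:

    # Minimal sketch of semi-structured -> structured extraction.
    # Assumes the OpenAI Python SDK; the model name, ticket text, and
    # field names below are illustrative placeholders only.
    import json
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    ticket = (
        "Hi, my name is Dana. I was double-billed on invoice #4471 last "
        "Tuesday and I'd like a refund to my original card. "
        "Account email: dana@example.com"
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        response_format={"type": "json_object"},  # ask for valid JSON back
        messages=[
            {
                "role": "system",
                "content": (
                    "Extract customer_name, account_email, invoice_number, "
                    "and requested_action from the ticket. Reply with JSON only."
                ),
            },
            {"role": "user", "content": ticket},
        ],
    )

    record = json.loads(response.choices[0].message.content)
    print(record)  # e.g. {"customer_name": "Dana", "invoice_number": "4471", ...}

The specific SDK doesn't matter; the point is that the blob-to-fields step that used to need a hand-written parser now mostly works with a single prompt.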
No one (at least no serious person) is saying ChatGPT is Immanuel Kant or Ernest Hemingway. The fact that we still have sherpas doesn't make trains any less useful or interesting.
I think what's surprising people is how a rough, first-order approximate solution (produced with little cognitive effort) is good enough for 90% of everyday tasks.
This is what I've been saying. We're not so much learning that LLMs are intelligent as learning that a lot of what we think of as intelligence is actually just pattern matching.
I think this post is specifically an answer to yet another "AGI is just around the corner" post that made waves recently.
Fundamentally, I think that many problems in white-collar life are text comprehension problems or basic automation problems. Further, they often don't even need to be done particularly well. For example, we've long decided that it's OK for customer support to suck, and LLMs are now an upgrade over an overseas call center worker who must follow a rigid script.
So yeah, LLMs can be quite useful and will be used more and more. But this is also not the discourse we're having on HN. Every day, there's some AGI marketing headline, including one at #1 right now from OpenAI.
The AI-assisted GPT theoretical physics derivation? There's literally no mention of AGI in the article, and it's pretty tame, especially considering it's a PR piece by OpenAI.
It's referencing https://shumer.dev/something-big-is-happening
It's a press release from a vendor that constantly talks about AGI, and it's meant to showcase the capabilities of an unreleased model in an experiment you can't replicate. But my comment was less about the link and more about the discussion, which has immediately bifurcated into the "it's done and dusted" and "this is overhyped and LLMs are useless" camps.
If I showed you a new species of animal that does exactly what an LLM does, what would you say? Let’s say it’s a bird: you ask it a question, and it returns an expert-level human response. What if these new birds were everywhere?
That’s very big.
I still can’t believe parrots are real.
The long tail is fatter and longer than many people expect.
AlphaZero was a special/unusual case, I would say an outlier.
FSD is still not ready. People have watched it working for ten years, slowly climbing the asymptote, but it still hasn't reached human-level driving, and it may take a while.
I use AI models for coding every day; I am not a Luddite. But I don't feel the AGI, not at all. What I am seeing is a nice tool that is seriously over-hyped.
The article feels, I don't know… maybe like someone calmly sitting in a rocking chair staring at the sea. Then the camera turns, and there's an erupting volcano in the background.
> If it were a life-or-death decision, would you trust the model? Judgement, yes, but a decision? No, they are not capable of making a decision, at least not important ones.
A self-driving car with a vision-language-action model inside buzzes by.
> It still fails when it comes to spatial relations within text, because everything is understood in terms of relations and correspondences between tokens as values themselves, and apparent spatial position is not a stored value.
A large multimodal model listens to your request and produces a picture.
> They'll always need someone to take a look under the hood, figure out how their machine ticks. A strong, fearless individual, the spanner in the works, the eddy in the stream!
GPT‑5.3‑Codex helps debug its own training.
> GPT‑5.3‑Codex helps debug its own training
Doesn't this support the author's point? It still required humans.
Is that the hang-up? Are people really so unimaginative that they can't see that none of this was here five years ago, and that now this machine is -- if still only in part -- assembling itself?
And the details involved in closing some of the rest of that loop do not seem THAT complicated.
You don't know how involved it was. I would imagine it helped debug some tools that they used to create it. Getting it to actually produce a more capable model end-to-end, without any human help, absolutely is that complicated.
> A self-driving car with a vision-language-action model inside buzzes by.
Vision-action maybe. Jamming language in the middle there is an indicator you should run for public office.
This has been popping up quite a bit but as far as I can tell, neither the original thought piece nor (therefore) the critiques are particularly above-the-bar?
Something Big Is Coming (Annotated by Ed Zitron) [pdf] - https://news.ycombinator.com/item?id=47007991 - Feb 2026 (31 comments)
Something Big Is Happening - https://news.ycombinator.com/item?id=46973011 - Feb 2026 (74 comments)
The frequency of the same subject matter with no apparent evolution is spam posting, in my opinion.
Interesting how little discussion “Something Big Is Happening” got, considering it surpassed 100 million views…
The original [0] that this is in response to, essentially posits that something you cannot afford to ignore is going on, especially if you work in a white collar job. Admittedly a little bit of FUD [1] is going on with the "AI is coming for your job" narrative, but the core idea, that this is a fast moving field where it's worth re-examining your assumptions from time to time, appears to be sound and hard to disagree with.
This article has a confrontational title, but the point made here doesn't seem incompatible with the original: the author is confronting the FUD directly, which is understandable but perhaps not quite as useful as refuting the core thesis, which is that something you cannot afford to ignore is happening.
In fact, both these people seem to be in agreement that you need to keep an eye on this ball, they just have a "panic" versus "don't panic" framing. Should you panic in an emergency? Research says no [2].
[0] https://shumer.dev/something-big-is-happening
[1] https://en.wikipedia.org/wiki/Fear,_uncertainty,_and_doubt - note the original author is an AI founder
[2] https://pmc.ncbi.nlm.nih.gov/articles/PMC9180869/
https://news.ycombinator.com/item?id=47008929
The strawberry/seahorse emoji meme is like a century old in AI-time.
The mere idea that you could derive new correspondence to an emerging reality by rearranging fragments of the past is just insane to me.
Isn't "rearranging fragments of the past" what humans do? We call it creativity.
In part, but we also actually live in the present. Our ideas are being dynamically confronted with reality instead of having a fixed arrangement.*
The LLM couldn’t be enhanced by dynamic training because that’s already what humans do. It’s by design that their “guidelines” are fixed.
That is one theory of creativity. It is extremely far from proven.
Too few “r”s.
Thus making humanity an ever-receding area of AI-incompetence.
This is a reference to the unaccountably viral article from a couple of days ago, discussed here:
Something big is happening (97 points, 77 comments)
https://news.ycombinator.com/item?id=46973011
These responses to AI seem to be from people who have not experienced what AI can do, and are therefore skeptical.
But I have personally and repeatedly used AI instead of humans across domains.
AI displacement isn’t a prediction. It’s here.
The original seems to be arguing, among other things, that the singularity has begun because AI has been employed to improve AI development tooling. I can see it both ways, but skepticism on these claims is natural and warranted. I agree with you that there's no shortage of people underestimating the importance of this moment in history.