> Give an agent the right interfaces and it becomes less conversational and more ambient. It no longer needs to constantly ask, explain, summarize, and negotiate. It can stay in the background, react to changes, and make steady progress with less supervision and less noise. That is closer to Weiser’s vision: calm technology, but for machines.
I tend to agree quite a bit.
I created an ambient background agent for my projects that does just that.
It sits there, in the background, constantly analysing my code and opening PRs to make it better.
The hard part is finding a definition of "better"; for now it is whatever makes the linter and type checker happy.
But overall it is a pleasure to use.
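For what it's worth, the core of the loop is simple. Here's a minimal sketch (all names are mine, hypothetical, not from any real tool): only turn a model-proposed patch into a PR when it strictly improves the signals you can measure mechanically, i.e. the linter and the type checker.

```typescript
// The gate an ambient "make it better" agent can use: a proposed patch
// only becomes a PR when it strictly improves the mechanically
// measurable signals (lint errors + type errors).

interface CheckResult {
  lintErrors: number;
  typeErrors: number;
}

// "Better" = no regression on either signal, progress on at least one.
function isImprovement(before: CheckResult, after: CheckResult): boolean {
  const noRegression =
    after.lintErrors <= before.lintErrors &&
    after.typeErrors <= before.typeErrors;
  const someProgress =
    after.lintErrors < before.lintErrors ||
    after.typeErrors < before.typeErrors;
  return noRegression && someProgress;
}

// One tick of the ambient loop: measure, let the model propose a patch,
// re-measure, and only open a PR when the gate passes.
async function ambientTick(
  measure: () => Promise<CheckResult>,
  proposePatch: () => Promise<void>,
  openPullRequest: () => Promise<void>,
): Promise<boolean> {
  const before = await measure();
  await proposePatch();
  const after = await measure();
  if (isImprovement(before, after)) {
    await openPullRequest();
    return true;
  }
  return false; // discard the patch: no PR, no noise
}
```

The gate is the whole trick; everything else (cron trigger, git plumbing, PR API) is glue.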
Just take a look at the pull requests and issues opened against any repository that's popular with LLM agents to understand how well that works.
If there's one takeaway, it's that these agents need more, not less, oversight. I don't agree at all with the "just remove a few tools and you can remove the human from the loop" approach. It only reduces the blast radius when the agent gets it wrong, not the fact that it gets it wrong.
Yeah, but my projects are personal and not popular.
I crafted the AI loop to do exactly what I would be doing manually.
Out of 10 PRs, 6 or 7 get merged. The others simply get closed.
Yeah, my experience is that this works for a short time, and then after a few weeks your codebase is a complete disaster.
I'd pay more for deterministic, explainable, and fast software without agents. The value of computers is that they do tasks repeatably, reliably, and at blinding speed.
This stuff is negative value.
Right, and modulo the agents, they're just describing event-driven architecture, like Lambda.
The ambient-agents premise lands and is thought-provoking.
But the more you read the article, the more the point gets lost. The prescriptions given aren't ambient:
CLI: a good command-line interface makes it easy for an agent loop to interact with your system and saves tokens.
Specs: declarative configs, schemas, manifests. Artifacts that state the desired outcome, not the steps.
Reconciliation loops: you declare the target state and let the system continuously converge toward it, detecting drift.
(It seems you're still talking to the AI with the above, and you'll need to refine it just like a conversation; it's just not happening synchronously in chat.)
The gripe seems to be specifically with being able to chat with the AI. Yes, ideally the AI just knows what to do. But the chat interface is also the reason every Bob and Sarah has ChatGPT in their pocket. It's also just growing pains.
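For reference, the reconciliation pattern the article prescribes fits in a few lines. An illustrative sketch (names are mine, not the article's): declare a desired state, diff it against what's observed, and apply only the actions needed to converge.

```typescript
// A minimal reconciliation loop: desired state is declarative data,
// and each pass diffs it against observed state and converges.

type State = Record<string, string>;

interface Action {
  kind: "set" | "delete";
  key: string;
  value?: string;
}

// Compute the actions that move `observed` toward `desired`.
function diff(desired: State, observed: State): Action[] {
  const actions: Action[] = [];
  for (const [key, value] of Object.entries(desired)) {
    if (observed[key] !== value) actions.push({ kind: "set", key, value });
  }
  for (const key of Object.keys(observed)) {
    if (!(key in desired)) actions.push({ kind: "delete", key });
  }
  return actions;
}

// One reconciliation pass. Returns true when drift was detected
// (and corrected); a controller would run this on a timer or on events.
function reconcile(desired: State, observed: State): boolean {
  const actions = diff(desired, observed);
  for (const a of actions) {
    if (a.kind === "set") observed[a.key] = a.value!;
    else delete observed[a.key];
  }
  return actions.length > 0;
}
```

Note there's no conversation anywhere in it: the spec is the interface, and "drift detected" is the only signal that escalates to a human.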
I like using them for coding, but I'm wary of making software that depends on an unreliable, expensive remote API. I'd rather have the agent write code and have no runtime dependency.
It might be nice to have something simple and cheap for basic text classification, but I'm not sure what to use. (My websites are written in Deno.)
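One zero-dependency option along those lines is a tiny multinomial naive Bayes classifier. A sketch in plain TypeScript (no imports, so it should run under Deno as-is; this is my own toy code, not any particular library):

```typescript
// Multinomial naive Bayes with add-one (Laplace) smoothing: "simple and
// cheap" text classification with no runtime API dependency.

class NaiveBayes {
  private counts = new Map<string, Map<string, number>>(); // label -> word -> count
  private totals = new Map<string, number>(); // label -> total word count
  private docs = new Map<string, number>(); // label -> document count
  private vocab = new Set<string>();

  private tokenize(text: string): string[] {
    return text.toLowerCase().match(/[a-z']+/g) ?? [];
  }

  train(text: string, label: string): void {
    const words = this.counts.get(label) ?? new Map<string, number>();
    for (const w of this.tokenize(text)) {
      words.set(w, (words.get(w) ?? 0) + 1);
      this.totals.set(label, (this.totals.get(label) ?? 0) + 1);
      this.vocab.add(w);
    }
    this.counts.set(label, words);
    this.docs.set(label, (this.docs.get(label) ?? 0) + 1);
  }

  classify(text: string): string {
    const totalDocs = [...this.docs.values()].reduce((a, b) => a + b, 0);
    let best = "";
    let bestScore = -Infinity;
    for (const [label, words] of this.counts) {
      // Log prior plus smoothed log likelihood of each token.
      let score = Math.log(this.docs.get(label)! / totalDocs);
      const total = this.totals.get(label)! + this.vocab.size;
      for (const w of this.tokenize(text)) {
        score += Math.log(((words.get(w) ?? 0) + 1) / total);
      }
      if (score > bestScore) {
        bestScore = score;
        best = label;
      }
    }
    return best;
  }
}
```

It's nowhere near an LLM for hard inputs, but for coarse buckets (spam/ham, topic routing) it's deterministic, explainable, and fast, which is sort of the point upthread.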
> Agentic management software is all the hype today: What started with Moltbot and OpenClaw now has a lot of competition: ZeroClaw, Hermes, AutoGPT etc.
Moltbot is OpenClaw, and AutoGPT was born significantly earlier. I just couldn't read past the first paragraph; I've entirely lost trust in whatever/whoever wrote it.
The Hermes agent dates back to at least September last year too, pre-dating Moltbot/OpenClaw by a couple of months: https://github.com/NousResearch/hermes-agent/commit/17608c11...
It’s marketing. They’re selling some change management solution, so obviously they advocate for showing AI agents only changes, rather than the full context.
Doesn’t mean it’s a good idea, though.
not yet coworkers*
you wouldn't download a coworker
I would, along with a car for the coworker to drive me around in.