One thing I have always wanted to do is cancel a remotely executing AI agent that I kicked off, as it streams its response part by part (a part could be words, a list of URLs, or whatever you want the FE to display). A good example is a web-researcher agent that searches and fetches web pages remotely and sends the results back to the local sub-agent to summarize. This is something claude-code in the terminal does not quite provide. Would this be trivial to build in Instant?
Here is how I built it in a WUI: I sent SSE events from Server -> Client streaming web-search progress, and the client could cancel via an `x` box on the "parent" widget, making a simple REST call with the `id` from an SSE event. The `id` could belong to the parent web-search or to specific URLs being fetched. Then whatever was yielding your SSE lines would check the db and cancel the send (assuming it had not sent all the words already).
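That pattern — an SSE producer that re-checks a db-backed cancel flag before each chunk — can be sketched like so (all names here are illustrative, and an in-memory set stands in for the db):

```typescript
// Stand-in for the db table holding cancel flags, keyed by stream id.
const cancelled = new Set<string>();

// Called by the REST endpoint behind the `x` box.
function requestCancel(id: string): void {
  cancelled.add(id);
}

// The generator backing the SSE response: it re-checks the flag
// before each chunk and stops sending as soon as it appears.
function* streamChunks(id: string, chunks: string[]): Generator<string> {
  for (const chunk of chunks) {
    if (cancelled.has(id)) return; // user clicked cancel; stop sending
    yield chunk;
  }
}
```

A cancel that lands mid-stream simply ends the generator: chunks already yielded were sent, the rest never are.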
If I understood you correctly:
You kick off an agent. It reports work back to the user. The user can click cancel, and the agent gets terminated.
You are right, this kind of UX comes very naturally with Instant. If an agent writes data to Instant, it will show up right away to the user. If the user clicks an `X` button, it will propagate to the agent.
The basic sync engine would handle a lot of the complexity here. If the data streaming gets more complicated, you may want to use Instant streams. For example, if you want to convey updates character by character, you can use Instant streams as an included service, which does this extremely efficiently.
More about the sync engine: https://www.instantdb.com/product/sync
More about streams: https://www.instantdb.com/docs/streams
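To make the shape of this concrete, here is a toy in-memory store with subscriptions — not the Instant API, just an illustration of how a sync engine lets the agent's writes and the user's `X` click travel over the same channel:

```typescript
// Toy "sync store": every write is pushed to every subscriber.
// Illustrative only — NOT the Instant API.

type Run = { status: string; cancelRequested: boolean };

class SyncStore {
  private runs = new Map<string, Run>();
  private subs: Array<(id: string, run: Run) => void> = [];

  // Agent and UI both subscribe to the same stream of writes.
  subscribe(fn: (id: string, run: Run) => void): void {
    this.subs.push(fn);
  }

  // Merge a partial update into the run, then notify subscribers.
  write(id: string, patch: Partial<Run>): void {
    const run: Run = {
      status: "pending",
      cancelRequested: false,
      ...this.runs.get(id),
      ...patch,
    };
    this.runs.set(id, run);
    for (const fn of this.subs) fn(id, run);
  }

  read(id: string): Run | undefined {
    return this.runs.get(id);
  }
}
```

The agent writes `{ status: "searching" }` as it works; the `X` button writes `{ cancelRequested: true }`; the agent checks that flag between steps and bails out.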
This is super cool and exactly what I've been looking for, for personal projects I think. I wanna try it out, but the "agent" part could be more seamless. How does my coding agent know how to work this thing?
I'd suggest including a skill for this, or if there's already one linking to it on the blog!
Good idea! I went ahead and updated the essay:
https://github.com/instantdb/instant/pull/2530
It should be live in a few minutes.
We do have a skill!
npx skills add instantdb/skills
Would recommend doing `bunx/pnpx/npx create-instant-app` to scaffold a project too!
Can I view the source code of this skill / install it manually? I am incredibly not a fan of automated installers for this type of stuff.
You can! The skill lives here
https://github.com/instantdb/skills
I wonder if people really need this. How many people are really building multiplayer apps like Figma, Linear etc? I'm guessing 99% are CRUD and I doubt that will change. Even if so, would you want to vendor lock into some proprietary technology rather than build with tried and tested open source components?
> really building multiplayer apps like Figma, Linear
I think there's two surprises about this:
1. If it was easier to make apps multiplayer, I bet more apps would be. For example, I don't see why Linear has to be multiplayer, but other CRUD apps don't.
2. When the abstraction is right, building apps with sync engines is easier than building traditional CRUD apps. The Linear team mentioned this themselves here: https://x.com/artman/status/1558081796914483201
For what it's worth, Instant is 100% open source!
https://github.com/instantdb/instant
Yeah I kinda agree. Considering llms write most of the code today, the need for fancy tech is lower than ever. A good old crud app looks like a perfect fit for ai - it's simple, repetitive, and ai is great at sql. Go binary for backend and react for frontend - covers 99.9% of use cases with basically zero resource usage. 5 usd node will handle 100k mau without breaking a sweat.
> 5 usd node will handle 100k mau without breaking a sweat.
One problem you may encounter with the 5 usd node: how do you handle multiple projects? You could put them all in one VM, but that setup can get esoteric, and as you look for more isolation, the processes won't fit on such a small machine.
With Instant, you can make unlimited projects. Your app also gets a sync engine, which is both good for your users, and at least in our experiments, the AIs prefer building with it.
And if you ever want to get off Instant, the whole system is open source.
I still resonate with a good Hetzner box though, and it can make sense to self-host or to use more tried-and-true tech.
For what it's worth, with Instant you would get a lot more support for easy projects. At least in our benchmarks, AI
For people like me — who are kind of familiar with how react/jetpack compose/flutter-like frameworks work — I recall using react widgets/composables which seamlessly update when they register to receive updates to the underlying datamodel. The persistence boundary in these apps was the app/device where it was running. The datamodel was local. You still had to worry about pushing data updates to servers and back to get to other devices/apps.
Instant crosses that persistence boundary: your app can propagate updates to anyone who has subscribed to the abstract datastore — which is on a server somewhere, so you the engineer don't have to write that code. Right?
But how is this different/better than things like, i wanna say, vercel/nextjs or the like that host similar infra?
I would say NextJS focuses a lot more on server-rendering. If you use the app router, the default path is to render as much as you can on the server.
This can work great, but you lose some benefits: your pages won't work offline, they won't be real-time, and if you make changes, you'll have to wait for the server to acknowledge them.
Instant pushes more of the work to the frontend. You make queries directly in your frontend, and Instant handles all the offline caching, the real-time, and the optimistic updates.
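For readers who want to see the mechanics, here is a hedged sketch of the optimistic-update layer described above (illustrative names only; Instant implements this for you under the hood):

```typescript
// An optimistic cache: pending local writes are layered over
// confirmed server state, so the UI never waits on the network.

type Todo = { id: string; done: boolean };

class OptimisticCache {
  private confirmed = new Map<string, Todo>();
  private pending = new Map<string, Todo>();

  // Reads see pending writes first, then confirmed state.
  get(id: string): Todo | undefined {
    return this.pending.get(id) ?? this.confirmed.get(id);
  }

  // Apply immediately; the UI updates before the server replies.
  applyOptimistic(todo: Todo): void {
    this.pending.set(todo.id, todo);
  }

  // Server acknowledged: promote the pending write to confirmed.
  ack(id: string): void {
    const todo = this.pending.get(id);
    if (todo) {
      this.confirmed.set(id, todo);
      this.pending.delete(id);
    }
  }

  // Server rejected: drop the optimistic layer and fall back.
  reject(id: string): void {
    this.pending.delete(id);
  }
}
```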
You can have the best of both worlds though. We have an experimental SSR package, which to our knowledge is the first to combine _both_ SSR and real-time. The way it works:
1. Next SSRs the page
2. But when it loads, Instant picks it up and makes every query reactive.
More details here: https://www.instantdb.com/docs/next-ssr
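Roughly, the two steps look like this (a toy sketch with made-up names, not the actual package's API):

```typescript
// Step 1: the server runs the query once and embeds the snapshot
// in the rendered HTML payload.

type QueryResult = { todos: string[] };

function renderServerSnapshot(runQuery: () => QueryResult): QueryResult {
  return runQuery();
}

// Step 2: on load, the client starts from that snapshot (so the first
// paint matches the SSR HTML), then live updates replace it.
class ReactiveQuery {
  current: QueryResult;

  constructor(snapshot: QueryResult) {
    this.current = snapshot; // hydrate from the server render
  }

  onUpdate(next: QueryResult): void {
    this.current = next; // every later change streams in real-time
  }
}
```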
They actually deliver on the promise of "relational queries && real-time," which is no small feat.
Though, their console feels like it didn't get the love that the rest of the infra / website did.
Congrats on the 1.0 launch! I'm excited to keep building with Instant.
Thank you! We spent a lot of time with the demos on the home page, the essays page, and upgrading the docs.
We're going to redesign the dashboard in the next few weeks.
One interesting observation from our users: though they use the dashboard less in some ways (the AI agents spin up apps and make schema changes for them), we found people use it _more_ in other ways. Instant comes with an Explorer component, which lets you query your data. We found users want to engage with that a lot more.
with a huge multi-tenant database, how do you deal with noisy neighbors? indexes are surely necessary, which impose a non-trivial cost at scale.
There are two answers: data structures, and operations.
1. Data structures
One data structure that helps a lot is the grouped queue.
I cover it in the essay here:
https://www.instantdb.com/essays/architecture#:~:text=is%20t...
To summarize:
In places where we process throughput, we generally put a grouped queue in front, with a threadpool that takes from it. The mechanics of this queue make it so that if there's one noisy neighbor, it can't hog all the threads.
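A minimal sketch of that grouped queue, as I read the essay (not Instant's actual code): each group gets its own sub-queue, and workers rotate across groups, so a tenant with a huge backlog still only gets one slot per pass:

```typescript
// Grouped queue: fair round-robin over per-group sub-queues.

class GroupedQueue<T> {
  private groups = new Map<string, T[]>();
  private order: string[] = []; // rotation over groups that have work

  push(group: string, item: T): void {
    if (!this.groups.has(group)) {
      this.groups.set(group, []);
      this.order.push(group); // new group joins the rotation
    }
    this.groups.get(group)!.push(item);
  }

  // Take one item from the next group in rotation; a group with
  // remaining work goes to the back of the line, not the front.
  pop(): T | undefined {
    while (this.order.length > 0) {
      const group = this.order.shift()!;
      const items = this.groups.get(group)!;
      const item = items.shift();
      if (items.length > 0) this.order.push(group);
      else this.groups.delete(group);
      if (item !== undefined) return item;
    }
    return undefined;
  }
}
```

With a threadpool draining `pop()`, a noisy neighbor's 10,000 queued jobs interleave with everyone else's instead of monopolizing the workers.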
2. Operations
There's also a big part that's just about operating the system. If you think about what happens in big companies, they are effectively dealing with noisy neighbors all the time. You keep an eye on the system at all times, and manage any spikes.
The benefit of centralizing the operations is that when a small company gets big quickly, we likely already have the buffer to help them scale. The drawback to all systems like this is that sometimes we get it wrong.
When we get it wrong though, we write it down, and improve our operations.
Is InstantDB no longer about local-first or is the AI angle just a marketing thing?
This looks like a blog post highlighting that this can be used for vibecoded apps, not necessarily a pivot on the product.
We built Instant to optimize for two things:
We wanted to make a tool that (a) would make it easy to build delightful apps, and that (b) builders would find easy to use.
This got us into making things that touch both local-first and AI.
On the local-first side, we took on problems like offline mode, real-time, and optimistic updates.
On the AI side, we built a multi-tenant abstraction, so you can spin up as many apps as you like, and focused on great DX/AX so agents found Instant easy to use too.
Looks very nice! I'll give it a spin for prototypes.
Would love to check out /docs but it's currently a 404.
Docs should be working now! If anyone else has issues please let us know!
how is this better than vercel?
Stopa answers this here https://news.ycombinator.com/item?id=47711254
But pairing Instant with Vercel works great too! We have a tutorial on how you can build an app with Instant and deploy it to Vercel here:
https://www.instantdb.com/tutorial