I wonder how this compares to running SQLite off of an S3-backed ZeroFS: https://github.com/Barre/ZeroFS
I need to bring writes to my version of the VFS.
I'm still waiting to see how they'll prevent accidental corruption from multiple writers; there's a PR implementing write leases, but I'm not sure that's the direction they'll take.
That said, pausing local polling when writes are enabled - i.e. assuming you're the only writer - makes sense; it's a good idea that hadn't occurred to me.
Ideally, I'd like to offer durability on fullfsync. I think this is feasible. In a concurrent system (single host), while a writer is waiting for durability confirmation, readers can continue reading the previous state, and the next writer can read the committed - but not yet durable - data and queue its writes to be batched. You can have as many pending writes as you're willing to hold connections for.
Litestream author here. Currently we're handling the "single writer" issue outside of Litestream. We have controls in our internal systems that make it work well. But yes, the lease PR is the direction we're looking at going.
I'm not sure you can have readers see something different from what writers see. When SQLite promotes a read lock to a write lock under WAL, it checks whether any of the data has changed and fails the transaction if it has.
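A minimal way to see that behavior from two sqlite3 shells on a WAL-mode database (the table t is just a placeholder):

    -- connection A: deferred transaction; the read snapshot is taken on the first SELECT
    BEGIN;
    SELECT count(*) FROM t;

    -- connection B: commit a write while A's snapshot is still open
    BEGIN IMMEDIATE;
    INSERT INTO t VALUES (1);
    COMMIT;

    -- connection A: try to promote the read lock to a write lock; the snapshot is
    -- now stale, so this fails with SQLITE_BUSY_SNAPSHOT and the transaction has
    -- to be rolled back and retried
    INSERT INTO t VALUES (2);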
I'm glad this got re-upped; I was sad there wasn't much (any?) discussion when this was posted a few days ago.
I find the ways people extend or build on top of SQLite fascinating. I use it in a few apps but not on the server (yet). Multi-writer for something like this would be amazing (incredibly difficult to do well, obviously). I work on a home-rolled distributed database (multi-writer), but it has numerous downsides/issues, so I love seeing how other people approach and solve these things.
I'm a hobbyist who doesn't have any S3-compatible storage. (That I know of, anyway.) What's the easiest way to try it out?
Litestream author here. You can use the built-in file replication. It'll replicate all your database changes to another path on disk. I use it a lot for testing things out:
https://litestream.io/guides/file/
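For example, a minimal config along the lines of that guide looks something like this (the paths are placeholders; double-check the keys against the docs):

    # litestream.yml
    dbs:
      - path: /var/lib/app/db.sqlite
        replicas:
          - type: file
            path: /mnt/backup/db

Then, if I have the flag right, it's just:

    litestream replicate -config litestream.yml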
It also supports:
- WebDAV
- file
- SFTP (basically SSH; I have used Tailscale and SSH to a Linux laptop)
Easiest is probably running a local S3-compatible store like MinIO (in Docker) and pointing Litestream at that endpoint. If you want hosted, the R2/B2 free tiers work too. It only needs S3 credentials plus an endpoint.
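Roughly, that setup looks like this (bucket name, keys, and ports are placeholders, and the exact config keys are worth checking against the Litestream S3 guide):

    # start MinIO locally and create a bucket via its console
    docker run -p 9000:9000 -p 9001:9001 quay.io/minio/minio server /data --console-address ":9001"

    # litestream.yml - point an s3 replica at the local endpoint
    access-key-id: minioadmin
    secret-access-key: minioadmin
    dbs:
      - path: /var/lib/app/db.sqlite
        replicas:
          - type: s3
            bucket: mybucket
            path: db
            endpoint: http://localhost:9000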
I love Litestream. I've used it with PocketBase and it works. SQLite is a great building block for almost everything.
Does anyone know whether you could use this to stitch together a bunch of .db files (that share the same schema) in an ad-hoc way?
For example, if I decided I wanted to synchronize my friend's .db file, could I enable this using litestream? And, what if my friend wanted to sync two of his friends' .db files, but I'm only interested in his changes, not theirs? I assume this kind of fan out is not possible, but it would be fun if so.
If you can have multiple writers to a single database then you'd need to look at something like cr-sqlite[1], which uses CRDTs to resolve conflicts. If you're just replicating separate databases then you might be able to replicate each one using Litestream and then use SQLite's ATTACH[2] to connect them together (roughly as sketched below). There is a limit on how many databases you can attach in a single session, though.
[1]: https://github.com/vlcn-io/cr-sqlite
[2]: https://sqlite.org/lang_attach.html
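For what it's worth, the ATTACH side of that looks roughly like this (the file names and the notes table are hypothetical; the replicated copy should be treated as read-only):

    -- open your own database, then attach the Litestream-replicated copy of your friend's
    ATTACH DATABASE 'friend-replica.db' AS friend;

    -- with a shared schema you can query across both
    SELECT * FROM main.notes
    UNION ALL
    SELECT * FROM friend.notes;

    DETACH DATABASE friend;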
- Does anyone know what the equivalent of Litestream is for Postgres?
- I want to be able to pg_dump/barman my database to S3 by streaming it. Is that possible?
There is wal-g, which ships WAL files to S3, and you can spin up any number of instances off of that. It works great for catching up secondary servers.
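For a sense of the shape of it (bucket, paths, and credentials below are placeholders; see the wal-g docs for the full option list):

    # environment for the postgres user
    export WALG_S3_PREFIX=s3://my-bucket/pg
    export AWS_ACCESS_KEY_ID=<access key>
    export AWS_SECRET_ACCESS_KEY=<secret key>

    # postgresql.conf on the primary: push WAL segments to S3
    archive_mode = on
    archive_command = 'wal-g wal-push %p'

    # periodic base backups
    wal-g backup-push "$PGDATA"

    # on a secondary that needs to catch up: restore the latest base backup,
    # then replay WAL from S3 via restore_command
    wal-g backup-fetch "$PGDATA" LATEST
    restore_command = 'wal-g wal-fetch %f %p'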
Just took a quick look at it - is that more like pg_dump or Barman?
Did Postgres's WAL archiving feature not work for you?
You might be interested in ZeroFS [0]; it works great with Postgres [1].
To achieve what you describe, you should just be able to set up a Postgres replica on top of ZeroFS.
[0] https://github.com/Barre/ZeroFS
[1] https://github.com/Barre/ZeroFS?tab=readme-ov-file#postgresq...
> We recently unveiled Sprites. If you don’t know what Sprites are, you should just go check them out. They’re one of the coolest things we’ve ever shipped.
Been about two weeks now since the linked article was published.
Hey fly, what are the internal usage numbers on "sprites"?
Is anyone using them? To do what? Worth the effort?
Sometimes Ben writes a sleeper. This is really more of a Litestream case study (I guess technically an announcement of a Litestream feature, but a very situational one) than a Sprites thing.
Sprites are the best thing we've ever built. Extremely worth it.
Yeah, it's now been said that they're the "coolest" and "best", but the question was: is anyone using them, and for what?
https://news.ycombinator.com/item?id=46890881
Obviously, people are using them, but I feel like you're trying to bait me into talking about something that doesn't have anything to do with running a SQLite database directly off an S3 bucket. I'm sure we'll write something else soon where it'll make more sense to talk about how Sprites are going.
If you haven't played with them yet, my best and for now only answer is: go kick the tires. They're very fun.
It's common practice on this site to hide evidence of unsuccessful products behind vague language like "coolest" and "best".
I was expecting that pattern to be dispelled; you chose otherwise.
Sorry, not taking the bait.
Why do you keep saying "bait"?!?
[sorry for the weird timestamps - the OP was submitted a while ago and I just re-upped it and it hit a dumb bug which I haven't gotten around to fixing yet]