Great work! Can you manage back pressure towards webhook senders at the interface? For example, responding with 429 when incoming volume is approaching or exceeding throughput capacity.
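Something along these lines is what I have in mind — a rough sketch only, assuming a Flask front end and a hypothetical `webhook_events` table with a `status` column, not the project's actual code:

```python
# Rough sketch of the 429 back-pressure idea, not the project's actual code:
# reject new webhooks once the backlog of unprocessed events passes a threshold.
from flask import Flask, jsonify, request
import psycopg2

app = Flask(__name__)
MAX_BACKLOG = 10_000  # hypothetical threshold; tune to real throughput

def pending_count() -> int:
    # Assumes a hypothetical webhook_events table with a status column.
    # Connection handling is simplified for the sketch.
    with psycopg2.connect("dbname=webhooks") as conn, conn.cursor() as cur:
        cur.execute("SELECT count(*) FROM webhook_events WHERE status = 'pending'")
        return cur.fetchone()[0]

@app.route("/webhooks", methods=["POST"])
def receive_webhook():
    if pending_count() >= MAX_BACKLOG:
        # Tell the sender to back off and retry later.
        return jsonify(error="backlog full, retry later"), 429, {"Retry-After": "30"}
    payload = request.get_json(force=True)
    # ... persist the payload for later processing ...
    return jsonify(status="accepted"), 202
```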
Why Postgres for this? Feels heavy for a queue. I tried something similar with Redis and it was way simpler.
Well yeah, for a simple queue where messages live in memory and disappear once consumed, there are plenty of technologies that work (RabbitMQ, Redis...). Here the goal is not just to build a queue but rather to have something persisted where you can inspect the payload and replay it even days after the original event was sent.
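To make that concrete, here is a rough sketch of the kind of thing I mean — the table and column names are made up for illustration, not the project's real schema:

```python
# Rough sketch, not the project's real schema: persist every webhook payload
# so it can be inspected and replayed long after the original event arrived.
import psycopg2
from psycopg2.extras import Json

SCHEMA = """
CREATE TABLE IF NOT EXISTS webhook_events (
    id          bigserial   PRIMARY KEY,
    payload     jsonb       NOT NULL,
    received_at timestamptz NOT NULL DEFAULT now(),
    status      text        NOT NULL DEFAULT 'pending'  -- pending / delivered / failed
);
"""

def store_event(conn, payload: dict) -> int:
    # Keep the raw payload; nothing is lost if a consumer crashes mid-delivery.
    with conn.cursor() as cur:
        cur.execute(
            "INSERT INTO webhook_events (payload) VALUES (%s) RETURNING id",
            (Json(payload),),
        )
        return cur.fetchone()[0]

def replay_since(conn, days: int):
    # Fetch events from the last N days, consumed or not, so they can be
    # pushed through the delivery pipeline again.
    with conn.cursor() as cur:
        cur.execute(
            "SELECT id, payload FROM webhook_events "
            "WHERE received_at > now() - make_interval(days => %s) "
            "ORDER BY received_at",
            (days,),
        )
        return cur.fetchall()

if __name__ == "__main__":
    conn = psycopg2.connect("dbname=webhooks")  # hypothetical DSN
    with conn, conn.cursor() as cur:
        cur.execute(SCHEMA)
    with conn:
        store_event(conn, {"event": "invoice.paid", "id": 42})
        for event_id, payload in replay_since(conn, days=7):
            print(event_id, payload)
```

Because the payload sits in a plain table, inspection and replay are just SQL queries, which is hard to get from an in-memory broker once the message has been consumed.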