How does this differ from asyncio.Queue in terms of backpressure or cancellation semantics?
If you use `culsans.Queue().async_q` as a direct replacement for `asyncio.Queue()`, then there is essentially no difference. The difference becomes apparent when you use additional features:
1. If checkpoints are enabled (on by default under Trio, or explicitly via `aiologic.lowlevel.enable_checkpoints()`), every call that is not explicitly non-blocking can be cancelled, even when no waiting is required. By contrast, running `await queue.put(await queue.get())` in an infinite loop with `queue = asyncio.Queue()` never yields back to the event loop while the queue is neither empty nor full (0 < size < maxsize). As a result, no other asyncio tasks ever get to run, and the loop cannot be cancelled (see PEP 492: `await` does not by itself guarantee a suspension point).
2. Under multithreading, where race conditions become possible, method calls are synchronized via the underlying lock (as in `queue.Queue`). Such synchronization can briefly block the event loop, although this is rarely a bottleneck (Janus uses the same approach). In general, it also delays task cancellation and timeout handling while another thread is still holding the lock. If you need extremely fast and scalable queues, `aiologic.SimpleQueue` may be the best option: it uses no form of internal state synchronization at all.
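The starvation described in point 1 can be reproduced with plain `asyncio`. The sketch below bounds the infinite loop to 1000 iterations so it terminates; with 0 < size < maxsize, neither `get()` nor `put()` has to wait, so none of the awaits suspends and a concurrently scheduled task never runs until a real suspension point is reached:

```python
import asyncio

async def main():
    queue = asyncio.Queue(maxsize=2)
    await queue.put(0)  # size is now 1, so 0 < size < maxsize holds
    ran = False

    async def other():
        nonlocal ran
        ran = True

    asyncio.create_task(other())  # scheduled, but needs the loop to run

    # Bounded stand-in for the infinite loop: no iteration ever suspends,
    # because get() finds an item and put() finds free space immediately.
    for _ in range(1000):
        await queue.put(await queue.get())

    ran_during_loop = ran           # still False: other() was starved
    await asyncio.sleep(0)          # a real suspension point
    return ran_during_loop, ran     # other() has now run

ran_during_loop, ran_after_checkpoint = asyncio.run(main())
print(ran_during_loop, ran_after_checkpoint)
```

With checkpoints enabled, the equivalent Culsans/aiologic loop would yield (and be cancellable) on every iteration instead.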
I am not sure I fully understand your question. `asyncio.Queue` works exclusively under cooperative multitasking (it is not thread-safe), with all the resulting simplifications. Under the same conditions, Culsans queues operate on almost the same principle as any other purely asynchronous queue with cancellation support (perhaps you are referring to starting new threads or new tasks as an implementation detail? Neither aiologic nor Culsans does that). Once preemptive multitasking enters the picture, the behavior may change somewhat: `culsans.Queue` relies on sync-only synchronization of its internal state; `aiologic.Queue` relies on async-aware synchronization (which does not block the event loop; it is still needed because `heapq` functions are not thread-safe and are required for priority queues, but the wait queues are combined, which achieves fairness and solves python/cpython#90968); and `aiologic.SimpleQueue` does not synchronize its internal state at all, thanks to effectively atomic operations.
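To illustrate what "effectively atomic operations" can mean in CPython: `collections.deque.append()` and `deque.popleft()` are each a single atomic step with respect to the GIL, so multiple threads can share a deque without any lock. The sketch below demonstrates that technique only; it is not aiologic.SimpleQueue's actual implementation:

```python
import threading
from collections import deque

# Shared buffers with no locks: deque.append()/popleft() are
# effectively atomic in CPython, so concurrent use is safe.
buf = deque()
results = deque()

def producer(start):
    for i in range(start, start + 1000):
        buf.append(i)

def consumer():
    popped = 0
    while popped < 1000:
        try:
            # Atomic check-and-pop: popleft() raises IndexError if
            # the deque is empty at this instant, so we just retry.
            results.append(buf.popleft())
            popped += 1
        except IndexError:
            pass

threads = [threading.Thread(target=producer, args=(k * 1000,)) for k in range(2)]
threads += [threading.Thread(target=consumer) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))  # every produced item was consumed exactly once
```

Note the empty-queue case is handled with an exception rather than a separate `if buf:` check, since a check-then-pop sequence would not be atomic.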
I would also add that some non-trivial details are covered in the "Performance" section of the aiologic documentation [4]. What is described there for the standard primitives (specifically, the mutex case) also applies to Culsans queues. Other sections, such as "Why?", "Overview", and "Libraries", are relevant to Culsans as well, since aiologic is used under the hood.
[4] https://aiologic.readthedocs.io/latest/performance.html
Thanks, that clarifies it. The checkpoint-based cancellation and the sync-vs-async locking model differences were exactly what I was trying to understand.