30 releases

| Version | Release date |
|---|---|
| 0.14.6 | May 31, 2024 |
| 0.14.4 | Mar 21, 2024 |
| 0.12.0 | Jun 10, 2023 |
| 0.9.0 | Mar 31, 2023 |
| 0.0.1-beta.12 | Nov 8, 2022 |
#553 in Data structures
24 downloads per month
Used in 2 crates (via redis-swapplex)
64KB, 2K SLoC
Stack Queue
A (mostly) heapless auto-batching queue featuring deferrable batching by way of negotiating exclusive access over task ranges on thread-owned circular buffers. Because tasks continue to be enqueued until a batch is bounded, bounding can be deferred until after a database connection has been acquired, allowing for opportunistic batching. This approach delivers optimal batching at all workload levels without batch collection overhead, superfluous timeouts, or unnecessary allocations.
Usage
Implement one of the following traits while using the local_queue macro (a sketch follows the list):

- TaskQueue, for batching with per-task receivers
- BackgroundQueue, for background processing of task batches without receivers
- BatchReducer, for collecting or reducing batched data
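
A minimal sketch of the TaskQueue case is shown below. The batch_process signature and the PendingAssignment, CompletionReceipt, and auto_batch names are recalled from the crate's documentation and may differ between versions; treat this as an illustration of the shape rather than the exact API.

```rust
use stack_queue::{
  local_queue,
  task::{CompletionReceipt, PendingAssignment},
  TaskQueue,
};

// An echo queue: each task is a u64 handed back to its caller once the
// batch it landed in has been processed.
struct EchoQueue;

#[local_queue]
impl TaskQueue for EchoQueue {
  type Task = u64;
  type Value = u64;

  // batch_process receives an exclusive range of the thread-owned buffer.
  // Converting it into an assignment is what bounds the batch, so any await
  // performed before that point (acquiring a database connection, for
  // example) defers batching while further tasks accumulate.
  async fn batch_process<const N: usize>(
    batch: PendingAssignment<'_, Self, N>,
  ) -> CompletionReceipt<Self> {
    batch.into_assignment().map(|task| task)
  }
}
```

If the recalled names hold, callers enqueue through the auto-batching entry point, e.g. EchoQueue::auto_batch(9).await, and receive their per-task value once the batch completes.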
Optimal Runtime Configuration
For best performance, exclusively use the Tokio runtime as configured via the tokio::main or tokio::test macro with the crate attribute set to async_local, and with the barrier-protected-runtime feature enabled on async-local. Doing so configures the Tokio runtime with a barrier that rendezvouses runtime worker threads during shutdown, ensuring that tasks never outlive the thread-local data owned by runtime worker threads and obviating the need for Box::leak as a fallback means of lifetime extension.
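
A minimal sketch of this configuration, assuming async-local is declared in Cargo.toml with the barrier-protected-runtime feature enabled (e.g. async-local = { version = "*", features = ["barrier-protected-runtime"] }):

```rust
// Pointing the macro's `crate` attribute at `async_local` routes the generated
// runtime setup through async-local, which installs the shutdown barrier
// described above.
#[tokio::main(crate = "async_local")]
async fn main() {
  // Tasks spawned here can enqueue onto stack-queue queues without the
  // thread-owned buffers being dropped out from under them at shutdown.
}
```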
Benchmark results // batching 16 tasks
| crossbeam | flume | TaskQueue | tokio::mpsc |
|---|---|---|---|
| 576.33 ns (✅ 1.00x) | 656.54 ns (❌ 1.14x slower) | 255.33 ns (🚀 2.26x faster) | 551.48 ns (✅ 1.05x faster) |
Dependencies
~3–32MB, ~476K SLoC