Latches
A latch is a downward counter which can be used to synchronize threads or coordinate tasks. The value of the counter is initialized on creation. Threads/tasks may block/suspend on the latch until the counter is decremented to 0.
In contrast to std::sync::Barrier, a latch is one-shot: the counter is not reset after reaching 0. It also has the useful property that threads/tasks calling count_down() or arrive() are not made to wait for the counter to reach 0, which means a participating thread/task can decrement the counter more than once.
See also std::latch in C++20, java.util.concurrent.CountDownLatch in Java, and Concurrent::CountDownLatch in concurrent-ruby.
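A rough sketch of these semantics, using the default sync implementation (the count of 3 here is just illustrative):

```rust
use latches::sync::Latch;

fn main() {
    // The counter value is set on creation.
    let latch = Latch::new(3);

    // A participant may decrement more than once, and decrementing never waits.
    latch.count_down();
    latch.count_down();
    latch.count_down();

    // The counter is now 0, so this returns immediately.
    // The latch is one-shot: the counter stays at 0 and is never reset.
    latch.wait();
}
```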
Quick Start
The sync implementation with atomic-wait (the default features):
cargo add latches
The task implementation with std:
cargo add latches --no-default-features --features task --features std
The futex implementation:
cargo add latches --no-default-features --features futex
See also Which One Should Be Used? below.
The crate can be used with no_std when the std feature is not enabled.
Usage
Wait For Completion
use std::{sync::Arc, thread};
// Naming rule: `latches::{implementation-name}::Latch`.
use latches::sync::Latch;
let latch = Arc::new(Latch::new(10));
for _ in 0..10 {
let latch = latch.clone();
thread::spawn(move || latch.count_down());
}
// Waits for the 10 threads to complete their work.
// Requires `.await` if it is the `task` implementation.
latch.wait();
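The same pattern with the task implementation suspends the caller instead of blocking a thread. A sketch, assuming the task and std features are enabled and using the tokio runtime (tokio is only an assumption for this example; latches does not require it):

```rust
use std::sync::Arc;

// Naming rule: `latches::{implementation-name}::Latch`.
use latches::task::Latch;

#[tokio::main]
async fn main() {
    let latch = Arc::new(Latch::new(10));

    for _ in 0..10 {
        let latch = latch.clone();
        // `count_down()` is a plain method call; only waiting needs `.await`.
        tokio::spawn(async move { latch.count_down() });
    }

    // Suspends this task until the 10 spawned tasks complete their work.
    latch.wait().await;
}
```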
Gate
use std::{sync::Arc, thread};
// Naming rule: `latches::{implementation-name}::Latch`.
use latches::sync::Latch;
let gate = Arc::new(Latch::new(1));
for _ in 0..10 {
let gate = gate.clone();
thread::spawn(move || {
// Waits for the gate signal.
// Requires `.await` if it is the `task` implementation.
gate.wait();
// Do some work after gate.
});
}
// Allows the 10 threads to start their work.
gate.count_down();
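A gate works the same way with the task implementation: waiters suspend rather than block OS threads, which is what makes it friendlier for gate scenarios under high concurrency (see Which One Should Be Used? below). Again a sketch, assuming the task feature and the tokio runtime:

```rust
use std::sync::Arc;

use latches::task::Latch;

#[tokio::main]
async fn main() {
    let gate = Arc::new(Latch::new(1));
    let mut handles = Vec::new();

    for _ in 0..10 {
        let gate = gate.clone();
        handles.push(tokio::spawn(async move {
            // Suspends the task until the gate opens.
            gate.wait().await;
            // Do some work after the gate.
        }));
    }

    // Allows the 10 tasks to start their work.
    gate.count_down();

    for handle in handles {
        handle.await.unwrap();
    }
}
```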
Implementations
Sync
The sync implementation is the default implementation for threads.
Feature dependencies:
- Adding the std feature makes it use std::sync::Mutex and std::sync::Condvar as the condition variable; it supports timeouts
- If std is disabled, adding the atomic-wait feature makes it use atomic-wait as the condition variable; it does not support timeouts
- If both std and atomic-wait are disabled, it will throw a compile error
Both atomic-wait and sync are enabled in the default features for ease of use. So if you want to use sync with std and don't want to pull in the unnecessary atomic-wait crate, disable the default features.
Futex
The futex implementation is similar to popular implementations of C++20 std::latch, and it provides slightly better performance than the sync implementation.
It does not support timeouts for waiting.
Feature dependencies:
- It depends on the atomic-wait feature and the atomic-wait crate, and this cannot be disabled
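Usage mirrors the sync implementation; only the counter type (u32) and the lack of timeouts differ. A sketch, assuming the futex feature is enabled:

```rust
use std::{sync::Arc, thread};

// Naming rule: `latches::{implementation-name}::Latch`.
use latches::futex::Latch;

fn main() {
    // The futex counter is a `u32` rather than a `usize`.
    let latch = Arc::new(Latch::new(4));

    for _ in 0..4 {
        let latch = latch.clone();
        thread::spawn(move || latch.count_down());
    }

    // Blocking wait; the futex implementation has no timeout support.
    latch.wait();
}
```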
Task
The task implementation is typically used to coordinate asynchronous tasks.
It requires extern crate alloc when used in no_std.
Feature dependencies:
- Adding the std feature makes it use std::sync::Mutex as the mutex for the waker collection
- If std is disabled, adding the atomic-wait feature makes it use atomic-wait as the mutex for the waker collection
- If both std and atomic-wait are disabled, it will use spinlocks as the mutex for the waker collection
Similarities and differences
Similarities:
- All implementations are based on atomics, so they will not work on platforms without atomic support, i.e. #[cfg(target_has_atomic)]
- All implementations can be used on no_std if the std feature is not enabled
- All methods except the wait methods do not need async/await
Differences:
| | sync | task | futex |
|---|---|---|---|
| counter type | usize | usize | u32 |
| mutexes | std or atomic-wait | std, atomic-wait, or spinlock | No mutex* |
| waits | Blocking | Futures | Blocking |
| timeouts | Requires std | Requires an async timer | Not supported |

* "No mutex" doesn't mean there are no race conditions to eliminate; in fact it uses a futex (i.e. atomic-wait) instead.
Which One Should Be Used?
If your project uses async/await tasks, use the task implementation. Adding it with the std or atomic-wait feature may make it more concurrency-friendly for gate scenarios, for example:
cargo add latches --no-default-features --features task --features atomic-wait
If the amount of concurrency in your project is small, use the futex implementation, for example:
cargo add latches --no-default-features --features futex
Otherwise, use the sync implementation. It has the same counter type, usize, as the task implementation and std::sync::Barrier. Adding the std feature makes it support timeouts. Note that it must be used with at least one of the std or atomic-wait features, otherwise a compile error will be thrown. For example:
# Both `sync` and `atomic-wait` are enabled in default features
cargo add latches
Or enable the std feature for timeout support:
cargo add latches --no-default-features --features sync --features std
Under a large amount of concurrency, there is no obvious performance gap between the futex implementation and the sync implementation.
Additionally, if you are migrating C++ code to Rust, the futex implementation may be the more conservative choice, e.g. similar memory usage, ABI calls, etc. Note that the futex implementation has no undefined behavior, unlike std::latch in C++.
Performance
Run Benchmarks
Run benchmarks for all implementations with atomic-wait (the futex implementation depends on atomic-wait):
cargo bench --package benches
Run benchmarks for the sync implementation with std:
cargo bench --package benches --no-default-features --features sync --features std
Run benchmarks for the task implementation with atomic-wait:
cargo bench --package benches --no-default-features --features task --features atomic-wait
Or run benchmarks with std and the comparison group:
cargo bench --package benches --features std --features comparison
etc.
Overall benchmarks include thread scheduling overhead, and Latch is much faster than thread scheduling, so there may be timing jitter and large standard deviations. All overall benchmarks have the name postfix -overall.
The asynchronous comparison groups are also atomic-based and depend on specific async libraries such as tokio and async-std.
The synchronous comparison group uses Mutex-guarded state instead of atomics.
License
Latches is released under the terms of either the MIT License or the Apache License Version 2.0, at your option.