4 releases (breaking)
Uses old Rust 2015
Version | Date
---|---
0.4.0 | Jan 18, 2019
0.3.0 | Jan 18, 2019
0.2.0 | Jul 31, 2018
0.1.0 | May 4, 2018
#1167 in Concurrency
Used in dpdk-packet-distributor
48KB
788 lines
lock-free-multi-producer-single-consumer-ring-buffer
A lock-free, multi-producer, single-consumer (MPSC) ring buffer. Optimized for sending and receiving 'bursts' of messages. Can also be used as a ring queue. It is a Rust port of Mindaugas Rasiukevicius's ringbuf. The original C code this is derived from is "Copyright (c) 2016-2017 Mindaugas Rasiukevicius ".
Licensing
The license for this project is BSD-2-Clause.
lib.rs:
lock-free-multi-producer-single-consumer-ring-buffer
Usage
extern crate lock_free_multi_producer_single_consumer_ring_buffer;
use ::lock_free_multi_producer_single_consumer_ring_buffer::*;

let (ring_buffer_consumer, ring_buffer_producers) = RingBuffer::new(capacity, number_of_producers);

// For each producer thread.
let ring_buffer_producer = ring_buffer_producers.get(0);
let result = ring_buffer_producer.acquire(length);
// `result` is `None` if `length` was too much; try a shorter length.
let slice_guard = result.unwrap();
// Dereferences to a slice.
// slice_guard[0] = some_value;
// Produce (relinquishes the slice).
drop(slice_guard);
ring_buffer_producer.produce();

// For each consumer thread.
let slice_guard = ring_buffer_consumer.consume();
// Iterate, move out, etc.
println!("should be `some_value`: {:?}", slice_guard.move_out(slice_guard.len())[0]);
// Releases the slice so producers can now use it.
drop(slice_guard);
Once all the producers and the consumer are dropped, the memory underlying the ring buffer is freed and any unconsumed items in it are safely dropped.
The following documentation is originally "Copyright (c) 2016-2017 Mindaugas Rasiukevicius ".
Atomic multi-producer single-consumer ring buffer, which supports contiguous range operations and which can be conveniently used for message passing.
There are three offsets ("think of clock hands"):

- NEXT: marks the beginning of the available space,
- WRITTEN: the point up to which the data is actually written.
- Observed READY: the point up to which data is ready to be written.
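To make the three "clock hands" concrete, here is a minimal sketch of how they might be laid out, assuming 64-bit atomic offsets and one `seen` slot per producer; the names and types are illustrative, not the crate's actual internals.

```rust
use std::sync::atomic::AtomicU64;

// Illustrative layout only; the real field names and widths may differ.
struct RingBufferHeader {
    // NEXT: where the next producer reservation begins. Spare bits also carry
    // a wrap-lock flag and a wrap counter (see "Wrap-around" below).
    next: AtomicU64,
    // WRITTEN: the point up to which the data is actually written; advanced by
    // the consumer.
    written: AtomicU64,
    // The effective end of the buffer once a producer has wrapped around.
    end: AtomicU64,
}

// Each producer records the `next` value it observed while it holds an
// unfinished reservation; READY is derived as the lowest of these values.
struct ProducerSlot {
    seen: AtomicU64, // a sentinel (e.g. u64::MAX) means "no reservation in flight"
}
```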
Producers
Observe and save the `next` offset, then request N bytes from the ring buffer by atomically advancing the `next` offset. Once the data is written into the "reserved" buffer space, the thread clears the saved value; these observed values are used to compute the `ready` offset.
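A simplified sketch of that acquire step, assuming the layout sketched above (64-bit offsets, a per-producer `seen` slot) and ignoring wrap-around and the check against `written`:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Simplified: no wrap-around handling and no check against `written`.
fn acquire(next: &AtomicU64, seen: &AtomicU64, capacity: u64, n: u64) -> Option<u64> {
    loop {
        // Observe and save the `next` offset so the consumer can later compute
        // `ready` from the lowest saved value.
        let observed = next.load(Ordering::Acquire);
        seen.store(observed, Ordering::Release);

        let target = observed + n;
        if target > capacity {
            // Not enough space in this lap; the real algorithm would attempt a
            // wrap-around here.
            seen.store(u64::MAX, Ordering::Release);
            return None;
        }

        // Atomically advance `next`; retry if another producer raced us.
        if next
            .compare_exchange_weak(observed, target, Ordering::AcqRel, Ordering::Acquire)
            .is_ok()
        {
            return Some(observed); // start of the reserved range
        }
    }
}
```

Once the reserved range has been filled, the producer clears its saved value (back to the sentinel), which is what allows `ready` to advance past it.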
Consumer
Writes the data between the `written` and `ready` offsets and updates the `written` value. The consumer thread scans for the lowest `seen` value among the producers.
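A sketch of that scan under the same assumptions (a `u64::MAX` sentinel means a producer holds no reservation):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// The consumer derives `ready` as the lowest offset still held by any producer,
// falling back to the current `next` when nothing is in flight.
fn compute_ready(next: &AtomicU64, seen: &[AtomicU64]) -> u64 {
    let mut ready = next.load(Ordering::Acquire);
    for slot in seen {
        let v = slot.load(Ordering::Acquire);
        if v != u64::MAX && v < ready {
            ready = v;
        }
    }
    // Data between `written` and `ready` can now be handed to the consumer.
    ready
}
```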
Key invariant
Producers cannot go beyond the `written` offset; producers are also not allowed to catch up with the consumer. Only the consumer is allowed to catch up with the producer, i.e. set the `written` offset to be equal to the `next` offset.
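Expressed as a check, under the simplifying assumption that offsets are monotonically increasing counters rather than wrapped positions:

```rust
// Under the monotonic-counter simplification, "producers must not catch up with
// the consumer" means `next` may never get more than `capacity` ahead of `written`.
fn may_reserve(next: u64, written: u64, capacity: u64, n: u64) -> bool {
    next + n <= written + capacity
}
```

Only the consumer may close the gap completely, i.e. advance `written` until it equals `next`.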
Wrap-around
If the producer cannot acquire the requested length due to little available space at the end of the buffer, then it will wrap around. The `WrapLockBit` in the `next` offset is used to lock the `end` offset.

There is an ABA problem if one producer stalls while a pair of producer and consumer both successfully wrap around and set the `next` offset to the stale value of the first producer, thus letting it perform a successful compare-and-swap (CAS) that violates the invariant. A counter in the `next` offset (masked by `WrapCounter`) is used to prevent this problem; it is incremented on wrap-arounds.
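A sketch of how such a combined `next` word could be laid out; the bit widths and constants here are assumptions for illustration, not the crate's actual values:

```rust
// Assumed layout: low 32 bits = buffer offset, bits 32-62 = wrap counter,
// bit 63 = wrap-lock flag.
const WRAP_LOCK_BIT: u64 = 1 << 63;              // locks the `end` offset during wrap-around
const WRAP_COUNTER: u64 = 0x7FFF_FFFF_0000_0000; // incremented on every wrap-around
const OFFSET_MASK: u64 = 0x0000_0000_FFFF_FFFF;  // the position within the buffer

fn offset(next: u64) -> u64 {
    next & OFFSET_MASK
}

// Bumping the counter on every wrap-around means a stale `next` value held by a
// stalled producer no longer matches, so its compare-and-swap fails instead of
// silently violating the invariant.
fn bump_wrap_counter(next: u64) -> u64 {
    (next & !WRAP_COUNTER) | (next.wrapping_add(1 << 32) & WRAP_COUNTER)
}
```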
The same ABA problem could also cause a stale `ready` offset, which could be observed by the consumer. The algorithm sets `WrapLockBit` in the `seen` value before advancing `next` and clears this bit after successfully advancing; this ensures that only the stable `ready` offset is observed by the consumer.
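A sketch of that protection on the producer side; the helper names are invented for illustration, and the consumer simply re-reads a `seen` value until the bit is clear before folding it into `ready`:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

const WRAP_LOCK_BIT: u64 = 1 << 63; // the same flag as in the layout sketch above

// Publish the observed `next` with the lock bit set *before* trying to advance
// `next`, marking the value as not yet stable.
fn publish_seen(seen: &AtomicU64, observed: u64) {
    seen.store(observed | WRAP_LOCK_BIT, Ordering::Release);
}

// Clear the bit only after the compare-and-swap on `next` has succeeded; from
// this point on the consumer may safely use the value when computing `ready`.
fn stabilise_seen(seen: &AtomicU64, observed: u64) {
    seen.store(observed & !WRAP_LOCK_BIT, Ordering::Release);
}
```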
Dependencies
~8KB