#data-structures #lock-free #atomic #garbage #free-memory #rcu

seize

Fast, efficient, and predictable memory reclamation for concurrent data structures

21 releases

new 0.5.0 Feb 17, 2025
0.4.9 Nov 30, 2024
0.4.5 Jul 18, 2024
0.3.1 Mar 12, 2024
0.0.0 Sep 27, 2021

#59 in Concurrency


86,044 downloads per month
Used in 64 crates (8 directly)

MIT license

90KB
1K SLoC


Refer to the quick-start guide to get started.

Background

Concurrent data structures face the problem of deciding when it is safe to free memory. Even after an object has been logically removed, it may still be accessible to other threads holding references to it, so it cannot be freed immediately. Many algorithms have been devised over the years to solve this problem, but most traditional memory reclamation schemes trade performance against memory efficiency.

For example, hazard pointers track individual pointers, making them very memory efficient but also relatively slow. Epoch-based reclamation, on the other hand, is fast and lightweight but lacks predictability: it requires periodic checks across all threads to determine when it is safe to free memory. This can cause reclamation to trigger unpredictably, leading to poor latency distributions.
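The "periodic check" that epoch-based schemes rely on can be sketched with std atomics alone. This is an illustrative toy, not seize's (or any real library's) implementation; the `ThreadSlot` type and `try_advance` function are invented for the example:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Toy model of epoch-based reclamation's safety check: memory retired in
// epoch E may only be freed once every active thread has announced an
// epoch >= E, so the reclaimer must scan all threads before advancing.
static GLOBAL_EPOCH: AtomicUsize = AtomicUsize::new(0);

struct ThreadSlot {
    // usize::MAX means "not currently in a critical section".
    announced: AtomicUsize,
}

fn try_advance(slots: &[ThreadSlot]) -> bool {
    let global = GLOBAL_EPOCH.load(Ordering::Acquire);
    // If any thread is still pinned to an older epoch, nothing can be
    // freed yet -- this workload-dependent scan is what makes reclamation
    // trigger unpredictably.
    let all_caught_up = slots.iter().all(|s| {
        let e = s.announced.load(Ordering::Acquire);
        e == usize::MAX || e >= global
    });
    if all_caught_up {
        // Safe to advance; objects retired two epochs ago become freeable.
        GLOBAL_EPOCH.store(global + 1, Ordering::Release);
    }
    all_caught_up
}

fn main() {
    let slots = [
        ThreadSlot { announced: AtomicUsize::new(usize::MAX) },
        ThreadSlot { announced: AtomicUsize::new(0) },
    ];
    assert!(try_advance(&slots)); // everyone caught up: epoch advances to 1
    assert!(!try_advance(&slots)); // slot 1 now lags behind, freeing stalls
}
```

Note how a single slow or stalled reader blocks the epoch from advancing for everyone, which is exactly the latency hazard described above.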

Alternative epoch-based schemes forgo workload balancing entirely: the thread that retires an object is always the one that frees it. While this avoids synchronization costs, it leads to unbalanced reclamation in read-dominated workloads; when only a fraction of threads are writing, reclamation parallelism drops, degrading both memory efficiency and performance.

Implementation

seize is based on the Hyaline reclamation scheme, which uses reference counting to determine when it is safe to free memory. However, unlike traditional reference-counting schemes, where every memory access requires modifying shared memory, reference counters are only used for retired objects. When a batch of objects is retired, a reference counter is initialized and propagated to all active threads. Threads cooperate to decrement the counter as they exit, eventually freeing the batch.

Reclamation is naturally balanced, as the thread holding the last reference to an object is the one that frees it. This also removes the need to check whether other threads have made progress, leading to predictable latency without sacrificing performance.
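The core idea, a retired batch carrying a counter that the last exiting thread resolves, can be sketched in safe Rust. This is a simplified illustration of the mechanism, not seize's internals; the `Batch` type and `exit_critical_section` function are invented for the example, and `Arc` stands in for the manual deallocation a real scheme would perform:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

// A retired batch carries a reference counter seeded with the number of
// active threads. Each thread decrements it on exiting its critical
// section; whichever thread drops the count to zero frees the batch.
struct Batch {
    objects: Vec<Box<u64>>, // retired objects awaiting reclamation
    refs: AtomicUsize,      // one reference per active thread
}

fn exit_critical_section(batch: &Arc<Batch>) {
    // fetch_sub returns the previous value, so the thread that observes 1
    // held the last reference and is responsible for reclamation. Here the
    // objects are dropped when the final Arc clone goes away; the point is
    // that the *last* exiting thread does the work, balancing reclamation.
    if batch.refs.fetch_sub(1, Ordering::AcqRel) == 1 {
        // last reference: batch is reclaimed here in a real scheme
    }
}

fn main() {
    const ACTIVE_THREADS: usize = 4;
    let batch = Arc::new(Batch {
        objects: (0..8).map(Box::new).collect(),
        refs: AtomicUsize::new(ACTIVE_THREADS),
    });

    let handles: Vec<_> = (0..ACTIVE_THREADS)
        .map(|_| {
            let b = Arc::clone(&batch);
            thread::spawn(move || exit_critical_section(&b))
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }

    // All active threads have exited, so the counter must have hit zero.
    assert_eq!(batch.refs.load(Ordering::Acquire), 0);
    assert_eq!(batch.objects.len(), 8);
}
```

Unlike the epoch check sketched earlier, no thread ever has to scan the others: each thread touches only the counter of batches it holds a reference to, which is where the predictable latency comes from.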

seize provides performance competitive with that of epoch-based schemes, while achieving memory efficiency similar to that of hazard pointers. seize is compatible with all modern hardware that supports single-word atomic operations, such as fetch-and-add (FAA) and compare-and-swap (CAS).
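For reference, both primitives are available on any platform Rust's `std::sync::atomic` supports; a quick illustration (unrelated to seize's internals):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

fn main() {
    let counter = AtomicUsize::new(0);

    // FAA (fetch-and-add): atomically adds and returns the previous value.
    let prev = counter.fetch_add(1, Ordering::Relaxed);
    assert_eq!(prev, 0);
    assert_eq!(counter.load(Ordering::Relaxed), 1);

    // CAS (compare-and-swap): succeeds only if the current value matches
    // the expected one, so concurrent updates cannot be silently lost.
    assert!(counter
        .compare_exchange(1, 5, Ordering::AcqRel, Ordering::Acquire)
        .is_ok());
    assert!(counter
        .compare_exchange(1, 9, Ordering::AcqRel, Ordering::Acquire)
        .is_err()); // value is 5, not 1, so this CAS fails
    assert_eq!(counter.load(Ordering::Relaxed), 5);
}
```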

Dependencies

~0–8.5MB
~66K SLoC