19 releases (10 breaking)

0.11.1 Mar 17, 2024
0.11.0 Feb 6, 2023
0.10.0 Feb 1, 2023
0.6.1 Dec 31, 2022
0.1.0 Jan 5, 2020

#31 in Caching

Weekly downloads (crates.io chart, Aug–Nov 2024): roughly 70–390 per week

454 downloads per month
Used in 3 crates (2 directly)

MIT license

135KB
2K SLoC

HashLRU

HashLRU is an LRU cache implemented in Rust.

It tries to follow the API exposed by a standard Rust HashMap while enforcing a limited memory footprint: the number of keys is capped, and entries are evicted using the LRU (least recently used) strategy, a common cache replacement policy.

It has been heavily inspired by lru by Jerome Froelich. The underlying logic is similar: a doubly-linked list implemented with pointers, combined with a hash map. HashLRU is slightly less optimized, but its API is a bit richer in some areas.
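To make the hash map + linked list combination concrete, here is a minimal sketch of the idea in safe Rust. It is illustrative only, not HashLRU's actual internals: slot indices into a Vec stand in for the raw pointers mentioned above.

use std::collections::HashMap;
use std::hash::Hash;

// One entry in the recency list; prev/next are indices into `nodes`
// rather than raw pointers, to keep the sketch in safe Rust.
struct Node<K, V> {
    key: K,
    value: V,
    prev: Option<usize>, // towards the most recently used end
    next: Option<usize>, // towards the least recently used end
}

struct MiniLru<K, V> {
    map: HashMap<K, usize>, // key -> slot in `nodes`
    nodes: Vec<Node<K, V>>, // slab of nodes, linked by index
    head: Option<usize>,    // most recently used
    tail: Option<usize>,    // least recently used
    capacity: usize,
}

impl<K: Eq + Hash + Clone, V> MiniLru<K, V> {
    fn new(capacity: usize) -> Self {
        MiniLru { map: HashMap::new(), nodes: Vec::new(), head: None, tail: None, capacity }
    }

    // Detach node `i` from the recency list, fixing its neighbours' links.
    fn unlink(&mut self, i: usize) {
        let (prev, next) = (self.nodes[i].prev, self.nodes[i].next);
        match prev {
            Some(p) => self.nodes[p].next = next,
            None => self.head = next,
        }
        match next {
            Some(n) => self.nodes[n].prev = prev,
            None => self.tail = prev,
        }
    }

    // Re-attach node `i` at the head (most recently used position).
    fn push_front(&mut self, i: usize) {
        self.nodes[i].prev = None;
        self.nodes[i].next = self.head;
        if let Some(h) = self.head {
            self.nodes[h].prev = Some(i);
        }
        self.head = Some(i);
        if self.tail.is_none() {
            self.tail = Some(i);
        }
    }

    fn get(&mut self, key: &K) -> Option<&V> {
        let i = *self.map.get(key)?;
        self.unlink(i); // touching an entry moves it to the front
        self.push_front(i);
        Some(&self.nodes[i].value)
    }

    fn insert(&mut self, key: K, value: V) {
        if let Some(i) = self.map.get(&key).copied() {
            self.nodes[i].value = value; // already cached: update and bump recency
            self.unlink(i);
            self.push_front(i);
            return;
        }
        let i = if self.nodes.len() < self.capacity {
            // free slot available: append a new node
            self.nodes.push(Node { key: key.clone(), value, prev: None, next: None });
            self.nodes.len() - 1
        } else {
            // cache full: evict the least recently used entry and reuse its slot
            let i = self.tail.expect("capacity must be > 0");
            self.unlink(i);
            self.map.remove(&self.nodes[i].key);
            self.nodes[i] = Node { key: key.clone(), value, prev: None, next: None };
            i
        };
        self.map.insert(key, i);
        self.push_front(i);
    }
}

fn main() {
    let mut cache = MiniLru::new(2);
    cache.insert("a", 1);
    cache.insert("b", 2);
    cache.get(&"a"); // "a" becomes the most recently used entry
    cache.insert("c", 3); // evicts "b", the least recently used
    assert!(cache.get(&"b").is_none());
    assert_eq!(Some(&1), cache.get(&"a"));
}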

On top of what common LRU implementations provide, HashLRU has:

  • (de)serialization support
  • all kinds of import/export/iterators/dump and other goodies
  • a sync version which can be shared between threads (see the sketch below)
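As a quick illustration of the sync use case, the sketch below simply wraps the plain Cache (shown in the Usage section further down) in std's Arc and Mutex; the crate's own sync version is not reproduced here, and this example relies only on the Cache::new/insert/get calls demonstrated below.

use std::sync::{Arc, Mutex};
use std::thread;

use hashlru::Cache;

fn main() {
    // Hand-rolled sharing of a plain Cache across threads; the crate's own
    // sync version serves the same purpose with less ceremony.
    let cache = Arc::new(Mutex::new(Cache::new(4)));

    let handles: Vec<_> = (0..4)
        .map(|i| {
            let cache = Arc::clone(&cache);
            thread::spawn(move || {
                cache.lock().unwrap().insert(i, i * 10);
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }

    // All four keys fit in the cache, so key 3 is still there.
    assert_eq!(Some(&30), cache.lock().unwrap().get(&3));
}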

Related projects:

  • DiskLRU is very similar in its design, and acts as a persistent store, as opposed to HashLRU being an in-memory cache.
  • MenhirKV solves the same problem DiskLRU addresses, which is "store stuff, keep the most used values at hand, drop the rest". It does this in a less pedantic and precise way, but much more efficiently.

Status

HashLRU is somewhere between a toy project and something you could use. It comes with a rather complete test harness and tries to have reasonable documentation, so that's a plus. OTOH it is quite young and, to my knowledge, not used in production anywhere.

Also there are many other libraries you could use instead:

  • lru is faster, and has a more bare-metal API. See doc and source.
  • cached comes with batteries included, has support for many other features than just LRU. See doc and source.
  • caches implements many different types of caches. See doc and source.


Usage

use hashlru::Cache;

let mut cache = Cache::new(4);
cache.insert("key1", 10);
cache.insert("key2", 20);
cache.insert("key3", 30);
cache.insert("key4", 40);
cache.insert("key5", 50);
// key1 has been dropped, size is limited to 4
assert_eq!(Some("key2"), cache.lru());
assert_eq!(Some(&20), cache.get(&"key2"));
// getting key2 has made key3 the least recently used item
assert_eq!(Some("key3"), cache.lru());
assert_eq!(Some(&40), cache.get(&"key4"));
// getting key4 makes it the most recently used item
assert_eq!("[key3: 30, key5: 50, key2: 20, key4: 40]", format!("{}", cache));

Benchmarks

Taken from a random CI job:

running 10 tests
test tests::bench_read_usize_builtin_hashmap     ... bench:          34 ns/iter (+/- 1)
test tests::bench_read_usize_extern_caches       ... bench:          58 ns/iter (+/- 30)
test tests::bench_read_usize_extern_lru          ... bench:          24 ns/iter (+/- 5)
test tests::bench_read_usize_hashlru_cache       ... bench:          25 ns/iter (+/- 3)
test tests::bench_read_usize_hashlru_sync_cache  ... bench:          61 ns/iter (+/- 18)
test tests::bench_write_usize_builtin_hashmap    ... bench:         104 ns/iter (+/- 19)
test tests::bench_write_usize_extern_caches      ... bench:         116 ns/iter (+/- 46)
test tests::bench_write_usize_extern_lru         ... bench:          62 ns/iter (+/- 32)
test tests::bench_write_usize_hashlru_cache      ... bench:          64 ns/iter (+/- 34)
test tests::bench_write_usize_hashlru_sync_cache ... bench:         100 ns/iter (+/- 42)
test result: ok. 0 passed; 0 failed; 0 ignored; 10 measured; 0 filtered out; finished in 43.16s

These benchmarks are done with a 100k-item cache.

These are not the results of thorough, intensive benchmarking, but a few facts stand out:

  • using hashbrown hash maps speeds things up; this is mostly why hashlru and lru perform better than a vanilla HashMap on these benches.
  • hashlru is not the fastest: lru consistently performs best, thanks to interesting corner-case optimizations. Use lru if you are after raw speed.
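For a rough idea of what one of these benches looks like, here is a sketch of a read benchmark against a 100k-entry cache, using nightly's built-in test::Bencher. The setup is an assumption for illustration, not the actual bench code from the repository.

#![feature(test)]
extern crate test;

use hashlru::Cache;
use test::Bencher;

// Sketch of a read bench against a 100k-entry cache; the setup is
// illustrative, not copied from the HashLRU repository.
#[bench]
fn bench_read_usize_hashlru_cache(b: &mut Bencher) {
    let size = 100_000usize;
    let mut cache = Cache::new(size);
    for i in 0..size {
        cache.insert(i, i);
    }
    let mut i = 0usize;
    b.iter(|| {
        i = (i + 1) % size;
        test::black_box(cache.get(&i));
    });
}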

To run the benchmarks:

cd bench
rustup default nightly
cargo bench


License

HashLRU is licensed under the MIT license.

Dependencies

~2MB
~25K SLoC