velarixdb

An LSM Storage Engine focused on reduced IO amplification

MIT license

Contributors ✨

Thanks to these amazing people for their contributions: @NickNYU, @arrowler

VelarixDB is an LSM-based storage engine designed to significantly reduce IO amplification, resulting in better performance and a longer lifespan for storage devices.

Introduction

VelarixDB: Designed to reduce I/O amplification

VelarixDB is an ongoing project (not yet production-ready) designed to optimize data movement during load times and compaction. Inspired by the paper WiscKey: Separating Keys from Values in SSD-conscious Storage, velarixdb aims to significantly outperform traditional key-value stores.

Problem

During compaction in LevelDB or RocksDB, up to 10 SSTable files may need to be read, sorted, and re-written in the worst case (the exact number depends on the key distribution per level), because keys are not allowed to overlap across the SSTables from Level 1 downwards. If merging SSTables into one level pushes the next level over its threshold, compaction can cascade from Level 0 all the way to Level 6, so the overall write amplification can reach roughly 50: about 10 files rewritten at each of the five cascading level transitions, ignoring the first compaction level (see the official LevelDB compaction documentation). This repetitive data movement causes significant wear on SSDs, shortening their lifespan due to the high number of write cycles. The goal is therefore to minimize the amount of data moved during compaction, reducing the data re-written and extending the device's lifetime.

Solution

To address this, we focus on whether a key has been deleted or updated, not on its value. Including values in the compaction process (values are often much larger than keys) unnecessarily amplifies the amount of data read and written. We therefore store keys and values separately, mapping each key to its value's offset in the value log, represented as a 32-bit integer.

This approach reduces the amount of data read, written, and moved during compaction, leading to improved performance and less wear on storage devices, particularly SSDs. By minimizing the data movement, we not only enhance the efficiency of the database but also significantly extend the lifespan of the underlying storage hardware.
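
To make the solution concrete, here is a minimal, self-contained sketch of the key/value separation idea, with an in-memory buffer standing in for the vlog file and a plain BTreeMap standing in for the LSM tree; it is illustrative only, not velarixdb's actual layout:

use std::collections::BTreeMap;

// Illustrative sketch only, not velarixdb's on-disk format: values are
// appended to a log, and the "tree" stores just the key plus a fixed-size
// offset/length pair, so compaction never has to move the values themselves.
struct ValueLog {
    buf: Vec<u8>, // stands in for the on-disk vlog file
}

impl ValueLog {
    fn append(&mut self, value: &[u8]) -> u32 {
        let offset = self.buf.len() as u32; // 32-bit offset, as described above
        self.buf.extend_from_slice(value);
        offset
    }

    fn read(&self, offset: u32, len: usize) -> &[u8] {
        &self.buf[offset as usize..offset as usize + len]
    }
}

fn main() {
    let mut vlog = ValueLog { buf: Vec::new() };
    // Stand-in for the LSM tree: key -> (value offset, value length).
    let mut index: BTreeMap<Vec<u8>, (u32, usize)> = BTreeMap::new();

    let value = b"tim cook";
    let offset = vlog.append(value);
    index.insert(b"apple".to_vec(), (offset, value.len()));

    let (off, len) = index[b"apple".as_slice()];
    assert_eq!(vlog.read(off, len), value);
}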

Performance Benefits

According to the benchmarks presented in the WiscKey paper, a WiscKey-style implementation can outperform LevelDB and RocksDB by:

  • 2.5x to 111x for database loading
  • 1.6x to 14x for random lookups

Addressing major concerns

  • Range Query: Since keys are stored separately from values, won't that hurt range query performance? Modern SSDs offer internal parallelism: as keys are fetched from the LSM tree, the corresponding values can be fetched in parallel from the vlog file (see the sketch after this list). A benchmark in the WiscKey paper shows that, for request sizes ≥ 64KB, the aggregate throughput of random reads with 32 threads matches the sequential read throughput.
  • More Disk IO for Reads: Since keys are now separate from values, does every read need an extra disk IO to fetch the value? Yes, but because key density per level increases (the SSTables store only keys and value offsets), a query will most likely search fewer levels than in LevelDB or RocksDB. A significant portion of the LSM tree can also be cached in memory.
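
Below is a hedged sketch of the parallel value fetch described in the first point, using Tokio tasks; fetch_value is a hypothetical stand-in for an asynchronous vlog read at a given offset, not a velarixdb API:

use tokio::task::JoinSet;

// Hypothetical stand-in for an async read from the vlog at `offset`.
async fn fetch_value(offset: u32) -> Vec<u8> {
    offset.to_le_bytes().to_vec()
}

#[tokio::main]
async fn main() {
    // Offsets as they might come back from a range scan over the LSM tree.
    let offsets_from_scan: Vec<u32> = vec![0, 64, 128];

    // One task per value: the SSD's internal parallelism services the
    // concurrent reads instead of paying for them one at a time.
    let mut set = JoinSet::new();
    for off in offsets_from_scan {
        set.spawn(fetch_value(off));
    }
    while let Some(res) = set.join_next().await {
        let _value = res.expect("fetch task panicked");
    }
}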

Designed for asynchronous runtime (unstable)

Modern OS kernels now provide efficient asynchronous IO, e.g., io_uring on Linux, so VelarixDB is designed for an asynchronous runtime, in this case Tokio. Tokio allows for efficient and scalable asynchronous operations, making the most of modern multi-core processors. Some operating systems, such as macOS, do not currently provide native asynchronous file system APIs; Tokio handles this limitation gracefully by offloading blocking file system operations to a dedicated thread pool, effectively spawning separate threads behind the scenes for file IO. This keeps the async runtime non-blocking and responsive even when interacting with inherently blocking system calls. The goal is to eventually use tokio-uring, which exposes io_uring in an asynchronous context on Tokio; Tokio itself might support io_uring natively in the future. (We haven't benchmarked the async version, so this is unstable and might be removed in a future version.)
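
As an illustration of the thread-pool fallback described above, the sketch below offloads a blocking std::fs read onto Tokio's blocking thread pool via tokio::task::spawn_blocking, which is essentially what tokio::fs does internally on platforms without native async file APIs:

use tokio::task;

// Run a blocking filesystem call on Tokio's dedicated blocking pool so the
// async runtime's worker threads stay free to make progress.
async fn read_file_async(path: std::path::PathBuf) -> std::io::Result<Vec<u8>> {
    task::spawn_blocking(move || std::fs::read(path))
        .await
        .expect("blocking read task panicked")
}

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let bytes = read_file_async("Cargo.toml".into()).await?;
    println!("read {} bytes without blocking the runtime", bytes.len());
    Ok(())
}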

Disclaimer

Please note that velarixdb is still under development and is not yet production-ready.

Basic Features

  • Atomic Put(), Get(), Delete(), and Update() operations
  • 100% safe & stable Rust
  • Separation of keys from values, reducing the amount of data moved during compaction (i.e., reduced IO amplification)
  • Garbage Collector
  • Lock-free memtable with Crossbeam SkipMap (no Mutex; see the sketch after this list)
  • Tokio Runtime for efficient thread management
  • Bloom Filters for fast in-memory key searches
  • Crash recovery using the Value Log
  • Index to improve searches on Sorted String Tables (SSTs)
  • Key Range to store the largest and smallest keys in an SST
  • Sized Tier Compaction Strategy (STCS)
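
As a taste of the lock-free memtable mentioned in the list, here is a minimal sketch built directly on crossbeam_skiplist::SkipMap; this is illustrative and not velarixdb's internal memtable type:

use crossbeam_skiplist::SkipMap;

fn main() {
    // Concurrent inserts and reads need no Mutex; SkipMap is lock-free.
    let memtable: SkipMap<Vec<u8>, Vec<u8>> = SkipMap::new();

    memtable.insert(b"apple".to_vec(), b"tim cook".to_vec());
    memtable.insert(b"google".to_vec(), b"sundar pichai".to_vec());

    if let Some(entry) = memtable.get(b"apple".as_slice()) {
        assert_eq!(entry.value().as_slice(), b"tim cook");
    }

    // SkipMap keeps keys sorted, which makes flushing the memtable to a
    // sorted SSTable a simple in-order scan.
    for entry in memtable.iter() {
        let _ = (entry.key(), entry.value());
    }
}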

TODO

  • Snapshot Isolation
  • Block Cache
  • Batched Writes
  • Range Query
  • Snappy Compression
  • Value Buffer to keep values in memory and only flush in batches to reduce IO (under investigation)
  • Checksum to detect data corruption
  • Leveled Compaction (LCS), Time-Window Compaction (TCS), and Unified Compaction (UCS)
  • Monitoring module to continuously monitor and generate reports
  • Introduce Learned Index for Lower Levels in the LSM Tree (Machine Learning)

It is not:

  • A standalone server
  • A relational database

Constraint

  • Keys are limited to 65,536 bytes and values to 2^32 bytes; larger keys and values carry a correspondingly larger performance cost.
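
As a quick illustration of these limits, here is a hypothetical validation helper (not part of the velarixdb API) that enforces them:

// Hypothetical helper illustrating the documented limits; not a velarixdb API.
const MAX_KEY_LEN: usize = 65_536; // keys must fit in 65,536 bytes
const MAX_VALUE_LEN: u64 = 1 << 32; // values must fit in 2^32 bytes

fn check_entry(key: &[u8], value: &[u8]) -> Result<(), String> {
    if key.len() > MAX_KEY_LEN {
        return Err(format!("key is {} bytes; max is {}", key.len(), MAX_KEY_LEN));
    }
    if value.len() as u64 > MAX_VALUE_LEN {
        return Err(format!("value is {} bytes; max is 2^32", value.len()));
    }
    Ok(())
}

fn main() {
    assert!(check_entry(b"apple", b"tim cook").is_ok());
    assert!(check_entry(&[0u8; 70_000], b"x").is_err());
}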

Basic usage

cargo add velarixdb

use velarixdb::db::DataStore;
use tempfile::tempdir;

#[tokio::main]
async fn main() {
    let root = tempdir().unwrap();
    let path = root.path().join("velarix");
    let mut store = DataStore::open("big_tech", path).await.unwrap(); // handle IO error

    store.put("apple", "tim cook").await;
    store.put("google", "sundar pichai").await;
    store.put("nvidia", "jensen huang").await;
    store.put("microsoft", "satya nadella").await;
    store.put("meta", "mark zuckerberg").await;
    store.put("openai", "sam altman").await;


    let entry1 = store.get("apple").await.unwrap(); // Handle error
    let entry2 = store.get("google").await.unwrap();
    let entry3 = store.get("nvidia").await.unwrap();
    let entry4 = store.get("microsoft").await.unwrap();
    let entry5 = store.get("meta").await.unwrap();
    let entry6 = store.get("openai").await.unwrap();
    let entry7 = store.get("***not_found_key**").await.unwrap();

    assert_eq!(std::str::from_utf8(&entry1.unwrap().val).unwrap(), "tim cook");
    assert_eq!(std::str::from_utf8(&entry2.unwrap().val).unwrap(), "sundar pichai");
    assert_eq!(std::str::from_utf8(&entry3.unwrap().val).unwrap(), "jensen huang");
    assert_eq!(std::str::from_utf8(&entry4.unwrap().val).unwrap(), "satya nadella");
    assert_eq!(std::str::from_utf8(&entry5.unwrap().val).unwrap(), "mark zuckerberg");
    assert_eq!(std::str::from_utf8(&entry6.unwrap().val).unwrap(), "sam altman");
    assert!(entry7.is_none());

    // Remove an entry
    store.delete("apple").await.unwrap();

    // Update an entry
    let success = store.update("microsoft", "elon musk").await;
    assert!(success.is_ok());
}

Store JSON

use serde::{Deserialize, Serialize};
use serde_json;
use velarixdb::db::DataStore;
use tempfile::tempdir;

#[tokio::main]
async fn main() {
    let root = tempdir().unwrap();
    let path = root.path().join("velarix");
    let mut store = DataStore::open("big_tech", path).await.unwrap(); // handle IO error

    #[derive(Serialize, Deserialize)]
    struct BigTech {
        name: String,
        rank: i32,
    }
    let new_entry = BigTech {
        name: String::from("Google"),
        rank: 50,
    };
    let json_string = serde_json::to_string(&new_entry).unwrap();

    let res = store.put("google", json_string).await;
    assert!(res.is_ok());

    let entry = store.get("google").await.unwrap().unwrap();
    let entry_string = std::str::from_utf8(&entry.val).unwrap();
    let big_tech: BigTech = serde_json::from_str(entry_string).unwrap();

    assert_eq!(big_tech.name, new_entry.name);
    assert_eq!(big_tech.rank, new_entry.rank);
}

Examples

See here for practical examples
