Hydra

A framework for writing fault-tolerant, highly scalable applications with the Rust programming language. It is:

  • Fast: Native performance powered by Tokio's lightweight task architecture.
  • Scalable: All Hydra code runs inside lightweight threads of execution (called Processes) that are isolated and exchange information via messages.
  • Fault-Tolerant: Inspired by Erlang/Elixir's OTP, Hydra provides many of the same concepts, like GenServer and Supervisor, to restart parts of your system when things go awry.
  • Distributed: Hydra provides built-in support for running a fully distributed cluster of processes over a network of any size.


Overview

Hydra runs on the Tokio runtime, known for powering reliable, asynchronous, and slim applications with the Rust programming language. At a high level, Hydra provides the following major components:

  • Process: A lightweight task that supports sending and receiving messages.
  • GenServer: A generic server process that provides request/reply and state management.
  • Supervisor: A process which supervises other processes, used to provide fault tolerance and to encapsulate how your application starts and shuts down gracefully.
  • Registry: A process which acts as a centralized 'registry' of processes, allowing you to look up running processes by any key.
  • Node: A mechanism to connect multiple nodes (instances) of Hydra together and monitor those connections.

Example

A basic GenServer Stack application with Hydra.

Make sure you have added Hydra and Serde to your Cargo.toml:

[dependencies]
hydra = "0.1"
serde = { version = "1.0", features = ["derive"] }

Make sure you are not aborting on panics; Hydra catches and manages panics for all Processes. Find and remove this line if it exists in your Cargo.toml:

panic = "abort"
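
That setting lives under a profile section of your Cargo.toml. For reference, a release profile that would break Hydra's panic handling looks like this:

[profile.release]
# Remove this line: Hydra relies on unwinding panics to recover Processes.
panic = "abort"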

Then, in your main.rs:

use hydra::Application;
use hydra::ExitReason;
use hydra::From;
use hydra::GenServer;
use hydra::GenServerOptions;
use hydra::Pid;

use serde::Deserialize;
use serde::Serialize;

#[derive(Debug, Serialize, Deserialize)]
enum StackMessage {
    Pop,
    PopResult(String),
    Push(String),
}

struct Stack {
    stack: Vec<String>,
}

impl Stack {
    pub fn with_entries(entries: Vec<&'static str>) -> Self {
        Self {
            stack: Vec::from_iter(entries.into_iter().map(Into::into)),
        }
    }
}

impl GenServer for Stack {
    type Message = StackMessage;

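    // Called once when the Stack server starts, before it begins processing messages.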
    async fn init(&mut self) -> Result<(), ExitReason> {
        Ok(())
    }

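    // Handles requests sent with Stack::call; returning Some(message) replies to the caller with that message.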
    async fn handle_call(
        &mut self,
        message: Self::Message,
        _from: From,
    ) -> Result<Option<Self::Message>, ExitReason> {
        match message {
            StackMessage::Pop => Ok(Some(StackMessage::PopResult(self.stack.remove(0)))),
            _ => unreachable!(),
        }
    }

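    // Handles fire-and-forget messages sent with Stack::cast; no reply is produced.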
    async fn handle_cast(&mut self, message: Self::Message) -> Result<(), ExitReason> {
        match message {
            StackMessage::Push(value) => self.stack.insert(0, value),
            _ => unreachable!(),
        }
        Ok(())
    }
}

struct StackApplication;

impl Application for StackApplication {
    // Here we must link a process for the application to monitor: usually a Supervisor, but it can be any process.
    async fn start(&self) -> Result<Pid, ExitReason> {
        let pid = Stack::with_entries(vec!["hello", "world"])
            .start_link(GenServerOptions::new())
            .await
            .expect("Failed to start stack!");

        let result = Stack::call(pid, StackMessage::Pop, None)
            .await
            .expect("Stack call failed!");

        tracing::info!("{:?}", result);

        Stack::cast(pid, StackMessage::Push(String::from("rust")));

        let result = Stack::call(pid, StackMessage::Pop, None)
            .await
            .expect("Stack call failed!");

        tracing::info!("{:?}", result);

        // Stop the Stack; otherwise, the application will run forever waiting for it to terminate.
        Stack::stop(pid, ExitReason::Normal, None).await?;

        Ok(pid)
    }
}

fn main() {
    // This method will return once the linked Process in StackApplication::start has terminated.
    Application::run(StackApplication)
}

Find more examples in the examples folder. You can run them with: cargo run --example=stack.

The following projects are related to, or used by, Hydra, so it is worth learning them as well:

  • tokio: A runtime for writing reliable, asynchronous, and slim applications with the Rust programming language.
  • serde: A framework for serializing and deserializing Rust data structures efficiently and generically.
  • tracing: A framework for application-level tracing and async-aware diagnostics.
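
The example above logs with tracing::info!, but tracing macros only produce output once a subscriber is installed somewhere in the binary. If Hydra does not install one for you (an assumption; check docs.rs), a minimal setup using the tracing-subscriber crate could adapt the example's main like this:

use hydra::Application;

fn main() {
    // Install a formatting subscriber so tracing::info! output is printed.
    // Assumes tracing-subscriber = "0.3" has been added to Cargo.toml.
    tracing_subscriber::fmt::init();

    Application::run(StackApplication)
}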

Changelog

View Changelog

Benchmark

There is a message-passing benchmark in the examples that you can run with cargo run --example=benchmark --release. I've gotten the following results:

  • Intel 14700K: 26,257,627 msg/s
  • Intel 7700K: 7,332,666 msg/s
  • Apple M1 Max: 5,743,765 msg/s

License

This project is licensed under the MIT license.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in Hydra by you shall be licensed as MIT, without any additional terms or conditions.
