tokio-retry2
Forked from https://github.com/srijs/rust-tokio-retry to keep it up to date.
Extensible, asynchronous retry behaviours for the ecosystem of tokio libraries.
Installation
Add this to your `Cargo.toml`:

```toml
[dependencies]
tokio-retry2 = { version = "0.5", features = ["jitter", "tracing"] }
```
Features:
- `jitter`: adds a jittered duration to each retry delay, a mechanism to avoid multiple systems retrying at the same time.
- `tracing`: uses the `tracing` crate to indicate that a strategy has reached its `max_duration` or `max_delay` (see the sketch below).
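As a hedged illustration of what each feature touches (not an official example from this crate): `jitter` is a plain `Duration -> Duration` function mapped over the strategy's delays, and `tracing` only emits events, so your binary needs a subscriber, here the separate `tracing-subscriber` crate, to see them.

```rust
use tokio_retry2::strategy::{jitter, FixedInterval};

fn main() {
    // Install a subscriber so the events tokio-retry2 emits (e.g. when a
    // strategy reaches its `max_duration`/`max_delay`) are actually printed.
    // Assumes `tracing-subscriber` is added as a dependency of your binary.
    tracing_subscriber::fmt::init();

    // With the `jitter` feature enabled, jitter is applied per delay by
    // mapping over the strategy, exactly as in the examples below.
    let _strategy = FixedInterval::from_millis(500).map(jitter).take(5);
}
```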
Examples
```rust
use tokio_retry2::{Retry, RetryError};
use tokio_retry2::strategy::{ExponentialBackoff, jitter, MaxInterval};

async fn action() -> Result<u64, RetryError<()>> {
    // do some real-world stuff here...
    RetryError::to_transient(())
}

#[tokio::main]
async fn main() -> Result<(), ()> {
    let retry_strategy = ExponentialBackoff::from_millis(10)
        .factor(1) // multiplication factor applied to each delay
        .max_delay_millis(100) // set max delay between retries to 100ms
        .max_interval(10000) // set max interval to 10 seconds
        .map(jitter) // add jitter to delays
        .take(3); // limit to 3 retries

    let _result = Retry::spawn(retry_strategy, action).await?;
    Ok(())
}
```
Or, to retry with a notification function:
```rust
use tokio_retry2::{Retry, RetryError};
use tokio_retry2::strategy::{ExponentialBackoff, jitter, MaxInterval};

async fn action() -> Result<u64, RetryError<std::io::Error>> {
    // do some real-world stuff here...
    let err = std::io::Error::new(std::io::ErrorKind::Other, "permanent failure");
    RetryError::to_permanent(err) // Early exits on this error
}

fn notify(err: &std::io::Error, duration: std::time::Duration) {
    tracing::info!("Error {err:?} occurred at {duration:?}");
}

#[tokio::main]
async fn main() -> Result<(), std::io::Error> {
    let retry_strategy = ExponentialBackoff::from_millis(10)
        .factor(1) // multiplication factor applied to each delay
        .max_delay_millis(100) // set max delay between retries to 100ms
        .max_interval(10000) // set max interval to 10 seconds
        .map(jitter) // add jitter to delays
        .take(3); // limit to 3 retries

    let _result = Retry::spawn_notify(retry_strategy, action, notify).await?;
    Ok(())
}
```
Early Exit and Error Handling
Actions must return a `RetryError` that can wrap any other error type. There are 2 `RetryError` error types (a short sketch of their use follows this list):

- `Permanent`, which receives an error and breaks the retry loop. It can be constructed manually or with the auxiliary functions `RetryError::permanent(e: E)`, which returns a `RetryError::Permanent<E>`, or `RetryError::to_permanent(e: E)`, which returns an `Err(RetryError::Permanent<E>)`.
- `Transient`, which is the default error for the loop. It has 2 modes:
  - `RetryError::transient(e: E)` and `RetryError::to_transient(e: E)`, which return a `RetryError::Transient<E>`, an error that triggers the retry strategy.
  - `RetryError::retry_after(e: E, duration: std::time::Duration)` and `RetryError::to_retry_after(e: E, duration: std::time::Duration)`, which return a `RetryError::Transient<E>` that triggers the retry strategy after the specified duration.
- There is also the trait `MapErr`, which provides 2 auxiliary functions that map the current function's `Result` to a `Result<T, RetryError<E>>`:
  - `fn map_transient_err(self) -> Result<T, RetryError<E>>;`
  - `fn map_permanent_err(self) -> Result<T, RetryError<E>>;`
- Using the `?` operator on an `Option` type will always propagate a `RetryError::Transient<E>` with no extra duration.
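A minimal sketch of how these constructors can steer the loop; the `fetch` helper and its error classification are hypothetical, and only the `RetryError` constructors come from this crate:

```rust
use tokio_retry2::{Retry, RetryError};
use tokio_retry2::strategy::FixedInterval;

// Hypothetical fallible call, not part of tokio-retry2.
fn fetch() -> Result<u64, std::io::Error> {
    Err(std::io::Error::new(std::io::ErrorKind::TimedOut, "timed out"))
}

async fn action() -> Result<u64, RetryError<std::io::Error>> {
    match fetch() {
        Ok(v) => Ok(v),
        // Retryable failure: return a Transient error so the loop continues.
        Err(e) if e.kind() == std::io::ErrorKind::TimedOut => RetryError::to_transient(e),
        // Anything else is treated as Permanent and exits the loop early.
        Err(e) => RetryError::to_permanent(e),
    }
}

#[tokio::main]
async fn main() {
    let strategy = FixedInterval::from_millis(50).take(2);
    // On failure this yields the unwrapped std::io::Error, not the RetryError wrapper.
    let _ = Retry::spawn(strategy, action).await;
}
```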
Retry Strategies breakdown:
There are 4 backoff strategies (a sketch of their delay schedules follows the tables):

`ExponentialBackoff`: the base is used as the initial retry interval and each attempt raises it to the next power, so if defined from 500ms, the next retry happens at 250000ms (500² ms).

| attempt | delay |
|---|---|
| 1 | 500ms |
| 2 | 250000ms |

`ExponentialFactorBackoff`: an exponential backoff strategy with a base factor. What grows exponentially is the factor, while the base retry delay stays the same. So if a factor of 2 is applied to an initial delay of 500ms, the attempts are as follows:

| attempt | delay |
|---|---|
| 1 | 500ms |
| 2 | 1000ms |
| 3 | 2000ms |
| 4 | 4000ms |

`FixedInterval`: a fixed interval is used as a constant, so if defined from 500ms, all attempts happen 500ms apart.

| attempt | delay |
|---|---|
| 1 | 500ms |
| 2 | 500ms |
| 3 | 500ms |

`FibonacciBackoff`: a Fibonacci backoff strategy is used, so if defined from 500ms, the next retry happens after 500ms and the one after that after 1000ms.

| attempt | delay |
|---|---|
| 1 | 500ms |
| 2 | 500ms |
| 3 | 1000ms |
| 4 | 1500ms |
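Since each strategy is iterated as a sequence of delays, the schedules above can be inspected directly. A small sketch using only constructors already shown in this README or inherited from tokio-retry (the `ExponentialFactorBackoff` constructor is left out because its exact signature is not shown above):

```rust
use tokio_retry2::strategy::{ExponentialBackoff, FibonacciBackoff, FixedInterval};

fn main() {
    // Collect the first few delays of each strategy without running any retries.
    let exponential: Vec<_> = ExponentialBackoff::from_millis(500).take(2).collect();
    let fixed: Vec<_> = FixedInterval::from_millis(500).take(3).collect();
    let fibonacci: Vec<_> = FibonacciBackoff::from_millis(500).take(4).collect();

    println!("exponential: {exponential:?}"); // 500ms, then 250000ms
    println!("fixed:       {fixed:?}");       // 500ms, 500ms, 500ms
    println!("fibonacci:   {fibonacci:?}");   // 500ms, 500ms, 1000ms, 1500ms
}
```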