Raddy
An automatic differentiation system for geometry and simulation.
An attempt to port some portion of TinyAD to Rust.
Usage
First, add the dependency to your Cargo.toml:
raddy = "*"
Scalars
use raddy::make::var;
use rand::{thread_rng, Rng};

fn example_scalar() {
    // 1. Scalar
    let mut rng = thread_rng();
    let val = rng.gen_range(0.0..10.0);
    let x = var::scalar(val);
    let x = &x; // Please read the Notes section below.
    let y = x.sin() * x + x.ln();
    dbg!(y.grad()[(0, 0)]);
    dbg!(y.hess()[(0, 0)]);
}
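As a sanity check (an addition, not part of the crate's example), the output can be compared against the hand-derived derivatives of y = sin(x)·x + ln(x); a minimal sketch to append at the end of example_scalar:

// Analytic derivatives:
//   dy/dx   = x cos(x) + sin(x) + 1/x
//   d2y/dx2 = 2 cos(x) - x sin(x) - 1/x^2
let g = val * val.cos() + val.sin() + 1.0 / val;
let h = 2.0 * val.cos() - val * val.sin() - 1.0 / (val * val);
assert!((y.grad()[(0, 0)] - g).abs() <= 1e-8 * g.abs().max(1.0));
assert!((y.hess()[(0, 0)] - h).abs() <= 1e-8 * h.abs().max(1.0));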
Vectors
use nalgebra::{Const, SVector};
use raddy::{make::var, Ad};
use rand::{thread_rng, Rng};

fn example_matrix() {
    // 2. Vector
    // Initialization boilerplate: 16 random values for a 4x4 matrix.
    let mut rng = thread_rng();
    const N_TEST_MAT_4: usize = 4;
    type NaConst = Const<N_TEST_MAT_4>;
    const N_VEC_4: usize = N_TEST_MAT_4 * N_TEST_MAT_4;
    let vals: &[f64] = &(0..N_VEC_4)
        .map(|_| rng.gen_range(-4.0..4.0))
        .collect::<Vec<_>>();

    // Core logic. You can do any kind of reshaping.
    let s: SVector<Ad<N_VEC_4>, N_VEC_4> = var::vector_from_slice(vals);
    let transpose = s
        .clone()
        // Note: this reshape is COLUMN-major!
        .reshape_generic(NaConst {}, NaConst {})
        .transpose();
    let z = transpose;
    let det = z.determinant();
    dbg!(det.grad());
    dbg!(det.hess());
}
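Here det depends on all 16 inputs, so det.grad() holds 16 partial derivatives and det.hess() is 16×16. As a sanity check (an addition, not part of the crate's example), the gradient can be compared against Jacobi's formula, ∂det(M)/∂M = det(M)·M⁻ᵀ; a sketch to append inside example_matrix, assuming the partials come back in the same column-major order as the input slice:

use nalgebra::SMatrix;

// Plain-f64 copy of the same data in the same column-major layout.
// Note that det(M^T) = det(M), so the transpose above does not change
// the value being differentiated.
let m: SMatrix<f64, 4, 4> = SMatrix::from_column_slice(vals);
let analytic = m.determinant() * m.try_inverse().unwrap().transpose();
dbg!(analytic); // compare entrywise (column-major) with det.grad()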
Sparse
- First, define your per-element (per-stencil) objective:
use raddy::make::val; // `val` builds constants (no derivative tracking), alongside `var`
use raddy::types::advec;
// Also bring raddy's `ObjectiveFunction` trait into scope (see src/examples for its path).

struct SpringEnergy {
    k: f64,
    restlen: f64,
}

// 2 DOFs per node * 2 nodes = 4 DOFs per element.
impl ObjectiveFunction<4> for SpringEnergy {
    fn eval(&self, variables: &raddy::types::advec<4, 4>) -> raddy::Ad<4> {
        let p1 = advec::<4, 2>::new(variables[0].clone(), variables[1].clone());
        let p2 = advec::<4, 2>::new(variables[2].clone(), variables[3].clone());
        let len = (p2 - p1).norm();
        // Hooke's law: E = 0.5 * k * (len - restlen)^2
        let potential = val::scalar(0.5 * self.k) * (len - val::scalar(self.restlen)).powi(2);
        potential
    }
}
- Then, define your elements (index lists). In this example, each element holds the DOF indices of the two endpoint nodes of one spring:
// 3 nodes in 2D (6 DOFs total); each spring couples two nodes (4 DOFs).
let springs = vec![[0, 1, 2, 3], [2, 3, 4, 5], [0, 1, 4, 5]];
let x0 = faer::col::from_slice(&[0.0, 0.0, 2.0, 0.0, 1.0, 2.0]).to_owned();
let obj = SpringEnergy {
    k: 10000.0,
    restlen: 1.0,
};
- Finally, compute:
let computed: ComputedObjective<4> = obj.compute(&x0, &springs);
/*
pub struct ComputedObjective<const N: usize> {
    pub value: f64,
    pub grad: Col<f64>,
    pub hess_trips: Vec<(usize, usize, f64)>,
}
*/
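The Hessian comes back as (row, column, value) triplets, ready to hand to a sparse solver. As a minimal illustration (an addition, not part of the crate's API), a hypothetical helper dense_from_triplets scatters them into a dense nalgebra matrix for inspection, summing duplicates as is conventional for triplet formats:

use nalgebra::DMatrix;

// Hypothetical helper: accumulate triplets into a dense n x n matrix.
fn dense_from_triplets(n: usize, trips: &[(usize, usize, f64)]) -> DMatrix<f64> {
    let mut h = DMatrix::zeros(n, n);
    for &(i, j, v) in trips {
        h[(i, j)] += v; // duplicate entries accumulate
    }
    h
}

// 6 DOFs in this example (3 nodes in 2D):
let hess = dense_from_triplets(6, &computed.hess_trips);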
Please see src/examples and src/test for details.
Notes
- Copy is not implemented for Ad<N> types, since copying one is not cheap: each value carries its gradient and Hessian.
- This means you should, in most cases, use a borrow &Ad<N> to call methods and operators (they are implemented on &Ad<N>), or explicitly clone the value if the cost is acceptable.
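A minimal sketch of the two options, using only the patterns already shown in the scalar example above:

use raddy::make::var;

let x = var::scalar(2.0);

// Borrow to compute, as in the scalar example; `x` remains usable after.
let xr = &x;
let y = xr.sin() * xr + xr.ln();

// Clone explicitly only when an owned value is required, e.g. to store it:
let owned = vec![x.clone()];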
Progress
- Univariate
  - Gradient
  - Hessian
  - Tests
- Multivariate
  - Gradient
  - Hessian
  - Tests
  - Norm
  - Determinant
  - Matmul
- Get nalgebra, as well as the compiler, happy
  - Implement ComplexField for &Ad<N> (not Ad<N>) (a huge task that may involve codegen/metaprogramming...)
  - Incrementally implement the ComplexField methods that turn out to be needed
- Sparse interface
  - Define sparse problems (generic over the per-element problem size)
  - Compute sparse problems
  - Tests
    - Mass spring: grad/hess
    - Mass spring: results
    - Neo-Hookean
- Make an example: mass-spring system
- An option to allocate the Hessian on the heap
- f64 & Scalar interop (how? It seems sort of impossible due to the orphan rule; we use the same sort of workaround as faer)
Notes For Myself
- Test code makes use of Symars, which generates Rust code from SymPy expressions.
- When hitting numerical bugs, first check the argument order of the Symars-generated functions!