#tensor #ndarray #linear-algebra #contraction #einsum #einstein-summation

ndarray_einsum_beta

Implementation of the einsum function for the Rust ndarray crate. As popularized by NumPy, einsum (Einstein summation) implements general multidimensional tensor contraction. Many linear algebra operations, and generalizations of those operations, can be expressed as special cases of tensor contraction.
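For instance, matrix multiplication, transposition, trace, and inner and outer products all reduce to einsum strings. A minimal sketch using this crate's einsum function (the index-string syntax follows NumPy's):

use ndarray::prelude::*;
use ndarray_einsum_beta::*;

fn main() {
    let a = arr2(&[[1., 2.], [3., 4.]]);
    let b = arr2(&[[5., 6.], [7., 8.]]);
    let v = arr1(&[1., 2.]);
    let w = arr1(&[3., 4.]);

    let _matmul = einsum("ij,jk->ik", &[&a, &b]); // matrix multiplication
    let _transpose = einsum("ij->ji", &[&a]);     // transposition
    let _trace = einsum("ii->", &[&a]);           // trace: diagonal, then sum
    let _inner = einsum("i,i->", &[&v, &w]);      // inner (dot) product
    let _outer = einsum("i,j->ij", &[&v, &w]);    // outer product
}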

16 unstable releases (6 breaking)

0.7.0 Apr 8, 2021
0.6.0 Apr 8, 2021
0.5.0 Jan 20, 2020
0.4.4 Jun 12, 2019
0.1.0 May 24, 2019

#272 in Math

Download history: roughly 1,200–3,600 downloads per week from mid-June through late September 2024.

10,681 downloads per month
Used in 8 crates (5 directly)

Apache-2.0

130KB
2K SLoC

Einsum (Einstein Summation) for Rust ndarray

Minimal example

Cargo.toml:

[dependencies]
ndarray_einsum_beta = "0.7.0"

src/main.rs:

use ndarray::prelude::*;
use ndarray_einsum_beta::*;

fn main() {
    let m1 = arr1(&[1, 2]);           // shape [2], index i
    let m2 = arr2(&[[1, 2], [3, 4]]); // shape [2, 2], indices i and j
    // Sum over the shared index i: a vector-matrix product with output index j
    println!("{:?}", einsum("i,ij->j", &[&m1, &m2]));
}
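With these inputs the contraction computes the vector-matrix product [7, 10]; the value is wrapped in Ok because einsum validates the index string against the operands at runtime and returns a Result.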

Documentation

Documentation Site

Better documentation to follow

General algorithm description in semi-Rust pseudocode

FirstStep = Singleton({
  contraction: Contraction,
}) | Pair({
  contraction: Contraction,
  lhs: usize,
  rhs: usize
})

IntermediateStep = {
  contraction: Contraction,
  rhs: usize
}

ContractionOrder = {
  first_step: FirstStep,
  remaining_steps: Vec<IntermediateStep>,
}

path: ContractionOrder = Optimize(&Contraction, &[OperandShapes]);

result: ArrayD<A> = einsum_path<A>(&path, &[&ArrayLike<A>]);

einsum_path() {
  let mut result = match first_step {
    Singleton => einsum_singleton(contraction, operands[0]),
    Pair => einsum_pair(contraction, operands[lhs], operands[rhs]),
  };
  for step in remaining_steps.iter() {
    result = einsum_pair(step.contraction, &result, operands[step.rhs]);
  }
  result
}

einsum_singleton() {
  // Diagonalizes repeated indices and then sums across indices that don't appear in the output
}
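For instance (a sketch exercising the singleton path through the public einsum entry point rather than the internal function):

use ndarray::prelude::*;
use ndarray_einsum_beta::*;

fn main() {
    let a = arr2(&[[1., 2.], [3., 4.]]);
    // The repeated index i is diagonalized: prints the diagonal [1, 4]
    println!("{:?}", einsum("ii->i", &[&a]));
    // j doesn't appear in the output, so it is summed away: prints the row sums [3, 7]
    println!("{:?}", einsum("ij->i", &[&a]));
}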

einsum_pair() {
  // First uses einsum_singleton to reduce LHS and RHS to tensors with no repeated indices, where
  // each remaining index appears either in the other tensor or in the output
  //
  // Any "stack" indices that appear in both tensors and in the output are not contracted; they
  // only identify corresponding elements. These get moved to the front of each tensor and
  // temporarily reshaped into a single dimension. Then einsum_pair_base performs the contraction
  // for each subview along that dimension.
}
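Batched matrix multiplication is the canonical stack-index case: in "bij,bjk->bik", the index b appears in both operands and in the output, so it only lines up corresponding sub-matrices while j is contracted. A sketch:

use ndarray::prelude::*;
use ndarray_einsum_beta::*;

fn main() {
    // Two stacks of three 2x2 matrices; b indexes position within the stack
    let lhs = Array::from_shape_vec((3, 2, 2), (0..12).map(|x| x as f64).collect()).unwrap();
    let rhs = Array::from_shape_vec((3, 2, 2), (12..24).map(|x| x as f64).collect()).unwrap();
    // b is a stack index: lhs[b] is multiplied with rhs[b] independently for each b
    println!("{:?}", einsum("bij,bjk->bik", &[&lhs, &rhs]));
}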

einsum_pair_base() {
  // Figures out the indices for LHS and RHS that are getting contracted
  // Calls tensordot on the two tensors
  // Permutes the result into the desired output order
}
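For example, for "ij,kj->ki" the contraction over j naturally yields axes in the order (i, k), so a final permutation produces the requested (k, i). In plain ndarray terms (an illustrative sketch, not the crate's internal code):

use ndarray::prelude::*;

fn main() {
    let a = arr2(&[[1., 2.], [3., 4.]]); // indices (i, j)
    let b = arr2(&[[5., 6.], [7., 8.]]); // indices (k, j)
    // Contracting j gives a result with axes ordered (i, k)...
    let contracted = a.dot(&b.t());
    // ...so the output spec "ki" requires one final transpose
    let out = contracted.t().to_owned();
    println!("{:?}", out);
}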

tensordot() {
  // Permutes LHS so the contracted indices are at the end and permutes RHS so the contracted
  // indices are at the front. Then calls tensordot_fixed_order with the number of contracted indices
}

tensordot_fixed_order() {
  // Reshapes the (previously-permuted) LHS and RHS into 2-D matrices: for LHS, the number of rows
  // is the product of the uncontracted dimensions and the number of columns is the product of the
  // contracted dimensions, and vice versa for RHS. The matrix product is then an MxN matrix, where
  // M is the product of the uncontracted LHS dimensions and N is the product of the uncontracted
  // RHS dimensions. Finally, the result is reshaped back into (...uncontracted LHS shape,
  // ...uncontracted RHS shape).
}
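A worked instance of this flatten-multiply-restore scheme in plain ndarray (an illustrative sketch, not the crate's internal code), contracting the last axis of a (2, 3, 4) tensor against the first axis of a (4, 5) tensor:

use ndarray::prelude::*;

fn main() {
    let lhs = Array::from_shape_vec((2, 3, 4), (0..24).map(|x| x as f64).collect()).unwrap();
    let rhs = Array::from_shape_vec((4, 5), (0..20).map(|x| x as f64).collect()).unwrap();

    // LHS: rows = product of uncontracted dims (2 * 3 = 6), cols = contracted dim (4)
    let m = lhs.into_shape((6, 4)).unwrap();
    // RHS is already (contracted 4, uncontracted 5), so a single 6x4 * 4x5 matrix
    // product performs the entire contraction
    let prod = m.dot(&rhs);
    // Restore shape (...uncontracted LHS dims, ...uncontracted RHS dims)
    let result = prod.into_shape((2, 3, 5)).unwrap();
    println!("{:?}", result.shape()); // [2, 3, 5]
}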

Dependencies

~3.5–5MB
~86K SLoC