
tiny_ml

A simple, fast Rust crate for basic neural networks.

What is this for?

  • Learning about ML
  • Evolution simulations
  • Whatever else you want to use this for

What is this not?

  • A large-scale ML library like TensorFlow or PyTorch. This is simple and basic, or just 'tiny'.

How to use this?

As an example, here is how to make a model that can tell if a point is in a circle or not!

use tiny_ml::prelude::*;

// how many input data-points the model has
const NET_INPUTS: usize = 2;
// how many data-points the model outputs
const NET_OUTPUTS: usize = 1;
// radius of the circle
const RADIUS: f32 = 30.0;


fn main() {
    // create a network   
    let mut net: NeuralNetwork<NET_INPUTS, NET_OUTPUTS> = NeuralNetwork::new()
        .add_layer(3, ActivationFunction::ReLU)
        .add_layer(3, ActivationFunction::ReLU)
        .add_layer(1, ActivationFunction::Linear);
    // this network has no weights yet but we can fix that by training it
    
    // for training, we first need a dataset
    let mut inputs = vec![];
    let mut outputs = vec![];

    // we'll just generate some samples on a grid
    for x in 0..100 {
        for y in 0..100 {
            inputs.push([x as f32, y as f32]);
            // we want this to be a classifier, so we give +1 for points
            // inside the circle and -1 for points outside it
            outputs.push(
                if (x as f32).powi(2) + (y as f32).powi(2) < RADIUS.powi(2) {
                    [1.0]
                } else {
                    [-1.0]
                }
            );
        }
    }


    let data = DataSet {
        inputs,
        outputs,
    };

    // get ourselves a trainer
    let trainer = BasicTrainer::new(data);
    // train in 10 rounds of 50 iterations each
    for _ in 0..10 {
        trainer.train(&mut net, 50);
        // print the total error; lower is better
        println!("{}", trainer.get_total_error(&net));
    }
}
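
Once the error is low enough, the trained network can be used directly. Here is a minimal sketch of that last step; the run method is listed in the benchmarks below, but its exact signature (a reference to the input array in, the output array out) is an assumption here:

    // classify a new point (this continues main, after the training loop);
    // a positive output means the model thinks the point is inside the circle.
    // NOTE: the exact signature of `run` is an assumption, not confirmed above.
    let prediction = net.run(&[10.0, 10.0]);
    println!("inside the circle? {}", prediction[0] > 0.0);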

Features

  • serialization: enables Serde support.
  • parallelization: enables rayon; this is a default feature.
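
For example, the Cargo.toml entry might look like this (a sketch only: the feature names come from the list above and the version from the crate's 1.0.0 release, so double-check against the actual manifest):

[dependencies]
# default features, including parallelization
tiny_ml = "1.0.0"

# or: drop rayon and enable Serde support instead
# tiny_ml = { version = "1.0.0", default-features = false, features = ["serialization"] }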

Speed?

Here are some benchmarks on an AMD Ryzen 5 2600X (12) @ 3.6 GHz, using the 'bench' example built with the --release flag. The benchmark runs the following network 10 million times and sums the results:

fn main() {
    let mut net: NeuralNetwork<1, 1> = NeuralNetwork::new()
        .add_layer(5, ActivationFunction::ReLU)
        .add_layer(5, ActivationFunction::ReLU)
        .add_layer(5, ActivationFunction::ReLU)
        .add_layer(5, ActivationFunction::ReLU)
        .add_layer(5, ActivationFunction::ReLU)
        .add_layer(5, ActivationFunction::ReLU)
        .add_layer(1, ActivationFunction::Linear);
}
method          time     description
run             1.045s   Single-threaded, but buffers some Vecs.
unbuffered_run  1.251s   Can be run in multiple threads at the same time, but has to allocate more often.
par_run         240ms    Takes multiple inputs at once; parallelizes the computation with rayon, using unbuffered_run under the hood.
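
For reference, the single-threaded variant of that benchmark boils down to something like this (a minimal sketch, not the crate's actual 'bench' example; the exact signature of run is assumed):

// run the network 10 million times and sum the outputs
let mut sum = 0.0;
for i in 0..10_000_000 {
    // assumes `run(&[f32; 1]) -> [f32; 1]`
    sum += net.run(&[i as f32])[0];
}
println!("{sum}");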
