XNeuron 🧠

A freestanding, zero-dependency AI/ML library written in Rust with maximum portability.

License: MIT

Overview

XNeuron is a #![no_std] compliant machine learning library designed for environments where the Rust standard library is unavailable or undesirable. It's perfect for:

  • Embedded systems
  • Operating system kernels
  • Bare metal applications
  • Resource-constrained environments

Features

  • 🚫 Zero external dependencies
  • 💻 #![no_std] compliant
  • 🔢 Fixed-point arithmetic (no floating-point required)
  • 🧮 Custom memory management
  • 🔄 Full training capabilities
  • 🎯 Inference support
  • 📦 Lightweight and portable

Supported Models

  • Perceptron
  • Feedforward Neural Networks
  • Support Vector Machines (SVM)
  • More coming soon!

Installation

Add this to your Cargo.toml:

[dependencies]
xneuron = "0.1.0"
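
The usage examples below use vec! and Box. On a #![no_std] target these come from the alloc crate rather than std, so a consuming crate typically needs the setup sketched here; this is standard no_std boilerplate, not something specific to XNeuron, and a global allocator must also be registered (see Memory Management below).

#![no_std]

extern crate alloc; // provides Vec, Box, and the `vec!` macro without `std`

use alloc::vec; // brings the `vec!` macro used in the examples into scope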

Usage Examples

Basic Perceptron

use xneuron::{Fixed, models::Perceptron};

// Create a perceptron with 2 inputs
let mut perceptron = Perceptron::new(2, 8); // 8-bit scale for fixed-point arithmetic

// Training data
let input = vec![Fixed::new(1 << 8, 8), Fixed::new(1 << 8, 8)]; // [1.0, 1.0] in fixed-point
let target = true;

// Train the perceptron
perceptron.train(&input, target);

// Make predictions
let prediction = perceptron.predict(&input);
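
Building on the calls shown above (new, train, predict), here is a hedged sketch of a small training loop over the AND truth table; the epoch count and dataset layout are illustrative choices, not something prescribed by the crate:

use xneuron::{Fixed, models::Perceptron};
use alloc::vec; // `vec!` comes from `alloc` in a #![no_std] build

fn train_and_gate() {
    // AND-gate truth table; 1.0 is encoded as 1 << 8 at an 8-bit scale.
    let dataset = [
        (vec![Fixed::new(0, 8), Fixed::new(0, 8)], false),
        (vec![Fixed::new(0, 8), Fixed::new(1 << 8, 8)], false),
        (vec![Fixed::new(1 << 8, 8), Fixed::new(0, 8)], false),
        (vec![Fixed::new(1 << 8, 8), Fixed::new(1 << 8, 8)], true),
    ];

    let mut perceptron = Perceptron::new(2, 8);

    // Several passes over the data; AND is linearly separable, so a
    // single perceptron can represent it.
    for _ in 0..20 {
        for (input, target) in dataset.iter() {
            perceptron.train(input, *target);
        }
    }

    // Expect a positive classification for [1.0, 1.0] after training.
    let _prediction = perceptron.predict(&dataset[3].0);
}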

Neural Network

use xneuron::{Fixed, Layer, NeuralNetwork, ReLU};
use alloc::boxed::Box; // `Box` comes from the `alloc` crate in a #![no_std] build

// Create a neural network
let mut nn = NeuralNetwork::new(Fixed::new(1, 8)); // Learning rate = 1/256 ≈ 0.004 (raw value 1 at 8-bit scale)

// Add layers
nn.add_layer(Layer::new(2, 3, 8, Box::new(ReLU))); // Hidden layer
nn.add_layer(Layer::new(3, 1, 8, Box::new(ReLU))); // Output layer

// Training data
let input = vec![Fixed::new(1 << 8, 8), Fixed::new(1 << 8, 8)];
let target = vec![Fixed::new(1 << 8, 8)];

// Train the network
nn.train(&input, &target);

// Make predictions
let output = nn.forward(&input);
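
A single train call performs only one update; in practice the network is trained over many iterations. A minimal continuation of the snippet above, reusing only the train and forward calls shown there (the epoch count is arbitrary):

// Repeat the update; `forward(&input)` should move toward `target`
// as training progresses.
for _epoch in 0..100 {
    nn.train(&input, &target);
}
let output = nn.forward(&input);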

Support Vector Machine

use xneuron::{Fixed, models::SVM};

// Create an SVM with 2 input features and an 8-bit fixed-point scale
let mut svm = SVM::new(2, 8);

// Training data
let input = vec![Fixed::new(1 << 8, 8), Fixed::new(0, 8)]; // [1.0, 0.0] in fixed-point
let target = true;

// Train the SVM
svm.train(&input, target);

// Make predictions
let prediction = svm.predict(&input);

Fixed-Point Arithmetic

XNeuron uses fixed-point arithmetic instead of floating-point numbers, which makes it suitable for platforms without a floating-point unit (FPU). The scale factor determines the precision:

// Create a fixed-point number with 8-bit scale
let x = Fixed::new(256, 8); // Represents 1.0 (256 / 2^8 = 1.0)
let y = Fixed::new(128, 8); // Represents 0.5 (128 / 2^8 = 0.5)

// Arithmetic operations maintain scale
let sum = x + y;  // 1.5
let product = x * y;  // 0.5
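
To make the scale handling concrete, here is an illustrative sketch of how such a type can work; FixedSketch is a hypothetical stand-in using the usual Q-format arithmetic and does not reflect XNeuron's actual internals:

#[derive(Clone, Copy, Debug, PartialEq)]
struct FixedSketch {
    raw: i64,   // stores value * 2^scale
    scale: u32, // number of fractional bits
}

impl FixedSketch {
    fn new(raw: i64, scale: u32) -> Self {
        Self { raw, scale }
    }

    // Addition: values at the same scale add raw-to-raw.
    fn add(self, other: Self) -> Self {
        Self::new(self.raw + other.raw, self.scale)
    }

    // Multiplication: the raw product carries 2 * scale fractional bits,
    // so shift right by `scale` to restore the original format.
    fn mul(self, other: Self) -> Self {
        Self::new((self.raw * other.raw) >> self.scale, self.scale)
    }
}

fn fixed_point_demo() {
    let x = FixedSketch::new(1 << 8, 8); // 1.0
    let y = FixedSketch::new(128, 8);    // 0.5
    assert_eq!(x.add(y).raw, 384);       // 1.5 == 384 / 256
    assert_eq!(x.mul(y).raw, 128);       // 0.5 == 128 / 256
}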

Memory Management

XNeuron includes a basic bump allocator for no_std environments. You can also provide your own allocator implementation:

// `CustomAllocator` is a placeholder for any type implementing `core::alloc::GlobalAlloc`
#[global_allocator]
static ALLOCATOR: CustomAllocator = CustomAllocator::new();
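
For reference, here is a minimal sketch of one possible bump allocator behind that attribute, assuming a single fixed-size heap; BumpAllocator and HEAP_SIZE are illustrative names, and XNeuron's built-in allocator may be implemented differently:

use core::alloc::{GlobalAlloc, Layout};
use core::cell::UnsafeCell;
use core::sync::atomic::{AtomicUsize, Ordering};

const HEAP_SIZE: usize = 16 * 1024;

struct BumpAllocator {
    heap: UnsafeCell<[u8; HEAP_SIZE]>,
    next: AtomicUsize, // offset of the next free byte within `heap`
}

// Safety: the atomic cursor hands out disjoint regions of the backing array.
unsafe impl Sync for BumpAllocator {}

unsafe impl GlobalAlloc for BumpAllocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        let base = self.heap.get() as *mut u8 as usize;
        loop {
            let current = self.next.load(Ordering::Relaxed);
            // Round the next free address up to the requested alignment.
            let start = (base + current + layout.align() - 1) & !(layout.align() - 1);
            let end = start + layout.size();
            if end > base + HEAP_SIZE {
                return core::ptr::null_mut(); // out of memory
            }
            // Claim the region; retry if a concurrent allocation won the race.
            if self
                .next
                .compare_exchange(current, end - base, Ordering::Relaxed, Ordering::Relaxed)
                .is_ok()
            {
                return start as *mut u8;
            }
        }
    }

    unsafe fn dealloc(&self, _ptr: *mut u8, _layout: Layout) {
        // A bump allocator never reclaims individual allocations.
    }
}

// Registered with `#[global_allocator]` exactly as in the snippet above.
#[global_allocator]
static ALLOCATOR: BumpAllocator = BumpAllocator {
    heap: UnsafeCell::new([0; HEAP_SIZE]),
    next: AtomicUsize::new(0),
};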

Performance Considerations

  • Fixed-point arithmetic may have lower precision than floating-point
  • The bump allocator never frees memory (consider implementing a proper allocator for long-running applications)
  • Matrix operations are not currently optimized for SIMD

Contributing

Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Roadmap

  • SIMD optimizations
  • More model implementations (CNNs, Decision Trees)
  • Better memory management
  • Serialization support
  • More activation functions
  • Advanced optimizers (Adam, RMSprop)
