SciRS2 Core
Core utilities and common functionality for the SciRS2 library. This crate provides the foundation for the entire SciRS2 ecosystem. All modules in the SciRS2 project should leverage this core module to ensure consistency and reduce duplication.
Features
Core Features
- Error Handling: Comprehensive error system with context, location tracking, and error chaining
- Configuration System: Global and thread-local configuration with environment variable integration
- Numeric Traits: Generic numeric traits for unified handling of different numeric types
- Validation: Utilities for validating numerical operations and data (via `validation` feature)
- I/O Utilities: Common I/O operations with proper error handling
- Constants: Mathematical and physical constants
- Utility Functions: Comprehensive set of utility functions for common operations
Performance Optimizations
- Caching: Memoization with TTL (Time-To-Live) support (via `cache` feature)
- SIMD Acceleration: CPU vector instructions for faster array operations (via `simd` feature)
- Parallel Processing: Multi-core support for improved performance (via `parallel` feature)
- GPU Acceleration: Support for GPU computation via CUDA, WebGPU, and Metal (via `gpu` feature)
- Memory Management: Efficient memory usage for large-scale computations (via `memory_management` feature)
Development Support
- Logging: Structured logging for scientific computing (via `logging` feature)
- Profiling: Function-level timing instrumentation and memory tracking (via `profiling` feature)
- Random Numbers: Consistent interface for random sampling (via `random` feature)
- Type Conversions: Safe numeric and complex number conversions (via `types` feature)
Documentation
- Core Module Usage Guidelines: Complete guide for using scirs2-core across modules
- Error Handling Best Practices: Best practices for error handling
Usage
Add the following to your `Cargo.toml`, including only the features you need:
[dependencies]
scirs2-core = { workspace = true, features = ["validation", "simd", "parallel", "cache"] }
Basic usage examples:
// Array operations
use scirs2_core::utils::{linspace, arange, normalize, pad_array, maximum, minimum};
// Validation functions
use scirs2_core::validation::{check_positive, check_probability, check_shape};
// Configuration
use scirs2_core::config::{Config, set_global_config};
// SIMD operations
use scirs2_core::simd::{simd_add, simd_multiply};
// Import ndarray for examples
use ndarray::array;
// Set global configuration
let mut config = Config::default();
config.set_precision(1e-10);
set_global_config(config);
// Create arrays
let x = linspace(0.0, 1.0, 100);
let y = arange(0.0, 5.0, 1.0);
// Normalize a vector to unit energy
let signal = vec![1.0, 2.0, 3.0, 4.0];
let normalized = normalize(&signal, "energy").unwrap();
// Pad an array with zeros
let arr = array![1.0, 2.0, 3.0];
let padded = pad_array(&arr, &[(1, 2)], "constant", Some(0.0)).unwrap();
// Validate inputs
let result = check_positive(0.5, "alpha").unwrap(); // Returns 0.5
let probability = check_probability(0.3, "p").unwrap(); // Returns 0.3
// Element-wise operations
let a = array![[1.0, 2.0], [3.0, 4.0]];
let b = array![[4.0, 3.0], [2.0, 1.0]];
let max_ab = maximum(&a, &b); // [[4.0, 3.0], [3.0, 4.0]]
Feature Flags
The core module uses feature flags to enable optional functionality:
- `validation`: Enable validation utilities (recommended for all modules)
- `simd`: Enable SIMD acceleration (requires the `wide` crate)
- `parallel`: Enable parallel processing (requires `rayon` and `ndarray/rayon`)
- `cache`: Enable caching and memoization functionality (requires the `cached` crate)
- `logging`: Enable structured logging and diagnostics
- `gpu`: Enable GPU acceleration abstractions
- `cuda`: Enable CUDA-specific GPU acceleration (requires the `gpu` feature)
- `memory_management`: Enable advanced memory management tools
- `memory_metrics`: Enable detailed memory usage tracking and analysis
- `memory_visualization`: Enable memory usage visualization capabilities
- `memory_call_stack`: Enable call stack tracking for memory operations
- `profiling`: Enable performance profiling tools
- `random`: Enable random number generation utilities
- `types`: Enable type conversion utilities
- `linalg`: Enable linear algebra with BLAS/LAPACK bindings
- `all`: Enable all features except backend-specific ones
Each module should enable only the features it requires:
# For modules performing numerical computations
scirs2-core = { workspace = true, features = ["validation", "simd"] }
# For modules with parallel operations and caching
scirs2-core = { workspace = true, features = ["validation", "parallel", "cache"] }
# For AI/ML modules that need GPU acceleration
scirs2-core = { workspace = true, features = ["validation", "gpu", "memory_management", "random"] }
# For development and testing
scirs2-core = { workspace = true, features = ["validation", "logging", "profiling"] }
Core Module Components
New Components
GPU Acceleration
use scirs2_core::gpu::{GpuContext, GpuBackend, GpuBuffer};
// Create a GPU context with the default backend
let ctx = GpuContext::new(GpuBackend::default())?;
// Allocate memory on the GPU
let mut buffer = ctx.create_buffer::<f32>(1024);
// Copy data to GPU
let host_data = vec![1.0f32; 1024];
buffer.copy_from_host(&host_data);
// Execute a computation (`kernel_code` holds the kernel source for the selected backend)
ctx.execute(|compiler| {
    let kernel = compiler.compile(kernel_code)?;
    kernel.set_buffer(0, &mut buffer);
    kernel.dispatch([1024, 1, 1]);
    Ok(())
})?;
Memory Management
use scirs2_core::memory::{ChunkProcessor2D, BufferPool, ZeroCopyView};
// Process large arrays in chunks
let mut processor = ChunkProcessor2D::new(&large_array, (1000, 1000));
processor.process_chunks(|chunk, coords| {
    // Process each chunk...
});
// Reuse memory with buffer pools
let mut pool = BufferPool::<f64>::new();
let mut buffer = pool.acquire_vec(1000);
// Use buffer...
pool.release_vec(buffer);
// Efficient transformations with zero-copy views
let view = ZeroCopyView::new(&array);
let transformed = view.transform(|&x| x * 2.0);
Enhanced Memory Metrics, Snapshots, and GPU Memory Tracking
use scirs2_core::memory::metrics::{
    track_allocation, track_deallocation, generate_memory_report,
    format_memory_report, MemoryMetricsCollector, TrackedBufferPool,
    // Also used below; assumed to live in the same module:
    TrackedChunkProcessor2D, format_bytes,
};
// Track memory allocations manually
track_allocation("MyComponent", 1024, 0x1000);
// Do work with the memory
track_deallocation("MyComponent", 1024, 0x1000);
// Automatically track memory with a buffer pool
let mut pool = TrackedBufferPool::<f64>::new("NumericalComputation");
let vec = pool.acquire_vec(1000);
// Use the vector...
pool.release_vec(vec);
// Generate a memory report
let report = generate_memory_report();
println!("Total current memory usage: {}", report.total_current_usage);
println!("Peak memory usage: {}", report.total_peak_usage);
// Print a formatted report
println!("{}", format_memory_report());
// Track memory usage during chunk processing
let mut processor = TrackedChunkProcessor2D::new(
    &large_array,
    (1000, 1000),
    "ArrayProcessing"
);
processor.process_chunks(|chunk, coords| {
    // Process each chunk...
    println!("Processing chunk at {:?}", coords);

    // Get memory usage after processing this chunk
    let report = generate_memory_report();
    println!("Current memory: {}", format_bytes(report.total_current_usage));
});
// Track GPU memory allocations
use scirs2_core::gpu::GpuBackend;
use scirs2_core::memory::metrics::{TrackedGpuContext, setup_gpu_memory_tracking};
// Set up GPU memory tracking hooks
setup_gpu_memory_tracking();
// Create a tracked GPU context
let context = TrackedGpuContext::with_backend(GpuBackend::Cpu, "GpuOperations").unwrap();
// Create buffers that are automatically tracked
let buffer = context.create_buffer::<f32>(1000);
let data_buffer = context.create_buffer_from_slice(&[1.0f32, 2.0, 3.0]);
// All allocations and deallocations are automatically tracked
let report = generate_memory_report();
println!("GPU memory usage: {}", format_bytes(report.total_current_usage));
// Memory snapshots and leak detection
use scirs2_core::memory::metrics::{
    take_snapshot, compare_snapshots, save_snapshots, load_snapshots,
};
// Take snapshots at different points in time
let snapshot1 = take_snapshot("baseline", "Initial memory state");
// ... perform operations that might leak memory ...
// Take another snapshot
let snapshot2 = take_snapshot("after_operations", "After memory-intensive operations");
// Compare snapshots to detect memory leaks
let diff = compare_snapshots("baseline", "after_operations").unwrap();
println!("{}", diff.format());
// Check if there are potential memory leaks
if diff.has_potential_leaks() {
    println!("Potential memory leaks detected in components:");
    for component in diff.get_potential_leak_components() {
        println!("  - {}", component);
    }
}
// Save snapshots to disk for later analysis
save_snapshots("/path/to/snapshot/directory").unwrap();
Logging and Progress Tracking
use scirs2_core::logging::{Logger, LogLevel, ProgressTracker};
// Create a logger for a module
let logger = Logger::new("matrix_ops")
    .with_field("precision", "double");
// Log at different levels
logger.info("Starting matrix multiplication");
logger.debug("Using algorithm: Standard");
// Track progress for long operations
let mut progress = ProgressTracker::new("Processing", 1000);
for i in 0..1000 {
    // Do work...
    progress.update(i + 1);
}
progress.complete();
Profiling
use scirs2_core::profiling::{Profiler, Timer};
// Start the global profiler
Profiler::global().lock().unwrap().start();
// Time a block of code
let timer = Timer::start("operation");
// Do work...
timer.stop();
// Time a function with result
let result = Timer::time_function("calculate", || {
    // Calculate...
    42
});
// Print profiling report
Profiler::global().lock().unwrap().print_report();
Random Number Generation
use scirs2_core::random::{Random, DistributionExt};
use rand_distr::Normal;
// Create a random number generator
let mut rng = Random::default();
// Generate values and arrays
let value = rng.random_range(0.0, 1.0);
let normal = Normal::new(0.0, 1.0).unwrap();
let samples = rng.sample_vec(normal, 100);
let random_array = normal.random_array(&mut rng, [10, 10]);
Type Conversions
use scirs2_core::types::{NumericConversion, ComplexExt};
use num_complex::Complex64;
// Convert with error handling
let float_value: f64 = 123.45;
let int_result: Result<i32, _> = float_value.to_numeric();
// Safe conversions for out-of-range values
let large_value: f64 = 1e20;
let clamped: i32 = large_value.to_numeric_clamped();
// Complex number operations
let z1 = Complex64::new(3.0, 4.0);
let mag = z1.magnitude();
let z_norm = z1.normalize();
Existing Components
Validation Utilities
For validating various types of inputs:
use scirs2_core::validation::{
    check_probability,              // Check if value is in [0,1]
    check_probabilities,            // Check if all values in array are in [0,1]
    check_probabilities_sum_to_one, // Check if probabilities sum to 1
    check_positive,                 // Check if value is positive
    check_non_negative,             // Check if value is non-negative
    check_in_bounds,                // Check if value is in a range
    check_finite,                   // Check if value is finite
    check_array_finite,             // Check if all array values are finite
    check_same_shape,               // Check if arrays have same shape
    check_shape,                    // Check if array has expected shape
    check_square,                   // Check if matrix is square
    check_1d,                       // Check if array is 1D
    check_2d,                       // Check if array is 2D
};
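For example, the checkers compose naturally at the top of a numerical routine. This is a minimal sketch, assuming the validation helpers return the crate's `CoreResult` so that `?` propagates their errors directly:
use scirs2_core::CoreResult;
use scirs2_core::validation::{check_positive, check_probability};

// Validate inputs up front, then compute (signatures follow the basic
// usage examples above; the CoreResult return type is an assumption).
fn weighted_value(value: f64, weight: f64) -> CoreResult<f64> {
    let value = check_positive(value, "value")?;       // must be > 0
    let weight = check_probability(weight, "weight")?; // must be in [0, 1]
    Ok(value * weight)
}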
Utility Functions
Common utility functions for various operations:
use scirs2_core::utils::{
    // Array comparison
    is_close,      // Compare floats with tolerance
    points_equal,  // Compare points (slices) with tolerance
    arrays_equal,  // Compare arrays with tolerance

    // Array generation and manipulation
    linspace,      // Create linearly spaced array
    logspace,      // Create logarithmically spaced array
    arange,        // Create range with step size
    fill_diagonal, // Fill diagonal of matrix
    pad_array,     // Pad array with various modes
    get_window,    // Generate window functions

    // Element-wise operations
    maximum,       // Element-wise maximum
    minimum,       // Element-wise minimum

    // Vector operations
    normalize,     // Normalize vector (energy, peak, sum, max)

    // Numerical calculus
    differentiate, // Differentiate function
    integrate,     // Integrate function

    // General utilities
    prod,          // Product of elements
    all,           // Check if all elements satisfy predicate
    any,           // Check if any elements satisfy predicate
};
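As a small illustration (the three-argument `is_close(a, b, tolerance)` form and the inclusive-endpoint behavior of `linspace` are assumptions based on the usual conventions):
use scirs2_core::utils::{linspace, is_close};

// linspace(0.0, 1.0, 5) is expected to yield [0.0, 0.25, 0.5, 0.75, 1.0];
// compare the step size against the expected value with a tolerance.
let xs = linspace(0.0, 1.0, 5);
let step = xs[1] - xs[0];
assert!(is_close(step, 0.25, 1e-12));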
SIMD Operations
Vectorized operations for improved performance:
use scirs2_core::simd::{
    simd_add,      // Add arrays using SIMD
    simd_subtract, // Subtract arrays using SIMD
    simd_multiply, // Multiply arrays using SIMD
    simd_divide,   // Divide arrays using SIMD
    simd_min,      // Element-wise minimum using SIMD
    simd_max,      // Element-wise maximum using SIMD
};
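These functions mirror the element-wise helpers in `utils` but use CPU vector instructions when the `simd` feature is enabled. A sketch of the intended usage; the exact argument types (array views here) are an assumption:
use ndarray::array;
use scirs2_core::simd::{simd_add, simd_multiply};

// Element-wise add and multiply of two f32 arrays using SIMD.
let a = array![1.0f32, 2.0, 3.0, 4.0];
let b = array![4.0f32, 3.0, 2.0, 1.0];
let sum = simd_add(&a.view(), &b.view());          // [5.0, 5.0, 5.0, 5.0]
let product = simd_multiply(&a.view(), &b.view()); // [4.0, 6.0, 6.0, 4.0]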
Caching and Memoization
Utilities for caching computation results:
use scirs2_core::cache::{
    CacheBuilder,  // Builder for cache configuration
    TTLSizedCache, // Time-to-live cache with size limit
};
Error Handling
All modules should properly propagate core errors:
use thiserror::Error;
use scirs2_core::error::CoreError;
#[derive(Debug, Error)]
pub enum ModuleError {
    // Module-specific errors
    #[error("IO error: {0}")]
    IOError(String),

    // Propagate core errors
    #[error("{0}")]
    CoreError(#[from] CoreError),
}
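With the `#[from]` conversion in place, `?` turns a `CoreError` returned by core functions into a `ModuleError` automatically. A minimal sketch, assuming the validation helpers return `CoreError`:
use scirs2_core::validation::check_positive;

fn module_sqrt(x: f64) -> Result<f64, ModuleError> {
    // A CoreError from check_positive is converted into
    // ModuleError::CoreError by the #[from] attribute above.
    let x = check_positive(x, "x")?;
    Ok(x.sqrt())
}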
Advanced Usage Examples
Error Handling with Context
use scirs2_core::{CoreError, ErrorContext, CoreResult, value_err_loc};
fn calculate_value(x: f64) -> CoreResult<f64> {
    if x < 0.0 {
        return Err(value_err_loc!("Input must be non-negative, got {}", x));
    }
    Ok(x.sqrt())
}
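Calling the function with an invalid input surfaces the message built by `value_err_loc!` through the error's `Display` implementation:
fn main() {
    match calculate_value(-1.0) {
        Ok(v) => println!("sqrt = {v}"),
        Err(e) => eprintln!("calculation failed: {e}"),
    }
}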
Caching Expensive Operations
use scirs2_core::cache::{CacheBuilder, TTLSizedCache};
use std::cell::RefCell;
struct DataLoader {
    cache: RefCell<TTLSizedCache<String, Vec<f64>>>,
}

impl DataLoader {
    pub fn new() -> Self {
        let cache = RefCell::new(
            CacheBuilder::new()
                .with_size(100)
                .with_ttl(3600) // 1 hour TTL
                .build_sized_cache(),
        );
        Self { cache }
    }

    pub fn load_data(&self, key: &str) -> Vec<f64> {
        // Check the cache first
        if let Some(data) = self.cache.borrow().get(&key.to_string()) {
            return data.clone();
        }

        // Expensive data loading operation
        let data = vec![1.0, 2.0, 3.0]; // Placeholder

        // Cache the result
        self.cache.borrow_mut().insert(key.to_string(), data.clone());
        data
    }
}
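Repeated lookups within the TTL are then served from the cache:
let loader = DataLoader::new();
let first = loader.load_data("series_a");  // loads and caches
let second = loader.load_data("series_a"); // returned from the cache
assert_eq!(first, second);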
Contributing
See the CONTRIBUTING.md file for contribution guidelines.
License
This project is licensed under the Apache License, Version 2.0 - see the LICENSE file for details.