# QOpt

A simple optimization package.
## Optimization Paradigms

The latest version of QOpt supports the following paradigms; the standard update rules for the deterministic methods are sketched after this list.
- Steepest Descent (Gradient Descent)
- Newton's Method
- Genetic Optimization
- Simulated Annealing
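As background, the two deterministic paradigms iterate on a point $x_k$ using the textbook update rules below (these are standard definitions, not formulas taken from QOpt's source), where $\alpha$ is a step size and $H_f$ is the Hessian of the objective $f$.

```latex
% Steepest (gradient) descent: step against the gradient.
x_{k+1} = x_k - \alpha \, \nabla f(x_k)

% Newton's method: rescale the step by the inverse Hessian.
x_{k+1} = x_k - H_f(x_k)^{-1} \, \nabla f(x_k)
```

The stochastic paradigms (genetic optimization and simulated annealing) instead evaluate a population of candidate points and perturb them randomly between iterations.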
## Getting Started

### Importing maria-linalg

To use this package, you must import the latest version of the Rust crate `maria-linalg`.
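For example, the dependency section of your `Cargo.toml` might look like the following. The version numbers here are illustrative only; check crates.io for the latest releases of both crates.

```toml
[dependencies]
# Version numbers are illustrative; use the latest published releases.
qopt = "0.19"
maria-linalg = "0.1"
```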
### Creating an Objective Function

First, you must define an objective function. This represents a function that accepts an `N`-dimensional vector and outputs a scalar.

`Optimizer` accepts up to three functions.

- `Function::objective` (required). Accepts continuous and discrete input. Evaluates to the function output (`f64`).
- `Function::gradient` (optional). Accepts continuous input. Evaluates to the function gradient (`Vector<N>`).
- `Function::hessian` (optional). Accepts continuous input. Evaluates to the function Hessian (`Matrix<N>`).

See the example below. Note that you must also import `maria_linalg::Vector` and (only if you implement the Hessian) `maria_linalg::Matrix`.
```rust
use qopt::Function;
use maria_linalg::{Matrix, Vector};

/// Number of continuous variables.
const C: usize = 6;

/// Number of discrete variables.
const D: usize = 0;

fn objective(continuous: Vector<C>, discrete: Vector<D>) -> f64 {
    // Required
}

fn gradient(continuous: Vector<C>) -> Vector<C> {
    // Optional
}

fn hessian(continuous: Vector<C>) -> Matrix<C> {
    // Optional
}
```
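For concreteness, here is a sketch of the skeleton above filled in for the sphere function $f(x) = x^\top x$, whose gradient is $2x$ and whose Hessian is $2I$. It reuses the constants `C` and `D` declared above. The `zero` constructors and element indexing on `Vector` and `Matrix` are assumptions about the maria-linalg API, not confirmed signatures; adapt them to the actual crate.

```rust
use maria_linalg::{Matrix, Vector};

/// Sphere function: the sum of squared entries, minimized at the origin.
/// NOTE: element access via indexing is an assumed maria-linalg API.
fn objective(continuous: Vector<C>, _discrete: Vector<D>) -> f64 {
    (0..C).map(|i| continuous[i] * continuous[i]).sum()
}

/// Gradient of the sphere function: 2x.
/// NOTE: `Vector::zero` and mutable indexing are assumed APIs.
fn gradient(continuous: Vector<C>) -> Vector<C> {
    let mut g = Vector::zero();
    for i in 0..C {
        g[i] = 2.0 * continuous[i];
    }
    g
}

/// Hessian of the sphere function: 2I (constant in x).
/// NOTE: `Matrix::zero` and (row, column) indexing are assumed APIs.
fn hessian(_continuous: Vector<C>) -> Matrix<C> {
    let mut h = Matrix::zero();
    for i in 0..C {
        h[(i, i)] = 2.0;
    }
    h
}
```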
### Creating an Optimizer

Once you have an objective function, you can create your `Optimizer`.
```rust
use qopt::Optimizer;

/// Number of individuals per optimization iteration.
///
/// For deterministic methods (gradient descent or Newton's method), this should be 1.
/// For stochastic methods (genetic optimization or simulated annealing), this should be about 100.
const POPULATION: usize = 100;

fn main() {
    let optimizer: Optimizer<C, D, POPULATION> = Optimizer::new(objective, Some(gradient), Some(hessian));

    // An initial guess for our continuous variables
    let c = Vector::zero();

    // An initial guess for our discrete variables
    let d = Vector::zero();

    let output = optimizer.optimize(c, d, &[]);

    println!("{}", output);
}
```
### Running the Optimizer

You are now ready to run the optimizer using command-line arguments. The structure of a command to execute the optimizer is as follows.

```sh
$ cargo run --release --quiet -- [--flag] [--parameter value]
```
Alternatively, if you have built a binary, you may run it according to the same rules. Suppose the binary is named `myoptimizer`.

```sh
$ myoptimizer [--flag] [--parameter value]
```
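For instance, the following invocation (with illustrative values, using only the flags documented below) runs simulated annealing for up to 1000 iterations, printing a status update every 50 iterations.

```sh
$ myoptimizer --paradigm simulated-annealing --maxtemp 10.0 --maxiter 1000 --print-every 50
```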
### Command-Line Arguments

The following are the permitted command-line arguments and values. Note that all arguments are optional.

#### `--opt-help`

Prints a help menu.

#### `--quiet`

Does not print status updates.

#### `--no-stop-early`

Disables the gradient-based convergence criterion.

#### `--print-every [integer]`

Number of iterations per status update.

Defaults to `0`. This is the "quiet" option: no status will be printed until the optimizer converges or the maximum iteration limit is reached.

Accepts an integer. For example, if this integer is `5`, then the optimizer prints a status update every fifth iteration.
#### `--paradigm [string]`

Optimization paradigm.

Defaults to `steepest-descent`.

Accepts the following options.

- `steepest-descent`. Steepest (gradient) descent. It is recommended (but not required) to implement `Function::gradient` for this.
- `newton`. Newton's method. It is recommended (but not required) to implement `Function::gradient` and `Function::hessian` for this.
- `genetic`. Genetic algorithm.
- `simulated-annealing`. Simulated annealing.
#### `--criterion [float]`

Gradient-based convergence criterion. When the norm of the gradient falls below this value, the optimizer halts. Note that this requires a locally convex function.

Defaults to `1.0e-3`.

Accepts a floating-point number.
#### `--maxiter [integer]`

Maximum number of optimization iterations.

Defaults to `100`.

Accepts an integer.

#### `--maxtemp [float]`

Maximum temperature. This is only used for the simulated annealing paradigm.

Defaults to `1.0`.

Accepts a floating-point number.

#### `--stdev [float]`

Standard deviation of mutations. This is only used for stochastic methods (genetic optimization and simulated annealing).

Defaults to `1.0`.

Accepts a floating-point number.