fann-sys-rs
Low-level Rust bindings to the Fast Artificial Neural Networks library. The wrapper fann-rs provides a safe interface on top of these.
Usage
Add fann-sys and libc to the list of dependencies in your Cargo.toml:
[dependencies]
fann-sys = "*"
libc = "*"
and this to your crate root:
extern crate fann_sys;
extern crate libc;
lib.rs:
Raw bindings to C functions of the Fast Artificial Neural Network library
Creation/Execution
The FANN library is designed to be very easy to use. A feedforward ANN can be created by a simple call to the fann_create_standard function, while other ANNs can be created just as easily. The ANNs can be trained by fann_train_on_file and executed by fann_run.
All of this can be done without much knowledge of the internals of ANNs, although the ANNs created will still be powerful and effective. If you have more knowledge about ANNs, and desire more control, almost every part of the ANNs can be parametrized to create specialized and highly optimized ANNs.
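As a rough sketch of how these raw bindings might be driven from Rust (assuming the crate exposes the C library's fann_create_standard_array, fann_train_on_file, fann_run and fann_destroy with their usual signatures, and that fann_type is the default float type; the training file name is made up):

```rust
extern crate fann_sys;
extern crate libc;

use std::ffi::CString;
use fann_sys::*;

fn main() {
    unsafe {
        // Layer sizes: 2 inputs, 3 hidden neurons, 1 output.
        let layers: [libc::c_uint; 3] = [2, 3, 1];
        let ann = fann_create_standard_array(3, layers.as_ptr());
        assert!(!ann.is_null());

        // Train on a data set in FANN's text format (hypothetical file name).
        let path = CString::new("xor.data").unwrap();
        fann_train_on_file(ann, path.as_ptr(), 500_000, 1_000, 0.001);

        // Execute the trained network on one input pattern.
        let mut input: [fann_type; 2] = [1.0, 0.0];
        let output = fann_run(ann, input.as_mut_ptr());
        println!("xor(1, 0) ~= {}", *output);

        fann_destroy(ann);
    }
}
```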
Training
There are many different ways of training neural networks and the FANN library supports a number of different approaches.
Two fundamentally different approaches are the most commonly used:
- Fixed topology training - The size and topology of the ANN are determined in advance, and the training alters the weights in order to minimize the difference between the desired output values and the actual output values. This kind of training is supported by fann_train_on_data (see the sketch after this list).
- Evolving topology training - The training starts out with an empty ANN, consisting only of input and output neurons. Hidden neurons and connections are added during training, in order to achieve the same goal as for fixed topology training. This kind of training is supported by FANN Cascade Training.
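For the fixed topology case, a minimal sketch of training from a loaded data set, assuming the C library's fann_read_train_from_file, fann_train_on_data and fann_destroy_train are bound unchanged (the file name is hypothetical):

```rust
extern crate fann_sys;

use std::ffi::CString;
use fann_sys::*;

unsafe fn train_fixed_topology(ann: *mut fann) {
    // Load a training set in FANN's text format (hypothetical file name).
    let path = CString::new("train.data").unwrap();
    let data = fann_read_train_from_file(path.as_ptr());
    assert!(!data.is_null());

    // At most 1000 epochs, report every 100 epochs, stop at an MSE of 0.001.
    fann_train_on_data(ann, data, 1_000, 100, 0.001);

    fann_destroy_train(data);
}
```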
Cascade Training
Cascade training differs from ordinary training in the sense that it starts with an empty neural network and then adds neurons one by one, while it trains the neural network. The main benefit of this approach is that you do not have to guess the number of hidden layers and neurons prior to training, but cascade training has also proved better at solving some problems.
The basic idea of cascade training is that a number of candidate neurons are trained separately from the real network; the most promising of these candidate neurons is then inserted into the neural network. After that, the output connections are trained and new candidate neurons are prepared. The candidate neurons are created as shortcut connected neurons in a new hidden layer, which means that the final neural network will consist of a number of hidden layers with one shortcut connected neuron in each.
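A hedged sketch of what cascade training could look like through these bindings, assuming the C API's fann_create_shortcut_array, fann_cascadetrain_on_data and fann_read_train_from_file are exposed as-is (the file name is hypothetical):

```rust
extern crate fann_sys;
extern crate libc;

use std::ffi::CString;
use fann_sys::*;

unsafe fn cascade_train() {
    // Start with only input and output neurons; hidden neurons are added while training.
    let layers: [libc::c_uint; 2] = [2, 1];
    let ann = fann_create_shortcut_array(2, layers.as_ptr());

    let path = CString::new("train.data").unwrap();
    let data = fann_read_train_from_file(path.as_ptr());

    // Add at most 30 candidate neurons, report after each one, stop at an MSE of 0.001.
    fann_cascadetrain_on_data(ann, data, 30, 1, 0.001);

    fann_destroy_train(data);
    fann_destroy(ann);
}
```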
File Input/Output
It is possible to save an entire ANN to a file with fann_save for future loading with fann_create_from_file.
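A small sketch of the save and reload round trip, assuming fann_save and fann_create_from_file keep their C signatures (the configuration file name is made up):

```rust
extern crate fann_sys;

use std::ffi::CString;
use fann_sys::*;

unsafe fn save_and_reload(ann: *mut fann) -> *mut fann {
    // Persist the network layout and weights to a configuration file.
    let path = CString::new("net.config").unwrap();
    fann_save(ann, path.as_ptr());

    // Later, reconstruct an equivalent network from that file.
    fann_create_from_file(path.as_ptr())
}
```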
Error Handling
Errors from the FANN library are usually reported on stderr. It is, however, possible to redirect these error messages to a file, or to ignore them completely, with the fann_set_error_log function.
It is also possible to inspect the last error message by using the fann_get_errno and fann_get_errstr functions.
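A sketch of error inspection through the raw bindings, assuming fann_error, fann_set_error_log, fann_get_errno and fann_get_errstr are bound as in the C API; FANN documents that a struct fann pointer may be cast to struct fann_error for these calls:

```rust
extern crate fann_sys;

use std::ffi::CStr;
use std::ptr;
use fann_sys::*;

unsafe fn inspect_errors(ann: *mut fann) {
    // A null FILE pointer disables logging for this network; pass an open
    // FILE pointer instead to redirect the messages to a file.
    fann_set_error_log(ann as *mut fann_error, ptr::null_mut());

    // After a failing call, the last error code and message remain available.
    let _errno = fann_get_errno(ann as *mut fann_error);
    let errstr = CStr::from_ptr(fann_get_errstr(ann as *mut fann_error));
    println!("last FANN error: {}", errstr.to_string_lossy());
}
```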
Datatypes
The two main datatypes used in the FANN library are fann, which represents an artificial neural network, and fann_train_data, which represents training data.