
The toolbox is based on the network object.
This object contains information about everything that concerns the
neural network, e.g. the number and structure of its layers, the
connectivity between the layers, etc. Matlab provides high-level
network creation functions, like newlin (create
a linear layer), newp (create a perceptron) or
newff (create a feedforward backpropagation
network) to allow easy construction of networks. As an example we
construct a perceptron with two inputs ranging from -2 to
2:
>> net = newp([-2 2;-2 2],1)
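The freshly created network can be evaluated right away with sim; a minimal sketch (since the perceptron is still untrained, the output just reflects its initial weights):

```matlab
% Create a perceptron with two inputs in [-2,2] and one neuron,
% then evaluate it on a single input vector with sim.
net = newp([-2 2; -2 2], 1);
y = sim(net, [1; -1])   % output of the (still untrained) perceptron
```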
First the architecture parameters and the subobject structures are shown:
subobject structures:
inputs: {1x1 cell} of inputs
layers: {1x1 cell} of layers
outputs: {1x1 cell} containing 1 output
targets: {1x1 cell} containing 1 target
biases: {1x1 cell} containing 1 bias
inputWeights: {1x1 cell} containing 1 input weight
layerWeights: {1x1 cell} containing no layer weights
The latter contains information
about the individual objects of the network. Each layer consists of
neurons with the same transfer function (net.layers{i}.transferFcn) and net input function (net.layers{i}.netInputFcn), which in the case of perceptrons are
hardlim and netsum. If
neurons should have different transfer functions, they have to
be arranged in different layers. The parameters net.inputWeights and net.layerWeights specify, among other things, the applied
learning functions and their parameters. The next paragraph
contains the training, initialization and performance
functions.
functions:
adaptFcn: 'trains'
initFcn: 'initlay'
performFcn: 'mae'
trainFcn: 'trainc'
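As noted above, all neurons of a layer share one transfer function, which lives in the layer subobject and can be inspected or replaced there. A small sketch, swapping in the symmetric hard limit hardlims as an example:

```matlab
net = newp([-2 2; -2 2], 1);
net.layers{1}.transferFcn                 % 'hardlim' for a perceptron
net.layers{1}.transferFcn = 'hardlims';   % outputs -1/1 instead of 0/1
```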
The trainFcn and adaptFcn are used for the two different learning types:
batch learning and incremental (online) learning. By setting the
trainFcn parameter you tell Matlab which
training algorithm should be used, which in our case is the
cyclical order incremental training/learning function trainc. The ANN toolbox includes almost 20 training
functions. The performance function is the function that determines
how well the ANN is doing its task. For a perceptron it is the
mean absolute error performance function mae.
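What a performance function computes can be checked directly: it takes a vector of errors (targets minus outputs). A small sketch with made-up error values, also using mse for comparison:

```matlab
e = [0.5 -1 0.25];   % example error vector: targets minus outputs
mae(e)               % mean absolute error, mean(abs(e)) = 0.5833
mse(e)               % mean squared error,  mean(e.^2)   = 0.4375
```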
For linear regression usually the mean squared error performance
function mse is used. The initFcn is the function that initializes the weights and
biases of the network. To get a list of the functions that are
available type help nnet. To change one of these functions to
another one in the toolbox or one that you have created, just
assign the name of the function to the parameter, e.g.
>> net.trainFcn = 'mytrainingfun';
The parameters that concern these functions
are listed in the next paragraph.
parameters:
adaptParam: .passes
initParam: (none)
performParam: (none)
trainParam: .epochs, .goal, .show, .time
By changing these parameters you can change
the default behavior of the functions mentioned above. The
parameters you will use the most are probably the components of
trainParam. The most used of these are
net.trainParam.epochs, which sets
the maximum number of epochs to train, and net.trainParam.show, which sets how many
epochs should pass between each report of the
performance. Type help train for more
information.
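Putting these parameters together, the perceptron can be trained on a small, linearly separable problem; the logical AND data below is our own example, not from the toolbox:

```matlab
net = newp([-2 2; -2 2], 1);
net.trainParam.epochs = 20;   % train for at most 20 epochs
net.trainParam.show   = 5;    % report performance every 5 epochs
P = [0 0 1 1; 0 1 0 1];       % four input vectors (one per column)
T = [0 0 0 1];                % targets: logical AND of the inputs
net = train(net, P, T);
Y = sim(net, P)               % should match T once training converges
```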
The weights and biases are also stored in the network
structure:
weight and bias values:
IW: {1x1 cell} containing 1 input weight matrix
LW: {1x1 cell} containing no layer weight matrices
b: {1x1 cell} containing 1 bias vector
The component net.IW{i,j} of
the two-dimensional cell array IW holds the weight matrix of the
connection between input j and network
layer i. The component net.LW{i,j}
holds the weight matrix for the connection from
network layer j to layer i. The cell array b contains the
bias vector for each layer.
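These cell arrays can be read and written directly, which is handy for inspecting or hand-setting a network; a short sketch for the perceptron created above:

```matlab
net = newp([-2 2; -2 2], 1);
net.IW{1,1}          % 1x2 weight matrix: layer 1 from input 1
net.b{1}             % bias vector of layer 1
net.IW{1,1} = [1 2]; % weights and biases can also be set by hand
net.b{1}    = -1;
```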
