Prediction with Multi-Layer Perceptrons
This applet illustrates the prediction capabilities of
multi-layer perceptrons. It lets the user define an input signal on
which prediction will be performed. The user can choose the number
of input units, hidden units and output units, as well as the delay
between the input series and the predicted output series, and then
observe some interesting prediction properties.
The original applet was written by
Choose a function for prediction from the popup menu:
- Complex is a sum of 4 sinusoids with different periods.
- Chaos is the Ikeda chaotic function (i.e.,
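As an illustration, the Complex function could be built like this. The specific periods and weights below are assumptions for the sketch, not the applet's actual constants:

```python
import math

def complex_signal(x, periods=(1.0, 0.5, 0.25, 0.125),
                   weights=(0.5, 0.25, 0.15, 0.1)):
    """Sum of 4 sinusoids with different periods, rescaled into [0, 1]
    to match the applet's display range. Periods and weights are
    illustrative, not the applet's real values."""
    s = sum(w * math.sin(2 * math.pi * x / p)
            for w, p in zip(weights, periods))
    # The weights sum to 1, so s stays in [-1, 1] and the result in [0, 1].
    return 0.5 + 0.5 * s
```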
Choose the number of input units and output units. Choose a delta
parameter that sets the spacing of the input and output units on the
function. Then choose the delay between the input units and the
output units.
Finally, click on the "Init" button to see the positions of the
units centered on the graph. Input units are plotted in blue while
output units are plotted in red.
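The placement of the units can be sketched as follows. The convention assumed here (consecutive units spaced by delta, the first output placed delay after the last input) is a guess at the applet's layout, not a confirmed specification:

```python
def unit_positions(n_in, n_out, delta, delay, start=0.0):
    """Positions of input and output units along the x-axis.
    Assumption: consecutive units are spaced by `delta`, and the first
    output unit sits `delay` after the last input unit."""
    inputs = [start + i * delta for i in range(n_in)]
    first_out = inputs[-1] + delay
    outputs = [first_out + j * delta for j in range(n_out)]
    return inputs, outputs
```

For example, 3 inputs and 2 outputs with delta = 0.1 and delay = 0.2 put the inputs at 0.0, 0.1, 0.2 and the outputs at 0.4, 0.5.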
During learning and testing, inputs and outputs are chosen
anywhere within the range [0;1]. The algorithm first computes the
span between the first input and the last output. This span is then
gradually slid across [0;1] in 100 iterations, starting
from 0 until the end of the span reaches 1.
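The sweep described above can be sketched as follows (the exact stepping scheme the applet uses may differ):

```python
def sliding_windows(span, n_steps=100):
    """Slide a window of width `span` across [0, 1] in `n_steps`
    iterations, yielding the position of the first input each time.
    The window's end moves from `span` at step 0 up to 1.0 at the
    last step."""
    for k in range(n_steps):
        offset = k * (1.0 - span) / (n_steps - 1)
        yield offset
```

With a span of 0.4 (say, two inputs spaced 0.1 apart plus a delay of 0.3), the first input sweeps from 0.0 to 0.6 so that the last output never leaves [0;1].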
You can also click on the graph to change the position of the
first input unit (this does not change the learning process,
only the display). You will then see the response of the neural
network (red points) to the specified input.
Learning is performed over the range [0;1]. The range [1;2] can
be used for testing the generalization capabilities of the network.
- Cosinus: try to find the minimal neural network able to
make a good prediction on the Cosinus function with a single output.
- Sinus: same question as above, but with the Sinus
function. Why is a single input not enough?
- Complex: same question, but with the Complex function
(which is a sum of 4 sinusoids with different periods). Can you
define a rule for computing the minimal number of inputs depending
on the shape of the function to be predicted?
- Chaos: same question, but with the Chaos function.
Congratulations if you can find a network that makes such a prediction!
How do you explain that it is so difficult?
- Two outputs: try to find the minimal neural network able
to make a good prediction on the Cosinus function with a pair of
output neurons. Is it more difficult than the first question? Why?
- Three outputs: now try with 3 output neurons and with
the Sinus function. Note: you may need to adjust the delta and
delay parameters.
- Four outputs: finally, try to design the smallest
network (it will still be rather big) that can predict four
outputs on the Complex function. Does it need more learning than
the other examples?
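To see the kind of learning the applet performs outside the browser, here is a minimal from-scratch sketch of the first exercise: a tiny sigmoid MLP trained by backpropagation to predict the Cosinus function. The architecture (2 inputs, 4 hidden units, 1 output), the delta/delay values and the learning rate are all illustrative assumptions, not the applet's actual settings:

```python
import math
import random

def signal(x):
    """Cosinus rescaled into [0, 1], matching the applet's display range."""
    return 0.5 + 0.5 * math.cos(2 * math.pi * x)

def make_pattern(start, delta=0.1, delay=0.2, n_in=2):
    """Sample n_in inputs spaced by delta, and the target delay after
    the last input (same layout convention as described above)."""
    xs = [start + i * delta for i in range(n_in)]
    return [signal(x) for x in xs], signal(xs[-1] + delay)

class TinyMLP:
    """A minimal 2-4-1 sigmoid MLP trained by plain backpropagation.
    A from-scratch sketch, not the applet's actual code."""

    def __init__(self, n_in=2, n_hid=4, seed=0):
        rng = random.Random(seed)
        self.w1 = [[rng.uniform(-1, 1) for _ in range(n_in + 1)]
                   for _ in range(n_hid)]          # hidden weights (+ bias)
        self.w2 = [rng.uniform(-1, 1) for _ in range(n_hid + 1)]  # output (+ bias)

    @staticmethod
    def _sig(z):
        return 1.0 / (1.0 + math.exp(-z))

    def forward(self, x):
        xb = x + [1.0]                             # append bias input
        h = [self._sig(sum(w * v for w, v in zip(row, xb))) for row in self.w1]
        y = self._sig(sum(w * v for w, v in zip(self.w2, h + [1.0])))
        return h, y

    def train_step(self, x, t, lr=0.5):
        """One backpropagation step on pattern (x, t); returns squared error."""
        h, y = self.forward(x)
        dy = (y - t) * y * (1.0 - y)               # delta at the output unit
        xb = x + [1.0]
        for j, hv in enumerate(h + [1.0]):
            back = dy * self.w2[j]                 # gradient flowing back (pre-update)
            self.w2[j] -= lr * dy * hv
            if j < len(h):                         # hidden units only (skip bias)
                dh = back * hv * (1.0 - hv)
                for i, xv in enumerate(xb):
                    self.w1[j][i] -= lr * dh * xv
        return (y - t) ** 2
```

Training sweeps the window over [0;1] exactly as described above: for each of 100 iterations the first input is moved so the last output stays inside the range, and one backprop step is taken per position.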