Computational Intelligence, SS08
2 VO 442.070 + 1 RU 708.070

Prediction with Multi-Layer Perceptrons

Introduction

This applet illustrates the prediction capabilities of multi-layer perceptrons. It lets the user define an input signal on which prediction is performed, choose the number of input units, hidden units and output units, and set the delay between the input series and the predicted output series. Several interesting prediction properties can then be observed.

Credits

The original applet was written by Olivier Michel.

Instructions

Choose a function for prediction with the popup menu:

  • Cosinus.
  • Sinus.
  • Complex: a sum of four sinusoids with different periods.
  • Chaos: the Ikeda chaotic function.
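For reference, signals of this kind can be sketched in Python. The exact periods, phases, and the scaling into [0;1] used by the applet are assumptions, and the logistic map below is only a simple stand-in for the applet's actual chaotic function:

```python
import math

def cosinus(x):
    # cosine rescaled into [0, 1]; period 1 is an assumption
    return 0.5 + 0.5 * math.cos(2 * math.pi * x)

def sinus(x):
    # sine rescaled into [0, 1]
    return 0.5 + 0.5 * math.sin(2 * math.pi * x)

def complex_signal(x):
    # sum of four sinusoids with different periods, rescaled to stay in [0, 1]
    s = sum(math.sin(2 * math.pi * k * x) for k in (1, 2, 3, 5))
    return 0.5 + s / 8.0

def chaos(n, x0=0.1, r=3.9):
    # logistic map: a simple chaotic stand-in, NOT the applet's Ikeda function
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs
```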

Choose the number of input units and output units. Choose a delta parameter that sets the spacing of the input and output units on the function, then choose the delay between the input units and the output units. Finally, click the "Init" button to see the positions of the units centered on the graph. Input units are plotted in blue, output units in red.

During learning and testing, inputs and outputs are chosen anywhere within the range [0;1]. The algorithm first computes the span between the first input and the last output. This window is then gradually slid across the [0;1] interval in 100 iterations, starting from 0 until the end of the window reaches 1.
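A sketch of how such a sliding window could generate training pairs, under an assumed unit layout (input unit i samples the function at offset i*delta from the window start; output unit j at offset (n_in-1)*delta + delay + j*delta; the applet's exact layout may differ):

```python
def make_windows(f, n_in, n_out, delta, delay, steps=100):
    # total span from the first input sample to the last output sample
    span = (n_in - 1) * delta + delay + (n_out - 1) * delta
    samples = []
    for k in range(steps):
        # slide the window start so that its end reaches 1.0 on the last step
        start = k * (1.0 - span) / (steps - 1)
        xs = [f(start + i * delta) for i in range(n_in)]
        ys = [f(start + (n_in - 1) * delta + delay + j * delta)
              for j in range(n_out)]
        samples.append((xs, ys))
    return samples
```

Each pair (xs, ys) is one training example: the network is shown the input samples xs and trained to produce the future samples ys.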

You can click on the graph to change the position of the first input unit (this affects only the display, not the learning process). You will then see the response of the neural network (red points) to the specified input.

Learning is performed over the range [0;1]. The range [1;2] can be used to test the generalization capabilities of the network.
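The generalization that the applet shows visually can also be quantified: compare a predictor's mean squared error on the training range [0;1] with its error on the unseen range [1;2]. A minimal sketch, with an arbitrary baseline predictor standing in for the trained network (the [0;1] rescaling of the Sinus signal is an assumption):

```python
import math

def sinus(x):
    # the Sinus signal, rescaled into [0, 1]
    return 0.5 + 0.5 * math.sin(2 * math.pi * x)

def mse(predict, f, lo, hi, n=100):
    # mean squared error of `predict` against the true function f over [lo, hi]
    pts = [lo + (hi - lo) * k / (n - 1) for k in range(n)]
    return sum((predict(x) - f(x)) ** 2 for x in pts) / n

# a constant predictor (the signal's mean) serves as a baseline "network"
baseline = lambda x: 0.5
train_err = mse(baseline, sinus, 0.0, 1.0)
test_err = mse(baseline, sinus, 1.0, 2.0)
```

Because Sinus is periodic with period 1, any predictor scores the same on [0;1] and [1;2]; for a non-periodic signal such as Chaos the two errors can differ sharply, which is one reason generalization is hard there.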

Applet

 

Questions

  1. Cosinus: try to find the minimal neural network able to make a good prediction of the Cosinus function with a single output neuron.
  2. Sinus: same question as above, but with the Sinus function. Why is a single input not enough?
  3. Complex: same question, but with the Complex function (a sum of four sinusoids with different periods). Can you formulate a rule for the minimal number of inputs as a function of the shape of the function to be predicted?
  4. Chaos: same question, but with the Chaos function. Congratulations if you can find a network that makes such a prediction! Why do you think it is so difficult?
  5. Two outputs: try to find the minimal neural network able to make a good prediction of the Cosinus function with two output neurons. Is it more difficult than question 1? Why?
  6. Three outputs: now try with three output neurons and the Sinus function. Note: you may need to adjust the delta and delay parameters.
  7. Four outputs: finally, try to design the smallest network (it will still be rather big) that can predict four outputs of the Complex function. Does it need more learning than the other examples?