
Perceptron Learning Algorithm

The perceptron learning rule was originally developed by Frank Rosenblatt in the late 1950s. Training patterns are presented to the network's inputs; the output is computed. Then the connection weights w_j are modified by an amount proportional to the product of
  • the difference between the actual output, y,  and the desired output, d, and 
  • the input pattern, x.
The algorithm is as follows:
  1. Initialize the weights and threshold to small random numbers.
  2. Present a vector x to the neuron inputs and calculate the output.
  3. Update the weights according to
     w_j(t+1) = w_j(t) + eta (d - y) x_j(t)
     where
    • d is the desired output,
    • t is the iteration number, and
    • eta is the gain or step size, with 0.0 < eta < 1.0.
  4. Repeat steps 2 and 3 until:
    • the iteration error is less than a user-specified error threshold or
    • a predetermined number of iterations have been completed.
Notice that learning only occurs when an error is made; otherwise the weights are left unchanged.

This rule is thus a modified form of Hebb learning.
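
The procedure above translates almost directly into code. The following is a minimal sketch in Python/NumPy, assuming a single neuron with a hard-threshold output and 0/1 targets; the function and parameter names (perceptron_train, eta, max_iter, error_threshold) are illustrative and not taken from the course materials:

import numpy as np

def perceptron_train(X, d, eta=0.1, max_iter=100, error_threshold=0):
    # X: (p, n) array of input vectors, d: (p,) array of desired 0/1 outputs.
    p, n = X.shape
    rng = np.random.default_rng(0)
    # Step 1: initialize the weights and threshold to small random numbers.
    w = rng.uniform(-0.5, 0.5, size=n)
    theta = rng.uniform(-0.5, 0.5)
    for t in range(max_iter):
        errors = 0
        for x, target in zip(X, d):
            # Step 2: present the input vector and compute the output.
            y = 1 if np.dot(w, x) > theta else 0
            # Step 3: update the weights; when y == target nothing changes.
            w += eta * (target - y) * x
            # The threshold is treated here as a bias weight on a constant
            # input of -1 (an assumption; the text only says to initialize it).
            theta -= eta * (target - y)
            errors += int(y != target)
        # Step 4: stop once the iteration error is small enough.
        if errors <= error_threshold:
            break
    return w, theta

# Example: the rule converges for a linearly separable problem such as logical AND.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
d = np.array([0, 0, 0, 1])
w, theta = perceptron_train(X, d)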

During training, it is often useful to measure the performance of the network as it attempts to find the optimal weight set. A common error measure, or cost function, is the sum-squared error. It is computed over all of the input vector/output vector pairs in the training set and is given by

E = Σ_{i=1..p} (d_i - y_i)^2

where p is the number of input/output vector pairs in the training set, and d_i and y_i are the desired and actual outputs for the i-th pair.
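
As an illustration, this quantity could be computed for the sketch above as follows (again only a sketch, reusing the NumPy import and the scalar 0/1 outputs and variable names assumed there):

def sum_squared_error(X, d, w, theta):
    # Sum over all p training pairs of the squared difference between
    # the desired output d_i and the actual output y_i.
    y = (X @ w > theta).astype(float)
    return float(np.sum((d - y) ** 2))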