
Single-Layer Perceptron Neural Networks
A single-layer perceptron network consists of one or more
artificial neurons in parallel. The neurons may be of the
same type we've seen in the Artificial Neuron Applet.
Each neuron in the layer provides one network output and is
usually connected to all of the external (or environmental)
inputs.
The applet in this tutorial is an example of a single-neuron,
single-layer perceptron network, with just two inputs.
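The behavior of such a single-neuron, two-input perceptron can be sketched in a few lines. This is an illustrative sketch, not the applet's actual code: the neuron forms a weighted sum of its inputs plus a bias and passes it through a hard threshold. The weights and bias below are hand-picked assumptions chosen so the neuron computes logical AND.

```python
def perceptron_output(inputs, weights, bias):
    """Return 1 if the weighted sum of inputs (plus bias) is positive, else 0."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if s > 0 else 0

# Illustrative weights and bias (an assumption, not from the applet):
# with these values the neuron computes the AND of its two inputs.
weights = [1.0, 1.0]
bias = -1.5
for x1 in (0, 1):
    for x2 in (0, 1):
        print((x1, x2), "->", perceptron_output([x1, x2], weights, bias))
```

Changing the weights and bias changes which of the two-input Boolean functions the neuron computes, which is what the training rule below automates.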
The perceptron learning rule, which we study next, provides a
simple algorithm for training a perceptron neural network. However,
as we will see, single-layer perceptron networks cannot learn
everything: they are not computationally complete. As
mentioned in the introduction, two-input networks cannot
approximate the XOR (or XNOR) functions. Of the
2^{2^{n}} possible functions of n inputs — 16 when n = 2 — a
two-input perceptron can perform only 14. As the number of inputs,
n, increases, the proportion of functions that can be computed
decreases rapidly.
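The limitation above can be seen directly by running the perceptron learning rule, sketched here in a simplified form (the rule itself is covered in the next section; the learning rate and epoch count are illustrative assumptions). On a linearly separable function such as OR, the weight updates converge to a correct solution; on XOR, no setting of two weights and a bias classifies all four cases, so training never reaches zero errors.

```python
def predict(w, b, x):
    # Hard-threshold neuron: fire if the weighted sum plus bias is positive.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

def train(samples, epochs=20, lr=0.1):
    """Perceptron learning rule: nudge weights toward each misclassified example."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        errors = 0
        for x, target in samples:
            err = target - predict(w, b, x)
            if err:
                errors += 1
                w[0] += lr * err * x[0]
                w[1] += lr * err * x[1]
                b += lr * err
        if errors == 0:
            return w, b, True    # converged: all samples classified correctly
    return w, b, False           # gave up: the classes were never separated

OR  = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
print("OR learned: ", train(OR)[2])   # True
print("XOR learned:", train(XOR)[2])  # False
```

XOR and XNOR are exactly the two functions of two inputs that are not linearly separable, which is why the count is 14 out of 16.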
Later, we will investigate multilayer perceptrons.
