Computational Intelligence, SS08, 2 VO 442.070 + 1 RU 708.070

## Components of an artificial neuron

#### Inputs, xi:

Typically, the input values are external stimuli from the environment or come from the outputs of other artificial neurons. They can be discrete values from a set, such as {0,1}, or real-valued numbers.

#### Weights, wi:

The first thing an artificial neuron does is to compute the weighted sum of its inputs (i.e., the inner product between the input pattern and the connection strengths). The weights are real-valued numbers that determine the contribution of each input.
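The weighted-sum computation can be sketched in a few lines of Python; the input and weight values here are invented for illustration:

```python
# Weighted sum of inputs: the inner product between the input pattern
# and the connection strengths (example values are invented).
inputs = [1.0, 0.0, 1.0]    # external stimuli or outputs of other neurons
weights = [0.5, -0.3, 0.8]  # real-valued connection strengths

weighted_sum = sum(w * x for w, x in zip(weights, inputs))
```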

The goal of neural network training algorithms is to determine the "best" possible set of weight values for the problem under consideration. Finding the optimal set is often a trade-off between computation time, minimizing the network error, and maintaining the network's ability to generalize.
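As one illustration of how a training algorithm adjusts the weights, here is the classic perceptron learning rule in Python. This is only a sketch of one such scheme (the learning rate and all values are invented), not the specific algorithms used in this course:

```python
# Perceptron learning rule: nudge each weight in proportion to the
# error (target - output) and the corresponding input value.
def perceptron_update(weights, inputs, target, output, lr=0.1):
    return [w + lr * (target - output) * x for w, x in zip(weights, inputs)]

# Example: the neuron produced 0.0 where the target was 1.0, so the
# weights attached to active inputs are increased.
new_w = perceptron_update([0.5, -0.3], [1.0, 1.0], target=1.0, output=0.0)
```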

#### Threshold, u:

The threshold is a real number that is subtracted from the weighted sum of the input values. Sometimes the threshold is called a bias value; in that case, the real number is added to the weighted sum. For simplicity, the threshold can be regarded as another input/weight pair, with the input fixed at x0 = 1 and the weight w0 = −u.
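This input/weight-pair view of the threshold can be checked numerically; a minimal sketch, assuming x0 = 1 and w0 = −u (all values invented):

```python
# Folding the threshold u into the weighted sum via an extra pair
# x0 = 1, w0 = -u (example values are invented).
u = 0.5
inputs = [1.0, 0.0, 1.0]
weights = [0.5, -0.3, 0.8]

explicit = sum(w * x for w, x in zip(weights, inputs)) - u
augmented = sum(w * x for w, x in zip([-u] + weights, [1.0] + inputs))
# explicit and augmented agree (up to floating-point rounding)
```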

#### Activation Function, f:

The activation function for the original McCulloch-Pitts neuron was the unit step function. However, the artificial neuron model has since been expanded to include other functions such as the sigmoid, piecewise linear, and Gaussian.

The identity function is the simplest possible activation function; the resulting unit is called a linear associator.

The activation functions available in this applet are shown in Table 1.

**Table 1.** The available activation functions: unit step, sigmoid, piecewise linear, Gaussian, and identity (f(x) = x).
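The functions of Table 1 can be written down in their standard textbook forms; the applet's exact parameterizations (e.g., the slope and saturation points of the piecewise linear function, the width of the Gaussian) are assumptions here:

```python
import math

def unit_step(x):
    # 1 if the net input is non-negative, else 0
    return 1.0 if x >= 0 else 0.0

def sigmoid(x):
    # smooth squashing of the real line into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def piecewise_linear(x):
    # linear ramp clipped to [-1, 1] (assumed saturation points)
    return max(-1.0, min(1.0, x))

def gaussian(x):
    # bell-shaped response, maximal at x = 0 (assumed unit width)
    return math.exp(-x * x / 2.0)

def identity(x):
    # f(x) = x; a unit with this activation is a linear associator
    return x
```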
#### Neuron Output, y:

The artificial neuron computes its output by applying the activation function to the weighted sum of the inputs, less the threshold:

y = f( ∑ᵢ wᵢ xᵢ − u )

This value can be discrete or real-valued, depending on the activation function used.

Once the output has been calculated, it can be passed to another neuron (or group of neurons) or sampled by the external environment. The interpretation of the neuron output depends upon the problem under consideration. For example, in pattern classification, an output of 1 might imply that the input belongs to a certain class, whereas an output of 0 might mean that it doesn't.
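Putting the pieces together, a single neuron used as a two-class pattern classifier might look like the following sketch (unit step activation; all values invented):

```python
def neuron_output(inputs, weights, u, f):
    # weighted sum of the inputs, less the threshold u, passed through f
    return f(sum(w * x for w, x in zip(weights, inputs)) - u)

step = lambda x: 1.0 if x >= 0 else 0.0

# Output 1 could mean "the input belongs to the class", 0 that it does not.
y = neuron_output([1.0, 0.0, 1.0], [0.5, -0.3, 0.8], u=0.5, f=step)
```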
