
Estimating the System Performance

The reliability of the neural network pattern recognition system is measured by testing the network with hundreds of input vectors with varying quantities of noise.

For example, we create a noisy version of the letter ``J'' (noise with standard deviation 0.2) and query the network:

   noisyJ = alphabet(:,10)+randn(35,1) * 0.2;   % add Gaussian noise (std 0.2) to the bitmap of "J"
   plotchar(noisyJ);                            % display the noisy input
   A2 = sim(net,noisyJ);                        % network response to the noisy letter
   A2 = compet(A2);                             % winner-take-all: only the largest output becomes 1
   answer = find(A2 == 1);                      % index of the recognized letter
   plotchar(alphabet(:,answer));                % display the letter the network picked

Figure 3 shows the noisy letter and the letter the network picked (correctly).

Figure 3: The network is tested with a noisy version of the letter ``J''.

The script file appcr1 tests the network at various noise levels and then graphs the percentage of network errors versus noise. Noise with mean 0 and a standard deviation ranging from 0 to 0.5 is added to the input vectors. At each noise level, 100 presentations of different noisy versions of each letter are made and the network's output is computed. The output is then passed through the competitive transfer function so that exactly one of the 26 outputs (one per letter of the alphabet) has a value of 1.

The erroneous classifications are then counted and converted to error percentages, as sketched below.
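A minimal sketch of such a test loop follows. It is not the literal contents of appcr1; the network net and the 35-by-26 letter matrix alphabet are assumed to exist from the training steps above, and the variable names are purely illustrative.

   % Sketch of the noise-robustness test described above (not the code of appcr1).
   noise_levels = 0:0.05:0.5;               % standard deviations of the added noise
   num_reps = 100;                          % noisy presentations per letter and level
   errors = zeros(size(noise_levels));
   for i = 1:length(noise_levels)
       for rep = 1:num_reps
           noisy = alphabet + randn(35,26) * noise_levels(i);  % noisy copies of all 26 letters
           out = compet(sim(net,noisy));                       % winner-take-all outputs
           [maxval, answer] = max(out);                        % winning output index per column
           errors(i) = errors(i) + sum(answer ~= 1:26);        % count misclassified letters
       end
   end
   error_pct = 100 * errors / (26 * num_reps);                 % error percentage per noise level
   plot(noise_levels, error_pct);
   xlabel('Noise level (standard deviation)');
   ylabel('Percentage of recognition errors');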

Figure 4: Performance of the network trained with and without noise.

The solid line on the graph shows the reliability of the network trained with and without noise; the dashed line shows the reliability of the same network when it had only been trained without noise. Training the network on noisy input vectors thus greatly reduces its errors when it has to classify noisy vectors.

The network made no errors for vectors with noise of standard deviation 0.00 or 0.05. When noise of standard deviation 0.2 was added to the vectors, both networks began making errors.

If higher accuracy is needed, the network can be trained for a longer time or retrained with more neurons in its hidden layer. The resolution of the input vectors can also be increased, for example to a 10-by-14 grid. Finally, the network could be trained on input vectors with greater amounts of noise if greater reliability is needed at higher noise levels.
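As a rough sketch of the first two suggestions, the network could be rebuilt with a larger hidden layer and trained for more epochs. The call below assumes the two-layer logsig architecture and the targets matrix used earlier in this tutorial; the hidden-layer size of 25 and the epoch count are arbitrary example values.

   % Sketch: rebuild the network with a larger hidden layer before retraining.
   % Assumes `alphabet` and `targets` (the 26x26 target matrix) from the
   % earlier training section; 25 hidden neurons is only an example.
   S1 = 25;                                                      % enlarged hidden layer
   net = newff(minmax(alphabet),[S1 26],{'logsig' 'logsig'},'traingdx');
   net.trainParam.epochs = 5000;                                 % train for a longer time
   net = train(net,alphabet,targets);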