Next: Tangent propagation [2* P] Up: NNA_Exercises_2009 Previous: Digit Classification [3 P]

Decision Boundaries of Backprop [2* P]

Write a Matlab script that takes as input a two-dimensional input range and step size and computes the output of a neural network at every point of the resulting grid (e.g. for the inputs -1:0.1:1 and 0:0.5:10 the script would evaluate the network on the interval $ [-1, 1] \times [0, 10]$ with a step size of 0.1 in the x-direction and 0.5 in the y-direction). From the output of this script, create 2D plots in which the colour of a pixel shows the output of the network at that point. By thresholding the output (e.g. for a tansig output, $ >0$ is class 1) you can visualize the decision boundary of the network. (Useful commands: meshgrid, pcolor or imagesc, colormap.)
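The grid evaluation described above might be sketched as follows (a minimal sketch; `net` stands for some already-created network object and is an assumption, not part of the exercise):

```matlab
% Evaluate a network on a grid and show its output as a colour image.
% "net" is a hypothetical Neural Network Toolbox network object.
xr = -1:0.1:1;                    % x-range with step 0.1
yr =  0:0.5:10;                   % y-range with step 0.5
[X, Y] = meshgrid(xr, yr);        % all grid points
P = [X(:)'; Y(:)'];               % one column per grid point
Z = reshape(net(P), size(X));     % network output at every point
imagesc(xr, yr, Z); axis xy; colormap(jet); colorbar;
hold on;
contour(X, Y, Z, [0 0], 'k');     % tansig threshold 0 = decision boundary
```

The `contour` call with the level list `[0 0]` draws only the zero-level set, which for a tansig output unit is exactly the class boundary.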

a)
Visualize the decision boundary in the interval $ -5\leq x, y \leq 5$ for the following networks with given weights, where $ W_I$ gives the input-to-hidden weights (bias in the first row) and $ W_O$ is the hidden-to-output weight vector.

$\displaystyle W_I^1 = \left( \begin{array}{rr} 0.5 & -0.5 \\ 0.3 & -0.4 \\ -0.1 & \cdots \end{array} \right) \qquad W_O^1 = \left( \begin{array}{r} 1.0 \\ -2.0 \\ 0.5 \end{array} \right)$ (5)

$\displaystyle W_I^2 = \left( \begin{array}{rr} -1.0 & 1.0 \\ -0.5 & 1.5 \\ 1.5 & \cdots \end{array} \right) \qquad W_O^2 = \left( \begin{array}{r} 0.5 \\ -1.0 \\ 1.0 \end{array} \right)$ (6)
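The forward pass for these fixed weights might look like the sketch below. It assumes tansig units in both layers, two hidden neurons (matching the two columns of $ W_I$), and a bias as the first entry of $ W_O$ as well; the entry marked as a placeholder is truncated in the sheet and must be taken from the original exercise, not from this sketch.

```matlab
% Decision boundary of network 1 on [-5,5] x [-5,5].
% Assumptions: tansig everywhere, bias in the first row/entry of each
% weight matrix. The 0 below is a PLACEHOLDER for the truncated entry.
WI = [0.5 -0.5; 0.3 -0.4; -0.1 0];   % bias row first; last entry unknown
WO = [1.0; -2.0; 0.5];               % bias first, then hidden-to-output
xr = -5:0.1:5;
[X, Y] = meshgrid(xr, xr);
H = tansig([ones(numel(X),1) X(:) Y(:)] * WI);        % hidden activations
Z = reshape(tansig([ones(size(H,1),1) H] * WO), size(X));
imagesc(xr, xr, Z); axis xy; hold on;
contour(X, Y, Z, [0 0], 'k');        % decision boundary
```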

b)
Train neural networks with 2, 4 and 8 hidden neurons with standard backpropagation to classify the data in data1.mat. After training, plot the training points together with the decision boundary of each network and interpret what you find. How stable are your results when you repeat the experiment?
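One possible structure for this experiment is sketched below. The variable names P (2xN inputs) and T (1xN targets) inside data1.mat are assumptions; check the file contents with `whos -file data1.mat` first.

```matlab
% Sketch: train networks with 2, 4 and 8 hidden units using standard
% gradient-descent backpropagation and plot each decision boundary.
% Assumes data1.mat holds inputs P (2xN) and targets T (1xN).
load data1.mat
xr = -5:0.1:5;
[X, Y] = meshgrid(xr, xr);
for nh = [2 4 8]
    net = feedforwardnet(nh, 'traingd');    % standard backprop
    net = train(net, P, T);
    Z = reshape(net([X(:)'; Y(:)']), size(X));
    figure; imagesc(xr, xr, Z); axis xy; hold on;
    contour(X, Y, Z, [0 0], 'k');           % decision boundary
    plot(P(1,T>0), P(2,T>0), 'wo', ...      % class 1 points
         P(1,T<=0), P(2,T<=0), 'kx');       % class 2 points
    title(sprintf('%d hidden neurons', nh));
end
```

Because backpropagation starts from random initial weights, rerunning the loop generally produces different boundaries, which is exactly the stability question the exercise asks about.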


Haeusler Stefan 2010-01-19