next up previous
Next: PCA vs. Fisher's Linear Up: NNA_Exercises_2009 Previous: Linear models for regression

Linear models for regression II [3 P]

Investigate how the prediction of a linear model consisting of radial (Gaussian) basis functions (RBFs) depends on the number of basis functions and on the value of the variance parameter. Generate values for the input $x$ in the interval $[0.1, 1]$ and a target function $t(x) = \sin\left(\frac{4}{x}\right)$. Add Gaussian noise with standard deviation $0.05$ to the target values and use the result as your training data.
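The data generation can be sketched as follows. The exercise suggests Matlab (`randn`); this is a NumPy stand-in, where the grid size (100 points) and the RNG seed are assumptions — the interval, target function, and noise level come from the text above.

```python
import numpy as np

# Assumed: 100 grid points and a fixed seed; interval, target, and noise level
# are taken from the exercise text.
rng = np.random.default_rng(0)
x = np.linspace(0.1, 1.0, 100)                            # inputs on [0.1, 1]
t_clean = np.sin(4.0 / x)                                 # target t(x) = sin(4/x)
t_noisy = t_clean + 0.05 * rng.standard_normal(x.shape)   # Gaussian noise, sd 0.05
```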

a)

Randomly select $k$ points from the $x$-interval and the corresponding (noisy) target values. Train the linear model with exact interpolation (i.e. one basis function centered on every training point) and the default width ($\sigma^2 = 1.0$) on these training points. Measure the mean squared error (MSE) between the original function (without noise) and the predictions of the linear model on the whole $x$-interval. Repeat the experiment at least 20 times for every value of $k = 20, \ldots, 50$ to obtain a more reliable estimate of the true MSE. Plot the MSE and the standard error of the MSE (with error bars) as a function of the number of basis functions $k$.
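A possible structure for this experiment, again as a NumPy sketch rather than the official Matlab solution: the grid resolution, seed, and repetition count are assumptions, and `lstsq` is used in place of a direct solve because the $k \times k$ Gram matrix is badly ill-conditioned at $\sigma^2 = 1$ on such a short interval.

```python
import numpy as np

rng = np.random.default_rng(1)
x_grid = np.linspace(0.1, 1.0, 200)    # assumed evaluation grid for the whole interval
t_grid = np.sin(4.0 / x_grid)          # noise-free target for the MSE

def rbf_design(x, centers, sigma2):
    # Gaussian basis functions phi_j(x) = exp(-(x - c_j)^2 / (2 sigma^2))
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2.0 * sigma2))

def exact_interp_mse(k, sigma2=1.0):
    # Pick k random training points, place one RBF on each (exact interpolation),
    # fit the weights, and measure the MSE against the clean target on the grid.
    idx = rng.choice(len(x_grid), size=k, replace=False)
    xc = x_grid[idx]
    tc = t_grid[idx] + 0.05 * rng.standard_normal(k)        # noisy targets
    Phi = rbf_design(xc, xc, sigma2)                        # k x k system
    w, *_ = np.linalg.lstsq(Phi, tc, rcond=None)            # ill-conditioned -> lstsq
    pred = rbf_design(x_grid, xc, sigma2) @ w
    return np.mean((pred - t_grid) ** 2)

ks = range(20, 51)
mse = np.array([[exact_interp_mse(k) for _ in range(20)] for k in ks])
mean_mse = mse.mean(axis=1)
stderr = mse.std(axis=1, ddof=1) / np.sqrt(mse.shape[1])
# plt.errorbar(ks, mean_mse, yerr=stderr) would produce the requested plot
```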

b)

Set the number of basis functions to the lowest value with which you still obtain a reliably good prediction. Manually select good positions for the RBF centers (explain how you chose them!) and train the weights. Plot the prediction of the network and the target curve in one figure.
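One defensible way to pick the centers (an assumption for illustration, not the official solution): place a center at every extremum of $\sin(4/x)$, i.e. where $4/x = \pi/2 + n\pi$, so the centers automatically become denser toward $x = 0.1$, where the target oscillates fastest. The width below is likewise an assumed value, chosen roughly on the scale of the local center spacing.

```python
import numpy as np

rng = np.random.default_rng(2)
x_grid = np.linspace(0.1, 1.0, 200)
t_grid = np.sin(4.0 / x_grid)
t_train = t_grid + 0.05 * rng.standard_normal(x_grid.shape)

# Centers at the extrema of sin(4/x): 4/x = pi/2 + n*pi for n = 1..12
# gives 12 centers inside [0.1, 1], denser near the fast-oscillating left edge.
n = np.arange(1, 13)
centers = 4.0 / (np.pi / 2 + n * np.pi)
sigma2 = 0.002                       # assumed width, on the order of the local spacing

Phi = np.exp(-(x_grid[:, None] - centers[None, :]) ** 2 / (2.0 * sigma2))
w, *_ = np.linalg.lstsq(Phi, t_train, rcond=None)    # least-squares fit of the weights
pred = Phi @ w
mse = np.mean((pred - t_grid) ** 2)
# plt.plot(x_grid, t_grid); plt.plot(x_grid, pred) would give the requested figure
```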

c)

Use the same RBF centers as in b). Now change the width of the RBFs and investigate how the prediction changes. Plot the MSE as a function of the RBF width $\sigma^2$, and plot the predictions of the network when the width is either too small or too wide. Explain what you find.
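The width sweep could be organized as below (NumPy sketch; the fixed uniform centers here are a stand-in for whatever you chose in b), and the sweep range is an assumption).

```python
import numpy as np

rng = np.random.default_rng(3)
x_grid = np.linspace(0.1, 1.0, 200)
t_grid = np.sin(4.0 / x_grid)
t_train = t_grid + 0.05 * rng.standard_normal(x_grid.shape)
centers = np.linspace(0.1, 1.0, 15)   # assumed fixed centers, standing in for part b)

def mse_for_width(sigma2):
    # Fit the weights by least squares for a given RBF width and return the
    # MSE against the noise-free target on the grid.
    Phi = np.exp(-(x_grid[:, None] - centers[None, :]) ** 2 / (2.0 * sigma2))
    w, *_ = np.linalg.lstsq(Phi, t_train, rcond=None)
    return np.mean((Phi @ w - t_grid) ** 2)

widths = np.logspace(-5, 0, 30)       # sweep sigma^2 on a log scale
errors = [mse_for_width(s2) for s2 in widths]
# plt.semilogx(widths, errors): very narrow widths give spiky predictions that
# collapse between the centers, very wide widths give an over-smoothed curve.
```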

Useful Matlab commands: randn and errorbar.


Haeusler Stefan 2010-01-19