
# Comparison of Learning Algorithms [5 P]

Compare the performance of the three learning algorithms back-propagation, genetic algorithms and simulated annealing when they are used to optimize the weights of a neural network that predicts a nonlinear function of two variables. Use the file dataset.m as the source for the training and test sets.
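
The exercise is meant to be solved in MATLAB using dataset.m and compare.m. Since those files are not reproduced here, the following Python sketch only illustrates the kind of setup involved: sampling a nonlinear function of two variables and splitting the data into training and test sets. The function `target` is a made-up stand-in, not the function defined in dataset.m.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the function defined in dataset.m:
# some nonlinear function of two variables.
def target(x1, x2):
    return np.sin(3 * x1) * np.cos(2 * x2)

X = rng.uniform(-1, 1, size=(200, 2))   # 200 input points in [-1, 1]^2
y = target(X[:, 0], X[:, 1])

# Random 50/50 split into training and test set
idx = rng.permutation(len(X))
train_idx, test_idx = idx[:100], idx[100:]
X_train, y_train = X[train_idx], y[train_idx]
X_test, y_test = X[test_idx], y[test_idx]
```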

a)

b)
Modify the code in the file compare.m to train neural networks with the standard back-propagation algorithm from the MATLAB Neural Networks Toolbox. Train networks with 1, 3, 5, 7, 10 and 15 hidden units, and plot the training and test errors as well as the run times of the algorithm as a function of the number of hidden units. Average the results over several runs for each network size. Apply a standard method of your choice to compensate for over-fitting (e.g. weight decay or early stopping).
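
The exercise itself relies on the MATLAB Neural Networks Toolbox, but the combination of back-propagation with early stopping can be illustrated generically. The following is a minimal self-contained Python sketch of a one-hidden-layer tanh network trained by batch back-propagation, where the validation error is monitored and the best weights are kept; all names and hyperparameter values are illustrative assumptions, not part of the exercise material.

```python
import numpy as np

def train_mlp(X, y, X_val, y_val, n_hidden=5, lr=0.05, max_epochs=2000, patience=50):
    """Train a 2-n_hidden-1 tanh MLP by batch back-propagation with early stopping."""
    rng = np.random.default_rng(1)
    W1 = rng.normal(0, 0.5, (2, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)
    best_mse, best_weights = np.inf, None
    bad = 0
    for epoch in range(max_epochs):
        # Forward pass
        h = np.tanh(X @ W1 + b1)
        out = (h @ W2 + b2).ravel()
        err = out - y
        # Back-propagation of the mean squared error
        g_out = 2 * err[:, None] / len(X)
        gW2 = h.T @ g_out; gb2 = g_out.sum(0)
        g_h = (g_out @ W2.T) * (1 - h ** 2)
        gW1 = X.T @ g_h; gb1 = g_h.sum(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
        # Early stopping: monitor the validation MSE and keep the best weights
        val = (np.tanh(X_val @ W1 + b1) @ W2 + b2).ravel()
        val_mse = np.mean((val - y_val) ** 2)
        if val_mse < best_mse:
            best_mse = val_mse
            best_weights = (W1.copy(), b1.copy(), W2.copy(), b2.copy())
            bad = 0
        else:
            bad += 1
            if bad > patience:
                break   # validation error stopped improving
    return best_mse, best_weights
```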

c)
Repeat the analysis of b) with simulated annealing. Adjust the cooling schedule to achieve appropriate convergence properties, and set a maximum number of iterations (e.g. 1000) to avoid excessively long run times.
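
A generic simulated-annealing loop over a weight vector might look as follows. This is a sketch only: the geometric cooling schedule, step size and temperature values are assumptions, and the exercise asks you to tune exactly these quantities yourself.

```python
import numpy as np

def simulated_annealing(energy, w0, t0=1.0, alpha=0.995, max_iter=1000, step=0.1, seed=2):
    """Minimize energy(w) by simulated annealing with a geometric cooling schedule."""
    rng = np.random.default_rng(seed)
    w, e = w0.copy(), energy(w0)
    best_w, best_e = w.copy(), e
    t = t0
    for _ in range(max_iter):
        cand = w + rng.normal(0, step, size=w.shape)   # random perturbation
        e_cand = energy(cand)
        # Accept downhill moves always, uphill moves with Boltzmann probability
        if e_cand < e or rng.random() < np.exp(-(e_cand - e) / t):
            w, e = cand, e_cand
            if e < best_e:
                best_w, best_e = w.copy(), e
        t *= alpha   # geometric cooling: t_k = t0 * alpha^k
    return best_w, best_e
```

Here `energy` would be the network's training error as a function of the weight vector; any other cooling schedule (e.g. logarithmic) can be dropped into the same loop.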

d)
Repeat the analysis of b) with genetic algorithms. Adjust the population size (e.g. 100), the number of generations (e.g. 1000) and the mutation operators (of your choice) to achieve appropriate convergence properties.
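
One possible structure for a real-valued genetic algorithm over the weight vector is sketched below. The particular operators chosen here (truncation selection, uniform crossover, Gaussian mutation, elitism) are assumptions; the exercise explicitly leaves the mutation operators up to you.

```python
import numpy as np

def genetic_algorithm(cost_fn, dim, pop_size=100, n_gen=200, mut_std=0.1, seed=3):
    """Minimize cost_fn over R^dim with a simple real-valued GA:
    truncation selection, uniform crossover, Gaussian mutation, elitism."""
    rng = np.random.default_rng(seed)
    pop = rng.normal(0, 1, (pop_size, dim))
    for _ in range(n_gen):
        cost = np.array([cost_fn(ind) for ind in pop])
        order = np.argsort(cost)
        parents = pop[order[: pop_size // 2]]          # truncation selection
        # Uniform crossover between randomly paired parents
        a = parents[rng.integers(len(parents), size=pop_size)]
        b = parents[rng.integers(len(parents), size=pop_size)]
        mask = rng.random((pop_size, dim)) < 0.5
        children = np.where(mask, a, b)
        children += rng.normal(0, mut_std, children.shape)  # Gaussian mutation
        children[0] = parents[0]                       # elitism: keep the best unchanged
        pop = children
    cost = np.array([cost_fn(ind) for ind in pop])
    best = int(np.argmin(cost))
    return pop[best], cost[best]
```

For the exercise, `cost_fn` would evaluate the network's training error for a candidate weight vector, and `dim` would be the total number of weights.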

e)
Compare and interpret the results for all three learning algorithms.

f)
Repeat the analysis of b) - d) with a modified mean squared error function in which Gaussian noise with mean 0 and standard deviation 0.2 is added to the target values. Interpret the results for all three algorithms and compare them to the results obtained in e).
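
One reading of the modified error function, with noise drawn fresh at every evaluation, is sketched below; whether the noise should instead be added to the targets once before training is an interpretation left to the exercise, so treat this as an assumption.

```python
import numpy as np

rng = np.random.default_rng(4)

def noisy_mse(pred, targets, noise_std=0.2):
    """MSE against targets corrupted by Gaussian noise (mean 0, std 0.2),
    with fresh noise drawn at every evaluation (one possible interpretation)."""
    noisy_targets = targets + rng.normal(0, noise_std, size=targets.shape)
    return np.mean((pred - noisy_targets) ** 2)
```

Note that even a perfect predictor then has an expected error of noise_std² = 0.04, which is the floor the three algorithms should approach.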

Present your results in a clear, structured and legible form. Document them in such a way that anyone can reproduce them effortlessly.

Haeusler Stefan 2009-01-19