Computational Intelligence, SS08
2 VO 442.070 + 1 RU 708.070

Homework 3: Using Backprop for a regression task



[Points: 10; Issued: 2003/03/14; Deadline: 2003/04/04; Tutor: Martin Ebner; Info hour: 2003/04/02, 14:00-15:00, Seminarraum IGI; Review (Einsichtnahme): 2003/05/14, 14:00-15:00, Seminarraum IGI; Download: pdf; ps.gz]





This homework assignment asks you to apply backprop to a regression problem: the Boston Housing data set. The file housing.zip contains the data as the file housing.mat. See the file housing-description.txt which is also contained in housing.zip for more information on the data set.

  1. Initialize the random number generator using the Matlab commands rand('state',<MatrNmr>); and randn('state',<MatrNmr>);.
  2. Split the data randomly into a training set $ L$ (2/3) and a test set $ T$ (1/3) (use the function randperm).
  3. Normalize the data such that each attribute has a zero mean and a variance of 1 (use the function prestd).
  4. Train a two-layer feedforward network with 2 hidden units and one output unit on the training set $ L$ (use appropriate activation functions in each layer) for net.trainParam.epochs=150 epochs, using the training function trainbfg [1] with its default parameters. Describe how the error on the training set and on the test set changes with the number of epochs.
  5. Investigate how the error on the test set changes if only the first 10%, 20%, 50%, or 75% of the set $ L$ is used for training (same network as in step 4). In particular, create a plot showing how the error on the test set after 150 epochs depends on the size (10%, ..., 100%) of the training set. Interpret the resulting plot.
  6. Repeat step 5), but with a network with 20 hidden units.
  7. Compare the results obtained in 5) and 6).
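The steps above can be sketched in code. The assignment is to be solved in Matlab (randperm, prestd, trainbfg), but the following self-contained Python/NumPy illustration shows the same pipeline: a random 2/3–1/3 split, z-score normalization with training-set statistics, and a two-layer network with 2 tanh hidden units and a linear output. It uses plain batch gradient descent rather than the BFGS method of trainbfg, and synthetic data in place of housing.mat; all function names and parameters below are illustrative, not part of the assignment.

```python
import numpy as np

def normalize(X):
    # z-score normalization (zero mean, unit variance), like Matlab's prestd
    mu, sigma = X.mean(axis=0), X.std(axis=0)
    return (X - mu) / sigma, mu, sigma

def train_mlp(X, y, n_hidden=2, epochs=150, lr=0.05, seed=0):
    # Two-layer network: tanh hidden layer, linear output unit, trained by
    # batch gradient descent on the mean-squared error (the assignment
    # uses trainbfg, a BFGS quasi-Newton method, instead).
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W1 = rng.normal(0, 0.5, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.5, n_hidden);         b2 = 0.0
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)              # hidden activations
        err = (H @ W2 + b2) - y               # output error
        # backprop of the MSE gradient through both layers
        gW2 = H.T @ err / len(y); gb2 = err.mean()
        dH = np.outer(err, W2) * (1 - H**2)   # tanh' = 1 - tanh^2
        gW1 = X.T @ dH / len(y); gb1 = dH.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return W1, b1, W2, b2

def predict(X, W1, b1, W2, b2):
    return np.tanh(X @ W1 + b1) @ W2 + b2

# --- demo on synthetic regression data (stand-in for housing.mat) ---
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.1 * rng.normal(size=300)

perm = rng.permutation(len(y))                # like randperm
split = 2 * len(y) // 3                       # 2/3 training, 1/3 test
train, test = perm[:split], perm[split:]

Xtr, mu, sd = normalize(X[train])
Xte = (X[test] - mu) / sd                     # reuse training statistics

params = train_mlp(Xtr, y[train], n_hidden=2, epochs=150)
mse = np.mean((predict(Xte, *params) - y[test]) ** 2)
```

Note that the test set is normalized with the mean and variance computed on the training set only; recomputing them on the test set would leak information into the evaluation.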

Remarks

  • Present your results in a clear, structured, and legible form. Document them so that anyone can reproduce them effortlessly.
  • Please hand in a printout of the Matlab program you used.




Footnotes

[1] trainbfg
trainbfg implements the BFGS quasi-Newton method, which usually converges faster than the default gradient-descent backprop algorithm.