Computational Intelligence, SS08
2 VO 442.070 + 1 RU 708.070

Homework 2: Gradient descent



[Points: 10; Issued: 2003/03/13; Deadline: 2003/03/28; Tutor: Gerhard Neumann; Infohour: 2003/03/26, 14:00-15:00, Seminarraum IGI; Review (Einsichtnahme): 2003/04/09, 14:00-15:00, Seminarraum IGI; Download: pdf; ps.gz]





a)
Consider a feedforward network of depth 2 with 5 inputs, 2 sigmoidal gates in the hidden layer, and 1 sigmoidal gate at the output layer. Derive the learning rule for each weight in the network when you apply gradient descent (learning rate $ \eta$) to the MSE for a single training example $ \left<\vec{a},b\right>$.

Compare the learning rule to the general backprop rule. In particular, state explicitly the value of the parameter $ \alpha$ for this network. [5 Points]
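As a numerical check of your derivation, the following is a minimal sketch of one gradient-descent step for the 5-2-1 sigmoid network, assuming the single-example error $ E = \frac{1}{2}(y-b)^2$ and omitting bias weights for brevity; the function and variable names are illustrative, not part of the assignment.

```python
import numpy as np

def sigmoid(x):
    """Logistic sigmoid, used for every gate in the network."""
    return 1.0 / (1.0 + np.exp(-x))

def gradient_step(W1, w2, a, b, eta):
    """One gradient-descent step on E = 1/2 (y - b)^2 for one example (a, b).

    W1: hidden-layer weights, shape (2, 5); w2: output weights, shape (2,).
    Returns the updated weights and the output y computed before the update.
    """
    # Forward pass
    h = sigmoid(W1 @ a)            # hidden activations, shape (2,)
    y = sigmoid(w2 @ h)            # scalar network output
    # Backward pass, using sigma'(net) = sigma(net) * (1 - sigma(net))
    delta_out = (y - b) * y * (1.0 - y)          # output-layer error term
    delta_hid = delta_out * w2 * h * (1.0 - h)   # hidden-layer error terms
    # Gradient-descent updates with learning rate eta
    w2_new = w2 - eta * delta_out * h
    W1_new = W1 - eta * np.outer(delta_hid, a)
    return W1_new, w2_new, y
```

Iterating this step on one training example should drive the output toward the target, which is an easy sanity check for the rule you derive on paper.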

b)
Consider a Radial Basis Function (RBF) network as defined in Section 1.6 of Supervised Learning for Neural Networks: a tutorial with JAVA exercises by W. Gerstner. Derive the learning rules for the weights in layers 1 and 2 when you apply gradient descent (learning rate $ \eta$) to the MSE for a single training example $ \left<\vec{a},b\right>$. [5 Points]
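For a numerical check of this derivation too, here is a sketch of one gradient-descent step for an RBF network, assuming Gaussian basis functions with a shared, fixed width and a linear output unit (one common reading of such a network; consult the referenced tutorial for the exact definition). All names are illustrative.

```python
import numpy as np

def rbf_gradient_step(W, C, a, b, eta, sigma2=1.0):
    """One gradient-descent step on E = 1/2 (y - b)^2 for an RBF net.

    Assumed model: y = sum_j W[j] * exp(-||a - C[j]||^2 / (2 * sigma2)),
    with fixed shared width sigma2 (not adapted here).
    C: layer-1 centres, shape (m, d); W: layer-2 weights, shape (m,).
    """
    # Layer 1: Gaussian basis-function outputs, shape (m,)
    phi = np.exp(-np.sum((a - C) ** 2, axis=1) / (2.0 * sigma2))
    y = W @ phi                    # linear layer-2 output
    err = y - b
    # Layer-2 rule: dE/dW_j = err * phi_j
    W_new = W - eta * err * phi
    # Layer-1 rule: dE/dC_j = err * W_j * phi_j * (a - C_j) / sigma2
    C_new = C - eta * err * (W * phi)[:, None] * (a - C) / sigma2
    return W_new, C_new, y
```

Note the structural difference from part a): the layer-2 rule is the plain delta rule (linear output, so no sigmoid derivative), while the layer-1 rule moves the centres rather than scaling connection weights.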