Consider a feedforward network of depth 2 with 5 inputs, 2
sigmoidal gates in the hidden layer, and 1 sigmoidal gate at the
output layer. Find the learning rule for each of the weights in the
network if you apply gradient descent (learning rate η) to the MSE
for a single training example.

Compare the learning rule to the general backprop rule. In
particular, you should explicitly state the value of the parameter δ
for the considered network. [5 Points]
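A derived learning rule can be checked numerically. The sketch below assumes the standard setup (not fixed by the exercise text): squared error E = ½(y − t)², sigmoid σ(z) = 1/(1 + e^(−z)), and bias weights at both layers. It applies one gradient-descent step using the backprop deltas Δw = −η · δ · (input to the weight), where δ_out = (y − t)·y·(1 − y) at the output and δ_j = δ_out · w_j · h_j·(1 − h_j) at the hidden layer.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Single training example: 5 inputs, scalar target (values are arbitrary).
x = rng.normal(size=5)
t = 0.7

# Layer 1: 2 hidden sigmoid units; layer 2: 1 sigmoid output unit.
W1 = rng.normal(scale=0.5, size=(2, 5))
b1 = np.zeros(2)
w2 = rng.normal(scale=0.5, size=2)
b2 = 0.0
eta = 0.2  # learning rate (assumed value)

def forward(W1, b1, w2, b2):
    h = sigmoid(W1 @ x + b1)          # hidden activations
    y = sigmoid(w2 @ h + b2)          # network output
    return h, y

h, y = forward(W1, b1, w2, b2)
loss_before = 0.5 * (y - t) ** 2

# Backprop deltas: delta = error signal * sigmoid derivative.
delta_out = (y - t) * y * (1 - y)            # output-layer delta
delta_hid = delta_out * w2 * h * (1 - h)     # hidden-layer deltas

# Gradient-descent updates: w <- w - eta * delta * input.
w2 = w2 - eta * delta_out * h
b2 = b2 - eta * delta_out
W1 = W1 - eta * np.outer(delta_hid, x)
b1 = b1 - eta * delta_hid

h, y = forward(W1, b1, w2, b2)
loss_after = 0.5 * (y - t) ** 2
```

If the analytically derived rule is correct, one such step must reduce the single-example MSE for a sufficiently small η, which gives a quick sanity check on the signs and factors in the derivation.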

b)

Consider a Radial Basis Function (RBF) network as defined
in section 1.6 of Supervised Learning for Neural Networks: a tutorial
with JAVA exercises by W. Gerstner. Find the learning rule for
the weights in layers 1 and 2 if you apply gradient descent
(learning rate η) to the MSE for a single training example. [5 Points]
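The RBF rules can be checked the same way. The sketch below assumes a common RBF form (the exact definition in Gerstner's tutorial may differ): Gaussian basis functions φ_j(x) = exp(−‖x − c_j‖²/(2σ²)) with layer-1 weights being the centers c_j, a linear layer-2 output y = Σ_j w_j φ_j(x), and E = ½(y − t)². Gradient descent then gives Δw_j = −η·(y − t)·φ_j and Δc_j = −η·(y − t)·w_j·φ_j·(x − c_j)/σ².

```python
import numpy as np

rng = np.random.default_rng(1)

# Single training example in R^2, scalar target (values are arbitrary).
x = rng.normal(size=2)
t = 0.5

# Layer 1: 3 Gaussian basis centers with a fixed, shared width sigma.
# Layer 2: linear output weights.
C = rng.normal(size=(3, 2))
w = rng.normal(size=3)
sigma = 1.0
eta = 0.1  # learning rate (assumed value)

def forward(C, w):
    phi = np.exp(-np.sum((x - C) ** 2, axis=1) / (2 * sigma ** 2))
    y = w @ phi                       # linear output layer
    return phi, y

phi, y = forward(C, w)
loss_before = 0.5 * (y - t) ** 2

err = y - t
# Layer-2 rule: w_j <- w_j - eta * err * phi_j
w = w - eta * err * phi
# Layer-1 rule: c_j <- c_j - eta * err * w_j * phi_j * (x - c_j) / sigma^2
C = C - eta * (err * w * phi)[:, None] * (x - C) / sigma ** 2

phi, y = forward(C, w)
loss_after = 0.5 * (y - t) ** 2
```

Note the asymmetry between the layers: the layer-2 rule is the ordinary delta rule for a linear unit, while the layer-1 rule moves each center toward or away from the input, scaled by that unit's activation φ_j and its output weight w_j.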