We give batch and on-line learning algorithms for these classes of neural networks and prove bounds on their performance. The batch algorithms work for real-valued inputs, whereas the on-line algorithms require the inputs to be discretized. The hypotheses produced by our algorithms are themselves essentially depth-two neural networks; however, their number of hidden nodes may be much larger than that of the target network.
Our algorithms can handle a large number of hidden nodes because they rely on multiplicative weight updates at the output node, and the performance of such updates degrades only logarithmically with the number of hidden nodes used.
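The multiplicative update at the output node can be illustrated with a minimal sketch in the style of the Hedge/weighted-majority scheme, which is the standard form of such updates; the function names, the learning rate `eta`, and the loss encoding below are illustrative assumptions, not the paper's exact algorithm. Each hidden node is treated as an "expert," and its output weight is multiplied by `exp(-eta * loss)` after every round, so weight mass concentrates on the hidden nodes with small cumulative loss:

```python
import math

def hedge_update(weights, losses, eta):
    """One multiplicative-weight update: scale each hidden node's weight
    by exp(-eta * loss) and renormalize to sum to 1.
    (Illustrative sketch, not the paper's exact update rule.)"""
    scaled = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    total = sum(scaled)
    return [w / total for w in scaled]

def run_hedge(per_round_losses, eta=0.5):
    """Run the multiplicative update over T rounds of per-node losses.
    Returns the final weight vector and the cumulative loss of the
    weighted mixture."""
    n = len(per_round_losses[0])
    weights = [1.0 / n] * n  # uniform start over n hidden nodes
    mixture_loss = 0.0
    for losses in per_round_losses:
        mixture_loss += sum(w * l for w, l in zip(weights, losses))
        weights = hedge_update(weights, losses, eta)
    return weights, mixture_loss

# Example: node 0 always incurs loss 0, the other three always loss 1.
# After a few rounds the weights concentrate on node 0, and the classic
# regret analysis shows the extra loss grows only like log(n), which is
# the logarithmic scaling in the number of hidden nodes mentioned above.
rounds = [[0.0, 1.0, 1.0, 1.0]] * 10
final_weights, total_loss = run_hedge(rounds)
```

The logarithmic dependence arises because the initial uniform weight `1/n` on the best hidden node costs only an additive `log(n)/eta` term in the regret bound, which is what lets the hypothesis afford many more hidden nodes than the target network.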