Computational Intelligence, SS08, 2 VO 442.070 + 1 RU 708.070

The RLS (recursive least squares) algorithm is another algorithm for determining the coefficients of an adaptive filter. In contrast to the LMS algorithm, the RLS algorithm uses information from all past input samples (and not only from the current tap-input vector) to estimate the (inverse of the) autocorrelation matrix of the input vector. To decrease the influence of input samples from the far past, a weighting factor is applied to each sample. This weighting factor is introduced in the cost function

$J(n) = \sum_{i=1}^{n} \lambda^{n-i}\, e^2(i)$   (10)

where the error signal $e(i)$ is computed for all times $1 \le i \le n$ using the current filter coefficients $\mathbf{w}(n)$: $e(i) = d(i) - \mathbf{w}^T(n)\,\mathbf{x}(i)$.

When $\lambda = 1$, the squared error for all sample times up to the current time $n$ is considered equally in the cost function. If $0 < \lambda < 1$, the influence of past error values decays exponentially: this is the method of exponentially weighted least squares. $\lambda$ is called the forgetting factor.
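To illustrate the role of the forgetting factor, the following sketch (the numbers are arbitrary, not from the course notes) evaluates the weighted cost of eq. 10 directly: the most recent error enters with weight $\lambda^0 = 1$, older errors with exponentially smaller weights.

```python
import numpy as np

lam, n = 0.9, 5
e = np.array([1.0, 0.5, -0.3, 0.2, 0.1])    # example errors e(1)..e(n), hypothetical values
weights = lam ** (n - np.arange(1, n + 1))  # lambda^(n-i): most recent sample weighted by 1
J = np.sum(weights * e**2)                  # cost function J(n), eq. 10
```

Here `weights[-1]` (the current sample) is exactly 1, while errors five steps in the past are discounted by $0.9^4 \approx 0.66$.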

Analogously to the derivation of the LMS algorithm, we find the gradient of the cost function with respect to the current weights $\mathbf{w}(n)$:

$\nabla_{\mathbf{w}} J(n) = 2\,\mathbf{R}(n)\,\mathbf{w}(n) - 2\,\mathbf{p}(n)$   (11)

Now, however, we do trust in the ability to estimate the autocorrelation matrix $\mathbf{R}(n)$ and the cross-correlation vector $\mathbf{p}(n)$ with sufficient accuracy using all past samples, so we do not use a gradient descent method, but immediately search for the minimum of the cost function by setting its gradient to zero, $\nabla_{\mathbf{w}} J(n) = 0$. The resulting equation for the optimum filter coefficients at time $n$ is

$\mathbf{w}(n) = \mathbf{R}^{-1}(n)\,\mathbf{p}(n)$   (12)

with $\mathbf{R}(n) = \sum_{i=1}^{n} \lambda^{n-i}\,\mathbf{x}(i)\,\mathbf{x}^T(i)$ and $\mathbf{p}(n) = \sum_{i=1}^{n} \lambda^{n-i}\,\mathbf{x}(i)\,d(i)$.

Both $\mathbf{R}(n)$ and $\mathbf{p}(n)$ can be computed recursively:

$\mathbf{R}(n) = \lambda\,\mathbf{R}(n-1) + \mathbf{x}(n)\,\mathbf{x}^T(n)$   (13)

and
$\mathbf{p}(n) = \lambda\,\mathbf{p}(n-1) + \mathbf{x}(n)\,d(n)$   (14)
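As a numerical sanity check (a sketch, not part of the course notes; the matrix size, forgetting factor, and random test data are arbitrary choices), the recursions of eqs. 13 and 14 can be verified against the direct weighted sums that define $\mathbf{R}(n)$ and $\mathbf{p}(n)$:

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, lam = 3, 50, 0.98
X = rng.standard_normal((N, M))   # tap-input vectors x(1)..x(N) stored as rows
d = rng.standard_normal(N)        # desired signal d(1)..d(N)

# Direct evaluation of the exponentially weighted sums defining R(N) and p(N)
R_sum = sum(lam**(N - i) * np.outer(X[i-1], X[i-1]) for i in range(1, N + 1))
p_sum = sum(lam**(N - i) * X[i-1] * d[i-1] for i in range(1, N + 1))

# Recursive computation via eqs. 13 and 14
R = np.zeros((M, M))
p = np.zeros(M)
for i in range(N):
    R = lam * R + np.outer(X[i], X[i])  # eq. 13
    p = lam * p + X[i] * d[i]           # eq. 14
```

Both routes yield the same $\mathbf{R}(N)$ and $\mathbf{p}(N)$; the recursion merely avoids re-summing over all past samples at every step.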

To find the coefficient vector $\mathbf{w}(n)$ we, however, need the inverse matrix $\mathbf{R}^{-1}(n)$. Using a matrix inversion lemma [1], a recursive update equation for $\mathbf{R}^{-1}(n)$ is found as:

$\mathbf{R}^{-1}(n) = \lambda^{-1}\left[\mathbf{R}^{-1}(n-1) - \mathbf{k}(n)\,\mathbf{x}^T(n)\,\mathbf{R}^{-1}(n-1)\right]$ with the gain vector $\mathbf{k}(n) = \dfrac{\mathbf{R}^{-1}(n-1)\,\mathbf{x}(n)}{\lambda + \mathbf{x}^T(n)\,\mathbf{R}^{-1}(n-1)\,\mathbf{x}(n)}$   (15)
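The lemma-based recursion can be checked numerically: starting from an arbitrary positive-definite matrix, the recursive update must agree with a direct inversion of the rank-one-updated matrix from eq. 13. This is a sketch with arbitrary test values, not material from the course notes.

```python
import numpy as np

rng = np.random.default_rng(1)
M, lam = 4, 0.95

# Arbitrary symmetric positive-definite R(n-1) and its inverse P(n-1)
A = rng.standard_normal((M, M))
R_prev = A @ A.T + M * np.eye(M)
P_prev = np.linalg.inv(R_prev)            # P(n-1) = R^{-1}(n-1)
x = rng.standard_normal(M)                # tap-input vector x(n)

# Rank-one update of R (eq. 13), then direct inversion
R_new = lam * R_prev + np.outer(x, x)
P_direct = np.linalg.inv(R_new)

# Recursive update via the matrix inversion lemma (eq. 15)
k = (P_prev @ x) / (lam + x @ P_prev @ x)          # gain vector k(n)
P_rec = (P_prev - np.outer(k, x @ P_prev)) / lam   # R^{-1}(n)
```

The recursive route replaces an $O(M^3)$ inversion by $O(M^2)$ matrix-vector products, which is exactly the computational advantage the RLS algorithm exploits.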

Finally, the weight update equation is

$\mathbf{w}(n) = \mathbf{w}(n-1) + \mathbf{k}(n)\,e(n)$, with the a-priori error $e(n) = d(n) - \mathbf{w}^T(n-1)\,\mathbf{x}(n)$   (16)

The equations to solve in the RLS algorithm at each time step are eqs. 15 and 16. The RLS algorithm is computationally more complex than the LMS algorithm. Note, however, that due to the recursive updating, an explicit inversion of the matrix $\mathbf{R}(n)$ is not necessary (which would impose a considerably higher computational load). The RLS algorithm typically shows faster convergence than the LMS algorithm.
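Putting eqs. 15 and 16 together, one complete RLS time step per sample could be sketched as follows in NumPy. This is an illustrative implementation, not code from the course notes; the initialization $\mathbf{R}^{-1}(0) = \delta\,\mathbf{I}$ (with $\delta$ a free regularization parameter), the filter length, and the system-identification test setup are all assumptions.

```python
import numpy as np

def rls_filter(x, d, M, lam=0.99, delta=1e2):
    """Exponentially weighted RLS adaptive filter (eqs. 15 and 16).

    x     : input signal (1-D array)
    d     : desired signal
    M     : number of filter taps
    lam   : forgetting factor lambda, 0 < lam <= 1
    delta : initialization R^{-1}(0) = delta * I (assumed convention)
    Returns the final coefficient vector w and the a-priori error signal e.
    """
    N = len(x)
    w = np.zeros(M)           # initial coefficients w(0) = 0
    P = delta * np.eye(M)     # P(n) stands for R^{-1}(n)
    e = np.zeros(N)
    for n in range(M - 1, N):
        u = x[n - M + 1:n + 1][::-1]        # tap-input vector x(n)
        e[n] = d[n] - w @ u                 # a-priori error e(n), eq. 16
        k = (P @ u) / (lam + u @ P @ u)     # gain vector k(n), eq. 15
        P = (P - np.outer(k, u @ P)) / lam  # recursive inverse update, eq. 15
        w = w + k * e[n]                    # coefficient update, eq. 16
    return w, e

# Hypothetical test: identify the taps of an unknown FIR system (noiseless)
rng = np.random.default_rng(0)
h = np.array([0.5, -0.4, 0.2])             # "unknown" system
x = rng.standard_normal(2000)
d = np.convolve(x, h)[:len(x)]             # desired signal = system output
w, e = rls_filter(x, d, M=3)
```

In this noiseless setup the estimated coefficients converge to the true system taps and the a-priori error decays towards zero, illustrating the fast convergence noted above.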