Computational Intelligence, SS08
2 VO 442.070 + 1 RU 708.070

Adaptive Filtering

Increasing the step size $\mu$ generally results in faster convergence of the LMS algorithm.
True / False
The goal of system identification is to build a model of an unknown system.
True / False
The RLS algorithm usually converges faster than the LMS algorithm.
True / False
Why is it (usually) not desirable to achieve the global minimum of the mean squared error (of the whole time-series) for an adaptive filter?
Because the filter should adapt to temporal variation of an unknown system.
Because the wanted signal (e.g., the signal of a local speaker in an echo cancellation application) would be suppressed.
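The questions above can be checked experimentally with a small system-identification sketch: an LMS filter adapts its coefficients toward those of an unknown FIR system, and a larger step size $\mu$ speeds up convergence. The unknown system `h`, the filter length, and the step sizes below are illustrative assumptions, not values from the course notes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unknown FIR system to be identified (assumed example values).
h = np.array([0.5, -0.3, 0.1])

N, M = 2000, 3                         # number of samples, filter length
x = rng.standard_normal(N)             # white input signal
d = np.convolve(x, h)[:N]              # desired signal = unknown system output

def lms(x, d, M, mu):
    """LMS update: w[n+1] = w[n] + mu * e[n] * x[n],
    where x[n] holds the M most recent input samples."""
    w = np.zeros(M)
    e = np.zeros(len(x))
    for n in range(M, len(x)):
        xn = x[n - M + 1:n + 1][::-1]  # current input vector (newest sample first)
        y = w @ xn                     # filter output
        e[n] = d[n] - y                # local error
        w = w + mu * e[n] * xn         # coefficient update
    return w, e

w_small, e_small = lms(x, d, M, mu=0.01)
w_large, e_large = lms(x, d, M, mu=0.1)
# Both runs approach h; the larger step size converges faster
# (as long as mu stays within the stability bound).
```

Because the data here are noiseless, both runs end up close to `h`; plotting `e_small**2` against `e_large**2` would show the faster initial decay for the larger $\mu$.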
An adaptive filter trained using the RLS algorithm with a forgetting factor $\rho = 1$
has constant coefficient values over time $\mathbf{w}[n] = \mathbf{w}$ (it does not adapt).
reaches the global minimum of the mean squared error for a time-series at the end of the time-series.
indirectly considers all past signal samples for the computation of the local error and the adaptation of the coefficients.
displays an identical adaptation behavior as an adaptive filter trained using the LMS algorithm with $\mu = 0$.
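The role of the forgetting factor can likewise be sketched in code: with $\rho = 1$ the RLS algorithm weights all past samples equally (infinite memory), which is why it reaches the least-squares solution over the whole observed time series. The system `h`, signal length, and initialization `delta` below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

h = np.array([0.5, -0.3, 0.1])         # hypothetical unknown system (assumed)
N, M = 500, 3
x = rng.standard_normal(N)
d = np.convolve(x, h)[:N]              # desired signal = unknown system output

def rls(x, d, M, rho, delta=100.0):
    """RLS with forgetting factor rho (0 < rho <= 1).
    rho = 1: all past samples weighted equally; rho < 1: older samples fade."""
    w = np.zeros(M)
    P = delta * np.eye(M)              # estimate of the inverse correlation matrix
    for n in range(M, len(x)):
        xn = x[n - M + 1:n + 1][::-1]          # input vector, newest sample first
        k = P @ xn / (rho + xn @ P @ xn)       # gain vector
        e = d[n] - w @ xn                      # a-priori error
        w = w + k * e                          # coefficient update
        P = (P - np.outer(k, xn @ P)) / rho    # update inverse correlation matrix
    return w

w1 = rls(x, d, M, rho=1.0)   # infinite memory: all past samples contribute
```

With noiseless data and $\rho = 1$, the coefficients converge to the unknown system after only a few samples, much faster than the LMS iterations above, which is the behavior the true/false question about RLS vs. LMS convergence refers to.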