Computational Intelligence, SS08
2 VO 442.070 + 1 RU 708.070

Homework 30: Adaptive Filters



[Points: 8; Issued: 2005/05/13; Deadline: 2005/06/15; Tutor: Lackner Günther; Info hour: 2005/06/14, 12:00-13:00, HS i11; Review of graded homework (Einsichtnahme): 2005/06/28, 12:00-13:00, HS i11; Download: pdf; ps.gz]





Your homework should contain a printout of your MATLAB function lms2, some informative plots (in particular the plots requested in the text below) at an appropriate scale and with labeled axes, along with your observations written in complete sentences.

In addition, each team should send all MATLAB scripts used to generate the results of the different tasks to the tutor (lackyg@sbox.tugraz.at) with the subject CI. Please provide the name and Matrikelnummer (matriculation number) of each team member both in the scripts and in the body of the email.



  • Write a MATLAB function [y,e,c]=lms2(x,d,N,mu) which implements an adaptive transversal filter using the LMS adaptation algorithm (see Tutorial); an implementation sketch is appended at the end of this sheet. Start with the following header:
    function [y,e,c] = lms2(x,d,N,mu)
    % [y,e,c] = lms2(x,d,N,mu)
    %   Adaptive transversal filter using LMS (for algorithm analysis)
    % INPUT
    %   x ... vector containing the samples of the input signal x[n]
    %         size(x) = [xlen,1] ... column vector
    %   d ... vector containing the samples of the desired output signal d[n]
    %         size(d) = [xlen,1] ... column vector
    %   N ... number of coefficients
    %   mu .. step-size parameter
    % OUTPUT
    %   y ... vector containing the samples of the output signal y[n]
    %         size(y) = [xlen,1] ... column vector
    %   e ... vector containing the samples of the error signal e[n]
    %         size(e) = [xlen,1] ... column vector
    %   c ... matrix containing the coefficient vectors c[n]
    %         size(c) = [N,xlen+1]
    
  • For a system identification application, write a MATLAB script to visualize the adaptation process in the time domain (a script sketch is appended at the end of this sheet). Use your function lms2() and let $x[n]$ be normally distributed random numbers with zero mean and unit variance (use the MATLAB function randn()). For the unknown FIR filter, add $0.5$ to each digit of your Matrikelnummer and use the resulting sequence as the impulse response $\mathbf{h}$. Choose an appropriate filter length $M$. To calculate $d[n]$, use the MATLAB function d = filter(h,1,x). Take $N=M$ and choose a suitable value for the step-size parameter $\mu$.

    Compare the output of the adaptive filter with the desired signal in a single plot, compute the error signal, and plot the squared error signal (the learning curve) on a logarithmic scale (e.g., using semilogy, or by converting the squared error to dB). Make sure the signals are long enough for the learning curve to reach its minimum.

  • Modify your script and examine the cases $N>M$ and $N<M$. Plot the coefficients $\mathbf{h}$ of your reference filter, along with the coefficients found by the LMS algorithm after several hundred sample times, e.g.:
    >> x = randn(1000,1);
    >> % set coefficients h of the reference filter to the digits of your Matrikelnummer
    >> % call reference filter and LMS algorithm here
    >> plot(h,'o');
    >> hold on
    >> plot(c(:,length(x)),'*g');
    

    Do the coefficients converge to fixed values for the case $N<M$?

  • Add a random signal to the desired output $d[n]$ (this would be the signal from a local speaker in an echo cancellation application):
    >> dn = d + 1e-6*randn(length(d),1);
    
    Take $N=M$ again, and plot the learning curve for the LMS algorithm using the noisy desired output signal. Compare it with the learning curve found before (without noise added). Do the coefficients converge?
  • Use the provided function rls1(), which implements the recursive least squares (RLS) adaptation algorithm, instead of lms2(); choose $\rho$ appropriately, and compare the two algorithms with respect to the time until the coefficients converge (plot the two learning curves).
  • For the two-dimensional case $N=M=2$, plot the adaptation path of the coefficients in the $\mathbf{c}$-plane: $\mathbf{c}[n] = \left( c_0[n], c_1[n] \right)^{\mathsf{T}}$ (a plotting sketch is appended at the end of this sheet).

    Take the last two digits of your Matrikelnummer and add $0.5$ to each of them. Use the result as the impulse response $\mathbf{h} = \left(h_0,h_1\right)^{\mathsf{T}}$, with $h_0,h_1 \ne 0$.

    Use both algorithms (lms2() and rls1()) and the following input signals:

    1. $x[n] = \mathrm{sign}\left(\mathrm{rand}[n]-0.5\right)$ and $\mu=0.5$, $\rho=0.95$
    2. $x[n] = \mathrm{randn}[n]$ and $\mu=0.5$, $\rho=0.95$
    3. $x[n] = \cos[ \pi n ]$ and $\mu=0.5$, $\rho=0.95$
    4. $x[n] = \cos[ \pi n ] + 2$ and $\mu=0.1$, $\rho=0.95$
    Do not forget to re-compute your reference signal $d[n]$ for each input signal.

    Compare the results for the four input signals. Describe your observations.
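
The sketches below illustrate one possible way to approach the tasks above; they are meant as a starting point only, not as the required solution. A minimal implementation of lms2() could look as follows, assuming zero initial coefficients and zero-padding of the input so that the first input vectors have length $N$; the standard LMS update $\mathbf{c}[n+1] = \mathbf{c}[n] + \mu\, e[n]\, \mathbf{x}[n]$ is applied once per sample:

    function [y,e,c] = lms2(x,d,N,mu)
    % [y,e,c] = lms2(x,d,N,mu)
    %   Adaptive transversal filter using LMS (sketch; x and d are column
    %   vectors of equal length, see the header in the task description)
    xlen = length(x);
    y = zeros(xlen,1);
    e = zeros(xlen,1);
    c = zeros(N,xlen+1);                % c(:,1) = initial (all-zero) coefficients
    xpad = [zeros(N-1,1); x];           % zero-pad so every input vector has length N
    for n = 1:xlen
        xn = xpad(n+N-1:-1:n);          % input vector [x(n); x(n-1); ...; x(n-N+1)]
        y(n) = c(:,n).' * xn;           % filter output y[n]
        e(n) = d(n) - y(n);             % error signal e[n]
        c(:,n+1) = c(:,n) + mu*e(n)*xn; % LMS coefficient update
    end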
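
For the system identification task, a minimal script might look as follows; the digit sequence used for $\mathbf{h}$ and the step size $\mu=0.01$ are placeholder values only (use your own Matrikelnummer and experiment with $\mu$):

    % system identification (sketch; replace the digits below by your Matrikelnummer)
    h  = [3 1 4 1 5 9 2].' + 0.5;   % impulse response of the unknown FIR filter
    M  = length(h);                 % filter length
    N  = M;                         % adaptive filter of the same length
    mu = 0.01;                      % step-size parameter (example value)

    x = randn(10000,1);             % zero-mean, unit-variance input signal
    d = filter(h,1,x);              % desired signal = output of the unknown filter

    [y,e,c] = lms2(x,d,N,mu);

    figure; plot([d y]);            % adaptive filter output vs. desired signal
    legend('d[n]','y[n]'); xlabel('n');

    figure; semilogy(e.^2);         % learning curve on a logarithmic scale
    xlabel('n'); ylabel('e^2[n]');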
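
For the adaptation path in the $\mathbf{c}$-plane, the coefficient matrix returned by lms2() can be plotted directly. The values $h_0=5.5$, $h_1=9.5$ below are placeholders, and only the first input signal is shown; repeat the procedure for the other inputs and for rls1(), re-computing $d[n]$ each time:

    % adaptation path in the c-plane for N = M = 2 (sketch; h0, h1 are placeholders)
    h = [5.5; 9.5];                 % last two Matrikelnummer digits + 0.5 (example)
    x = sign(rand(1000,1) - 0.5);   % input signal 1
    d = filter(h,1,x);              % reference signal for this input

    [y,e,c] = lms2(x,d,2,0.5);      % mu = 0.5 as given for input signals 1-3

    figure;
    plot(c(1,:), c(2,:), '.-');     % trajectory c[n] = (c0[n], c1[n])^T
    hold on;
    plot(h(1), h(2), 'rx');         % coefficients of the reference filter
    xlabel('c_0'); ylabel('c_1');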