Neural Networks B, SS06

Univ.-Ass. Dr. Robert Legenstein, Research Assistant Prashant Joshi, M.S.

Institute for Theoretical Computer Science
Technische Universität Graz
A-8010 Graz, Austria
{legi, joshi}@igi.tugraz.at

SURNAME    First name    Matriculation number    Team
       
       
       

Exercise 1: Pattern generator with perceptrons


In this exercise you are going to generate patterns with a network of 4 perceptrons that receives its own output as feedback. More precisely, the task is to generate two different periodic sequences as the network output, depending on which of two initial conditions the network is started from.


Theory

A perceptron is a kind of threshold neuron which receives $N$ inputs $\langle x_1, x_2, \ldots, x_N \rangle$ and produces the output $y$. The relation between input and output is given by equation 1:

$\displaystyle y = \mathrm{hardlims}\Big(\sum_{i=1}^{N} w_i \cdot x_i - \theta\Big)$ (1)

where the neuron receives $N$ inputs, the $i^{th}$ input is connected to the neuron by a synapse of weight $w_i$, and $\theta$ is the bias term. Intuitively, if the weighted sum of the inputs to a neuron is greater than or equal to the bias $\theta$, the neuron outputs the value $+1$; otherwise it outputs the value $-1$.

The hardlims function $y = \mathrm{hardlims}(x)$ is defined by equation 2:

$\displaystyle y = \left\{ \begin{array}{cl} -1 & \mbox{for } x < 0 \\ +1 & \mbox{for } x \geq 0 \end{array} \right.$ (2)
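As an illustration, here is a minimal Python/NumPy sketch of equations 1 and 2. The names hardlims and perceptron_output are our own (chosen after the transfer-function name used above) and are not part of the exercise:

    import numpy as np

    def hardlims(x):
        # Symmetric hard limiter (equation 2): -1 for x < 0, +1 for x >= 0,
        # applied elementwise.
        return np.where(x >= 0, 1, -1)

    def perceptron_output(x, w, theta):
        # Perceptron output (equation 1): threshold the weighted input sum,
        # shifted by the bias theta.
        return hardlims(np.dot(w, x) - theta)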


Detailed task description

Figure 1: The network that is used for pattern generation.
Suppose you have a network made of 4 perceptrons. Each of these perceptrons receives a 4-bit binary input (each bit is either $-1$ or $+1$). More precisely, the value of the $i^{th}$ input bit at time step $t$ is the output of the $i^{th}$ neuron at time step $t - 1$ (see Figure 1). Also given are two periodic sequences $A$ and $B$.

Design a network in which the output of the $i^{th}$ perceptron is given by equation 3:

$\displaystyle x_i(t) = \mathrm{hardlims}\Big(\sum_{j=1}^{N} w_{ij}\, x_j(t-1) - \theta_i\Big)$ (3)

or, in matrix notation, by equation 4:

$\displaystyle {\bf x}(t) = \mathrm{hardlims}({\bf W} \cdot {\bf x}(t-1) - {\bf\Theta})$ (4)


where ${\bf W} = [w_{ij}]_{i,j = 1, \ldots, 4}$, ${\bf x} = \langle x_1, \ldots, x_4 \rangle^T$, and ${\bf\Theta} = \langle \theta_1, \ldots, \theta_4 \rangle^T$.
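In code, one update of equation 4 is a single matrix-vector product followed by the elementwise threshold. A sketch reusing hardlims from the snippet above (network_step is again our own name):

    def network_step(x_prev, W, Theta):
        # One time step of the recurrent network (equation 4):
        # x(t) = hardlims(W x(t-1) - Theta), computed for all
        # 4 perceptrons at once.
        return hardlims(W @ x_prev - Theta)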



Find the weight matrix ${\bf W}$ and the bias vector ${\bf\Theta}$ such that, when the network is started at time $t = 0$ with input ${\bf x}(0) = A(0)$, it produces the sequence $A$, and when it is started at time $t = 0$ with input ${\bf x}(0) = B(0)$, it produces the sequence $B$.

The two sequences and the desired network behavior are given below:

 $t$   x(t) for sequence A   x(t) for sequence B
  0       -1 -1 -1 +1           -1 +1 +1 -1
  1       +1 -1 -1 -1           +1 -1 -1 +1
  2       -1 +1 -1 -1           -1 +1 +1 -1
  3       -1 -1 +1 -1           +1 -1 -1 +1
  4       -1 -1 -1 +1           -1 +1 +1 -1
  5       +1 -1 -1 -1           +1 -1 -1 +1
  6       -1 +1 -1 -1           -1 +1 +1 -1
  7       -1 -1 +1 -1           +1 -1 -1 +1
  :            :                     :
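Once you have chosen ${\bf W}$ and ${\bf\Theta}$, you can check your solution by iterating the network from both initial states and comparing the output against the table. Below is a small test harness under the same assumptions as the sketches above; W and Theta are placeholders for your own values, not the answer:

    # Target sequences from the table; row t holds x(t) for t = 0, ..., 3
    # (later rows of the table repeat these periodically).
    seq_A = np.array([[-1, -1, -1, +1],
                      [+1, -1, -1, -1],
                      [-1, +1, -1, -1],
                      [-1, -1, +1, -1]])
    seq_B = np.array([[-1, +1, +1, -1],
                      [+1, -1, -1, +1],
                      [-1, +1, +1, -1],
                      [+1, -1, -1, +1]])

    def reproduces(W, Theta, seq, steps=8):
        # Start from x(0) = seq[0] and verify that the network output
        # matches the periodic sequence for the given number of steps.
        x = seq[0].copy()
        for t in range(1, steps):
            x = network_step(x, W, Theta)
            if not np.array_equal(x, seq[t % len(seq)]):
                return False
        return True

    # W = ...      # your 4x4 weight matrix
    # Theta = ...  # your bias vector of length 4
    # print(reproduces(W, Theta, seq_A) and reproduces(W, Theta, seq_B))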


Class Website: http://www.igi.tugraz.at/lehre/NNB/SS06/