Seminar Computational Intelligence E (708.115)

WS 2005/06

Institut für Grundlagen der Informationsverarbeitung (708)
 

Lecturer: O.Univ.-Prof. Dr. Wolfgang Maass

Office hours: by appointment (via e-mail)

E-mail: maass@igi.tugraz.at
Homepage: www.igi.tugraz.at/maass/



Location: IGI-seminar room, Inffeldgasse 16b/I, 8010 Graz
Date: starting from Oct. 10, 2005, every Tuesday, 13:30 - 15:00


Content of the seminar:

Autonomous Learning

In this seminar we will discuss research articles that represent the current state of the art in the design of autonomous learners in machine learning, as well as the current state of knowledge regarding the biological mechanisms that enable autonomous learning in biological organisms.

In this seminar we will understand autonomous learning in a broad sense, as any mechanism or algorithm that supports learning without an explicit
teacher (supervisor). There exist some articles on autonomous learning in robots, which we will discuss (although they are not very instructive, as far as I can tell).
But unsupervised learning is also an essential component of any powerful autonomous learner, and we will review in this seminar the state of the art regarding
unsupervised learning of independent components, etc.

Apart from unsupervised learning, an autonomous learner can make use of freely available supervision for prediction learning (where the environment automatically serves as supervisor for learning predictions), and of rewards and punishments that the learner receives for its actions. The latter is usually studied in the context of Reinforcement Learning, which is covered in the course Machine Learning B http://www.igi.tugraz.at/maass/lehre.html (it will be taught again in the academic year 2006/07).
In this seminar we will focus on aspects of reinforcement learning that are usually not covered in books and courses on that topic, but which give a better idea of how reinforcement learning works in biological organisms.

In this context we will also discuss some of the research results of the Austrian-born Nobel Prize winner Eric Kandel http://almaz.com/nobel/medicine/2000c.html
He and his coworkers have analyzed the learning algorithms of one particular biological autonomous learner (Aplysia), going down to the molecular biology of its learning mechanisms and of the signals that are involved in controlling its learning. Of particular interest are their results on "heterosynaptic plasticity", which suggest that the commonly considered Hebbian learning rules are incomplete. We will also present the controversy about the putative role of dopamine in reward-based learning in more complex biological organisms, and look at abstract models for the role of neuromodulators in the control of learning.

The research results that are presented in this seminar provide a good introduction to our work for the new EU project FACETS, which is currently beginning:
http://www.kip.uni-heidelberg.de/facets/public/  There we will focus on understanding learning algorithms and learning mechanisms in increasingly realistic (and larger) models of cortical microcircuits and cortical areas.

Required background for active participants of this seminar:
Courses on machine learning and neural networks.


Talks:

Talks about related current research by team members (most of these talks should come rather early in the seminar, if possible):

a)  Stefan Klampfl on applications of the BCM rule for extracting independent components from a spiking network; a short sketch of the basic BCM rule is given after this list
(he could also consider adding material from the new book about BCM learning: Cooper, Intrator, Blais, Shouval: Theory of Cortical Plasticity)

b)  Rafaela Hechl on applications of learning rules for slow feature extraction to readouts from circuits of spiking neurons

c)  Robert Legenstein on independent component analysis via rules for competitive Hebbian learning, as well as via a rule proposed by Foel
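
For orientation only, the following lines give a minimal sketch (in Python) of the basic BCM rule for a single rate-based neuron, in which the sliding modification threshold tracks a running average of the squared output. This sketch is not taken from any of the talks or from the cited book; all variable names and parameter values are illustrative assumptions.

import numpy as np

# Minimal illustrative sketch of the basic BCM rule for one rate-based linear neuron.
# All names and parameter values are assumptions chosen for this example.

rng = np.random.default_rng(0)
n_inputs, n_steps = 10, 5000
eta, tau_theta = 1e-3, 100.0               # learning rate, threshold time constant

w = rng.normal(scale=0.1, size=n_inputs)   # synaptic weights
theta = 1.0                                # sliding modification threshold

for _ in range(n_steps):
    x = rng.random(n_inputs)               # random presynaptic rates in [0, 1)
    y = float(w @ x)                       # postsynaptic activity of the linear unit
    w += eta * x * y * (y - theta)         # BCM update: LTP if y > theta, LTD if y < theta
    theta += (y**2 - theta) / tau_theta    # threshold tracks a running average of y^2

print("final weights:", np.round(w, 3))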

Schedule of talks:

25.10.2005
Gerhard Neumann - "Report on the recent 3rd International Symposium on Adaptive Motion in Animals and Machines"
http://www.tu-ilmenau.de/site/amam/index.php?id=1893
Presentation: (PPT)

Amir Saffari - "Overcomplete representations"
Learning higher-order structures in natural images
Y Karklin, MS Lewicki - Network Computation in Neural Systems, 2003
http://www.cnbc.cmu.edu/cplab/papers/Karklin-Lewicki-03-Network-reprint.pdf

A Hierarchical Bayesian Model for Learning Nonlinear Statistical Regularities in Nonstationary ...
Y Karklin, MS Lewicki - Neural Computation, 2005
http://www.cnbc.cmu.edu/cplab/papers/Karklin-Lewicki-NC05-preprint.pdf

08.11.2005
Prashant Joshi - Mini-Introduction to PCA:
http://www.igi.tugraz.at/lehre/CI/lectures/comp-intell6.pdf

More detailed material on PCA:
pp. 310 in the book Bishop:  Neural Networks for Pattern Recognition

Oja rule and Sanger rule as possible neural implementations of PCA:
pp. 209-210 (in particular, the methods for proving convergence of these rules should be presented in detail)

A discussion of a possible implementation of Oja's rule via synaptic scaling can be found at
http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=Abstract&list_uids=11127835&query_hl=20
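
As a small illustration for the material above, here is a minimal sketch of Oja's rule extracting the first principal component from synthetic two-dimensional data (the data, names and parameter values are assumptions made for this example, not taken from the cited sources); up to sign, the weight vector converges to the leading eigenvector of the data covariance matrix.

import numpy as np

# Minimal illustrative sketch of Oja's rule extracting the first principal component.
# Data, names and parameter values are assumptions chosen for this example.

rng = np.random.default_rng(1)
C = np.array([[3.0, 1.0],                   # covariance with one dominant direction
              [1.0, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], C, size=20000)   # zero-mean synthetic data

w = rng.normal(size=2)                      # initial weight vector
eta = 1e-3                                  # learning rate
for x in X:
    y = w @ x                               # output of the linear unit
    w += eta * y * (x - y * w)              # Oja's rule: Hebbian term + implicit normalization

# Compare (up to sign) with the leading eigenvector of the sample covariance
eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))
print("Oja weight (normalized):", w / np.linalg.norm(w))
print("leading eigenvector:    ", eigvecs[:, -1])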

Robert Legenstein - "Some problems and results on nonlinear ICA in neural networks via rules for competitive Hebbian learning"

22.11.2005
Andreas Juffinger - "A review of linear ICA"
Reviews of classical ICA can be found on
http://www.igi.tugraz.at/lehre/WS01/seminarC.html
(the essential difficulty is to extract the most important ideas/algorithms for a SHORT presentation in the current seminar)

Martin Bachler - "Variations and applications of ICA for vision problems, and possible neural implementations"
The original paper by Herault and Jutten proposes at the end an implementation by a neural network (a hardcopy is available from me)

Recent work on ICA in the context of neural systems (especially biological vision):
http://www.cs.helsinki.fi/u/ahyvarin/papers/index.shtml
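
To make the linear ICA material more concrete, here is a minimal sketch of a FastICA-style fixed-point iteration with a tanh nonlinearity that unmixes two synthetic sources. It is not the Herault-Jutten network, and all signals, names and parameter values are assumptions made for this example.

import numpy as np

# Minimal illustrative sketch of linear ICA: whitening followed by a deflationary
# FastICA-style fixed-point iteration with a tanh nonlinearity.
# Signals, names and parameter values are assumptions chosen for this example.

rng = np.random.default_rng(2)
t = np.linspace(0, 8, 4000)
S = np.vstack([np.sign(np.sin(3 * t)),          # sub-Gaussian square-wave source
               rng.laplace(size=t.size)])       # super-Gaussian noise source
A = np.array([[1.0, 0.6],
              [0.5, 1.0]])                      # "unknown" mixing matrix
X = A @ S                                       # observed mixtures

X = X - X.mean(axis=1, keepdims=True)           # center
d, E = np.linalg.eigh(np.cov(X))
Z = np.diag(d ** -0.5) @ E.T @ X                # whiten

W = np.zeros((2, 2))                            # estimated unmixing vectors (rows)
for i in range(2):
    w = rng.normal(size=2)
    w /= np.linalg.norm(w)
    for _ in range(200):
        wx = w @ Z
        w_new = (Z * np.tanh(wx)).mean(axis=1) - (1 - np.tanh(wx) ** 2).mean() * w
        w_new -= W[:i].T @ (W[:i] @ w_new)      # decorrelate from already found components
        w_new /= np.linalg.norm(w_new)
        if abs(abs(w_new @ w) - 1) < 1e-8:      # converged (up to sign)
            break
        w = w_new
    W[i] = w_new

S_est = W @ Z                                   # recovered sources (up to order, sign, scale)
print("correlations with true sources:\n", np.round(np.corrcoef(S, S_est)[:2, 2:], 3))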

29.11.2005
Malte Rasch - "Nonnegative matrix factorization"
Learning the parts of objects by non-negative matrix factorization
DD Lee, HS Seung - Nature, 1999
http://adsabs.harvard.edu/abs/1999Natur.401..788L

Unsupervised Learning by Convex and Conic Coding
DD Lee, HS Seung - NIPS, 1996
http://hebb.mit.edu/people/seung/papers/convex.ps

Algorithms for Non-negative Matrix Factorization
DD Lee, HS Seung - NIPS, 2000
http://www.nips.snl.salk.edu/NIPS2000/00papers-pub-on-web/LeeSeung.ps.gz
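
As a pointer for this talk, the following minimal sketch shows multiplicative update rules for non-negative matrix factorization under the squared reconstruction error, in the style of the Lee and Seung NIPS 2000 paper (the data matrix, names and parameter values are assumptions made for this example).

import numpy as np

# Minimal illustrative sketch of NMF with multiplicative updates (squared-error version).
# Data, names and parameter values are assumptions chosen for this example.

rng = np.random.default_rng(3)
V = rng.random((40, 25))                 # non-negative data matrix (columns = data vectors)
r = 5                                    # number of parts / basis vectors
W = rng.random((40, r))                  # non-negative basis matrix
H = rng.random((r, 25))                  # non-negative encoding matrix
eps = 1e-9                               # avoids division by zero

for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + eps) # multiplicative update of the encodings
    W *= (V @ H.T) / (W @ H @ H.T + eps) # multiplicative update of the basis vectors

print("reconstruction error:", round(float(np.linalg.norm(V - W @ H)), 4))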

Ashley Mills - "Gating of Hebbian learning by reinforcement signals in biological organisms"
Is heterosynaptic modulation essential for stabilizing Hebbian plasticity and memory?
CH Bailey, M Giustetto, YY Huang, RD Hawkins, ER Kandel, Nat Rev Neurosci, 2000
http://itb.biologie.hu-berlin.de/~kempter/Hippocampus_Journal_Club/Articles/bailey00.pdf
Presentation for internal use only: file:///home/mammoth/ashley/presentations/ap/ap.html

Possibly one could add material from the book
Squire, Kandel: Memory:  From Mind to Molecules

10.01.2006
Jonathan Gutschi - "Bootstrap learning for object discovery"
J. Modayil and B. Kuipers. 2004.
Bootstrap learning for object discovery.
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS-04)
http://www.cs.utexas.edu/users/qr/papers/Modayil-iros-04-obj.html

24.01.2006
Martin Ebner - "Learning to autonomously select landmarks for navigation and communication"
by J. Fleischer and S. Marsland
http://www.nsi.edu/users/fleischer/sab02.pdf

and

Stefan Häusler - "Cascade models of synaptically stored memories"
Fusi, Drew, Abbott, Neuron, 2005, 599-611
fusi_etal_2005a.pdf

31.01.2006
Michael Pfeiffer - "Dopamine and other neuromodulators as signals for reinforcement learning in biological organisms (and alternative interpretations of their functional role)"



