Seminar Computational Intelligence A (708.111)

WS 2007/08

Institut für Grundlagen der Informationsverarbeitung (708)  

O.Univ.-Prof. Dr. Wolfgang Maass

Office hours: by appointment (via e-mail)

E-mail: maass@igi.tugraz.at
Homepage: www.igi.tugraz.at/maass/



Location: IGI-seminar room, Inffeldgasse 16b/I, 8010 Graz
Date:
starting on 8 October 2007 at 16:15


Content of the seminar:

We will discuss new research results and methods in computational neuroscience, machine learning, and robotics, with emphasis on new ideas that are relevant for at least two of these areas. The topics have been chosen so that they can provide the basis and inspiration for work on a project, a master's thesis, or a PhD thesis.

Some of the topics of this seminar have been chosen because they present the state of the art on which we want to build in the EU research project SECO (which starts this WS), where we will work jointly with ETH Zürich and other partners on biologically inspired ideas and methods for self-generating and self-repairing computing systems and artificial organisms (there are currently still slots available for funding new students for research in this project).

Other talks will be related to new research directions on which we will work in the context of the joint research projects Cognitive Vision and
FACETS (for both see http://www.igi.tugraz.at/maass/jobs.html), where student research on master's and PhD theses can also be funded. In both of these projects the processing of visual information in hierarchically organized networks (analogous to the network of visual areas in the cortex) is of particular interest, especially with regard to questions of learning.

In several of these research projects the use and neural implementation of graphical models for probabilistic inference (as introduced in ML A during October 2007) is a key problem.
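For students who have not yet taken ML A, a minimal sketch of what probabilistic inference in a graphical model means in the simplest possible case: a two-variable Bayesian network, where a posterior is computed from a prior and conditional probabilities via Bayes' rule. The variable names and all numbers below are purely illustrative, not taken from any of the papers in this seminar.

```python
# Toy two-node Bayesian network Rain -> WetGrass.
# Inference task: given the observation WetGrass = true,
# compute the posterior P(Rain = true | WetGrass = true).

def posterior_rain_given_wet(p_rain, p_wet_given_rain, p_wet_given_dry):
    """Exact inference by enumeration (Bayes' rule) in the toy network."""
    joint_rain = p_rain * p_wet_given_rain          # P(Rain=1, Wet=1)
    joint_dry = (1.0 - p_rain) * p_wet_given_dry    # P(Rain=0, Wet=1)
    return joint_rain / (joint_rain + joint_dry)    # normalize over Rain

if __name__ == "__main__":
    # Illustrative numbers: prior P(Rain)=0.2; grass is wet with
    # probability 0.9 if it rained and 0.1 otherwise.
    print(posterior_rain_given_wet(0.2, 0.9, 0.1))  # 0.18/0.26 ≈ 0.692
```

The same enumeration principle underlies exact inference in larger graphical models; the algorithms discussed in the seminar papers (e.g. belief propagation in deep belief nets) can be seen as ways of organizing or approximating this computation when enumeration becomes intractable.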

Each talk will have a length of 40 minutes (so that it serves as an opportunity to extract and describe the most salient aspects of a paper; so please think about what the most important aspects are!).
Please prepare your talk so that it fits within the 40-minute time limit
(time overruns will no longer be tolerated; as at real conferences, talks will be cut off after 40 minutes). The good news is that you do not have to prepare as many slides for such a shorter talk!

Those papers and dissertations below which are not publicly available online are in our internal PDF archive;
new students are welcome to ask
Angelika Zehetner <Angelika.Zehetner@igi.tugraz.at> to send them the PDFs.

List of topics for this seminar

(some of these topics contain material for more than a single talk; perhaps we can pair less experienced students with PhD students, who can help a bit):

1. Bongard, J., Zykov, V., Lipson, H. (2006). Resilient machines through continuous self-modeling. Science, 314: 1118-1121.

2. Bongard, J. and Lipson, H. (2007). Automated reverse engineering of nonlinear dynamical systems. Proceedings of the National Academy of Sciences, 104(24): 9943-9948.

[both of the preceding papers are available from
http://www.cs.uvm.edu/~jbongard/  ]

3. Lungarella, M., and Sporns, O. (2006) Mapping information flow in sensorimotor networks. PLoS Comp. Biol. 2, 1301-1312.
http://www.indiana.edu/~psych/faculty/sporns.html

4. Honey, C. J., Kotter, R., Breakspear, M., and Sporns, O. (2007). Network structure of cerebral cortex shapes functional connectivity on multiple time scales. Proc. Natl. Acad. Sci. USA, 104(24): 10240-10245. Epub 2007 Jun 4.

5. Fabian Roth. Explicit Design and Adaptation in Self-Construction. PhD thesis, Swiss Federal Institute of Technology (ETH) Zürich, 2007.

6. Jason T. Rolfe. The Cortex as a Graphical Model. Master's thesis, Caltech, 2006.

7. Leonid Karlinsky,  The Learning and Use of Graphical Models for Image Interpretation, M.Sc. Thesis, Weizmann Institute of Science, Rehovot, 2004
http://www.wisdom.weizmann.ac.il/~shimon/students.html#thesis

8. Ita Lifshitz,  Image Interpretation using Bottom-up Top-down Cycle on Fragment Trees, M.Sc. Thesis, Weizmann Institute of Science, Rehovot, 2005
http://www.wisdom.weizmann.ac.il/~shimon/students.html#thesis

9. Hinton, G. E. and Salakhutdinov, R. R.
Reducing the dimensionality of data with neural networks.
Science, Vol. 313. no. 5786, pp. 504 - 507, 28 July 2006.
http://www.cs.toronto.edu/~hinton/
(note that Hinton will give a tutorial on this material at NIPS 2007 in December, for which the slides will probably be made public)

10. Hinton, G. E., Osindero, S. and Teh, Y.
A fast learning algorithm for deep belief nets, Neural Computation 2006
http://www.cs.toronto.edu/~hinton/

11. Y. Bengio and Y. LeCun. Scaling Learning Algorithms Towards AI (in Bottou et al. (Eds.), "Large-Scale Kernel Machines", MIT Press, 2007).
http://yann.lecun.com/exdb/publis/index.html

12. I. R. Fiete, M.S. Fee, H.S. Seung, Model of birdsong learning based on gradient estimates by dynamic perturbation of neural conductances,
J. Neurophysiol. 2007 Jul 25; [Epub ahead of print]


Talks:

29.10.2007
Gerhard Neumann
"The Estimation-Exploration Algorithm"
Presentation: PDF

12.11.2007
Bernhard Nessler
"Bayesian Hebb Rule"
Presentation: PDF

19.11.2007
Gregor Hörzer
"Network structure of cerebral cortex shapes functional connectivity on multiple time scales"
Presentation: PDF

Andreas Töscher
"Restricted Boltzmann Machines for Collaborative Filtering"
Presentation: PDF

14.01.2008
Lars Büsing
"Monotone and near-monotone networks"
Presentation: PDF

28.01.2008
Stefan Klampfl
"Mapping Information Flow in Sensorimotor Networks"
Presentation: PDF

Klaus Schuch
"Stimulus representation in primary somatosensory cortex"
Presentation: PDF

30.01.2008
Dejan Pecevski
"Self-Construction and Self-Repair of an Artificial Foraging Organism"
Presentation: PDF

Michael Pfeiffer
"Local Rules optimize the Organization of Processes in Networks"
Presentation: PDF


2008-02-13, a.zehetner