Seminar Computational Intelligence A (708.111)
Grundlagen der Informationsverarbeitung (708)
O.Univ.-Prof. Dr. Wolfgang Maass
Office hours: by appointment (via e-mail)
Location: IGI-seminar room,
Inffeldgasse 16b/I, 8010 Graz
Date: starting on 8 October 2007, 16:15
Content of the seminar:
We will discuss new research results and methods in computational
neuroscience, machine learning and robotics, with emphasis
on new ideas that are relevant for at least two of these areas. The
topics have been chosen so that they can provide the basis and
inspiration for work on a project, a master's thesis, or a PhD thesis.
Some of the topics of this seminar have been chosen because they
present the state of the art on which we want to build in the
EU research project SECO (which starts this winter semester), where we
will work jointly with ETH Zürich and other partners on biologically
inspired ideas and methods for self-generating and self-repairing
computing systems and artificial organisms (there are currently still
slots available for funding new students for research in this project).
Other talks will be related to new research directions on which we will
work in the context of the joint research projects Cognitive Vision and
FACETS (for both see http://www.igi.tugraz.at/maass/jobs.html),
where student research towards Master's and PhD theses can also be
funded. In both of these projects the processing of visual information
in hierarchically organized networks (analogous to a network of visual
areas in the cortex) is of particular interest, especially with regard
to questions of learning.
In several of these research projects the use and neural implementation
of graphical models for probabilistic inference (as introduced in ML A
during October 2007) is a key problem.
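For students new to the topic, the kind of computation meant here can be sketched on a toy example: exact inference by enumeration in a small directed graphical model. The network (the classic rain/sprinkler/wet-grass example) and all numbers below are purely illustrative and are not taken from any of the papers in this list:

```python
from itertools import product

# Toy Bayesian network: Rain -> Sprinkler, {Rain, Sprinkler} -> WetGrass.
# All probabilities are made-up illustration values.
P_rain = {True: 0.2, False: 0.8}
# P(Sprinkler=s | Rain=r), indexed as P_sprinkler[r][s]
P_sprinkler = {True: {True: 0.01, False: 0.99},
               False: {True: 0.4, False: 0.6}}
# P(WetGrass=True | Sprinkler=s, Rain=r), indexed as (s, r)
P_wet = {(True, True): 0.99, (True, False): 0.9,
         (False, True): 0.9, (False, False): 0.0}

def joint(r, s, w):
    """P(Rain=r, Sprinkler=s, WetGrass=w) via the chain rule of the network."""
    pw = P_wet[(s, r)] if w else 1.0 - P_wet[(s, r)]
    return P_rain[r] * P_sprinkler[r][s] * pw

# Inference by enumeration: P(Rain=True | WetGrass=True)
# = sum over hidden variables of the joint, normalized by the evidence.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print(round(num / den, 4))  # → 0.3849
```

Enumeration is exponential in the number of hidden variables; the graphical-model algorithms discussed in the seminar (belief propagation and relatives) exploit the graph structure to avoid exactly this blow-up.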
Each talk will have a length of 40 minutes (in order to serve as an
opportunity to extract and describe the most salient aspects of a
paper; so please think about what the most important aspects are!).
Please prepare your talk so that it fits within the 40-minute limit
(time overruns will no longer be tolerated; as at real conferences,
talks will be cut off after 40 minutes). The good news is that you do
not have to prepare as many slides for such a shorter talk!
Those papers and dissertations below that are not publicly available
online are in our internal PDF archive; new students are welcome to ask
Angelika Zehetner <Angelika.Zehetner@igi.tugraz.at>
to send them the PDFs.
List of topics for this seminar
(some of these topics contain material for more than a single talk;
perhaps we can pair less experienced students with PhD students, who
can help a bit):
1. Bongard, J., Zykov, V., Lipson, H. (2006). Resilient machines
through continuous self-modeling. Science, 314: 1118-1121.
2. Bongard, J. and Lipson, H. (2007). Automated reverse engineering of
nonlinear dynamical systems. Proceedings of the National Academy of
Sciences, 104(24): 9943-9948.
[both of the preceding papers are available online]
3. Lungarella, M., and Sporns, O. (2006) Mapping information flow in
sensorimotor networks. PLoS Comp. Biol. 2, 1301-1312.
4. Honey, C. J., Kötter, R., Breakspear, M., Sporns, O.
Network structure of cerebral cortex shapes functional connectivity on
multiple time scales.
Proc Natl Acad Sci U S A. 2007 Jun 12;104(24):10240-5. Epub 2007 Jun 4.
5. Fabian Roth. Explicit Design and Adaptation in Self-Construction.
PhD thesis, Swiss Federal Institute of Technology (ETH) Zürich,
6. Jason T. Rolfe, The Cortex as a Graphical Model, Master's thesis,
7. Leonid Karlinsky, The Learning and Use of Graphical Models for
Image Interpretation, M.Sc. Thesis, Weizmann Institute of Science,
8. Ita Lifshitz, Image Interpretation using Bottom-up Top-down
Cycle on Fragment Trees, M.Sc. Thesis, Weizmann Institute of Science,
9. Hinton, G. E. and Salakhutdinov, R. R.
Reducing the dimensionality of data with neural networks.
Science, Vol. 313, no. 5786, pp. 504-507, 28 July 2006.
(note that Hinton will give a tutorial on this material in December at
NIPS 2007, for which the slides will probably be made public)
10. Hinton, G. E., Osindero, S. and Teh, Y. W.
A fast learning algorithm for deep belief nets, Neural Computation 2006.
11. Y. Bengio, Y. LeCun: Scaling Learning Algorithms Towards AI (in
Bottou et al. (Eds.), "Large-Scale Kernel Machines", MIT Press 2007).
12. I. R. Fiete, M.S. Fee, H.S. Seung, Model of birdsong learning based
on gradient estimates by dynamic perturbation of neural conductances,
J. Neurophysiol. 2007 Jul 25; [Epub ahead of print]
"The Estimation-Exploration Algorithm"
"Bayesian Hebb Rule"
"Restricted Boltzmann Machines for Collaborative Filtering"
"Monotone and near-monotone networks"
"Stimulus representation in primary somatosensory cortex"
"Self-Construction and Self-Repair of an Artificial Foraging Organism"
"Local Rules optimize the Organization of Processes in Networks"