Machine Learning B (Maschinelles Lernen B), WS 2010/11
Institut für Grundlagen der Informationsverarbeitung (708)

ML A versus ML B:

Each of these two courses is offered every second year. You can take ML B without having taken ML A before, since their contents are independent and complementary to each other (see the course descriptions at http://www.igi.tugraz.at/maass/lehre.html). Both courses count as core courses for the Computational Intelligence Catalogue, and belong to the course catalogue for Doctoral Students.

Course Content of ML B:

This course presents the most promising ideas and methods for designing systems that learn autonomously, i.e., without a supervisor who tells the system at every trial what the "right" answer or action would have been. It is not surprising that most currently existing methods for autonomous learning are inspired by the learning capabilities of biological organisms, since most of their learning has to take place without a supervisor. One long-range goal of machine learning and artificial intelligence is to design artificial agents (e.g., robots) that are able to configure themselves for a given range of tasks, to learn to carry out the right action in a given situation in order to minimize a long-range cost (or maximize some external reward), and to acquire cognitive capabilities that enable them to detect on their own which features of their environment are relevant for them, to discover causal relationships between relevant phenomena, and to discover rules and simple theories that explain these phenomena on a more abstract level.

The course presents the best currently existing mathematical models and algorithmic solutions for these problems. We start with genetic algorithms, which mimic learning on the time scale of evolution; we then present the main concepts and results for learning strategies for acting in an unknown environment so as to maximize external rewards (reinforcement learning); and finally we present recent results from cognitive science that explain, through precise algorithmic models, how humans can learn new concepts from very few examples, how infants can discover salient causal relationships in their environment, and how they form simple theories.
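To make the first of these topics concrete, here is a minimal genetic-algorithm sketch. It is an illustration only, not course material: the toy "OneMax" task (maximize the number of ones in a bit string), the choice of tournament selection, one-point crossover, and all parameter values are our own illustrative assumptions.

```python
import random

# Fitness of a bit string: the number of ones ("OneMax" toy problem).
def fitness(individual):
    return sum(individual)

def evolve(pop_size=20, genome_len=16, generations=60,
           mutation_rate=0.05, seed=0):
    rng = random.Random(seed)
    # Random initial population of bit strings.
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Tournament selection: the fitter of two random individuals wins.
            a, b = rng.choice(pop), rng.choice(pop)
            return a if fitness(a) >= fitness(b) else b
        new_pop = []
        for _ in range(pop_size):
            p1, p2 = select(), select()
            cut = rng.randrange(1, genome_len)     # one-point crossover
            child = p1[:cut] + p2[cut:]
            # Flip each bit independently with probability mutation_rate.
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # close to genome_len after enough generations
```

Selection, crossover, and mutation are the three operators every variant of the method shares; real applications differ mainly in the encoding of individuals and the fitness function.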
These results from cognitive science, discovered during the last decade by Josh Tenenbaum at MIT, Tom Griffiths at Berkeley, and many others, use the framework of probabilistic inference to explain human reasoning, and because of their mathematical precision these methods can immediately be ported to artificial computing systems (which our institute will carry out in the new EU project BRAINSCALES, beginning in January 2011). In particular, we will discuss how probabilistic inference provides more flexible methods for learning motor control strategies in robotics, and enables artificial agents to learn faster by learning simultaneously on several levels of abstraction. No prior knowledge of concepts and methods for probabilistic inference will be assumed (the treatment in this course is complementary to that in ML A). We believe it is very important that all our master students become familiar with the basic concepts and methods of probabilistic inference, since this framework is emerging as a new standard approach in many areas of artificial intelligence, robot control, signal processing, cognitive science, and computational neuroscience.
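A small sketch can illustrate how probabilistic inference explains concept learning from very few examples, in the spirit of Tenenbaum's work. The three candidate hypotheses and the numbers below are invented for illustration; the key ingredient is the "size principle": under strong sampling, each example is assumed drawn uniformly from the concept, so smaller hypotheses that still contain all examples receive exponentially more posterior mass.

```python
# Candidate concepts over the numbers 1..16 (illustrative, not course code).
hypotheses = {
    "even numbers":  {2, 4, 6, 8, 10, 12, 14, 16},
    "powers of two": {2, 4, 8, 16},
    "numbers 1..16": set(range(1, 17)),
}
prior = {h: 1 / len(hypotheses) for h in hypotheses}

def posterior(examples):
    # Likelihood under strong sampling: P(data | h) = (1/|h|)^n
    # if every example lies in h, and 0 otherwise.
    scores = {}
    for h, extension in hypotheses.items():
        if all(x in extension for x in examples):
            scores[h] = prior[h] * (1 / len(extension)) ** len(examples)
        else:
            scores[h] = 0.0
    z = sum(scores.values())                 # normalize via Bayes' rule
    return {h: s / z for h, s in scores.items()}

print(posterior([2, 4, 8]))  # "powers of two" dominates after three examples
```

Note that all three hypotheses are consistent with the data; the smallest consistent one nevertheless wins, which is exactly the rapid generalization from few examples that these models attribute to human learners.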

News

All updates to this course homepage are listed here. The list will be kept up to date during the semester.

14.09.10 Welcome


Assignment cover

Please use the following cover sheet, cover.pdf, for all assignments.

Tasks

Here you will find the problem sets and projects for the practicals.

Problem Sets

Nr. | Issued     | Question time | Deadline   | Topic                                         | Additional Material
1   | 19.10.2010 | -             | 2.11.2010  | Comparison of Optimization Algorithms         | task1.zip
2   | 19.10.2010 | -             | 2.11.2010  | Cart-Pole Controller Optimization             | task2.zip
3   | 19.10.2010 | -             | 2.11.2010  | Genetic Algorithms                            |
4   | 9.11.2010  | 16.11.2010    | 23.11.2010 | RL theory I                                   | MDP_Theory.pdf
5   | 9.11.2010  | 16.11.2010    | 23.11.2010 | RL theory II                                  |
6   | 9.11.2010  | 16.11.2010    | 23.11.2010 | RL application I: On- and off-policy learning | MountainCarDemo.zip
7   | 9.11.2010  | 16.11.2010    | 23.11.2010 | RL application II: Function approximation     | CartPoleDemo.zip
8   | 9.11.2010  | 16.11.2010    | 23.11.2010 | RL application III: Self-play                 |
9   | 30.11.2010 | 14.12.2010    | 11.1.2011  | Policy Gradient Methods: Swimmer              | MLB_PolicyGradient.zip
10  | 30.11.2010 | 14.12.2010    | 11.1.2011  | Reward Weighted Regression: Cannon Warfare    | cannon.zip
11  | 30.11.2010 | 14.12.2010    | 11.1.2011  | Bayesian networks                             |
12  | 30.11.2010 | 14.12.2010    | 11.1.2011  | Approximate inference in Bayesian networks    | Gibbs.zip
13  | 25.1.2011  | -             | 22.2.2011  | Learning overhypotheses                       |
14  | 25.1.2011  | -             | 22.2.2011  | Planning with Approximate Inference           |
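Several of the problem sets above concern on- and off-policy reinforcement learning. As a minimal illustration (the 1-D corridor environment and all parameter values below are our own toy assumptions, not part of the provided MountainCar or CartPole demos), here is a tabular Q-learning sketch:

```python
import random

# Toy environment: a corridor of 6 states; the agent starts in state 0
# and receives reward +1 for reaching state 5.  Actions: 0 = left, 1 = right.
N_STATES, GOAL = 6, 5
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

rng = random.Random(0)
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[s][a], initialized to zero

def greedy_action(s):
    if Q[s][0] == Q[s][1]:
        return rng.randrange(2)             # break ties randomly
    return 0 if Q[s][0] > Q[s][1] else 1

for _ in range(200):                        # episodes
    s, done = 0, False
    while not done:
        # Epsilon-greedy behaviour policy; the update target below is
        # greedy, which is what makes Q-learning an off-policy method.
        a = rng.randrange(2) if rng.random() < EPS else greedy_action(s)
        s2, r, done = step(s, a)
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

policy = [greedy_action(s) for s in range(GOAL)]
print(policy)  # the learned policy moves right in every non-goal state
```

Replacing the update target `max(Q[s2])` with the value of the action actually taken in `s2` would turn this into on-policy SARSA, the other method the exercises compare.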


Slides from Practicals

Lecture | Date       | Topic                                              | Slides
1       | 19.10.2010 | Organization and Search Algorithms                 | Slides (PDF)
2       | 9.11.2010  | Reinforcement Learning                             | Slides (PDF)
3       | 30.11.2010 | RL in Robotics and Probabilistic Inference         | Slides (PDF)
4       | 25.1.2011  | Learning and Planning with Probabilistic Inference | Slides (PDF)


Please post your questions concerning the problem sets to the MLB Newsgroup, or send them directly to Stefan Häusler.

People Involved

This course is organized by the Institut für Grundlagen der Informationsverarbeitung, Inffeldgasse 16b/1. Stock, A-8010 Graz.

Lecturer / Instructor

Teaching assistant

Office

If you have any questions or problems, please do not hesitate to contact one of the above persons.


Place and Date

Lectures:

Time: Monday, 13:00-15:00
Location: SIEMENS TS Hörsaal (HS i11), Inffeldgasse 16b
First lecture: 11.10.2010


Exercises:

Time: Tuesday, 11:00 - 12:00
Location: SIEMENS TS Hörsaal (HS i11), Inffeldgasse 16b
First session: 19.10.2010


Literature

  • Wikipedia genetic algorithms
  • Karl Sims (1994) Evolving Virtual Creatures
  • Sutton, Barto: Reinforcement Learning: An Introduction, MIT Press (free web version)
  • Andrieu et al. (2003) An Introduction to MCMC for Machine Learning
  • Neal (1993) Probabilistic Inference Using Markov Chain Monte Carlo Methods
  • Kemp, Perfors and Tenenbaum (2007) Learning overhypotheses with hierarchical Bayesian models
  • Video lectures, Cognitive Science and Machine Learning Summer School 2010 - Sardinia
  • Griffiths, T. L., & Tenenbaum, J. B. (2006). Optimal predictions in everyday cognition. Psychological Science, 17, 767-773
  • Vul, E & Pashler, H (2008) "Measuring the Crowd Within: Probabilistic representations Within individuals" Psychological Science. 19(7) 645-647
  • Vul, E., Goodman, N.D., Griffiths, T.L. & Tenenbaum, J.B. (2009) "One and Done? Optimal decisions from very few samples." 31st Annual Meeting of the Cognitive Science Society, 2009.
  • Denison, S., Bonawitz, E. B., Gopnik, A., & Griffiths, T. L. (in press). Preschoolers sample from probability distributions. Proceedings of the 32nd Annual Conference of the Cognitive Science Society.
  • S.J. Gershman, Y. Niv (2010) Learning latent structure: carving nature at its joints. Current Opinion in Neurobiology
  • D.A. Braun, C. Mehring, D.M. Wolpert, Structure learning in action, Behavioural brain research, 2010
  • D.A. Braun, A. Aertsen, D.M. Wolpert, C. Mehring (2009) Motor task variation induces structural learning. Current Biology
  • Marc Toussaint
  • Teaching Resources of Marc Toussaint
  • Marc Toussaint (2009) Lecture 2, Lisbon


Course Material

  • Lecture 1
  • Lecture 2
  • Lecture 3 (updated, 8.11.2010)
  • Lecture 4
  • Lecture 5
  • Lecture 6
  • Lecture 7 (updated, 7.12.2010)
  • Lecture 8 (updated, 13.12.2010)
  • Lecture 9
  • Lecture 10
  • Lecture 11


Links

  • Wikipedia genetic algorithms
  • Karl Sims (1994) Evolving Virtual Creatures
  • Siggraph animations (1994) Evolved Virtual Creatures