Computational Intelligence, SS08
2 VO 442.070 + 1 RU 708.070
Interactive Tests

Introduction to Machine Learning

A learning algorithm is a function which maps each attribute vector $\vec{a} = \left<a_1, \ldots, a_d\right>$ to a target value $b$.
True False
The empirical error on the training set is always lower than the empirical error on the test set.
True False
The true error $error_P(H)$ of the hypothesis $H$ is necessarily larger than the empirical error $error_{T_k}(H)$ measured on the test set $T_k$.
True False
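The questions above turn on the distinction between the empirical error $error_{T_k}(H)$ measured on a finite sample and the true error $error_P(H)$ under the distribution $P$. A minimal Python sketch of that distinction (the distribution, noise rate, and hypothesis are illustrative assumptions, not part of the course material):

```python
import random

random.seed(0)

# Hypothetical setup: attribute a ~ Uniform(0,1); the target is 1 iff a > 0.3,
# but each label is flipped with probability 0.1 (label noise).
NOISE = 0.1

def sample(n):
    data = []
    for _ in range(n):
        a = random.random()
        b = int(a > 0.3)
        if random.random() < NOISE:
            b = 1 - b
        data.append((a, b))
    return data

# Fixed hypothesis H: predict 1 iff a > 0.3 (the noise-free rule).
def H(a):
    return int(a > 0.3)

def empirical_error(hypothesis, dataset):
    # Fraction of examples the hypothesis misclassifies.
    return sum(hypothesis(a) != b for a, b in dataset) / len(dataset)

# Under this distribution the true error of H is exactly the noise rate, 0.1.
# The empirical error on a finite test set T_k merely fluctuates around it,
# and for a given sample it can come out either above or below 0.1.
for k in (20, 200, 20000):
    T_k = sample(k)
    print(k, empirical_error(H, T_k))
```

Running it shows the fluctuation shrinking as $k$ grows, which is exactly why neither "necessarily lower" nor "necessarily larger" is a safe claim for a finite sample.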
If the training set $L$ and the test set $T$ are generated by two totally different distributions, then
the larger the test set $T_k$ the closer the empirical error $error_{T_k}(H)$ is to the true error $error_P(H)$ of the hypothesis $H$.
$\lim_{k\rightarrow\infty} error_{T_k}(H) = error_P(H)$ where $k$ is the size of the test set $T_k$.
it may happen that even for very large test sets $T_k$ the empirical error does not approximate the true error well.
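The convergence claim $\lim_{k\rightarrow\infty} error_{T_k}(H) = error_P(H)$ holds only when the test sets $T_k$ are drawn i.i.d. from the same distribution $P$ that defines the true error. A small simulation sketch (distributions and noise rates chosen arbitrarily for illustration): when $T_k$ comes from a mismatched distribution $Q$, the empirical error converges to $error_Q(H)$, not to $error_P(H)$.

```python
import random

random.seed(1)

def H(a):                      # fixed hypothesis: predict 1 iff a > 0.5
    return int(a > 0.5)

def sample(n, noise):
    # a ~ Uniform(0,1); b is H's "correct" label, flipped with prob. `noise`.
    out = []
    for _ in range(n):
        a = random.random()
        b = int(a > 0.5)
        if random.random() < noise:
            b = 1 - b
        out.append((a, b))
    return out

def empirical_error(dataset):
    return sum(H(a) != b for a, b in dataset) / len(dataset)

# Distribution P uses noise 0.2, so error_P(H) = 0.2.
# Distribution Q uses noise 0.4, so error_Q(H) = 0.4.
# Test sets drawn from Q approach 0.4 as k grows -- not error_P(H) = 0.2:
for k in (100, 10000, 1000000):
    print(k, empirical_error(sample(k, noise=0.4)))
```

The law of large numbers drives the empirical error to the true error of whatever distribution the test examples actually come from, which is why the third option above is also correct.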
Generalization has to do
with the ability of a learning algorithm to find a hypothesis which has a low error on the training set.
with the ability of a learning algorithm to find a hypothesis which has a low error on the test set.
with the ability of a learning algorithm to find a hypothesis which performs well on examples $\left<\vec{a},b\right>$ which were not used for training.
A ``good'' learning algorithm is a learning algorithm which
has good generalization capabilities.
can find for the training set $L$ a hypothesis $H$ with $error_L(H) = 0$.
finds for rather small training sets $L$ a hypothesis $H_L$ with a small true error.
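The contrast in the last two options can be made concrete: a hypothesis with $error_L(H) = 0$ on the training set may still have a large true error. A hypothetical sketch comparing a look-up-table "memorizer" with a simple threshold rule (data distribution and both rules invented for illustration):

```python
import random

random.seed(2)

def sample(n, noise=0.25):
    # a ~ Uniform(0,1); target is 1 iff a > 0.5, flipped with prob. 0.25.
    return [(a, int(a > 0.5) ^ (random.random() < noise))
            for a in (random.random() for _ in range(n))]

L = sample(200)                         # training set
T = sample(100000)                      # large test set (proxy for true error)

# "Memorizer": stores every training example; predicts 0 on anything unseen.
table = dict(L)
memorizer = lambda a: table.get(a, 0)

# Simple fixed hypothesis: predict 1 iff a > 0.5.
threshold = lambda a: int(a > 0.5)

def error(h, data):
    return sum(h(a) != b for a, b in data) / len(data)

print("memorizer train:", error(memorizer, L))   # 0.0 -- perfect on L
print("memorizer test :", error(memorizer, T))   # ~0.5 -- no better than chance
print("threshold test :", error(threshold, T))   # ~0.25 -- just the noise rate
```

The memorizer satisfies $error_L(H) = 0$ yet generalizes no better than guessing, while the threshold rule has a nonzero training error but a far smaller true error; good learning algorithms are judged by the latter.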