Computational Intelligence, SS08
2 VO 442.070 + 1 RU 708.070

Gaussian Statistics

Consider a two-dimensional Gaussian process. Find the correct answers (a numerical sketch follows the options below):
If the first and second dimensions are independent, the cloud of points $(x_i,y_i)_{i=1,\ldots,N}$ and the pdf contours necessarily have the shape of a circle.
If the first and second dimensions are independent, the cloud of points and the pdf contours have to be elliptic, with the principal axes of the ellipse aligned with the abscissa and the ordinate (consider a circle as a special case of an ellipse).
The covariance matrix $\Sigma$ is symmetric; that is, for $i,j=1,\ldots,d$ it holds that $c_{ij} = c_{ji}$.
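
A minimal numerical sketch of these points, assuming NumPy is available (the chosen mean, variances, and sample size are illustrative): with independent dimensions the covariance matrix is diagonal, so the point cloud and the pdf contours are elliptic and axis-aligned, but circular only in the special case of equal variances; the estimated covariance matrix is symmetric in any case.

import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.0, 0.0])
Sigma = np.diag([4.0, 1.0])      # independent dimensions, unequal variances

X = rng.multivariate_normal(mu, Sigma, size=5000)

C = np.cov(X, rowvar=False)      # estimated covariance matrix
print(C)                         # off-diagonal entries close to 0
print(np.allclose(C, C.T))       # True: a covariance matrix is symmetric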
Estimation of the parameters of a 2-dimensional normal distribution. Find the correct answers (a sketch follows the options below).
An accurate mean estimate requires more samples than an accurate variance estimate.
Using more data results in a more accurate estimate of the parameters of the normal distribution.
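
A minimal sketch of the estimation question, again assuming NumPy (the true parameters and sample sizes are arbitrary): the errors of the sample mean and the sample variance shrink as the number of samples $N$ grows.

import numpy as np

rng = np.random.default_rng(1)
true_mu, true_var = 2.0, 3.0

for n in (10, 100, 1000, 10000):
    x = rng.normal(true_mu, np.sqrt(true_var), size=n)
    mu_err = abs(x.mean() - true_mu)
    var_err = abs(x.var(ddof=1) - true_var)   # unbiased sample variance
    print(f"N={n:6d}  mean error={mu_err:.4f}  variance error={var_err:.4f}")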
Computation of the log-likelihood for classification instead of the likelihood (a sketch follows the options below)
gives the same classification results since the logarithm is a monotonically increasing function.
is computationally beneficial, since we do not have to deal with very small numbers.
turns products for the computation of the likelihood into sums for the computation of the log-likelihood.
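
A minimal sketch of the numerical point, assuming NumPy and SciPy (the model and the number of observations are illustrative): the product of many small density values underflows to zero in double precision, while the equivalent sum of log-densities remains well-behaved; since the logarithm is monotonically increasing, decisions based on the maximum are unchanged.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, size=2000)       # 2000 i.i.d. observations

likelihood = np.prod(norm.pdf(x))         # product of densities
log_likelihood = np.sum(norm.logpdf(x))   # equivalent sum of log-densities

print(likelihood)       # 0.0 -- numerical underflow
print(log_likelihood)   # a finite negative number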
For the computation of the log-likelihood for observations $\mathbf{x}$ with respect to Gaussian models according to $\log p(\mathbf{x}\vert\boldsymbol{\Theta}) = -\frac{1}{2}\left[ d \log(2\pi) + \log(\det(\boldsymbol{\Sigma})) + (\mathbf{x}-\boldsymbol{\mu})^{\mathsf T} \boldsymbol{\Sigma}^{-1} (\mathbf{x}-\boldsymbol{\mu}) \right]$, we may (for all Gaussian models; see the sketch after the options)
drop the division by 2.
drop the term $d \log(2\pi)$.
drop the term $\log(\det(\boldsymbol{\Sigma}))$.
drop the term $(\mathbf{x}-\boldsymbol{\mu})^{\mathsf T} \boldsymbol{\Sigma}^{-1} (\mathbf{x}-\boldsymbol{\mu})$.
pre-compute the term $\log(\det(\boldsymbol{\Sigma}))$ for each of the Gaussian models.
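
A minimal sketch of per-model scoring based on the formula above, assuming NumPy (the two example models and the test point are hypothetical): the factor $\frac{1}{2}$ and the term $d \log(2\pi)$ are shared by all models and may be dropped, whereas $\log(\det(\boldsymbol{\Sigma}))$ and the quadratic form differ between models and must be kept; $\log(\det(\boldsymbol{\Sigma}))$ depends only on the model, so it can be pre-computed once per model.

import numpy as np

def make_scorer(mu, Sigma):
    # Pre-compute the model-dependent constants once per Gaussian model.
    Sigma_inv = np.linalg.inv(Sigma)
    _, logdet = np.linalg.slogdet(Sigma)   # pre-computed log(det(Sigma))
    def score(x):
        diff = x - mu
        # Proportional to the log-likelihood; the shared factor 1/2 and
        # the shared term d*log(2*pi) are dropped for classification.
        return -(logdet + diff @ Sigma_inv @ diff)
    return score

# Hypothetical two-model example (equal priors assumed):
score_a = make_scorer(np.array([0.0, 0.0]), np.eye(2))
score_b = make_scorer(np.array([2.0, 2.0]), 2.0 * np.eye(2))
x = np.array([1.5, 1.8])
print("model", "A" if score_a(x) > score_b(x) else "B")   # prints: model B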