Probabilistic Modelling
A probabilistic model is a mathematical model that uses probability distributions to represent uncertainty about data and parameters.
Both the data \(x\) and parameters \(z\) of the model are treated as random variables.
1. Inference
Inference is the process of finding the hidden (latent) variables or parameters of a probabilistic model given the observed data.
MLE (Maximum Likelihood Estimate)
\[ \hat{z}_{MLE} = \underset{z}{\arg \max} P(x | z) \]
Here \(z\) is treated as a fixed but unknown constant, not as a random variable the way it is in Bayesian inference.
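As a concrete sketch, consider a Bernoulli coin model \(P(x \mid z) = z^{\text{heads}}(1-z)^{\text{tails}}\), where the MLE has the closed form "fraction of heads". The data and the grid check below are illustrative assumptions, not from the text.

```python
import math

# Toy observed coin flips: 1 = heads, 0 = tails (assumed for illustration).
x = [1, 0, 1, 1, 0, 1, 1, 1]

def log_likelihood(z, data):
    # log P(x | z) for the Bernoulli model.
    return sum(math.log(z) if xi == 1 else math.log(1 - z) for xi in data)

# Closed-form argmax of the likelihood: the sample mean.
z_mle = sum(x) / len(x)

# Sanity check: the closed form also wins on a coarse grid search over z.
grid = [i / 100 for i in range(1, 100)]
z_grid = max(grid, key=lambda z: log_likelihood(z, x))
print(z_mle, z_grid)  # both 0.75 for this data
```

The grid search makes the \(\arg\max\) explicit; in models without a closed form, that maximisation is done numerically (e.g. by gradient ascent on the log-likelihood).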
MAP (Maximum A Posteriori Estimate)
\[ \hat{z}_{MAP} = \underset{z}{\arg \max} P(z | x) = \underset{z}{\arg \max} P(x | z) P(z) \]
MAP gives a point estimate of the latent variable, i.e. the mode of the posterior distribution, whereas Bayesian inference gives the whole posterior distribution.
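Continuing the coin sketch: with a Beta\((a, b)\) prior on \(z\), the posterior mode (the MAP estimate) has the closed form \((\text{heads} + a - 1)/(n + a + b - 2)\). The data and the Beta\((2, 2)\) prior are assumptions for illustration.

```python
# Same toy coin data as before (illustrative assumption).
x = [1, 0, 1, 1, 0, 1, 1, 1]
heads, n = sum(x), len(x)

a, b = 2.0, 2.0  # Beta(2, 2) prior P(z): mild pull toward z = 0.5

# MAP = mode of the Beta(heads + a, tails + b) posterior.
z_map = (heads + a - 1) / (n + a + b - 2)
z_mle = heads / n
print(z_mle, z_map)  # 0.75 vs 0.7: the prior shrinks the estimate toward 0.5
```

This makes the relation in the formula visible: maximising \(P(x \mid z)P(z)\) rather than \(P(x \mid z)\) alone pulls the estimate toward the prior, and with a flat prior MAP coincides with MLE.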
Bayesian Inference
Finding the posterior distribution of the parameters \(z\) (aka latent variables) given the observed data \(x\) is called Bayesian inference.
\[ P(z | x) = \frac {P(x, z)} {P(x)} \]
But computing the evidence \(P(x) = \int P(x, z)\, dz\) is generally intractable, since it requires integrating over all possible values of \(z\).
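For a one-dimensional \(z\) the integral can be approximated on a grid, which makes the role of the normaliser \(P(x)\) concrete. A minimal sketch for the coin model under a uniform prior (data and grid size are assumptions):

```python
# Grid approximation of the posterior P(z | x) = P(x, z) / P(x)
# for a Bernoulli coin model with a uniform prior on z.
x = [1, 0, 1, 1, 0, 1, 1, 1]  # toy data, assumed for illustration
heads, tails = sum(x), len(x) - sum(x)

grid = [(i + 0.5) / 200 for i in range(200)]   # z values in (0, 1)
prior = [1.0 for _ in grid]                    # uniform prior P(z)
joint = [z**heads * (1 - z)**tails * p         # P(x, z) = P(x | z) P(z)
         for z, p in zip(grid, prior)]

evidence = sum(joint) / len(grid)              # Riemann estimate of ∫ P(x, z) dz
posterior = [j / sum(joint) for j in joint]    # normalised posterior weights

z_mean = sum(z * w for z, w in zip(grid, posterior))
print(round(z_mean, 3))
```

The grid trick only works because \(z\) is one-dimensional here; with many latent dimensions the grid grows exponentially, which is exactly why \(P(x)\) is intractable in practice and methods like MCMC or variational inference are used instead.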