Variational Bayes approach for the mixture of Normals
The latent variables induce dependencies between all the parameters of the model, which makes it difficult to find the parameters that maximize the likelihood. An elegant solution is to introduce a variational distribution over the parameters and latent variables, which leads to a reformulation of the classical EM algorithm. But let's show it directly in the Bayesian paradigm. In the following, $y$ denotes the observations, $z$ the latent class assignments and $\theta$ the parameters of the mixture.
We start from the marginal log-likelihood, into which we introduce the variational distribution $q$ over the latent variables and parameters:

$$\ln p(y) = \ln \left( \int_\theta \sum_z q(z,\theta) \, \frac{p(y,z,\theta)}{q(z,\theta)} \, \mathrm{d}\theta \right) + C_q$$

The constant $C_q$ is here to remind us that $q$ has the constraint of being a distribution, i.e. of summing to one, which can be enforced by a Lagrange multiplier.
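To make this bookkeeping concrete, the objective optimized later on can be written as a Lagrangian; the sketch below is illustrative, with $\lambda$ the multiplier and $\mathcal{F}$ the lower bound derived next:

```latex
% Augmented objective: the Lagrange term enforces that q integrates
% and sums to one (lambda and this exact form are illustrative notation).
\mathcal{L}(q, \lambda) = \mathcal{F}(q)
  + \lambda \left( \int_\theta \sum_z q(z, \theta) \, \mathrm{d}\theta - 1 \right)
```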
We can then use the concavity of the logarithm (Jensen's inequality) to derive a lower bound of the marginal log-likelihood:

$$\ln p(y) \ge \int_\theta \sum_z q(z,\theta) \, \ln \frac{p(y,z,\theta)}{q(z,\theta)} \, \mathrm{d}\theta$$
Let's call this lower bound $\mathcal{F}(q)$, as it is a functional, i.e. a function of functions. To gain some intuition about the impact of introducing $q$, let's expand $\mathcal{F}$:

$$\mathcal{F}(q) = \int_\theta \sum_z q(z,\theta) \, \ln \frac{p(z,\theta \mid y) \, p(y)}{q(z,\theta)} \, \mathrm{d}\theta = \ln p(y) - D_{\mathrm{KL}}\big(q(z,\theta) \,\big\|\, p(z,\theta \mid y)\big)$$
From this, it is clear that $\mathcal{F}(q)$ (i.e. a lower bound of the marginal log-likelihood) is the marginal log-likelihood minus the Kullback-Leibler divergence between the variational distribution and the joint posterior of the latent variables and parameters.
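As a quick numerical sanity check of this identity (not part of the original derivation), the sketch below enumerates a tiny discrete model in which both $z$ and $\theta$ take two values, so all sums are finite; the joint table and the variational distribution are made up for illustration:

```python
import numpy as np

# Toy discrete model: z and theta each take two values, y is fixed (observed).
# p_joint[z, theta] stands for p(y, z, theta); the values are arbitrary.
p_joint = np.array([[0.10, 0.25],
                    [0.30, 0.05]])

log_p_y = np.log(p_joint.sum())         # marginal log-likelihood ln p(y)
posterior = p_joint / p_joint.sum()     # joint posterior p(z, theta | y)

# An arbitrary (normalized) variational distribution q(z, theta).
q = np.array([[0.40, 0.10],
              [0.20, 0.30]])

F = np.sum(q * np.log(p_joint / q))     # the lower bound F(q)
kl = np.sum(q * np.log(q / posterior))  # KL(q || posterior)

assert np.isclose(F, log_p_y - kl)      # F(q) = ln p(y) - KL(q || posterior)
assert F <= log_p_y                     # Jensen: F(q) lower-bounds ln p(y)
print(f"ln p(y) = {log_p_y:.4f}, F(q) = {F:.4f}, KL = {kl:.4f}")
```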
In practice, we have to make the following crucial assumption of independence on $q$ in order for the calculations to be analytically tractable:

$$q(z,\theta) = q_z(z) \, q_\theta(\theta)$$
This means that $q_z(z) \, q_\theta(\theta)$ approximates the joint posterior $p(z,\theta \mid y)$, and therefore the lower bound will be tight if and only if this approximation is exact, i.e. when the KL divergence is zero.
As we ultimately aim at inferring the parameters and latent variables that maximize the marginal log-likelihood, we will use the calculus of variations to find the functions $q_z$ and $q_\theta$ that maximize the functional $\mathcal{F}$.
This naturally leads to a procedure very similar to the EM algorithm where, at the E step, we calculate the expectations of the parameters with respect to the variational distributions $q_z$ and $q_\theta$ and, at the M step, we recompute the variational distributions over the parameters.
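To make this loop concrete, here is a minimal runnable sketch of these alternating updates for a one-dimensional mixture of Normals, under a deliberately simplified conjugate model: a Dirichlet prior on the mixing weights, Normal priors on the component means, and the likelihood variance fixed to 1 (the full treatment would also place priors on the variances). Every name and hyper-parameter value below is illustrative, not taken from this page:

```python
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(0)

# Synthetic 1-D data drawn from two Normals (for illustration only).
y = np.concatenate([rng.normal(-2.0, 1.0, 150), rng.normal(3.0, 1.0, 100)])
N, K = len(y), 2

# Assumed priors: pi ~ Dirichlet(alpha0, ..., alpha0) on the weights and
# mu_k ~ Normal(0, tau0_sq) on the means; likelihood variance fixed to 1.
alpha0, tau0_sq = 1.0, 100.0

# Initialize the responsibilities q(z_n = k) at random.
r = rng.dirichlet(np.ones(K), size=N)

for it in range(100):
    # M-like step: recompute the variational distributions over parameters.
    Nk = r.sum(axis=0)                    # expected counts per component
    alpha = alpha0 + Nk                   # q(pi) = Dirichlet(alpha)
    s_sq = 1.0 / (1.0 / tau0_sq + Nk)     # q(mu_k) = Normal(m_k, s_sq_k)
    m = s_sq * (r * y[:, None]).sum(axis=0)

    # E-like step: recompute q(z) from expectations under q(pi) and q(mu).
    E_ln_pi = digamma(alpha) - digamma(alpha.sum())
    # Under q(mu_k): E[(y_n - mu_k)^2] = (y_n - m_k)^2 + s_sq_k.
    log_rho = E_ln_pi - 0.5 * np.log(2 * np.pi) \
              - 0.5 * ((y[:, None] - m) ** 2 + s_sq)
    log_rho -= log_rho.max(axis=1, keepdims=True)  # numerical stability
    r = np.exp(log_rho)
    r /= r.sum(axis=1, keepdims=True)

print("posterior means of the mu_k:", m)
print("expected mixing weights:", alpha / alpha.sum())
```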
We start by writing the functional derivative of $\mathcal{F}$ with respect to $q_z$ (including the Lagrange multiplier $\lambda$ that enforces the normalization of $q_z$):

$$\frac{\partial \mathcal{F}}{\partial q_z(z)} = \int_\theta q_\theta(\theta) \, \ln p(y,z,\theta) \, \mathrm{d}\theta - \ln q_z(z) - 1 - \int_\theta q_\theta(\theta) \, \ln q_\theta(\theta) \, \mathrm{d}\theta + \lambda$$
Then we set this functional derivative to zero. We also make use of a frequent assumption, namely that the variational distribution fully factorizes over each individual latent variable (mean-field assumption):

$$q_z(z) = \prod_{n=1}^N q_{z_n}(z_n)$$
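For completeness, solving the stationarity condition (the Lagrange multiplier only fixes the normalization) gives the familiar mean-field form of the optimal $q_z$; this is the standard result, written here in the notation used above:

```latex
% Setting dF/dq_z(z) = 0; the Lagrange multiplier absorbs the normalization,
% leaving the standard mean-field solution (standard result, our rendering).
\ln q_z^*(z) = \int_\theta q_\theta(\theta) \, \ln p(y, z, \theta) \, \mathrm{d}\theta + \text{const}
\quad \Longleftrightarrow \quad
q_z^*(z) \propto \exp\!\left( \mathbb{E}_{q_\theta}\left[ \ln p(y, z, \theta) \right] \right)
```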