==Variational Bayes approach for the mixture of Normals==


* '''Motivation''': I have described on another page the basics of mixture models and the EM algorithm in a frequentist context. It is worth reading before continuing. Here I am interested in the Bayesian approach as well as in a specific variational method (nicknamed "Variational Bayes").

* '''Data''': N univariate observations, <math>y_1, \ldots, y_N</math>, gathered into the vector <math>\mathbf{y}</math>


* '''Model''': mixture of K Normal distributions


* '''Parameters''': K mixture weights (<math>w_k</math>), K means (<math>\mu_k</math>) and K [http://en.wikipedia.org/wiki/Precision_%28statistics%29 precisions] (<math>\tau_k</math>), one per mixture component


<math>\Theta = \{w_1,\ldots,w_K,\mu_1,\ldots,\mu_K,\tau_1,\ldots,\tau_K\}</math>


* '''Constraints''': <math>\sum_{k=1}^K w_k = 1</math> and <math>\forall k \; w_k > 0</math>.


* '''Observed likelihood''':  observations assumed exchangeable (independent and identically distributed given the parameters)


<math>p(\mathbf{y} | \Theta, K) = \prod_{n=1}^N p(y_n|\Theta,K) = \prod_{n=1}^N \sum_{k=1}^K w_k Normal(y_n;\mu_k,\tau_k^{-1})</math>
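As a quick illustration (an addition to this entry, not part of the original derivation), here is a minimal Python sketch that simulates data from such a mixture with made-up parameter values and evaluates the observed log-likelihood above; the helper name mixture_loglik and all numbers are hypothetical.

<pre>
import numpy as np
from scipy.stats import norm
from scipy.special import logsumexp

def mixture_loglik(y, w, mu, tau):
    """ln p(y | Theta, K) for a univariate Normal mixture with weights w,
    means mu and precisions tau (arrays of length K)."""
    # N x K matrix of ln Normal(y_n; mu_k, tau_k^{-1})
    logdens = norm.logpdf(y[:, None], loc=mu[None, :],
                          scale=1.0 / np.sqrt(tau)[None, :])
    # ln sum_k w_k Normal(y_n; mu_k, tau_k^{-1}), then sum over the N observations
    return logsumexp(np.log(w)[None, :] + logdens, axis=1).sum()

# toy example with made-up parameter values
rng = np.random.default_rng(1)
w, mu, tau = np.array([0.3, 0.7]), np.array([-2.0, 3.0]), np.array([1.0, 0.25])
z = rng.choice(len(w), size=100, p=w)           # latent component labels
y = rng.normal(mu[z], 1.0 / np.sqrt(tau[z]))    # observations
print(mixture_loglik(y, w, mu, tau))
</pre>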


* '''Latent variables''': N hidden variables, <math>z_1,\ldots,z_N</math>, each being a vector of length K with a single 1 indicating the component to which the <math>n^{th}</math> observation belongs, and K-1 zeroes.


* '''Augmented likelihood''':


<math>p(\mathbf{y},\mathbf{z}|\Theta,K) = \prod_{n=1}^N p(y_n,z_n|\Theta,K) = \prod_{n=1}^N p(z_n|\Theta,K) p(y_n|z_n,\Theta,K) = \prod_{n=1}^N \prod_{k=1}^K w_k^{z_{nk}} Normal(y_n;\mu_k,\tau_k^{-1})^{z_{nk}}</math>


* '''Maximum-likelihood estimation''': integrate out the latent variables


<math>\mathrm{ln} \, p(\mathbf{y} | \Theta, K) = \sum_{n=1}^N \mathrm{ln} \, \int_{z_n} \mathrm{d}{z_n} \; p(y_n, z_n | \Theta, K)</math>
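Since each <math>z_n</math> is discrete (one of the K possible indicator vectors), the integral above is simply a sum over the K components, which recovers the observed likelihood:

<math>\int_{z_n} \mathrm{d}{z_n} \; p(y_n, z_n | \Theta, K) = \sum_{k=1}^K w_k Normal(y_n;\mu_k,\tau_k^{-1})</math>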


The latent variables induce dependencies between all the parameters of the model.
This makes it difficult to find the parameters that maximize the likelihood.
An elegant solution is to introduce a variational distribution of parameters and latent variables, which leads to a re-formulation of the classical EM algorithm.
But let's show it directly in the Bayesian paradigm.


* '''Priors''': conjugate (a small simulation sketch of the full generative model follows the list below)
** <math>\forall k \; \mu_k | \tau_k \sim Normal(\mu_0,(\tau_0 \tau_k)^{-1})</math> and <math>\forall k \; \tau_k \sim Gamma(\alpha,\beta)</math>
** <math>\forall n \; z_n \sim Multinomial_K(1,\mathbf{w})</math> and <math>\mathbf{w} \sim Dirichlet(\gamma)</math>
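As an illustration of the generative model implied by these priors (an added sketch, not part of the original entry), the following Python snippet draws the parameters from the priors and then data from the mixture. The hyperparameter values are hypothetical placeholders, and <math>\beta</math> is assumed to be a rate, which is not specified above (hence the 1/beta scale in NumPy's shape-scale parameterization).

<pre>
import numpy as np

rng = np.random.default_rng(2)
K, N = 3, 200
# hypothetical hyperparameter values
gamma, alpha, beta, mu0, tau0 = 1.0, 2.0, 1.0, 0.0, 0.1

# draw the parameters from their priors
w = rng.dirichlet(np.full(K, gamma))                    # mixture weights
tau = rng.gamma(shape=alpha, scale=1.0 / beta, size=K)  # precisions (beta taken as a rate)
mu = rng.normal(mu0, 1.0 / np.sqrt(tau0 * tau))         # means, given the precisions

# draw the latent variables and the observations
z = rng.choice(K, size=N, p=w)
y = rng.normal(mu[z], 1.0 / np.sqrt(tau[z]))
</pre>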


* '''Variational Bayes''': let's focus here on calculating the marginal log-likelihood of our data set in order to perform model comparison:


<math>\mathrm{ln} \, p(\mathbf{y} | K) = \mathrm{ln} \, \int_\mathbf{z} \int_\Theta \, \mathrm{d}\mathbf{z} \, \mathrm{d}\Theta \; p(\mathbf{y}, \mathbf{z}, \Theta | K)</math>
To make progress with this integral, let's introduce a "variational distribution" <math>q(\mathbf{z}, \Theta)</math> over the latent variables and the parameters, and multiply and divide the integrand by it:


<math>\mathrm{ln} \, p(\mathbf{y} | K) = \mathrm{ln} \, \left( \int_\mathbf{z} \int_\Theta \, \mathrm{d}\mathbf{z} \, \mathrm{d}\Theta \; q(\mathbf{z}, \Theta) \; \frac{p(\mathbf{y}, \mathbf{z}, \Theta | K)}{q(\mathbf{z}, \Theta)} \right) + C_{\mathbf{z}, \Theta}</math>
The constant <math>C_{\mathbf{z}, \Theta}</math> is here to remind us that <math>q</math> has the constraint of being a distribution, i.e. of summing to 1, which can be enforced by a Lagrange multiplier.


We can then use the concavity of the logarithm ([http://en.wikipedia.org/wiki/Jensen%27s_inequality Jensen's inequality]) to derive a lower bound of the marginal log-likelihood:
 
<math>\mathrm{ln} \, p(\mathbf{y} | K) \ge \int_\mathbf{z} \int_\Theta \, \mathrm{d}\mathbf{z} \, \mathrm{d}\Theta \; q(\mathbf{z}, \Theta) \; \mathrm{ln} \, \frac{p(\mathbf{y}, \mathbf{z}, \Theta | K)}{q(\mathbf{z}, \Theta)} + C_{\mathbf{z}, \Theta} = \mathcal{F}(q)</math>
 
Let's call this lower bound <math>\mathcal{F}(q)</math> as it is a [http://en.wikipedia.org/wiki/Functional_%28mathematics%29 functional], i.e. a ''function of functions''. To gain some intuition about the impact of introducing <math>q</math>, let's expand <math>\mathcal{F}</math>:
 
<math>\mathcal{F}(q) = \int_\mathbf{z} \int_\Theta \, \mathrm{d}\mathbf{z} \, \mathrm{d}\Theta \; q(\mathbf{z}, \Theta) \; \mathrm{ln} \, p(\mathbf{y} | \mathbf{z}, \Theta, K) + \int_\mathbf{z} \int_\Theta \, \mathrm{d}\mathbf{z} \, \mathrm{d}\Theta \; q(\mathbf{z}, \Theta) \; \mathrm{ln} \, \frac{p(\mathbf{z}, \Theta | K)}{q(\mathbf{z}, \Theta)} + C_{\mathbf{z}, \Theta}</math>
 
<math>\mathcal{F}(q) = \mathrm{E}_q \left[ \mathrm{ln} \, p(\mathbf{y} | \mathbf{z}, \Theta, K) \right] - D_{KL}(q(\mathbf{z}, \Theta) \, || \, p(\mathbf{z}, \Theta | K)) + C_{\mathbf{z}, \Theta}</math>
 
From this, it is clear that <math>\mathcal{F}</math> (i.e. a lower bound of the marginal log-likelihood) is the expected conditional log-likelihood under <math>q</math> minus the [http://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence Kullback-Leibler divergence] between the variational distribution <math>q</math> and the joint prior of latent variables and parameters.
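Equivalently, writing <math>p(\mathbf{y}, \mathbf{z}, \Theta | K) = p(\mathbf{y} | K) \, p(\mathbf{z}, \Theta | \mathbf{y}, K)</math> inside the integral (and noting that the constant <math>C_{\mathbf{z}, \Theta}</math> vanishes once <math>q</math> integrates to 1) gives a second reading of the same bound:

<math>\mathcal{F}(q) = \mathrm{ln} \, p(\mathbf{y} | K) - D_{KL}(q(\mathbf{z}, \Theta) \, || \, p(\mathbf{z}, \Theta | \mathbf{y}, K))</math>

As the Kullback-Leibler divergence is always non-negative, <math>\mathcal{F}(q)</math> is indeed a lower bound of the marginal log-likelihood, and it is tight exactly when <math>q</math> equals the joint posterior <math>p(\mathbf{z}, \Theta | \mathbf{y}, K)</math>.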
 
In practice, we have to make the following crucial assumption of independence on <math>q</math> in order for the calculations to be analytically tractable:
 
<math>q(\mathbf{z}, \Theta) = q_{\mathbf{z}}(\mathbf{z}) q_\Theta(\Theta)</math>
 
This means that <math>q_\mathbf{z} q_\Theta</math> approximates the joint posterior <math>p(\mathbf{z}, \Theta | \mathbf{y}, K)</math>, and therefore the lower bound will be tight if and only if this approximation is exact, i.e. if the Kullback-Leibler divergence between <math>q</math> and the joint posterior is zero.
 
As we ultimately aim at approximating the joint posterior of the latent variables and the parameters, while keeping the bound on the marginal log-likelihood as tight as possible, we will use the [http://en.wikipedia.org/wiki/Calculus_of_variations calculus of variations] to find the functions <math>q_\mathbf{z}</math> and <math>q_\Theta</math> that maximize the functional <math>\mathcal{F}</math>.
 
<math>\mathcal{F}(q_\mathbf{z}, q_\Theta) = \int_\Theta \, \mathrm{d}\Theta \; q_\Theta(\Theta) \left( \int_\mathbf{z} \, \mathrm{d}\mathbf{z} \; q_\mathbf{z}(\mathbf{z}) \; \mathrm{ln} \, \frac{p(\mathbf{y}, \mathbf{z} | \Theta, K)}{q_\mathbf{z}(\mathbf{z})} + \mathrm{ln} \, \frac{p(\Theta | K)}{q_\Theta(\Theta)} \right) + C_{\mathbf{z}} + C_{\Theta}</math>
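Setting the functional derivative of <math>\mathcal{F}</math> with respect to each factor to zero, while holding the other factor fixed and enforcing normalization with a Lagrange multiplier, gives the standard mean-field stationarity conditions (stated here without derivation):

<math>\mathrm{ln} \, q_\mathbf{z}^*(\mathbf{z}) = \mathrm{E}_{q_\Theta} \left[ \mathrm{ln} \, p(\mathbf{y}, \mathbf{z}, \Theta | K) \right] + \mathrm{constant}</math>

<math>\mathrm{ln} \, q_\Theta^*(\Theta) = \mathrm{E}_{q_\mathbf{z}} \left[ \mathrm{ln} \, p(\mathbf{y}, \mathbf{z}, \Theta | K) \right] + \mathrm{constant}</math>

where each constant is fixed by the requirement that the corresponding <math>q^*</math> integrates to 1.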
 
This naturally leads to a procedure very similar to the EM algorithm: at the E step, we update <math>q_\mathbf{z}</math> (the distribution over the latent variables) using the current expectations under <math>q_\Theta</math>, and, at the M step, we update <math>q_\Theta</math> (the distribution over the parameters) using the current expectations under <math>q_\mathbf{z}</math>; each update can only increase (or leave unchanged) the lower bound <math>\mathcal{F}</math>.
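To illustrate this alternating structure only (the exact update equations are the subject of the two TODO sections below), here is a minimal Python sketch assuming the standard mean-field updates for this conjugate model, as derived for instance in chapter 10 of Bishop's ''Pattern Recognition and Machine Learning'': <math>q(\mathbf{w})</math> is Dirichlet and each <math>q(\mu_k, \tau_k)</math> is Normal-Gamma. All hyperparameter values are hypothetical, and the lower bound <math>\mathcal{F}</math> (the usual convergence check) is not computed.

<pre>
import numpy as np
from scipy.special import digamma, logsumexp

def vb_normal_mixture(y, K, n_iter=100, gamma0=1.0, alpha0=2.0, beta0=1.0,
                      mu0=0.0, tau0=0.1, seed=0):
    """Sketch of mean-field VB for a univariate Normal mixture with the
    conjugate priors above (hyperparameter values are placeholders).
    q(w) = Dirichlet(g); q(mu_k, tau_k) = Normal(m_k, (kappa_k tau_k)^-1) x Gamma(a_k, rate b_k)."""
    rng = np.random.default_rng(seed)
    N = len(y)
    r = rng.dirichlet(np.ones(K), size=N)  # initial responsibilities E[z_nk]
    for _ in range(n_iter):
        # update q_Theta given the responsibilities ("M-like" step)
        Nk = r.sum(axis=0) + 1e-10
        ybar = (r * y[:, None]).sum(axis=0) / Nk
        Sk = (r * (y[:, None] - ybar[None, :]) ** 2).sum(axis=0)
        g = gamma0 + Nk                              # Dirichlet for w
        kappa = tau0 + Nk                            # precision scaling of q(mu_k | tau_k)
        m = (tau0 * mu0 + Nk * ybar) / kappa         # mean of q(mu_k | tau_k)
        a = alpha0 + 0.5 * Nk                        # Gamma shape for tau_k
        b = beta0 + 0.5 * (Sk + tau0 * Nk * (ybar - mu0) ** 2 / kappa)  # Gamma rate
        # update q_z given q_Theta ("E-like" step)
        E_ln_w = digamma(g) - digamma(g.sum())
        E_ln_tau = digamma(a) - np.log(b)
        E_quad = (a / b)[None, :] * (y[:, None] - m[None, :]) ** 2 + 1.0 / kappa[None, :]
        log_rho = E_ln_w + 0.5 * E_ln_tau - 0.5 * np.log(2 * np.pi) - 0.5 * E_quad
        r = np.exp(log_rho - logsumexp(log_rho, axis=1, keepdims=True))
    return r, dict(g=g, kappa=kappa, m=m, a=a, b=b)

# toy data from two well-separated components
rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(-2.0, 1.0, 70), rng.normal(3.0, 2.0, 130)])
r, q_theta = vb_normal_mixture(y, K=2)
print(q_theta["m"], q_theta["a"] / q_theta["b"])  # E[mu_k] and E[tau_k] under q_Theta
</pre>

On such well-separated toy data, the two posterior means should end up close to -2 and 3 (up to label switching).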
 
 
* '''Updates for <math>q_\mathbf{z}</math>''':


TODO


* '''Updates for <math>q_\Theta</math>''':


TODO

