User:Timothee Flutre/Notebook/Postdoc/2012/08/16


Variational Bayes approach for the mixture of Normals

  • Motivation: I have described on another page the basics of mixture models and the EM algorithm in a frequentist context. It is worth reading that page before continuing. Here I am interested in the Bayesian approach as well as in a specific variational method (nicknamed "Variational Bayes").


  • Data: N univariate observations, [math]\displaystyle{ y_1, \ldots, y_N }[/math], gathered into the vector [math]\displaystyle{ \mathbf{y} }[/math]
  • Model: mixture of K Normal distributions
  • Parameters: K mixture weights ([math]\displaystyle{ w_k }[/math]), K means ([math]\displaystyle{ \mu_k }[/math]) and K precisions ([math]\displaystyle{ \tau_k }[/math]), one per mixture component

[math]\displaystyle{ \Theta = \{w_1,\ldots,w_K,\mu_1,\ldots,\mu_K,\tau_1,\ldots,\tau_K\} }[/math]

  • Constraints: [math]\displaystyle{ \sum_{k=1}^K w_k = 1 }[/math] and [math]\displaystyle{ \forall k \; w_k \gt 0 }[/math].
  • Observed likelihood: observations assumed exchangeable (independent and identically distributed given the parameters)

[math]\displaystyle{ p(\mathbf{y} | \Theta, K) = \prod_{n=1}^N p(y_n|\Theta,K) = \prod_{n=1}^N \sum_{k=1}^K w_k \; \mathcal{N}(y_n;\mu_k,\tau_k^{-1}) }[/math]

  • Latent variables: N hidden variables, [math]\displaystyle{ z_1,\ldots,z_N }[/math], each being a vector of length K with a single 1 indicating the component to which the [math]\displaystyle{ n^{th} }[/math] observation belongs, and K-1 zeroes.

[math]\displaystyle{ p(\mathbf{z}|\mathbf{w},K) = \prod_{n=1}^N p(z_n|\mathbf{w},K) = \prod_{n=1}^N \prod_{k=1}^K w_k^{z_{nk}} }[/math]

  • Augmented likelihood:

[math]\displaystyle{ p(\mathbf{y},\mathbf{z}|\Theta,K) = \prod_{n=1}^N p(y_n,z_n|\Theta,K) = \prod_{n=1}^N p(z_n|\Theta,K) p(y_n|z_n,\Theta,K) = \prod_{n=1}^N \prod_{k=1}^K w_k^{z_{nk}} \; \mathcal{N}(y_n;\mu_k,\tau_k^{-1})^{z_{nk}} }[/math]
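
To make this latent-variable formulation concrete, here is a minimal simulation sketch in Python (NumPy assumed; the parameter values are arbitrary and chosen only for illustration): each observation is generated by first drawing its indicator [math]\displaystyle{ z_n }[/math] and then drawing [math]\displaystyle{ y_n }[/math] from the corresponding component.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

# arbitrary "true" parameters for K = 3 components (illustration only)
w = np.array([0.5, 0.3, 0.2])      # mixture weights, sum to 1
mu = np.array([-2.0, 0.0, 3.0])    # component means
tau = np.array([4.0, 1.0, 0.25])   # component precisions

N = 1000
z = rng.multinomial(1, w, size=N)  # z_n ~ Mult_K(1, w), one-of-K coding, shape (N, K)
k_n = z.argmax(axis=1)             # index of the component of each observation
y = rng.normal(loc=mu[k_n], scale=1.0 / np.sqrt(tau[k_n]))  # y_n | z_n, precision tau_k
</syntaxhighlight>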

  • Maximum-likelihood estimation: integrate out the latent variables

[math]\displaystyle{ \mathrm{ln} \, p(\mathbf{y} | \Theta, K) = \sum_{n=1}^N \mathrm{ln} \, \int_{z_n} p(y_n, z_n | \Theta, K) \, \mathrm{d}{z_n} }[/math]

The latent variables induce dependencies between all the parameters of the model. This makes it difficult to find the parameters that maximize the likelihood. An elegant solution is to introduce a variational distribution of parameters and latent variables, which leads to a re-formulation of the classical EM algorithm. But let's show it directly in the Bayesian paradigm.
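
Before that, to see what integrating out each [math]\displaystyle{ z_n }[/math] amounts to numerically, here is a hedged sketch of evaluating the observed log-likelihood for a given [math]\displaystyle{ \Theta }[/math] (SciPy assumed; the function name obs_loglik is mine, and y, w, mu, tau are as in the simulation above):

<syntaxhighlight lang="python">
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

def obs_loglik(y, w, mu, tau):
    """ln p(y | Theta, K) = sum_n ln sum_k w_k N(y_n; mu_k, 1/tau_k)."""
    # log density of each observation under each component, shape (N, K)
    log_comp = norm.logpdf(y[:, None], loc=mu[None, :],
                           scale=1.0 / np.sqrt(tau)[None, :])
    # the sum over the K values of z_n becomes a log-sum-exp over k
    return logsumexp(np.log(w)[None, :] + log_comp, axis=1).sum()
</syntaxhighlight>

The sum over k sits inside the logarithm, which is precisely what makes direct maximization over [math]\displaystyle{ \Theta }[/math] awkward.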

  • Priors: conjugate
    • [math]\displaystyle{ p(\Theta | K) = p(\mathbf{w} | K) \prod_k p(\tau_k) p(\mu_k | \tau_k) }[/math]
    • [math]\displaystyle{ \forall k \; \tau_k \sim \mathcal{G}a(\alpha,\beta) }[/math] and [math]\displaystyle{ \forall k \; \mu_k | \tau_k \sim \mathcal{N}(\mu_0,(\tau_0 \tau_k)^{-1}) }[/math]
    • [math]\displaystyle{ \forall n \; z_n \sim \mathcal{M}ult_K(1,\mathbf{w}) }[/math] and [math]\displaystyle{ \mathbf{w} \sim \mathcal{D}ir(\gamma_0) }[/math]
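
To make the prior structure concrete, here is a minimal sketch of drawing one set of parameters and indicators from these priors (the hyperparameter values are arbitrary; note that NumPy's gamma sampler is parameterized by shape and scale, so the rate [math]\displaystyle{ \beta }[/math] enters as its inverse):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

K, N = 3, 100
# arbitrary hyperparameter values, only for the sketch
gamma0, alpha, beta, mu0, tau0 = 1.0, 2.0, 2.0, 0.0, 0.1

w = rng.dirichlet(np.full(K, gamma0))                   # w ~ Dir(gamma0)
tau = rng.gamma(shape=alpha, scale=1.0 / beta, size=K)  # tau_k ~ Ga(alpha, beta), beta a rate
mu = rng.normal(mu0, 1.0 / np.sqrt(tau0 * tau))         # mu_k | tau_k ~ N(mu0, 1/(tau0 tau_k))
z = rng.multinomial(1, w, size=N)                       # z_n ~ Mult_K(1, w)
</syntaxhighlight>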


  • Variational Bayes: let's focus here on calculating the marginal log-likelihood of our data set in order to perform model comparison:

[math]\displaystyle{ \mathrm{ln} \, p(\mathbf{y} | K) = \mathrm{ln} \, \int_\mathbf{z} \int_\Theta \; p(\mathbf{y}, \mathbf{z}, \Theta | K) \, \mathrm{d}\mathbf{z} \, \mathrm{d}\Theta }[/math]

We can now introduce a distribution [math]\displaystyle{ q_{\mathbf{z}, \Theta} }[/math]:

[math]\displaystyle{ \mathrm{ln} \, p(\mathbf{y} | K) = \mathrm{ln} \, \left( \int_\mathbf{z} \int_\Theta \; q_{\mathbf{z}, \Theta}(\mathbf{z}, \Theta) \; \frac{p(\mathbf{y}, \mathbf{z}, \Theta | K)}{q_{\mathbf{z}, \Theta}(\mathbf{z}, \Theta)} \, \mathrm{d}\mathbf{z} \, \mathrm{d}\Theta \right) + C_{\mathbf{z}, \Theta} }[/math]

The constant [math]\displaystyle{ C_{\mathbf{z}, \Theta} }[/math] is here to remind us that [math]\displaystyle{ q_{\mathbf{z}, \Theta} }[/math] has the constraint of being a distribution, i.e. of summing to 1, which can be enforced by a Lagrange multiplier.

We can then use the concavity of the logarithm (Jensen's inequality) to derive a lower bound of the marginal log-likelihood:

[math]\displaystyle{ \mathrm{ln} \, p(\mathbf{y} | K) \ge \int_\mathbf{z} \int_\Theta \; q_{\mathbf{z}, \Theta}(\mathbf{z}, \Theta) \; \mathrm{ln} \, \frac{p(\mathbf{y}, \mathbf{z}, \Theta | K)}{q_{\mathbf{z}, \Theta}(\mathbf{z}, \Theta)} \, \mathrm{d}\mathbf{z} \, \mathrm{d}\Theta + C_{\mathbf{z}, \Theta} = \mathcal{F}_K(q) }[/math]

Let's call this lower bound [math]\displaystyle{ \mathcal{F}_K(q) }[/math] as it is a functional, i.e. a function of functions. To gain some intuition about the impact of introducing [math]\displaystyle{ q }[/math], let's expand [math]\displaystyle{ \mathcal{F}_K }[/math]:

[math]\displaystyle{ \mathcal{F}_K(q) = \int_\mathbf{z} \int_\Theta \; q_{\mathbf{z}, \Theta}(\mathbf{z}, \Theta) \; \mathrm{ln} \, p(\mathbf{y} | \mathbf{z}, \Theta, K) \, \mathrm{d}\mathbf{z} \, \mathrm{d}\Theta \; + \; \int_\mathbf{z} \int_\Theta \; q_{\mathbf{z}, \Theta}(\mathbf{z}, \Theta) \; \mathrm{ln} \, \frac{p(\mathbf{z}, \Theta | K)}{q_{\mathbf{z}, \Theta}(\mathbf{z}, \Theta)} \, \mathrm{d}\mathbf{z} \, \mathrm{d}\Theta \; + \; C_{\mathbf{z}, \Theta} }[/math]

[math]\displaystyle{ \mathcal{F}_K(q) = \mathbb{E}_q[\mathrm{ln} \, p(\mathbf{y} | \mathbf{z}, \Theta, K)] - D_{KL}(q_{\mathbf{z}, \Theta} \, || \, p(\mathbf{z}, \Theta | K)) }[/math]

From this, it is clear that [math]\displaystyle{ \mathcal{F}_K }[/math] (i.e. a lower bound of the marginal log-likelihood) is the expected conditional log-likelihood under [math]\displaystyle{ q }[/math] minus the Kullback-Leibler divergence between the variational distribution [math]\displaystyle{ q }[/math] and the joint prior of latent variables and parameters. Equivalently, since [math]\displaystyle{ \mathrm{ln} \, p(\mathbf{y} | K) = \mathcal{F}_K(q) + D_{KL}(q_{\mathbf{z}, \Theta} \, || \, p(\mathbf{z}, \Theta | \mathbf{y}, K)) }[/math], maximizing [math]\displaystyle{ \mathcal{F}_K }[/math] amounts to minimizing the Kullback-Leibler divergence between [math]\displaystyle{ q }[/math] and the joint posterior of latent variables and parameters. As a side note, minimizing [math]\displaystyle{ D_{KL}(p || q) }[/math] (with the arguments swapped) is what the expectation-propagation technique does.
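
As a quick numerical sanity check of this bound, here is a toy sketch with a single observation and a single discrete latent variable (not the mixture model itself): the bound equals the log marginal exactly when [math]\displaystyle{ q }[/math] is the true posterior, and lies below it otherwise.

<syntaxhighlight lang="python">
import numpy as np

# joint p(y, z) for one observation y and a discrete latent z in {0, 1, 2}
p_joint = np.array([0.10, 0.25, 0.05])   # p(y, z = k), arbitrary values
log_p_y = np.log(p_joint.sum())          # exact ln p(y)

def lower_bound(q):
    """F(q) = sum_z q(z) ln [ p(y, z) / q(z) ]."""
    return np.sum(q * (np.log(p_joint) - np.log(q)))

q_post = p_joint / p_joint.sum()         # true posterior p(z | y)
q_other = np.array([0.4, 0.4, 0.2])      # some other distribution over z

print(log_p_y, lower_bound(q_post))      # identical (up to rounding)
print(lower_bound(q_other) <= log_p_y)   # True, by Jensen's inequality
</syntaxhighlight>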

In practice, we have to make the following crucial assumption of independence on [math]\displaystyle{ q_{\mathbf{z}, \Theta} }[/math] in order for the calculations to be analytically tractable:

[math]\displaystyle{ q_{\mathbf{z}, \Theta}(\mathbf{z}, \Theta) = q_{\mathbf{z}}(\mathbf{z}) q_\Theta(\Theta) }[/math]

This means that [math]\displaystyle{ q_\mathbf{z} q_\Theta }[/math] approximates the joint posterior, and therefore the lower bound will be tight if and only if this approximation is exact, i.e. if the Kullback-Leibler divergence between [math]\displaystyle{ q_{\mathbf{z}, \Theta} }[/math] and the joint posterior is zero.

As we ultimately aim at inferring the latent variables and parameters while approximating the marginal log-likelihood as tightly as possible, we will use the calculus of variations to find the functions [math]\displaystyle{ q_\mathbf{z} }[/math] and [math]\displaystyle{ q_\Theta }[/math] that maximize the functional [math]\displaystyle{ \mathcal{F}_K }[/math].

[math]\displaystyle{ \mathcal{F}_K(q_\mathbf{z}, q_\Theta) = \int_\Theta \; q_\Theta(\Theta) \; \left( \int_\mathbf{z} \; q_\mathbf{z}(\mathbf{z}) \; \mathrm{ln} \, \frac{p(\mathbf{y}, \mathbf{z} | \Theta, K)}{q_\mathbf{z}(\mathbf{z})} \, \mathrm{d}\mathbf{z} + \mathrm{ln} \, \frac{p(\Theta | K)}{q_\Theta(\Theta)} \right) \, \mathrm{d}\Theta \; + \; C_{\mathbf{z}} \; + \; C_{\Theta} }[/math]

This naturally leads to a procedure very similar to the EM algorithm: at the E step, we update the variational distribution over the latent variables, [math]\displaystyle{ q_\mathbf{z} }[/math], using expectations taken over [math]\displaystyle{ q_\Theta }[/math], and, at the M step, we update the variational distribution over the parameters, [math]\displaystyle{ q_\Theta }[/math], using expectations taken over [math]\displaystyle{ q_\mathbf{z} }[/math].


  • Updates for [math]\displaystyle{ q_\mathbf{z} }[/math]:

We start by writing the functional derivative of [math]\displaystyle{ \mathcal{F}_K }[/math] with respect to [math]\displaystyle{ q_{\mathbf{z}} }[/math]:

[math]\displaystyle{ \frac{\partial \mathcal{F}_K}{\partial q_{\mathbf{z}}} = \int_\Theta \; q_\Theta(\Theta) \; \frac{\partial}{\partial q_{\mathbf{z}}} \left( \int_\mathbf{z} \; \left( q_{\mathbf{z}}(\mathbf{z}) \mathrm{ln} \, p(\mathbf{y},\mathbf{z}|\Theta,K) - q_{\mathbf{z}}(\mathbf{z}) \mathrm{ln} \, q_{\mathbf{z}}(\mathbf{z}) \right) \, \mathrm{d}\mathbf{z} \right) \, \mathrm{d}\Theta \; + \; C_{\mathbf{z}} }[/math]

[math]\displaystyle{ \frac{\partial \mathcal{F}_K}{\partial q_{\mathbf{z}}} = \int_\Theta \; q_\Theta(\Theta) \; \left( \mathrm{ln} \, p(\mathbf{y},\mathbf{z}|\Theta,K) - \mathrm{ln} \, q_{\mathbf{z}}(\mathbf{z}) - 1 \right) \, \mathrm{d}\Theta \; + \; C_{\mathbf{z}} }[/math]

Then we set this functional derivative to zero. We also make use of a frequent assumption, namely that the variational distribution fully factorizes over each individual latent variable (mean-field assumption):

[math]\displaystyle{ \frac{\partial \mathcal{F}_K}{\partial q_{\mathbf{z}}} \bigg|_{q_{\mathbf{z}}^{(t+1)}} = 0 \Longleftrightarrow \forall \, n \; \mathrm{ln} \, q_{z_n}^{(t+1)}(z_n) = \int_\Theta \; q_\Theta(\Theta) \; \mathrm{ln} \, p(y_n,z_n|\Theta,K) \, \mathrm{d}\Theta \; + \; C_{z_n} }[/math]

Recognizing the expectation and factorizing [math]\displaystyle{ q_\Theta(\Theta) }[/math] into [math]\displaystyle{ q_\mathbf{w}(\mathbf{w})q_\mathbf{\mu,\tau}(\mathbf{\mu,\tau}) }[/math], we get:

[math]\displaystyle{ \mathrm{ln} \, q_{z_n}^{(t+1)}(z_n) = \mathbb{E}_\mathbf{w}[\mathrm{ln} \, p(z_n|\mathbf{w},K)] + \mathbb{E}_{\mathbf{\mu,\tau}}[\mathrm{ln} \, p(y_n|z_n,\mathbf{\mu},\mathbf{\tau},K)] \; + \; \text{constant} }[/math]

[math]\displaystyle{ \mathrm{ln} \, q_{z_n}^{(t+1)}(z_n) = \sum_{k=1}^K ( z_{nk} \; \mathrm{ln} \, \rho_{nk} ) \; + \; \text{constant} }[/math] where [math]\displaystyle{ \mathrm{ln} \, \rho_{nk} = \mathbb{E}[\mathrm{ln} \, w_k] + \frac{1}{2} \mathbb{E}[\mathrm{ln} \, \tau_k] - \frac{1}{2} \mathrm{ln} \, 2\pi - \frac{1}{2} \mathbb{E}[\tau_k (y_n-\mu_k)^2] }[/math]

Taking the exponential: [math]\displaystyle{ q_{z_n}^{(t+1)}(z_n) \propto \prod_k \rho_{nk}^{z_{nk}} }[/math]

As this must be a distribution, it has to sum to one, and therefore:

[math]\displaystyle{ q_{z_n}^{(t+1)}(z_n) = \prod_k r_{nk}^{z_{nk}} }[/math] where [math]\displaystyle{ r_{nk} = \frac{\rho_{nk}}{\sum_{k'=1}^K \rho_{nk'}} }[/math] ("r" stands for "responsibility")

Interestingly, even though we haven't specified anything yet about [math]\displaystyle{ q_{z_n} }[/math], we can see that it is of the same form as the prior on [math]\displaystyle{ z_n }[/math], a Multinomial distribution.
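
Here is a hedged sketch of this update in Python. The responsibilities are computed from [math]\displaystyle{ \mathrm{ln} \, \rho_{nk} }[/math] exactly as above, but the expectations required under the Dirichlet and Normal-Gamma variational distributions are not derived in this entry, so the digamma-based formulas in the docstring should be taken as assumptions borrowed from the standard conjugate results (the function name update_qz is mine):

<syntaxhighlight lang="python">
import numpy as np
from scipy.special import digamma, logsumexp

def update_qz(y, gamma, mu_t, tau_t, alpha_t, beta_t):
    """E-like step: responsibilities r_nk from the current variational hyperparameters.

    Assumed expectations, for q_w = Dir(gamma) and q(mu_k, tau_k) Normal-Gamma:
      E[ln w_k]               = digamma(gamma_k) - digamma(sum_k' gamma_k')
      E[ln tau_k]             = digamma(alpha_t_k) - ln(beta_t_k)
      E[tau_k (y_n - mu_k)^2] = 1/tau_t_k + (alpha_t_k / beta_t_k) (y_n - mu_t_k)^2
    """
    e_ln_w = digamma(gamma) - digamma(gamma.sum())                        # shape (K,)
    e_ln_tau = digamma(alpha_t) - np.log(beta_t)                          # shape (K,)
    e_quad = 1.0 / tau_t + (alpha_t / beta_t) * (y[:, None] - mu_t) ** 2  # shape (N, K)
    ln_rho = e_ln_w + 0.5 * e_ln_tau - 0.5 * np.log(2 * np.pi) - 0.5 * e_quad
    # normalize over k in log space to get r_nk summing to one for each n
    return np.exp(ln_rho - logsumexp(ln_rho, axis=1, keepdims=True))
</syntaxhighlight>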


  • Updates for [math]\displaystyle{ q_\Theta }[/math]:

We start by writing the functional derivative of [math]\displaystyle{ \mathcal{F}_K }[/math] with respect to [math]\displaystyle{ q_{\Theta} }[/math]:

[math]\displaystyle{ \frac{\partial \mathcal{F}_K}{\partial q_\Theta} = \frac{\partial}{\partial q_\Theta} \left( \int_\Theta \; q_\Theta(\Theta) \; \left( \int_\mathbf{z} \; q_\mathbf{z}(\mathbf{z}) \; \mathrm{ln} \, p(\mathbf{y}, \mathbf{z} | \Theta, K) \, \mathrm{d}\mathbf{z} + \mathrm{ln} \, \frac{p(\Theta | K)}{q_\Theta(\Theta)} \right) \, \mathrm{d}\Theta \right) \; + \; C_{\Theta} }[/math]

[math]\displaystyle{ \frac{\partial \mathcal{F}_K}{\partial q_\Theta} = \int_\mathbf{z} \; q_\mathbf{z}(\mathbf{z}) \; \left( \mathrm{ln} \, p(\mathbf{z} | \Theta, K) + \mathrm{ln} \, p(\mathbf{y} | \mathbf{z}, \Theta, K) \right) \, \mathrm{d}\mathbf{z} \; + \; \mathrm{ln} \, p(\Theta | K) - \mathrm{ln} \, q_\Theta(\Theta) \; - 1 \; + \; C_{\Theta} }[/math]

Then, when setting this functional derivative to zero, we obtain:

[math]\displaystyle{ \mathrm{ln} \, q_\Theta^{(t+1)}(\Theta) = \mathbb{E}_\mathbf{z}[\mathrm{ln} \, p(\mathbf{z} | \mathbf{w}, K)] \; + \; \sum_n \sum_k \mathbb{E}[z_{nk}] \mathrm{ln} \, p(y_n | \mu_k, \tau_k) \; + \; \mathrm{ln} \, p(\mathbf{w} | K) \; + \; \sum_k \mathrm{ln} \, p(\mu_k, \tau_k | K) \; + \; \text{constant} }[/math]

Note how no term involves both [math]\displaystyle{ \mathbf{w} }[/math] and [math]\displaystyle{ \mu_k,\tau_k }[/math]. This naturally implies the factorization [math]\displaystyle{ q_\Theta(\Theta) = q_\mathbf{w}(\mathbf{w})q_\mathbf{\mu,\tau}(\mathbf{\mu,\tau}) }[/math]. And we can also notice the following factorization [math]\displaystyle{ q_\mathbf{\mu,\tau}(\mathbf{\mu,\tau}) = \prod_k q(\mu_k, \tau_k) }[/math].

Starting with [math]\displaystyle{ \mathbf{w} }[/math]:

[math]\displaystyle{ \mathrm{ln} \, q_\mathbf{w}^{(t+1)}(\mathbf{w}) = \sum_n \sum_k \mathbb{E}[z_{nk}] \, \mathrm{ln} \, w_k \; + \; (\gamma_0 - 1) \sum_k \mathrm{ln} \, w_k \; + \; \text{constant} }[/math]

Recognizing [math]\displaystyle{ \mathbb{E}[z_{nk}] = r_{nk} }[/math] and taking the exponential, we get another Dirichlet distribution:

[math]\displaystyle{ q_\mathbf{w}^{(t+1)}(\mathbf{w}) \propto \prod_k w_k^{\gamma_0-1+\sum_n r_{nk}} }[/math]

that is, [math]\displaystyle{ q_\mathbf{w}^{(t+1)} = \mathcal{D}ir(\gamma^{(t+1)}) }[/math] with [math]\displaystyle{ \gamma_k^{(t+1)}=\gamma_0+\sum_n r_{nk} }[/math]

As for the other parameters, we recognize a Normal-Gamma distribution:

[math]\displaystyle{ q(\mu_k,\tau_k) = q(\tau_k) q(\mu_k | \tau_k) = \mathcal{N}\mathcal{G}a(\tilde{\mu}_k, \tilde{\tau}_k^{-1}, \tilde{\alpha}_k, \tilde{\beta}_k) }[/math]

It's easier to first define three statistics of the data weighted by the responsibilities:

[math]\displaystyle{ N_k = \sum_n r_{nk} }[/math]

[math]\displaystyle{ \bar{y}_k = \frac{1}{N_k} \sum_n r_{nk} y_n }[/math]

[math]\displaystyle{ S_k = \frac{1}{N_k} \sum_n r_{nk} (y_n - \bar{y}_k)^2 }[/math].

This allows us to concisely write (TODO: check the algebra...):

[math]\displaystyle{ \tilde{\tau}_k = \tau_0 + N_k }[/math]

[math]\displaystyle{ \tilde{\mu}_k = \frac{1}{\tilde{\tau}_k} (\tau_0 \mu_0 + N_k \bar{y}_k) }[/math]

[math]\displaystyle{ \tilde{\alpha}_k = \alpha + \frac{N_k}{2} }[/math]

[math]\displaystyle{ \tilde{\beta}_k = \beta + \frac{1}{2} \left( N_k S_k + \frac{\tau_0 N_k}{\tau_0 + N_k}(\bar{y}_k - \mu_0)^2 \right) }[/math]
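
Putting these parameter updates together, here is a hedged sketch of the corresponding update of [math]\displaystyle{ q_\Theta }[/math] (the function name update_qtheta is mine; r is the N x K matrix of responsibilities from the update of [math]\displaystyle{ q_\mathbf{z} }[/math] above, and a small epsilon guards against empty components):

<syntaxhighlight lang="python">
import numpy as np

def update_qtheta(y, r, gamma0, alpha, beta, mu0, tau0):
    """M-like step: update the variational hyperparameters given responsibilities r (N x K)."""
    eps = 1e-10                                       # guard against N_k = 0
    N_k = r.sum(axis=0)                               # N_k = sum_n r_nk
    ybar_k = (r * y[:, None]).sum(axis=0) / (N_k + eps)
    S_k = (r * (y[:, None] - ybar_k) ** 2).sum(axis=0) / (N_k + eps)

    gamma = gamma0 + N_k                              # Dirichlet update
    tau_t = tau0 + N_k                                # Normal-Gamma updates
    mu_t = (tau0 * mu0 + N_k * ybar_k) / tau_t
    alpha_t = alpha + 0.5 * N_k
    beta_t = beta + 0.5 * (N_k * S_k
                           + tau0 * N_k * (ybar_k - mu0) ** 2 / (tau0 + N_k))
    return gamma, mu_t, tau_t, alpha_t, beta_t
</syntaxhighlight>

Alternating the two updates until the hyperparameters stabilize gives the EM-like scheme described above.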


  • Choice of K:

TODO


  • References:
    • the book "Pattern Recognition and Machine Learning" by Christopher Bishop (chapter 10 covers variational inference, including the variational mixture of Gaussians)