User:Timothee Flutre/Notebook/Postdoc/2011/12/14
From OpenWetWare
* '''References''':
** tutorial: document from Carlo Tomasi (Duke University)
** introduction to mixture models: PhD thesis of Matthew Stephens (Oxford, 2000)
** articles on the Bayesian approach: Diebolt and Robert (1994); Richardson and Green (1997); Jasra, Holmes and Stephens (2005)
Revision as of 17:53, 29 December 2011
Learn about mixture models and the EM algorithm

(Caution, this is my own quick-and-dirty tutorial; see the references for presentations by professional statisticians.)
As we differentiate with respect to μ_{k}, all the other means μ_{l} with l ≠ k are constant, and thus disappear:
And finally:
Once we put all together, we end up with:
By convention, we denote by \hat{μ}_{k} the maximum-likelihood estimate of μ_{k}:
Therefore, we finally obtain:
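(The displayed equation did not survive on this page; for reference, writing p(k|x_i) as shorthand for the membership probability P(z_i=k|x_i,θ), the standard ML estimate this derivation arrives at is:)

```latex
\hat{\mu}_k = \frac{\sum_{i=1}^N p(k \mid x_i)\, x_i}{\sum_{i=1}^N p(k \mid x_i)}
```

that is, a weighted average of the observations, each observation weighted by its membership probability for cluster k.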
By doing the same kind of algebra, we differentiate the log-likelihood l(θ) w.r.t. σ_{k}:
And then we obtain the ML estimates for the standard deviation of each cluster:
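(Again, the displayed equation is missing here; the standard ML estimate, with the same shorthand p(k|x_i) = P(z_i=k|x_i,θ), is:)

```latex
\hat{\sigma}_k = \sqrt{ \frac{\sum_{i=1}^N p(k \mid x_i)\,(x_i - \hat{\mu}_k)^2}{\sum_{i=1}^N p(k \mid x_i)} }
```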
The partial derivative of l(θ) w.r.t. w_{k} is trickier, because the mixture weights are constrained to sum to one (Σ_{k} w_{k} = 1), which is usually handled by introducing a Lagrange multiplier. ... <TO DO> ...
Finally, here are the ML estimates for the mixture proportions:
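(The displayed equation is missing here too; defining N_k = Σ_i p(k|x_i) as the effective number of observations assigned to cluster k, the standard result is:)

```latex
\hat{w}_k = \frac{N_k}{N} = \frac{1}{N} \sum_{i=1}^N p(k \mid x_i)
```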
#' Generate univariate observations from a mixture of Normals
#'
#' @param K number of components
#' @param N number of observations
GetUnivariateSimulatedData <- function(K=2, N=100){
  mus <- seq(0, 6*(K-1), 6)
  sigmas <- runif(n=K, min=0.5, max=1.5)
  tmp <- floor(rnorm(n=K-1, mean=floor(N/K), sd=5))
  ns <- c(tmp, N - sum(tmp))
  clusters <- as.factor(matrix(unlist(lapply(1:K, function(k){rep(k, ns[k])})),
                               ncol=1))
  obs <- matrix(unlist(lapply(1:K, function(k){
    rnorm(n=ns[k], mean=mus[k], sd=sigmas[k])
  })))
  new.order <- sample(1:N, N)
  obs <- obs[new.order]
  rownames(obs) <- NULL
  clusters <- clusters[new.order]
  return(list(obs=obs, clusters=clusters, mus=mus, sigmas=sigmas,
              mix.probas=ns/N))
}
#' Return probas of latent variables given data and parameters from previous iteration
#'
#' @param data Nx1 vector of observations
#' @param params list which components are mus, sigmas and mix.probas
Estep <- function(data, params){
  GetMembershipProbas(data, params$mus, params$sigmas, params$mix.probas)
}

#' Return the membership probabilities P(zi=k/xi,theta)
#'
#' @param data Nx1 vector of observations
#' @param mus Kx1 vector of means
#' @param sigmas Kx1 vector of std deviations
#' @param mix.probas Kx1 vector of mixing probas P(zi=k/theta)
#' @return NxK matrix of membership probas
GetMembershipProbas <- function(data, mus, sigmas, mix.probas){
  N <- length(data)
  K <- length(mus)
  tmp <- matrix(unlist(lapply(1:N, function(i){
    x <- data[i]
    norm.const <- sum(unlist(Map(function(mu, sigma, mix.proba){
      mix.proba * GetUnivariateNormalDensity(x, mu, sigma)
    }, mus, sigmas, mix.probas)))
    unlist(Map(function(mu, sigma, mix.proba){
      mix.proba * GetUnivariateNormalDensity(x, mu, sigma) / norm.const
    }, mus[-K], sigmas[-K], mix.probas[-K]))
  })), ncol=K-1, byrow=TRUE)
  membership.probas <- cbind(tmp, apply(tmp, 1, function(x){1 - sum(x)}))
  names(membership.probas) <- NULL
  return(membership.probas)
}

#' Univariate Normal density
GetUnivariateNormalDensity <- function(x, mu, sigma){
  return( 1/(sigma * sqrt(2*pi)) * exp(-1/(2*sigma^2)*(x-mu)^2) )
}
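The code above stops at the E-step; the corresponding M-step, which applies the ML estimates derived earlier, is not shown on this page. A minimal sketch (the function name Mstep and its interface are my own, assuming the NxK matrix of membership probabilities returned by GetMembershipProbas):

```r
#' Return ML estimates of mus, sigmas and mix.probas given membership probas
#'
#' @param data Nx1 vector of observations
#' @param membership.probas NxK matrix of P(zi=k/xi,theta)
Mstep <- function(data, membership.probas){
  K <- ncol(membership.probas)
  N.k <- colSums(membership.probas)  # effective nb of observations per cluster
  mus <- as.vector((t(membership.probas) %*% data) / N.k)
  sigmas <- sapply(1:K, function(k){
    sqrt(sum(membership.probas[,k] * (data - mus[k])^2) / N.k[k])
  })
  mix.probas <- N.k / length(data)
  return(list(mus=mus, sigmas=sigmas, mix.probas=mix.probas))
}
```

Alternating Estep and Mstep until the log-likelihood stops increasing would give the full EM algorithm.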
