User:Timothee Flutre/Notebook/Postdoc/2011/06/28
Linear regression by ordinary least squares
[math]\displaystyle{ \forall n \in \{1,\ldots,N\}, \; y_n = \mu + \beta g_n + \epsilon_n \text{ with } \epsilon_n \sim N(0,\sigma^2) }[/math]

In matrix notation:

[math]\displaystyle{ y = X \theta + \epsilon }[/math] with [math]\displaystyle{ \epsilon \sim N_N(0,\sigma^2 I_N) }[/math] and [math]\displaystyle{ \theta^T = (\mu, \beta) }[/math]
Here is the ordinary least squares (OLS) estimator of [math]\displaystyle{ \theta }[/math]:

[math]\displaystyle{ \hat{\theta} = (X^T X)^{-1} X^T y }[/math]

Written out for this model:

[math]\displaystyle{ \begin{bmatrix} \hat{\mu} \\ \hat{\beta} \end{bmatrix} = \left( \begin{bmatrix} 1 & \ldots & 1 \\ g_1 & \ldots & g_N \end{bmatrix} \begin{bmatrix} 1 & g_1 \\ \vdots & \vdots \\ 1 & g_N \end{bmatrix} \right)^{-1} \begin{bmatrix} 1 & \ldots & 1 \\ g_1 & \ldots & g_N \end{bmatrix} \begin{bmatrix} y_1 \\ \vdots \\ y_N \end{bmatrix} }[/math]

[math]\displaystyle{ \begin{bmatrix} \hat{\mu} \\ \hat{\beta} \end{bmatrix} = \begin{bmatrix} N & \sum_n g_n \\ \sum_n g_n & \sum_n g_n^2 \end{bmatrix}^{-1} \begin{bmatrix} \sum_n y_n \\ \sum_n g_n y_n \end{bmatrix} }[/math]

Inverting the 2x2 matrix gives:

[math]\displaystyle{ \begin{bmatrix} \hat{\mu} \\ \hat{\beta} \end{bmatrix} = \frac{1}{N \sum_n g_n^2 - (\sum_n g_n)^2} \begin{bmatrix} \sum_n g_n^2 & - \sum_n g_n \\ - \sum_n g_n & N \end{bmatrix} \begin{bmatrix} \sum_n y_n \\ \sum_n g_n y_n \end{bmatrix} }[/math]

[math]\displaystyle{ \begin{bmatrix} \hat{\mu} \\ \hat{\beta} \end{bmatrix} = \frac{1}{N \sum_n g_n^2 - (\sum_n g_n)^2} \begin{bmatrix} \sum_n g_n^2 \sum_n y_n - \sum_n g_n \sum_n g_n y_n \\ - \sum_n g_n \sum_n y_n + N \sum_n g_n y_n \end{bmatrix} }[/math]

Let's now define four summary statistics, all very easy to compute:

[math]\displaystyle{ \bar{y} = \frac{1}{N} \sum_{n=1}^N y_n }[/math]

[math]\displaystyle{ \bar{g} = \frac{1}{N} \sum_{n=1}^N g_n }[/math]

[math]\displaystyle{ g^T g = \sum_{n=1}^N g_n^2 }[/math]

[math]\displaystyle{ g^T y = \sum_{n=1}^N g_n y_n }[/math]

This allows us to obtain the estimate of the effect size from the summary statistics alone:

[math]\displaystyle{ \hat{\beta} = \frac{g^T y - N \bar{g} \bar{y}}{g^T g - N \bar{g}^2} }[/math]

The same works for the estimate of the error variance, where [math]\displaystyle{ r }[/math] is the rank of [math]\displaystyle{ X }[/math] (here [math]\displaystyle{ r=2 }[/math]); expanding the quadratic form shows that [math]\displaystyle{ y^T y }[/math] is the only additional statistic required:

[math]\displaystyle{ \hat{\sigma}^2 = \frac{1}{N-r}(y - X\hat{\theta})^T(y - X\hat{\theta}) }[/math]

We can also benefit from this for the variances, hence the standard errors, of the parameter estimates:

[math]\displaystyle{ V(\hat{\theta}) = \hat{\sigma}^2 (X^T X)^{-1} }[/math]

[math]\displaystyle{ V(\hat{\theta}) = \hat{\sigma}^2 \frac{1}{N g^T g - N^2 \bar{g}^2} \begin{bmatrix} g^Tg & -N\bar{g} \\ -N\bar{g} & N \end{bmatrix} }[/math]

[math]\displaystyle{ V(\hat{\beta}) = \frac{\hat{\sigma}^2}{g^Tg - N\bar{g}^2} }[/math]
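As a quick sanity check of these formulas, here is a minimal R sketch; the simulated data and the variable names (y.bar, gtg, gty, etc.) are illustrative choices of mine, not part of the original derivation. It computes [math]\displaystyle{ \hat{\beta} }[/math], [math]\displaystyle{ \hat{\sigma}^2 }[/math] and [math]\displaystyle{ V(\hat{\beta}) }[/math] from the summary statistics alone and compares them with lm:

 ## hypothetical data, just to check the summary-statistic formulas
 set.seed(123)
 N <- 50
 g <- sample(x=0:2, size=N, replace=TRUE)
 y <- 4 + 1.5 * g + rnorm(n=N)
 
 ## the summary statistics
 y.bar <- mean(y)
 g.bar <- mean(g)
 gtg <- sum(g^2)
 gty <- sum(g * y)
 yty <- sum(y^2) # the extra statistic needed for sigma2.hat
 
 ## estimates from summary statistics only
 beta.hat <- (gty - N * g.bar * y.bar) / (gtg - N * g.bar^2)
 mu.hat <- y.bar - beta.hat * g.bar # from the first normal equation
 rss <- yty - N * mu.hat * y.bar - beta.hat * gty # expanded residual sum of squares
 sigma2.hat <- rss / (N - 2)
 var.beta.hat <- sigma2.hat / (gtg - N * g.bar^2)
 
 ## compare with lm: both differences should be ~0
 fit <- lm(y ~ g)
 coef(fit)[2] - beta.hat
 summary(fit)$coef[2, "Std. Error"] - sqrt(var.beta.hat)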
To simulate data under this model, first decompose the variance of the phenotype:

[math]\displaystyle{ V(y) = V(\mu + \beta g + \epsilon) = V(\mu) + V(\beta g) + V(\epsilon) = \beta^2 V(g) + \sigma^2 }[/math]

The most intuitive way to simulate data is therefore to fix the proportion of variance in [math]\displaystyle{ y }[/math] explained by the genotype, for instance [math]\displaystyle{ PVE=60\% }[/math], as well as the standard deviation of the errors, typically [math]\displaystyle{ \sigma=1 }[/math]. From this, we can calculate the corresponding effect size [math]\displaystyle{ \beta }[/math] of the genotype:

[math]\displaystyle{ PVE = \frac{V(\beta g)}{V(y)} = \frac{\beta^2 V(g)}{\beta^2 V(g) + \sigma^2} }[/math]

Therefore:

[math]\displaystyle{ \beta = \pm \sigma \sqrt{\frac{PVE}{(1 - PVE) \, V(g)}} }[/math]

Note that [math]\displaystyle{ g }[/math] is the random variable corresponding to the genotype encoded in allele dose, i.e. equal to 0, 1 or 2 copies of the minor allele. For our simulation, we fix the minor allele frequency [math]\displaystyle{ f }[/math] (e.g. [math]\displaystyle{ f=0.3 }[/math]) and assume Hardy-Weinberg equilibrium. Then [math]\displaystyle{ g }[/math] follows a binomial distribution with 2 trials, each with success probability [math]\displaystyle{ f }[/math]. As a consequence, its variance is [math]\displaystyle{ V(g)=2f(1-f) }[/math].

Here is some R code implementing all this (the genotype probabilities are ordered so that 0 copies of the minor allele is the most frequent class, as Hardy-Weinberg equilibrium requires):

 set.seed(1859)
 N <- 100   # sample size
 mu <- 4    # intercept
 pve <- 0.6 # proportion of variance explained by the genotype
 sigma <- 1 # standard deviation of the errors
 maf <- 0.3 # minor allele frequency
 beta <- sigma * sqrt(pve / ((1 - pve) * 2 * maf * (1 - maf))) # 1.88
 g <- sample(x=0:2, size=N, replace=TRUE,
             prob=c((1-maf)^2, 2*maf*(1-maf), maf^2)) # HWE genotype frequencies
 y <- mu + beta * g + rnorm(n=N, mean=0, sd=sigma)
 ols <- lm(y ~ g)
 summary(ols) # muhat and betahat should be close to mu=4 and beta=1.88
 sqrt(mean(ols$residuals^2)) # ML estimate of sigma (divides by N, not N-2)
 plot(x=0, type="n", xlim=range(g), ylim=range(y), xlab="genotypes",
      ylab="phenotypes", main="Simple linear regression")
 for(i in unique(g))
   points(x=jitter(g[g == i]), y=y[g == i], col=i+1, pch=19)
 abline(a=coefficients(ols)[1], b=coefficients(ols)[2])
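As a further sanity check on the PVE reasoning, a short sketch (continuing from the simulation above) compares the theoretical and empirical proportions of variance explained; the empirical value will differ slightly from 0.6 because of sampling noise:

 ## check the PVE, continuing from the simulation above
 var.genet <- beta^2 * 2 * maf * (1 - maf) # theoretical V(beta * g)
 var.genet / (var.genet + sigma^2)         # exactly pve=0.6, by construction
 var(beta * g) / var(y)                    # empirical PVE, close to 0.6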
Now add a covariate, e.g. gender, so that [math]\displaystyle{ X }[/math] has three columns (intercept, genotype, covariate) and [math]\displaystyle{ B^T = (\mu, \beta_g, \beta_c) }[/math]. As above, we want [math]\displaystyle{ \hat{B} }[/math], [math]\displaystyle{ \hat{\sigma} }[/math] and [math]\displaystyle{ V(\hat{B}) }[/math]. To get them efficiently, we start with the singular value decomposition of [math]\displaystyle{ X }[/math]:

[math]\displaystyle{ X = U D V^T }[/math]

This gives the Moore-Penrose pseudoinverse matrix of [math]\displaystyle{ X }[/math]:

[math]\displaystyle{ X^+ = (X^TX)^{-1}X^T = V D^{-1} U^T }[/math]

From this, we get the OLS estimate of the effect sizes:

[math]\displaystyle{ \hat{B} = X^+ y }[/math]

Then it is straightforward to get the residuals:

[math]\displaystyle{ \hat{E} = y - X \hat{B} }[/math]

With them, we can calculate the estimate of the standard deviation of the errors (dividing by [math]\displaystyle{ N-3 }[/math] since [math]\displaystyle{ X }[/math] has rank 3):

[math]\displaystyle{ \hat{\sigma} = \sqrt{\frac{1}{N-3} \hat{E}^T \hat{E}} }[/math]

And finally the covariance matrix of the estimates of the effect sizes, whose diagonal gives their squared standard errors:

[math]\displaystyle{ V(\hat{B}) = \hat{\sigma}^2 V D^{-2} V^T }[/math]

We can check this with some R code:

 ## simulate the data
 set.seed(1859)
 N <- 100
 mu <- 5
 Xg <- sample(x=0:2, size=N, replace=TRUE, prob=c(0.5, 0.3, 0.2)) # genotypes
 beta.g <- 0.5
 Xc <- sample(x=0:1, size=N, replace=TRUE, prob=c(0.7, 0.3)) # gender
 beta.c <- 0.3
 pve <- 0.8
 betas.gc.bar <- mean(beta.g * Xg + beta.c * Xc) # 0.405
 sigma <- sqrt((1/N) * sum((beta.g * Xg + beta.c * Xc - betas.gc.bar)^2) *
               (1 - pve) / pve) # 0.2
 y <- mu + beta.g * Xg + beta.c * Xc + rnorm(n=N, mean=0, sd=sigma)
 
 ## perform the OLS analysis with the SVD of X
 X <- cbind(rep(1, N), Xg, Xc)
 Xp <- svd(x=X)
 B.hat <- Xp$v %*% diag(1/Xp$d) %*% t(Xp$u) %*% y # effect size estimates
 E.hat <- y - X %*% B.hat                         # residuals
 sigma.hat <- as.numeric(sqrt((1/(N-3)) * t(E.hat) %*% E.hat)) # 0.211
 var.theta.hat <- sigma.hat^2 * Xp$v %*% diag((1/Xp$d)^2) %*% t(Xp$v)
 sqrt(diag(var.theta.hat)) # standard errors: 0.0304 0.0290 0.0463
 
 ## check all this
 ols <- lm(y ~ Xg + Xc)
 summary(ols) # muhat=4.99+-0.030, beta.g.hat=0.52+-0.029, beta.c.hat=0.24+-0.046, R2=0.789

Such an analysis can also be done easily in a custom C/C++ program thanks to the GSL (here).
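As a small numerical check of the pseudoinverse identity [math]\displaystyle{ X^+ = (X^TX)^{-1}X^T = V D^{-1} U^T }[/math], here is a sketch continuing from the R objects above (X and Xp):

 ## check that the SVD pseudoinverse matches the normal-equations formula
 X.plus.svd <- Xp$v %*% diag(1/Xp$d) %*% t(Xp$u)
 X.plus.ne <- solve(t(X) %*% X) %*% t(X)
 max(abs(X.plus.svd - X.plus.ne)) # ~1e-15, i.e. equal up to floating-point error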