# User:Timothee Flutre/Notebook/Postdoc/2011/06/28

Revision as of 11:03, 21 November 2012
## Linear regression by ordinary least squares

• Data: let's assume that we obtained data from $N$ individuals. We denote by $y_1,\ldots,y_N$ the (quantitative) phenotypes (e.g. expression levels at a given gene), and by $g_1,\ldots,g_N$ the genotypes at a given SNP. We want to assess their linear relationship.

• Model: to start with, we use a simple linear regression (univariate phenotype, single predictor):

$\forall i \in \{1,\ldots,N\}, \; y_i = \mu + \beta g_i + \epsilon_i \text{ with } \epsilon_i \sim \mathcal{N}(0,\sigma^2)$

In vector-matrix notation:

$\vec{y} = X B + \vec{e}$ with $\vec{e} \sim \mathcal{N}_N(0,\sigma^2 I_N)$ and $B^T = [\mu \; \beta]$

The parameters of the model are $\Theta = \{\mu, \beta, \sigma\}$.

• Use only summary statistics: most importantly, we want the following estimates: $\hat{\beta}$, $\hat{\sigma}$ and $se(\hat{\beta})$ (the standard error of $\hat{\beta}$). In the case where we don't have access to the original data (e.g. because genotypes are confidential) but only to some summary statistics (see below), it is still possible to calculate these estimates.
The well-known ordinary-least-squares (OLS) estimator of $B$ is:

$\hat{B} = (X^T X)^{-1} X^T \vec{y}$

$\begin{bmatrix} \hat{\mu} \\ \hat{\beta} \end{bmatrix} = \left( \begin{bmatrix} 1 & \ldots & 1 \\ g_1 & \ldots & g_N \end{bmatrix} \begin{bmatrix} 1 & g_1 \\ \vdots & \vdots \\ 1 & g_N \end{bmatrix} \right)^{-1} \begin{bmatrix} 1 & \ldots & 1 \\ g_1 & \ldots & g_N \end{bmatrix} \begin{bmatrix} y_1 \\ \vdots \\ y_N \end{bmatrix}$

$\begin{bmatrix} \hat{\mu} \\ \hat{\beta} \end{bmatrix} = \begin{bmatrix} N & \sum_i g_i \\ \sum_i g_i & \sum_i g_i^2 \end{bmatrix}^{-1} \begin{bmatrix} \sum_i y_i \\ \sum_i g_i y_i \end{bmatrix}$

$\begin{bmatrix} \hat{\mu} \\ \hat{\beta} \end{bmatrix} = \frac{1}{N \sum_i g_i^2 - (\sum_i g_i)^2} \begin{bmatrix} \sum_i g_i^2 & - \sum_i g_i \\ - \sum_i g_i & N \end{bmatrix} \begin{bmatrix} \sum_i y_i \\ \sum_i g_i y_i \end{bmatrix}$

$\begin{bmatrix} \hat{\mu} \\ \hat{\beta} \end{bmatrix} = \frac{1}{N \sum_i g_i^2 - (\sum_i g_i)^2} \begin{bmatrix} \sum_i g_i^2 \sum_i y_i - \sum_i g_i \sum_i g_i y_i \\ - \sum_i g_i \sum_i y_i + N \sum_i g_i y_i \end{bmatrix}$

Let's now define four summary statistics, all very easy to compute:

$\bar{y} = \frac{1}{N} \sum_{i=1}^N y_i$

$\bar{g} = \frac{1}{N} \sum_{i=1}^N g_i$

$\vec{g}^T \vec{g} = \sum_{i=1}^N g_i^2$

$\vec{g}^T \vec{y} = \sum_{i=1}^N g_i y_i$

This allows one to obtain the estimate of the effect size from the summary statistics alone:

$\hat{\beta} = \frac{\vec{g}^T \vec{y} - N \bar{g} \bar{y}}{\vec{g}^T \vec{g} - N \bar{g}^2}$

By the way, in this case (i.e. simple linear regression, a single predictor), it is easy to see that:

$\hat{\beta} = \frac{Cov[\vec{y},\vec{g}]}{Var[\vec{g}]}$

The same works for the estimate of the standard deviation of the errors (where $r = 2$ is the number of columns of $X$):

$\hat{\sigma}^2 = \frac{1}{N-r}(\vec{y} - X\hat{B})^T(\vec{y} - X\hat{B})$

We can also benefit from this for the standard errors of the parameters:

$V(\hat{B}) = \hat{\sigma}^2 (X^T X)^{-1}$

$V(\hat{B}) = \hat{\sigma}^2 \frac{1}{N \vec{g}^T \vec{g} - N^2 \bar{g}^2} \begin{bmatrix} \vec{g}^T\vec{g} & -N\bar{g} \\ -N\bar{g} & N \end{bmatrix}$

$V(\hat{\beta}) = \frac{\hat{\sigma}^2}{\vec{g}^T\vec{g} - N\bar{g}^2}$

which corresponds to:

$V(\hat{\beta}) = \frac{1}{N} \frac{Var[\vec{e}]}{Var[\vec{g}]}$

• Simulation with a given PVE: when testing an inference model, the first step is usually to simulate data. However, how do we choose the parameters? In our case, the model is $y = \mu + \beta g + \epsilon$. Therefore, the variance of $y$ can be decomposed like this:

$V(y) = V(\mu + \beta g + \epsilon) = V(\mu) + V(\beta g) + V(\epsilon) = \beta^2 V(g) + \sigma^2$

The most intuitive way to simulate data is therefore to fix the proportion of variance in $y$ explained by the genotype, for instance $PVE = 60\%$, as well as the standard deviation of the errors, typically $\sigma = 1$. From this, we can calculate the corresponding effect size $\beta$ of the genotype:

$PVE = \frac{V(\beta g)}{V(y)}$

Therefore:

$\beta = \pm \sigma \sqrt{\frac{PVE}{(1 - PVE) \, V(g)}}$

Note that $g$ is the random variable corresponding to the genotype encoded in allele dose, so that it equals 0, 1 or 2 copies of the minor allele. For our simulation, we will fix the minor allele frequency $f$ (e.g. $f = 0.3$) and assume Hardy-Weinberg equilibrium. Then $g$ follows a binomial distribution with 2 trials, each with success probability $f$. As a consequence, its variance is $V(g) = 2f(1-f)$.
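Spelling out the intermediate algebra (substituting $V(\beta g) = \beta^2 V(g)$ and $V(y) = \beta^2 V(g) + \sigma^2$ into the definition of PVE):

$PVE = \frac{\beta^2 V(g)}{\beta^2 V(g) + \sigma^2} \quad \Rightarrow \quad \beta^2 V(g) (1 - PVE) = PVE \, \sigma^2 \quad \Rightarrow \quad \beta^2 = \frac{\sigma^2 \, PVE}{(1 - PVE) \, V(g)}$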
Here is some R code implementing all this:

set.seed(1859)
N <- 100 # sample size
mu <- 4
pve <- 0.6
sigma <- 1
maf <- 0.3 # minor allele frequency
beta <- sigma * sqrt(pve / ((1 - pve) * 2 * maf * (1 - maf))) # 1.89
g <- rbinom(n=N, size=2, prob=maf) # assuming Hardy-Weinberg equilibrium
y <- mu + beta * g + rnorm(n=N, mean=0, sd=sigma)
ols <- lm(y ~ g)
summary(ols) # muhat=4.1+-0.13, betahat=1.6+-0.16, R2=0.49
sqrt((1/(N-2)) * sum(ols$residuals^2)) # sigmahat=0.99
plot(x=0, type="n", xlim=range(g), ylim=range(y),
     xlab="genotypes (allele counts)", ylab="phenotypes",
     main="Simple linear regression")
for(i in unique(g))
  points(x=jitter(g[g == i]), y=y[g == i], col=i+1, pch=19)
abline(a=coefficients(ols)[1], b=coefficients(ols)[2])
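A quick self-contained check (re-simulating data as above) that $\hat{\beta}$ computed from the four summary statistics alone matches the slope returned by lm():

```r
## compute beta-hat from the four summary statistics only,
## and compare it with the slope from the full OLS fit
set.seed(1859)
N <- 100
g <- rbinom(n=N, size=2, prob=0.3)
y <- 4 + 1.89 * g + rnorm(n=N, mean=0, sd=1)
g.bar <- mean(g)
y.bar <- mean(y)
gtg <- sum(g^2)   # g^T g
gty <- sum(g * y) # g^T y
beta.hat.ss <- (gty - N * g.bar * y.bar) / (gtg - N * g.bar^2)
beta.hat.ols <- unname(coefficients(lm(y ~ g))["g"])
all.equal(beta.hat.ss, beta.hat.ols) # TRUE
```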



• Several predictors: let's now imagine that we also know the gender of the $N$ sampled individuals. We hence want to account for it in our estimate of the effect of the genotype. In matrix notation, we still have the same model, $\vec{y} = XB + \vec{e}$, with $\vec{y}$ an Nx1 vector, $X$ an Nx3 matrix with 1's in the first column, the genotypes in the second and the genders in the third, $B$ a 3x1 vector, and $\vec{e}$ an Nx1 vector following a multivariate Normal distribution centered on 0 with covariance matrix $\sigma^2 I_N$.

As above, we want $\hat{B}$, $\hat{\sigma}$ and $V(\hat{B})$. To efficiently get them, we start with the singular value decomposition of X:

$X = U D V^T$

This allows us to get the Moore-Penrose pseudoinverse matrix of X:

$X^+ = (X^TX)^{-1}X^T = V D^{-1} U^T$
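A quick numerical sketch (on a simulated design matrix of the same shape as below) confirming that the two expressions for $X^+$ agree:

```r
## build a design matrix (intercept, genotype, gender) and compare
## the SVD-based pseudoinverse with the normal-equations formula
set.seed(1859)
N <- 100
X <- cbind(rep(1,N), rbinom(n=N, size=2, prob=0.3), rbinom(n=N, size=1, prob=0.5))
Xp <- svd(X)
pinv.svd <- Xp$v %*% diag(1/Xp$d) %*% t(Xp$u) # V D^{-1} U^T
pinv.normal <- solve(t(X) %*% X) %*% t(X)     # (X^T X)^{-1} X^T
all.equal(pinv.svd, pinv.normal, check.attributes=FALSE) # TRUE
```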

From this, we get the OLS estimate of the effect sizes:

$\hat{B} = X^+ \vec{y}$

Then it's straightforward to get the residuals:

$\hat{\vec{e}} = \vec{y} - X \hat{B}$

With them we can calculate the estimate of the error variance:

$\hat{\sigma}^2 = \frac{1}{N-3} \hat{\vec{e}}^T \hat{\vec{e}}$

And finally the standard errors of the estimates of the effect sizes:

$V(\hat{B}) = \hat{\sigma}^2 V D^{-2} V^T$
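This follows by plugging the SVD into the general formula, using $U^T U = I_3$ and the orthogonality of $V$:

$V(\hat{B}) = \hat{\sigma}^2 (X^T X)^{-1} = \hat{\sigma}^2 (V D U^T U D V^T)^{-1} = \hat{\sigma}^2 (V D^2 V^T)^{-1} = \hat{\sigma}^2 V D^{-2} V^T$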

We can check this with some R code:

## simulate the data
set.seed(1859)
N <- 100
mu <- 4
pve.g <- 0.4 # genotype
pve.c <- 0.2 # other covariate, eg. gender
sigma <- 1
maf <- 0.3
sex.ratio <- 0.5
beta.g <- sigma * sqrt((1 / (2 * maf * (1 - maf))) * (pve.g / (1 - pve.g - pve.c))) # 1.543
beta.c <- beta.g * sqrt((pve.c / pve.g) * (2 * maf * (1 - maf)) / (sex.ratio * (1 - sex.ratio))) # 1.414
x.g <- rbinom(n=N, size=2, prob=maf)
x.c <- rbinom(n=N, size=1, prob=sex.ratio)
y <- mu + beta.g * x.g + beta.c * x.c + rnorm(n=N, mean=0, sd=sigma)
ols <- lm(y ~ x.g + x.c)
summary(ols) # check muhat, beta.g.hat and beta.c.hat against the true values
sqrt((1/(N-3)) * sum(ols$residuals^2)) # estimate of sigma

## perform the OLS analysis with the SVD of X
X <- cbind(rep(1,N), x.g, x.c)
Xp <- svd(x=X)
B.hat <- Xp$v %*% diag(1/Xp$d) %*% t(Xp$u) %*% y
E.hat <- y - X %*% B.hat
sigma.hat <- as.numeric(sqrt((1/(N-3)) * t(E.hat) %*% E.hat)) # same value as above
var.theta.hat <- sigma.hat^2 * Xp$v %*% diag((1/Xp$d)^2) %*% t(Xp$v)
sqrt(diag(var.theta.hat)) # standard errors, matching summary(ols)
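As a final self-contained check (data re-simulated with hypothetical effect sizes 1.4 and 0.7), the SVD-based estimator gives exactly the same coefficients as lm():

```r
## compare the SVD-based OLS solution with lm() on simulated data
set.seed(1859)
N <- 100
x.g <- rbinom(n=N, size=2, prob=0.3) # genotypes
x.c <- rbinom(n=N, size=1, prob=0.5) # genders
y <- 4 + 1.4 * x.g + 0.7 * x.c + rnorm(n=N, mean=0, sd=1)
X <- cbind(rep(1,N), x.g, x.c)
Xp <- svd(X)
B.hat.svd <- drop(Xp$v %*% diag(1/Xp$d) %*% t(Xp$u) %*% y) # X^+ y
B.hat.lm <- unname(coefficients(lm(y ~ x.g + x.c)))
all.equal(B.hat.svd, B.hat.lm) # TRUE
```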



Such an analysis can also be done easily in a custom C/C++ program thanks to the GSL (GNU Scientific Library).