User:Timothee Flutre/Notebook/Postdoc/2011/06/28


Linear regression by ordinary least squares

  • Data: let's assume that we obtained data from N individuals. We denote by y_1,\ldots,y_N the (quantitative) phenotypes (e.g., expression levels at a given gene), and by g_1,\ldots,g_N the genotypes at a given SNP (encoded as allele dose: 0, 1 or 2). We want to assess their linear relationship.
  • Model: to start with, we use a simple linear regression (univariate phenotype, single predictor).

\forall n \in \{1,\ldots,N\}, \; y_n = \mu + \beta g_n + \epsilon_n \text{ with } \epsilon_n \sim \mathcal{N}(0,\sigma^2)

In matrix notation:

y = X \theta + \epsilon \text{ with } \epsilon \sim \mathcal{N}_N(0, \sigma^2 I_N) \text{ and } \theta^T = (\mu, \beta)

  • Use only summary statistics: most importantly, we want the following estimates: \hat{\beta}, se(\hat{\beta}) (its standard error) and \hat{\sigma}. When we don't have access to the original data (e.g., because the genotypes are confidential) but only to some summary statistics (defined below), it is still possible to calculate these estimates.

Here is the ordinary least squares (OLS) estimator of θ:

\hat{\theta} = (X^T X)^{-1} X^T y

\begin{bmatrix} \hat{\mu} \\ \hat{\beta} \end{bmatrix} =
\left( \begin{bmatrix} 1 & \ldots & 1 \\ g_1 & \ldots & g_N \end{bmatrix}
\begin{bmatrix} 1 & g_1 \\ \vdots & \vdots \\ 1 & g_N \end{bmatrix} \right)^{-1}
\begin{bmatrix} 1 & \ldots & 1 \\ g_1 & \ldots & g_N \end{bmatrix}
\begin{bmatrix} y_1 \\ \vdots \\ y_N \end{bmatrix}

\begin{bmatrix} \hat{\mu} \\ \hat{\beta} \end{bmatrix} =
\begin{bmatrix} N & \sum_n g_n \\ \sum_n g_n & \sum_n g_n^2 \end{bmatrix}^{-1}
\begin{bmatrix} \sum_n y_n \\ \sum_n g_n y_n \end{bmatrix}

\begin{bmatrix} \hat{\mu} \\ \hat{\beta} \end{bmatrix} =
\frac{1}{N \sum_n g_n^2 - (\sum_n g_n)^2}
\begin{bmatrix} \sum_n g_n^2 & - \sum_n g_n \\ - \sum_n g_n & N \end{bmatrix}
\begin{bmatrix} \sum_n y_n \\ \sum_n g_n y_n \end{bmatrix}

\begin{bmatrix} \hat{\mu} \\ \hat{\beta} \end{bmatrix} =
\frac{1}{N \sum_n g_n^2 - (\sum_n g_n)^2}
\begin{bmatrix} \sum_n g_n^2 \sum_n y_n - \sum_n g_n \sum_n g_n y_n \\ - \sum_n g_n \sum_n y_n + N \sum_n g_n y_n \end{bmatrix}
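
As a quick sanity check, here is a minimal R sketch (on hypothetical simulated genotypes and phenotypes, with arbitrary true values μ=5 and β=0.5) confirming that this closed-form solution matches lm():

set.seed(1859)
N <- 100
g <- sample(x=0:2, size=N, replace=TRUE) # hypothetical genotypes
y <- 5 + 0.5 * g + rnorm(n=N)            # hypothetical phenotypes
denom <- N * sum(g^2) - sum(g)^2
mu.hat <- (sum(g^2) * sum(y) - sum(g) * sum(g * y)) / denom
beta.hat <- (N * sum(g * y) - sum(g) * sum(y)) / denom
c(mu.hat, beta.hat)
coefficients(lm(y ~ g)) # should give the same two values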

Let's now define four summary statistics, all very easy to compute:

\bar{y} = \frac{1}{N} \sum_{n=1}^N y_n

\bar{g} = \frac{1}{N} \sum_{n=1}^N g_n

g^T g = \sum_{n=1}^N g_n^2

g^T y = \sum_{n=1}^N g_n y_n

This allows us to obtain the estimate of the effect size from the summary statistics alone:

\hat{\beta} = \frac{g^T y - N \bar{g} \bar{y}}{g^T g - N \bar{g}^2}
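
For instance, continuing the hypothetical sketch above, \hat{\beta} can be recovered from these four statistics alone:

g.bar <- mean(g)
y.bar <- mean(y)
gtg <- sum(g^2)   # g^T g
gty <- sum(g * y) # g^T y
beta.hat <- (gty - N * g.bar * y.bar) / (gtg - N * g.bar^2)
beta.hat # same as coefficients(lm(y ~ g))[2]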

The same works for the estimate of the standard deviation of the errors, where r is the rank of X (here, r = 2):

\hat{\sigma}^2 = \frac{1}{N-r}(y - X\hat{\theta})^T(y - X\hat{\theta})

We can also benefit from this for the standard errors of the parameter estimates:

V(\hat{\theta}) = \hat{\sigma}^2 (X^T X)^{-1}

V(\hat{\theta}) = \hat{\sigma}^2 \frac{1}{N g^T g - N^2 \bar{g}^2}
\begin{bmatrix} g^Tg & -N\bar{g} \\ -N\bar{g} & N \end{bmatrix}

V(\hat{\beta}) = \frac{\hat{\sigma}^2}{g^Tg - N\bar{g}^2}
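
Note that recovering \hat{\sigma} and se(\hat{\beta}) this way requires one extra summary statistic, y^T y, to form the residual sum of squares. Continuing the sketch above:

yty <- sum(y^2) # the extra summary statistic y^T y
mu.hat <- y.bar - beta.hat * g.bar
rss <- yty - mu.hat * N * y.bar - beta.hat * gty # equals (y - X theta.hat)^T (y - X theta.hat)
sigma.hat <- sqrt(rss / (N - 2)) # r = 2 parameters here
se.beta.hat <- sigma.hat / sqrt(gtg - N * g.bar^2)
c(sigma.hat, se.beta.hat) # compare with summary(lm(y ~ g))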


  • Simulation with a given PVE: when testing an inference model, the first step is usually to simulate data. But how do we choose the parameters? In our case (linear regression: y = μ + βg + ε), it is common to fix the proportion of variance in y explained by βg (the PVE):

PVE = \frac{V(\beta g)}{V(y)} = \frac{V(\beta g)}{V(\beta g) + V(\epsilon)} \text{ with } V(\beta g) = \frac{1}{N} \sum_{n=1}^N (\beta g_n - \bar{\beta g})^2 \text{ and } V(\epsilon) = \sigma^2

This way, by also fixing β, it is easy to calculate the corresponding σ:

\sigma = \sqrt{\frac{1}{N}\sum_{n=1}^N (\beta g_n - \bar{\beta g})^2 \frac{1 - PVE}{PVE}}

Here is some R code implementing this:

set.seed(1859)
N <- 100
mu <- 5
g <- sample(x=0:2, size=N, replace=TRUE, prob=c(0.5, 0.3, 0.2)) # genotypes (allele dose)
beta <- 0.5
pve <- 0.8
beta.g.bar <- mean(beta * g)
sigma <- sqrt((1/N) * sum((beta * g - beta.g.bar)^2) * (1-pve) / pve) # 0.18
y <- mu + beta * g + rnorm(n=N, mean=0, sd=sigma)
plot(x=0, type="n", xlim=range(g), ylim=range(y),
     xlab="genotypes (allele dose)", ylab="phenotypes",
     main="Simple linear regression")
for(i in unique(g))
  points(x=jitter(g[g == i]), y=y[g == i], col=i+1, pch=19)
ols <- lm(y ~ g)
summary(ols) # muhat=5.01, betahat=0.46, R2=0.779
abline(a=coefficients(ols)[1], b=coefficients(ols)[2])


  • Several predictors: let's now imagine that we also know the gender of the N sampled individuals, and we want to account for it in our estimate of the genotypic effect. In matrix notation, we still have the same model, Y = XB + E, with Y an Nx1 vector, X an Nx3 matrix (1's in the first column, the genotypes in the second and the genders in the third), B a 3x1 vector, and E an Nx1 vector following a multivariate Normal distribution centered on 0 with covariance matrix \sigma^2 I_N.

As above, we want \hat{B}, \hat{\sigma} and V(\hat{B}). To efficiently get them, we start with the singular value decomposition of X:

X = U D V^T

This allows us to get the Moore-Penrose pseudoinverse matrix of X:

X^+ = (X^T X)^{-1} X^T

X^+ = V D^{-1} U^T

From this, we get the OLS estimate of the effect sizes:

\hat{B} = X^+ Y

Then it's straightforward to get the residuals:

\hat{E} = Y - X \hat{B}

With them, we can calculate the estimate of the standard deviation of the errors:

\hat{\sigma} = \sqrt{\frac{1}{N-3} \hat{E}^T \hat{E}}

And finally the variance-covariance matrix of the effect-size estimates, whose diagonal contains the squared standard errors:

V(\hat{B}) = \hat{\sigma}^2 V D^{-2} V^T

We can check this with some R code:

## simulate the data
set.seed(1859)
N <- 100
mu <- 5
Xg <- sample(x=0:2, size=N, replace=TRUE, prob=c(0.5, 0.3, 0.2)) # genotypes
beta.g <- 0.5
Xc <- sample(x=0:1, size=N, replace=TRUE, prob=c(0.7, 0.3)) # gender
beta.c <- 0.3
pve <- 0.8
betas.gc.bar <- mean(beta.g * Xg + beta.c * Xc) # 0.405
sigma <- sqrt((1/N) * sum((beta.g * Xg + beta.c * Xc - betas.gc.bar)^2) *
              (1-pve) / pve) # 0.2
y <- mu + beta.g * Xg + beta.c * Xc + rnorm(n=N, mean=0, sd=sigma)

## perform the OLS analysis with the SVD of X
X <- cbind(rep(1,N), Xg, Xc)
Xp <- svd(x=X)
B.hat <- Xp$v %*% diag(1/Xp$d) %*% t(Xp$u) %*% y
E.hat <- y - X %*% B.hat
sigma.hat <- as.numeric(sqrt((1/(N-3)) * t(E.hat) %*% E.hat)) # 0.211
var.theta.hat <- sigma.hat^2 * Xp$v %*% diag((1/Xp$d)^2) %*% t(Xp$v)
sqrt(diag(var.theta.hat)) # 0.0304 0.0290 0.0463

## check all this
ols <- lm(y ~ Xg + Xc)
summary(ols) # muhat=4.99+-0.03, beta.g.hat=0.52+-0.029, beta.c.hat=0.24+-0.046, R2=0.789

Such an analysis can also be done easily in a custom C/C++ program thanks to the GSL (GNU Scientific Library).

