# User:Timothee Flutre/Notebook/Postdoc/2011/06/28


## Revision as of 15:04, 19 June 2012


## Simple linear regression

• Data: Let's assume that we obtained data from N individuals. We write $y_1,\ldots,y_N$ for the (quantitative) phenotypes (e.g. expression levels at a given gene) and $g_1,\ldots,g_N$ for the genotypes at a given SNP. We want to assess their linear relationship.
• Model: to do so, we use a simple linear regression (univariate phenotype, single predictor).

$\forall n \in \{1,\ldots,N\}, \; y_n = \mu + \beta g_n + \epsilon_n \text{ with } \epsilon_n \sim N(0,\sigma^2)$

In matrix notation:

$y = X \theta + \epsilon$ with $\epsilon \sim N_N(0,\sigma^2 I_N)$ and $\theta^T = (\mu, \beta)$

Most importantly, we want the following estimates: $\hat{\beta}$, $se(\hat{\beta})$ (its standard error) and $\hat{\sigma}$. Even when we don't have access to the original data (e.g. because the genotypes are confidential) but only to some summary statistics (see below), these estimates can still be calculated.

Here is the ordinary least squares (OLS) estimator of θ:

$\hat{\theta} = (X^T X)^{-1} X^T y$

$\begin{bmatrix} \hat{\mu} \\ \hat{\beta} \end{bmatrix} = \left( \begin{bmatrix} 1 & \ldots & 1 \\ g_1 & \ldots & g_N \end{bmatrix} \begin{bmatrix} 1 & g_1 \\ \vdots & \vdots \\ 1 & g_N \end{bmatrix} \right)^{-1} \begin{bmatrix} 1 & \ldots & 1 \\ g_1 & \ldots & g_N \end{bmatrix} \begin{bmatrix} y_1 \\ \vdots \\ y_N \end{bmatrix}$

$\begin{bmatrix} \hat{\mu} \\ \hat{\beta} \end{bmatrix} = \begin{bmatrix} N & \sum_n g_n \\ \sum_n g_n & \sum_n g_n^2 \end{bmatrix}^{-1} \begin{bmatrix} \sum_n y_n \\ \sum_n g_n y_n \end{bmatrix}$

$\begin{bmatrix} \hat{\mu} \\ \hat{\beta} \end{bmatrix} = \frac{1}{N \sum_n g_n^2 - (\sum_n g_n)^2} \begin{bmatrix} \sum_n g_n^2 & - \sum_n g_n \\ - \sum_n g_n & N \end{bmatrix} \begin{bmatrix} \sum_n y_n \\ \sum_n g_n y_n \end{bmatrix}$

$\begin{bmatrix} \hat{\mu} \\ \hat{\beta} \end{bmatrix} = \frac{1}{N \sum_n g_n^2 - (\sum_n g_n)^2} \begin{bmatrix} \sum_n g_n^2 \sum_n y_n - \sum_n g_n \sum_n g_n y_n \\ - \sum_n g_n \sum_n y_n + N \sum_n g_n y_n \end{bmatrix}$
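As a sanity check, the closed-form 2×2 solution above can be evaluated directly; here is a minimal Python sketch on simulated data (the simulation settings and variable names are illustrative, not from the notebook):

```python
import random

random.seed(1)

# simulate N individuals: genotypes in {0,1,2}, phenotype y_n = mu + beta*g_n + noise
N, mu, beta, sigma = 100, 1.0, 0.5, 0.3
g = [random.choice([0, 1, 2]) for _ in range(N)]
y = [mu + beta * gn + random.gauss(0, sigma) for gn in g]

# sums appearing in the normal equations X^T X theta = X^T y
sum_g = sum(g)
sum_g2 = sum(gn * gn for gn in g)
sum_y = sum(y)
sum_gy = sum(gn * yn for gn, yn in zip(g, y))

# closed-form inverse of the 2x2 matrix X^T X applied to X^T y
det = N * sum_g2 - sum_g ** 2
mu_hat = (sum_g2 * sum_y - sum_g * sum_gy) / det
beta_hat = (-sum_g * sum_y + N * sum_gy) / det
print(mu_hat, beta_hat)
```

With enough individuals, the estimates should land close to the simulated values of μ and β.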

Let's now define four summary statistics that are very easy to compute:

$\bar{y} = \frac{1}{N} \sum_{n=1}^N y_n$

$\bar{g} = \frac{1}{N} \sum_{n=1}^N g_n$

$g^T g = \sum_{n=1}^N g_n^2$

$g^T y = \sum_{n=1}^N g_n y_n$

This allows us to obtain the effect-size estimate from the summary statistics alone:

$\hat{\beta} = \frac{g^T y - N \bar{g} \bar{y}}{g^T g - N \bar{g}^2}$
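This formula for $\hat{\beta}$ is straightforward to apply in code; the sketch below simulates data only to have numbers to work with, but in a real setting only the four summary statistics would be shared:

```python
import random

random.seed(2)

# simulated data (in practice only the summary statistics below would be available)
N = 200
g = [random.choice([0, 1, 2]) for _ in range(N)]
y = [0.8 * gn + random.gauss(0, 0.5) for gn in g]

# the four summary statistics
y_bar = sum(y) / N
g_bar = sum(g) / N
gtg = sum(gn * gn for gn in g)   # g^T g
gty = sum(gn * yn for gn, yn in zip(g, y))  # g^T y

# effect-size estimate from the summary statistics only
beta_hat = (gty - N * g_bar * y_bar) / (gtg - N * g_bar ** 2)
print(beta_hat)
```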

The same approach works for the estimate of the error variance, where $r$ denotes the rank of $X$ (here $r = 2$):

$\hat{\sigma}^2 = \frac{1}{N-r}(y - X\hat{\theta})^T(y - X\hat{\theta})$
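To see that this, too, only requires summary statistics, note that the normal equations give $X^T(y - X\hat{\theta}) = 0$, so the residual sum of squares expands as:

$(y - X\hat{\theta})^T(y - X\hat{\theta}) = y^T y - \hat{\theta}^T X^T y = y^T y - \hat{\mu} N \bar{y} - \hat{\beta} \, g^T y$

Hence only one additional summary statistic, $y^T y = \sum_{n=1}^N y_n^2$, is needed.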

We can also use the summary statistics for the standard errors of the parameters:

$V(\hat{\theta}) = \hat{\sigma}^2 (X^T X)^{-1}$

$V(\hat{\theta}) = \hat{\sigma}^2 \frac{1}{N g^T g - N^2 \bar{g}^2} \begin{bmatrix} g^Tg & -N\bar{g} \\ -N\bar{g} & N \end{bmatrix}$

$V(\hat{\beta}) = \frac{\hat{\sigma}^2}{g^Tg - N\bar{g}^2}$
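Putting the pieces together, $se(\hat{\beta}) = \sqrt{V(\hat{\beta})}$ can be computed end-to-end. A minimal Python sketch (simulated data; it uses the standard identity $\hat{\mu} = \bar{y} - \hat{\beta}\bar{g}$, which follows from the first normal equation):

```python
import math
import random

random.seed(3)

# simulate data: y_n = mu + beta*g_n + noise
N, mu, beta, sigma = 500, 0.2, 0.6, 0.4
g = [random.choice([0, 1, 2]) for _ in range(N)]
y = [mu + beta * gn + random.gauss(0, sigma) for gn in g]

# OLS estimates from summary statistics
g_bar = sum(g) / N
y_bar = sum(y) / N
gtg = sum(gn * gn for gn in g)
gty = sum(gn * yn for gn, yn in zip(g, y))
beta_hat = (gty - N * g_bar * y_bar) / (gtg - N * g_bar ** 2)
mu_hat = y_bar - beta_hat * g_bar

# sigma_hat^2 from the residual sum of squares, with r = rank(X) = 2
rss = sum((yn - mu_hat - beta_hat * gn) ** 2 for gn, yn in zip(g, y))
sigma2_hat = rss / (N - 2)

# standard error of beta_hat
se_beta = math.sqrt(sigma2_hat / (gtg - N * g_bar ** 2))
print(sigma2_hat, se_beta)
```

The estimated error variance should be close to the simulated $\sigma^2$, and $se(\hat{\beta})$ shrinks as $N$ grows.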