# User:Timothee Flutre/Notebook/Postdoc/2012/03/04

## Current revision

## "Advanced Data Analysis from an Elementary Point of View" by Cosma Shalizi

• Concepts to know:
• Random variable; population, sample. Cumulative distribution function, probability mass function, probability density function. Specific distributions: Bernoulli, binomial, Poisson, geometric, Gaussian, exponential, t, Gamma. Expectation value. Variance, standard deviation. Sample mean, sample variance. Median, mode. Quartile, percentile, quantile. Inter-quartile range. Histograms.
• Joint distribution functions. Conditional distributions; conditional expectations and variances. Statistical independence and dependence. Covariance and correlation; why dependence is not the same thing as correlation. Rules for arithmetic with expectations, variances and covariances. Laws of total probability, total expectation, total variation. Contingency tables; odds ratio, log odds ratio.
• Sequences of random variables. Stochastic process. Law of large numbers. Central limit theorem.
• Parameters; estimator functions and point estimates. Sampling distribution. Bias of an estimator. Standard error of an estimate; standard error of the mean; how and why the standard error of the mean differs from the standard deviation. Confidence intervals and interval estimates.
• Hypothesis tests. Tests for differences in means and in proportions; Z and t tests; degrees of freedom. Size, significance, power. Relation between hypothesis tests and confidence intervals. χ² test of independence for contingency tables; degrees of freedom. KS test for goodness-of-fit to distributions.
• Linear regression. Meaning of the linear regression function. Fitted values and residuals of a regression. Interpretation of regression coefficients. Least-squares estimate of coefficients. Matrix formula for estimating the coefficients; the hat matrix. R²; why adding more predictor variables never reduces R². The t-test for the significance of individual coefficients given other coefficients. The F-test and partial F-test for the significance of regression models. Degrees of freedom for residuals. Examination of residuals. Confidence intervals for parameters. Confidence intervals for fitted values. Prediction intervals.
• Likelihood. Likelihood functions. Maximum likelihood estimates. Relation between maximum likelihood, least squares, and Gaussian distributions. Relation between confidence intervals and the likelihood function. Likelihood ratio test.
I. Regression and Its Generalizations. 1. Regression basics

1.1 Statistics, Data Analysis, Regression

1.2 Guessing the Value of a Random Variable

Use the mean squared error to quantify how badly we do when guessing the value of Y with a constant a:

$MSE(a) = E[(Y-a)^2]$

$MSE(a) = (E[Y-a])^2 + V[Y-a]$

$MSE(a) = (E[Y]-a)^2 + V[Y]$

$\frac{dMSE}{da}(a) = 2(E[Y]-a)$

$\frac{dMSE}{da}(r) = 0 \Leftrightarrow r = E[Y]$
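
A quick numerical check of this result: the empirical MSE of a constant guess is minimized at the sample mean (a minimal Python sketch; the exponential distribution is an arbitrary choice).

```python
import random

random.seed(0)

# draw a sample from an arbitrary skewed distribution
ys = [random.expovariate(0.5) for _ in range(100000)]

def mse(a, ys):
    """Empirical mean squared error of the constant guess a."""
    return sum((y - a) ** 2 for y in ys) / len(ys)

mean_y = sum(ys) / len(ys)

# the MSE at the sample mean is no larger than at nearby guesses
assert mse(mean_y, ys) <= mse(mean_y - 0.5, ys)
assert mse(mean_y, ys) <= mse(mean_y + 0.5, ys)
```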

1.2.1 Estimating the Expected Value

Sample mean: $\hat{r} = \frac{1}{n} \sum_{i=1}^n y_i$

If the $(y_i)$ are iid, the law of large numbers says $\hat{r} \rightarrow E[Y] = r$, and the central limit theorem indicates how fast the convergence is (the squared error is about $V[Y]/n$).
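
A quick simulation illustrates the rate (Gaussian Y with unit variance is an arbitrary choice): the squared error of the sample mean, averaged over many replicates, is close to $V[Y]/n$.

```python
import random

random.seed(1)
n, reps = 200, 2000
var_y = 1.0  # variance of a standard Gaussian

sq_errors = []
for _ in range(reps):
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]
    r_hat = sum(sample) / n          # sample mean of this replicate
    sq_errors.append((r_hat - 0.0) ** 2)  # true E[Y] is 0

avg_sq_error = sum(sq_errors) / reps

# should be close to V[Y]/n = 1/200 = 0.005
assert abs(avg_sq_error - var_y / n) < 0.002
```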

1.3 The Regression Function

Use X (predictor, independent variable, covariate, or input) to predict Y (dependent variable, output, or response). How badly are we doing when using f(X) to predict Y?

$MSE(f(X)) = E[(Y-f(X))^2]$

Use the law of total expectation ($E[U]=E[E[U|V]]$):

$MSE(f(X)) = E[E[(Y-f(X))^2|X]]$

$MSE(f(X)) = E[V[Y|X] + (E[Y-f(X)|X])^2]$

Regression function: $r(x) = E[Y|X=x]$

1.3.1 Some Disclaimers

Usually we observe $Y|X = r(X) + \eta(X)$, i.e. $\eta$ (a noise variable with mean 0 and variance $\sigma_X^2$) depends on X...

1.4 Estimating the Regression Function

Use conditional sample means: $\hat{r}(x) = \frac{1}{\sharp \{i:x_i=x\}} \sum_{i:x_i=x} y_i$

Works only when X is discrete.
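
A minimal sketch of this estimator on simulated data (the true regression function $r(x) = 2x$ and the four-level discrete X are arbitrary choices):

```python
import random

random.seed(2)

# simulate a discrete X and Y = r(X) + noise, with r(x) = 2x
data = []
for _ in range(10000):
    x = random.choice([0, 1, 2, 3])
    y = 2 * x + random.gauss(0.0, 1.0)
    data.append((x, y))

def conditional_sample_mean(data, x):
    """r_hat(x): average of the y_i whose x_i equals x."""
    ys = [y for (xi, y) in data if xi == x]
    return sum(ys) / len(ys)

# the conditional sample means recover r(x) = 2x closely
for x in [0, 1, 2, 3]:
    assert abs(conditional_sample_mean(data, x) - 2 * x) < 0.2
```

This only works because each value of x recurs many times; with continuous X, almost no $x_i$ repeats and the sums above are empty, which is why smoothing methods are needed.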

1.4.1 The Bias-Variance Tradeoff

$MSE(\hat{r}(x)) = E[(Y-\hat{r}(x))^2]$

$MSE(\hat{r}(x)) = E[(Y-r(x) + r(x)-\hat{r}(x))^2]$

$MSE(\hat{r}(x)) = E[(Y-r(x))^2 + 2(Y-r(x))(r(x)-\hat{r}(x)) + (r(x)-\hat{r}(x))^2]$

$MSE(\hat{r}(x)) = \sigma_x^2 + (r(x)-\hat{r}(x))^2$

(conditional on $X=x$, $E[(Y-r(x))^2] = \sigma_x^2$ and the cross term vanishes since $E[Y-r(x)] = 0$)

In fact, we have analyzed $MSE(\hat{R}_n(x)|\hat{R}_n=\hat{r})$ where $\hat{R}_n$ is a random regression function estimated using n random pairs $(x_i,y_i)$.

$MSE(\hat{R}_n(x)) = E[(Y-\hat{R}_n(X))^2|X=x]$

$MSE(\hat{R}_n(x)) = E[E[(Y-\hat{R}_n(X))^2|X=x,\hat{R}_n=\hat{r}]|X=x]$

$MSE(\hat{R}_n(x)) = E[\sigma_x^2 + (r(x)-\hat{R}_n(x))^2]$

$MSE(\hat{R}_n(x)) = \sigma_x^2 + E[(r(x)-E[\hat{R}_n(x)]+E[\hat{R}_n(x)]-\hat{R}_n(x))^2]$

$MSE(\hat{R}_n(x)) = \sigma_x^2 + (r(x)-E[\hat{R}_n(x)])^2 + V[\hat{R}_n(x)]$

Even if our method is unbiased ($r(x) = E[\hat{R}_n(x)]$, no approximation bias), we can still have a lot of variance in our estimates ($V[\hat{R}_n(x)]$ large).

A method is consistent (for r) when both the approximation bias and the estimation variance go to 0 when we get more and more data.
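
The decomposition $\sigma_x^2 + \text{bias}^2 + \text{variance}$ can be checked by simulation. A sketch under made-up assumptions: a quadratic true $r$, and a deliberately biased estimator (the plain sample mean of the $y_i$, which ignores x entirely).

```python
import random
import statistics

random.seed(3)
sigma = 0.5      # noise standard deviation
x0 = 1.0         # point at which we predict
reps, n = 4000, 50

def r(x):
    """True regression function (an arbitrary choice for the simulation)."""
    return x * x

# crude estimator: predict everywhere with the sample mean of the y_i,
# so it has approximation bias at x0
preds = []
for _ in range(reps):
    xs = [random.uniform(-1.0, 1.0) for _ in range(n)]
    ys = [r(x) + random.gauss(0.0, sigma) for x in xs]
    preds.append(sum(ys) / n)

bias_sq = (statistics.mean(preds) - r(x0)) ** 2
var_est = statistics.pvariance(preds)

# simulated MSE of predicting a fresh Y at x0 with this estimator
mse_hat = statistics.mean(
    (r(x0) + random.gauss(0.0, sigma) - p) ** 2 for p in preds
)

# sigma^2 + bias^2 + variance matches the simulated MSE
assert abs(mse_hat - (sigma ** 2 + bias_sq + var_est)) < 0.1
```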

1.4.2 The Bias-Variance Trade-Off in Action

1.4.3 Ordinary Least Squares Linear Regression as Smoothing

Assume X is one-dimensional and both X and Y are centered. Choose to approximate r(x) by $\alpha + \beta x$. We need to find the values a and b of $\alpha$ and $\beta$ minimizing the MSE.

$MSE(\alpha,\beta) = E[(Y-\alpha-\beta X)^2]$

$MSE(\alpha,\beta) = E[E[(Y-\alpha-\beta X)^2|X]]$

$MSE(\alpha,\beta) = E[V[Y|X] + (E[Y-\alpha-\beta X|X])^2]$

$MSE(\alpha,\beta) = E[V[Y|X]] + E[(E[Y-\alpha-\beta X|X])^2]$

$\frac{\partial MSE}{\partial \alpha} = E[2(-1)(Y-\alpha-\beta X)]$

$\frac{\partial MSE}{\partial \alpha} = 0 \Leftrightarrow a = E[Y] - b E[X] = 0$ (since X and Y are centered)

$\frac{\partial MSE}{\partial \beta} = E[2(-X)(Y-\alpha-\beta X)]$

$\frac{\partial MSE}{\partial \beta} = 0 \Leftrightarrow E[XY]-bE[X^2] = 0 \Leftrightarrow b = \frac{Cov[X,Y]}{V[X]}$ (with centered variables, $E[XY]=Cov[X,Y]$ and $E[X^2]=V[X]$)

Now, estimate a and b from the data (replacing population values by sample values, or minimizing the residual sum of squares):

$\hat{a} = 0$ and $\hat{b} = \frac{\sum_i y_i x_i}{\sum_i x_i^2}$

Least-square linear regression is thus a smoothing of the data:

$\hat{r}(x) = \hat{b}x = \sum_i y_i \frac{x_i}{n s_X^2} x$

Indeed, the prediction is a weighted average of the observed values $y_i$, where the weights are proportional to how far $x_i$ is from the center of the data, relative to the variance, and proportional to the magnitude of x.

Note that the weight of a data point depends on how far it is from the center of all the data, not how far it is from the point at which we are trying to predict.
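
This identity can be sketched in code: build the weights $\hat{w}(x_i,x) = x_i x/(n s_X^2)$ explicitly and check that the weighted sum of the $y_i$ reproduces the fitted line $\hat{b}x$ (a minimal Python sketch; the data-generating slope of 3 is an arbitrary choice).

```python
import random

random.seed(4)
n = 500

# centered predictor and response (the derivation assumes both are centered)
xs = [random.gauss(0.0, 1.0) for _ in range(n)]
mx = sum(xs) / n
xs = [x - mx for x in xs]
ys = [3 * x + random.gauss(0.0, 0.1) for x in xs]

s2 = sum(x * x for x in xs) / n          # sample variance of X (mean is 0)
b_hat = sum(y * x for y, x in zip(ys, xs)) / (n * s2)

def smooth(x):
    """Prediction as a weighted sum of the y_i, weights w(x_i,x) = x_i*x/(n*s_X^2)."""
    return sum(y * (xi * x) / (n * s2) for y, xi in zip(ys, xs))

# the smoother and the fitted line give the same prediction
for x in [-1.0, 0.5, 2.0]:
    assert abs(smooth(x) - b_hat * x) < 1e-9
```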

1.5 Linear Smoothers

$\hat{r}(x) = \sum_i y_i \hat{w}(x_i,x)$

Sample mean: $\hat{w}(x_i,x) = 1/n$

Ordinary linear regression: $\hat{w}(x_i,x) = (x_i/ns_X^2)x$

1.5.1 k-Nearest-Neighbor Regression

$\hat{w}(x_i,x) = 1/k$ if $x_i$ is one of the k nearest neighbors of x, 0 otherwise
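
A minimal sketch of this weighting scheme (the toy data points are made up for illustration): average the $y_i$ of the k points whose $x_i$ are closest to x.

```python
def knn_predict(data, x, k):
    """k-nearest-neighbor regression: average the y_i of the k x_i closest to x."""
    neighbors = sorted(data, key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in neighbors) / k

data = [(0.0, 1.0), (1.0, 2.0), (2.0, 3.0), (10.0, 40.0)]

assert knn_predict(data, 0.4, 2) == 1.5   # averages the y at x=0 and x=1
assert knn_predict(data, 9.0, 1) == 40.0  # nearest point only
```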

1.5.2 Kernel Smoothers

For instance, take $K(x_i,x)$ to be a Gaussian density $N(0,\sqrt{h})$ evaluated at $x_i - x$, where h is the bandwidth, so that $\hat{w}(x_i,x) = \frac{K(x_i,x)}{\sum_j K(x_j,x)}$
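
A sketch of such a kernel smoother, using an unnormalized Gaussian kernel (the normalization constant cancels in the weights; the toy data are made up):

```python
import math

def gaussian_kernel(xi, x, h):
    """Gaussian kernel with bandwidth h (unnormalized; the weights normalize it)."""
    return math.exp(-((xi - x) ** 2) / (2 * h * h))

def kernel_smooth(data, x, h):
    """Nadaraya-Watson smoother: w(x_i,x) = K(x_i,x) / sum_j K(x_j,x)."""
    ks = [gaussian_kernel(xi, x, h) for xi, _ in data]
    total = sum(ks)
    return sum(k * y for k, (_, y) in zip(ks, data)) / total

data = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]

# at a data point, with a small bandwidth, the prediction is close to that point's y
assert abs(kernel_smooth(data, 1.0, 0.1) - 1.0) < 1e-6
# the weights sum to one, so the prediction stays within the range of the y_i
assert 0.0 <= kernel_smooth(data, 0.5, 1.0) <= 2.0
```

Unlike k-NN, every point gets a positive weight, but the weights decay smoothly with distance from x at a rate set by the bandwidth.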

1.6 Exercises

What minimizes the mean absolute error?

$MAE(a) = E[|Y-a|]$

$MAE(a) = - \int_l^a (Y-a) p(Y) dY + \int_a^u (Y-a) p(Y) dY$ (l and u being the lower and upper bounds of the support of Y)

Using the Leibniz rule for differentiation under the integral sign:

$\frac{dMAE}{da}(a) = \int_l^a p(Y) dY - \int_a^u p(Y) dY$

$\frac{dMAE}{da}(a) = 2 \int_l^a p(Y) dY - 1$

$\frac{dMAE}{da}(a) = 2 P(Y \le a) - 1$

$\frac{dMAE}{da}(a) = 0 \Leftrightarrow P(Y \le r) = \frac{1}{2}$

The median minimizes the MAE.
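
This can be checked empirically: with a skewed sample (exponential, an arbitrary choice, so that mean and median differ), the empirical MAE at the sample median is no larger than at nearby guesses or at the mean.

```python
import random
import statistics

random.seed(5)
# skewed sample: mean != median, n odd so the median is a data point
ys = [random.expovariate(1.0) for _ in range(20001)]

def mae(a):
    """Empirical mean absolute error of the constant guess a."""
    return sum(abs(y - a) for y in ys) / len(ys)

med = statistics.median(ys)

# the MAE at the median is no larger than at nearby guesses or at the mean
for a in [med - 0.2, med + 0.2, statistics.mean(ys)]:
    assert mae(med) <= mae(a)
```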