BIOL478/S13:Microarray Data Analysis

Revision as of 09:02, 30 April 2013
Microarray Data Analysis
Background
This is a list of the steps required to analyze DNA microarray data.
1. Quantitate the fluorescence signal in each spot.
2. Calculate the ratio of red/green fluorescence.
3. Log transform the ratios.
4. Normalize the ratios on each microarray slide.
Steps 1-4 are performed by the GenePix Pro software. You will perform the following steps:
5. Normalize the ratios for a set of slides in an experiment.
6. Perform statistical analysis on the ratios.
7. Compare individual genes with known data.
Steps 5-7 are performed in Microsoft Excel.
8. Pattern finding algorithms (clustering).
9. Map onto biological pathways.
We will use software called STEM for the clustering and mapping (steps 8-9).
The second hybridization of aRNA that we performed was successful, but we only have one set of replicates. Thus we are not able to perform any statistical analysis of these data, although we will be able to perform clustering and Gene Ontology enrichment analysis of the clusters. To gain experience with the statistical analysis and to have a dataset for comparison, you will analyze data from the wild type strain of yeast from the Dahlquist lab.
You will download the wild type Excel spreadsheet from LionShare.
Experimental Design
On the spreadsheet, each row contains the data for one gene (one spot on the microarray). The first column (labeled "MasterIndex") numbers the rows in the spreadsheet so that we can match the data from different experiments together later. The second column (labeled "ID") contains the gene identifier from the Saccharomyces Genome Database. Each subsequent column contains the log2 ratio of the red/green fluorescence from each microarray hybridized in the experiment (steps 1-4 above having been done for you by the scanner software).
Each of the column headings from the data begins with the experiment name ("wt" for wild type data). "LogFC" stands for "Log2 Fold Change", which is the log2 red/green ratio. The timepoints are designated as "t" followed by a number of minutes. Replicates are numbered as "0", "1", "2", etc. after the timepoint.
The timepoints are t30 and t60 (cold shock at 13°C), and t90 and t120 (cold shock at 13°C followed by 30 or 60 minutes of recovery at 30°C).
The number of replicates for each timepoint is as follows:
 t30: 30 minutes of cold shock; 5 replicates of the experiment
 t60: 60 minutes of cold shock; 4 replicates of the experiment
 t90: 60 minutes of cold shock followed by 30 minutes of recovery; 5 replicates of the experiment
 t120: 60 minutes of cold shock followed by 60 minutes of recovery; 5 replicates of the experiment
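The column-heading convention above can be checked programmatically. Here is a short Python sketch (not part of the protocol) that parses a heading into strain, timepoint, and replicate; the exact heading strings and the hyphen separator between timepoint and replicate are assumptions for illustration:

```python
# Sketch: parse column headings assumed to look like
# <strain>_LogFC_t<minutes>-<replicate>, e.g. "wt_LogFC_t30-1".
import re

HEADER_RE = re.compile(r"^(?P<strain>\w+?)_LogFC_t(?P<minutes>\d+)-(?P<rep>\d+)$")

def parse_header(header):
    """Return (strain, minutes, replicate) parsed from a LogFC column heading."""
    m = HEADER_RE.match(header)
    if m is None:
        raise ValueError(f"unrecognized heading: {header!r}")
    return m.group("strain"), int(m.group("minutes")), int(m.group("rep"))

# Replicate counts from the experimental design above:
REPLICATES = {30: 5, 60: 4, 90: 5, 120: 5}
```

For example, `parse_header("wt_LogFC_t30-1")` returns `("wt", 30, 1)`.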
Normalize the ratios for a set of slides in an experiment
To scale and center the data (between-chip normalization), perform the following operations:
 Insert a new Worksheet into your Excel file, and name it "scaled_centered".
 Go back to the "compiled_raw_data" worksheet, Select All and Copy. Go to your new "scaled_centered" worksheet, click on the upper left-hand cell (cell A1) and Paste.
 Insert two rows in between the top row of headers and the first data row.
 In cell A2, type "Average" and in cell A3, type "StdDev".
 You will now compute the Average log ratio for each chip (each column of data). In cell C2, type the following equation:
=AVERAGE(C4:C6192)
and press "Enter". Excel is computing the average value of the cells specified in the range given inside the parentheses. Instead of typing the cell designations, you can left-click on the beginning cell (let go of the mouse button), scroll down to the bottom of the worksheet, and shift-left-click on the ending cell.
 You will now compute the Standard Deviation of the log ratios on each chip (each column of data). In cell C3, type the following equation:
=STDEV(C4:C6192)
and press "Enter".
 Excel will now do some work for you. Copy these two equations (cells C2 and C3) and paste them into the empty cells in the rest of the columns. Excel will automatically change the equation to match the cell designations for those columns.
 You have now computed the average and standard deviation of the log ratios for each chip. Now we will actually do the scaling and centering based on these values.
 Insert a new column to the right of each data column and label the top of the new column with the same name as the column to its left, adding "_sc" (for scaled and centered) to the name. For example, "wt_LogFC_t15-1_sc".
 In cell D4, type the following equation:
=(C4-C$2)/C$3
In this case, we want the data in cell C4 to have the average (cell C2) subtracted from it and then be divided by the standard deviation (cell C3). We use the dollar signs in front of the row numbers to tell Excel to always reference rows 2 and 3 in the equation, even though we will paste it down the entire column. Why is this important?
 Copy and paste this equation into the entire column.
 Repeat the scaling and centering equation for each of the columns of data. You can copy and paste the formula above, but be sure that your equation is correct for the column you are calculating.
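The scale-and-center operation that these Excel formulas perform is a per-column z-score. As a cross-check, here is a minimal Python sketch of the same calculation (standard library only; the column values below are made up for illustration):

```python
import statistics

def scale_and_center(column):
    """Subtract the column mean and divide by the sample standard deviation,
    mirroring =(C4-C$2)/C$3 with AVERAGE in row 2 and STDEV in row 3."""
    avg = statistics.mean(column)
    sd = statistics.stdev(column)  # sample standard deviation, like Excel's STDEV
    return [(x - avg) / sd for x in column]

# Made-up log ratios for one chip (one column of data):
chip = [0.5, -0.3, 1.2, 0.0, -0.8]
scaled = scale_and_center(chip)
# After scaling and centering, the column has mean ~0 and standard deviation ~1.
```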
Perform statistical analysis on the ratios
We are going to perform this step on the scaled and centered data you produced in the previous step.
 Insert a new worksheet into your Excel spreadsheet and name it "statistics".
 Go back to the "scaled_centered" worksheet, Select All and Copy. Go to your new "statistics" worksheet, click on the upper left-hand cell (cell A1) and Select "Paste Special" from the Edit menu. A window will open: click on the radio button for "Values" and click OK. This will paste the numerical results into your new worksheet instead of the equations, which would otherwise have to recalculate on the fly.
 There may be some non-numerical values in some of the cells in your worksheet. This is due to errors created when Excel tries to compute an equation on a cell that has no data. We need to go through and remove these error messages before going on to the next step.
 Scan through your spreadsheet to find an example of the error message. Then go to the Edit menu and Select Replace. A window will open; type the text you are replacing in the "Find what:" field. In the "Replace with:" field, enter a single space character. Click on the button "Replace All" and record the number of replacements made in your wiki page.
 We are now going to work with your scaled and centered Log Fold Changes only, so delete the columns containing the raw Log Fold changes, leaving only the columns that have the "_sc" suffix in their column headings. You may also delete the second and third rows where you computed the average and standard deviations for each chip.
 Go to the empty columns to the right on your worksheet. Create new column headings in the top cells to label the average log fold changes that you will compute. Name them with the pattern <wt, dCIN5, etc. for the strain>_<AvgLogFC>_<tx> where you use the appropriate text within the <> and where x is the time. For example, "wt_AvgLogFC_t15".
 Compute the average log fold change for the replicates for each timepoint by typing the equation:
=AVERAGE(range of cells in the row for that timepoint)
into the second cell below the column heading. For example, your equation might read
=AVERAGE(C2:F2)
Copy this equation and paste it into the rest of the column.
 Create the equation for the rest of the timepoints and paste it into their respective columns. Note that you can save yourself some time by completing the first equation for all of the averages and then copy and paste all the columns at once.
 Go to the empty columns to the right on your worksheet. Create new column headings in the top cells to label the T statistic that you will compute. Name them with the pattern <wt, dCIN5, etc. for the strain>_<Tstat>_<tx> where you use the appropriate text within the <> and where x is the time. For example, "wt_Tstat_t15". You will now compute a T statistic that tells you whether the scaled and centered average log fold change is significantly different from 0 (no change in expression). Enter the equation into the second cell below the column heading:
=AVERAGE(range of cells)/(STDEV(range of cells)/SQRT(number of replicates))
For example, your equation might read:
=AVERAGE(C2:F2)/(STDEV(C2:F2)/SQRT(4))
(NOTE: in this case the number of replicates is 4. Be careful that you are using the correct number of parentheses.) Copy the equation and paste it into all rows in that column. Create the equation for the rest of the timepoints and paste it into their respective columns. Note that you can save yourself some time by completing the first equation for all of the T statistics and then copy and paste all the columns at once.
 Go to the empty columns to the right on your worksheet. Create new column headings in the top cells to label the P value that you will compute. Name them with the pattern <wt, or dCIN5, etc. for the strain>_<Pval>_<tx> where you use the appropriate text within the <> and where x is the time. For example, "wt_Pval_t15". In the cell below the label, enter the equation:
=TDIST(ABS(cell containing T statistic),degrees of freedom,2)
For example, your equation might read:
=TDIST(ABS(AE2),3,2)
The number of degrees of freedom is the number of replicates minus one, so in our case there are 3 degrees of freedom. Copy the equation and paste it into all rows in that column.
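The T statistic above is a one-sample t test of the replicate log fold changes against zero. A small Python sketch of the same calculation (the replicate values are made up; Excel's TDIST step would require a t-distribution CDF, e.g. from a statistics library, so only the statistic itself is computed here):

```python
import math
import statistics

def t_statistic(replicates):
    """One-sample t statistic against 0, as in the Excel formula:
    AVERAGE(range)/(STDEV(range)/SQRT(number of replicates))."""
    n = len(replicates)
    return statistics.mean(replicates) / (statistics.stdev(replicates) / math.sqrt(n))

# Made-up scaled log fold changes for one gene at a 4-replicate timepoint:
gene = [1.0, 1.2, 0.8, 1.0]
t = t_statistic(gene)
# Excel's =TDIST(ABS(t), 3, 2) would then give the two-tailed p value,
# with n - 1 = 3 degrees of freedom.
```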
 Insert a new worksheet and name it "final".
 Go back to the "statistics" worksheet and Select All and Copy.
 Go to your new sheet and click on cell A1 and select Paste Special, click on the Values radio button, and click OK. This is your final worksheet from which we will perform biological analysis of the data.
 Select all of the columns containing Fold Changes. Select the menu item Format > Cells. Under the number tab, select 2 decimal places. Click OK.
 Select all of the columns containing T statistics or P values. Select the menu item Format > Cells. Under the number tab, select 4 decimal places. Click OK.
 Upload the .xls file that you have just created to LionShare. Give Dr. Dahlquist (username kdahlqui) and Dr. Fitzpatrick (username bfitzpatrick) permission to download your file. Send an email to each of us with the link to the file.
Sanity Check: Number of genes significantly changed
Before we move on to the biological analysis of the data, we want to perform a sanity check to make sure that we performed our data analysis correctly. We are going to find out the number of genes that are significantly changed at various p value cutoffs.
 Open your spreadsheet and go to the "final" worksheet.
 Click on cell A1 and select the menu item Data > Filter > Autofilter. Little dropdown arrows should appear at the top of each column. This will enable us to filter the data according to criteria we set.
 Click on the dropdown arrow on one of your "Pval" columns. Select "Custom". In the window that appears, set a criterion that will filter your data so that the P value has to be less than 0.05.
 How many genes have p value < 0.05?
 What about p < 0.01?
 What about p < 0.001?
 What about p < 0.0001?
 Answer these questions for each timepoint in your dataset.
 When we use a p value cutoff of p < 0.05, we are saying that, by chance alone, we would see a gene expression change that deviates this far from zero less than 5% of the time.
 We have just performed 6189 T tests for significance. Another way to state what we are seeing with p < 0.05 is that we would expect to see this magnitude of a gene expression change in about 5% of our T tests, or roughly 309 times, by chance alone. Since more genes than that pass this cutoff, we know that some genes are significantly changed. However, we don't know which ones.
 There is a simple correction that can be made to the p values to increase the stringency called the Bonferroni correction. To perform this correction, multiply the p value by the number of statistical tests performed (in our case 6189) and see whether any of the p values are still less than 0.05.
 Perform this correction and determine whether and how many of the genes are still significantly changed at p < 0.05 after the Bonferroni correction.
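As a sketch of what the Bonferroni correction does, in Python (the raw p values below are invented for illustration):

```python
def bonferroni(p_values, n_tests=6189):
    """Multiply each p value by the number of tests performed, capping at 1."""
    return [min(p * n_tests, 1.0) for p in p_values]

# Invented raw p values for a few genes:
raw = [0.000004, 0.0009, 0.03]
corrected = bonferroni(raw)
significant = [p for p in corrected if p < 0.05]
# Only the first gene survives the correction (0.000004 * 6189 is about 0.025);
# the other two corrected values are capped at 1.0.
```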
 The "AvgLogFC" tells us the magnitude of the gene expression change and in which direction. Positive values are increases relative to the control; negative values are decreases relative to the control. For the timepoint that had the greatest number of genes significantly changed at p < 0.05, answer the following:
 Keeping the "Pval" filter at p < 0.05, filter the "AvgLogFC" column to show all genes with an average log fold change greater than zero. How many meet these two criteria?
 Keeping the "Pval" filter at p < 0.05, filter the "AvgLogFC" column to show all genes with an average log fold change less than zero. How many meet these two criteria?
 Keeping the "Pval" filter at p < 0.05, how many genes have an average log fold change of > 0.25?
 How many have an average log fold change of < -0.25? (These are more realistic cutoff values because a log2 fold change of 0.25 represents about a 20% change in expression, which is about the level of detection of this technology.)
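The AutoFilter criteria above amount to counting genes that satisfy both a p-value cutoff and a fold-change cutoff. A minimal Python sketch of that count (the gene rows below are invented for illustration):

```python
def count_passing(rows, p_cutoff=0.05, fc_cutoff=0.25):
    """Count genes with p < p_cutoff and AvgLogFC beyond +/- fc_cutoff.
    Each row is an (avg_log_fc, p_value) pair; returns (up, down) counts."""
    up = sum(1 for fc, p in rows if p < p_cutoff and fc > fc_cutoff)
    down = sum(1 for fc, p in rows if p < p_cutoff and fc < -fc_cutoff)
    return up, down

# Invented (AvgLogFC, Pval) pairs:
rows = [(1.3, 0.001), (-0.9, 0.02), (0.1, 0.04), (2.0, 0.3)]
up, down = count_passing(rows)
# One gene passes as increased, one as decreased; the third has too small
# a fold change and the fourth fails the p-value cutoff.
```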
 In summary, the p value cutoff should not be thought of as some magical number at which data becomes "significant". Instead, it is a moveable confidence level. If we want to be very confident of our data, use a small p value cutoff. If we are OK with being less confident about a gene expression change and want to include more genes in our analysis, we can use a larger p value cutoff.
 The expression of the gene NSR1 (ID: YGR159C) is known to be induced by cold shock. Find NSR1 in your dataset. Is its expression significantly changed at any timepoint? Record the average fold change and p value for NSR1 for each timepoint in your dataset.
 Which gene has the smallest p value in your dataset (at any timepoint)? You can find this by sorting your data based on p value (but be careful that you don't cause a mismatch in the rows of your data!). Look up the function of this gene at the Saccharomyces Genome Database and record it in your notebook. Why do you think the cell is changing this gene's expression upon cold shock?