BIOL398-01/S11:Week 11

This journal entry is due on Tuesday, April 5 at midnight PDT (Monday night/Tuesday morning). NOTE the new due date, and that the server records the time as Eastern Daylight Time (EDT). Therefore, midnight PDT will register as 03:00 EDT.

Individual Journal Assignment

 * Store this journal entry as "username Week 11" (i.e., this is the text to place between the square brackets when you link to this page).
 * Create the following set of links. (HINT: you can do all of this easily by adding them to your template and then using the template on your pages.)
 * Link to your journal entry from your user page.
 * Link back from your journal entry to your user page.
 * Link to this assignment from your journal entry.
 * Don't forget to add the "BIOL398-01/S11" category to the end of your wiki page.

Background
This is a list of steps required to analyze DNA microarray data.


 * 1) Quantitate the fluorescence signal in each spot
 * 2) Calculate the ratio of red/green fluorescence
 * 3) Log transform the ratios
 * 4) Normalize the ratios on each microarray slide
    * Steps 1-4 are performed by the GenePix Pro software. You will perform the following steps:
 * 5) Normalize the ratios for a set of slides in an experiment
 * 6) Perform statistical analysis on the ratios
 * 7) Compare individual genes with known data
    * Steps 5-7 are performed in Microsoft Excel.
 * 8) Use pattern finding algorithms (clustering)
 * 9) Map onto biological pathways
    * We will use software called STEM for the clustering and mapping (steps 8-9).
 * 10) Create a mathematical model of the transcriptional network

Each group will analyze a different microarray dataset:
 * Wild type data from the Schade et al. (2004) paper you read last week.
 * Wild type data from the Dahlquist lab.
 * Δgln3 data from the Dahlquist lab.

For your assignment this week, you will keep an electronic laboratory notebook on your individual wiki page that records all the manipulations you perform on the data and the answers to the questions throughout the protocol.

You will download your assigned Excel spreadsheet from LionShare. Because the Dahlquist Lab data is unpublished, please do not post it on this public wiki. Instead, keep the file(s) on LionShare, which is protected by a password.


 * Groups:
 * Schade et al. (2004) data: Carmen, James
 * Dahlquist lab wild type data: Sarah, Nick
 * Dahlquist lab Δgln3 data: Alondra

Experimental Design
On the spreadsheet, each row contains the data for one gene (one spot on the microarray). The first column (labeled "MasterIndex") numbers the rows in the spreadsheet so that we can match the data from different experiments together later. The second column (labeled "ID") contains the gene identifier from the Saccharomyces Genome Database. Each subsequent column contains the log2 ratio of the red/green fluorescence from each microarray hybridized in the experiment (steps 1-4 above having been done for you by the scanner software).

Each of the column headings in the data begins with the experiment name ("Schade" for Schade wild type data, "wt" for Dahlquist wild type data, and "dGLN3" for the Dahlquist Δgln3 data). "LogFC" stands for "Log2 Fold Change," which is the log2 red/green ratio. The timepoints are designated as "t" followed by a number in minutes. Replicates are numbered as "-0", "-1", "-2", etc. after the timepoint.

For the Schade data, the timepoints are t0, t10, t30, t120, t12h (12 hours), and t60 (60 hours) of cold shock at 10°C.

For the Dahlquist data (both wild type and Δgln3), the timepoints are t15, t30, t60 (cold shock at 13°C) and t90 and t120 (cold shock at 13°C followed by 30 or 60 minutes of recovery at 30°C). Note that the experimental designs are different.
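The column-heading convention above can be parsed programmatically. As a minimal sketch (in Python, assuming headings exactly follow the `<experiment>_LogFC_t<timepoint>-<replicate>` pattern described above), a regular expression splits a heading into its parts:

```python
import re

# Parse headings like "wt_LogFC_t15-1" into (experiment, timepoint, replicate),
# per the naming convention described above. The regex assumes the exact
# pattern <experiment>_LogFC_t<timepoint>-<replicate>.
HEADING = re.compile(r"^(?P<expt>.+)_LogFC_t(?P<time>\w+)-(?P<rep>\d+)$")

m = HEADING.match("wt_LogFC_t15-1")
parts = (m["expt"], m["time"], m["rep"])
# parts is ("wt", "15", "1")
```

This is only an illustration of the convention; in the assignment itself you will work with the headings directly in Excel.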


 * Begin by recording in your wiki the number of replicates for each time point in your data. For the group assigned to the Schade data, compare the number of replicates with what is stated in the Materials and Methods for the paper.  Is it the same?  If not, how is it different?

Normalize the ratios for a set of slides in an experiment
To scale and center the data (between-chip normalization) perform the following operations:

 * Insert a new Worksheet into your Excel file and name it "scaled_centered".
 * Go back to the "compiled_raw_data" worksheet, Select All, and Copy. Go to your new "scaled_centered" worksheet, click on the upper, left-hand cell (cell A1), and Paste.
 * Insert two rows in between the top row of headers and the first data row.
 * In cell A2, type "Average" and in cell A3, type "StdDev".
 * You will now compute the Average log ratio for each chip (each column of data). In cell C2, type the equation =AVERAGE(C4:C6190) and press "Enter". Excel computes the average value of the cells specified in the range given inside the parentheses. Instead of typing the cell designations, you can left-click on the beginning cell (let go of the mouse button), scroll down to the bottom of the worksheet, and shift-left-click on the ending cell.
 * You will now compute the Standard Deviation of the log ratios on each chip (each column of data). In cell C3, type the equation =STDEV(C4:C6190) and press "Enter".
 * Excel will now do some work for you. Copy these two equations (cells C2 and C3) and paste them into the empty cells in the rest of the columns.  Excel will automatically change the equations to match the cell designations for those columns.
 * You have now computed the average and standard deviation of the log ratios for each chip. Now we will actually do the scaling and centering based on these values.
 * Insert a new column to the right of each data column and give it the same name as the column to its left, adding "_sc" (for "scaled and centered"). For example, "wt_LogFC_t15-1_sc".
 * In cell D4, type the equation =(C4-C$2)/C$3. In this case, we want the data in cell C4 to have the average (cell C2) subtracted from it and then be divided by the standard deviation (cell C3). We use the dollar sign symbols in front of the row numbers to tell Excel to always reference those rows in the equation, even though we will paste it for the entire column. Why is this important?
 * Copy and paste this equation into the entire column.
 * Repeat the scaling and centering equation for each of the columns of data. You can copy and paste the formula above, but be sure that your equation is correct for the column you are calculating.
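The steps above compute a per-column z-score. As a minimal sketch (in Python with numpy, using made-up example values rather than the real 6187-row dataset), this is what the Excel formulas compute:

```python
import numpy as np

# Hypothetical log2 ratios for one chip (one column of data);
# the real worksheet would have ~6187 rows.
log_ratios = np.array([0.5, -1.2, 0.3, 2.0, -0.6])

# Equivalent of =AVERAGE(C4:C6190) and =STDEV(C4:C6190).
# Excel's STDEV is the sample standard deviation (ddof=1).
avg = log_ratios.mean()
std = log_ratios.std(ddof=1)

# Equivalent of =(C4-C$2)/C$3, applied to the whole column at once.
scaled_centered = (log_ratios - avg) / std
# After this, each column has mean ~0 and standard deviation 1,
# which is what makes the chips comparable to each other.
```

The dollar signs in the Excel formula matter because every data row must divide by the same column-wide average and standard deviation, exactly as the single `avg` and `std` values are reused for the whole array here.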

Perform statistical analysis on the ratios
We are going to perform this step on the scaled and centered data you produced in the previous step.

 * Insert a new worksheet into your Excel spreadsheet and name it "statistics".
 * Go back to the "scaled_centered" worksheet, Select All, and Copy. Go to your new "statistics" worksheet, click on the upper, left-hand cell (cell A1), and select "Paste Special" from the Edit menu.  A window will open: click on the radio button for "Values" and click OK.  This will paste the numerical results into your new worksheet instead of the equations, which must otherwise be recalculated on the fly.
 * There may be some non-numerical values in some of the cells in your worksheet. These are errors created when Excel tries to compute an equation on a cell that has no data.  We need to remove these error messages before going on to the next step.
 * Scan through your spreadsheet to find an example of the error message. Then go to the Edit menu and select Replace.  A window will open; type the text you are replacing in the "Find what:" field.  In the "Replace with:" field, enter a single space character.  Click the "Replace All" button and record the number of replacements made on your wiki page.
 * We are now going to work with your scaled and centered Log Fold Changes only, so delete the columns containing the raw Log Fold Changes, leaving only the columns that have the "_sc" suffix in their headings. You may also delete the second and third rows, where you computed the average and standard deviation for each chip.
 * Go to the empty columns to the right on your worksheet. Create new column headings in the top cells to label the average log fold changes that you will compute.  Name them with the pattern <experiment name>_AvgLogFC_t<x>, where you fill in the appropriate text and x is the time.  For example, "wt_AvgLogFC_t15".
 * Compute the average log fold change for the replicates for each timepoint by typing the equation =AVERAGE(range of cells in the row for that timepoint) into the second cell below the column heading. For example, your equation might read =AVERAGE(C2:F2). Copy this equation and paste it into the rest of the column.
 * Create the equation for the rest of the timepoints and paste it into their respective columns. Note that you can save yourself some time by completing the first equation for all of the averages and then copying and pasting all the columns at once.
 * Go to the empty columns to the right on your worksheet. Create new column headings in the top cells to label the T statistic that you will compute.  Name them with the pattern <experiment name>_Tstat_t<x>.  For example, "wt_Tstat_t15".  You will now compute a T statistic that tells you whether the scaled and centered average log fold change is significantly different from 0 (no change in expression).  Enter the equation =AVERAGE(range of cells)/(STDEV(range of cells)/SQRT(number of replicates)) into the second cell below the column heading. For example, your equation might read =AVERAGE(C2:F2)/(STDEV(C2:F2)/SQRT(4)). (NOTE: in this case the number of replicates is 4. Be careful that you are using the correct number of parentheses.)  Copy the equation and paste it into all rows in that column. Create the equation for the rest of the timepoints and paste it into their respective columns; you can save time by completing the first equation for all of the T statistics and then copying and pasting all the columns at once.
 * Go to the empty columns to the right on your worksheet. Create new column headings in the top cells to label the P value that you will compute.  Name them with the pattern <experiment name>_Pval_t<x>.  For example, "wt_Pval_t15".  In the cell below the label, enter the equation =TDIST(ABS(cell containing T statistic),degrees of freedom,2). For example, your equation might read =TDIST(ABS(AE2),3,2). The number of degrees of freedom is the number of replicates minus one, so in our case there are 3 degrees of freedom. Copy the equation and paste it into all rows in that column.
 * Insert a new worksheet and name it "final".
 * Go back to the "statistics" worksheet and Select All and Copy.
 * Go to your new sheet and click on cell A1 and select Paste Special, click on the Values radio button, and click OK. This is your final worksheet from which we will perform biological analysis of the data.
 * Select all of the columns containing Fold Changes. Select the menu item Format > Cells.  Under the number tab, select 2 decimal places.  Click OK.
 * Select all of the columns containing T statistics or P values. Select the menu item Format > Cells.  Under the number tab, select 4 decimal places. Click OK.
 * Upload the .xls file that you have just created to LionShare. Give Dr. Dahlquist (username kdahlqui) and Dr. Fitzpatrick (username bfitzpatrick) permission to download your file.  Send an e-mail to each of us with the link to the file.
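The T statistic and TDIST formulas above together implement a one-sample, two-tailed t-test of whether a gene's mean log fold change differs from zero. A sketch of the same computation (in Python with scipy, using invented replicate values, and cross-checked against scipy's built-in test):

```python
import numpy as np
from scipy import stats

# Hypothetical scaled and centered log fold changes for one gene
# at one timepoint (4 replicates, as in the example formulas).
replicates = np.array([0.8, 1.1, 0.6, 0.9])
n = len(replicates)

# Equivalent of =AVERAGE(range)/(STDEV(range)/SQRT(n)).
t_stat = replicates.mean() / (replicates.std(ddof=1) / np.sqrt(n))

# Equivalent of =TDIST(ABS(t statistic), n-1, 2):
# two-tailed p value with n-1 degrees of freedom.
p_value = 2 * stats.t.sf(abs(t_stat), df=n - 1)

# Cross-check against scipy's built-in one-sample t-test against 0.
t_check, p_check = stats.ttest_1samp(replicates, 0.0)
```

Seeing the two formulas as one test makes clear why the degrees of freedom is the number of replicates minus one, and why the final argument to TDIST is 2 (two-tailed: a change in either direction counts).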

Sanity Check: Number of genes significantly changed
Before we move on to the biological analysis of the data, we want to perform a sanity check to make sure that we performed our data analysis correctly. We are going to find out the number of genes that are significantly changed at various p value cut-offs and also compare our data analysis with the published results of Schade et al. (2004).


 * Open your spreadsheet and go to the "final" worksheet.
 * Click on cell A1 and select the menu item Data > Filter > Autofilter. Little drop-down arrows should appear at the top of each column.  This will enable us to filter the data according to criteria we set.
 * Click on the drop-down arrow on one of your "Pval" columns. Select "Custom".  In the window that appears, set a criterion that will filter your data so that the P value has to be less than 0.05.
 * '''How many genes have p value < 0.05?'''
 * '''What about p < 0.01?'''
 * '''What about p < 0.001?'''
 * '''What about p < 0.0001?'''
 * Answer these questions for each timepoint in your dataset.
 * When we use a p value cut-off of p < 0.05, we are saying that, by chance alone, we would expect to see a gene expression change that deviates this far from zero less than 5% of the time.
 * We have just performed 6189 T tests for significance. Another way to state what we are seeing with p < 0.05 is that we would expect this magnitude of gene expression change in about 5% of our T tests, or about 309 of them, by chance alone.  Since more genes than that pass this cut-off, we know that some genes are significantly changed.  However, we don't know which ones.
 * There is a simple correction, called the Bonferroni correction, that can be made to the p values to increase the stringency. To perform this correction, multiply each p value by the number of statistical tests performed (in our case, 6189) and see whether any of the p values are still less than 0.05.
 * Perform this correction and determine whether and how many of the genes are still significantly changed at p < 0.05 after the Bonferroni correction.
 * The "AvgLogFC" tells us the magnitude of the gene expression change and in which direction. Positive values are increases relative to the control; negative values are decreases relative to the control.  For the timepoint that had the greatest number of genes significantly changed at p < 0.05, answer the following:
 * '''Keeping the "Pval" filter at p < 0.05, filter the "AvgLogFC" column to show all genes with an average log fold change greater than zero. How many genes meet these two criteria?'''
 * '''Keeping the "Pval" filter at p < 0.05, filter the "AvgLogFC" column to show all genes with an average log fold change less than zero. How many genes meet these two criteria?'''
 * '''Keeping the "Pval" filter at p < 0.05, how many genes have an average log fold change > 0.25?'''
 * '''How many genes have an average log fold change < -0.25 and p < 0.05? (These are more realistic fold change cut-offs because they represent about a 20% change, which is about the level of detection of this technology.)'''
 * In summary, the p value cut-off should not be thought of as some magical number at which data becomes "significant". Instead, it is a moveable confidence level.  If we want to be very confident of our data, use a small p value cut-off.  If we are OK with being less confident about a gene expression change and want to include more genes in our analysis, we can use a larger p value cut-off.
 * What criteria did Schade et al. (2004) use to determine a significant gene expression change? How does it compare to our method?
 * The expression of the gene NSR1 (ID: YGR159C) is known to be induced by cold shock. (Recall that it is specifically mentioned in the Schade et al. (2004) paper.)  Find NSR1 in your dataset.  Is its expression significantly changed at any timepoint?  Record the average fold change and p value for NSR1 for each timepoint in your dataset.
 * Which gene has the smallest p value in your dataset (at any timepoint)? You can find this by sorting your data based on p value (but be careful that you don't cause a mismatch in the rows of your data!)  Look up the function of this gene at the Saccharomyces Genome Database and record it in your notebook.  Why do you think the cell is changing this gene's expression upon cold shock?
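The filtering and Bonferroni steps above can be sketched in a few lines (Python with numpy; randomly generated p values stand in for the real dataset, so the counts here are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the 6189 p values in the "final" worksheet.
p_values = rng.uniform(0.0, 1.0, size=6189)

# Counts at the cut-offs used in the sanity check.
counts = {c: int((p_values < c).sum()) for c in (0.05, 0.01, 0.001, 0.0001)}

# Under the null hypothesis, ~5% of tests pass p < 0.05 by chance alone.
expected_by_chance = 0.05 * len(p_values)  # about 309

# Bonferroni correction: multiply each p value by the number of tests
# (capping at 1, since a probability cannot exceed 1), then re-apply
# the 0.05 cut-off.
bonferroni = np.minimum(p_values * len(p_values), 1.0)
significant = int((bonferroni < 0.05).sum())
```

Note that the Bonferroni-corrected count can only shrink relative to the uncorrected count, which is the point: it trades sensitivity for confidence that the genes that remain are not chance findings.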

Shared Journal Assignment

 * Store your journal entry in the shared Class Journal Week 11 page.  If this page does not exist yet, go ahead and create it (congratulations on getting in first :) )
 * Link to your journal entry from your user page.
 * Link back from the journal entry to your user page.
 * Sign your portion of the journal with the standard wiki signature shortcut.
 * Add the "BIOL398-01/S11" category to the end of the wiki page (if someone has not already done so).

Reflection

 * 1) What aspect of this assignment came most easily to you?
 * 2) What aspect of this assignment was the most challenging for you?
 * 3) What (yet) do you not understand?
 * 4) Does "crunching" the data yourself help you to understand microarray experiments in general and the Schade paper in particular?  Why or why not?