James C. Clements: Week 11

From OpenWetWare

Revision as of 21:02, 4 April 2011

Responses to questions given in protocol

These are responses to the questions from the protocol found on BIOL398-01/S11:Week 11

  • Number of replicates for each timepoint:
    • t0: 3 replicates
    • t10: 7 replicates
    • t30: 6 replicates
    • t120: 4 replicates
    • t720: 4 replicates
    • t3600: 6 replicates
    • The number of replicates found in the Excel file was greater than the number of biologically independent replicates reported in the Schade paper. This could mean that some of the replicates in the Excel sheet are not biologically independent.
  • Number of genes significantly changed at each timepoint for the given P value cut-offs (counts are listed in order: t0, t10, t30, t120, t720, t3600)
    • How many genes have p value < 0.05?
      • 156 612 574 991 1507 930
    • What about p < 0.01?
      • 26 260 214 435 852 469
    • What about p < 0.001?
      • 2 69 52 110 268 154
    • What about p < 0.0001?
      • 0 14 6 15 46 29
  • Perform this correction and determine whether and how many of the genes are still significantly changed at p < 0.05 after the Bonferroni correction. (Solutions are listed in order from t0, t10, t30, t120, t720, t3600)
    • P = 0.05
      • 0 3 0 1 3 8
    • P = 0.01
      • 0 0 0 1 0 1
    • P = 0.001
      • 0 0 0 0 0 0
    • P = 0.0001
      • 0 0 0 0 0 0
  • For the timepoint with the most genes significantly changed at P < 0.05 (the 720-minute timepoint in my analysis), count the genes meeting P < 0.05 (Bonferroni-corrected P values were assumed, since the protocol did not specify) for different average log fold change (AvgLogFC) criteria.
    • AvgLogFC > 0
      • 0 genes
    • AvgLogFC > 0.25
      • 0 genes
    • AvgLogFC < 0
      • 8 genes
    • AvgLogFC < -0.25
      • 8 genes
  • What criteria did Schade use? How does it compare?
    • The Schade et al. (2004) paper is somewhat vague about how this part of the analysis was done. The GeneSpring software was used to perform the statistical analysis, but the paper does not state what type of statistical test was applied. Some searching was done to determine how GeneSpring flags a gene as significantly changed, but that information was not easily accessible.
  • NSR1 Average fold change and P values. Is it significant?
    • AvgLogFC:
      • 0.6841 -0.7194 -3.1920 -3.6023 -1.7610 -0.4052
    • P:
      • 0.7557 0.0005864 0.001090 0.0004374 0.0001921 0.1391
    • No significant change at P = 0.05 using the Bonferroni correction, but without the correction there are significant changes at 10, 30, 120, 720, and 3600 minutes.
  • Gene with lowest P value.
    • Index number 3328 had the smallest P value in my data. This is systematic name YNL316C, standard name PHA2, which is used in the synthesis of phenylalanine, an essential amino acid. The gene shows significant change only at the 2-hour mark. Its AvgLogFC was -1.5751, so it was downregulated. It may be difficult for the cell to make proteins or amino acids at the 2-hour time point, so the cell would downregulate the gene.


Protocol:

A MATLAB file was created and used to follow the procedure below. The beginning of the MATLAB script details the preprocessing of the data.

Microarray Data Analysis

Background

This is a list of steps required to analyze DNA microarray data.

  1. Quantitate the fluorescence signal in each spot
  2. Calculate the ratio of red/green fluorescence
  3. Log transform the ratios
  4. Normalize the ratios on each microarray slide
    • Steps 1-4 are performed by the GenePix Pro software.
    • You will perform the following steps:
  5. Normalize the ratios for a set of slides in an experiment
  6. Perform statistical analysis on the ratios
  7. Compare individual genes with known data
    • Steps 5-7 are performed in Microsoft Excel
  8. Pattern finding algorithms (clustering)
  9. Map onto biological pathways
    • We will use software called STEM for the clustering and mapping
  10. Create mathematical model of transcriptional network

Each group will analyze a different microarray dataset:

  • Wild type data from the Schade et al. (2004) paper you read last week.
  • Wild type data from the Dahlquist lab.
  • Δgln3 data from the Dahlquist lab.

For your assignment this week, you will keep an electronic laboratory notebook on your individual wiki page that records all the manipulations you perform on the data and the answers to the questions throughout the protocol.

You will download your assigned Excel spreadsheet from LionShare. Because the Dahlquist Lab data is unpublished, please do not post it on this public wiki. Instead, keep the file(s) on LionShare, which is protected by a password.

  • Groups:
    • Schade et al. (2004) data: Carmen, James
    • Dahlquist lab wild type data: Sarah, Nick
    • Dahlquist lab Δgln3 data: Alondra

Experimental Design

On the spreadsheet, each row contains the data for one gene (one spot on the microarray). The first column (labeled "MasterIndex") numbers the rows in the spreadsheet so that we can match the data from different experiments together later. The second column (labeled "ID") contains the gene identifier from the Saccharomyces Genome Database. Each subsequent column contains the log2 ratio of the red/green fluorescence from each microarray hybridized in the experiment (steps 1-4 above having been done for you by the scanner software).

Each of the column headings in the data begins with the experiment name ("Schade" for Schade wild type data, "wt" for Dahlquist wild type data, and "dGLN3" for the Dahlquist Δgln3 data). "LogFC" stands for "Log2 Fold Change", which is the Log2 red/green ratio. The timepoints are designated as "t" followed by a number in minutes. Replicates are numbered as "-0", "-1", "-2", etc. after the timepoint.

For the Schade data, the timepoints are t0, t10, t30, t120, t12h (12 hours), and t60 (60 hours) of cold shock at 10°C.

For the Dahlquist data (both wild type and Δgln3), the timepoints are t15, t30, t60 (cold shock at 13°C) and t90 and t120 (cold shock at 13°C followed by 30 or 60 minutes of recovery at 30°C). Note that the experimental designs are different.

  • Begin by recording in your wiki the number of replicates for each time point in your data. For the group assigned to the Schade data, compare the number of replicates with what is stated in the Materials and Methods for the paper. Is it the same? If not, how is it different?
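The replicate counts requested above can also be tallied programmatically. The sketch below assumes column headings follow the "<experiment>_LogFC_<timepoint>-<replicate>" pattern described above; the heading list itself is made up for illustration.

```python
import re
from collections import Counter

def replicates_per_timepoint(headings):
    """Tally replicate columns per timepoint from headings shaped like
    '<experiment>_LogFC_<timepoint>-<replicate>', e.g. 'wt_LogFC_t15-1'."""
    counts = Counter()
    for heading in headings:
        match = re.match(r".+_LogFC_(t\w+)-(\d+)$", heading)
        if match:
            counts[match.group(1)] += 1
    return dict(counts)

cols = ["wt_LogFC_t15-0", "wt_LogFC_t15-1", "wt_LogFC_t30-0"]  # made-up headings
print(replicates_per_timepoint(cols))  # {'t15': 2, 't30': 1}
```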

Normalize the ratios for a set of slides in an experiment

To scale and center the data (between-chip normalization) perform the following operations:

  • Insert a new Worksheet into your Excel file, and name it "scaled_centered".
  • Go back to the "compiled_raw_data" worksheet, Select All and Copy. Go to your new "scaled_centered" worksheet, click on the upper, left-hand cell (cell A1) and Paste.
  • Insert two rows in between the top row of headers and the first data row.
  • In cell A2, type "Average" and in cell A3, type "StdDev".
  • You will now compute the Average log ratio for each chip (each column of data). In cell C2, type the following equation:
=AVERAGE(C4:C6190)

and press "Enter". Excel is computing the average value of the cells specified in the range given inside the parentheses. Instead of typing the cell designations, you can left-click on the beginning cell (let go of the mouse button), scroll down to the bottom of the worksheet, and shift-left-click on the ending cell.

  • You will now compute the Standard Deviation of the log ratios on each chip (each column of data). In cell B3, type the following equation:
=STDEV(C4:C6190)

and press "Enter".

  • Excel will now do some work for you. Copy these two equations (cells C2 and C3) and paste them into the empty cells in the rest of the columns. Excel will automatically change the equation to match the cell designations for those columns.
  • You have now computed the average and standard deviation of the log ratios for each chip. Now we will actually do the scaling and centering based on these values.
  • Insert a new column to the right of each data column and label the top of the new column with the same name as the column to the left, adding "_sc" (for scaled and centered) to the name. For example, "wt_LogFC_t15-1_sc".
  • In cell D4, type the following equation:
=(C4-C$2)/C$3

In this case, we want the data in cell C4 to have the average subtracted from it (cell C2) and be divided by the standard deviation (cell C3). We use the dollar sign symbols in front of the number to tell Excel to always reference that row in the equation, even though we will paste it for the entire column. Why is this important?

  • Copy and paste this equation into the entire column.
  • Repeat the scaling and centering equation for each of the columns of data. You can copy and paste the formula above, but be sure that your equation is correct for the column you are calculating.
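As a cross-check on the spreadsheet work, the scaling and centering of one chip's column can be sketched in Python. This is a minimal sketch, not part of the assigned Excel protocol; the chip values are made up, and the sample standard deviation is used to match Excel's STDEV.

```python
def scale_and_center(column):
    """Center one chip's log ratios on 0 and scale them to unit standard
    deviation, mirroring the spreadsheet formula =(C4-C$2)/C$3.
    Uses the sample standard deviation, as Excel's STDEV does."""
    n = len(column)
    mean = sum(column) / n
    sd = (sum((x - mean) ** 2 for x in column) / (n - 1)) ** 0.5
    return [(x - mean) / sd for x in column]

chip = [0.2, -0.5, 1.3, 0.0]     # made-up log ratios for one chip
scaled = scale_and_center(chip)  # now has mean 0 and sample std dev 1
```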

Perform statistical analysis on the ratios

We are going to perform this step on the scaled and centered data you produced in the previous step.

  • Insert a new worksheet into your Excel spreadsheet and name it "statistics".
  • Go back to the "scaled_centered" worksheet, Select All and Copy. Go to your new "statistics" worksheet, click on the upper, left-hand cell (cell A1) and Select "Paste Special" from the Edit menu. A window will open: click on the radio button for "Values" and click OK. This will paste the numerical result into your new worksheet instead of the equation which must make calculations on the fly.
    • There may be some non-numerical values in some of the cells in your worksheet. This is due to errors created when Excel tries to compute an equation on a cell that has no data. We need to go through and remove these error messages before going on to the next step.
    • Scan through your spreadsheet to find an example of the error message. Then go to the Edit menu and Select Replace. A window will open; type the text you are replacing in the "Find what:" field. In the "Replace with:" field, enter a single space character. Click on the button "Replace All" and record the number of replacements made in your wiki page.
  • We are now going to work with your scaled and centered Log Fold Changes only, so delete the columns containing the raw Log Fold changes, leaving only the columns that have the "_sc" suffix in their column headings. You may also delete the second and third rows where you computed the average and standard deviations for each chip.
  • Go to the empty columns to the right on your worksheet. Create new column headings in the top cells to label the average log fold changes that you will compute. Name them with the pattern <Schade, wt, or dGLN3>_<AvgLogFC>_<tx> where you use the appropriate text within the <> and where x is the time. For example, "wt_AvgLogFC_t15".
  • Compute the average log fold change for the replicates for each timepoint by typing the equation:
=AVERAGE(range of cells in the row for that timepoint)

into the second cell below the column heading. For example, your equation might read

=AVERAGE(C2:F2)

Copy this equation and paste it into the rest of the column.

  • Create the equation for the rest of the timepoints and paste it into their respective columns. Note that you can save yourself some time by completing the first equation for all of the averages and then copy and paste all the columns at once.
  • Go to the empty columns to the right on your worksheet. Create new column headings in the top cells to label the T statistic that you will compute. Name them with the pattern <Schade, wt, or dGLN3>_<Tstat>_<tx> where you use the appropriate text within the <> and where x is the time. For example, "wt_Tstat_t15". You will now compute a T statistic that tells you whether the scaled and centered average log fold change is significantly different than 0 (no change in expression). Enter the equation into the second cell below the column heading:
=AVERAGE(range of cells)/(STDEV(range of cells)/SQRT(number of replicates))

For example, your equation might read:

=AVERAGE(C2:F2)/(STDEV(C2:F2)/SQRT(4))

(NOTE: in this case the number of replicates is 4. Be careful that you are using the correct number of parentheses.) Copy the equation and paste it into all rows in that column. Create the equation for the rest of the timepoints and paste it into their respective columns. Note that you can save yourself some time by completing the first equation for all of the T statistics and then copy and paste all the columns at once.

  • Go to the empty columns to the right on your worksheet. Create new column headings in the top cells to label the P value that you will compute. Name them with the pattern <Schade, wt, or dGLN3>_<Pval>_<tx> where you use the appropriate text within the <> and where x is the time. For example, "wt_Pval_t15". In the cell below the label, enter the equation:
=TDIST(ABS(cell containing T statistic),degrees of freedom,2)

For example, your equation might read:

=TDIST(ABS(AE2),3,2)

The number of degrees of freedom is the number of replicates minus one, so in our case there are 3 degrees of freedom. Copy the equation and paste it into all rows in that column.
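For anyone checking the spreadsheet values outside Excel, the T statistic and two-tailed p value for one gene at one timepoint can be sketched as below. The replicate values are made up, and the closed-form CDF shown is valid only for exactly 3 degrees of freedom (4 replicates), matching the TDIST(ABS(AE2),3,2) example; it is not a general TDIST replacement.

```python
from math import atan, pi, sqrt

def t_statistic(replicates):
    """One-sample t statistic against 0: mean / (sample std dev / sqrt(n)),
    matching =AVERAGE(...)/(STDEV(...)/SQRT(n))."""
    n = len(replicates)
    mean = sum(replicates) / n
    sd = sqrt(sum((x - mean) ** 2 for x in replicates) / (n - 1))
    return mean / (sd / sqrt(n))

def two_tailed_p_df3(t):
    """Two-tailed p value for the t distribution with exactly 3 degrees of
    freedom, using that special case's closed-form CDF; equivalent to
    Excel's TDIST(ABS(t), 3, 2)."""
    x = abs(t)
    cdf = 0.5 + (x / (sqrt(3) * (1 + x * x / 3)) + atan(x / sqrt(3))) / pi
    return 2 * (1 - cdf)

t = t_statistic([1.0, 2.0, 3.0, 4.0])  # made-up replicates; 4 replicates -> 3 df
p = two_tailed_p_df3(t)                # about 0.031
```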

  • Insert a new worksheet and name it "final".
  • Go back to the "statistics" worksheet and Select All and Copy.
  • Go to your new sheet and click on cell A1 and select Paste Special, click on the Values radio button, and click OK. This is your final worksheet from which we will perform biological analysis of the data.
  • Select all of the columns containing Fold Changes. Select the menu item Format > Cells. Under the number tab, select 2 decimal places. Click OK.
  • Select all of the columns containing T statistics or P values. Select the menu item Format > Cells. Under the number tab, select 4 decimal places. Click OK.
  • Upload the .xls file that you have just created to LionShare. Give Dr. Dahlquist (username kdahlqui) and Dr. Fitzpatrick (username bfitzpatrick) permission to download your file. Send an e-mail to each of us with the link to the file.

Sanity Check: Number of genes significantly changed

Before we move on to the biological analysis of the data, we want to perform a sanity check to make sure that we performed our data analysis correctly. We are going to find out the number of genes that are significantly changed at various p value cut-offs and also compare our data analysis with the published results of Schade et al. (2004).

  • Open your spreadsheet and go to the "final" worksheet.
  • Click on cell A1 and select the menu item Data > Filter > Autofilter. Little drop-down arrows should appear at the top of each column. This will enable us to filter the data according to criteria we set.
  • Click on the drop-down arrow on one of your "Pval" columns. Select "Custom". In the window that appears, set a criterion that will filter your data so that the P value has to be less than 0.05.
    • How many genes have p value < 0.05?
    • What about p < 0.01?
    • What about p < 0.001?
    • What about p < 0.0001?
      • Answer these questions for each timepoint in your dataset.
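The four counts above amount to filtering one timepoint's p value column at successive cut-offs, which can be sketched as follows (the p values shown are made up):

```python
def count_below(p_values, cutoffs=(0.05, 0.01, 0.001, 0.0001)):
    """For one timepoint's column of p values, count the genes that fall
    under each p value cut-off."""
    return {c: sum(1 for p in p_values if p < c) for c in cutoffs}

pvals = [0.2, 0.04, 0.008, 0.0004, 0.6]  # made-up p values for one timepoint
print(count_below(pvals))  # {0.05: 3, 0.01: 2, 0.001: 1, 0.0001: 0}
```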
  • When we use a p value cut-off of p < 0.05, what we are saying is that you would have seen a gene expression change that deviates this far from zero less than 5% of the time.
  • We have just performed 5221 T tests for significance. Another way to state what we are seeing with p < 0.05 is that we would expect to see this magnitude of a gene expression change in about 5% of our T tests, or about 261 times, just by chance. Since we have more than 261 genes that pass this cut-off, we know that some genes are significantly changed. However, we don't know which ones.
    • There is a simple correction that can be made to the p values to increase the stringency called the Bonferroni correction. To perform this correction, multiply the p value by the number of statistical tests performed (in our case 6189) and see whether any of the p values are still less than 0.05.
      • Perform this correction and determine whether and how many of the genes are still significantly changed at p < 0.05 after the Bonferroni correction.
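The correction can be sketched as below: multiply each p value by the number of tests in the column and cap the result at 1. The three p values are made up for illustration; in the actual dataset the multiplier would be the full gene count.

```python
def bonferroni(p_values):
    """Bonferroni correction: multiply each p value by the number of tests
    performed (one test per gene in the column), capping the result at 1."""
    n = len(p_values)
    return [min(1.0, p * n) for p in p_values]

raw = [0.000004, 0.02, 0.5]  # made-up p values
corrected = bonferroni(raw)  # approximately [0.000012, 0.06, 1.0]
still_significant = sum(1 for p in corrected if p < 0.05)
```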
  • The "AvgLogFC" tells us the magnitude of the gene expression change and in which direction. Positive values are increases relative to the control; negative values are decreases relative to the control. For the timepoint that had the greatest number of genes significantly changed at p < 0.05, answer the following:
    • Keeping the "Pval" filter at p < 0.05, filter the "AvgLogFC" column to show all genes with an average log fold change greater than zero. How many meet these two criteria?
    • Keeping the "Pval" filter at p < 0.05, filter the "AvgLogFC" column to show all genes with an average log fold change less than zero. How many meet these two criteria?
    • Keeping the "Pval" filter at p < 0.05, How many have an average log fold change of > 0.25 and p < 0.05?
    • How many have an average log fold change of < -0.25 and p < 0.05? (These are more realistic values for the fold change cut-offs because it represents about a 20% fold change which is about the level of detection of this technology.)
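The parenthetical claim can be checked directly: converting a log2 fold change back to a plain ratio shows that 0.25 corresponds to roughly a 19% increase, and -0.25 to roughly a 16% decrease.

```python
# Convert a log2 fold change back to a plain red/green ratio
# to see the percent change it represents.
ratio_up = 2 ** 0.25     # about 1.19, i.e. roughly a 19% increase
ratio_down = 2 ** -0.25  # about 0.84, i.e. roughly a 16% decrease
print(round(ratio_up, 3), round(ratio_down, 3))  # 1.189 0.841
```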
  • In summary, the p value cut-off should not be thought of as some magical number at which data becomes "significant". Instead, it is a moveable confidence level. If we want to be very confident of our data, use a small p value cut-off. If we are OK with being less confident about a gene expression change and want to include more genes in our analysis, we can use a larger p value cut-off.
  • What criteria did Schade et al. (2004) use to determine a significant gene expression change? How does it compare to our method?
  • The expression of the gene NSR1 (ID: YGR159C) is known to be induced by cold shock. (Recall that it is specifically mentioned in the Schade et al. (2004) paper.) Find NSR1 in your dataset. Is its expression significantly changed at any timepoint? Record the average fold change and p value for NSR1 for each timepoint in your dataset.
  • Which gene has the smallest p value in your dataset (at any timepoint)? You can find this by sorting your data based on p value (but be careful that you don't cause a mismatch in the rows of your data!). Look up the function of this gene at the Saccharomyces Genome Database and record it in your notebook. Why do you think the cell is changing this gene's expression upon cold shock?
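To avoid the row-mismatch danger mentioned above, a programmatic version can keep each gene ID paired with its p value before sorting. A minimal sketch, with made-up IDs and p values:

```python
def smallest_p_gene(ids, p_values):
    """Pair each gene ID with its p value before sorting, so rows cannot
    get mismatched the way a partial-column sort in Excel can."""
    paired = sorted(zip(ids, p_values), key=lambda pair: pair[1])
    return paired[0]

ids = ["YGR159C", "YNL316C", "YAL001C"]  # made-up example rows
ps = [0.004, 0.0002, 0.3]
print(smallest_p_gene(ids, ps))  # ('YNL316C', 0.0002)
```

Keeping IDs and p values in one structure is the programmatic equivalent of selecting whole rows before sorting in Excel.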