BIOL398-04/S15:Week 11

BIOL398-04: Biomathematical Modeling

MATH 388-01: Survey of Biomathematics

Loyola Marymount University

This journal entry is due on Tuesday, April 7 at midnight PDT (Monday night/Tuesday morning). NOTE that the server records the time as Eastern Daylight Time (EDT). Therefore, midnight will register as 03:00.

Individual Journal Assignment

  • Store this journal entry as "username Week 11" (i.e., this is the text to place between the square brackets when you link to this page).
  • Create the following set of links. (HINT: These links should all be in your personal template that you created for the Week 1 Assignment; you should then simply invoke your template on each new journal entry.)
    • Link to your journal entry from your user page.
    • Link back from your journal entry to your user page.
    • Link to this assignment from your journal entry.
    • Don't forget to add the "BIOL398-04/S15" category to the end of your wiki page.

Microarray Data Analysis

For your assignment this week, you will keep an electronic laboratory notebook on your individual wiki page that records all the manipulations you perform on the data and the answers to the questions throughout the protocol. We will be working on the protocols in class on Thursday, March 19 and Thursday, March 26. Whatever you do not finish in class will be homework to be completed by the Week 11 journal deadline.

Background

This is a list of steps required to analyze DNA microarray data.

  1. Quantitate the fluorescence signal in each spot
  2. Calculate the ratio of red/green fluorescence
  3. Log transform the ratios
  4. Normalize the ratios on each microarray slide
    • Steps 1-4 have been performed for you by the GenePix Pro software (which runs the microarray scanner).
  5. Normalize the ratios for a set of slides in an experiment
  6. Perform statistical analysis on the ratios
  7. Compare individual genes with known data
    • Steps 6-7 are performed in Microsoft Excel
  8. Apply pattern-finding algorithms (clustering)
  9. Map onto biological pathways
    • We will use software called STEM for the clustering and mapping
  10. Create a mathematical model of the transcriptional network
    • The modeling will be performed in MATLAB

For the modeling project, each pair of students will analyze a Dahlquist lab microarray dataset comparing the wild type strain to a different strain of yeast. For the statistical analysis, one member of the pair will analyze the wild type data and one member of the pair will analyze the alternate strain:

  • Wild type vs. Δcin5: Will and Jeffrey
  • Wild type vs. Δgln3: Tessa and Alyssa
  • Wild type vs. Δhmo1: Lucia and Lauren
  • Wild type vs. Δzap1: Kara and Kristen
  • Wild type S. cerevisiae vs. Wild type S. paradoxus: Natalie and Karina

You will download your assigned Excel spreadsheet from LionShare. You were e-mailed a link to do this before class. Because the Dahlquist Lab data is unpublished, please do not post it on this public wiki. Instead, post the file(s) back to LionShare, which is protected by a password.

Experimental Design

In the Excel spreadsheet, there is a worksheet labeled "data". In this worksheet, each row contains the data for one gene (one spot on the microarray). The first column (labeled "ID") contains the gene identifier from the Saccharomyces Genome Database. The second column contains the Standard Name for each of the genes. Each subsequent column contains the log2 ratio of the red/green fluorescence from each microarray hybridized in the experiment (steps 1-5 above having already been performed for you).

Each of the column headings from the data begins with the experiment name ("wt" for the wild type S. cerevisiae data, "dCIN5" for the Δcin5 data, etc., and "Spar" for the S. paradoxus data). "LogFC" stands for "Log2 Fold Change," which is the log2 red/green ratio. The timepoints are designated as "t" followed by a number in minutes. Replicates are numbered as "-0", "-1", "-2", etc. after the timepoint.

The timepoints are t15, t30, and t60 (cold shock at 13°C), and t90 and t120 (cold shock at 13°C followed by 30 or 60 minutes of recovery at 30°C, respectively).
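As a quick orientation to these numbers, here is a minimal sketch in Python (with made-up spot intensities, not values from the actual dataset) of how a single log2 fold change relates to the underlying red/green fluorescence ratio:

  import math

  # Hypothetical spot intensities for one gene (illustration only, not real data)
  red = 400.0    # experimental channel intensity (cold shock sample)
  green = 100.0  # reference channel intensity

  log_fc = math.log2(red / green)
  print(log_fc)  # 2.0, i.e. a fourfold induction; -2.0 would indicate fourfold repression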

  • Begin by recording in your wiki the strain comparison and individual dataset that you will analyze, the filename, and the number of replicates for each strain and each time point in your data.
    • NOTE: before beginning any analysis, immediately change the filename so that it contains your initials to distinguish it from other students' work.

Statistical Analysis Part 1: ANOVA

  1. Create a new worksheet, naming it stats
  2. Copy the first two columns of the data worksheet (containing ID and Standard Name) into the stats sheet.
  3. In the first row, columns c through g, create column labels of the form (STRAIN)_xbar_(TIME) where (STRAIN) is wt, dGLN3, etc., and (TIME) is 15, 30, etc.
  4. In the first row, columns h and i, create the column labels (STRAIN)_xbar_grand and (STRAIN)_ss_HO.
  5. In the first row, columns j through n, create the column labels (STRAIN)_ss_(TIME) as in (3).
  6. In the first row, columns o, p, and q, create the column labels (STRAIN)_SS_full, Fstat and p-value.
  7. Now we're ready to compute. In cell c2, type =AVERAGE(
  8. Then click on the tab containing the data, highlight all the data in row 2 associated with (STRAIN) and t15, press the closing paren key (shift 0), and press the "enter" key.
  9. Click on the tab for the stats sheet. Cell c2 now contains the average of the log fold change data from the first gene at t=15 minutes.
  10. Click on cell c2 and position your cursor at the bottom right corner. You should see your cursor change to a thin black plus sign (not a chubby white one). When it does, double click, and the formula will magically be copied to the entire column of 6188 other genes.
  11. Move to cell d2, and repeat (7) through (10) with the t30 data, in e2 with the t60 data, f2 with the t90 data, and g2 with the t120 data.
  12. Move to cell h2, and repeat (7) through (10) highlighting all the data for (STRAIN) in row 2 instead of the individual time points.
  13. Now, we move to cell i2. Type =SUMSQ(
  14. Click on the data sheet's tab again, highlight all the data in row 2 for your (STRAIN), press the closing paren key (shift 0), and press the "enter" key.
    • The data highlighted here will be the same as in (12).
  15. Make a note of how many data points you have at each time point. In most cases this number will be 4, but for some strains and times it may be 5. Count carefully. Also, make a note of the total number of data points. For most strains this number will be 20, but for wt it may be 23.
  16. In cell j2, type =SUMSQ(data!C2:F2)-4*stats!C2^2 and hit enter.
    • The range "data!C2:F2" should be the data associated with t15. The number "4" is the number of data points (note that cells C2, D2, E2, and F2 of the data sheet contain 4 data points). The reference "stats!C2" gets the average you computed in Steps (7)-(9) for t15, and the "^2" squares that value. Upon completion of this single computation, use the Step (10) trick to copy the formula throughout the column.
  17. In cells k2 through n2, repeat (16) for the t30 through t120 data points. Again, be sure to get the data for each time point, type the right number of data points, and get the average from the appropriate cell (d2,e2,f2,g2) for each time point, and copy the formula to the whole column for each computation.
  18. Once you've populated cells j2 through n2, click on o2 and type =sum(j2:n2) and hit enter. Copy to the whole column.
  19. Recall the total number of data points from (15); call that total n.
  20. In cell p2, type =((n-5)/5)*(i2-o2)/o2 and hit enter. Don't actually type the letter n; instead, substitute the total number of data points from (19). Copy to the whole column.
  21. In cell q2, type =FDIST(P2,5,n-5), again replacing n with the total number of data points. Copy to the whole column.
  22. Now we will perform adjustments to the p value to correct for the multiple testing problem. Label column r "STRAIN_Bonferroni_p-value".
  23. In cell r2, type the equation =q2*6189 and hit enter. Upon completion of this single computation, use the Step (10) trick to copy the formula throughout the column.
  24. Replace any corrected p value that is greater than 1 with the number 1 by typing the following formula into cell s2: =IF(r2>1,1,r2). Copy this formula throughout the column as well. (A short scripted sketch of this entire per-gene calculation appears after this list.)
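The following is not part of the protocol, just a minimal Python sketch (using NumPy and SciPy, with made-up log fold change values for a single gene with 4 replicates at each of the 5 timepoints) of what the spreadsheet formulas above compute: the null-hypothesis sum of squares, the full-model sum of squares, the F statistic, the FDIST p value, and the Bonferroni correction.

  import numpy as np
  from scipy import stats

  # Hypothetical log2 fold change values for ONE gene (4 replicates x 5 timepoints);
  # these numbers are made up for illustration, not taken from the real dataset.
  data = {
      "t15":  np.array([1.2, 0.9, 1.4, 1.1]),
      "t30":  np.array([0.8, 0.7, 1.0, 0.6]),
      "t60":  np.array([0.3, 0.1, 0.4, 0.2]),
      "t90":  np.array([-0.1, 0.0, 0.1, -0.2]),
      "t120": np.array([0.0, 0.1, -0.1, 0.0]),
  }

  all_values = np.concatenate(list(data.values()))
  n = all_values.size   # total number of data points for this strain (20 here; see step 15)
  k = len(data)         # number of timepoints (5)

  ss_h0 = np.sum(all_values ** 2)    # (STRAIN)_ss_HO: =SUMSQ(all data); the null model says the mean is 0 at every timepoint
  ss_full = sum(np.sum(v ** 2) - v.size * v.mean() ** 2   # (STRAIN)_ss_(TIME): =SUMSQ(...) - (#points)*xbar^2
                for v in data.values())                   # summed over timepoints, this is (STRAIN)_SS_full

  f_stat = ((n - k) / k) * (ss_h0 - ss_full) / ss_full    # step 20: =((n-5)/5)*(i2-o2)/o2
  p_value = stats.f.sf(f_stat, k, n - k)                  # step 21: same right-tail probability as Excel's =FDIST(Fstat, 5, n-5)

  bonferroni_p = min(p_value * 6189, 1.0)                 # steps 22-24: multiply by the number of genes, cap at 1
  print(f_stat, p_value, bonferroni_p)
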
Calculate the Benjamini & Hochberg p value Correction
  1. Insert a new worksheet named "B&H".
  2. Create an index column by typing "Index" into cell A1. Then type "1" into cell A2 and "2" into cell A3. Select both cells A2 and A3. Double-click on the plus sign on the lower right-hand corner of your selection to fill the column with a series of numbers from 1 to 6189. We will use this to put the genes back in order at the end of these calculations.
  3. Copy and paste the column of ID's from one of the previous worksheets into column B.
  4. Copy Column Q (the unadjusted p values) from the stats worksheet and paste it into Column C using Paste special > Paste values.
  5. Select all of columns A, B, and C and sort by ascending values on Column C: click the A-to-Z sort button on the toolbar, and in the window that appears, sort by Column C, smallest to largest.
  6. Type the header "Rank" in cell D1. Repeat what you did in step 2 to create a series of numbers in ascending order from 1 to 6189. This is the p value rank, smallest to largest.
  7. Now you can calculate the Benjamini and Hochberg p value correction. Type "STRAIN_B-H_p-value" in cell E1. Type the following formula in cell E2: =(C2*6189)/D2 and press enter. Copy that equation to the entire column using the trick you learned last week.
  8. Type "STRAIN_B-H_p-value" into cell F1.
  9. Type the following formula into cell F2: =IF(E2>1,1,E2) and press enter. Copy that equation to the entire column using the trick you learned last week.
  10. Select columns A through F. Now sort them by your Index in Column A in ascending order.
  11. Copy column F and use Paste special > Paste values to paste it into column T of your stats sheet. (A scripted sketch of this correction follows.)
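As a cross-check rather than part of the protocol, here is a minimal Python sketch of the same Benjamini & Hochberg calculation the "B&H" worksheet performs, using a small made-up vector of unadjusted p values (the spreadsheet works with all 6189 genes; five values are used here so the arithmetic is easy to follow):

  import numpy as np

  p = np.array([0.0004, 0.03, 0.20, 0.008, 0.65])  # hypothetical unadjusted p values (column C)
  n_genes = p.size                                 # the real spreadsheet uses 6189 here

  order = np.argsort(p)                      # sort ascending (step 5)
  rank = np.empty_like(order)
  rank[order] = np.arange(1, n_genes + 1)    # rank of each p value, smallest = 1 (step 6)

  bh = np.minimum(p * n_genes / rank, 1.0)   # =(C2*6189)/D2, capped at 1 (steps 7-9)
  print(bh)                                  # already in the original gene order, so no re-sorting is needed (step 10)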


Sanity Check: Number of genes significantly changed

Before we move on to clustering and the biological analysis of the data, we want to perform a sanity check to make sure that we performed our data analysis correctly. We are going to find out the number of genes that are significantly changed at various p value cut-offs.

  • Go to the "stats" worksheet.
  • Select row 1 (the row with your column headers) and select the menu item Data > Filter > Autofilter (the funnel icon on the Data tab). Little drop-down arrows should appear at the top of each column. This will enable us to filter the data according to criteria we set.
  • Click on the drop-down arrow on Column Q. Select "Custom". In the window that appears, set a criterion that will filter your data so that the p value has to be less than 0.05.
    • How many genes have p < 0.05? and what is the percentage (out of 6189)?
    • How many genes have p < 0.01? and what is the percentage (out of 6189)?
    • How many genes have p < 0.001? and what is the percentage (out of 6189)?
    • How many genes have p < 0.0001? and what is the percentage (out of 6189)?
  • When we use a p value cut-off of p < 0.05, what we are saying is that we would have seen a gene expression change that deviates this far from zero by chance less than 5% of the time.
  • We have just performed 6189 hypothesis tests. Another way to state what we are seeing with p < 0.05 is that we would expect to see a gene expression change for at least one of the timepoints by chance in about 5% of our tests, or about 309 times. Since we have more than 309 genes that pass this cut-off, we know that some genes are significantly changed; however, we don't know which ones. To apply a more stringent criterion to our p values, we performed the Bonferroni and the Benjamini and Hochberg corrections to these unadjusted p values. The Bonferroni correction is very stringent; the Benjamini-Hochberg correction is less stringent. To see this relationship, filter your data to determine the following (a scripted sketch of the same counts appears after this list):
    • How many genes are p < 0.05 for the Bonferroni-corrected p value? and what is the percentage (out of 6189)?
    • How many genes are p < 0.05 for the Benjamini and Hochberg-corrected p value? and what is the percentage (out of 6189)?
  • In summary, the p value cut-off should not be thought of as some magical number at which data becomes "significant". Instead, it is a moveable confidence level. If we want to be very confident of our data, we use a small p value cut-off. If we are OK with being less confident about a gene expression change and want to include more genes in our analysis, we can use a larger p value cut-off.
  • Comparing results with known data: the expression of the gene NSR1 (ID: YGR159C) is known to be induced by cold shock. Find NSR1 in your dataset. What are its unadjusted, Bonferroni-corrected, and B-H-corrected p values? What is its average Log fold change at each of the timepoints in the experiment? Note that the average Log fold change is what we called "(STRAIN)_xbar_(TIME)" in step 3 of the ANOVA analysis. An "x" with a bar on top is the shorthand for "mean" or "average".
  • You and your partner should compare the numbers you got between the wild type strain and the other strain you have been assigned. You will be reporting this information in both your final paper and final presentation in the course, organized as a table. Use this sample PowerPoint slide to record your data. Create a title for the slide that gives the "message" of the slide. Upload the slide to your individual journal page for this week (you and your partner should have an identical slide with the same filename). This is the first slide of your final presentation in the course.
  • Upload your updated spreadsheet to LionShare (using the same name as before; check the box to "overwrite file"). The e-mail link you provided to us earlier will allow us to download your updated spreadsheet.
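If you would like to double-check your Autofilter counts with a script, the sketch below (in Python with pandas) produces the same tallies. The filename and column headers are assumptions made for illustration; substitute whatever you exported from your own stats worksheet.

  import pandas as pd

  # Hypothetical CSV export of the "stats" worksheet; adjust the filename and
  # column names to match your own spreadsheet.
  stats_df = pd.read_csv("wt_stats.csv")

  for cutoff in (0.05, 0.01, 0.001, 0.0001):
      count = (stats_df["wt_p-value"] < cutoff).sum()
      print(f"p < {cutoff}: {count} genes ({100 * count / 6189:.2f}%)")

  for col in ("wt_Bonferroni_p-value", "wt_B-H_p-value"):
      count = (stats_df[col] < 0.05).sum()
      print(f"{col} < 0.05: {count} genes ({100 * count / 6189:.2f}%)")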

Clustering and Gene Ontology Analysis with STEM

  1. Begin by downloading and extracting the STEM software. Click here to go to the STEM web site.
    • Click on the download link, register, and download the stem.zip file to your Desktop.
    • Unzip the file. In Seaver 120, you can right-click on the file icon and select the menu item 7-zip > Extract Here.
    • This will create a folder called stem. Inside the folder, double-click on the stem.cmd to launch the STEM program.
      • In Seaver 120, we previously encountered an issue where the program would not launch on the Windows XP machines due to a lack of memory. Even though the computers have since been upgraded to Windows 7, still launch STEM from the command line to get around this problem.
        • Go to the start menu and click on Programs > Accessories > Command Prompt.
        • You will need to navigate to the directory (folder) in which the STEM program resides. If you followed the instructions above and extracted the stem folder to the Desktop, type the following: cd Desktop\stem and press "Enter".
        • Then, to launch the program, type: java -mx512M -jar stem.jar -d defaults.txt and press "Enter". This will launch the program with less memory allocated to it.
  2. Prepare your microarray data file for loading into STEM. (A scripted sketch of this step appears at the end of this section.)
    • Insert a new worksheet into your Excel workbook, and name it "stem".
    • Copy the "Index" column from your "B&H" worksheet and paste it into column A of your "stem" worksheet. Select all of the data from your "stats" worksheet and Paste special > paste values into your "stem" worksheet, starting with column B.
      • Your leftmost column should have the column header "Index". Rename this column to "SPOT". Column B should be named "ID". Rename this column to "Gene Symbol".
      • Filter the data on the B-H corrected p value to be > 0.05 (that's greater than in this case).
        • Once the data has been filtered, select all of the rows (except for your header row) and delete the rows by right-clicking and choosing "Delete Row" from the context menu. Undo the filter. This ensures that we will cluster only the genes with a "significant" change in expression and not the noise.
      • Delete all of the data columns EXCEPT for the Average Log Fold change columns for each timepoint (for example, wt_xbar_t15, etc.).
      • Rename the data columns with just the time and units (for example, 15m, 30m, etc.).
      • Save your work. Then use Save As to save this spreadsheet as Text (Tab-delimited) (*.txt). Click OK to the warnings and close your file.
        • Note that it would be a good idea to turn on the file extensions by following the procedure on the class Help page.
  3. Running STEM
    1. In section 1 (Expression Data Info) of the main STEM interface window, click on the Browse... button to navigate to and select your file.
      • Click on the radio button No normalization/add 0.
      • Check the box next to Spot IDs included in the data file.
    2. In section 2 (Gene Info) of the main STEM interface window, select Saccharomyces cerevisiae (SGD) from the drop-down menu for Gene Annotation Source. Select No cross references from the Cross Reference Source drop-down menu. Select No Gene Locations from the Gene Location Source drop-down menu.
    3. In section 3 (Options) of the main STEM interface window, make sure that the Clustering Method says "STEM Clustering Method" and do not change the defaults for Maximum Number of Model Profiles or Maximum Unit Change in Model Profiles between Time Points.
    4. In section 4 (Execute) click on the yellow Execute button to run STEM.
  4. Viewing and Saving STEM Results
    1. A new window will open called "All STEM Profiles (1)". Each box corresponds to a model expression profile. Colored profiles have a statistically significant number of genes assigned; they are arranged in order from most to least significant p value. Profiles with the same color belong to the same cluster of profiles. The number in each box is simply an ID number for the profile.
      • Click on the button that says "Interface Options...". At the bottom of the Interface Options window that appears below where it says "X-axis scale should be:", click on the radio button that says "Based on real time". Then close the Interface Options window.
      • Take a screenshot of this window (on a PC, simultaneously press the Alt and PrintScreen buttons to save the view in the active window to the clipboard) and paste it into a PowerPoint presentation to save your figures.
    2. Click on each of the SIGNIFICANT profiles to open a window showing a more detailed plot containing all of the genes in that profile.
      • Take a screenshot of each of the individual profile windows and save the images in your PowerPoint presentation.
      • At the bottom of each profile window, there are two yellow buttons "Profile Gene Table" and "Profile GO Table". For each of the profiles, click on the "Profile Gene Table" button to see the list of genes belonging to the profile. In the window that appears, click on the "Save Table" button and save the file to your desktop. Make your filename descriptive of the contents, e.g. "wt_profile#_genelist.txt", where you replace the number symbol with the actual profile number.
      • For each of the significant profiles, click on the "Profile GO Table" to see the list of Gene Ontology terms belonging to the profile. In the window that appears, click on the "Save Table" button and save the file to your desktop. Make your filename descriptive of the contents, e.g. "wt_profile#_GOlist.txt", where you use "wt", "dGLN3", etc. to indicate the dataset and where you replace the number symbol with the actual profile number. At this point you have saved all of the primary data from the STEM software and it's time to interpret the results!
  5. Analyzing and Interpreting STEM Results
    1. Select one of the profiles you saved in the previous step for further interpretation of the data. We suggest that you choose one that has a pattern of up- or down-regulated genes at the early (first three) timepoints. Answer the following:
      • Why did you select this profile? In other words, why was it interesting to you?
      • How many genes belong to this profile?
      • How many genes were expected to belong to this profile?
      • What is the p value for the enrichment of genes in this profile? Bear in mind that we just finished computing p values to determine whether each individual gene had a significant change in gene expression at each time point. This p value determines whether the number of genes that show this particular expression profile across the time points is significantly more than expected.
      • Open the GO list file you saved for this profile in Excel. This list shows all of the Gene Ontology terms that are associated with genes that fit this profile. Select the third row and then choose from the menu Data > Filter > Autofilter. Filter on the "p-value" column to show only GO terms that have a p value of < 0.05. How many GO terms are associated with this profile at p < 0.05? The GO list also has a column called "Corrected p-value". This correction is needed because the software has performed thousands of significance tests. Filter on the "Corrected p-value" column to show only GO terms that have a corrected p value of < 0.05. How many GO terms are associated with this profile with a corrected p value < 0.05?
      • Select 10 Gene Ontology terms from your filtered list (either p < 0.05 or corrected p < 0.05). Look up the definitions for each of the terms at http://geneontology.org. Write a paragraph that describes the biological interpretation of these GO terms. In other words, why does the cell react to cold shock by changing the expression of genes associated with these GO terms?
        • To easily look up the definitions, go to http://geneontology.org.
        • Copy and paste the GO ID (e.g. GO:0044848) into the search field at the upper left of the page called "Search GO Data".
        • In the results page, click on the button that says "Link to detailed information about <term>" (in this case, "biological phase").
        • The definition will be on the next results page.
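As noted in step 2 above, here is a minimal pandas sketch of the same file preparation done by hand in Excel: filtering out genes that fail the B-H cut-off, keeping only the average log fold change columns, renaming them, and saving a tab-delimited file for STEM. The filename and column headers are assumptions based on the labels used earlier in this protocol, so substitute your own.

  import pandas as pd

  # Hypothetical filename; use your own spreadsheet (the one with your initials) instead.
  df = pd.read_excel("wt_data_KD.xlsx", sheet_name="stats")

  df.insert(0, "SPOT", range(1, len(df) + 1))    # plays the role of the "Index" column
  sig = df[df["wt_B-H_p-value"] <= 0.05]         # keep only genes that pass the B-H cut-off

  stem = sig[["SPOT", "ID", "wt_xbar_t15", "wt_xbar_t30", "wt_xbar_t60",
              "wt_xbar_t90", "wt_xbar_t120"]].rename(columns={
      "ID": "Gene Symbol",
      "wt_xbar_t15": "15m", "wt_xbar_t30": "30m", "wt_xbar_t60": "60m",
      "wt_xbar_t90": "90m", "wt_xbar_t120": "120m"})

  stem.to_csv("wt_stem.txt", sep="\t", index=False)   # tab-delimited input file for STEM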

Summary of what you need to turn in for the individual Week 11 assignment

  1. Your individual journal page should have an electronic lab notebook recording your work for the last two weeks. This includes detailed methods, your results, conclusions, and the answers to any questions posed in the protocol above. Don't forget your paragraph giving a biological interpretation of your STEM results.
  2. Upload your updated Excel spreadsheet to LionShare that has today's calculations in it. Use the same filename as before so that the download link that you already provided to Drs. Dahlquist and Fitzpatrick will still work.
  3. Create, upload to OpenWetWare, and link to a PowerPoint presentation that contains the p value table and the screenshots of your STEM results. Each slide in the presentation should have a meaningful title that describes the main message of the slide. These slides will form the basis of your final presentation in the class.
  4. Zip together all of the tab-delimited text files that you created for and from STEM and upload them to LionShare:
    • the file that was saved from your original spreadsheet that you used to run STEM
    • each of the genelist and GOlist files for each of your significant profiles.
  5. The shared journal assignment below.

Shared Journal Assignment

  • Store your shared journal entry in the shared Class Journal Week 11 page. If this page does not exist yet, go ahead and create it (congratulations on getting in first :) )
  • Link to your journal entry from your user page.
  • Link back from the journal entry to your user page.
  • Sign your portion of the journal with the standard wiki signature shortcut (~~~~).
  • Add the "BIOL398-04/S15" category to the end of the wiki page (if someone has not already done so).

View

Now that you've done your own microarray data analysis, we will revisit the case "Deception at Duke".

Reflection

  • What were the main issues with the data and analysis identified by Baggerly and Coombes? What best practices enumerated by DataONE were violated? Which of these did Dr. Baggerly claim were common issues?
  • What recommendations does Dr. Baggerly make for reproducible research? How do these correspond to what DataONE recommends?
  • Do you have any further reaction to this case after viewing Dr. Baggerly's talk?
  • Go back to the methods section of the paper you presented for journal club. Do you think there is sufficient information there to reproduce their data analysis? Why or why not?