BIOL368/F14:Isabel Gonzaga Week 13
Continue Microarray Data Analysis
- Based on what I submitted last week for the Week 12 Assignment, I received the following customized protocol to follow for analyzing my microarray data.
- Master Data Excel Spreadsheet
- Master Text Data
- MASTER_DATA_IsabelGonzaga_downloaded_20141202_editedKD.xlsx with my calculations for your comparison. — Kam D. Dahlquist 19:40, 2 December 2014 (EST)
Scaling and Centering the Data
To scale and center the data (between chip normalization) perform the following operations:
- Insert a new Worksheet into your Excel file, and name it "scaled_centered".
- Go back to your previous worksheet with the log fold changes, Select All and Copy. Go to your new "scaled_centered" worksheet, click on the upper, left-hand cell (cell A1) and Paste.
- Insert two rows in between the top row of headers and the first data row.
- In cell A2, type "Average" and in cell A3, type "StdDev".
- You will now compute the Average log ratio for each chip (each column of data). In cell C2, type the following equation:
=AVERAGE(C4:C4036)
- and press "Enter". Excel is computing the average value of the cells specified in the range given inside the parentheses. Instead of typing the cell designations, you can click on the beginning cell, scroll down to the bottom of the worksheet, and shift-click on the ending cell.
- You will now compute the Standard Deviation of the log ratios on each chip (each column of data). In cell C3, type the following equation:
=STDEV(C4:C4036)
- and press "Enter".
- Excel will now do some work for you. Copy these two equations (cells C2 and C3) and paste them into the empty cells in the rest of the columns. Excel will automatically change the equation to match the cell designations for those columns.
- You have now computed the average and standard deviation of the log ratios for each chip. Now we will actually do the scaling and centering based on these values.
- Copy the column headings for all of your data columns and then paste them to the right of the last data column so that you have a second set of headers above blank columns of cells. Edit the names of the columns so that they now read: <previous name>_scaled_centered, etc.
- In cell S4, type the following equation:
=(C4-C$2)/C$3
- In this case, we want the data in cell C4 to have the average subtracted from it (cell C2) and be divided by the standard deviation (cell C3). We use the dollar sign symbols in front of the "2" and "3" to tell Excel to always reference that row in the equation, even though we will paste it for the entire column. Why is this important?
- Copy and paste this equation into the entire column.
- Copy and paste the scaling and centering equation for each of the columns of data with the "_scaled_centered" column header. Be sure that your equation is correct for the column you are calculating. (A script-based equivalent of this scaling and centering step is sketched below.)
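For anyone who would rather script this step, here is a minimal sketch of the same within-chip scaling and centering (a per-column z-score) in Python with pandas. The file name, sheet name, and the assumption that the first two columns are "ID" and "Index" are illustrative only and would need to match the actual workbook.

```python
import pandas as pd

# Illustrative file and sheet names -- substitute the actual workbook and the
# worksheet that holds the log fold changes.
log_fc = pd.read_excel("MASTER_DATA.xlsx", sheet_name="log_fold_changes")

# Assume the first two columns are identifiers ("ID", "Index") and every
# remaining column holds one chip's log fold changes.
data_cols = log_fc.columns[2:]

# Scale and center each chip: subtract that column's mean and divide by its
# sample standard deviation -- the same operation as =(C4-C$2)/C$3 in Excel.
z = (log_fc[data_cols] - log_fc[data_cols].mean()) / log_fc[data_cols].std(ddof=1)
z.columns = [str(c) + "_scaled_centered" for c in data_cols]

scaled = pd.concat([log_fc, z], axis=1)
```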
Perform statistical analysis on the ratios
We are going to perform this step on the scaled and centered data you produced in the previous step.
- Insert a new worksheet and name it "statistics".
- Go back to the "scaling_centering" worksheet and copy the first two columns ("ID" and "Index").
- Paste the data into the first two columns of your new "statistics" worksheet.
- Go back to the "scaling_centering" worksheet and copy the columns that are designated "_scaled_centered".
- Go to your new worksheet and click on the C1 cell. Select "Paste Special" from the Edit menu. A window will open: click on the radio button for "Values" and click OK. This will paste the numerical result into your new worksheet instead of the equation which must make calculations on the fly.
- Delete Rows 2 and 3 where it says "Average" and "StdDev" so that your data rows with gene IDs are immediately below the header row 1.
- Go to a new column on the right of your worksheet. Type the headers "Avg_LogFC_NO" and "Avg_LogFC_HYP" into the top cells of the next two columns.
- Compute the average log fold change for the biological replicates for each treatment group (NO and HYP). For example, type the equation for the NO group:
=AVERAGE(C2:I2)
- into cell S2. Copy this equation and paste it into the rest of the column.
- Create the corresponding equation for the HYP treatment group and paste it down its column.
- Label the two columns to the right of the Avg_LogFC columns you just computed "Tstat_NO" and "Tstat_HYP". These will hold a T statistic that tells us whether the scaled and centered average log ratio is significantly different from 0 (no change). For the NO group, enter the equation:
=AVERAGE(C2:I2)/(STDEV(C2:I2)/SQRT(number of replicates))
- (NOTE: in this case the number of replicates is 7. Be careful that you are using the correct number of parentheses.) Copy the equation and paste it into all rows in that column.
- Label the top cell in the next column to the right "Pvalue_NO". In the cell below the label, enter the equation:
=TDIST(ABS(V2),degrees of freedom,2)
- where the first argument references the cell containing the Tstat_NO value for that row. The number of degrees of freedom is the number of replicates minus one, so in this case there are 6 degrees of freedom. Copy the equation and paste it into all rows in that column. (Script-based equivalents of these statistics calculations and of the GenMAPP export are sketched after this list.)
- Repeat the Tstat and P value calculations for the HYP group.
- Insert a new worksheet and name it "forGenMAPP".
- Go back to the "statistics" worksheet and Select All and Copy.
- Go to your new sheet and click on cell A1 and select Paste Special, click on the Values radio button, and click OK. We will now format this worksheet for import into GenMAPP.
- Select all of the columns containing fold changes. Select the menu item Format > Cells. Under the number tab, select 2 decimal places. Click OK.
- Select all the Tstat and Pvalue columns. Select the menu item Format > Cells. Under the number tab, select 4 decimal places. Click OK.
- Insert a column to the right of the "ID" column. Type the header "SystemCode" into the top cell of this column. Fill the entire column (each cell) with the letter "N".
- Save your work. Select the menu item File > Save As, and choose "Text (Tab-delimited) (*.txt)" from the file type drop-down menu. Excel will make you click through a couple of warnings because it doesn't like you going all independent and choosing a different file type than the native .xls. This is OK. Your new *.txt file is now ready for import into GenMAPP. But before we do that, we want to know a few things about our data as shown in the next section.
- Upload both the .xlsx and .txt files that you have just created to your journal page on the class wiki. Make sure that your file names are distinct from your classmates' so that nobody overwrites anyone else's file.
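The statistics step above can also be sketched in Python. This is not the protocol itself, just an equivalent calculation under the assumption that `scaled` is the table from the previous sketch and that the scaled/centered replicate columns can be identified by the "NO" and "HYP" substrings in their headers; scipy's t distribution gives the same two-tailed p value as Excel's TDIST.

```python
import numpy as np
from scipy import stats

# Assumes `scaled` is the table from the scaling/centering sketch above and that
# the replicate headers contain "NO" or "HYP" -- an assumption about the layout.
no_cols = [c for c in scaled.columns if c.endswith("_scaled_centered") and "NO" in c]
hyp_cols = [c for c in scaled.columns if c.endswith("_scaled_centered") and "HYP" in c]

def group_stats(df, cols):
    """Average log fold change, one-sample t statistic against 0, two-tailed p value."""
    n = len(cols)                      # number of replicates (7 for the NO group here)
    avg = df[cols].mean(axis=1)
    sd = df[cols].std(axis=1, ddof=1)  # sample standard deviation, like Excel's STDEV
    tstat = avg / (sd / np.sqrt(n))    # =AVERAGE(...)/(STDEV(...)/SQRT(n))
    pval = 2 * stats.t.sf(np.abs(tstat), df=n - 1)  # =TDIST(ABS(t), n-1, 2)
    return avg, tstat, pval

stats_df = scaled[["ID", "Index"]].copy()
for name, cols in (("NO", no_cols), ("HYP", hyp_cols)):
    avg, tstat, pval = group_stats(scaled, cols)
    stats_df["Avg_LogFC_" + name] = avg
    stats_df["Tstat_" + name] = tstat
    stats_df["Pvalue_" + name] = pval
```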
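Likewise, a minimal sketch of the "forGenMAPP" formatting and export, assuming the `stats_df` table from the sketch above; the rounding follows the protocol (2 decimal places for fold changes, 4 for statistics), and the output file name is only an example.

```python
# Assumes `stats_df` from the statistics sketch above; the file name is only an example.
for_genmapp = stats_df.copy()
for_genmapp.insert(1, "SystemCode", "N")  # SystemCode column to the right of "ID", all "N"

# 2 decimal places for the fold-change columns, 4 for the statistics columns.
for col in ["Avg_LogFC_NO", "Avg_LogFC_HYP"]:
    for_genmapp[col] = for_genmapp[col].round(2)
for col in ["Tstat_NO", "Pvalue_NO", "Tstat_HYP", "Pvalue_HYP"]:
    for_genmapp[col] = for_genmapp[col].round(4)

# Tab-delimited text file for import into GenMAPP.
for_genmapp.to_csv("forGenMAPP_example.txt", sep="\t", index=False)
```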
Sanity Check: Number of genes significantly changed
Before we move on to the GenMAPP/MAPPFinder analysis, we want to perform a sanity check to make sure that we performed our data analysis correctly. We are going to find out the number of genes that are significantly changed at various p value cut-offs and also compare our data analysis with the published results.
- Open your spreadsheet and go to the "forGenMAPP" tab.
- Click on cell A1 and select the menu item Data > Filter > Autofilter. Little drop-down arrows should appear at the top of each column. This will enable us to filter the data according to criteria we set.
- Click on the drop-down arrow on your "Pvalue" column. Select "Custom". In the window that appears, set a criterion that will filter your data so that the Pvalue has to be less than 0.05.
- How many genes have p value < 0.05? NO: 3344 HYP: 3433
- What about p < 0.01? NO: 3344 HYP: 36
- What about p < 0.001? NO: 0 HYP: 15
- What about p < 0.0001? NO: 0 HYP: 6
- When we use a p value cut-off of p < 0.05, what we are saying is that you would have seen a gene expression change that deviates this far from zero less than 5% of the time.
- We have just performed 5480 T tests for significance. Another way to state what we are seeing with p < 0.05 is that we would expect to see this magnitude of a gene expression change in about 5% of our T tests, or 274 times, just by chance. If we have more than 274 genes that pass this cut-off, we know that some genes are significantly changed. However, we don't know which ones. (A script for tallying the genes that pass these cut-offs is sketched after this list.)
- The "Avg_LogFC_all" tells us the size of the gene expression change and in which direction. Positive values are increases relative to the control; negative values are decreases relative to the control.
- Keeping the "Pvalue" filter at p < 0.05, filter the "Avg_LogFC_all" column to show all genes with an average log fold change greater than zero. How many are there?
- NO: 0
- HYP: 78
- Keeping the "Pvalue" filter at p < 0.05, filter the "Avg_LogFC_all" column to show all genes with an average log fold change less than zero. How many are there?
- NO: 3355
- HYP: 3355
- What about an average log fold change of > 0.25 and p < 0.05?
- NO: 0
- HYP: 78
- Or an average log fold change of < -0.25 and p < 0.05? (These are more realistic values for the fold change cut-offs because they represent about a 20% fold change, which is about the level of detection of this technology.)
- NO: 3355
- HYP: 3355
- Keeping the "Pvalue" filter at p < 0.05, filter the "Avg_LogFC_all" column to show all genes with an average log fold change greater than zero. How many are there?
- In summary, the p value cut-off should not be thought of as some magical number at which data becomes "significant". Instead, it is a moveable confidence level. If we want to be very confident of our data, use a small p value cut-off. If we are OK with being less confident about a gene expression change and want to include more genes in our analysis, we can use a larger p value cut-off. For the GenMAPP analysis below, we will use the fold change cut-off of greater than 0.25 or less than -0.25 and the p value cut off of p < 0.05 for our analysis because we want to include several hundred genes in our analysis.
- What criteria did your paper use to determine a significant gene expression change? How does it compare to our method?
- The paper normalized the Cy3 and Cy5 signals for all spots, except those with induction ratios in the top or bottom 5%. A noise value was determined by calculating the average intensity of the lowest 20% of intensities, and values below this were raised to that average. There is no mention of log-based calculations or of the statistical analyses used.
- Our method does not treat the top and bottom 5% of spot intensities differently when normalizing the log ratios.
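A script-based version of these sanity-check counts might look like the sketch below, assuming the `stats_df` table from the statistics sketch; the counts reported above came from Excel's AutoFilter, so this is only a cross-check.

```python
# Assumes `stats_df` from the statistics sketch above.
for cutoff in (0.05, 0.01, 0.001, 0.0001):
    print("p <", cutoff,
          "NO:", (stats_df["Pvalue_NO"] < cutoff).sum(),
          "HYP:", (stats_df["Pvalue_HYP"] < cutoff).sum())

# Expected number of tests passing p < 0.05 by chance alone: 5% of the tests
# performed (here assumed to be one t test per gene per treatment).
n_tests = 2 * len(stats_df)
print("expected to pass p < 0.05 by chance:", round(0.05 * n_tests))

# Genes passing the cut-offs used for GenMAPP: |Avg_LogFC| > 0.25 and p < 0.05.
for name in ("NO", "HYP"):
    passed_p = stats_df["Pvalue_" + name] < 0.05
    up = ((stats_df["Avg_LogFC_" + name] > 0.25) & passed_p).sum()
    down = ((stats_df["Avg_LogFC_" + name] < -0.25) & passed_p).sum()
    print(name, "increased:", up, "decreased:", down)
```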
Sanity Check: Compare individual genes with known data
- Look in your paper for genes that are specifically mentioned. What are their fold changes and p values in the paper? Are they significantly changed in your analysis?
- Only 38 of the 48 genes in the dormancy regulon were found in the data.
- SD values were provided instead of p values, as found in Supplemental Table S1. All 48 genes of the regulon were considered significantly repressed in the paper. (A script for pulling these genes out of the statistics table is sketched after the table below.)
Gene | NO fold change (paper) | NO fold change (this analysis) | Significant in this analysis (NO)? | HYP fold change (paper) | HYP fold change (this analysis) | Significant in this analysis (HYP)? |
---|---|---|---|---|---|---|
RV0079 | 15 | 3.8 | ---- | 13.0 | 3.7 | ---- |
RV0080 | 6.4 | 0.4 | yes | 8.2 | 2.0 | yes |
RV0081 | 2.8 | 0.6 | yes | 3.8 | 1.2 | yes |
RV0569 | 24.2 | 3.9 | ---- | 17.1 | 3.2 | ---- |
RV0570 | 3.0 | 0.6 | yes | 3.0 | 0.9 | yes |
RV0571C | 4.3 | 0.1 | yes | 1.8 | 0.7 | yes |
RV0572C | 16.6 | 3.7 | ---- | 9.4 | 5.9 | ---- |
RV0573C | 1.9 | 0.2 | yes | 1.3 | 0.4 | yes |
RV0574C | 4.9 | 0.7 | yes | 2.9 | 0.9 | yes |
RV1733C | 21.0 | 2.5 | NO | 15.5 | 6.1 | NO |
RV1734C | 5.7 | 0.6 | yes | 5.1 | 2.6 | yes |
RV1735C | 1.9 | 0.4 | yes | 2.0 | 0.7 | yes |
RV1736C | 4.0 | 0.1 | yes | 3.3 | 0.8 | yes |
RV1737C | 15.0 | 1.8 | ---- | 12.5 | 3.2 | ---- |
RV1738 | 26.6 | 5.3 | yes | 50.4 | 19.1 | yes |
RV1812C | 2.4 | 0.2 | yes | 2.0 | 1.2 | yes |
RV1813C | 17.8 | 3.4 | ---- | 12.6 | 5.5 | ---- |
RV1996 | 14.9 | 1.3 | yes | 13.7 | 3.2 | yes |
RV1997 | 6.8 | 0.8 | ---- | 4.4 | 1.5 | ---- |
RV1998C | 15.9 | 5.3 | NO | 8.6 | 8.6 | NO |
RV2003C | 13.8 | 1.1 | ---- | 12.3 | 7.5 | ---- |
RV2004C | 2.1 | 0.6 | yes | 2.1 | 0.5 | yes |
RV2005C | 7.3 | 2.1 | yes | 9.2 | 2.9 | yes |
RV2006 | 4.1 | 1.0 | yes | 4.0 | 2.2 | yes |
RV2007C | 15.6 | 4.3 | yes | 24.1 | 11.9 | yes |
RV2028C | 4.8 | 0.9 | yes | 3.5 | 1.2 | yes |
RV2029C | 15.8 | 2.7 | yes | 12.2 | 5.6 | yes |
RV2030C | 19.0 | 1.3 | yes | 10.6 | 3.5 | yes |
RV2031C | 22.6 | 14.4 | yes | 14.6 | 3.0 | yes |
RV2032 | 31.3 | 7.0 | yes | 45.2 | 16.5 | yes |
RV2623 | 5.5 | 0.2 | yes | 7.3 | 2.5 | yes |
RV2624C | 16.9 | 1.0 | yes | 19.7 | 8.6 | yes |
RV2625C | 5.6 | 0.7 | yes | 6.9 | 3.3 | yes |
RV2626C | 14.6 | 1.3 | yes | 40.6 | 8.0 | yes |
RV2627C | 10.6 | 0.4 | yes | 11.9 | 3.0 | yes |
RV2628 | 7.9 | 0.6 | yes | 5.2 | 1.3 | yes |
RV2629 | 7.2 | 0.8 | yes | 7.4 | 1.4 | yes |
RV2630 | 5.1 | 0.6 | yes | 4.2 | 1.1 | yes |
RV2631 | 2.0 | 0.4 | yes | 1.6 | 0.2 | yes |
RV3126C | 21.5 | 5.8 | NO | 22.7 | 8.6 | NO |
RV3127 | 24.5 | 2.5 | yes | 36.0 | 16.2 | yes |
RV3128C | 11.6 | 3.4 | ---- | 17.5 | 6.7 | ---- |
RV3129 | 25.9 | 5.1 | ---- | 24.5 | 7.0 | ---- |
RV3130C | 21.0 | 3.9 | ---- | 14.0 | 6.7 | ---- |
RV3131 | 5.5 | 0.7 | yes | 4.6 | 1.1 | yes |
RV3132C | 12.1 | 4.6 | yes | 9.8 | 4.0 | yes |
RV3133C | 14.4 | 2.0 | NO | 11.9 | 4.2 | NO |
RV3134C | 9.1 | 2.3 | yes | 11.5 | 2.6 | yes |
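A quick way to pull individual genes out of the statistics table for this kind of comparison is sketched below, again assuming the `stats_df` table from the statistics sketch and that the "ID" column holds the Rv locus tags; the three IDs listed are examples from the table above, not the complete 48-gene regulon.

```python
# Assumes `stats_df` from the statistics sketch above and that the "ID" column
# holds the Rv locus tags; these three IDs are examples only.
regulon_examples = ["RV2031C", "RV2626C", "RV3133C"]
subset = stats_df[stats_df["ID"].isin(regulon_examples)]
print(subset[["ID", "Avg_LogFC_NO", "Pvalue_NO", "Avg_LogFC_HYP", "Pvalue_HYP"]])
```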
Weekly Assignments
- Week 1 Assignment
- Week 2 Assignment
- Week 3 Assignment
- Week 4 Assignment
- Week 5 Assignment
- Week 6 Assignment
- Week 7 Assignment
- Week 8 Assignment
- Week 9 Assignment
- Week 10 Assignment
- Week 11 Assignment
- Week 12 Assignment
- Week 13 Assignment
- Week 15 Assignment
Class Journals
- Class Journal Week 1
- Class Journal Week 2
- Class Journal Week 3
- Class Journal Week 4
- Class Journal Week 5
- Class Journal Week 6
- Class Journal Week 7
- Class Journal Week 8
- Class Journal Week 9
- Class Journal Week 10
- Class Journal Week 11
- Class Journal Week 12
- Class Journal Week 13
- Class Journal Week 15