# BIOL398-01/S10:Sample Microarray Analysis Vibrio cholerae

### From OpenWetWare


## Revision as of 18:26, 7 October 2013

This page has been written with the analysis of the *Vibrio cholerae* dataset in mind. However, these steps are similar to what needs to be performed with *any* microarray dataset (see Overview of Microarray Data Analysis), although the details will differ with the particular experimental design.


## Before we begin...

- The data from the Merrell et al. (2002) paper was accessed from [this page](http://smd.princeton.edu/cgi-bin/publication/viewPublication.pl?pub_no=119) at the Stanford Microarray Database (now hosted by Princeton). *— Kam D. Dahlquist 18:26, 7 October 2013 (EDT)*
- The Log<sub>2</sub> of R/G Normalized Ratio (Median) has been copied from the raw data files downloaded from the Stanford Microarray Database.

**Patient A**

- Sample 1: 24047.xls (A1)
- Sample 2: 24048.xls (A2)
- Sample 3: 24213.xls (A3)
- Sample 4: 24202.xls (A4)

**Patient B**

- Sample 5: 24049.xls (B1)
- Sample 6: 24050.xls (B2)
- Sample 7: 24203.xls (B3)
- Sample 8: 24204.xls (B4)

**Patient C**

- Sample 9: 24053.xls (C1)
- Sample 10: 24054.xls (C2)
- Sample 11: 24205.xls (C3)
- Sample 12: 24206.xls (C4)

**Stationary Samples** (We will not be using these; they are listed here for completeness, but they do not appear in your compiled raw data file.)

- Sample 13: 24059.xls (Stationary-1)
- Sample 14: 24060.xls (Stationary-2)
- Sample 15: 24211.xls (Stationary-3)
- Sample 16: 24212.xls (Stationary-4)

- Download the Merrell_Compiled_Raw_Data_Vibrio.xls file to your Desktop.
- Save a copy of the file with a different filename that includes your initials and the date. For example, I would call mine "Merrell_Compiled_Raw_Data_Vibrio_KD_20091020.xls".

## Normalize the log ratios for the set of slides in the experiment

To scale and center the data (between-chip normalization), perform the following operations:

- Insert a new Worksheet into your Excel file, and name it "scaled_centered".
- Go back to the "compiled_raw_data" worksheet, Select All and Copy. Go to your new "scaled_centered" worksheet, click on the upper, left-hand cell (cell A1) and Paste.
- Insert two rows in between the top row of headers and the first data row.
- In cell A2, type "Average" and in cell A3, type "StdDev".
- You will now compute the Average log ratio for each chip (each column of data). In cell B2, type the following equation:

=AVERAGE(B4:B5224)

- and press "Enter". Excel is computing the average value of the cells specified in the range given inside the parentheses. Instead of typing the cell designations, you can click on the beginning cell, scroll down to the bottom of the worksheet, and shift-click on the ending cell.

- You will now compute the Standard Deviation of the log ratios on each chip (each column of data). In cell B3, type the following equation:

=STDEV(B4:B5224)

- and press "Enter".

- Excel will now do some work for you. Copy these two equations (cells B2 and B3) and paste them into the empty cells in the rest of the columns. Excel will automatically change the equation to match the cell designations for those columns.
- You have now computed the average and standard deviation of the log ratios for each chip. Now we will actually do the scaling and centering based on these values.
- Insert a new column to the right of each data column and label the top of the column as follows: A1_scaled_centered, A2_scaled_centered, etc.
- In cell C4, type the following equation:

=(B4-B$2)/B$3

- In this case, we want the data in cell B4 to have the average subtracted from it (cell B2) and be divided by the standard deviation (cell B3). We use the dollar sign symbols in front of the "2" and "3" to tell Excel to always reference that row in the equation, even though we will paste it for the entire column. Why is this important?

- Copy and paste this equation into the entire column.
- Repeat the scaling and centering equation for each of the columns of data. Be sure that your equation is correct for the column you are calculating.
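The scaling and centering steps above can be sketched in code. This is a hypothetical illustration, not part of the original protocol; the data values are invented, and the sample standard deviation (n − 1 denominator) is used to match Excel's STDEV.

```python
import math

def scale_and_center(column):
    """Z-score one chip's log ratios: subtract the column average,
    then divide by the column standard deviation, exactly as the
    =(B4-B$2)/B$3 equation does for each cell."""
    n = len(column)
    avg = sum(column) / n
    sd = math.sqrt(sum((x - avg) ** 2 for x in column) / (n - 1))
    return [(x - avg) / sd for x in column]

# Invented log2 ratios standing in for one chip's column (e.g. "A1"):
a1 = [0.8, -0.3, 1.2, 0.1, -0.6]
a1_scaled_centered = scale_and_center(a1)
```

After this transformation each chip's column has mean 0 and standard deviation 1, which is what makes the chips comparable to one another.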

## Perform statistical analysis on the ratios

We are going to perform this step on the scaled and centered data you produced in the previous step.

- Insert a new worksheet and name it "statistics".
- Go back to the "scaled_centered" worksheet and copy the first column ("ID").
- Paste the data into the first column of your new "statistics" worksheet.
- Go back to the "scaled_centered" worksheet and copy Column C ("A1_scaled_centered").
- Go to your new worksheet and click on the B1 cell. Select "Paste Special" from the Edit menu. A window will open: click on the radio button for "Values" and click OK. This will paste the numerical result into your new worksheet instead of the equation which must make calculations on the fly.
- Go to a new column on the right of your worksheet. Type the headers "Avg_LogFC_A", "Avg_LogFC_B", and "Avg_LogFC_C" into the top cells of the next three columns.
- Compute the average log fold change for the replicates for each patient by typing the equation:

=AVERAGE(B2:E2)

- into cell N2. Copy this equation and paste it into the rest of the column.

- Create the equation for patients B and C and paste it into their respective columns.
- Now you will compute the average of the averages. Type the header "Avg_LogFC_all" into the first cell in the next empty column. Create the equation that will compute the average of the three previous averages you calculated and paste it into this entire column.
- Insert a new column next to the "Avg_LogFC_all" column that you computed in the previous step. Label the column "Tstat". This will compute a T statistic that tells us whether the scaled and centered average log ratio is significantly different than 0 (no change). Enter the equation:

=AVERAGE(N2:P2)/(STDEV(N2:P2)/SQRT(number of replicates))

- (NOTE: in this case the number of replicates is 3. Be careful that you are using the correct number of parentheses.) Copy the equation and paste it into all rows in that column.

- Label the top cell in the next column "Pvalue". In the cell below the label, enter the equation:

=TDIST(ABS(R2),degrees of freedom,2)

The number of degrees of freedom is the number of replicates minus one, so in our case there are 2 degrees of freedom. Copy the equation and paste it into all rows in that column.
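The T statistic and p-value formulas above can be checked outside Excel. The sketch below is a hypothetical translation for our case of 3 replicates (2 degrees of freedom); with df = 2 the Student t distribution has a closed-form CDF, so the two-tailed p-value equals Excel's TDIST(ABS(t), 2, 2) without needing a statistics library. The replicate values are invented for illustration.

```python
import math

def t_statistic(reps):
    """One-sample T statistic against 0: mean / (SD / sqrt(n)),
    matching =AVERAGE(...)/(STDEV(...)/SQRT(n)). Uses the sample
    SD (n - 1 denominator), as Excel's STDEV does."""
    n = len(reps)
    avg = sum(reps) / n
    sd = math.sqrt(sum((x - avg) ** 2 for x in reps) / (n - 1))
    return avg / (sd / math.sqrt(n))

def p_two_tailed_df2(t):
    """Two-tailed p-value with 2 degrees of freedom, equivalent to
    TDIST(ABS(t), 2, 2): p = 1 - |t| / sqrt(t**2 + 2)."""
    return 1 - abs(t) / math.sqrt(t * t + 2)

# Invented Avg_LogFC_A, _B, _C values for one gene:
reps = [1.1, 0.9, 1.3]
t = t_statistic(reps)   # about 9.53
p = p_two_tailed_df2(t)  # about 0.011, i.e. significant at p < 0.05
```

A gene whose three replicate averages agree closely and sit well away from zero, as here, gets a large T statistic and a small p-value.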

- Insert a new worksheet and name it "forGenMAPP".
- Go back to the "statistics" worksheet and Select All and Copy.
- Go to your new sheet and click on cell A1 and select Paste Special, click on the Values radio button, and click OK. We will now format this worksheet for import into GenMAPP.
- Select Columns B through Q (all the fold changes). Select the menu item Format > Cells. Under the number tab, select 2 decimal places. Click OK.
- Select Columns R and S. Select the menu item Format > Cells. Under the number tab, select 4 decimal places. Click OK.
- Select Columns N through S and Cut. Select Column B by left-clicking on the "B" at the top of the column. Then right-click on the Column B header and select "Insert Cut Cells". This will insert the data without writing over your existing columns.
- Delete Rows 2 and 3 where it says "Average" and "StdDev" so that your data rows with gene IDs are immediately below the header row 1.
- Insert a column to the right of the "ID" column. Type the header "SystemCode" into the top cell of this column. Fill the entire column (each cell) with the letter "N".
- Select the menu item File > Save As, and choose "Text (Tab-delimited) (*.txt)" from the file type drop-down menu. Excel will make you click through a couple of warnings because it doesn't like you going all independent and choosing a different file type than the native .xls. This is OK. Your new *.txt file is now ready for import into GenMAPP. But before we do that, we want to know a few things about our data as shown in the next section.
- Upload both the .xls and .txt files that you have just created to your journal page in the class wiki. Make sure that your file name is distinct from your other classmates so that nobody overwrites anyone else's file.
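The forGenMAPP layout can also be written programmatically. This sketch is hypothetical: the gene IDs and statistics below are made-up placeholders, but the column order (ID, then SystemCode filled with "N", then the statistics), the decimal formatting, and the tab-delimited output match the steps above.

```python
import csv

# Placeholder rows standing in for the statistics worksheet:
rows = [
    ("VC0028", 1.06, 9.5263, 0.0108),
    ("VC0941", -0.48, -3.2100, 0.0849),
]

with open("forGenMAPP.txt", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")  # tab-delimited, as GenMAPP expects
    writer.writerow(["ID", "SystemCode", "Avg_LogFC_all", "Tstat", "Pvalue"])
    for gene_id, avg_fc, tstat, pvalue in rows:
        # Every gene gets SystemCode "N"; fold changes are shown to
        # 2 decimal places, Tstat and Pvalue to 4, as formatted above.
        writer.writerow([gene_id, "N",
                         f"{avg_fc:.2f}", f"{tstat:.4f}", f"{pvalue:.4f}"])
```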

## Sanity Check: Number of genes significantly changed

Before we move on to the GenMAPP/MAPPFinder analysis, we want to perform a sanity check to make sure that we performed our data analysis correctly. We are going to find out the number of genes that are significantly changed at various p value cut-offs and also compare our data analysis with the published results of Merrell et al. (2002).

- Open your spreadsheet and go to the "forGenMAPP" tab.
- Click on cell A1 and select the menu item Data > Filter > Autofilter. Little drop-down arrows should appear at the top of each column. This will enable us to filter the data according to criteria we set.
- Click on the drop-down arrow on your "Pvalue" column. Select "Custom". In the window that appears, set a criterion that will filter your data so that the Pvalue has to be less than 0.05.
- How many genes have p value < 0.05?
- What about p < 0.01?
- What about p < 0.001?
- What about p < 0.0001?

- When we use a p value cut-off of p < 0.05, we are saying that a gene expression change deviating this far from zero would be seen by chance less than 5% of the time.
- We have just performed 5221 T tests for significance. Another way to state what we are seeing with p < 0.05 is that we would expect to see this magnitude of a gene expression change in about 5% of our T tests, or 261 times. Since we have more than 261 genes that pass this cut-off, we know that some genes are significantly changed. However, we don't know *which* ones.
- The "Avg_LogFC_all" column tells us the size of the gene expression change and in which direction. Positive values are increases relative to the control; negative values are decreases relative to the control.
- Keeping the "Pvalue" filter at p < 0.05, filter the "Avg_LogFC_all" column to show all genes with an average log fold change greater than zero. How many are there?
- Keeping the "Pvalue" filter at p < 0.05, filter the "Avg_LogFC_all" column to show all genes with an average log fold change less than zero. How many are there?
- What about an average log fold change of > 0.25 and p < 0.05?
- Or an average log fold change of < -0.25 and p < 0.05? (These are more realistic values for the fold change cut-offs because they represent about a 20% change, which is about the level of detection of this technology.)

- In summary, the p value cut-off should not be thought of as some magical number at which data becomes "significant". Instead, it is a moveable confidence level. If we want to be very confident of our data, use a small p value cut-off. If we are OK with being less confident about a gene expression change and want to include more genes in our analysis, we can use a larger p value cut-off. For the GenMAPP analysis below, we will use the fold change cut-off of greater than 0.25 or less than -0.25 and the p value cut off of p < 0.05 for our analysis because we want to include several hundred genes in our analysis.
- What criteria did Merrell et al. (2002) use to determine a significant gene expression change? How does it compare to our method?
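The Autofilter counts above amount to applying two conditions to each gene's row. A minimal sketch, assuming each row holds (gene ID, Avg_LogFC_all, Pvalue); the gene IDs and values here are invented placeholders, not the real Merrell et al. (2002) data.

```python
# Placeholder (id, avg_logfc_all, pvalue) rows:
rows = [
    ("geneA",  0.40, 0.003),
    ("geneB", -0.30, 0.020),
    ("geneC",  0.10, 0.040),
    ("geneD",  0.90, 0.200),
]

sig  = [r for r in rows if r[2] < 0.05]    # p value filter
up   = [r for r in sig if r[1] > 0.25]     # increased relative to control
down = [r for r in sig if r[1] < -0.25]    # decreased relative to control

print(len(sig), len(up), len(down))  # prints: 3 1 1
```

Tightening the p value cut-off (0.01, 0.001, ...) just changes the first condition, which is why the gene counts shrink as the cut-off gets more stringent.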

## Sanity Check: Compare individual genes with known data

- Merrell et al. (2002) report that genes with IDs: VC0028, VC0941, VC0869, VC0051, VC0647, VC0468, VC2350, and VCA0583 were all significantly changed in their data. Look these genes up in your spreadsheet. What are their fold changes and p values? Are they significantly changed in our analysis?