James C. Clements: Week 11


Responses to questions given in protocol

These are responses to the questions from the protocol found on BIOL398-01/S11:Week 11

  • Number of replicates for each timepoint:
    • t0: 3 replicates
    • t10: 7 replicates
    • t30: 6 replicates
    • t120: 4 replicates
    • t720: 4 replicates
    • t3600: 6 replicates
    • The number of replicates found in the Excel file was greater than the number of biologically independent replicates reported in the Schade paper. This could mean that some of the replicates in the Excel sheet are not independent.
  • Find the number of genes meeting given P value criteria at each timepoint (solutions are listed in order: t0, t10, t30, t120, t720, t3600)
    • How many genes have p value < 0.05?
      • 167 812 777 1351 2499 1346
    • What about p < 0.01?
      • 32 291 250 521 1241 673
    • What about p < 0.001?
      • 1 72 42 112 272 203
    • What about p < 0.0001?
      • 0 12 4 11 40 43
  • Perform this correction and determine whether and how many of the genes are still significantly changed at p < 0.05 after the Bonferroni correction. (Solutions are listed in order from t0, t10, t30, t120, t720, t3600)
    • P = .05
      • 0 1 0 1 6 6
    • P = .01
      • 0 0 0 1 3 6
    • P = .001
      • 0 0 0 0 3 0
    • P = .0001
      • 0 0 0 0 3 0
  • For the timepoint with the most genes significantly changed at P < 0.05 (the 720-minute timepoint in my analysis), find the number of genes with P < 0.05 for different average log fold changes (the Bonferroni-corrected P value was assumed, since it was not specified).
    • A>0
      • 2 genes
    • A>.25
      • 2 genes
    • A<0
      • 4 genes
    • A<-.25
      • 4 genes
  • What criteria did Schade use? How does it compare?
    • Schade's paper is somewhat vague about how this part of the analysis was done. The GeneSpring software was used to perform the statistical analysis, but the type of statistical test was not mentioned. Some searching was done to determine GeneSpring's method for calling a gene significantly changed, but that information was not easily accessible.
  • NSR1 Average fold change and P values. Is it significant?
    • AFC:
      • 0.6841 -0.7194 -3.1920 -3.6023 -1.7610 -0.4052
    • P:
      • 0.7557 0.0005864 0.001090 0.0004374 0.0001921 0.1391
    • No significant change at any timepoint using the Bonferroni correction at P = 5%; without the Bonferroni correction there is significant change at 10 minutes, 30 minutes, 120 minutes, and 720 minutes.
  • Gene with lowest P value.
    • Index number 3328 was found in my data to have the smallest P value. This is systematic name YNL316C, standard name PHA2, which is used in the synthesis of phenylalanine, an essential amino acid. It only shows significant change at the 2-hour mark. The AvgLogFC for this gene was -1.5751, so it was downregulated. It may be difficult for the cell to make proteins or amino acids at the 2-hour time point, and thus the cell would downregulate the gene.

Protocol:

A MATLAB file was created and used to follow the procedure below. The beginning of the MATLAB script details the preprocessing of the data.

Microarray Data Analysis

Background

This is a list of steps required to analyze DNA microarray data.

  1. Quantitate the fluorescence signal in each spot
  2. Calculate the ratio of red/green fluorescence
  3. Log transform the ratios
  4. Normalize the ratios on each microarray slide
    • Steps 1-4 are performed by the GenePix Pro software.
    • You will perform the following steps:
  5. Normalize the ratios for a set of slides in an experiment
  6. Perform statistical analysis on the ratios
  7. Compare individual genes with known data
    • Steps 5-7 are performed in Microsoft Excel
  8. Pattern finding algorithms (clustering)
  9. Map onto biological pathways
    • We will use software called STEM for the clustering and mapping
  10. Create mathematical model of transcriptional network

Each group will analyze a different microarray dataset:

  • Wild type data from the Schade et al. (2004) paper you read last week.
  • Wild type data from the Dahlquist lab.
  • Δgln3 data from the Dahlquist lab.

For your assignment this week, you will keep an electronic laboratory notebook on your individual wiki page that records all the manipulations you perform on the data and the answers to the questions throughout the protocol.

You will download your assigned Excel spreadsheet from LionShare. Because the Dahlquist Lab data is unpublished, please do not post it on this public wiki. Instead, keep the file(s) on LionShare, which is protected by a password.

  • Groups:
    • Schade et al. (2004) data: Carmen, James
    • Dahlquist lab wild type data: Sarah, Nick
    • Dahlquist lab Δgln3 data: Alondra

Experimental Design

On the spreadsheet, each row contains the data for one gene (one spot on the microarray). The first column (labeled "MasterIndex") numbers the rows in the spreadsheet so that we can match the data from different experiments together later. The second column (labeled "ID") contains the gene identifier from the Saccharomyces Genome Database. Each subsequent column contains the log2 ratio of the red/green fluorescence from each microarray hybridized in the experiment (steps 1-4 above having been done for you by the scanner software).

Each of the column headings from the data begins with the experiment name ("Schade" for Schade wild type data, "wt" for Dahlquist wild type data, and "dGLN3" for the Dahlquist Δgln3 data). "LogFC" stands for "Log2 Fold Change" which is the Log2 red/green ratio. The timepoints are designated as "t" followed by a number in minutes. Replicates are numbered as "-0", "-1", "-2", etc. after the timepoint.

For the Schade data, the timepoints are t0, t10, t30, t120, t720 (12 hours), and t3600 (60 hours) of cold shock at 10°C.

For the Dahlquist data (both wild type and Δgln3), the timepoints are t15, t30, t60 (cold shock at 13°C) and t90 and t120 (cold shock at 13°C followed by 30 or 60 minutes of recovery at 30°C). Note that the experimental designs are different.

  • Begin by recording in your wiki the number of replicates for each time point in your data. For the group assigned to the Schade data, compare the number of replicates with what is stated in the Materials and Methods for the paper. Is it the same? If not, how is it different?

Normalize the ratios for a set of slides in an experiment

To scale and center the data (between-chip normalization), perform the following operations (a MATLAB sketch equivalent to these Excel steps follows this list):

  • Insert a new Worksheet into your Excel file, and name it "scaled_centered".
  • Go back to the "compiled_raw_data" worksheet, Select All and Copy. Go to your new "scaled_centered" worksheet, click on the upper, left-hand cell (cell A1) and Paste.
  • Insert two rows in between the top row of headers and the first data row.
  • In cell A2, type "Average" and in cell A3, type "StdDev".
  • You will now compute the Average log ratio for each chip (each column of data). In cell C2, type the following equation:
=AVERAGE(C4:C6190)

and press "Enter". Excel is computing the average value of the cells specified in the range given inside the parentheses. Instead of typing the cell designations, you can left-click on the beginning cell (let go of the mouse button), scroll down to the bottom of the worksheet, and shift-left-click on the ending cell.

  • You will now compute the Standard Deviation of the log ratios on each chip (each column of data). In cell C3, type the following equation:
=STDEV(C4:C6190)

and press "Enter".

  • Excel will now do some work for you. Copy these two equations (cells C2 and C3) and paste them into the empty cells in the rest of the columns. Excel will automatically change the equation to match the cell designations for those columns.
  • You have now computed the average and standard deviation of the log ratios for each chip. Now we will actually do the scaling and centering based on these values.
  • Insert a new column to the right of each data column and label the top of the column with the same name as the column to the left, adding "_sc" (for scaled and centered) to the name. For example, "wt_LogFC_t15-1_sc"
  • In cell D4, type the following equation:
=(C4-C$2)/C$3

In this case, we want the data in cell C4 to have the average subtracted from it (cell C2) and be divided by the standard deviation (cell C3). We use the dollar sign symbols in front of the number to tell Excel to always reference that row in the equation, even though we will paste it for the entire column. Why is this important?

  • Copy and paste this equation into the entire column.
  • Repeat the scaling and centering equation for each of the columns of data. You can copy and paste the formula above, but be sure that your equation is correct for the column you are calculating.
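For those working in MATLAB instead, here is a minimal sketch of the same between-chip normalization. It assumes the raw log ratios are already loaded into a matrix D with one column per chip and NaN for missing spots; the full script at the end of this page does the same thing.

%Scale and center each chip (column): subtract the column average and
%divide by the column standard deviation, ignoring missing (NaN) spots.
[row, col] = size(D);
SC = zeros(row, col);
for n = 1:col
    c = D(:, n);
    Ave = mean(c(~isnan(c)));   %column average (Excel AVERAGE)
    STD = std(c(~isnan(c)));    %column standard deviation (Excel STDEV)
    SC(:, n) = (D(:, n) - Ave) ./ STD;   %matches =(C4-C$2)/C$3
end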

Perform statistical analysis on the ratios

We are going to perform this step on the scaled and centered data you produced in the previous step.

  • Insert a new worksheet into your Excel spreadsheet and name it "statistics".
  • Go back to the "scaled_centered" worksheet, Select All and Copy. Go to your new "statistics" worksheet, click on the upper, left-hand cell (cell A1) and Select "Paste Special" from the Edit menu. A window will open: click on the radio button for "Values" and click OK. This will paste the numerical result into your new worksheet instead of the equation which must make calculations on the fly.
    • There may be some non-numerical values in some of the cells in your worksheet. This is due to errors created when Excel tries to compute an equation on a cell that has no data. We need to go through and remove these error messages before going on to the next step.
    • Scan through your spreadsheet to find an example of the error message. Then go to the Edit menu and Select Replace. A window will open, type the text you are replacing in the "Find what:" field. In the "Replace with:" field, enter a single space character. Click on the button "Replace All" and record the number of replacements made in your wiki page.
  • We are now going to work with your scaled and centered Log Fold Changes only, so delete the columns containing the raw Log Fold changes, leaving only the columns that have the "_sc" suffix in their column headings. You may also delete the second and third rows where you computed the average and standard deviations for each chip.
  • Go to the empty columns to the right on your worksheet. Create new column headings in the top cells to label the average log fold changes that you will compute. Name them with the pattern <Schade, wt, or dGLN3>_<AvgLogFC>_<tx> where you use the appropriate text within the <> and where x is the time. For example, "wt_AvgLogFC_t15".
  • Compute the average log fold change for the replicates for each timepoint by typing the equation:
=AVERAGE(range of cells in the row for that timepoint)

into the second cell below the column heading. For example, your equation might read

=AVERAGE(C2:F2)

Copy this equation and paste it into the rest of the column.

  • Create the equation for the rest of the timepoints and paste it into their respective columns. Note that you can save yourself some time by completing the first equation for all of the averages and then copying and pasting all the columns at once. (A MATLAB sketch of this averaging step appears below.)
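As noted above, here is a minimal MATLAB sketch of the same per-gene averaging. It assumes SC is the scaled and centered matrix and that columns 4 through 10 hold the replicates for one timepoint; the column grouping is dataset-specific.

%Average log fold change (and standard deviation, needed for the T
%statistic below) across the replicates of one timepoint, ignoring
%missing (NaN) values in each gene's row.
reps = SC(:, 4:10);            %e.g. the seven t10 replicate columns
nGenes = size(SC, 1);
AvgLogFC = zeros(nGenes, 1);
StdLogFC = zeros(nGenes, 1);
for n = 1:nGenes
    r = reps(n, :);
    AvgLogFC(n) = mean(r(~isnan(r)));
    StdLogFC(n) = std(r(~isnan(r)));
end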
  • Go to the empty columns to the right on your worksheet. Create new column headings in the top cells to label the T statistic that you will compute. Name them with the pattern <Schade, wt, or dGLN3>_<Tstat>_<tx> where you use the appropriate text within the <> and where x is the time. For example, "wt_Tstat_t15". You will now compute a T statistic that tells you whether the scaled and centered average log fold change is significantly different than 0 (no change in expression). Enter the equation into the second cell below the column heading:
=AVERAGE(range of cells)/(STDEV(range of cells)/SQRT(number of replicates))

For example, your equation might read:

=AVERAGE(C2:F2)/(STDEV(C2:F2)/SQRT(4))

(NOTE: in this case the number of replicates is 4. Be careful that you are using the correct number of parentheses.) Copy the equation and paste it into all rows in that column. Create the equation for the rest of the timepoints and paste it into their respective columns. Note that you can save yourself some time by completing the first equation for all of the T statistics and then copy and paste all the columns at once.
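In MATLAB, the same T statistic can be computed in one vectorized line. This sketch assumes the AvgLogFC and StdLogFC vectors from the sketch above and 7 replicates at this timepoint.

%One-sample T statistic against a mean of zero (no change in expression):
%t = mean/(standard deviation/sqrt(number of replicates))
nReps = 7;                                   %replicates at this timepoint
Tstat = AvgLogFC ./ (StdLogFC ./ sqrt(nReps));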

  • Go to the empty columns to the right on your worksheet. Create new column headings in the top cells to label the P value that you will compute. Name them with the pattern <Schade, wt, or dGLN3>_<Pval>_<tx> where you use the appropriate text within the <> and where x is the time. For example, "wt_Pval_t15". In the cell below the label, enter the equation:
=TDIST(ABS(cell containing T statistic),degrees of freedom,2)

For example, your equation might read:

=TDIST(ABS(AE2),3,2)

The number of degrees of freedom is the number of replicates minus one, so in our case there are 3 degrees of freedom. Copy the equation and paste it into all rows in that column.
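Excel's TDIST(ABS(t), df, 2) is the two-tailed p value of the T distribution. In MATLAB the equivalent, continuing from the sketch above, is (tcdf requires the Statistics Toolbox):

%Two-tailed p value: TDIST(ABS(t),df,2) is equivalent to
%2*(1 - tcdf(abs(t),df))
df = nReps - 1;                        %degrees of freedom = replicates - 1
Pval = 2 * (1 - tcdf(abs(Tstat), df));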

  • Insert a new worksheet and name it "final".
  • Go back to the "statistics" worksheet and Select All and Copy.
  • Go to your new sheet and click on cell A1 and select Paste Special, click on the Values radio button, and click OK. This is your final worksheet from which we will perform biological analysis of the data.
  • Select all of the columns containing Fold Changes. Select the menu item Format > Cells. Under the number tab, select 2 decimal places. Click OK.
  • Select all of the columns containing T statistics or P values. Select the menu item Format > Cells. Under the number tab, select 4 decimal places. Click OK.
  • Upload the .xls file that you have just created to LionShare. Give Dr. Dahlquist (username kdahlqui) and Dr. Fitzpatrick (username bfitzpatrick) permission to download your file. Send an e-mail to each of us with the link to the file.

Sanity Check: Number of genes significantly changed

Before we move on to the biological analysis of the data, we want to perform a sanity check to make sure that we performed our data analysis correctly. We are going to find out the number of genes that are significantly changed at various p value cut-offs and also compare our data analysis with the published results of Schade et al. (2004).

  • Open your spreadsheet and go to the "final" worksheet.
  • Click on cell A1 and select the menu item Data > Filter > Autofilter. Little drop-down arrows should appear at the top of each column. This will enable us to filter the data according to criteria we set.
  • Click on the drop-down arrow on one of your "Pval" columns. Select "Custom". In the window that appears, set a criterion that will filter your data so that the P value has to be less than 0.05.
    • How many genes have p value < 0.05?
    • What about p < 0.01?
    • What about p < 0.001?
    • What about p < 0.0001?
      • Answer these questions for each timepoint in your dataset.
  • When we use a p value cut-off of p < 0.05, what we are saying is that you would have seen a gene expression change that deviates this far from zero less than 5% of the time.
  • We have just performed 6189 T tests for significance. Another way to state what we are seeing with p < 0.05 is that we would expect to see this magnitude of a gene expression change in about 5% of our T tests, or about 309 times, just by chance. Since many more than 309 genes pass this cut-off, we know that some genes are significantly changed. However, we don't know which ones.
    • There is a simple correction, called the Bonferroni correction, that can be made to the p values to increase the stringency. To perform this correction, multiply the p value by the number of statistical tests performed (in our case 6189) and see whether any of the p values are still less than 0.05. (A MATLAB sketch of this correction appears after this list.)
      • Perform this correction and determine whether and how many of the genes are still significantly changed at p < 0.05 after the Bonferroni correction.
  • The "AvgLogFC" tells us the magnitude of the gene expression change and in which direction. Positive values are increases relative to the control; negative values are decreases relative to the control. For the timepoint that had the greatest number of genes significantly changed at p < 0.05, answer the following:
    • Keeping the "Pval" filter at p < 0.05, filter the "AvgLogFC" column to show all genes with an average log fold change greater than zero. How many meet these two criteria?
    • Keeping the "Pval" filter at p < 0.05, filter the "AvgLogFC" column to show all genes with an average log fold change less than zero. How many meet these two criteria?
    • Keeping the "Pval" filter at p < 0.05, How many have an average log fold change of > 0.25 and p < 0.05?
    • How many have an average log fold change of < -0.25 and p < 0.05? (These are more realistic values for the fold change cut-offs because it represents about a 20% fold change which is about the level of detection of this technology.)
  • In summary, the p value cut-off should not be thought of as some magical number at which data becomes "significant". Instead, it is a moveable confidence level. If we want to be very confident of our data, use a small p value cut-off. If we are OK with being less confident about a gene expression change and want to include more genes in our analysis, we can use a larger p value cut-off.
  • What criteria did Schade et al. (2004) use to determine a significant gene expression change? How does it compare to our method?
  • The expression of the gene NSR1 (ID: YGR159C) is known to be induced by cold shock. (Recall that it is specifically mentioned in the Schade et al. (2004) paper.) Find NSR1 in your dataset. Is its expression significantly changed at any timepoint? Record the average fold change and p value for NSR1 for each timepoint in your dataset.
  • Which gene has the smallest p value in your dataset (at any timepoint)? You can find this by sorting your data based on p value (but be careful that you don't cause a mismatch in the rows of your data!) Look up the function of this gene at the Saccharomyces Genome Database and record it in your notebook. Why do you think the cell is changing this gene's expression upon cold shock?
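As referenced above, the filtering counts and the Bonferroni correction can also be sketched in MATLAB. This assumes PMatrix holds one column of p values per timepoint, as in the script below.

%Count genes below each p value cut-off, per timepoint (column sums of a
%logical matrix; NaN p values never pass the comparison).
nP05 = sum(PMatrix < 0.05);
nP01 = sum(PMatrix < 0.01);

%Bonferroni correction: multiply each p value by the number of tests
%performed and re-apply the cut-off.
nTests = size(PMatrix, 1);                 %6189 tests in this analysis
nBonf05 = sum((PMatrix .* nTests) < 0.05);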


MATLAB Code

%James Clements
%The purpose of this file is to crunch the data from the given Excel
%sheet. The data was preprocessed by Dr. Dahlquist and by James Clements.
%The preprocessing by James was loading the data into MATLAB by typing:
%   SchadeDat = xlsread('Schade');
%   SchadeDat(:,1) = []; SchadeDat(:,1) = [];
%and then saving the MATLAB matrix SchadeDat as a .mat file. (The
%preprocessing reads the sheet and then deletes the first two columns so
%that only the data are left.)
%Some info that must be known before running the script is which columns
%belong with which timepoints. In this dataset, column 1 holds the numbers
%from column C in the Excel file, column 2 is the same for column D, and
%so on through column 30.

%Please note, this is not a general script yet; more work must be done to
%make this useful for analysis of any microarray data set.

clear

%Load data and store as a general variable:


load SchadeDat
D = SchadeDat; %Stores SchadeDat as the data matrix D. This makes the load
               %line above and this line where D is defined the only lines
               %that must be modified in order to use a different data set
               %(the timepoint groupings must still be altered).

%% Initialize matrices

[row, col] = size(D); %numbers of rows and columns of data
Ave(1:col) = 0;       %initializes mean vector
STD(1:col) = 0;       %initializes standard deviation vector
SC = zeros(row,col);  %initializes scaled and centered matrix
zCol = zeros(row,1);  %column of zeros (not used below)


%% Normalize the ratios for a set of slides

for n=1:col

   CCol = D(:,n); %examines current column for statistical stuff
   Ave(n) =  mean(CCol(~isnan(CCol))); %averages column
   STD(n) = std(CCol(~isnan(CCol))); %calculates standard deviation of column
   SC(:,n) = (D(:,n)-Ave(n))./STD(n); %Scales and centers column n: data minus chip average, divided by chip standard deviation (matches the Excel formula)

end

%% Perform statistical analysis on the ratios

%First I group the data by timepoint: the timepoint is on the left and the
%number of columns it takes up is on the right, listed in order of
%appearance. "t" means time, and the next number is the minutes before
%collection in the experiment.


nt0  = 3;
nt10 = 7;
nt30 = 6;
nt120 = 4;
nt720 = 4; %12 hours
nt3600 = 6; %60 hours

%Scaled and centered matrices for each timepoint.

SCt0    = SC(:,1:3);
SCt10   = SC(:,4:10);
SCt30   = SC(:,11:16);
SCt120  = SC(:,17:20);
SCt720  = SC(:,21:24);
SCt3600 = SC(:,25:30);

At0    = zeros(row,1);
At10   = zeros(row,1);
At30   = zeros(row,1);
At120  = zeros(row,1);
At720  = zeros(row,1);
At3600 = zeros(row,1);

St0    = zeros(row,1);
St10   = zeros(row,1);
St30   = zeros(row,1);
St120  = zeros(row,1);
St720  = zeros(row,1);
St3600 = zeros(row,1);


%This loop calculates the average log fold change (Atx) and standard deviation (Stx) for each timepoint


for n = 1:row

   row0 = SCt0(n,:);
   At0(n) = mean(row0(~isnan(row0)));
   St0(n) = std(row0(~isnan(row0)));
   
   row10 = SCt10(n,:);
   At10(n) = mean(row10(~isnan(row10)));
   St10(n) = std(row10(~isnan(row10)));
   
   row30 = SCt30(n,:);
   At30(n) = mean(row30(~isnan(row30)));
   St30(n) = std(row30(~isnan(row30)));
   
   row120 = SCt120(n,:);
   At120(n) = mean(row120(~isnan(row120)));
   St120(n) = std(row120(~isnan(row120)));
   
   row720 = SCt720(n,:);
   At720(n) = mean(row720(~isnan(row720)));
   St720(n) = std(row720(~isnan(row720)));
   
   row3600 = SCt3600(n,:);
   At3600(n) = mean(row3600(~isnan(row3600)));
   St3600(n) = std(row3600(~isnan(row3600)));

end

%Calculating T statistics: t = mean/(std/sqrt(number of replicates))

Tt0    = At0./(St0./nt0^.5);
Tt10   = At10./(St10./nt10^.5);
Tt30   = At30./(St30./nt30^.5);
Tt120  = At120./(St120./nt120^.5);
Tt720  = At720./(St720./nt720^.5);
Tt3600 = At3600./(St3600./nt3600^.5);

%Note: I managed to avoid needing the MATLAB Statistics Toolbox until
%now... I'm not quite sure if I can continue without it, however. I think
%it's necessary to calculate the P values for the Student's T test. It
%probably also has better data analysis methods than what I've used to
%just crunch the raw data. We could always copy and paste into Excel or
%OpenOffice from here, but that would defeat the purpose of using MATLAB
%and doing everything in one step.

%<<<FROM HERE ON OUT: MUST HAVE STATISTICS TOOLBOX INSTALLED!!!>>>

% Calculates two-tailed P values for the Student's T test. This matches
% the Excel formula TDIST(ABS(t),df,2), i.e. P = 2*(1 - tcdf(abs(t),df)).

Pt0    = 2*(1 - tcdf(abs(Tt0),    nt0-1));
Pt10   = 2*(1 - tcdf(abs(Tt10),   nt10-1));
Pt30   = 2*(1 - tcdf(abs(Tt30),   nt30-1));
Pt120  = 2*(1 - tcdf(abs(Tt120),  nt120-1));
Pt720  = 2*(1 - tcdf(abs(Tt720),  nt720-1));
Pt3600 = 2*(1 - tcdf(abs(Tt3600), nt3600-1));


PMatrix = [Pt0 Pt10 Pt30 Pt120 Pt720 Pt3600]; %Column 1 is P values for t0, column 2 is P for t10, etc.


%% Question set 1
%Finding numbers of genes that match different P criteria

%The following matrices are logical matrices. A value of 1 means that that
%entry satisfies the criterion on P.


P05   = PMatrix <= .05;
P01   = PMatrix <= .01;
P001  = PMatrix <= .001;
P0001 = PMatrix <= .0001;

%The following vectors count the number of hits for the given P criterion
%above. Column one is for t0, column 2 is for t10, etc.

nP05   = sum(P05);
nP01   = sum(P01);
nP001  = sum(P001);
nP0001 = sum(P0001);

%Bonferroni correction test: multiply each P value by the number of tests
%(the number of rows) and re-apply the cut-off.

Pc05   = (PMatrix.*row) <= .05;
Pc01   = (PMatrix.*row) <= .01;
Pc001  = (PMatrix.*row) <= .001;
Pc0001 = (PMatrix.*row) <= .0001;

ncP05   = sum(Pc05);
ncP01   = sum(Pc01);
ncP001  = sum(Pc001);
ncP0001 = sum(Pc0001);

AMatrix = [At0 At10 At30 At120 At720 At3600]; %Matrix containing average log fold change of each timepoint. This matrix could be useful for some analyses


%% Question set 2
%Finding numbers of genes that match P criteria and average log fold change criteria

%It was assumed for these responses that the P value criterion in the
%directions was given in terms of the Bonferroni correction.

%The 3600-minute timepoint has the most change at the 5% level.

G0  = [Pc05(:,6) AMatrix(:,6)>=0]; %logical matrix: P filter of .05 and average log fold change >= 0
nG0 = sum(sum(G0')==2);            %number of genes that meet both criteria

G25  = [Pc05(:,6) AMatrix(:,6)>=.25];
nG25 = sum(sum(G25')==2);

L0  = [Pc05(:,6) AMatrix(:,6)<=0];
nL0 = sum(sum(L0')==2);

L25  = [Pc05(:,6) AMatrix(:,6)<=-.25]; %cut-off is -0.25, matching the question
nL25 = sum(sum(L25')==2);

%% Question set 3
%Expression of NSR1 (master index number 3274; this corresponds to row
%3274 of my data)

%Average fold change at each timepoint:


NSR1AFC = AMatrix(3274,:);
NSR1P   = PMatrix(3274,:);


%Logical vector determining significant change at 5% with the Bonferroni
%correction (1 means significant change, 0 means no significant change)


NSR1Pc05 = Pc05(3274,:); %note, under this condition there is no significant change

%Logical vector determining the same significant change but without the
%Bonferroni correction

NSR1P05 = P05(3274,:); %This one actually shows significant change


%Find the gene with the minimum P value for any time interval.

MinP = min(min(PMatrix));
[rmin, cmin] = find(PMatrix == MinP); %rmin is the row in which the min value occurs, cmin is the column

%(Index number 3328 was found for my data. This is systematic name
%YNL316C, standard name PHA2, which is used in the synthesis of
%phenylalanine, an essential amino acid. It only shows significant change
%at the 2-hour mark.)


AMinP = AMatrix(rmin,cmin); %calculates average log fold change for the minimum P gene. (It is negative).