User:TheLarry/Notebook/Larrys Notebook/2009/11/10


Data Smoothing

All of the movies have now been analyzed, and now comes the fun part: actually getting information out of the text files.

The first thing Andy needs is a velocity from each text file. For this I'll have to remove any points that are obviously wrong and then smooth the data. After that I'm not 100% sure what to do: histogram the velocities, or best fit them to a horizontal line. Also, Koch has a sliding-window smoothing sub.vi that best fits a Gaussian in each window, which I am excited to see. A rough sketch of the cleaning and smoothing step is below.
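Since the real analysis lives in LabVIEW, this is only a minimal Python sketch of what I mean by cleaning and smoothing, assuming each text file holds position vs. frame number for one microtubule. The window width and outlier cutoff are guesses (window size is one of the open questions below), and the plain moving average stands in for Koch's Gaussian-fitting sub.vi:

```python
import numpy as np

def clean_and_smooth(frames, positions, window=5, z_cut=3.0):
    """Drop obviously bad points, then boxcar-smooth the position trace.

    frames, positions: 1-D arrays for one microtubule (assumed file layout;
    adjust to the real text files).
    window, z_cut: guesses, not settled values.
    """
    frames = np.asarray(frames, dtype=float)
    positions = np.asarray(positions, dtype=float)
    steps = np.diff(positions)
    good = np.ones(len(positions), dtype=bool)
    # call a point "obviously wrong" if its step from the previous frame
    # is more than z_cut standard deviations from the mean step
    good[1:] = np.abs(steps - steps.mean()) < z_cut * steps.std()
    f, p = frames[good], positions[good]
    # plain moving average; Koch's sub.vi fits a Gaussian in each window instead
    kernel = np.ones(window) / window
    p_smooth = np.convolve(p, kernel, mode='valid')
    f_smooth = f[window // 2 : window // 2 + len(p_smooth)]
    return f_smooth, p_smooth
```

Velocity would then come from the slope of the smoothed position vs. frame number (times the frame rate and pixel calibration).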

Also, microtubule length should probably be measured, since the velocity might depend on it. This information probably won't make the poster since there isn't enough time, but it might be worth seeing.

If I know what I am doing completely, I can get this data by the end of tomorrow. The hard part will be removing the bad points, since I don't have a .vi to do it for me automatically, which means I'll do it by hand.

There are minor details of this smoothing part I don't really know yet, like how big the window should be and what to do with the result next (fit the velocity histogram to a Gaussian, fit to a horizontal line, or whatever); a sketch of the Gaussian option follows. I also have to put this information somewhere Andy can get to it. Note keeping is important here so he'll know how many points came from which concentration for error analysis. I'll probably just make a text file for each concentration listing each velocity, along with the length of the microtubule for each one.
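For the histogram option, here is a sketch of what I have in mind: bin the velocities and fit a Gaussian to the counts with scipy. The bin count is arbitrary, and the fitted center mu would be the reported velocity for that concentration:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(v, amp, mu, sigma):
    return amp * np.exp(-(v - mu) ** 2 / (2.0 * sigma ** 2))

def fit_velocity_histogram(velocities, bins=20):
    """Bin the velocities and fit a Gaussian to the histogram.

    Returns (mu, sigma): fitted center and width. bins=20 is a guess.
    """
    counts, edges = np.histogram(velocities, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # start the fit from the sample statistics
    p0 = [counts.max(), np.mean(velocities), np.std(velocities)]
    (amp, mu, sigma), _cov = curve_fit(gaussian, centers, counts, p0=p0)
    return mu, sigma
```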

Steve Koch 00:30, 11 November 2009 (EST): Would Google Docs make any sense to use? Also, in terms of error analysis, I think for this short-term purpose you should record each velocity discovered, along with its frame number and D2O concentration. Then, in a spreadsheet, it's easy to calculate the mean and SEM.
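For reference, the mean and SEM Koch describes are simple to compute outside a spreadsheet too; a quick sketch, where velocities is just the list of velocities recorded for one D2O concentration:

```python
import numpy as np

def mean_and_sem(velocities):
    """Mean and standard error of the mean for one D2O concentration."""
    v = np.asarray(velocities, dtype=float)
    # SEM = sample standard deviation / sqrt(N)
    return v.mean(), v.std(ddof=1) / np.sqrt(len(v))
```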