Talk:Physics307L:People/Osinski/Lightspeed/Lightspeed.m

Steve Koch 00:42, 20 October 2008 (EDT): If I am understanding the code correctly, you do this:


 * Calibrate the TAC by averaging the voltage change seen for 2 ns delays.
 * Calculate the speed of light using only the first and last points.
 * Compute the standard error in a way that I could not easily follow.

My comments about the calibration:


 * It's a good idea to check the calibration of the TAC. But when you get a different answer than the manufacturer states, you have to ask the difficult question: who is right?  Given how much instrument trouble we have, what gives you more confidence in your calibration than in the TAC manufacturer's?  How would you gain this confidence?
 * Would you get a better calibration of the TAC by looking at large or small time changes? It seems to me that much larger delays would work better (but you'd need to account for the change in amplitude of the PMT signal with delay).
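To make the point concrete: rather than averaging single 2 ns steps, you could fit a straight line of TAC output voltage versus inserted delay over the full range of the delay box, so the calibration constant comes from all the points at once. A minimal sketch (the delay and voltage arrays are made-up illustrative numbers, not real measurements):

```python
import numpy as np

# Hypothetical calibration data: inserted delay (ns) vs. TAC output (V).
# Illustrative numbers only, not actual readings from the instrument.
delay_ns = np.array([0.0, 8.0, 16.0, 24.0, 32.0])
tac_volts = np.array([0.02, 0.82, 1.61, 2.43, 3.22])

# Least-squares line: the slope is the calibration constant in V/ns.
slope, intercept = np.polyfit(delay_ns, tac_volts, 1)
print(f"calibration: {slope:.4f} V/ns")
```

With larger delays spanning the whole TAC range, the fitted slope is less sensitive to the voltage noise on any single step than an average of small 2 ns differences.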

My comments about your speed of light calculation:


 * Aren't you wasting a ton of data by ignoring all the middle points? Why not do a linear fit?  This seems like a strange method, given what we've talked about in class up to this point.
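For comparison, a linear fit of measured time versus reflector distance uses every data point, and the slope gives 1/c directly. A sketch under assumed units of meters and nanoseconds (the arrays are made-up numbers standing in for the calibrated TAC times):

```python
import numpy as np

# Hypothetical data: reflector distance (m) and measured delay (ns).
# Illustrative numbers only; real values would come from the TAC readings.
distance_m = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
time_ns = np.array([20.00, 20.68, 21.35, 22.01, 22.66, 23.34])

# Slope of time vs. distance is 1/c (in ns/m); invert to get c in m/s.
slope_ns_per_m, t0 = np.polyfit(distance_m, time_ns, 1)
c_measured = 1.0 / (slope_ns_per_m * 1e-9)
print(f"c ~ {c_measured:.3e} m/s")
```

Using only the first and last points throws away the middle measurements and makes the answer hostage to whichever two points happen to be noisiest.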

As for the standard error calculation, I wasn't sure what you were doing.
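On the standard-error point: one transparent way to quote an uncertainty is the standard error of the fitted slope, computed from the least-squares residuals, which then carries over to c as a relative error. A sketch with the same kind of made-up (distance, time) data as above:

```python
import numpy as np

# Hypothetical (distance, time) data; illustrative numbers only.
x = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])             # distance (m)
y = np.array([20.00, 20.68, 21.35, 22.01, 22.66, 23.34])  # time (ns)

n = len(x)
slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)

# Standard error of the slope from the residual variance (n - 2 dof).
s_yx = np.sqrt(np.sum(residuals**2) / (n - 2))
se_slope = s_yx / np.sqrt(np.sum((x - x.mean())**2))

c = 1.0 / (slope * 1e-9)
se_c = c * (se_slope / slope)  # relative error in slope carries over to c
print(f"c = {c:.3e} +/- {se_c:.1e} m/s")
```

Whatever method you use, writing out the formula (or the few lines of code) makes it much easier for a reader to check that the error bar means what you say it means.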