Physics307L F09:People/Osinski/Lightspeed

Determination of the Speed of Light Using a Time-to-Amplitude Converter (TAC)
The procedure for this lab is #10 in Prof. Gould's manual.

Raw data, instrumentation, measurement settings, as well as thoughts on the experiment, can be found in the lab notebook.

Calibration
The purpose of calibration was to determine the proportionality factor between the amplitude of the voltage signal displayed on the oscilloscope and the time difference between LED emission and PMT response. Two calibrations were performed with the LED at two distances from the PMT. We expected the calibration to be linear, but the results show that the voltage difference between the 10 ns and 8 ns delays is noticeably smaller than the voltage difference between the 4 ns and 2 ns delays, even though both represent a change of 2 ns. I chose to average these differences across both calibrations to find a single proportionality factor. Unfortunately, this results in a very high standard deviation (calculated at the top of the MATLAB code) for the calibration values.

2 ns = 0.3560 V ± 0.04 V (due to instability of the observed measurement and the resolution of the scope), or Pfactor = 0.178 ± 0.04 V/ns
 * Average Proportionality Factor
 * Standard Deviation of Calibration Data - 0.1102 V (Steve Koch: What am I supposed to do with this number? What does it imply about your data?)
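The averaging described above can be sketched as follows. Only the 2 ns reading (0.3560 V) comes from this report; the other calibration voltages are hypothetical stand-ins, chosen to show the kind of nonlinearity noted above (a smaller step between 10 ns and 8 ns than between 4 ns and 2 ns):

```python
# Hypothetical calibration readings; only the 2 ns value (0.3560 V) is from the report.
delays_ns = [2, 4, 6, 8, 10]
voltages_V = [0.356, 0.712, 1.050, 1.370, 1.660]

# Voltage change per nanosecond between consecutive calibration points
diffs = [(voltages_V[i + 1] - voltages_V[i]) / (delays_ns[i + 1] - delays_ns[i])
         for i in range(len(delays_ns) - 1)]

# Average the per-step slopes into a single proportionality factor (V/ns)
pfactor = sum(diffs) / len(diffs)

# Sample standard deviation of the per-step slopes
var = sum((d - pfactor) ** 2 for d in diffs) / (len(diffs) - 1)
std = var ** 0.5

print(f"Pfactor = {pfactor:.3f} V/ns, std = {std:.4f} V/ns")
```

With a genuinely nonlinear calibration, the spread of the per-step slopes (and hence the standard deviation) is large, which is exactly the problem described above.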

Analysis
Calculations of the standard deviation of the mean are at the bottom of the MATLAB code (look for the percent signs, which indicate where I put relevant comments). The thumbnails on the top right are graphs of the raw calibration and measurement data. On the bottom right I provide an averaged graph of the measurement data.

We intended to make measurements in small increments followed by a set of measurements in large increments in order to compare them, but the latter data set was lost, so we were left with two columns of data for analysis (at least we save some time at home).

0.0433 V - this is interesting because the value is almost the same as the instability of the oscilloscope display, which was 0.04 V. I did not expect this, but I now realize it makes sense that the scope's precision (how consistently it displays the same output for the same input) is directly reflected in the standard deviation of measurements made with it.
 * Standard Deviation of Mean for Raw Data
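Since the standard deviation of the mean is the error estimate used here, a minimal sketch of that calculation follows. The repeated readings are hypothetical stand-ins for the actual measurement columns in the MATLAB code:

```python
# Hypothetical repeated voltage readings at one LED position (not the report's data)
readings_V = [1.32, 1.28, 1.36, 1.30, 1.34, 1.26, 1.38, 1.32]

n = len(readings_V)
mean = sum(readings_V) / n

# Sample standard deviation of the individual readings
std = (sum((v - mean) ** 2 for v in readings_V) / (n - 1)) ** 0.5

# Standard deviation of the mean shrinks as sqrt(n)
sdom = std / n ** 0.5

print(f"mean = {mean:.4f} V, std = {std:.4f} V, SDOM = {sdom:.4f} V")
```

The point made above carries over: if the scope can only reproduce a reading to about 0.04 V, the sample standard deviation of repeated readings should come out near that same 0.04 V.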

c = 2.4722*10^8 m/s - judging by our lack of data and the obvious room for human error, I find this result satisfying.
 * Speed of Light
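As a sanity check on the final number, the conversion from averaged TAC voltages to c can be sketched as below. The LED positions and voltages here are hypothetical, constructed only so that the arithmetic reproduces the Pfactor (0.178 V/ns) and c (2.4722*10^8 m/s) quoted above:

```python
# Proportionality factor from the calibration section
pfactor_V_per_ns = 0.178

# Hypothetical LED positions and averaged TAC voltages (not the report's data):
# the voltage drops as the LED moves closer to the PMT (shorter flight time)
positions_m = [0.0, 0.2, 0.4, 0.6, 0.8]
voltages_V = [2.000, 1.856, 1.712, 1.568, 1.424]

# Convert each voltage to a time delay using the proportionality factor
delays_ns = [v / pfactor_V_per_ns for v in voltages_V]

# Least-squares slope of delay vs. position, in ns per meter
n = len(positions_m)
mx = sum(positions_m) / n
my = sum(delays_ns) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(positions_m, delays_ns))
         / sum((x - mx) ** 2 for x in positions_m))

# The speed of light is the inverse of |slope| (converting ns to s)
c = 1e9 / abs(slope)
print(f"c = {c:.4e} m/s")
```

The fit slope has units of ns/m, so inverting it (with the ns-to-s conversion) gives meters per second directly.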