Physics307L F09:People/McCoy/Speed of Light

For my first full lab this semester, I chose to do the speed of light lab, in which I measured the speed of light along an approximately 2 meter long tube leading to a photomultiplier tube (PMT). More detailed information about the lab and its procedure is located in section 10 of the lab manual, as written by Professor Gold. The set-up that I used mimics that seen in the pictures found here. My raw data and notes from both days of the lab can be accessed here for day 1, where I describe the set-up and materials used, and here for day 2, where I have the raw data notes from the calibration of the Time-to-Amplitude Converter (TAC) and from the measurements of the speed of light.

All my measurements were taken using the measure function of the oscilloscope, which, at the voltage scale I was working at, has a margin of error of ±.02 Volts. Because of this margin of error, I have included a calculation of the minimum and maximum values of the speed of light due to the oscilloscope's margin of error, along with the error that comes from the standard deviation of my relatively small sample of 20 points for each category.

I did all my calculations using MatLab 7.0.4 and you can access a Microsoft Word file with the data as generated by MatLab at (Steve Koch:It's better to do things in the wiki format, so I converted your document to html here: /Word.  Although you did a good job with that document, please use the wiki next time! (And actually, now that I'm looking at it more, it's a good supporting document, but difficult to understand without more detailed explanation of the analysis techniques))

Calculation of the Speed of Light

To calculate the speed of light, I calibrated the TAC by taking 20 data points at each of the "0 delay", ".5ns delay", "1ns delay", and "2ns delay" settings. I then took the average of the points at each setting and fit a linear model to the data, such that the slope of the regression line had units of Volts/nanosecond. Having done this, I measured the same number of points for distance differences of 0, .25, .5, .75, and 1 meter. Fitting a regression line to this data gave me a slope with units of Volts/meter. With these two slopes, I was able to calculate the speed of light in meters/nanosecond and from that find my final value.
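The structure of this two-fit calculation can be sketched in Python (I did the actual analysis in MatLab); the voltage arrays below are placeholders, not my measured averages, which are in the day-2 raw data notes:

```python
import numpy as np

# Hypothetical averaged voltages at each calibration delay (V);
# the real 20-point averages are in the day-2 raw data notes.
delays_ns = np.array([0.0, 0.5, 1.0, 2.0])
v_time = np.array([1.00, 1.12, 1.23, 1.46])        # placeholder values

# Hypothetical averaged voltages at each distance difference (V).
dist_m = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
v_dist = np.array([1.00, 1.12, 1.25, 1.37, 1.50])  # placeholder values

# Least-squares regression slopes: V/ns and V/m.
slope_v_per_ns = np.polyfit(delays_ns, v_time, 1)[0]
slope_v_per_m = np.polyfit(dist_m, v_dist, 1)[0]

# Speed of light: (V/ns) / (V/m) gives m/ns; times 1e9 gives m/s.
c_measured = slope_v_per_ns / slope_v_per_m * 1e9
print(c_measured)
```

With the real slopes of .2323 V/ns and .4988 V/m, the same ratio gives the value quoted below.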

The data points that I measured returned slopes of .2323 V/ns and .4988 V/m, so when I calculated the speed of light I came out with a value of $$.2323(V/ns)/.4988(V/m)*1E9 = 4.66E8(m/s)$$. This value of 4.66e8 m/s for the speed of light is approximately 1.5 times the accepted value. Having this high of a speed surprised me greatly, since the speed of light in a vacuum is approximately 3e8 m/s.

Error Calculation

To determine the error in my measurements, I calculated it in two different ways. First, using the measurement error, I found the maximum and minimum slopes of the regression lines for each data series (the time delay and the length difference), which give the maximum and minimum values of the speed of light within the oscilloscope's margin of error. Doing so, I calculated a minimum value of 3.83e8 m/s and a maximum value of 5.66e8 m/s. The minimum was significantly closer to the true value of the speed of light, but as it was still significantly higher than the accepted value, I also did the calculations using the standard deviation.
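One simple way to bound the slopes under a fixed ±.02 V reading error is to tilt each fitted line by the full error over its span. This sketch only illustrates that bounding logic, starting from the two slopes quoted earlier; my actual MatLab calculation fit extreme regression lines through the shifted data points, so its bounds (3.83e8 and 5.66e8 m/s) differ slightly from the numbers this produces:

```python
# Fitted slopes from the data (V/ns and V/m) and the scope's reading error.
slope_t = 0.2323   # V per ns, time-calibration fit
slope_d = 0.4988   # V per m, distance fit
dv = 0.02          # oscilloscope measurement error, V

# Spans of the independent variables (2 ns of delay, 1 m of distance).
span_t_ns = 2.0
span_d_m = 1.0

# Tilting a line by +dv at one end and -dv at the other changes
# its slope by 2*dv/span.
slope_t_min, slope_t_max = slope_t - 2 * dv / span_t_ns, slope_t + 2 * dv / span_t_ns
slope_d_min, slope_d_max = slope_d - 2 * dv / span_d_m, slope_d + 2 * dv / span_d_m

# The extreme speeds pair the extreme slopes in opposite directions.
c_min = slope_t_min / slope_d_max * 1e9   # m/s
c_max = slope_t_max / slope_d_min * 1e9   # m/s
print(c_min, c_max)
```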

Using the standard deviation, I first found that the calculated bounds may be inaccurate, because the high standard deviation allowed a negative or near-zero slope for the regression line. Because of this, I took the magnitude of the calculated speed for the maximum and minimum values. Under this assumption, I calculated a minimum value of the speed of light of 6.23e7 m/s, or approximately 1/5 the accepted value, and a maximum value of 1.31e11 m/s, which is around 400 times the accepted value of the speed of light. By these calculations, a 68% confidence interval for the speed of light would range from 6.23e7 to 1.31e11 m/s. Within that interval, the accepted speed of light is easily within 1 standard deviation of the mean, but the sheer width of the interval, driven by the large standard deviations of the individual calculations, makes the range of limited use.

From these error calculations, I believe that the safest statement would be that the speed of light is approximately 4.66e8 ± 8.3e7 m/s, or that we are 68% confident that the speed of light is between 6.23e7 and 3.46e9 m/s. For both statements, I used the convention of taking the smaller deviation from the mean (additive for the fixed error, multiplicative for the standard error) as the quoted deviation.
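That smaller-deviation convention can be sketched numerically, assuming only the values already quoted above (best estimate 4.66e8 m/s, fixed-error bounds 3.83e8 and 5.66e8 m/s, standard-deviation lower bound 6.23e7 m/s); small differences from my quoted 3.46e9 come from rounding in the quoted inputs:

```python
c_best = 4.66e8            # best estimate, m/s

# Fixed-error bounds from the oscilloscope's +/-0.02 V reading error.
c_lo, c_hi = 3.83e8, 5.66e8

# Additive convention: use the smaller of the two one-sided deviations.
delta = min(c_best - c_lo, c_hi - c_best)
print(delta)               # 8.3e7 m/s

# Multiplicative convention for the standard-deviation bounds: take the
# factor between the best estimate and the nearer bound, apply it both ways.
c_sd_lo = 6.23e7
factor = c_best / c_sd_lo  # ~7.5
interval = (c_best / factor, c_best * factor)
print(interval)            # ~(6.23e7, 3.49e9)
```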

Error Analysis

The greatest reason for the high standard deviation is the large variance in calculated values: each pulse of light scatters along the length of the tube and some is absorbed by the cardboard, making the intensity of each pulse measured by the anode of the PMT slightly different. Because of this, and because the TAC triggers at a fixed voltage level, the calculated speed of light depends on the triggering level of the TAC: it may not trigger for all pulses reaching the PMT, and it registers a different voltage depending on the intensity of the measured pulse, an effect known as time walk. Since a pulse does not immediately spike to its maximum negative voltage, a lower-amplitude peak crosses the discriminator level later, giving it a later time signature than a higher-amplitude peak. The discriminator level may also be set such that a peak fails to trigger the stop signal on the TAC, so that it runs two light pulses together as a single timing sequence. A graphical image of time walk is provided in the lab manual, section 10.3. Another possible source of error is the signal sent from the LED to the TAC: if the signal is looked at directly using an oscilloscope, it has multiple peaks, which could trigger the TAC more than once, so that it registers not on the initial peak of the light pulse but on one of the later peaks.

Possible Improvements

The most noticeable improvement to this lab would be to take more data points, so as to decrease the standard deviation and gather a data set closer to the true value of the speed of light. Beyond increasing the number of data points, the next improvement would be to increase the sensitivity of the TAC, to reduce the effect of time walk and the discriminator level on the registered values. Although this would improve the ability to gather data, it could also hinder the lab: since the light pulses are not single pulses but have a slight oscillation in the signal, the TAC might trigger on more than one of those oscillations, so that the time calculation would not be consistent. The final improvement would be to decrease the margin of error in the oscilloscope's measurements, but doing so would require finer voltage discrimination along with a wider frequency range. These improvements can be made with higher-quality oscilloscopes, but improving the resolution to the point where the measurement error is negligible is not a cost-effective change.

What I Learned/Benefits of Lab

I found this lab to be very beneficial, as it allowed me to learn about the use of a TAC and PMT while continuing to increase my proficiency with the oscilloscope. The greatest benefit, in my opinion, was learning about time walk and how the discriminator level of the TAC determines the relative reception time of the start and stop pulses because of their magnitude. Learning how a pulse's initial rise depends on its magnitude is quite valuable, as it helps me understand the benefit of more sensitive measurements: a stronger pulse has a steeper initial slope, while a weaker one has a shallower slope that delays the trigger. The other thing that I feel was very beneficial was using different methods of error calculation and seeing that they cannot always be considered accurate: with relatively small data sets, outliers are still large enough contributors that the calculations give them too much influence to come out truly accurate.