User:Arianna Pregenzer-Wenzler/Notebook/Junior Lab/2008/10/15


Speed of light

SJK 11:05, 6 November 2008 (EST)
You are missing a description of the equipment (make / model number)! Especially since you are specifically talking about calibrating the instruments, you would want to be able to say exactly which instrument you were calibrating. Other than that, your notebook looks very good with lots of important details.

Set up

Set up for this lab seems mostly to be about understanding what is going on between the power sources, the PMT, and the oscilloscope: what the oscilloscope is measuring and how to interpret the waveforms displayed on the screen. Overall I think I failed to make the important connections about what is going on, because when we started taking data and ran into a problem, I had no idea what was causing it. Our first problem was with the triggering on the oscilloscope. We could adjust the polarizers between the LED and the PMT to max intensity, but then if we attempted to make a small adjustment to decrease the intensity, our voltage as displayed on the oscilloscope went to zero. We had our trigger set too high up the waveform, and if the wave did not reach the trigger level our voltage reading was zero. We fixed this, and we understood the basic concept that the stopping potential needs to be constant throughout the process (when we actually get to measuring TOF): when we start, the max intensity needs to be read with the light source far from the PMT, so that as we decrease the distance we can maintain the same potential by decreasing the intensity.


The power supply for the PMT is at 2000 V. We measured the delays at high intensity with the LED source a pretty great distance from the PMT. We ran into trouble because for different delays we were getting next to no voltage change. Who knows if this is the right approach, but we took this calibration data by measuring changes in the square wave, since that was the only thing that was changing.

Delay (nsec)  Set1 (V)  error1 (±)  Set2 (V)  error2 (±)
0             4.13                  4.0
0.5           4.24                  4.04      0.04
1             4.24                  4.12      0.04
2             4.56                  4.4
4             4.92      0.04        4.76      0.04
8             5.68                  5.56      0.04
16            7.32      0.04        7.12

Media:Speed of Light.xls
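As a quick check of this calibration data, here is a sketch (the helper function is mine, not from the lab) of a least-squares fit of TAC output voltage against the known delay, using the Set 1 values from the table above; the slope should come out near the 0.2 V/nsec that the TAC settings predict (10 V full scale over a 50 nsec range).

```python
# Least-squares fit of the Day I calibration data (Set 1 from the table above).

def fit_line(xs, ys):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

delay_ns = [0, 0.5, 1, 2, 4, 8, 16]                   # known delays (nsec)
set1_v = [4.13, 4.24, 4.24, 4.56, 4.92, 5.68, 7.32]   # TAC output (V)

slope, intercept = fit_line(delay_ns, set1_v)
print(f"slope = {slope:.3f} V/nsec")                  # comes out close to 0.2
```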

Day II


Ok, so we actually didn't do as badly as I thought on the first day of this lab; still, my notes were minimal, and we are going to need to be pretty organized if we are going to get any worthwhile data in the next lab session.


  • 1st thing with this lab is safety
A couple weeks ago at least one student got a pretty serious shock from the DC power supply in this lab, so the important lesson is: unplug all equipment before moving it around. Even if I (or any other student) manage to survive a shock without experiencing any negative effects, our teacher might not survive the stress, so be careful! SJK 01:46, 22 October 2008 (EDT)
You are so right....thank you!!!

Since I happened to read this, I have a comment for tomorrow: do the data first, and "calibrate" afterwards. Calibration is a useful exercise in frustration, but I don't want you to get so caught up in it that you can't take enough data (I can explain in person).
  • Once everything is positioned correctly, plug in and turn on. The DC power supply should be set between 150 V and 200 V, NOT above 200 V. The HV for the PMT should be between 1800 and 2000 V. The TAC requires at least a 2 nsec delay (check this on the oscilloscope).
  • In this lab we are trying to measure the speed of light. Our DC power supply delivers voltage to an LED, which sends out pulses at approximately 10 kHz that are detected by a PMT. The TAC (time-to-amplitude converter) measures the difference between the start time of the light pulse (the voltage sent out by the DC power supply) and its stop time (the voltage delivered by the PMT) and converts it into an output voltage, which we measure and convert back to time. Using this time and the distance traveled, we should be able to measure the speed of light.
  • Important: the TAC triggers later for smaller pulses, even if the max pulse amplitude is the same, so the signal voltage from the PMT must be kept constant!! This is done using the polarizers attached to the PMT: when the LED source is at its max distance from the PMT, the intensity of the light should be maximized; as the LED is moved closer, use the polarizers to decrease the intensity, thereby keeping the starting voltage constant.
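The voltage-to-time conversion described above can be sketched as follows (a sketch only; the function name is mine, and it assumes the 50 nsec range setting we recorded, with the TAC output spanning 0-10 V):

```python
# Sketch of the TAC voltage-to-time conversion under our recorded settings.
RANGE_NS = 50.0       # TAC range setting (nsec)
FULL_SCALE_V = 10.0   # TAC output spans 0-10 V over the full range

def tac_volts_to_ns(v):
    """Convert a TAC output voltage to a time interval in nsec."""
    return v * RANGE_NS / FULL_SCALE_V

print(tac_volts_to_ns(1.0))   # each volt corresponds to 5.0 nsec
```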

Data II


  • Calibration
Delay (nsec) Set1(V) error1(±) Set2(V) error2(±)

The start pulse is seen on the oscilloscope as a sharp negative peak (vertical on one side and steeply sloped on the other). The stop time is what we are measuring; it is a positive, table-like function. There needs to be a delay of at least 2 nsec between the two curves, and the trigger needs to be set near the base of the curves. Use the average function on the oscilloscope to stabilize the picture when taking measurements. The start voltage needs to be at least several volts, probably around 400V, and the LED source should be close to the max distance with the polarizers turned so that the intensity is maximized.

  • Measure stop time for a series of delays, check that it is linear.
  • The time resolution should be better than 1nsec
  • Measure systematic uncertainty due to time walk. This means moving the LED source inward without adjusting the polarizers and taking the same calibration data.


Distance(time walk) (cm) Set1(V) error1(±) Set2(V) error2(±)


  • Experiment

Keeping the start voltage constant using the polarizers, measure the stop time over a series of distances (from LED to PMT) ranging from around 0 cm to 150 cm. Do multiple trials at all distances. Set an arbitrary zero distance, then measure from there going toward the PMT.

Start at zero (which we set as 20 cm on the ruler; there is a burn mark by the 20 cm, so we can't get this 20 cm confused with any other 20 cm on the multiple meter sticks) and move the LED inward.

Start Voltage (needs to remain constant)

Set1: 450mV ±10mV

Set2: really try to keep channel 1 at 440V; little changes in start voltage make a large difference

Set3: same, channel 1 at 440V

TAC settings: range 50 nsec, multiplier 1

Distance (cm)  Set1 (V)  error1 (±)  Set2 (V)  error2 (±)  Set3 (V)  error3 (±)
0              8.56      0           8.48      0.08        8.5       0.08
10             8.44      0.04        8.24      0           8.2       0.1
20             8.56      0           8.5       0.08        8.16      0.08
30             7.5       0.1         8.4       0           7.9       0.06
40             7.2       0.08        8.4       0           7.7       0.08
50             7.7       0.1         8.32      0           7.6       0.2
60             7.44      0.08        8.2       0.04        7.24      0.08
70             7.8       0.04        8.12      0.04        7.64      0.06
80             7.44      0.15        8.08      0.08        7.6       0
90             7.6       0.08        8.08      0.08        7.4       0.1
100            7.8       0.2         8.0       0           7.3       0.1
110            7.9       0.1         8.0       0.08        7.6       0.04
120            7.8       0.1         7.9       0.1         7.56      0.04
130            7.24      0.04        7.6       0.08        7.96      0.04
140            6.8       0.08        6.7       0.1         7.4       0.1
150            6.6       0.1         7.24      0.04        6.9       0.1
160            7.24      0.04        6.9       0.08        7.36      0.1
170            6.5       0.1         7.12      0.06        6.8       0.08
180            6.88      0.08        7.0       0.2         6.72      0.08
190            6         0.08        6.95      0.05        6.7       0.15
200            6.6       0.04        7.25      0.20        7.0       0.08
210            6.8       0.08        6.6       0.20        7.24      0.04

Media:Speed of LightII.xls
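The least-squares analysis can be sketched like this (a sketch, not the analysis in the spreadsheet; the helper function is mine), using the Set 2 voltages from the table above. Converting TAC output at 5 nsec/V (50 nsec range over 10 V full scale), the magnitude of the slope of time vs. distance is 1/c; the slope is negative because moving the LED toward the PMT shortens the flight time.

```python
# Least-squares estimate of c from the Set 2 data in the table above.

def fit_slope(xs, ys):
    """Slope of an ordinary least-squares line through (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

dist_cm = [10 * i for i in range(22)]                 # 0, 10, ..., 210 cm
set2_v = [8.48, 8.24, 8.5, 8.4, 8.4, 8.32, 8.2, 8.12, 8.08, 8.08, 8.0,
          8.0, 7.9, 7.6, 6.7, 7.24, 6.9, 7.12, 7.0, 6.95, 7.25, 6.6]
time_ns = [v * 5.0 for v in set2_v]                   # TAC volts -> nsec

slope = fit_slope(dist_cm, time_ns)                   # nsec per cm (negative)
c_est = 1.0 / abs(slope)                              # cm per nsec
print(f"c estimate = {c_est:.1f} cm/nsec (true value is about 30 cm/nsec)")
```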

Just eyeballing our data, this is not looking good, and my suspicion is confirmed by the initial analysis I did on our data in the lab (posted above as Speed of Light II). Before I go back to data analysis, just a few notes about this lab and possible problems. While I took the first set of data, my partner took the second and third sets, which might have been a good thing in the sense that I don't think he quite knew what we were looking for, and therefore was not trying to get the data to say what he thought it was supposed to say. Just looking at our data, you can see that it doesn't make a whole lot of sense. My lab partner thought it looked pretty linear, but think about what we were actually trying to measure: the time it takes light to travel a given distance. If that distance is getting progressively shorter, then the time should be getting progressively shorter. In our case, you can only say the time is decreasing as the distance decreases if you take a really broad perspective, i.e. if you compare our initial vs. final time.

I am saying time, but our measurements are in volts, and here is another problem. When our teacher, Dr Koch, looked at our calibration results, he said that we actually got data quite close to the actual value you would take off the machine settings, but we did our calibration during the previous lab, and I do not know if the settings we used that day were the ones we used on the second day (here I am referring to the TAC range and the multiplier). What I do know is that at some point during our second day Dr Koch explained these settings and why we should expect our calibration curve to be close to .2 (this is the multiplier divided by the range), but at the end of our experiment I recorded a range of 50 and a multiplier of 1 (not 10), which would give us a conversion factor of .02 and a really steep curve.

After talking again to Dr Koch, I learned that you use the range to get your conversion factor; the multiplier multiplies the range. The range is listed in the manual for the TAC as being applicable to an output voltage of 0 to 10 volts, so our conversion factor is determined by dividing 10 V by the range of 50 nsec to give 0.2 V/nsec, which corresponds well to our calibration.


Our data is bad, and even though I made a real effort to understand what we were doing, and what our equipment was doing, prior to our second day in lab, I still did not record the detail necessary about our equipment to do an intelligent analysis of systematic error, so I almost don't know how to begin. SJK 11:20, 6 November 2008 (EST)
As noted on your summary page, I added some stuff to your excel sheet that I think you would learn a lot from and be excited to see: File:Speed of Light data Arianna SJK.xls. See my comments there. One notable thing is that it looks like your intuition was correct that you in fact got better data with practice. This is perfectly reasonable in experimental science. However, you'd want to avoid "trying until you're right," which is sort of what I'm doing. But in your case, you were not saying you got better because your final value got better, but you were saying you thought your technique was improving, and this was possibly confirmed by subsequent analysis. Thus, if this were your formal report, you'd be very well setup to repeat the experiment using your good technique and I'd expect you'd get data consistent with 30 cm / ns!

I am going to attempt to go through what we were supposed to have done, and do a reasonable analysis of what we did do and what we should have done.

Instructions from the lab manual, under procedure:

3. Using the oscilloscope, look at the start/stop signals for the TAC and make sure there is a sufficient delay between them.

We did this; the start signal was the one that looked like a sharp negative peak, and the stop signal resembled a square wave. Our oscilloscope was set to read (I think; I did not record this at the time) 250 nsec per division, and there were at least a couple of divisions between peaks. The required delay was at least 2 nsec.

4. Calibrate your system using known delays.

We did this on day I of the lab and were advised not to do it a second time, since it was not considered an accurate way to calibrate the TAC. The data we did obtain from our calibration did in fact come very close to the expected results if the range of the TAC was set at 50 and the multiplier was set at 10. Unfortunately, as I already mentioned, at the end of the 2nd day I read our settings as range 50 and multiplier 1, and I did not know if all our data for distance vs. voltage was taken at this setting; but using these values as a calibration between voltage and time (time = (1/50)*voltage) gave even worse results for c than we got using our calibrated value of .2. And I do not know the setting on the TAC on the first day, when we took our calibration data, because at that time I did not understand what I needed to look for. For the second day in lab I don't have a good excuse for not being careful about settings. As I noted above, this was an incorrect interpretation of how to extract the calibration from the dial settings, but it underlines the importance of actually understanding the equipment.
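To make the dial arithmetic concrete, here is a quick check (a sketch; the helper name is mine, not from the TAC manual): the correct reading divides the 10 V full-scale output by the range, so the dial settings alone predict the calibration slope, while the incorrect reading (time = V/50) gives the much smaller .02 factor.

```python
# Comparing the correct and incorrect readings of the TAC dial settings.

def predicted_slope(range_ns, full_scale_v=10.0):
    """Predicted calibration slope in V per nsec for a given TAC range."""
    return full_scale_v / range_ns

print(predicted_slope(50))   # correct reading: 0.2 V/nsec, matching our calibration
print(1.0 / 50.0)            # incorrect reading (time = V/50): factor of 0.02
```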

5. Measure uncertainty due to time walk.

I set the table up for this, but we did not do it; I guess I lumped this data into the calibration data and just didn't have the energy to go back to it, especially after seeing how poorly our data for distance vs. voltage turned out. I am sorry now that I did not take at least a few measurements, because it seems like we had a large error even when we were trying to keep the start amplitude constant. I am curious to know how large our error would have been had we disregarded entirely the attempt to keep constant amplitude as we decreased the distance between the PMT and the LED source. What I can say is that in our first set of data (data III), where we took voltage vs. decreasing distance from LED to PMT, we attempted to hold our starting voltage constant and could only keep it constant to within ±10 mV, so I think a lot of the error in our data has to do with time walk.

6. Take data...

i. Over short (~25cm) changes in distance, and over long (~150cm) changes in distance. Which set of data would you expect to give better results?

We took 3 sets of data over a total distance of 220 cm, starting at an arbitrary max distance from the PMT with our LED light source and moving it closer in 10 cm increments, while attempting to keep the start amplitude constant using the polarizers. If you look at a series of data points that are close together, there is very little consistency to our data; often we record a longer time for light to travel a shorter distance. If you look at data points at our max and minimum distances, you at least see the expected trend, i.e., it takes light less time to travel a shorter distance. This goes with what I would expect given the amount of systematic error in this lab: at short distances there is too little change in distance to make up for the amount of overall uncertainty. I have looked at some of the data collected by other groups, in particular the data collected by Darrel Bon and Boleshk in the Monday lab. Their data was, if not linear, at least consistent, in that a shorter distance always corresponded to a shorter time. If we had managed to get data like theirs, then I might say something different, because if all of our data was basically linear, then time walk might have played a larger role in making data taken over large distances less consistent than data collected over a relatively short distance.

ii. Take data over a series of distances, and use it to determine an experimental value for c from a least squares fit, along with its error. This at least I have been able to do.

iii. Summarize and compare your results for c

7. By using error propagation show that the relative error in c is dominated by the time resolution error, not by the position resolution error.

Because we did not take any actual time walk data, I doubt that I will be able to show any conclusive reasons for what is the dominant cause of our error. Actually, giving this some more thought: we were able to experimentally get a calibration that corresponded well with the voltage-to-time conversion listed on the TAC, but even so we got very poor data. The important thing to note is that we got poor data for all three of our trials, but on the first one we were holding the start voltage constant within a range of ±.10, while on the second two trials we tried to greatly reduce this error and truly attempted to hold the start voltage constant. If you compare the data for all three trials, the extra attention to the start voltage does not appear to have improved the quality of our results, and in the final analysis of the data our value for c is pretty far off. This might point away from my earlier idea that our error was mainly due to time walk, and lend credence to the statement in 7 that the relative error is dominated by the time resolution error.

This is not quite true; I don't need time walk data to do some error propagation. I just need to compare my best guess at the error in the calibration data and in the distance data; if one error term is greater, that is the term that most affects my results.
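That comparison can be sketched as follows. For c = d/t the relative errors add in quadrature, (δc/c)² = (δd/d)² + (δt/t)². The numbers are illustrative only: our roughly ±0.1 V voltage scatter at 5 nsec/V corresponds to about ±0.5 nsec of time error, and the ±0.5 cm ruler uncertainty is an assumption of mine (I did not record it).

```python
# Error propagation for c = d/t: compare relative position and time errors.
import math

d, delta_d = 100.0, 0.5      # typical distance (cm) and assumed position error
t, delta_t = 40.0, 0.5       # typical time (nsec) and time error from ±0.1 V

rel_d = delta_d / d          # relative position error
rel_t = delta_t / t          # relative time error
rel_c = math.hypot(rel_d, rel_t)   # quadrature sum = relative error in c
print(rel_d, rel_t, rel_c)   # the time term dominates
```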