User:Arianna Pregenzer-Wenzler/Notebook/Junior Lab/2008/10/15
Speed of light
SJK 11:05, 6 November 2008 (EST)
Setup for this lab seems mostly to be about understanding what is going on between the power sources, the PMT, and the oscilloscope: what the oscilloscope is measuring, and how to interpret the waveforms displayed on the screen. Overall I think I failed to make the important connections, because when we started taking data and ran into a problem, I had no idea what was causing it.

Our first problem was with the triggering on the oscilloscope. We could adjust the polarization between the LED and the PMT to maximum intensity, but if we then attempted a small adjustment to decrease the intensity, the voltage displayed on the oscilloscope went to zero. We had our trigger set too high up the waveform, so if the wave did not reach the trigger level our voltage reading was zero. We fixed this, and we understood the basic concept that the stopping potential needs to be held constant throughout the process (once we actually get to measuring TOF): at the start, the maximum intensity needs to be read with the light source far from the PMT, so that as we decrease the distance we can maintain the same potential by decreasing the intensity.
The power supply for the PMT is 2000 V. We measured the delays at high intensity with the LED source at a fairly great distance from the PMT. We ran into trouble because different delays were giving us next to no change in voltage. Who knows if this was the right approach, but we took this calibration data by measuring changes in the square wave, since that was the only thing that was changing.
Ok, so we actually didn't do as badly as I thought on the first day of this lab. Still, my notes were minimal, and we are going to need to be pretty organized if we are going to get any worthwhile data in the next lab session.
The starting amplitude is seen on the oscilloscope as a sharp negative peak (vertical on one side and steeply sloped on the other). The stop signal, which is what we are measuring, is a positive table-like function. There needs to be a delay of at least 2 nsec between the two curves, and the trigger needs to be set near the base of the curves. Use the average function on the oscilloscope to stabilize the picture when taking measurements. The start voltage needs to be at least several volts, probably around 400 V, and the LED source should be at close to the maximum distance with the polarizers turned so that the intensity is maximized.
Keeping the start voltage constant using the polarizers, measure the stop time over a series of distances (from LED to PMT) ranging from around 0 cm to 150 cm. Do multiple trials at all distances. Set an arbitrary zero distance, then measure from there going toward the PMT.
Start at zero (which we set as 20 cm on the ruler; there is a burn mark by the 20 cm mark, so we can't get this 20 cm confused with any other 20 cm on the multiple meter sticks) and move the LED inward.
Start Voltage (needs to remain constant)
Set 1: 450 mV ± 10 mV
Set 2: really try to keep channel 1 at 440 V; little changes in start voltage make a large difference
Set 3: same, channel 1 at 440 V
TAC settings: range 50 nsec, multiplier 1
Just eyeballing our data, this is not looking good, and my suspicion is confirmed by the initial analysis I did on our data in the lab (posted above as Speed of Light II). Before I go back to data analysis, just a few notes about this lab and possible problems. While I took the first set of data, my partner took the second and third sets, which might have been a good thing in the sense that I don't think he quite knew what we were looking for, and therefore was not trying to get the data to say what he thought it was supposed to say.

Just looking at our data, you can see that it doesn't make a whole lot of sense. My lab partner thought it looked pretty linear, but think about what we were actually trying to measure: the time it takes light to travel a given distance. If that distance is getting progressively shorter, then the time should be getting progressively shorter. In our case you can only say the time is decreasing as the distance decreases if you take a really broad perspective, i.e. if you compare our initial vs. final time.

I am saying time, but our measurements are in volts, and here is another problem. When our teacher, Dr. Koch, looked at our calibration results, he said that we actually got data there that is quite close to the actual value you would take off the machine settings. But we did our calibration during the previous lab, and I do not know if the settings we used on that day were the ones we used on the second day (here I am referring to the TAC range and the multiplier). What I do know is that at some point during our second day Dr. Koch explained these settings and why we should expect our calibration curve to be close to 0.2 (this is the multiplier divided by the range, which is a frequency), but at the end of our experiment I recorded a range of 50 and a multiplier of 1 (not 10), which would give us a conversion factor of 0.02 and a really steep curve.
After talking again to Dr. Koch, I learned that you use the range to get your conversion factor; the multiplier multiplies the range. The range is listed in the manual for the TAC as corresponding to an output voltage of 0 to 10 volts, so our conversion factor is determined by dividing the 10 V full-scale output by the 50 nsec range to give 0.2 V/nsec, which corresponds well to our calibration.
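The dial-to-conversion-factor logic above can be sketched in a couple of lines. This is just my reading of the settings (0-10 V full-scale output spanning range × multiplier), not anything pulled from the TAC manual verbatim, and the function names are my own:

```python
# Sketch of the TAC voltage-to-time conversion described above.
# Assumption: the TAC's 0-10 V output spans (range * multiplier) nsec.

def tac_conversion_factor(range_ns, multiplier=1, full_scale_volts=10.0):
    """Return the conversion factor in nsec per volt."""
    return (range_ns * multiplier) / full_scale_volts

def voltage_to_time(volts, range_ns=50, multiplier=1):
    """Convert a TAC output voltage (V) to a time delay (nsec)."""
    return volts * tac_conversion_factor(range_ns, multiplier)

# Range 50 nsec, multiplier 1: 5 nsec/V, i.e. the 0.2 V/nsec above.
print(tac_conversion_factor(50))   # 5.0
print(voltage_to_time(2.0))        # 10.0
```

With the multiplier at 10 instead, the same functions give 50 nsec/V, which is where the steep-curve worry above comes from.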
Analysis
SJK 11:20, 6 November 2008 (EST)
Our data is bad, and even though I made a real effort to understand what we were doing, and what our equipment was doing, prior to our second day in lab, I still did not record the detail necessary about our equipment to do an intelligent analysis of systematic error, so I almost don't know how to begin.
I am going to attempt to go through what we were supposed to have done, and do a reasonable analysis of what we did do and what we should have done.
Instructions from the lab manual, under procedure:
3. Using the oscilloscope, look at the start/stop signals for the TAC and make sure there is a sufficient delay between them.
We did this: the start signal was the one that looked like a sharp negative peak, and the stop signal resembled a square wave. Our oscilloscope was set to read (I think; I did not record this at the time) 250 nsec per division, and there were at least a couple of divisions between peaks. The required delay was at least 2 nsec.
4. Calibrate your system using known delays.
We did this on day I of the lab and were advised not to do it a second time, since it was not considered an accurate way to calibrate the TAC. The data we did obtain from our calibration did in fact come very close to the expected results if the range of the TAC was set at 50 and the multiplier was set at 10. Unfortunately, as I already mentioned, at the end of the second day I read our settings as range 50 and multiplier 1, and I did not know if all our data for distance vs. voltage was taken at this setting; using these values as a calibration between voltage and time (time = (1/50)*voltage) gave even worse results for c than we got using our calibrated value of 0.2. And I do not know the setting on the TAC the first day, when we took our calibration data, because at that time I did not understand what I needed to look for. For the second day in lab I don't have a good excuse for not being careful about settings. As I noted above, this was an incorrect interpretation of how to extract the calibration from the dial settings, but it underlines the importance of actually understanding the equipment.
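A calibration like the one in step 4 amounts to a straight-line fit of TAC output voltage against known delays, with the slope as the conversion factor. The delay and voltage numbers below are made up purely to illustrate the method (ours were not recorded well enough to reproduce here):

```python
# Minimal sketch of calibrating the TAC from known delays.
# The data points are hypothetical, chosen to lie on a 0.2 V/nsec line.
import numpy as np

known_delays_ns = np.array([4.0, 8.0, 16.0, 32.0])  # hypothetical known delays
tac_output_v = np.array([0.8, 1.6, 3.2, 6.4])       # hypothetical TAC readings

# Least-squares line: slope is the calibration in V/nsec.
slope, intercept = np.polyfit(known_delays_ns, tac_output_v, 1)
print(f"calibration: {slope:.3f} V/nsec")  # ~0.2 V/nsec for range 50
```

Comparing a slope fit this way against the value implied by the dial settings would have caught the range/multiplier confusion on the spot.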
5. Measure uncertainty due to time walk.
I set the table up for this but we did not do it. I guess I lumped this data into the calibration data, and just didn't have the energy to go back to it, especially after seeing how poorly our data for distance vs. voltage turned out. I am sorry now that I did not take at least a few measurements, because it seems like we had a large error even when we were trying to keep the start-time amplitude constant; I am curious to know how large our error would have been had we disregarded entirely the attempt to keep a constant amplitude as we decreased the distance between the PMT and the LED source. What I can say is that in our first set of data (data III), where we took voltage vs. decreasing distance from LED to PMT, we attempted to hold our starting voltage constant, and could only keep it constant with an error of ±10 mV, so I think a lot of the error in our data has to do with time walk.
6. Take data...
i. Over short (~25cm) changes in distance, and over long (~150cm) changes in distance. Which set of data would you expect to give better results?
We took 3 sets of data over a total distance of 220 cm, starting at an arbitrary maximum distance from the PMT with our LED light source and moving it closer in 10 cm increments, while attempting to keep the start amplitude constant using the polarizers. If you look at a series of data points that are close together, there is very little consistency to our data; often we recorded a longer time for light to travel a shorter distance. If you look at data points at our maximum and minimum distances, you at least see the expected trend, i.e., it takes light less time to travel a shorter distance. This goes with what I would expect given the amount of systematic error in this lab: at short distances there is too little change in distance to make up for the amount of overall uncertainty. I have looked at some of the data collected by other groups, in particular the data collected by Darrel Bon and Boleshk in the Monday lab. Their data was, if not linear, at least consistent in that a shorter distance always corresponded to a shorter time. If we had managed to get data like theirs, then I might say something different, because if all of our data were basically linear, then time walk might have played a larger role in making data taken over large distances less consistent than data collected over a relatively short distance.
ii. Take data over a series of distances, and use it to determine an experimental value for c from a least squares fit, and its error. This at least I have been able to do.
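The least-squares extraction of c in step 6.ii looks like this in outline. The distance and time values below are fabricated ideal numbers (c = 30 cm/nsec plus an arbitrary cable-delay offset) just to show the method; real input would be the TAC voltages converted to nsec:

```python
# Sketch of extracting c from distance-vs-time data by least squares.
# Fabricated ideal data: t = d/c + t0 with c = 30 cm/nsec, t0 = 12 nsec.
import numpy as np

dist_cm = np.array([0.0, 30.0, 60.0, 90.0, 120.0, 150.0])
time_ns = dist_cm / 30.0 + 12.0

# Fit d = c*t + b: the slope of distance vs. time is c directly,
# and the constant offset t0 is absorbed into the intercept b.
coeffs, cov = np.polyfit(time_ns, dist_cm, 1, cov=True)
c_fit = coeffs[0]               # cm/nsec
c_err = np.sqrt(cov[0, 0])      # standard error on the slope
print(f"c = {c_fit:.1f} cm/nsec")
```

Fitting distance against time (rather than the reverse) keeps c as the slope, so the fit's slope uncertainty is the error on c with no extra propagation step.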
iii. Summarize and compare your results for c
7. By using error propagation show that the relative error in c is dominated by the time resolution error, not by the position resolution error.
Because we did not take any actual time-walk data, I doubt that I will be able to show any conclusive reasons for what is the dominant cause of our error. Actually, giving this some more thought: we were able to experimentally get a calibration that corresponded well with the voltage-to-time conversion we were actually using, as listed by the TAC, but even so we got very poor data. The important thing to note is that we got poor data for all three of our trials, but on the first one we were holding the start voltage constant within a range of ±10 mV, while on the second two trials we tried to greatly reduce this error and truly attempted to hold the start voltage constant. If you compare the data for all three trials, the extra attention to the start voltage does not appear to have improved the quality of our results, and in the final analysis of the data our value for c is pretty far off. This might point away from my earlier idea that our error was mainly due to time walk, and lend credence to the statement in 7 that the relative error is dominated by the time resolution error.
This is not quite true; I don't need time-walk data to do some error propagation. I just need to compare my best guess at the error in the calibration (time) data and in the distance data; whichever error term is greater is the one that most affects my results.
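The comparison in step 7 comes straight from error propagation on c = d/t: the relative error in c is the quadrature sum of the relative errors in d and t, so the larger relative term dominates. The sigma values below are illustrative guesses, not measured uncertainties:

```python
# Error propagation for c = d/t:
#   sigma_c/c = sqrt((sigma_d/d)^2 + (sigma_t/t)^2)
# The numbers are assumed, for illustration only.
import math

d, sigma_d = 150.0, 0.5   # cm: distance and assumed ruler resolution
t, sigma_t = 5.0, 0.5     # nsec: time and assumed timing resolution

rel_d = sigma_d / d       # ~0.003: position term
rel_t = sigma_t / t       # 0.1: time term, clearly dominant here
rel_c = math.sqrt(rel_d**2 + rel_t**2)
print(f"relative error in c: {rel_c:.4f}")
```

With guesses anywhere near these, the time term swamps the position term, which is exactly the claim in step 7; plugging in our actual best-guess sigmas would make the comparison concrete.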