Physics307L:People/Harriger/Formal/Rough Draft

class notes:
 * cite and rewrite-but don't copy
 * figures: must be numbered and have good captions, and be bold when in text (tables and graphs too)
 * graphs don't have to be figures (Steve Koch 20:52, 29 November 2010 (EST):I'm not sure what I said in class, but actually graphs should be figures!)
 * methods: section is all in past tense, need model number and company for the equipment, needs to include methods for analysis, include code from MATLAB as a link in the report
 * references: should be used in introduction and discussion, need history of radiation detection, use NIST for values of constants, Wikipedia links are acceptable, use libproxy.unm.edu to send traffic through UNM and get articles from over the paywall, if UNM doesn't subscribe you can go to illiad.unm.edu to order articles from the interlibrary loan system, formatting-see mendeley.com to build library of citations
 * abstract: motivation, method, results including error and deviation etc

= Measuring and Predicting Background Radiation Events Using Poisson Statistics =

Author: Kirstin G.G. Harriger

Experimentalists: Brian P Josey and Kirstin G. G. Harriger

Junior Lab, Department of Physics & Astronomy, University of New Mexico

Albuquerque, NM 87131

kharrig1@unm.edu



Abstract (add results)
Background radiation events in the lab were counted using a scintillator connected to a photomultiplier tube and a computer. Data were gathered over 9 trials with dwell times of 10 ms, 20 ms, 40 ms, 80 ms, 100 ms, 200 ms, 400 ms, 800 ms, and 1 s. We aimed to show that the number of radiation events detected is well described by a Poisson distribution. To demonstrate this, we analyzed the data using characteristic properties of the Poisson distribution. We compared the standard deviation of the data to the theoretical standard deviation the data would have if they followed a Poisson distribution; the average percent error between these two standard deviations over all 9 trials was ?. We also used the data from the first 8 trials to predict the number of background radiation events during a one-second dwell time, and then gathered data with a one-second dwell time in the 9th trial to compare against the average prediction from trials 1-8. In the one-second trial there were ? radiation events, while the average prediction was ?. Finally, we compared the trends in our data as the average rate increased with increasing dwell time to the known behavior of the Poisson distribution under the same conditions, and demonstrated these trends graphically.

Introduction (fix citations)
The Poisson distribution is a discrete probability distribution: it is characterized by a probability mass function that gives the probability of a discrete random variable being equal to some value [probability mass function, Wikipedia]. Specifically, it expresses the probability of a number of discrete events occurring within a time frame, provided that the timing of each event is independent of the timing of the previous event, the events are rare (i.e. unlikely to be simultaneous), and the average rate of events does not change over time [1]. If the known average rate is $$\lambda$$, then the probability that there are exactly k occurrences is equal to [2]


 * $$f(k; \lambda)=\frac{\lambda^k e^{-\lambda}}{k!}$$
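The analysis for this report was done in MATLAB, but the probability mass function above is easy to sketch in any language. A minimal Python illustration (the rate of 2 events per window is a made-up example, not our data):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """Probability of exactly k events when the average rate is lam."""
    return (lam ** k) * exp(-lam) / factorial(k)

# With an average of 2 events per window, the chance of exactly 2 events
# in a window is about 0.27.
p = poisson_pmf(2, 2.0)
```

Summing the pmf over all k gives 1, which is a quick check that the function is normalized.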

The Poisson distribution also shows characteristic trends as the average rate of events per window increases, which happens when the data cover longer dwell times.

Statistical questions can be answered when only the average rate of a given Poisson process is known [3]. One statistical quantity relevant to our experiment is the theoretical standard deviation of the Poisson distribution, which is the square root of the average [Gold's Manual, p. 60]. Consequently, the average and the standard deviation are not independent of each other, as they are in a Gaussian distribution [Uncertainties, p. 29]. However, as the average increases, the Poisson distribution begins to resemble a Gaussian distribution, until the two are indistinguishable [Gold's Manual, p. 60]. The data are very likely to follow a Poisson distribution if the standard deviation calculated with the usual formula is close to the theoretical standard deviation of the Poisson distribution:


 * $$\sigma = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(x_i-\bar{x}\right)^2} \approx \sqrt{\bar{x}} = \sigma_{\mathrm{Poisson}}$$
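As a sanity check of this relation, one can simulate rare, independent events and compare the two standard deviations. This Python sketch uses simulated counts only, not our measurements; it mimics an average of about 4 events per window over 2047 windows, matching the number of windows per software run in our experiment:

```python
import random
import statistics

random.seed(0)

# Simulate 2047 windows of counts from rare, independent events:
# 1000 chances per window, each with probability 0.004, giving an
# average of about 4 events per window (approximately Poisson).
counts = [sum(random.random() < 0.004 for _ in range(1000))
          for _ in range(2047)]

mean = statistics.mean(counts)
sample_std = statistics.stdev(counts)   # the "usual" formula
poisson_std = mean ** 0.5               # theoretical sqrt(mean)
percent_error = 100 * abs(sample_std - poisson_std) / poisson_std
```

For a genuine Poisson process the percent error shrinks toward zero as the number of windows grows, which is the behavior the comparison in Table 2 is meant to test.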

Another characteristic of the Poisson distribution is that the average rate of events for one time period can be used to predict the average rate for another time period by using a scalar factor that converts between the time periods [Uncertainties, p. 30]. The data are very likely to follow a Poisson distribution if the predicted average rate for a given time period is close to the average rate measured for that time period:


 * $$\lambda_2 = \frac{T_2}{T_1}\,\lambda_1$$, where $$\lambda_1$$ is the average rate measured with dwell time $$T_1$$ and $$\lambda_2$$ is the predicted average rate for dwell time $$T_2$$.
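Applied to our dwell times, this scaling is a one-line calculation. A Python sketch with hypothetical numbers (0.4 events per 100 ms window is illustrative, not a measured value):

```python
def predict_rate(avg_count, dwell_time, target_time):
    """Scale the average count per window from one dwell time to another."""
    return avg_count * (target_time / dwell_time)

# Hypothetical example: an average of 0.4 events per 100 ms window
# predicts an average of 4.0 events per 1 s window.
predicted = predict_rate(0.4, 0.100, 1.0)
```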

The Poisson distribution has many applications in physics, especially in modeling radioactive decay processes [4]. This is because radioactive decay is governed by particle characteristics that can be treated discretely [5] and counted, the timing of each event is independent of the timing of the last, and the events are rare and unlikely to be simultaneous. Additionally, it is straightforward to take a known average number of events for one time period, find a theoretical standard deviation, and predict an average number of events for another time period.

We measured the number of background radiation events in a standard physics teaching laboratory at 5000 ft elevation. We used a scintillator attached to a photomultiplier tube (PMT) and wired to a computer to gather the data. When the scintillator absorbs radiation, it fires a pulse of ultraviolet light down the tube to the PMT. The PMT then creates a signal voltage that is picked up by a card in the computer. We used the UCS 30 software on the computer to set our dwell times and record radiation events. We also used a Spectech Universal Computer Spectrometer power supply to apply a bias voltage to the detector; this voltage determines the sensitivity of the detector. After gathering the data, we analyzed it using the two characteristics of the Poisson distribution described above: the theoretical standard deviation, and the prediction of the average rate for one time period from that of another. We compared the trends in our data as the average rate increased with increasing dwell time to the known trends of the Poisson distribution under the same circumstances, and we demonstrated the trends graphically as further evidence that our data follow a Poisson distribution.



Methods and Materials




A combined scintillator and photomultiplier tube, see '''Figure 1''', was used to collect data. Every time the scintillator detected radiation, it fired a pulse of ultraviolet light to the photomultiplier tube. The photomultiplier tube would then convert the light into a signal voltage. This voltage was carried to an internal MCS card in a computer, where it was analyzed by the UCS 30 software. In order for the detector to register the radiation, it had to have a potential gradient that would pick up ions created in the radiation event. This potential was supplied by a Spectech Universal Computer Spectrometer power supply, see '''Figure 2''', set to 1200 V.

The UCS 30 software counted each signal voltage picked up by the card in the computer as an event. The data output by this software contained the size of the dwell time and the number of radiation events that occurred in each instance, or window, of the given dwell time. For example, if the dwell time was set to 10 ms, the output would contain the number of events detected in each 10 ms window. These data were then saved into a data file and analyzed with MATLAB R2009a. In this experiment we used dwell times of 10, 20, 40, 80, 100, 200, 400, and 800 ms to determine the prediction for one second. We then used a dwell time of 1 s to gather data to compare to the prediction. Each run of the software collected data for 2047 windows at the given dwell time.
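The MATLAB code used for the analysis is linked from the report; an equivalent Python sketch of the per-run analysis steps, using made-up counts rather than real data, might look like:

```python
import statistics

def analyze(counts, dwell_time_s, target_time_s=1.0):
    """Summarize one run: mean, sample and Poisson standard deviations,
    their percent error, and the average rate scaled to the target
    dwell time. `counts` is the list of events per window for the run."""
    mean = statistics.mean(counts)
    sample_std = statistics.stdev(counts)
    poisson_std = mean ** 0.5
    return {
        "mean": mean,
        "sample_std": sample_std,
        "poisson_std": poisson_std,
        "percent_error": 100 * abs(sample_std - poisson_std) / poisson_std,
        "predicted_rate": mean * target_time_s / dwell_time_s,
    }

# Made-up counts for a hypothetical 100 ms dwell time run:
summary = analyze([3, 5, 4, 4, 6, 2, 4, 5, 3, 4], 0.100)
```

For the real data, `counts` would be the 2047 per-window event counts parsed from the UCS 30 output file.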

Data Analysis (need graphs)
'''Table 1:''' The first column lists the possible numbers of radiation events per window, from lowest to highest, for each dwell time. The second column shows the number of windows in which each count of radiation events was detected for that dwell time; for instance, if there were 100 windows in which 3 radiation events were detected, 3 would appear in the first column and 100 in the second column next to it. The third column shows the fractional probability of a given number of events occurring in a window.
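The rows of Table 1 can be generated directly from the per-window counts. A Python sketch of this tabulation (the input list is made up for illustration, not our data):

```python
from collections import Counter

def fractional_probabilities(counts):
    """Build Table 1 rows: (events per window, number of windows with
    that count, fraction of all windows with that count)."""
    tally = Counter(counts)
    total = len(counts)
    return [(k, n, n / total) for k, n in sorted(tally.items())]

rows = fractional_probabilities([0, 1, 1, 2, 1, 0, 3, 1, 2, 1])
```

The fractional probabilities in the third column are what the graphs compare against the theoretical Poisson probability mass function.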

'''Table 2:''' The first row shows the average number of radiation events per window, calculated directly from the data for each dwell time. The second row shows the standard deviation taken as the square root of the average for each dwell time, and the third row shows the standard deviation calculated directly from the data. The fourth row shows the percent error between the two standard deviations. The fifth row shows the predicted average rate per second derived from the data for that dwell time.

Graph 1:...

Graph 2:...

Discussion (needs more discussion; add citations from journal articles)

 * Compare the calculated standard deviations and the square root standard deviations
 * Compare the measurement and the average of the predictions for 1s dwell time
 * The average of the predictions for a one second dwell time is ?
 * The measured average for a one second dwell time is ?
 * The percent error between these two is ?
 * Describe graphical trends in the data
 * Does the number of events with the highest fractional probability show a trend? (increasing with increasing window size)
 * Do the standard deviations have a trend? (decreasing with increasing window size)
 * Does the value of highest fractional probability show a trend compared to the other values? (flattening with increasing window size)


 * Verify the data is characteristic of Poisson Distributions
 * the prediction is good
 * the two standard deviations are very close to each other for each trial
 * highest fractional probability trend shows data becoming more symmetric and flattening
 * the standard deviation shrinks with increasing window size


 * Identify possible sources of error
 * Human Error
 * Systematic Error
 * Random Error

Conclusions

 * results
 * error
 * potential continuations?

Acknowledgments
I would like to thank the professor for this course, Dr. Steve Koch, the TA, Katie Richardson, Dr. Michael Gold, the original creator of the experiment, and especially my fellow experimenter, Brian P. Josey.

General SJK Comments
Steve Koch 13:20, 6 December 2010 (EST): Hi Kirstin, apparently I didn't finish going over this! I will! Please see Brian's formal report rough draft, though, for discussion of what you can do further today. Obviously we already talked in person, so I think you're OK on that. More to come.