γ-ray spectroscopy with a LaBr3:Ce scintillation detector at ultra-high count rates



γ-ray spectroscopy with a LaBr3:Ce scintillation detector at ultra-high count rates
(γ-Spektroskopie mit einem LaBr3:Ce-Szintillationsdetektor bei sehr hohen Zählraten)

Master's thesis submitted by Bastian Löher, November 2010
First referee: Prof. Dr. Norbert Pietralla
Second referee: Dr. Kerstin Sonnabend
Date of submission:

Abstract

The excellent scintillation properties of the material LaBr3:Ce, especially the high energy resolution of < 3 % at 662 keV and the very short decay time constant of the emitted light of < 25 ns, suggest that LaBr3:Ce detectors can be used in ultra-high count rate γ-ray spectroscopy far beyond the kBq regime. In this thesis a 3x3" LaBr3:Ce detector is thoroughly investigated and its linearity is validated with a range of calibration sources up to 4 MeV. Photo peak efficiency and resolution are measured for a range of energies below 4 MeV and for count rates up to 20 MBq. All data has been recorded with a 500 MHz sampling ADC using custom data acquisition software and electronics from Struck SIS. The recorded signal traces have been analysed with digital signal processing algorithms to yield timing and calorimetry information. Simulations with synthetic signals have been carried out to derive optimum parameters for the analysis.

At very high count rates the detected signals overlap and form pile-up events, which are normally rejected in the analysis to maintain a good energy resolution. This process leads to efficiency losses. A digital pile-up correction method is applied to the simulated signals as well as to the acquired data in order to recover the efficiency lost to pile-up without affecting the resolution. The method is based on the fact that the signal shape can be described by a constant model. If that is the case, the recorded traces are just a superposition of this model at each timestamp with a matching amplitude. It is then possible to derive the true amplitudes by taking into account the contributions from the neighboring signals, solving a set of linear equations. The measurements show that γ-ray spectroscopy is possible up to count rates of 20 MBq with a reasonable energy resolution.

Several tools for data acquisition and analysis have been developed. SamDSP is a DSP library containing a set of convenience functions for handling and manipulating digital signals in C++. A generic experiment control kit and online analysis tool (GECKO) has been created as a modular multi-threaded data acquisition software for Linux PCs. It interfaces with Struck SIS and CAEN VME hardware, but is easily extendable to support other hardware as well. Also, the design for an active beam dump for the low-energy photon tagger NEPTUN at the superconducting linear accelerator S-DALINAC at TU Darmstadt is presented. It is planned to install the 3x3" LaBr3:Ce detector inside a massive lead shield to function as an in-beam energy and photon flux monitor.


Contents

1. Introduction
   1.1. Motivation
   1.2. Application at NEPTUN
2. The detector
   2.1. The LaBr3:Ce scintillator
   2.2. Background spectrum
3. Digital Signal Processing and pile-up correction
   3.1. Digital timestamping and calorimetry
   3.2. Pile-Up Correction Method
   3.3. Application to synthetic data
        3.3.1. Composition of realistic signals
        3.3.2. Influence of algorithm parameters
        3.3.3. Effectiveness of pile-up correction
        3.3.4. Influence of false triggers
   3.4. Summary
4. Measurements
   4.1. Experimental setup
   4.2. Preparative measurements
   4.3. Pile-Up Correction
   4.4. Energy resolution of the detector
   4.5. Peak to Total Ratio
   4.6. Measurement in strong background radiation
   4.7. Data acquisition
5. Construction of an active beam dump for NEPTUN
   5.1. Requirements
   5.2. Design of detector support
   5.3. Collimator design
   5.4. Setup of beam dump
6. Summary and outlook
   6.1. Outlook
Acknowledgments
A. Data acquisition
   A.1. GECKO overview
   A.2. Extending GECKO
   A.3. Quick start guide
   A.4. Future developments


1 Introduction

One of the preferred tools in nuclear physics to determine characteristic properties of atomic nuclei is γ-ray spectroscopy. Ever since their invention, scintillation detectors and semiconductor detectors have become more and more powerful devices to probe the internal structure of thousands of different isotopes by detecting the emitted γ-rays originating from radioactive decay. The development of new lanthanum trihalide scintillation materials has drawn a lot of interest in γ-ray spectroscopy since the invention of the cerium-doped lanthanum bromide scintillator LaBr3:Ce in 2001 at Delft and Bern Universities [vl01]. The material is produced by Saint-Gobain Crystals under the trade name BrilLanCe. Its superior scintillation properties in comparison to the standard NaI or BaF2 [Kum09] have made LaBr3:Ce the material of choice for nuclear γ-ray spectroscopy [Cie09], medical imaging ([Mos05], [Pan07], [Kur10]) or industrial and military applications ([Kul07], [Mil07]).

The main advantages of LaBr3:Ce over conventional scintillators are the high light yield of >60000 photons/MeV (165 % of NaI), the high energy resolution of <3 % at 662 keV, the very short scintillation light decay time of 25 ns, the high proton number (Z=57) and material density of 5.08 g/cm³, and the excellent temperature stability of about 0.01 %. The disadvantages of LaBr3:Ce are that the material is highly hygroscopic (more so than NaI) and its production process is difficult. The scintillator has to be kept inside a hermetically sealed package at all times to avoid degradation of the material. To achieve the aforementioned excellent scintillation properties, LaBr3:Ce crystals must be grown as uncracked single-crystal ingots. The production and processing of these ingots is a difficult task, so that the largest single-crystal volumes produced so far only yield detector sizes of a maximum of 3x3".

These properties have been reported and verified several times in [vl01], [vl02], [Dev06], [Ilt06], [Mos06], [Qua07], and [Pan09], and special investigations have been done for high-energy γ-rays [Cie09], large-volume LaBr3:Ce crystals [Men07], and in-beam γ-ray spectroscopy with fast ion beams [Wei08]. The scintillator is also useful for timing applications, because the time resolution of LaBr3:Ce is on the order of 300 ps. This has been investigated in [Kum09]. Particle discrimination based on the pulse shape of the scintillation signals has been investigated and shown to be possible for α- and γ-ray induced pulses in [Cre09], even though the differences are minute.

These reports have investigated the scintillation properties of LaBr3:Ce usually in a controlled environment using low count rate radioactive γ-ray or X-ray sources or low-intensity ion beams. High-intensity sources or particle beams have been used so far only to investigate the excellent radiation hardness of LaBr3:Ce. Most susceptible to radiation damage from protons or γ-rays were the additional detector components like the scintillator encapsulation, the light guide, and the photomultiplier, as reported in [Dro08], [Owe07] and [Nor07]. LaBr3:Ce has been discussed briefly in [Ilt06] as a scintillation detector for high count rate γ-ray spectroscopy and compared to NaI. Due to the much shorter decay time constant of the scintillator signal, LaBr3:Ce can be used at much higher count rates than NaI or other scintillators while maintaining an adequate energy resolution.
Within this thesis the mentioned excellent properties of the LaBr3:Ce material are used to investigate the high count rate limit for γ-ray spectroscopy today and how this limit can be pushed even further with future scintillation materials. In this chapter the issue of pile-up events in γ-ray detectors at high count rates is introduced, and the importance of resolving this issue is motivated. Chapter 2 shows some general aspects of the LaBr3:Ce detector and the photomultiplier. Also the background spectrum taken with the detector is discussed. A fast method for correcting pile-up events using digital data acquisition and digital signal processing methods is presented in chapter 3; it can be applied to signals from scintillation detectors. This method is first tested with a number of simulated synthetic signals in order to study the influence of the algorithm parameters on the results. In chapter 4 the measurements for this work and the results are presented. The newly developed data acquisition software is described. Chapter 5 motivates the necessity of an active beam dump at the low-energy photon tagger NEPTUN at the superconducting linear accelerator S-DALINAC [Ric96] at the TU Darmstadt, and why a LaBr3:Ce detector can be used in this case. The design of the detector mount and a matching collimator is discussed and the planned setup is laid out. The last chapter summarizes the main assertions made in this work and enumerates the future prospects. Some ideas for future improvement in each part of this work are given. Appendix A focuses on the computational tools developed in the course of this thesis to enable efficient data acquisition and processing. A short overview of the data acquisition software (GECKO) and the structure of the C++ library SamDSP for digital signal processing (DSP) is given.

1.1 Motivation

Since radioactive decay can happen at immense rates, either because the source activity is extremely high or because of an intense beam at an accelerator facility, the detector system for the decay products must be able to cope with these high rates.

Figure 1.1.: A single digitized signal from a scintillation detector. The finite length l is a characteristic property of the scintillation material.

Figure 1.2.: Example of two piled-up signals. A model pulse (red) is shown on top of the first signal. Since the time difference between the two signals is too small, the tail of the first signal contributes to the amplitude of the second signal. This leads to measurement errors.

If the decay rate is higher than the detection capability of the detector system, the detection efficiency (usually the full energy peak efficiency) of the system decreases. At first glance this circumstance might not seem like a serious issue at all. Indeed, in many cases this issue was not directly addressed, but instead a variety of workarounds has been used.

To understand why a detector system has a limited detection rate one has to introduce the concept of dead time. This is the period of time from the first interaction of the particle with the sensitive medium until the detector has returned to its initial state (this condition may vary for special types of detectors). During this time the detector is usually incapable of a proper response to any additional events. In this thesis dead time always refers to the intrinsic dead time of the detector system, not to dead time introduced by the limited processing capability of the data acquisition system. Depending on the detector system the dead time can be in the range of many ns to a few ms per event, limiting the detection rate to a maximum of a few MHz.

Among the most frequently used detectors for high-resolution γ-ray spectroscopy are high-purity germanium (HPGe) detectors, because of their very good intrinsic resolution of better than 2 keV at 1.33 MeV photon energy. Due to the charge collection mechanism involved, these detectors generate relatively slow signals, which results in longer dead times. As already stated above, this leads to a limited detection rate, which affects a wide variety of experiments in nuclear physics. Also scintillation detectors such as NaI or CsI crystals are commonly used in γ-ray spectroscopy, but are either limited in resolution or produce similarly slow signals. To comply with this limitation, often the beam intensity has to be lowered, which leads to longer measurement times.

In order to overcome or at least soften this dead time limitation one has to take a closer look at the origin of dead time. In most cases the ultimate cause of dead time in detector systems is the fact that each generated signal has a finite length, as can be seen in figure 1.1. This length is determined by a number of factors, most of which depend on the detector material, the detector geometry, and also the front-end electronics. Usually for a given experiment all of these factors should have been considered and optimized to minimize the signal length. Two events occurring in a detector can only be processed if they are separated by at least this signal length, otherwise the signals would start to overlap. This case is shown in figure 1.2, where two ideal signals overlap because their separation in time is smaller than their length.

Figure 1.3.: Left: Probability P(N) of detecting N signals within the time interval τ at a count rate of n, for several values of N. Right: Accumulated probabilities. For count rates so high that nτ = 2, the probability for detecting a single signal drops below 10 %. In this regime pile-up rejection is no longer feasible.

A situation like this is often referred to as pile-up. In γ-ray spectroscopy the most important information conveyed by a detected signal is its representation of the energy that a photon has deposited in the sensitive medium. This information can usually be determined by measuring the collected charge, or, if the shape of the signal is constant, by measuring the amplitude of the signal. In a pile-up situation, this information is not as easy to determine. As the figure shows, neither the amplitude of nor the area under the second signal are directly accessible properties. The attempt to measure these properties regardless of the situation would result in values larger than the true values, which would then contribute to higher energies in an energy spectrum. The energy resolution as well as the efficiency of the detector would be reduced.

If the detector has successfully detected an event, the probability that no additional events occur in a small time period τ after the detected event is given by

P(0) = e^(−nτ),    (1.1)

where n is the actual event rate [Kno00]. If τ is the dead time of the detector system, it is clear that for low event rates almost all events can be detected. The probability for a piled-up event is then rather low, and pile-up events can be identified and suppressed so that they do not contribute to the energy spectrum. This so-called pile-up rejection mechanism maintains the energy resolution of the spectrum by sacrificing detector (i.e. photo peak) efficiency. The probability for piled-up events as a function of the number of signals N occurring in the time period τ is given by

P(N) = (nτ)^N e^(−nτ) / N!    (1.2)

This relation is visualized in figure 1.3, where on the left side the probability P(N) is plotted for several values of N over the product of count rate n and interval size τ. With increasing count rates the probability for piled-up events with N > 0 increases, until the probability for single events (N = 0) vanishes. The right part of the figure shows the accumulated probabilities, and one can see that the fraction of single events drops below 10 % when nτ > 2. The pile-up rejection mechanism is in this case useless, because the efficiency would tend to zero.
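As a numerical illustration of eq. (1.2), the following short C++ sketch evaluates the Poisson probabilities for a few count rates. It is only an illustration of the statistics above; the function name pileupProbability and the assumed dead time of 100 ns are chosen for this example and are not taken from the analysis code of this work.

#include <cmath>
#include <cstdio>

// Poisson probability P(N) = (n*tau)^N * exp(-n*tau) / N!   (cf. eq. 1.2)
double pileupProbability(double rate, double tau, int N) {
    double mu = rate * tau;          // mean number of events within the interval tau
    double p = std::exp(-mu);        // P(0)
    for (int k = 1; k <= N; ++k)
        p *= mu / k;                 // build up the factor mu^N / N! term by term
    return p;
}

int main() {
    const double tau = 100e-9;                        // assumed dead time per event [s] (illustrative)
    const double rates[] = {1e6, 5e6, 10e6, 20e6};    // count rates [1/s]
    for (double n : rates) {
        double p0 = pileupProbability(n, tau, 0);
        std::printf("n = %4.0f MBq: P(0) = %.3f, P(1) = %.3f, P(N>0) = %.3f\n",
                    n / 1e6, p0, pileupProbability(n, tau, 1), 1.0 - p0);
    }
    return 0;
}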
Recently there have been many attempts to address the issue of pile-up with a variety of methods. For large HPGe detectors a method using a time-domain Moving Window Deconvolution has been developed [Geo93], but it is difficult to apply to very short signals, and thus not applicable to scintillation detectors. If the dynamics of the detector signal can be expressed with a mathematical model, then Kalman smoothing can be applied, as is presented in [Bar06] for HPGe detectors. For LaCl3 a different deconvolution method has been developed in which first the detector system response function is determined and then the deconvolution is done in the frequency domain [Yan09]. Also computationally intensive methods such as pulse fitting (e.g. [Bel08]) have already been presented.

This work shows the applicability of a fast digital pile-up reconstruction algorithm, which takes ordinary listmode data (corresponding timestamps and amplitudes) as input and performs the reconstruction using a matrix inversion. The method has originally been developed for pile-up correction of NaI detector pulses [Ven09]. The input listmode data for the algorithm is produced from digitized detector signals.

1.2 Application at NEPTUN

The low-energy photon tagger NEPTUN is designed to produce photons in the range of 6 to 20 MeV with an energy uncertainty of about 25 keV at 10 MeV for nuclear structure and nuclear astrophysics experiments [Sav10].

Figure 1.4.: Schematic drawing of the NEPTUN tagger experimental setup. An incident electron beam creates bremsstrahlung photons in the bremsstrahlung target. The scattered electrons are analysed with a dipole magnet and detected in the segmented focal plane. With a coincidence unit each produced photon can be assigned to a scattered electron, and thus the photon energy can be determined.

The photons are produced via bremsstrahlung in a thin radiator target, and their energy is determined by measuring the energy loss of the scattered electrons with a dipole magnet. Using the coincidence between the detection of the reaction product produced by the bremsstrahlung photon and the scattered electron in the segmented detector array in the focal plane of the tagger, it is possible to assign the energy of the photon. It is necessary to precisely know the energy which is tagged with a specific setting of the magnetic field of the dipole magnet. This can be done with a high-resolution γ-ray detector set directly in-beam behind the experiment, as sketched in figure 1.4. With an HPGe detector the energy can only be determined when the beam intensity is low in comparison to an actual experiment, in order to avoid destroying the detector. This procedure reduces the amount of time available for experiments and requires frequent changes in the geometrical setup. In addition, it does not yield online information about the temporal stability of the energy-position assignment. If instead a LaBr3:Ce scintillation detector is used, then it is possible to maintain a high beam intensity while simultaneously monitoring the tagged beam energy. Measurement of the absolute photon flux might also be possible, if all single events can be resolved by the signal processing unit. These measurements can be done online, while the experiment is in progress, so that low-latency feedback is available. Finally, no unnecessary changes to the setup have to be made during an experiment and no time is lost for calibration measurements.

2 The detector

The 3x3" LaBr3:Ce detector from Saint-Gobain Crystals is a hermetically sealed package containing a cylindrical scintillator crystal (76.2 mm diameter, 76.2 mm length), a light guide, and a Hamamatsu R10233 photomultiplier. The outer shell is made of 0.5 mm aluminum with an additional magnetic shield structure around the photomultiplier dynode area. Two photomultiplier bases from Canberra are available for the detector. The 2007P base contains a shaping pre-amplifier, which amplifies the signals and adds µs-long exponential tails. This behavior is preferred for use with a spectroscopy amplifier at count rates up to 100 kBq. The 2007 base without shaping pre-amplifier on the other hand delivers the immediate signals from the photomultiplier anode and the last dynode without any shaping or additional amplification. Two tuning screws allow attenuation of the anode signal and changing the focus characteristic of the first dynode. This base has been used for all measurements for this thesis, since the main objective was to study the behavior at high count rates.

2.1 The LaBr3:Ce scintillator

Figure 2.1.: The LaBr3:Ce detector signal is shown here for three different amplitudes. The rise time t_rise from 10 % to 90 % of the full pulse height is 20 ns. An exponential fit to the tail of the pulse yields a time constant τ of 25 ns. From these shape parameters the maximum feasible count rate without pile-up correction can be estimated to n_max = 2/(5τ + t_rise) = 5 MBq.

The LaBr3:Ce scintillator is doped with 0.5 % cerium to enhance the light output per deposited energy. The light emission curve of the material peaks in the range of 380 nm [Ilt06], which is a good match for the bialkali window of the photomultiplier. The quantum efficiency of the photocathode is around 25 % for this type of window, which is good enough to allow spectroscopy of low-energy γ-rays and X-rays. In figure 2.1 several scintillator signal shapes are shown for different amplitudes. About one thousand single signals were averaged to yield a mean pulse shape for each amplitude. The rise time t_rise of the signal from 10 % to 90 % of its full height is 20 ns, while the fall time from 100 % to 1/e is 36 ns. An exponential fit to the falling edge results in a decay time constant τ of 25 ns. The directly measured fall time is longer than the decay time constant, because the signal shape is not exponential near the amplitude. Compared to signals from a smaller LaBr3:Ce detector with a size of 0.5x0.5", the signals from this detector are about twice as wide. The rise time for pulses from the small detector has been measured as 10 ns, while the decay time constant was estimated as 16 ns. It has been shown that, in contrast to many other scintillating materials, LaBr3:Ce signals can be characterized by only one decay time [vl02]. The fact that the signals from larger crystals are wider and thus bear less exact timing information has been studied in [Ilt06]. The reason for this behavior is the greater spread in optical path lengths inside the scintillator. From the estimated rise and decay times it is possible to derive a maximum feasible count rate when piled-up signals are rejected. From figure 1.3 the probability for single signals drops below 10 % at a value for nτ of 2. An exponential decay drops below 1 % of its initial value when 5 times the decay time has passed. So the length of a signal as shown in figure 1.1 is the sum of the rise time and 5 times the decay time constant.
This results in:

n_max · (5τ + t_rise) = 2.    (2.1)

Solving this equation for the maximum count rate n_max with the values for τ and t_rise from above yields a maximum count rate of 5 MBq. Beyond this count rate more than 90 % of the data would be dropped due to pile-up rejection. To further increase the count rate, a pile-up correction mechanism can be used.

The Hamamatsu R10233 photomultiplier tube was connected to a 2007 photomultiplier voltage divider base from Canberra and operated at bias voltages from 300 V to a maximum of 1200 V. The 3.5" tube contains 10 amplification stages. The voltage divider has 10 equal stages with a drop of 75 V per 1 kV of bias voltage per stage.

2.2 Background spectrum

Figure 2.2.: The background spectrum as recorded with the LaBr3:Ce detector over 20 h. The number of counts is normalized to 1 cubic centimeter of LaBr3:Ce to allow comparison with literature. A small contribution from a nearby 60Co source is also visible.

Since the scintillation crystals are grown from natural lanthanum, the LaBr3:Ce detector contains 0.09 % of the radioactive isotope 138La [dl05]. Also a small amount of contamination with radioactive 227Ac is present in the scintillator. The amount of this contamination has been reduced in recent production cycles [Ilt06] by two orders of magnitude. The radioactive decay products of these impurities contribute to the natural background spectrum of the LaBr3:Ce scintillator. In figure 2.2 such a spectrum, normalized to a volume of 1 cc of material, is shown. It has been recorded over a duration of about 20 h. Beside the low-energy X-rays, three main features can be clearly seen in this spectrum. 138La decays either to 138Ba by electron capture (66.4 %) or to 138Ce by β-decay (33.6 %) [Ilt06], [Nic07]. From 789 keV up to about 1 MeV, the decay to 138Ce with the additional β continuum is visible. The prominent peak at 1436 keV results from the electron capture branch, where also a coincident 32 keV X-ray is emitted. These two form a double peak structure. The group of peaks at energies between 1.8 MeV and 2.8 MeV are α-decays from the 227Ac decay chain. Additional features in the spectrum can be accounted for by decay products of the 238U and 232Th decay chains. At 1173 keV and 1332 keV small contributions from a 60Co source can be seen. This high-activity source has been kept within a lead enclosure with walls of 15 cm thickness, but in this long-term measurement the impact is visible.

The full scintillator volume of the 3x3" detector is 344 cc. Below, table 2.1 lists the internal activity with values from [Men07], containing additional measurements from the 3x3" detector. As expected, the total activity from internal radioactive decay is larger for larger crystals, but still negligible for high count rate applications. The photo peak efficiency measured from the 1436 keV peak is also the highest for the largest scintillator.

Table 2.1.: Activity in LaBr3:Ce for crystal sizes from 1x1" to 3x3". Listed for each size are the total activity [Bq], the photo peak efficiency, and the expected and measured photo peak count rate [Bq]. Data taken from [Men07]; the data for the 3x3" detector was measured in this work. Expected count rates are from MCNP simulations.


3 Digital Signal Processing and pile-up correction

In this chapter the details are given on how the detector signals were processed to extract digital timestamps and amplitudes. To investigate the impact of the various algorithm parameters, a number of simulations with synthetic signals have been done. Also the pile-up correction method is described, and the performance of the method when applied to simulated signals is investigated. The chapter closes with the discussion of some effects which have a negative influence on the pile-up correction performance.

3.1 Digital timestamping and calorimetry

Figure 3.1.: Left: Bibox FIR filter with duration 2·t_ftw for timing filtering. Right: Boxcar FIR filter with duration t_fcw for calorimetry.

The recorded signals from the sampling ADC must first be processed with digital signal processing techniques in order to extract timing and calorimetry information. Fast digital filtering of time-discrete signals is usually done by applying a time-discrete convolution of the form

(f ∗ g)[n] := Σ_{m=−∞}^{+∞} f[n−m] g[m],    (3.1)

where the function f is convolved with the convolution kernel g. In the context of digital signal processing this convolution is often reduced to so-called finite impulse response (FIR) filtering, where the filter output

y[n] = Σ_{i=0}^{N} h_i x[n−i]    (3.2)

only depends on past values of the input signal x. The filter kernel coefficients h_i describe the weight with which each previous signal value contributes to the current output value. Usually N is much smaller than the number of data points in the input signal. For the course of this thesis the two most important FIR filters are the boxcar filter and the bibox filter shown in figure 3.1. The boxcar filter is a simple integration filter and its output is comparable to a moving average, while the bibox filter is of a bipolar nature and serves as a differentiation filter. When the filters are applied to the input signal, not only is the shape of the signal altered accordingly, but the signal amplitude changes and the signal is delayed in time. The effects of the different filters depend strongly on their widths, because the width of the filter determines how many data points are taken into account for calculating the output value. If a wider filter is used, then the signal is averaged over more data points, which leads to a reduction of noise in the signal. Thus each of these filters also acts as a low-pass filter on the signal.
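To make eq. (3.2) and the two kernels concrete, the following C++ fragment shows a direct, deliberately simple FIR implementation. It is a stand-alone sketch, not the SamDSP code; the names firFilter, boxcarKernel and biboxKernel are chosen only for this example (the bibox orientation shown assumes positive pulses).

#include <vector>
#include <cstddef>

// Generic FIR filter following eq. (3.2): y[n] = sum_i h[i] * x[n-i].
// Samples before the start of the trace are treated as zero.
std::vector<double> firFilter(const std::vector<double>& x, const std::vector<double>& h) {
    std::vector<double> y(x.size(), 0.0);
    for (std::size_t n = 0; n < x.size(); ++n)
        for (std::size_t i = 0; i < h.size() && i <= n; ++i)
            y[n] += h[i] * x[n - i];
    return y;
}

// Boxcar kernel of width w samples: a moving sum, acting as an integrating low-pass filter.
std::vector<double> boxcarKernel(std::size_t w) {
    return std::vector<double>(w, 1.0);
}

// Bibox kernel of total width 2*w samples: +1 followed by -1, acting as a differentiating
// (timing) filter that responds to the rising edge of a pulse.
std::vector<double> biboxKernel(std::size_t w) {
    std::vector<double> h(2 * w, 1.0);
    for (std::size_t i = w; i < 2 * w; ++i) h[i] = -1.0;
    return h;
}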

Triggering

Digital triggering can be done in various ways, but in the frame of this work the simple, yet powerful, method of filtering the signal with a bipolar FIR filter is used. Due to its local nature this method is independent of the value of the baseline. It is also applicable in dense pile-up situations, as opposed to digital CFD (constant fraction discrimination), zero-crossing or LED (leading edge discrimination) methods. The effect of this timing filter is shown in figure 3.2. It is obvious that the resulting signal has a much faster rise time than the original signal, so that a timing measurement on this signal is more precise. The trigger signal is constructed by applying a trigger condition to the timing signal. A simple but fairly accurate trigger condition is to look for local maxima in the signal while the signal is above a certain threshold a_thr.

Figure 3.2.: After filtering with a bibox filter (width 8 ns), the signal (dashed line) is transformed into a much faster timing signal (solid line). With an appropriate trigger condition the trigger signal (black) is generated.

The resulting trigger signal can then be used to measure the signal amplitude. During previous studies, the above mentioned trigger condition was enforced by checking only the two next-neighbor sample points around the maximum. This condition works well with large signals and at moderately high count rates, when most of the signals are well separated. However, in case of small signals or during unusually large statistical fluctuations, this condition failed to give the correct result at all times. False triggers were found on the rising edge of a signal or even in between signals, which resulted in incorrectly measured timestamps and amplitudes. Trigger quality could be improved by also taking into account the next-to-next-neighbor sample points, so that a 5-point check is done every time. This improvement led to fewer false triggers, and thus improved the overall trigger quality.

Digital calorimetry

In digital calorimetry of very fast signals it is important to integrate the signal over a certain range in order to measure a stable amplitude. If instead simply the peak point of the signal is used, the resulting amplitude is more influenced by noise-dependent fluctuations. These fluctuations can be reduced to a minimum if the signal is filtered with an integrating boxcar filter. The use of this filter integrates the signal and thus reduces the high-frequency noise component, as can be seen in figure 3.3. It is possible that signals with a very small distance in time become indistinguishable after the filter has been applied.

Figure 3.3.: After filtering with an integrating box filter (width 80 ns), the signal (dashed line) is transformed into a calorimetry signal (solid line) with a stable amplitude.

As stated before, the produced calorimetry signal is also delayed in time by a constant value. In order to measure the amplitudes of the signals at the trigger positions this delay has to be compensated. This is done by simply applying each filter to a single signal and measuring the corresponding time shifts. The trigger signal and the calorimetry signal can be realigned and the signal amplitudes can be measured at the trigger positions. Of course these measured amplitudes are only correct if none of the signals overlap. In case of pile-up the measured amplitudes have to be corrected.
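The 5-point trigger check and the delayed amplitude pickup described above can be sketched as follows. This is a simplified, self-contained illustration assuming positive pulses and sample-indexed traces; the names findTriggers and pickAmplitudes are invented for this example and do not reproduce the actual SamDSP/GECKO implementation.

#include <vector>
#include <cstddef>

// 5-point local-maximum trigger on the timing-filtered signal: a sample triggers
// if it is above the threshold and not exceeded by its two neighbours on either side.
std::vector<std::size_t> findTriggers(const std::vector<double>& timing, double threshold) {
    std::vector<std::size_t> triggers;
    for (std::size_t n = 2; n + 2 < timing.size(); ++n) {
        if (timing[n] < threshold) continue;
        if (timing[n] >= timing[n - 2] && timing[n] >= timing[n - 1] &&
            timing[n] >  timing[n + 1] && timing[n] >  timing[n + 2])
            triggers.push_back(n);
    }
    return triggers;
}

// Pick the (uncorrected) amplitude of each signal from the calorimetry-filtered trace,
// compensating the constant relative delay between the timing and the calorimetry filter.
std::vector<double> pickAmplitudes(const std::vector<double>& calorimetry,
                                   const std::vector<std::size_t>& triggers,
                                   std::size_t relativeDelay) {
    std::vector<double> amplitudes;
    for (std::size_t t : triggers) {
        std::size_t idx = t + relativeDelay;
        amplitudes.push_back(idx < calorimetry.size() ? calorimetry[idx] : 0.0);
    }
    return amplitudes;
}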

Figure 3.4.: Example of a pile-up situation. Amplitudes (black) have been measured from the calorimetry-filtered signal (solid line).

Figure 3.5.: After pile-up correction the measured amplitudes b_i are corrected to yield the true amplitudes a_i (black). It is clear that the superposition of the model pulse shapes (thin lines) results in the filtered signal (thick line).

3.2 Pile-Up Correction Method

Whenever two or more pulses occur close together in time and overlap partially, the measured amplitudes no longer reflect the true amplitudes of the constituent signals, since their integrals overlap. This situation is demonstrated in figure 3.4, where as an example a group of five signals is shown. It is clear that the measured amplitudes b_i do not correspond to the true amplitudes. The pile-up correction method is based on the fact that the piled-up trace is a superposition of all signal pulses, and thus also the measured amplitudes can be expressed as a linear combination of the true amplitudes of the constituent signals:

b_i = Σ_j m_ij a_j.    (3.3)

Of course this description is only valid if the pulse shape is constant and can be described by a time-discrete model pulse shape m(t). The coefficients m_ij depend on the time difference of the i-th pulse to all neighboring pulses and express the relative contribution of the j-th pulse to the i-th pulse. At this point it is already obvious that precise timing is mandatory for the actual process of resolving piled-up events. Small changes in the timestamp can already lead to significant changes in the contribution coefficient. Also, if the trigger cannot distinguish between two consecutive signals, then the pile-up correction algorithm acts as if only one signal is present and the amplitude is not corrected. As will be discussed in the next section, this is especially noticeable for small signals which are very close to large signals.

The pile-up correction algorithm takes as input two vectors which contain the timestamps t_i and the measured amplitudes b_i. The third and most important input for the algorithm is an accurate model pulse shape m(t) of the signals which are to be measured. This model pulse shape can either be a numerical approximation of the real signal, or an average signal derived from measured signals at a low count rate setting. In any case the model pulse shape must be normalized with respect to the amplitude, so that the peak data point has the value 1.0. From this model pulse shape the contribution coefficients m_ij are determined. Considering again the example from figure 3.4, the amplitude of the rightmost signal (b_5) is augmented by contributions of the two earlier signals. The relative amount of these contributions can be determined by superimposing the model pulse shape m at the timestamp of each contributing signal (here t_3, t_4) and measuring its value at timestamp t_5.

So in general m_ij is given by the value of the model pulse shape at the time t_max + t_i − t_j, where t_max is the time of the peak point in the model pulse shape. Even though it is possible to take into account the contributions of all other signals, it is advised to restrict the number of neighboring signals considered to a small integer in order to keep the computational effort low. To estimate a reasonable number of neighbors for a given count rate, the left part of figure 1.3 can be used. If all coefficients have been determined, they can be combined into a square matrix M, and eq. 3.3 can be written as a set of linear equations:

b = M a.    (3.4)

The inverse of M can be calculated, but it has to be taken into account that M might be singular. Singular matrices are not invertible, but matrix inversion is necessary to solve eq. 3.4. In those cases the matrix inversion is done by calculating the pseudoinverse with the use of a singular value decomposition. With the inverse of M, eq. 3.4 becomes

a = M⁻¹ b,    (3.5)

and the true amplitudes a_i can be calculated simply by multiplying the measured amplitudes b_i with M⁻¹. The measured amplitudes are corrected and the trace is properly corrected for pile-up, as can be seen in figure 3.5.

A number of free parameters control the behavior of the algorithm and the quality of the results. These parameters are the trigger filter width t_ftw, the trigger threshold a_thr, the calorimetry filter width t_fcw, and the number of signals N_s which are taken into account for each matrix inversion. This number also defines the dimension D = 2N_s + 1 of the matrix. The free parameters always have to be chosen in an optimal range to yield the best results in any possible pile-up situation.
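The correction step of eqs. (3.3)–(3.5) can be condensed into a short sketch. The fragment below builds M from a normalized model pulse shape and solves for the true amplitudes; the names correctPileUp and modelValue are illustrative, Eigen is used here as a stand-in linear algebra library, and its rank-revealing decomposition plays the role of the SVD-based pseudoinverse. It is a sketch of the technique, not a reproduction of the analysis code of this work.

#include <Eigen/Dense>
#include <vector>
#include <cstddef>

// Value of the normalized model pulse shape a given number of samples away from
// its own trigger position; zero outside the stored model.
double modelValue(const std::vector<double>& model, std::size_t peakIndex, long offset) {
    long idx = static_cast<long>(peakIndex) + offset;
    if (idx < 0 || idx >= static_cast<long>(model.size())) return 0.0;
    return model[idx];
}

// Pile-up correction for one group of overlapping signals (eqs. 3.3 - 3.5):
// b_i = sum_j m_ij a_j  with  m_ij = m(t_max + t_i - t_j).
std::vector<double> correctPileUp(const std::vector<long>& timestamps,   // trigger times [samples]
                                  const std::vector<double>& measured,   // measured amplitudes b_i
                                  const std::vector<double>& model,      // normalized model pulse
                                  std::size_t peakIndex)                 // sample index of the model maximum
{
    const std::size_t n = timestamps.size();
    Eigen::MatrixXd M(n, n);
    Eigen::VectorXd b(n);
    for (std::size_t i = 0; i < n; ++i) {
        b(i) = measured[i];
        for (std::size_t j = 0; j < n; ++j)
            M(i, j) = modelValue(model, peakIndex, timestamps[i] - timestamps[j]);
    }
    // Least-squares / pseudoinverse-style solution; tolerates (nearly) singular M.
    Eigen::VectorXd a = M.completeOrthogonalDecomposition().solve(b);
    return std::vector<double>(a.data(), a.data() + n);
}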
3.3 Application to synthetic data

In the following section the behavior and quality of the trigger generation and the pile-up correction algorithm are investigated under variation of the free parameters and the pile-up situation. This is done using simulated signals, which are carefully modeled after the shape of real pulses and then combined into test cases. The advantage of this method is that the result of the analysis is already known and can be used to quantify the accuracy of the actual results. It is also possible to systematically create a range of test situations from a set of free parameters, which would not be possible in a real test setup.

3.3.1 Composition of realistic signals

At first it is important to mimic the shape of the expected detector signal as closely as possible, to maximize the applicability of the results. This can either be done by averaging over some recorded real signals, or by constructing the signal from components. The latter method has the advantage that the effect of each component can be investigated separately. In [Ben74] a formula is given to reproduce a scintillator pulse shape using a convolution of a Gaussian with an exponential decay. This description makes the assumption that effects of the statistical scintillation process are negligible for large signals, and that all other statistical processes like electron emission on the photocathode and production of secondary electrons in the photomultiplier can be described by the convolution with a Gaussian.

For the investigation of the best choice of parameters the simulated signals have been constructed in the following way: The first approximation of the scintillation signal is a simple exponential decay with the decay constant τ of the scintillator, shown in the left pane of figure 3.6. The decay constant is in this case τ = 16 ns. For detector signals which have a very short rise time compared to the decay constant this would already be a satisfying model. However, for LaBr3:Ce signals this is not the case. To mimic the statistical processes an FIR filter is applied. This adds a finite rise time to the signal. The middle pane of figure 3.6 shows this effect for two different filters in comparison to an actual detector signal. Displayed with a dashed line is the boxcar filter, which broadens the peak a lot. In order to resemble the actual signal more closely, a ramp filter (solid line) is used to produce a narrower peak. This was done as an approximation to [Ben78], where a clipped Gaussian is used as a filter to simulate ultra-fast scintillator signals. The resulting rise time after filtering is about 8 ns. Please note that this value for the rise time was chosen to simulate signals matching those from a small 0.5x0.5" LaBr3:Ce scintillator. As stated above, larger crystals also exhibit longer rise times. Several sources of noise (electronic, statistical fluctuations) are combined into one white noise signal, which is superimposed on the filtered signal. The resulting signal is shown in the right pane of figure 3.6 and it seems valid to use the constructed signal as a good model pulse. The signal-to-noise ratio (SNR) is maintained as a free parameter for creating the simulated signals. In scintillators, and especially in lanthanum halide scintillators with high light yield, the fluctuations imposed by statistical processes on the photocathode or the photomultiplier dynodes are the largest contributors to the uncertainty of the signal shape.
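A minimal sketch of this construction (exponential decay, ramp filter for the finite rise time, added white noise) is given below. The function name syntheticPulse, the direction of the ramp weights and the parameter handling are illustrative assumptions for this example, not the simulation code of the thesis.

#include <cmath>
#include <random>
#include <vector>
#include <cstddef>

// Build a synthetic scintillator pulse: exponential decay with time constant tau (in samples),
// convolved with a ramp filter to add a finite rise time, plus superimposed white noise.
std::vector<double> syntheticPulse(std::size_t length, double tau,
                                   std::size_t rampWidth, double noiseSigma, unsigned seed) {
    // 1) ideal exponential decay
    std::vector<double> pulse(length, 0.0);
    for (std::size_t n = 0; n < length; ++n)
        pulse[n] = std::exp(-static_cast<double>(n) / tau);

    // 2) ramp filter (linearly increasing weights), normalized to unit sum
    std::vector<double> ramp(rampWidth);
    double norm = 0.0;
    for (std::size_t i = 0; i < rampWidth; ++i) { ramp[i] = i + 1.0; norm += ramp[i]; }
    std::vector<double> filtered(length, 0.0);
    for (std::size_t n = 0; n < length; ++n)
        for (std::size_t i = 0; i < rampWidth && i <= n; ++i)
            filtered[n] += ramp[i] / norm * pulse[n - i];

    // 3) white noise on top of the filtered signal
    std::mt19937 rng(seed);
    std::normal_distribution<double> noise(0.0, noiseSigma);
    for (double& s : filtered) s += noise(rng);
    return filtered;
}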

Figure 3.6.: Composition of the simulated signal. Left: Simple exponential pulse with time constant τ = 16 ns. Middle: Dashed (solid) line: exponential signal filtered with a boxcar (ramp) filter. Right: Filtered signal with additional noise.

Figure 3.7.: Scintillation photon statistics. Left: Filtered pulses generated from different numbers of photons, starting at 10^1. Middle: Single signal generated from 10 photons with 1σ and 2σ uncertainty intervals. Right: Same as in the middle, but with 100 photons. With increasing photon count the signal uncertainties decrease, while at the start of a signal the uncertainty is always at a minimum.

Further contributions are transfer variance due to imperfect coupling between scintillator and photodetector, inhomogeneity of the photocathode and imperfect focusing on the first dynode [Dev06]. For very small signals it might be convenient to also study the effect of scintillation photon statistics. Because of the small number of photons the simple exponential decay is no longer a valid model. Instead, a base model is constructed by randomly generating photons and distributing them along the signal in an exponentially decaying fashion. This leads to the shapes shown in the left part of figure 3.7, which are made up of different numbers of photons N. It is obvious that for large N the signal approaches the exponential shape again. The use of photon statistics as the underlying model introduces a second source of uncertainty besides the noise amplitude. The magnitude of the uncertainty is of course correlated with the number of scintillation photons detected and thus the pulse amplitude, as is shown in the right part of figure 3.7. At the beginning of the pulse the uncertainty is minimal, but then increases to rather large values at the maximum and on the falling edge.

3.3.2 Influence of algorithm parameters

To determine the optimal algorithm parameters a two-step parameter scan has been conducted. During the first step the parameters governing triggering and timing (trigger filter width t_ftw and trigger threshold a_thr) have been investigated, while the second step covered the parameter for pile-up correction (calorimetry filter width t_fcw). For each parameter set a test doublet of two signals was simulated.

Figure 3.8.: Simulated signal doublet with free parameters. Both amplitudes (a_1 and a_2), the signal distance Δt, and the noise level can be varied to test the algorithm.

Figure 3.8 shows this situation with the free parameters (amplitudes a_1 and a_2, time difference Δt and noise level a_n). The trigger performance and the effectiveness of the pile-up correction algorithm were then investigated under variation of the test situation. To also get an idea of the statistics for each situation, each single parameter setting was benchmarked a number of times. The following parameter scans have been carried out:

Scan Nr | Amplitude 1 [%] | Amplitude 2 [%] | t_ftw [ns] | t_fcw [ns] | Δt [ns] | SNR    | a_thr [%] | N
1       | [20,100]        | 100             | [1,30]     | 1          | [0,60]  | [20,4] | [0.1,0.5] | 20
2       | [20,100]        | 100             | [1,30]     | 1          | [0,30]  | [4,2]  | [0.1,0.5] |
3       | [20,100]        |                 | [10,30]    | 1          | [0,30]  | [20,2] | [0.1,0.2] |
4       | [20,80]         |                 | 8          | [1,30]     | [10,59] | [20,2] |           |
5       | [20,100]        |                 |            | [1,30]     | [31,59] | [20,2] |           |

Table 3.1.: List of parameter scans. Expressions in square brackets are inclusive ranges. N is the number of iterations carried out for each setting. The signal-to-noise ratio (SNR) is only given for the smaller amplitude.

Figure 3.9.: Density plot of a parameter scan for a large signal following a small one with an amplitude ratio of a_1/a_2 = 0.2. From left to right the distance between the signals Δt is increasing. Each column in the density plot shows the probability of a trigger at a certain time step (vertical axis, zoomed in). The corresponding filter width t_ftw is drawn along the horizontal axis. The upward slope is due to the shift introduced by the filter.

Figure 3.10.: Results of the parameter scan with a fixed value of Δt = 16 ns. The noise level is varied to yield a decreasing signal-to-noise ratio (SNR) from left to right. The SNR for the smaller signal is given in brackets.

Dependence on t_ftw

Because of the multitude of possible parameter combinations only a few of them will be discussed in the following. Figure 3.9 shows trigger scan results for the case that a large signal follows a small signal (a_1/a_2 = 0.2) for several time difference settings (Δt = 5, 16, 22 ns). The resulting data sets of scans 1 and 2 have been used. They contain trigger timing histograms for each parameter setting. These histograms are combined into the two-dimensional density plots shown in figure 3.9 and the following figures. Along the horizontal axis the width of the timing filter (t_ftw) is varied, while the vertical axis shows the determined trigger time. Each point in this plot represents one point in each trigger timing histogram. The number of times a certain trigger time has been calculated for each value of t_ftw is color-coded with different shades of gray. On top of each density plot the respective simulated signal is shown. Obviously the trigger can distinguish both pulses only when they are a certain distance apart. The smallest distance for this case has been determined to be 16 ns, but for greater distances the trigger timing is more precise. The best setting of t_ftw in this case is between 6 ns and 10 ns, because in this range two signals can be found. The figure also shows that, if the filter width is chosen too large, the second signal cannot be found either. Thus it is advisable to choose t_ftw as low as possible, while still avoiding false triggering on noise for very low settings.

In figure 3.10 the signal distance is kept constant at Δt = 16 ns, but the signal-to-noise ratio (SNR) within the signals is varied. As expected, trigger precision is worse for higher noise levels. This degradation cannot be recovered by increasing the filter width, because then the smaller signal cannot be found. For the detection of very small amplitudes it is thus very important to keep the noise level on the recorded signals at a minimum. In figure 3.11 the ratio of the two amplitudes is varied while keeping noise level and distance constant at SNR = 25 for the larger amplitude and Δt = 22 ns. It can be seen that for a wider trigger filter width the trigger behavior is rather unpredictable and the trigger might not even find both signals. This is another reason to keep the filter width at a reasonable minimum. For the rest of the analysis and the determination of further parameters the filter width is fixed to 8 ns. This is about the length of the signal rise time.

Dependence on t_fcw

To determine an estimate for the optimum value of the calorimetry filter width t_fcw, scans 4 and 5 have been performed. The filtered pulses have been fed into the pile-up correction algorithm. The reconstructed amplitudes are shown in figure 3.12 for various distances Δt and a fixed amplitude ratio of 0.8 as well as a fixed SNR of 40. On the horizontal axis the calorimetry filter width is varied. Only small differences can be seen for the different distances, but as expected the amplitudes are better defined for greater distances. As for the timing filter width, it can be seen that for small time differences a smaller filter width is favorable, while for greater distances a wider filter width yields better results.
This is due to the fact that larger integration intervals usually create a more stable amplitude. It is therefore possible to find an optimum value for the filter width somewhere in the middle. With increasing noise level the amplitudes are less well defined, but for greater time distances the results tend to be better, as is shown in figure 3.13. In figure 3.14 the influence of the amplitude ratio is investigated, and it is again clear that for values of the filter width between 10 and 20 ns the results show the sharpest distribution for all ratios. To sum up, the best value for the calorimetry filter width depends on the distance of the following signal, but it should be chosen large enough to create a stable amplitude. For most cases a value of about 20 ns (2.5 times the rise time) is a good choice.

Figure 3.11.: Results of the parameter scan for varying amplitude ratio. From left to right the ratio increases while all other parameters are fixed. For large filter widths the behavior becomes unpredictable.

Figure 3.12.: Results of the parameter scan for various time differences. From left to right the difference increases. The best results can be found for medium values of t_fcw around 20 ns.

3.3.3 Effectiveness of pile-up correction

Now the effectiveness of the pile-up correction algorithm is tested. To keep this section short, only the most difficult cases are shown here. These include extreme amplitude ratios, small SNRs and very small time distances. For the following discussion the filter widths are set to fixed values (t_ftw = 8 ns, t_fcw = 22 ns). Figure 3.15 shows three different pile-up situations, once without pile-up correction in the upper part and with pile-up correction applied in the lower part. The first case is the setup with a moderate amplitude ratio of 0.8 and also low noise. Without pile-up correction the first signal amplitude can be determined quite well to 0.8, but the second amplitude is strongly overestimated up to time differences of 50 ns. This would lead to serious pile-up artifacts in a spectrum. If instead the pile-up correction is applied to the amplitudes, then from time differences of about 20 ns on both amplitudes are correctly determined. For very small time differences it is still the case that the sum of both amplitudes is determined. If spectroscopy is done on real data, then a threshold should be imposed to reject those signals.

Figure 3.13.: Results of the parameter scan for a larger noise level compared to figure 3.12. At this setting it is difficult to say which filter width t_fcw is favorable.

Figure 3.14.: Results of the parameter scan for various amplitude ratios. For all shown ratios a filter width t_fcw between 10 and 20 ns yields the best results.

The second case (middle pane of figure 3.15) demonstrates the pile-up correction for a signal doublet with a noise level ten times higher. The black region in the lower part of the plots is due to incorrect triggers on noise, which lead to the assumption of ghost signals. These ghost signals can be the cause of serious distortion of the corrected amplitudes, as is discussed later on. In the situation at hand there are no pulses with a comparably low amplitude, so the incorrect triggers are of lesser importance. If the amplitudes are only measured, then again the amplitude of the second signal is overestimated, and it is not as precisely determined. After pile-up correction the amplitudes are correctly restored with a much smaller uncertainty than before. On the right the case is shown where not only the noise level is high, but also the amplitude ratio is only 0.2, so the amount of information available is in this case at a minimum. This results in the amplitude of the first signal getting lost in the trigger noise. Note that the mean distortion of the second amplitude is not as high as in the previous cases, but this is only due to the fact that the first signal is much smaller in comparison. After pile-up correction the amplitudes are again restored and it is even possible to distinguish the amplitude of the first signal from the trigger noise. It is however not possible to cleanly restore the amplitudes of signals with a time difference well below 20 ns. Still, the transition region between measuring the sum amplitude and the correct single amplitudes is rather narrow. To evaluate the amount of uncertainty introduced solely by the noise component, dashed lines are shown in each plot.

Figure 3.15.: Scan results for measured (pile-up corrected) amplitudes are shown in the upper (lower) half, plotted over the distance of the two signals. The dashed lines indicate the 1σ interval if only one signal is present. Thus most of the uncertainty is due to noise and not a weakness of the algorithm.

They indicate the 1σ interval around the mean value of the determined amplitude if only a single signal is present. It is clear that in the pile-up corrected cases the uncertainty is mostly due to the additional noise component.

Simulated traces

In the next step long traces are simulated with randomly distributed model pulses. Each trace is 10^7 samples long, which corresponds to 10^−2 seconds. Using the previously determined optimum values for the filter widths, the pile-up correction algorithm is applied to the simulated traces. The results can be seen in figure 3.16, where the measured amplitude spectrum is compared to the pile-up corrected spectrum and a spectrum simulated at low event rates. Several cases are shown.

At first the event rate was varied between 1, 10 and 50 MBq, corresponding to about 1/(60τ), 1/(6τ) and 1/τ. In the low-rate spectrum the peak at amplitude 1 (full energy, marked with 1 in figure 3.16) can be seen very clearly, and also the peak at twice the amplitude (marked with 2) is visible. Pile-up artifacts have already been reduced on the right tail of the full energy peak. At the very low end of the spectrum a pedestal due to incorrect triggering on noise (marked with 3) is present. The full energy peak efficiency (relative to the actual number of signals in the input trace) has been determined by counting the events within a 5 % interval around the normalized amplitude. In the uncorrected spectrum this interval contains 90.2(7) % of all events, while in the spectrum after pile-up correction the interval contains 95.8(7) %. The absolute trigger efficiency (i.e. the ratio of the number of determined triggers to the number of simulated signals) is 99 %.

At 10 MBq the difference between the corrected and uncorrected spectra is more noticeable. Pile-up artifacts in the uncorrected spectrum between the full energy peak and the sum peak have increased a lot and also second-order pile-up effects beyond the sum peak are visible. With pile-up correction these artifacts are reduced by at least a factor of three for direct pile-up and a factor of two for second-order effects. Please note that additional artifacts in the region just below the sum peak (marked with 4) cannot be corrected. This is due to the fact that signals which are too close together cannot be separated with the trigger mechanism of the analysis.

Figure 3.16.: Spectra for simulated traces (log scale). The blue (red) spectrum shows the measured (pile-up corrected) amplitudes. The insets display portions of the simulated trace. Top: simulated event rate of 1 MBq; the spectrum shows the full energy peak at 1 and a summation peak at 2. Middle: 10 MBq; 3 marks the pedestal originating from triggering on noise, 4 marks artifacts from limited trigger resolution. Bottom: 50 MBq.

This is a fundamental limitation introduced by the trigger mechanism and cannot be addressed by improving the pile-up correction algorithm. Further evidence for this is that in a spectrum generated without a trigger mechanism, but instead using the original time stamps of the event generator, these artifacts do not show up. This is shown in the upper half of figure 3.17. The full energy peak efficiency within a 5 % interval is 45.2(2) % for the uncorrected and 63.6(2) % for the corrected spectrum, while the absolute trigger efficiency is 86 %.

At a rate of 50 MBq (Fig. 3.16) the uncorrected spectrum completely lacks a defined full energy peak and shows almost no structure. After pile-up correction the full energy peak as well as the sum peaks at twice and three times the full amplitude can be restored. To the right side of the full energy peak a small artifact from the summation of incorrect triggers and correct triggers is visible. The full energy peak efficiency within a 5 % interval is 4.2(1) % for the uncorrected and 2.4(1) % for the corrected spectrum, while the absolute trigger efficiency is 35 %. If again the original timestamps are used, then all artifacts vanish from the pile-up corrected spectrum (Fig. 3.17, bottom).

As a second test for the algorithm, simulated traces were produced for different noise levels corresponding to SNR values of 50, 25, 12, 6, and 3. These traces have been simulated with an event rate of 10 MBq. The resulting spectra show a nice reconstruction of the full energy peak and the sum peak for SNR values of 50, 25, 12, and 6 (Fig. 3.18, top). Below this value the noise is too high for the trigger threshold and incorrect triggers distort the spectrum. A higher trigger threshold setting helps to reduce this effect, so that even very small signals can still be resolved (Fig. 3.18, middle). If the original time stamps are used, the situation is again improved and no artifacts are visible anymore (Fig. 3.18, bottom).

Figure 3.17.: Spectra for simulated traces. In blue (red) spectrum with FIR trigger (original timestamps). Top: Simulated event rate at 10 MBq. Bottom: 50 MBq, for details refer to the text.

Influence of false triggers

As already mentioned, trigger performance is one of the main quality factors. If every trigger denotes exactly the time at which a signal has occurred and the signal pulse shape is known, then finding the correct amplitude of each signal is straightforward. If, however, the trigger algorithm misses triggers, maybe because the trigger threshold was set too high, then these signals will not be taken into account. Efficiency loss and probably incorrectly determined amplitudes of neighboring signals are the result. If on the other hand extra triggers are generated, because of signal noise, high frequency glitches, or just because the trigger algorithm is inaccurately configured, then the outcome may be even worse.

The behavior of the pile-up correction algorithm was tested with the test setup depicted in figure 3.19. A simulated trace (blue, dashed) made up from two true signals was generated. The triggers are indicated by the vertical black lines. The two triggers for each signal are drawn at the position of their corresponding signal amplitude. An additional false trigger is set in between the two true triggers (t_Fake) in order to disturb the pile-up correction method with the presence of a ghost signal. The measured pulse amplitudes are not affected by the additional trigger point, which is why triggering on false triggers does not have a very severe effect when no pile-up reconstruction is done. Since the result of the pile-up correction algorithm depends strongly on the accuracy of the determined trigger points, the additional false trigger has a strong influence on the reconstructed amplitudes (red). The severity of this distortion is a function of both the distance of the two actual signals and the relative position of the false trigger.

This correlation has been studied for a range of time distances Δt between the two true trigger points, while also varying the position of the fake trigger. The results are shown in figure 3.20 for three different time distances. The reconstruction error relative to the actual amplitude pulse height is shown over the distance of the false trigger (Δt_Fake = t_0 − t_Fake) to the first trigger point (t_0) for both reconstructed amplitudes. Three important observations can be made. The magnitude of the reconstruction error decreases when the two pulses are further apart, because the magnitude of the distortion is correlated with the value of the model pulse shape. The impact on the second amplitude is generally smaller and zero for large Δt. The reason for this behavior is that the measured amplitude, and thus the magnitude of the ghost signal, at the position of the fake trigger is smaller. The impact is at a maximum whenever the fake trigger is close to a true one. This is the effect of a large measured amplitude for the ghost signal, so that a large disturbance is generated. In this case pile-up correction can even lead to negative amplitudes. Although not shown in the figure, the amplitude at the position of the fake trigger after pile-up correction shows a very similar distortion pattern.

Trigger offset due to Pile-Up

Since pile-up leads to amplitude errors, it is possible that the timing information determined from pile-up events is affected as well.
With the help of an analogy this becomes very clear: imagine stacking file folders on top of each other, always with their backs facing in one direction. Eventually the stack will topple over, because the slope of the topmost folders is too steep. The slope of their backs has also decreased with every added folder. The same change happens with stacked signals. Since the rising edge of a signal is not infinitely sharp, the slope of the rising edge of a closely following signal is altered. This effect is larger the closer the second signal follows. In the same manner the estimated trigger time also changes slightly for the stacked signal. This change is not noticeable for moderate distances, but gets more severe for higher count rates. So far no compensation for this offset has been done, mostly because it is a small effect. With higher timing resolution the impact should be more noticeable.

Figure 3.18.: Spectra for simulated traces. Top: Spectra with different simulated signal to noise ratios. High SNR spectra (low noise) are drawn with thin lines. Low SNR spectra (greater noise) with thick lines. Middle: High SNR spectra gained with a higher trigger threshold setting. Bottom: High SNR spectra with triggering on original timestamps.

Summary

Digital signal processing must be done to extract timing and calorimetric information from the signal traces. The most important and most difficult task is to determine as many trigger timestamps as accurately as possible. Too few triggers result in lost efficiency, while too many triggers result in ghost signals, which are a cause for incorrect amplitudes. The simulations have shown that the timing filter should be about as wide as the rising edge of the signal, while the calorimetry filter should be as wide as possible to yield the best resolution. A width of several times the signal rise time is advised.

Pile-up correction using the matrix inversion algorithm works very well, but the results depend very strongly on the trigger mechanism. If signals are present, but no trigger pulse is generated, then the algorithm does not have the correct input data and consequently produces incorrect output.
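The amplitude unfolding by matrix inversion referred to above can be sketched as follows. This is a simplified, one-sided version in which only preceding pulses contribute to a measured amplitude, whereas the analysis in this work also includes the following neighbors; the model pulse is assumed to be normalized to unit amplitude at the sampling point, and all names are illustrative.

```cpp
#include <vector>

// Measured amplitude at trigger i = sum over neighbouring pulses j of the
// true amplitude a[j] weighted with the model pulse shape evaluated at the
// time difference.  Solving S * a = measured recovers the true amplitudes.
std::vector<double> unfoldAmplitudes(const std::vector<double>& triggerTimes,
                                     const std::vector<double>& measured,
                                     const std::vector<double>& model)
{
    const std::size_t n = triggerTimes.size();
    std::vector<std::vector<double>> S(n, std::vector<double>(n, 0.0));
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j) {
            const double dt = triggerTimes[i] - triggerTimes[j];
            if (dt >= 0.0 && dt < model.size())           // pulse j still present at trigger i
                S[i][j] = model[static_cast<std::size_t>(dt)];
        }
    // Gaussian elimination without pivoting; sufficient for the small,
    // diagonally dominant systems appearing here (S[i][i] == model[0] == 1).
    std::vector<double> a = measured;
    for (std::size_t k = 0; k < n; ++k)
        for (std::size_t i = k + 1; i < n; ++i) {
            const double f = S[i][k] / S[k][k];
            for (std::size_t j = k; j < n; ++j) S[i][j] -= f * S[k][j];
            a[i] -= f * a[k];
        }
    for (std::size_t i = n; i-- > 0;) {
        for (std::size_t j = i + 1; j < n; ++j) a[i] -= S[i][j] * a[j];
        a[i] /= S[i][i];
    }
    return a;   // pile-up corrected amplitudes
}
```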

Figure 3.19.: To test the influence of false triggers on the pile-up correction algorithm, a test situation like this one was used. A simulated trace (blue, dashed) made up from two true signals was generated. The triggers are indicated by the vertical black lines. In the displayed case the false trigger is set in between the two true triggers. As can be seen, the measured pulse amplitudes are not affected by the additional trigger point. Since the result of the pile-up correction algorithm depends on the accuracy of the determined trigger points, the additional false trigger has a strong influence on the reconstructed amplitudes (red). The severity of this distortion is a function of both the distance of the two actual signals and the relative position of the false trigger.

Figure 3.20.: The reconstruction error relative to the actual amplitude pulse height is shown over the distance of the false trigger (Δt_Fake = t_0 − t_Fake) to the first trigger point (t_0) for both reconstructed amplitudes (amplitude of the first pulse with blue plusses, amplitude of the second pulse with red crosses). The time distance (Δt) between the two true trigger points increases from left to right.

A number of possible solutions are applicable and might help in this case of missed triggers. One approach would be to use pulse shape analysis to determine the rise time of each triggered pulse. If the rise time turns out to be too large, then the signal can be discarded, because a longer rise time indicates pile-up or some other cause of signal distortion. Another solution would be to use several different trigger mechanisms, each one specially crafted to find either very small pulses or pulses which are very close together. A third approach would be an adaptive trigger mechanism, which would apply a simple forward deconvolution of the trace. The trigger would need knowledge of the pulse shape and could then subtract a pulse from the signal once it is found. This approach would also eliminate the problem of finding the baseline, because it would always stay at zero. Solution number two was already used for testing purposes, but did not show significantly better results. When trying to extract pieces of the trace for baseline correction, it is advisable to use a separate trigger with a very low threshold, because even slight distortions can disturb the baseline measurement. The third approach was tested, but turned out to be numerically unstable.
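As an illustration of the first approach, a rise-time check could look like the following sketch; the 10 %/90 % levels, the tolerance factor and the function name are assumptions made for this example and are not part of the analysis described here.

```cpp
#include <cstddef>
#include <vector>

// Flag a triggered pulse whose 10 %-90 % rise time is significantly longer
// than that of the model pulse, which hints at unresolved pile-up on the
// rising edge.
bool riseTimeSuspicious(const std::vector<double>& trace, std::size_t peakPos,
                        double amplitude, double modelRiseSamples,
                        double tolerance = 1.5)
{
    std::size_t i = peakPos;
    while (i > 0 && trace[i] >= 0.9 * amplitude) --i;   // 90 % crossing
    const std::size_t t90 = i;
    while (i > 0 && trace[i] >= 0.1 * amplitude) --i;   // 10 % crossing
    if (i == 0) return true;        // never came down to 10 %: suspicious
    const double rise = static_cast<double>(t90 - i);
    return rise > tolerance * modelRiseSamples;
}
```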


4 Measurements

In this chapter the experimental results are presented. First the experimental setup for most of the measurements is described. Then preparative measurements are shown, from which parameter ranges for the following linearity, resolution, and efficiency measurements are derived. The pile-up correction algorithm is applied to the data, and the effect on the resulting spectra is shown. A number of checks on the quality and an evaluation of the applicability of the algorithm for this detector is done. Afterwards the resolution and efficiency measurements for a range of count rates are presented. Finally, the data acquisition hardware and software is described in detail. A discussion of the findings concludes this chapter.

4.1 Experimental setup

Most of the measurements have been conducted with the experimental setup shown in figure 4.1. The LaBr 3 :Ce detector was placed at a distance d from one of the available radioactive sources. A 15 cm lead shield surrounded the source on three sides. Bias voltage for the photomultiplier was supplied by an iseg NHQ-203M power supply. The NIM crate also contained a fast TFA manufactured by the electronics shop of the Institut für Kernphysik at TU Darmstadt. The anode signal of the detector was either directly plugged into a Struck SIS3350 500 MHz sampling ADC (A) or first amplified by an additional TFA module (B). The data acquisition was done using a Struck SIS3104/SIS1100e PCIe-VME interface and a standard Linux PC (Dell).

Figure 4.1.: A schematic illustration of the experimental setup for most of the measurements. Depending on the measurement several different radioactive sources were placed within lead shielding. The distance d to the detector was varied to achieve different count rates. Also the high voltage for the detector was modified. The anode signal of the detector was either directly plugged into the sampling ADC (A) or amplified by an additional TFA module (B). The data acquisition was done using a PCIe-VME interface and a standard Linux PC.

4.2 Preparative measurements

Photomultiplier gain

To determine the dynamic energy range of the photomultiplier in combination with the sampling ADC, the position of the 662 keV peak of a 137 Cs source has been measured under variation of the bias voltage. The proportional behavior of LaBr 3 :Ce has been reported already in the literature (refer to [vl01], [Cie09]), so that for this estimation a linear response of LaBr 3 :Ce was assumed. The sampling ADC has variable input gain stages to amplify the signals to match the accepted voltage range. During test measurements it became clear that this amplification introduces a significant amount of noise to the signal, so that the input gain has been kept as low as possible. At a setting of 10 for the input gain, the full scale range (FSR) is 8 V, while at the highest amplification setting of 255 the FSR is 100 mV. Since the anode signals from the detector are on average 200 mV high at 662 keV for a bias voltage of 600 V, an input gain setting of 100 (FSR 1 V) was used.

Figure 4.2.: The position of the 662 keV peak from the 137 Cs source is shown with respect to the applied photomultiplier voltage. A power law model can be fitted to the data points (black, dashed). The error bars indicate the uncertainty of the peak position. The uncertainty of the power law fit is indicated by the shaded area. This measurement was done by directly recording the photomultiplier anode signals.

The result is shown in figure 4.2, where the position of the 662 keV peak is shown over the applied bias voltage. It is apparent from the data that in order to carry out measurements up to 10 MeV, it is necessary to choose the lowest possible voltage. Lowering the bias below 600 V affects the resolution of the spectrum and is therefore not advised. This is also shown in more detail in section 4.4. A bias voltage in excess of 1200 V is not recommended.

Count rate measurement

The measurements at high count rates have been carried out with a 73 MBq 60 Co source. The count rate was varied by changing the distance of the detector to the source. In order to be able to measure the detector properties with respect to the count rate of the source, a calibration measurement was done to determine the count rate at each distance. Figure 4.3 shows the measured and expected count rates. For distances outside of the measured data points the count rate has been extrapolated according to the expected count rate. The function used to calculate the expected count rate is derived from the calculation of the solid angle of the detector as seen from the source:

n[Bq] = n_0 · (1 − cos(arctan(r_det/d))) / 2 .    (4.1)

The parameters r_det and d are the radius of the cylindrical detector surface and the distance of the detector to the source.

Saturation effects in the photomultiplier

Since only a limited capacitance is available at each dynode stage, the voltage drops slightly whenever a signal is produced. The larger the signal or the higher the rate of signals, the larger this effect. Lower voltage leads to a lower electric field in between the dynodes, which ultimately leads to a smaller output signal. This is one of the main causes for photomultiplier non-linearity. In figure 4.4 this effect is shown using the position of the 1173 keV peak of the 60 Co source. The red data points indicate the relative deviation of the peak position from the position at low count rates measured with a bias voltage of 640 V, which is already low. At a count rate of more than 500 kBq the position is shifted to much lower values. This is a clear indication that the photomultiplier saturates at this count rate. At the same time the resolution of the recorded spectrum decreases, so that spectroscopy at count rates greater than 500 kBq is not possible with this setup. If the energy of the γ-rays is higher than 1 MeV, the situation will be even worse. Using this detector for in-beam measurements at energies of 10 to 30 MeV at large photon fluxes does not make sense with this kind of performance. It seemed as if the combination of detector, photomultiplier and voltage divider were not ideal for measurements at high count rates. This predicament is ironically caused by the excellent properties of the LaBr 3 :Ce scintillator, and has also been reported several times [Dor04], [Qua07], [Cie09].
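A small numerical sketch of Eq. (4.1), assuming it denotes the purely geometric solid-angle fraction and ignoring absorption and intrinsic efficiency; the function name is illustrative.

```cpp
#include <cmath>

// Expected count rate in the detector: the source activity n0 scaled with
// the solid-angle fraction subtended by the circular detector face of
// radius rDet at distance d (both in the same length unit).
double expectedRate(double n0Bq, double rDet, double d)
{
    const double theta = std::atan(rDet / d);
    return n0Bq * 0.5 * (1.0 - std::cos(theta));
}

// Example: for the 73 MBq source, rDet = 3.81 cm (3" crystal) and d = 10 cm
// this gives roughly 2.4 MBq as a purely geometric estimate.
```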

Figure 4.3.: The expected count rate of the 60 Co source is shown here together with estimated count rates from measured spectra taking into account the geometric efficiency. The activity of the source was 73 MBq at the time the data were taken.

Figure 4.4.: The deviation of the position of the 1173 keV peak of a 60 Co source relative to its position at a low count rate is shown here for increasing count rates. The measurement was done once with an additional TFA (blue circles) and once without (red squares). The error bars represent the width of the peak in the initial spectrum for each count rate. In both cases the peak position decreases above a critical count rate, which clearly indicates a saturation of the voltage divider network in the photomultiplier base. The critical count rate is about an order of magnitude higher when the TFA is used.

Several ways to compensate for the imposed limitations have been followed. Efforts have been made to contact both Canberra and Hamamatsu regarding information on the photomultiplier and the voltage divider, but the reactions did not yield a quick and affordable solution to the problem. Building a voltage divider with a design as is mentioned in [Dor04] would have taken too long, and contracting a third party would have been too expensive. In private communication an easy and inexpensive solution was found to at least postpone the immediate need for new hardware.

The situation can be improved by lowering the bias voltage even more, so that the currents in between the dynodes decrease and the voltages stay stable. It turned out that then the detector signals are too small to be detected by the sampling ADC, even at the highest gain setting. Since no suitably fast amplifier was available to amplify the detector signals right after the photomultiplier, a fast shaping TFA was used. The signals are amplified with a low noise amplifier by a factor of 10 at the input of the TFA, and the shaping times have been chosen so that the change in the signal shape was as small as possible. For the measurements with the TFA the largest differentiation time of 1 µs and no integration was chosen. The positive effect can directly be seen in figure 4.4, since the blue data points correspond to the same position measurement as before, but this time the signals were additionally amplified with the TFA. The saturation effect is shifted by almost one order of magnitude in the count rate. For this measurement the bias voltage could be set much lower at 452 V, and the input gain of the sampling ADC could also be reduced to a value of 32.

Energy calibration and linearity

The energy calibration and linearity measurement of the LaBr 3 :Ce detector was done with a set of radioactive calibration sources ( 133 Ba, 137 Cs, 56 Co, 60 Co, AmBe) at low count rates. Pulse height spectra have been recorded for each calibration source, once without the TFA with a PMT bias of 800 V and once with the TFA at a bias of 452 V. These spectra have been analysed with the analysis software Tv from the Universität zu Köln [Tv]. In figure 4.5 the two calibration curves can be seen. Both measurements show excellent linearity up to 4.5 MeV, with only a slight deviation from the ideal line at high energies in the measurement without a TFA. This deviation is presumably caused by the beginning saturation of the photomultiplier, because this behavior is not visible when the TFA is used. A linearity of LaBr 3 :Ce scintillators for energies up to 9 MeV has already been reported in the literature [Cie09].

Figure 4.5.: The detector was calibrated using several radioactive calibration sources ( 133 Ba, 137 Cs, 56 Co, 60 Co, AmBe) with activities in the range below 30 kBq. The calibration points are shown here together with a linear best fit. The error of the linear fit is negligible, so that error marks have been omitted. The data resulting in the steeper calibration slope (red) was produced with a TFA and a high voltage of 452 V. Linearity is given over the complete energy range. The blue data points have been recorded without a TFA and a high voltage of 800 V. Only for large energies around 4.5 MeV a slight deviation from linearity is observable.

Impact of TFA on signal shape

As already stated, the TFA parameters have been chosen so as to conserve the shape of the signal as much as possible. In figure 4.6 the average pulse shapes without (blue) and with (red) the TFA are displayed. The TFA shaping property leads to a slightly wider signal in comparison to the directly recorded signal. The signals differ mainly on the rising edge, making the TFA signal even more symmetric. Apart from this change, the use of the TFA does not have any negative effect on the detector signals. Since the advantages of using the TFA clearly outweigh the disadvantages, all measurements where the count rate is important have been done with the TFA.
To get a feeling for the dynamic range when the TFA is used, the same measurement of the position of the 662 keV peak from the 137 Cs source has been done again. The photomultiplier bias has been varied in a different range, because lower values are now possible, but larger values lead to over-voltage artifacts (analog clipping) in the TFA. It is apparent from the data that a suitable voltage range for measurements up to 10 MeV is from 400 V to 450 V. For higher photon energies it is safe to reduce the bias voltage even more. For most measurements with TFA a bias voltage of 452 V was chosen to allow a dynamic range of 0-6 MeV.

Figure 4.6.: The TFA shaping property leads to a slightly wider signal (red) in comparison to the directly recorded signal (blue). The signals differ mainly on the rising edge, making the TFA signal more symmetric. These sample pulses were generated by averaging over about 1000 signals of each kind.

Figure 4.7.: As in fig. 4.2 the position of the 662 keV peak from the 137 Cs source is shown with respect to the applied photomultiplier voltage. The error bars indicate the uncertainty of the peak position as calculated during the fit. This measurement was done with an additional TFA module to amplify the photomultiplier pulses.

Temperature dependence

It was found that after long term measurements the resolution in the spectra was significantly worse compared to short measurements. This behavior suggested a change of the signal amplification over time. Since the respective data were taken during the summer in a lab with large windows, the obvious cause of these changes was the changing temperature. The temperature dependence of LaBr 3 :Ce has been studied in [Mos06], and it has been found that the contribution to temperature induced effects is mainly due to the instability of the photomultiplier gain. The scintillator itself exhibits almost no temperature dependence in comparison. In the upper part of figure 4.8 the deviation of the position of the 1173 keV peak is shown over time. Every half hour a spectrum was saved. A 24 h periodic structure can be seen for three consecutive days. Below the peak position a temperature measurement is shown for the last two days. The correlations between the two data sets are clearly visible and suggest that long term measurements should be carried out under stable atmospheric conditions. From the sparse data set in figure 4.8 a temperature dependence of about 1.1 % / K at 1173 keV can be derived.

Figure 4.8.: Top: The relative shift (blue dots) of the 1173 keV peak of a 60 Co source is shown over time. For these data points pulse height spectra were measured for half an hour each. The resulting values were then averaged with a 5-point moving average. The shaded area marks the uncertainty of the peak position. Bottom: The measured environment temperature is shown (red, dashed). The temperature measurement was started almost one day late, so the first measurements are missing. Clearly the peak position follows the temperature drift with a lag of about 5 hours. A temperature dependence of about 1.1 % / K at 1173 keV can be derived.

After this measurement all consecutive experiments have been done in an underground lab with a much more stable temperature.

4.3 Pile-Up Correction

So far only the results from preparative measurements have been discussed. Now the pile-up correction algorithm is applied to the measured data and the resulting improvements are shown.

Effect

The effect of the pile-up correction algorithm can be seen in figure 4.9. The upper part of the figure shows a pulse height spectrum taken with a 60 Co source at a count rate of 6 MBq. In blue the uncorrected spectrum is shown, while the pile-up corrected spectrum is red. For comparison a spectrum at a low count rate is also shown (22 kBq, gray), where no pile-up is visible. The uncorrected spectrum has a lower full energy peak efficiency compared to the pile-up corrected spectrum. The number of events to the right of each peak is reduced, because the corresponding signal amplitudes have been restored to their correct values. This effect was achieved by taking into account the next and next-to-next neighbors of each signal (with a matrix of dimension 5). The lower part of the figure shows the same spectrum, but now the distance to the source was reduced, so that the count rate has increased to about 20 MBq. This is the first time an energy spectrum with this resolution has been recorded at such a high count rate with a scintillator. The efficiency of the full energy peak is partially restored. Due to the reduction of pile-up events also the peak resolution is recovered. From these examples it is clear that it does make sense to use the pile-up reconstruction to recover information losses due to pile-up. On the other hand one can see that the effect is limited in the sense that only a fraction of pile-up events is corrected. Some reasons why not all events are recovered will be given in the next section.

Figure 4.9.: Top: Two pulse height spectra from a 60 Co source show the effect of the pile-up correction algorithm at a count rate of 6 MBq in the detector. For comparison a spectrum at a low count rate is also shown (gray). The uncorrected spectrum (blue) has a lower full energy peak efficiency compared to the pile-up corrected spectrum (red). The number of events to the right of each peak is reduced, because the corresponding signal amplitudes have been restored. This effect was achieved by taking into account the next and next-to-next neighbors of each signal. Bottom: Same as above at a count rate of 20 MBq. The efficiency of the full energy peak is partially restored. Due to the reduction of pile-up events also the peak resolution is recovered.

Quality check

To evaluate the quality of the pile-up correction on a pulse by pulse basis the following method is proposed. After pile-up correction a set of timestamps and corresponding corrected amplitudes has been calculated. It is now a straightforward task to rebuild a proof trace from this set by adding the model pulse shape at every timestamp with the right amplitude to an empty trace. In the ideal case the product is an exact copy of the original trace, but trigger uncertainties on the sub-sample scale and missed triggers cause differences. This is shown in figure 4.10. The upper part shows a random part of a long trace (taken at a count rate of 8 MBq), where the calorimetry filtered data (blue) is shown together with the reconstructed proof signal (red). For the most part the two traces match very well, but in some places the differences are larger.
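A minimal sketch of this proof-trace check; the flagging threshold and all names are placeholders and do not reproduce the SamDSP implementation.

```cpp
#include <cmath>
#include <vector>

// Rebuild a "proof" trace from the determined timestamps and corrected
// amplitudes and flag samples where the residue against the filtered data
// exceeds a threshold (candidate regions of unresolved pile-up).
std::vector<bool> residueFlags(const std::vector<double>& filtered,
                               const std::vector<std::size_t>& timestamps,
                               const std::vector<double>& amplitudes,
                               const std::vector<double>& model,
                               double threshold)
{
    std::vector<double> proof(filtered.size(), 0.0);
    for (std::size_t k = 0; k < timestamps.size(); ++k)
        for (std::size_t i = 0; i < model.size(); ++i)
            if (timestamps[k] + i < proof.size())
                proof[timestamps[k] + i] += amplitudes[k] * model[i];

    std::vector<bool> flag(filtered.size(), false);
    for (std::size_t i = 0; i < filtered.size(); ++i)
        flag[i] = std::fabs(filtered[i] - proof[i]) > threshold;
    return flag;
}
```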

Figure 4.10.: To investigate the quality of the pile-up correction, the model pulse shape is superimposed on an empty trace at the measured timestamps with the reconstructed amplitudes. Top: Part of a long trace, where the calorimetry filtered data (blue) is shown together with the reconstructed "proof" signal (red). Bottom: The difference of the two traces from above. This "residue" signal helps identify the parts of the trace where the pile-up correction algorithm yielded incorrect results. Every time the residue signal rises above a certain threshold (black, solid line), this is an indication that the amplitudes have not been reconstructed correctly.

Most of the time this is caused when the trigger algorithm is not capable of distinguishing a small signal following very closely after a large signal. In the lower part the residue signal, here defined as the difference of the original and the proof signal, is shown. It is now possible to introduce a threshold (black), so that whenever the difference of the proof is larger than this threshold, the contributing amplitudes at the surrounding timestamps should be marked as unresolvable pile-up. This method has not yet been implemented into the code, but it is certainly an easy way to check whether or not the parameters for the analysis are well-tuned.

Uncertainty in model pulse shape

In the last chapter the uncertainty of the scintillator pulse shapes has been briefly discussed with respect to photon statistics and other statistical processes. These uncertainties have been modeled in the simulations with an additional component of random noise. To investigate the true uncertainty of the model pulse shape, a set of 400,000 single signals from a 60 Co source has been recorded at a low count rate. This data set has been partitioned into subsets, depending on the amplitude and the phase of the timestamp. The amplitude was determined as discussed in the last chapter, while the timestamp is determined from the crossing of an amplitude dependent threshold. To calculate the phase of the timestamp, the signal values were interpolated between the two data points just before and after the crossing. Each of these subsets contains signals with an amplitude spread of 10 keV and a timing spread of 0.2 ns. Within each of the subsets an average signal and the variance within the set is calculated. For three different amplitudes the results are shown in figure 4.11. In the upper part the average pulse shape for signals from the 836 keV set is shown together with its 1σ confidence interval. Compared to the amplitude the absolute uncertainty is not very large, so that the 1σ interval is narrow. In the middle the relative uncertainty is shown for all three different energies. The uncertainty is minimal at the peak position (since a subset with one amplitude bin has been selected) and increases in both directions. Due to the greater influence of statistical fluctuations the uncertainty also increases with decreasing pulse amplitude. Another important observation is that the uncertainty of the pulse shape is always greater on the rising edge of the pulse compared to the falling edge. The lower part of the figure shows only the absolute uncertainty of the pulse shapes in keV. It is noteworthy that even though the three different energies span almost an order of magnitude, the absolute uncertainties of the signals are almost the same and bear the same structure. The smaller relative uncertainty of the larger signals seems to cancel out the great increase due to the higher energy. The maximum uncertainty of 30 keV is of the same order of magnitude as the detector resolution.
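The per-subset averaging could be organized along the following lines; a real implementation would keep one such accumulator per (amplitude, phase) bin, and all names are illustrative.

```cpp
#include <vector>

// Accumulate pulses of one (amplitude, trigger-phase) subset sample by
// sample and derive the mean pulse shape and its per-sample variance.
struct PulseAccumulator {
    std::vector<double> sum, sumSq;
    std::size_t count = 0;

    void add(const std::vector<double>& pulse) {
        if (sum.empty()) { sum.assign(pulse.size(), 0.0); sumSq.assign(pulse.size(), 0.0); }
        for (std::size_t i = 0; i < pulse.size(); ++i) {
            sum[i]   += pulse[i];
            sumSq[i] += pulse[i] * pulse[i];
        }
        ++count;
    }
    std::vector<double> mean() const {
        std::vector<double> m(sum.size());
        for (std::size_t i = 0; i < m.size(); ++i) m[i] = sum[i] / count;
        return m;
    }
    std::vector<double> variance() const {
        std::vector<double> v(sum.size());
        for (std::size_t i = 0; i < v.size(); ++i)
            v[i] = sumSq[i] / count - (sum[i] / count) * (sum[i] / count);
        return v;
    }
};
```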

Figure 4.11.: Top: Average pulse shape of a detector anode signal. The error bars indicate the 1σ confidence interval of the signal shape. Middle: The relative uncertainty of the recorded pulse shape is shown over time for three different energies. Due to the greater influence of statistical fluctuations the uncertainty increases with decreasing pulse amplitude. Bottom: The absolute uncertainty for each pulse energy is shown in keV. The maximum uncertainty of 24 keV (for 1332 keV signals) is of the same order of magnitude as the detector resolution.

Applicability window

From figure 4.11 it is possible to derive a count rate window for which the application of the pile-up correction method makes sense. Two competing contributions to the measured energy have to be considered in a pile-up situation. The first is the one that has been discussed in the last chapter, which is the amplitude (i.e. energy) error introduced by the presence of the neighboring signals. The second one is an additional uncertainty introduced indirectly by the timing uncertainty. Its contribution to the error in energy can be calculated by using the model pulse shape as a lookup table. If the model pulse shape is curved at the top, then this uncertainty can easily amount to a few per cent of the pulse height. The magnitude of the timing uncertainty depends strongly on the count rate, the sampling frequency of the ADC and of course on the timing method. For LaBr 3 :Ce detectors the pile-up induced errors become apparent starting at count rates in the range of 500 kBq. The timing-induced errors start to arise when the rising edges of the signals are distorted by pile-up. This is the case when the probability for higher order pile-up (P(N>2)) becomes significant. For the 3x3" LaBr 3 :Ce detector this is at about 5 MBq.

So in between count rates of 500 kBq and 5-10 MBq the applicability of the pile-up correction is estimated to be at a maximum. As already mentioned, the same pile-up correction algorithm has been used successfully with NaI detector signals. In this case the application window is a lot wider, because the pile-up induced errors start to become visible already at count rates of 50 kBq. The algorithm would also be applicable to signals from HPGe detectors, if they can be shaped so that the signal shape becomes constant. This could be achieved by using a spectroscopy amplifier. For those HPGe detector signals the application window would be even larger, because the rise time is orders of magnitude shorter than the signal decay time.

Figure 4.12.: The resolution of the 1173 keV peak in a pulse height spectrum from a 60 Co source with respect to the applied photomultiplier voltage for several count rates. The results without TFA (blue, white) were achieved by recording the anode signal of the photomultiplier directly. Solid red points show results with TFA. The optimal voltage depends on the count rate. In the case with TFA it is in the range of V.

4.4 Energy resolution of the detector

The energy resolution of the LaBr 3 :Ce detector has been studied as a function of the voltage bias and the count rate, in order to determine the best suited high voltage for the photomultiplier. With the use of the calibration sources also the energy dependence of the resolution was investigated. The first two measurements have also been carried out without the TFA amplification, to verify that the results with the use of the TFA are indeed better for high count rates.

Voltage and count rate dependence

The resolution has been measured as the FWHM of the 1173 keV peak in pulse height spectra from a 60 Co source. To achieve the different count rates the distance of the detector to the source has been varied. Pulse height spectra have been recorded for all bias voltages where the two peaks could be separated. In figure 4.12 the results are presented. The first observation is that the best resolutions without a TFA are reached with higher bias voltages. Higher voltage settings allow lower values for the input gain of the ADC, and thus less noise is added to the signal. The higher bias voltage has already been discussed as the cause for photomultiplier saturation and is the reason why the TFA is necessary. Also, for each count rate the resolution minimum is at a different voltage setting. This does not seem to be the case for measurements with the TFA, allowing a more stable operation. Still the optimal voltage range decreases in width with increasing count rate, which is also an effect of saturation. The best achieved resolutions with the TFA are generally better, even for low count rates. A resolution of 2.1 % is the absolute minimum in the set of measurements. A very important point is that the resolution does not degrade much with increasing count rate when the TFA is used. It stays even below 3 % at the highest count rate of 3 MBq, which was not even possible to evaluate without the TFA. It is now very clear that high count rate measurements should be carried out at bias voltages below 500 V, and this makes additional amplification necessary.

Figure 4.13.: Energy resolution of the detector shown over energy. Resolution is generally better when the TFA is used (red, open). Least squares fits with a square root model have been applied to the data (dashed lines).

Energy dependence

The same data sets which have been used for the linearity measurement were also analysed for the energy dependence of the resolution. The results are shown in figure 4.13. For energies above 4 MeV sum lines of the 56 Co source have been used. Similar to [Cie09] the energy dependence of the FWHM has been modeled with a square root model

FWHM(E) = a·√E + b .    (4.2)

For the relative FWHM follows

FWHM(E)/E = a/√E + b/E .    (4.3)

Least squares fits to the data yielded a = 70(5) % and b = 7.1(1.1) for the measurement with TFA and a = 79(4) % and b = 9.7(1.5) for the measurement without. The former agrees within the uncertainty with earlier measurements on smaller scintillators with different types of photomultipliers ([vl01], [Cie09]). If the data is extrapolated with this model to an energy of 10 MeV, the resulting FWHM is 77 keV, which is only half the resolution of the low-energy photon tagger NEPTUN at this energy. The extrapolated value of 0.57 % at 17.6 MeV is the same as the one measured in high energy measurements done by Ciemala et al. [Cie09]. The high energy photons were produced in the 7 Li(p,γ) 8 Be reaction induced by a Van de Graaff proton accelerator at ATOMKI.

Dependence on count rate

The detector resolution has also been studied for a range of count rates, to show that it does make sense to use the detector as an in-beam energy and photon flux monitor. As shown in the upper part of figure 4.14, the count rate of the source has been varied in a range from 500 kBq up to about 20 MBq. Only the FWHM of the 1173 keV peak has been analysed. The blue data points represent the results from the uncorrected pulse height spectra, while the black line marks the low count rate limit. For the red data points the spectra after pile-up correction have been used. For higher count rates the quality of the resolution decreases, but with the pile-up correction the decrease is shifted to higher count rates. An improvement in resolution is visible for all evaluated count rates. For this and all of the following measurements the pile-up correction was configured to take into account the next and next-to-next neighbor signals.

4.5 Peak to Total Ratio

From all pulse height spectra also the ratio of the integral over the full energy peak compared to the total number of counts in each spectrum (Peak to Total Ratio, PTR) was calculated. The integration interval has been chosen so as to always integrate over the full area of the peak, and not to include parts of neighboring peaks. The background in the spectrum below each peak has been estimated from flat portions of the spectrum next to the peak and was subtracted.
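The peak-to-total evaluation described above could be coded along these lines; the bin bounds for the peak and for the flat background region are placeholders, and the function name is illustrative.

```cpp
#include <numeric>
#include <vector>

// Integrate the full energy peak between fixed bounds, estimate the
// background per bin from a flat region next to the peak, and relate the
// net peak area to the total number of counts in the spectrum.
double peakToTotal(const std::vector<double>& spectrum,
                   std::size_t peakLo, std::size_t peakHi,
                   std::size_t bgLo, std::size_t bgHi)
{
    const double total = std::accumulate(spectrum.begin(), spectrum.end(), 0.0);
    const double peak  = std::accumulate(spectrum.begin() + peakLo,
                                         spectrum.begin() + peakHi, 0.0);
    const double bgPerBin = std::accumulate(spectrum.begin() + bgLo,
                                            spectrum.begin() + bgHi, 0.0)
                            / static_cast<double>(bgHi - bgLo);
    const double netPeak = peak - bgPerBin * static_cast<double>(peakHi - peakLo);
    return netPeak / total;
}
```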

Figure 4.14.: Top: Energy resolution of the detector shown over count rate, measured on the 1173 keV peak of a 60 Co source. The resolution is generally better when pile-up correction is used. Even at moderate count rates the effect is significant. The black line shows the low count rate resolution limit. Bottom: The peak to total ratio is shown here over the total count rate for the 1173 keV peak. The pile-up correction has a visible effect from count rates greater than 2 MBq. The black line shows the low count rate peak to total limit.

Figure 4.15.: Ratio of uncorrected vs. corrected PTR as a function of count rate. At low rates the effect is small, and it increases with increasing count rate. A maximum peak to total gain of about a factor of 2 has been achieved for the 1173 keV peak at an estimated count rate of 20 MBq.

In the lower part of figure 4.14 the results for the PTR are shown in analogy to the resolution measurement, in blue for the uncorrected data and in red for the corrected values. As expected the PTR decreases for higher count rates, but with pile-up correction switched on, the results are improved starting at count rates of 2 MBq. An increase of the PTR by a factor of two can be achieved. In figure 4.15 the ratio of the PTR before and after pile-up correction is shown for each of the two 60 Co peaks. This is a measure for the "gain" that can be achieved when pile-up correction is used. In this case the maximum gain is about a factor of 2 for the 1173 keV peak at an estimated count rate of 20 MBq.

4.6 Measurement in strong background radiation

Experimental setup

This set of measurements has been carried out with two radioactive sources at the same time. The weaker 137 Cs source was kept at a fixed distance to the detector (close to the front cap to get the highest count rate possible), while the distance of the strong 60 Co source was varied between 2 and 100 cm to create a variable amount of background radiation. The modified setup is shown schematically in figure 4.16.

Figure 4.16.: A schematic illustration of the modified experimental setup. The weaker 137 Cs source was kept at a fixed distance to the detector (close to the front cap to get the highest count rate possible), while the distance of the strong 60 Co source was varied to create a variable amount of background radiation.

Amplitude spectrum

For each count rate, amplitude spectra were recorded for the same amount of time to be able to compare the spectra. Figure 4.17 shows the energy region around the 662 keV peak of the 137 Cs source. In both parts of the figure the recorded amplitude spectra are shown for count rates of 28 kBq, 110 kBq, 440 kBq, 1.7 MBq and 5.6 MBq (from bottom to top respectively). The top part shows the uncorrected spectra, while the spectra in the bottom part have been pile-up corrected. One can see that the 137 Cs peak shrinks for higher count rates, while the FWHM increases and the number of events in the peak decreases. At the same time the Compton background induced by the strong 60 Co source increases significantly. Comparing the top part of the figure to the bottom part, the effect of pile-up correction is small. Only for high count rates the difference in the spectra is visible. The 662 keV peak is more pronounced in the topmost corrected spectrum when compared to the topmost uncorrected spectrum.

Figure 4.17.: Top: Energy spectra of a 137 Cs and a high activity 60 Co source. Shown is only the surrounding energy range of the 662 keV peak. From the bottom to the top the total count rate increases by a factor of four in each spectrum (28 kBq, 110 kBq, 440 kBq, 1.7 MBq, 5.6 MBq). To allow easier comparability, a negative offset was added to the high count rate spectra, but the scale was conserved. The shaded area identifies the integration bounds for peak analysis. Bottom: Same as above with pile-up correction turned on.

Unfortunately a strong 137 Cs source was not available to carry out a comparable experiment in which the background is induced by the source with the lower energy. That way the Compton background under the peak of interest could have been avoided.

Energy resolution and relative efficiency

The energy resolution was estimated from the FWHM of the 662 keV peak. In the upper part of figure 4.18 the results are shown with respect to the combined count rates of both sources. The resolution stays below 6 % for all count rates, but for high count rates the evaluation of the peaks becomes difficult. For the two highest count rates it was not possible to calculate a resolution without pile-up correction, because the 662 keV peak could not be evaluated in the last two uncorrected spectra. The relative photo peak efficiency of the 662 keV peak is shown in the middle part of the figure.

Figure 4.18.: Top: Width of the 662 keV peak of a 137 Cs source shown over the total count rate in the detector. The count rate from the 137 Cs source was kept constant, while strong background radiation was created with a 60 Co source. Energy resolution stays below 6 %, and for high count rates the pile-up correction can improve the resolution. Middle: The number of counts in the 662 keV full energy peak. The values are normalized to the number of counts at the lowest count rate. Even at low count rates the number of events in the peak is enhanced. The fraction of recovered events grows with increasing count rate. The lines are taken from figure 1.3 for P(N=0,1,2). Bottom: Gain in photo peak efficiency with pile-up correction.

All values have been normalized to the number of counts at the lowest count rate (22 kBq). In this experimental setup the pile-up correction has an effect on the efficiency already for low count rates. For higher count rates the fraction of successfully recovered events grows, and since the peak could not be evaluated in the last two uncorrected spectra, the pile-up correction even recovers a completely lost peak at the highest count rates. The lines correspond to the probability for Nth order pile-up as described in figure 1.3. From the shape of the lines it is clear that the recovery of more efficiency would require a much greater pile-up correction effort, because the order of the pile-up which has to be corrected increases rapidly with higher count rates. The lower part of the figure shows the relative gain of efficiency by using the pile-up correction. Similar to figure 4.15, the gain is higher for increasing count rates.

Figure 4.19.: Schematic overview of the experiment control kit GECKO.

Figure 4.20.: Left: The GECKO user interface is based on the Qt4 toolkit. The left treeview contains all configured modules, while the right part shows the respective controls. Right: Corresponding connections between the modules and plugins inside the software. Here channels 0 and 1 are read from the sampling ADC, while channel 0 is processed by a digital QDC; the data from channel 1 are directly stored to disk.

Summary

Within this chapter a set of measurements and the results have been presented. It has been shown that the 3x3" LaBr 3 :Ce detector performs in terms of linearity of light output, resolution and efficiency just as well as has been reported in the literature for other crystal dimensions. However, it was also shown for the first time that this detector can be used for γ-ray spectroscopy at very high count rates in excess of 5 MBq, while still maintaining a reasonable efficiency and energy resolution. This has been achieved with the help of a digital pile-up correction algorithm. The applicability and value of the algorithm has been verified in a number of measurements.

4.7 Data acquisition

For all measurements described in this chapter the customized data acquisition software GECKO (generic experiment control kit and online analysis) was used. This program has been developed during the course of this thesis specifically to allow easy setup and control of nuclear physics experiments including both digital and analog VME electronics.

At the time this thesis was started, there was no data acquisition available at the institute which incorporated all of the following features: compatibility with the digital ADCs and fast VME interfaces from Struck and with analog modules from CAEN at the same time; easy configuration for different setups; online analysis of the recorded signal traces to save storage space; and the ability to be used from a mobile computer if necessary. The decision was made to embed the initial C code for simple readout of the sampling ADCs in a highly modular, high-performance C++ data acquisition framework with a graphical user interface. The modular design of the software allows switching to the requirements of a completely different experiment just by loading a different settings file. Each VME module is implemented as a pluggable "module" inside the software, with a standard interface for the common functionality like reset, configuration and readout. The interfaces to access the VME bus from the software are also implemented as "modules", so it is easy to interchange them or even add a new module if a different kind of interface becomes available. At the moment the connection to the VME crate can be done either with the SIS3150usb USB interface or the SIS3104/1100e PCIe interface from Struck.

To enable the highest possible data throughput, GECKO makes use of multi-threaded programming, so that data readout, data processing and user interaction are split into separate threads. The so-called "run-thread" governs all mechanisms to set up the hardware, initialize all modules and then start an event based run cycle. All necessary data from the VME modules is collected upon a configurable trigger condition and packed into a FIFO-like event buffer. The data analysis is done by the so-called "plugin-thread", which takes the data from the event buffer, validates it, and performs the analysis. Data analysis can be as simple as just writing the recorded data into a spectrum, or as complex as doing digital filtering, triggering, pile-up correction and coincidence steps. To make the configuration of the analysis as easy and transparent as possible, each step of the analysis is available as a "plugin", hence the name of the responsible thread. The interaction of the different components of GECKO can be seen in figure 4.19. A simple example setup for a QDC measurement is shown in figure 4.20. Table 4.1 lists the currently available interfaces, modules and plugins for GECKO.

Table 4.1.: Modules and plugins for GECKO.
Interfaces: Struck SIS3150usb (USB), Struck SIS3104 (PCIe)
Modules: Struck SIS3350 ADC, CAEN 792 QDC, CAEN 785 ADC, CAEN 830 Scaler, CAEN 1190 Multihit TDC, CAEN 775 TDC, Digital ADC & QDC
Plugins: Digital filter, Baseline correction, Several triggers, Pile-up correction, Histogramming, Pulse extraction, 2D plotting, File output, Signal caching
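To illustrate the module/plugin idea, a stripped-down interface could look as follows; the class and method names are invented for this sketch and do not reproduce the actual GECKO classes (see appendix A for the real framework).

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Illustrative interfaces only; not the GECKO API.
struct EventSlice { std::vector<uint32_t> raw; };   // data of one module per event

class VmeModule {                       // one VME board (ADC, QDC, scaler, ...)
public:
    virtual ~VmeModule() = default;
    virtual void configure() = 0;       // write registers from the settings file
    virtual void reset() = 0;
    virtual EventSlice readout() = 0;   // called by the run-thread per trigger
};

class AnalysisPlugin {                  // one step of the online analysis chain
public:
    virtual ~AnalysisPlugin() = default;
    virtual void process(EventSlice& data) = 0;   // called by the plugin-thread
    virtual std::string name() const = 0;
};
```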

5 Construction of an active beam dump for NEPTUN

Figure 5.1.: Schematic overview of the experimental setup of the low-energy photon tagger NEPTUN. The new beam dump and in-beam energy monitor can be seen on the right.

The low-energy photon tagger NEPTUN at the superconducting linear accelerator S-DALINAC at the TU Darmstadt produces photons of known energy from an electron beam by means of bremsstrahlung. In order to monitor the energy of the tagged photons, it is necessary to have a detector with high energy resolution placed in-beam during an experiment. At the same time the amount of backscattered photons should be minimized, so the experiments at the target position are not affected. This combination of beam dump and monitor for the energy of the tagged photons is described in this chapter.

5.1 Requirements

An overview of the experimental setup is shown in figure 5.1. The beam dump is supposed to be the last setup in the beam line. In this case the design has to take into account a number of requirements. Of course the main function of the beam dump is to act as a sink for radiation from the primary photon beam. This also includes all secondary radiation generated in the process of dumping the primary beam. The second requirement is that the beam dump can be used as a monitor for the current energy range of tagged photons. The first requirement can be fulfilled by setting up a lead cavity with walls of 15 cm thickness. An entrance slightly larger in diameter than the primary beam traps the radiation, which is then dumped in the material at the end of the tunnel. Secondary radiation from this process can only escape the beam dump when it is generated or scattered at extreme backward angles. The second function is accomplished with a combination of the 3x3" LaBr 3 :Ce detector, which has been thoroughly investigated in this thesis, and a variable collimator. To install the detector in a fixed position, a more manageable support has to be designed as well. A small form factor is preferred, but it should provide a rigid support for the scintillator. For the energy measurement only the primary beam should be picked up by the detector. To reduce the low energy radiation in the detector, the innermost layer of the tunnel can be made out of copper instead of lead. This graded shield design is shown in the following descriptions, but does not necessarily have to be carried out.

5.2 Design of detector support

The main idea for the detector support is that it should be compatible with the dimensions of the lead bricks which are used to build the beam dump. It should also be possible to access the cabling and change the photomultiplier base without having to remove the detector from the support. Another idea was to install a variable shield with a circular hole, so as to completely surround the detector in the area of the scintillator. Figure 5.2 shows the CAD drawing of the detector support on the left and the completed assembly on the right. The front radiation shield has been manufactured once from lead, copper and aluminum to be flexible in the choice of material.

Figure 5.2.: Left: CAD drawing. Right: Completed assembly with copper shield around the scintillator.

Figure 5.3.: Left: CAD drawing. Right: Completed assembly with collimator filled with one copper module.

The dimensions of the support are 10x10x30 cm³, so that it is easy to surround with the existing lead bricks of the standard dimension 5x10x20 cm³.

5.3 Collimator design

The variable collimator, which is supposed to be placed in front of the detector inside the beam dump tunnel, has been constructed in a similar way, so that it is compatible with the detector support. The main design goal was to achieve a variable, yet well-defined amount of attenuation for the impinging primary beam, which can also be changed during an experiment without having to disassemble the complete beam dump. Figure 5.3 shows the CAD drawing and the final assembly of the collimator/detector. The collimator with a length of 40 cm has enough space to hold up to seven 5 cm thick attenuation modules. A set of seven modules made from lead and one module made from copper has been manufactured. Currently, these modules are just plain absorbers, but they can be turned into pinhole collimation modules on demand. The attenuation factors for the modules have been summarized in table 5.1 for several energies (including 80 keV X-rays from Pb). Reference values for the calculation have been taken from NIST data (table 3 of [Hub10]).

5.4 Setup of beam dump

The planned setup of the beam dump is shown in figure 5.4. Surrounding the tunnel with the collimator and the LaBr 3 :Ce detector, the innermost layer of the beam dump is composed of 2.5 cm of Cu and 2.5 cm of Pb. The Cu lining reduces the amount of X-ray secondary radiation from the outer layers of Pb by a large factor. Along the front part of the collimator (empty area in the figure) the Cu lining is replaced by Pb, because secondaries from this region will not reach the detector.

Energy     | 5 cm Cu | 5 cm Pb | 2.5 cm Cu | 2.5 cm Pb | 5 mm Cu | 7.6 cm LaBr 3
80 keV     |         |         |           |           | 30      |

Table 5.1.: Attenuation factors for γ-radiation in Cu and Pb absorbers, and estimated values for absorption in the LaBr 3 :Ce scintillator.

Figure 5.4.: Left: Open. Right: Closed.

If it turns out that neutron production inside the beam dump or neutron irradiation of the beam dump is a problem, there is enough space left to place a piece of PE for neutron attenuation in front of the collimator. The beam dump is closed on the top with either one or two layers of lead bricks. Detailed simulations of the beam dump capabilities have not yet been done, but it might also be necessary to add 5 cm of lead behind the detector. The size of the complete beam dump is 90x50x30 cm³, and its mass about one metric ton. The actual setup of the beam dump has not yet been carried out, because reference points for alignment of the setup could not be determined. Setup will be continued in the process of setting up for the next beam time at the NEPTUN tagger. Since the estimated diameter of the beam spot at the entrance of the beam dump is smaller than 5 cm (estimated value from extrapolation of the conical opening of the main collimator in the concrete wall), and the entrance window of the beam dump is 10 cm in each direction, it is still possible to reduce the acceptance in case of too many backscattered events. This is only an option if the geometry of the complete beam line is well known and all parts are properly aligned.
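The attenuation factors in table 5.1 follow from the exponential attenuation law; below is a minimal sketch, with an example value of the mass attenuation coefficient that is only approximate and not taken from the exact NIST table used here.

```cpp
#include <cmath>

// Attenuation factor of a slab absorber, exp(mu * x), with the linear
// attenuation coefficient built from a mass attenuation coefficient
// (cm^2/g) and the material density (g/cm^3).
double attenuationFactor(double muOverRho_cm2_g, double rho_g_cm3, double x_cm)
{
    return std::exp(muOverRho_cm2_g * rho_g_cm3 * x_cm);
}

// Example: 1 MeV photons in 5 cm of lead (mu/rho ~ 0.070 cm^2/g,
// rho = 11.35 g/cm^3) give an attenuation factor of roughly 50.
```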


6 Summary and outlook

In this thesis a large LaBr 3 :Ce detector has been characterized in terms of linearity, energy resolution, photo peak efficiency and temperature stability. It was shown that the detector has a linear response to γ radiation for energies up to 4 MeV, and a higher photo peak efficiency when compared to measurements of smaller crystals. The contribution to the temperature dependent variation of the calibration is mainly due to gain drifts in the photomultiplier, so that it is advised to carry out long term measurements in a stable atmosphere.

A digital pile-up correction algorithm has been used to allow γ-ray spectroscopy at count rates of more than 10 MBq with the detector. The algorithm allowed recovering part of the resolution and efficiency losses due to pile-up. The maximum achieved gain was about a factor of two in restored photo peak events. The efficiency gain in strong background radiation was about a factor of 4-5. To investigate the behavior of the used algorithms, numerous simulations with synthetic signals have been carried out. From the simulations it was apparent that the quality of the results depends mostly on the correct determination of the signal timestamps. With uncertainty measurements it was possible to deduce a window of count rates in which the use of pile-up correction has a significant effect. This window was shown to span the range from 500 kBq up to 10 MBq.

The LaBr 3 :Ce detector is supposed to be used in measurements of the tagged photon energy and the photon flux at the end of the beam line in the low-energy photon tagger setup NEPTUN at the S-DALINAC. For this setup a detector support and collimator have been constructed, and the design of the beam dump has been carried out. The detector support allows an easy transport and integration of the detector in any existing setup. To allow for a variable amount of radiation attenuation, the collimator has been constructed to hold up to 7 modules of 5 cm absorbers. Unfortunately it was not possible to test the detector under realistic beam conditions, so that no results are available yet. It is planned to use the detector in the next beam time at the High Intensity Photon Setup (HIPS) at the S-DALINAC, and to set up the active beam dump at the NEPTUN tagger for the next experiment. One specific goal of these measurements is to tune the parameters of the setup to receive a proportional response of the detector for an energy range as wide as possible. Also, efficiency measurements at energies between 4 and 20 MeV have to be done. The most important test at the NEPTUN tagger will be to produce energy spectra with tagged photons to measure the tagged energy range. Also the response of the detector to an X-ray source has not been tested, because no suitable radiation source was available.

6.1 Outlook

In many areas further improvements are possible. This includes digital processing of the signals, especially new techniques for baseline correction, timing and triggering in high pile-up situations. A technique to check for possibly missed triggers with a reconstructed proof signal has already been discussed. A pile-up rejection mechanism can be implemented based on this information. Rejection of signals which do not fit the shape of the model pulse can also be done in an easy way. From the model pulse a set of relations between the values of each data point on the rising edge can be deduced.
If these relations are not fulfilled by a newly recorded signal, the signal can be discarded. Attempts have been made to acquire better knowledge of the baseline by optimal state estimation using a Kalman filter. This can be extended by taking into account the exponential decay of the signal pulse shape. Another possibility is a statistical approach. This technique relies on the features of a histogram over all ADC channels, filled with the recorded samples. At low count rate the maximum of this histogram corresponds to the value of the baseline, while the width of the peak is a measure of the signal noise. With increasing count rate this maximum is successively shifted towards higher values, because the pulse density in the signal trace is so high that the signal only rarely returns to the baseline. The relationship between the true value of the baseline and easily identifiable points in the histogram can be studied for a range of count rates. It may also be possible to deduce an analytical relationship. A better baseline determination results directly in better resolution and thus better performance under pile-up.

Further improvement can be achieved by using different hardware to amplify the detector signals without pulse distortion. For this purpose a set of fast low-noise amplifiers with 350 MHz bandwidth and a rise time of 1.2 ns has been purchased. No comparative test measurements have been done yet, but an improvement in trigger performance and signal uncertainty is expected.

Most of the small deviations in the "residue" signal are due to sub-sample trigger phase shifts. The pile-up correction has so far been done with only one model for all triggers, and the differences arising from phase shifts have not been taken into account. By using a set of several pulse shapes, each extracted from pulses of the same trigger phase, the reconstruction can be improved. At the same time, restricting the pulse shape model to large pulses with good statistics will improve the uncertainty of the pulse shape.
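As an illustration of the statistical baseline approach sketched above, the following minimal example histograms all recorded samples and takes the most populated ADC channel as the baseline estimate. It is only a sketch of the idea (the bit depth and the data types are assumptions), not the procedure used in the analysis.

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Minimal sketch of the histogram-based baseline estimate: fill a histogram
// over all ADC channels with the recorded samples and take the most populated
// channel as the baseline. At low count rates this coincides with the true
// baseline; at high count rates the maximum shifts upwards, because the trace
// only rarely returns to the baseline.
uint16_t estimateBaseline(const std::vector<uint16_t>& trace,
                          std::size_t adcChannels = 1 << 12) // assumed 12-bit ADC
{
    std::vector<uint32_t> histogram(adcChannels, 0);
    for (std::size_t i = 0; i < trace.size(); ++i)
        if (trace[i] < adcChannels)
            ++histogram[trace[i]];

    // index of the most populated bin = baseline estimate;
    // the width of this peak is a measure of the signal noise
    std::vector<uint32_t>::const_iterator mode =
        std::max_element(histogram.begin(), histogram.end());
    return static_cast<uint16_t>(mode - histogram.begin());
}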

The LaBr3:Ce detector is planned to be used not only as an active beam dump at the NEPTUN tagger, but also at the High Intensity Photon Setup (HIPS) at the S-DALINAC, to measure the bremsstrahlung radiation from the radiator target. In this setup it is possible to calibrate the detector in a wider energy range, up to a maximum of 11 MeV. The large electron spectrometer Q-CLAM at the S-DALINAC is another setup where the LaBr3:Ce detector can be used, namely to detect the γ radiation in (e, e′γ) experiments.

For a steady calibration of the LaBr3:Ce detector over time, the internal radioactivity is of only limited use, both in resolution (because of the X-ray escape) and in strength. Placing a moderately strong radioactive source (for instance 60Co) next to the detector inside the beam dump can provide a signal strong enough to monitor calibration shifts during operation.

7 Acknowledgments

I would like to thank Prof. Dr. Norbert Pietralla and Dr. Deniz Savran for giving me the topic of this thesis as an assignment. It has been a very interesting experience for me and I have gained lots of new impressions and insights into detector physics and data acquisition. Further thanks to Dr. Matjaz Vencelj and M.Sc. Mojca Miklavec from Institut Josef Stefan in Slovenia, who made it possible for me to stay in Ljubljana for one month to study signal analysis for fast scintillator signals and to take preliminary data for my Master Thesis Proposal. I would also like to thank Prof. Dr. Joachim Enders and Dr. Haik Simon for their ideas and input for improving the detector linearity and photomultiplier saturation. Dipl.-Ing. Uwe Bonnes suggested testing amplification of the detector signals with a TFA, and the electronics group helped me with ideas for electronics and custom cabling and adapters. Thanks to Dr. Marco Brunken and Dr. Markus Platz it was always possible to purchase a missing piece of electronics or computer hardware. Gisela Buggisch and Sven Hennings were always at my disposal for borrowing radioactive sources. Without their help this thesis would not have been possible. Peter Häckl and the IKP machine shop have helped me in constructing the detector support and the collimator. Further thanks to Dr. Matthias Kirsch from Struck SIS for his great support when working on the data acquisition hardware. Without the support from the members of my group, I don't think the last year would have been half as fun or half as productive. Many thanks to Dipl.-Phys. Linda Schnorrenberger, B.Sc. Jan Glorius and Dr. Kerstin Sonnabend for their support and interest in my work and for lively discussions. I would like to thank Andreas Krugmann, Simela Aslanidou and Jonny Birkhan especially for helping with the disassembly of the old experiment, to make space and allow setting up the new beam dump. Christian Röder, Michael Reese, Martin Konrad and Matthias Fritzsche have always had time for fruitful discussion of technical and theoretical aspects regarding this thesis. A lot of their ideas have been of great value. Thanks to Jens Conrad for lending me the temperature logger for the temperature dependence measurement. Special thanks go to Roland Wirth, who has helped me develop GECKO and has put a lot of effort into the framework to get it working the way it should. He is also responsible for most of the documentation in the code. In the order of their disappearance I would like to thank Michael, Jens, Janis, Kai, Sebastian and Anne for being some of the greatest co-workers I have known, and for contributing to the spirit of the "TagTeam". In no particular order I would also like to thank the remaining members of the group of Prof. Pietralla, especially Markus, Cathrin, Christopher, Tommy, Timo, Jakob, Christian, Michael, Babak, Angel and Mirko for their support and friendly collaboration. My friends have been invaluable in keeping me sane and focused during the last year, so I would like to thank Basti, Basti, Antje, Svenja, Angelo, Kay, David, Thomas, Thomas, Steffi and Sebastian. Very special thanks to Nicole for her support and appreciation and her unlimited faith in me. Of course I also want to thank my parents and my sister for always encouraging me to go on and for supporting me throughout my time at university, both morally and financially. This thesis is supported by the Deutsche Forschungsgemeinschaft with the Sonderforschungsbereich


A Data acquisition

The generic experiment control kit and online analysis framework GECKO is a collection of object-oriented C++ classes for the configuration of and data acquisition from data sources. Currently only VME modules are supported, but interfaces for different data sources can be added. Data acquisition and online analysis can be configured and controlled using a single graphical user interface (GUI). In the following an overview of the class hierarchy and the process cycle during acquisition is given. GECKO makes use of the Qt4 toolkit in version 4.4 for the GUI, the signals and slots mechanism and some convenience classes. Further explicit dependencies are the GNU Scientific Library (GSL), libusb, and the DSP library SamDSP. To access the interface drivers for the Struck SIS3104 and SIS3150usb interfaces via a generic interface, the libraries sis3100_calls and sis3150_calls are also dependencies.

A.1 GECKO overview

The top class is ScopeMainWindow, and an object of this class is instantiated when the application starts. This class constructs the main user interface by calling createUI() in the constructor. The settings object, an instance of the Qt class QSettings, is also created. To allow application-wide access to important shared information, a set of singleton manager classes is used. The interface of these manager classes is defined in the abstract base class AbstractManager. The ModuleManager, the PluginManager and the RunManager provide factory methods to dynamically create and destroy modules and plugins and to control the run (one run is understood as one complete set of successive single data acquisition cycles). The interfaces for modules and plugins are defined in AbstractModule and AbstractPlugin. To make use of multi-processor architectures, the graphical user interface, the data acquisition and the data processing are executed concurrently in separate threads. The non-GUI thread classes RunThread and PluginThread are derived from QThread and are created whenever a new run is started. Data is shared between the RunThread and the PluginThread by the use of a circular buffer class ThreadBuffer. All modules inherit from the common base class BaseModule, which in turn inherits from AbstractModule. The modules are split into two groups, depending on their type: two proxy classes BaseDAqModule and BaseInterfaceModule are used to distinguish between data acquisition modules and interfaces. To construct a GUI for each module, every module has a dedicated UI class, which inherits from the UI base class BaseUI. All plugins, on the other hand, are derived from BasePlugin, which is derived from AbstractPlugin. To connect one plugin to another, objects of the connection class PluginConnector are used. These handle the data transport between two plugins.

Setup and configuration

Setup can be done either by hand, by adding all modules and plugins explicitly, or a preconfigured settings file can be loaded. Either way, all interfaces, modules and plugins are added as items to the tree view on the left side of the main GUI. Calls to the factory methods ModuleManager::create() and PluginManager::create() instantiate the module or the plugin and emit an appropriate signal. This signal is connected to a matching manager method to add the object as an item to the tree view. Data acquisition modules also need to be assigned to an interface module, so that they are able to access the hardware through the interface.
The settings of all modules and plugins are saved and restored via the virtual methods Base[Module|Plugin]::saveSettings() and Base[Module|Plugin]::applySettings(). The QSettings class provides all necessary methods for file I/O. After the plugins have been added, they can be connected with the use of PluginConnector objects. Each input and each output of every plugin already has an associated PluginConnector object. These can either be of type in or of type out. Output connectors can be connected to a different input connector with the PluginConnector::connectTo() method, and vice versa. Already existing connections can be disconnected with the PluginConnector::disconnect() method. Before a run can be started, a "main interface" also has to be set. This is usually the first interface that has been added to the tree view. At the moment this assignment is done automatically. The concept of the "main interface" can be used to express a preference among several available interfaces. Each of the configured modules exports a list of "triggers" and "channels", which can be accessed by calling BaseDAqModule::getChannels(). These are represented by instances of the ScopeChannel class, which provide information on the type of data that is expected from the module on the respective channel. On the "Run setup" page of the main GUI these "triggers" and "channels" can be seen. For a successful acquisition run, at least one "trigger" and one "channel" from the lists have to be enabled.
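As an illustration of the settings mechanism, the following sketch shows how a plugin might implement saveSettings() and applySettings() on top of QSettings. The QSettings* argument, the getName() accessor and the example attributes are assumptions made for the sketch and are not necessarily the exact GECKO signatures.

#include <QSettings>

// Sketch of persisting a plugin configuration with QSettings.
// The QSettings* argument, getName() and the attributes "threshold" and
// "width" are assumptions for this example, not the actual GECKO interface.
void ExamplePlugin::saveSettings(QSettings *settings)
{
    settings->beginGroup(getName());           // one group per plugin instance
    settings->setValue("threshold", threshold);
    settings->setValue("width", width);
    settings->endGroup();
}

void ExamplePlugin::applySettings(QSettings *settings)
{
    settings->beginGroup(getName());
    threshold = settings->value("threshold", 100).toInt();  // default if unset
    width     = settings->value("width", 16).toInt();
    settings->endGroup();
}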

The run cycle

Each run starts with a call to ScopeMainWindow::startAcquisition(), which then calls RunManager::start(). The RunManager creates the RunThread and the PluginThread, writes a text file with run information and then starts the run by calling the start() method of the two threads. From the current configuration the RunThread constructs lists of the defined triggers, the configured modules and the data channels which the modules export. At the same time all modules are reset and configured with their current settings. Then it enters an infinite polling loop (interrupt-based readout is currently not possible, but is an option for the future). Within this loop the RunThread iterates over all triggers and checks with the BaseDAqModule::dataReady() method if the module has data ready. If that is the case, the acquisition is started by calling RunThread::acquire(). This method iterates over all modules and calls BaseDAqModule::acquire() on each of them. The method is pure virtual and thus has to be implemented by each module. Each module has to provide a so-called demultiplexing (demux) plugin, which is responsible for interpreting the data acquired from the module and writing it to the ThreadBuffer. At the time of writing every channel of every module has its own ThreadBuffer object, but the transition to a single buffer for all channels has already started. The use of a ThreadBuffer object allows the stored data to be accessed from several threads simultaneously. Locking mechanisms (based on QSemaphore) make sure that all access to the data is managed and no data is accidentally corrupted. After all acquired data has been demultiplexed and written to the buffer, control returns to the RunThread and the signal acquisitionDone() is emitted. The polling loop continues to the next cycle and only stops when the RunThread::stop() method is called.

To access and process the data stored in the ThreadBuffer, the PluginThread is used. When it is first instantiated, a list of all demux plugin connectors of all enabled channels is created. Next a QMap is created, in which all BasePlugins are assigned a level. Using this level, the order of execution for all plugins is established. Plugins with lower levels have to be executed first, because higher level plugins depend on their results. Plugins with the same level do not depend on each other and can in principle be executed simultaneously. When the PluginThread is started by calling the PluginThread::start() method, again an infinite loop is entered, which calls the PluginThread::process() method on every cycle. Within this method the number of waiting acquisitions is checked. If this number is greater than zero, the number of demux plugin connectors with available data is counted by calling the PluginConnectorThreadBuffered::dataAvailable() method. If all enabled channels have data available, the PluginThread::executeProcessList() method is called, which iterates over all plugins, one level after another, starting at level 1. For every plugin in every level the BasePlugin::process() method is called. This mechanism makes sure that the data from each acquisition cycle is processed by all necessary plugins. After the execution of the process list the loop starts over. If all pending acquisitions are processed, the thread enters a sleep state by waiting on a QWaitCondition. To wake up the PluginThread again, the QWaitCondition::wakeAll() method has to be invoked from outside the thread.
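The polling loop described above can be summarized by the following schematic sketch. It mirrors the textual description; the member names ("abort", "triggers", "modules") are assumptions and the real RunThread implementation differs in detail.

// Schematic sketch of the RunThread polling loop (not the literal GECKO code;
// the member names "abort", "triggers" and "modules" are assumptions).
void RunThread::run()
{
    while (!abort) {
        // poll all trigger modules for new data
        bool triggered = false;
        for (size_t i = 0; i < triggers.size(); ++i) {
            if (triggers[i]->dataReady()) {
                triggered = true;
                break;
            }
        }
        if (!triggered)
            continue;

        // read out every module; each module's demux plugin interprets the
        // raw data and writes it to the ThreadBuffer
        for (size_t i = 0; i < modules.size(); ++i)
            modules[i]->acquire();

        // wake the PluginThread so that the event can be processed
        emit acquisitionDone();
    }
}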
This is achieved by connecting the RunThread::acquisitionDone() signal to the PluginThread::acquisitionDone() slot. This slot wakes the PluginThread whenever a new acquisition is done and also increases the counter for pending acquisitions. To stop the run cycle, ScopeMainWindow::startAcquisition() is called, which invokes RunManager::stop(). Within this function, first the two additional threads are asked to stop, and then an end-of-run text file is written to disk. This file contains the total run time and the number of recorded events.

Data handling

Since all low level hardware access is handled by the modules themselves, and the interpretation of the data is done by the customized demux plugins, the subsequent data handling can be done in a generic way. The ThreadBuffer object can store a limited number of pointers to STL vector containers. Up to now these vectors contain the acquired data for each channel as 32-bit unsigned integers. The vector containers are created by each demux plugin during the respective process() call. Each demux plugin has an output connector of type PluginConnectorThreadBuffered for each data channel. This connector contains a ThreadBuffer object, which stores the data. New data is sent to the connector via the PluginConnectorThreadBuffered::setData() method, which in turn calls ThreadBuffer::write() to store the data in the buffer, if there is enough space left. If all space in the buffer is used up, this call blocks until space is freed. When a new set of data is successfully stored in the buffer, the connector emits the dataAvailable() signal. This process happens in the RunThread. After a successful acquisition cycle is completed, the PluginThread is woken up, and data processing is started according to the order in the execution list. The first level plugins, which are directly connected to the demux plugins, need to access the data from the ThreadBuffer, so they issue a call to PluginConnectorThreadBuffered::getData(). This leads to a call to ThreadBuffer::read(), which retrieves a pointer to the data container and returns the number of items read from the buffer. When no data is available, the call does not block and returns zero. After the data has been processed by the first level plugins and has been passed on to the next level, a call to PluginConnectorThreadBuffered::useData() makes sure that the vector containers are properly destroyed and the used memory is freed again.
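The blocking write and non-blocking read described above correspond to a classic bounded-buffer scheme. The following self-contained sketch illustrates such a single-producer/single-consumer circular buffer based on QSemaphore; it only illustrates the locking idea and is not the actual ThreadBuffer class.

#include <QSemaphore>
#include <cstddef>
#include <vector>

// Illustrative single-producer/single-consumer circular buffer in the spirit
// of the ThreadBuffer described above: write() blocks while the buffer is
// full, read() never blocks and returns 0 when no data is available.
// This is not the GECKO ThreadBuffer class itself.
template <typename T>
class SimpleThreadBuffer
{
public:
    explicit SimpleThreadBuffer(int capacity)
        : slots(capacity), freeSlots(capacity), usedSlots(0), head(0), tail(0) {}

    void write(const T &item)
    {
        freeSlots.acquire();                  // blocks until space is available
        slots[head] = item;
        head = (head + 1) % slots.size();
        usedSlots.release();                  // signal the consumer
    }

    int read(T &item)
    {
        if (!usedSlots.tryAcquire())          // no data: return immediately
            return 0;
        item = slots[tail];
        tail = (tail + 1) % slots.size();
        freeSlots.release();                  // space freed for the producer
        return 1;
    }

private:
    std::vector<T> slots;
    QSemaphore freeSlots, usedSlots;
    std::size_t head, tail;
};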

Data storage

So far data storage has been implemented on two levels. Any plugin has in principle the ability to store its data to disk during the BasePlugin::userProcess() method. This is already implemented for the caching plugins, which can store histograms or signals as ASCII files either on demand or periodically. The second implementation of data storage is to store each event as binary data in a continuously growing file. This is done by the combination of a packing plugin for each module and the event builder plugin. The packing plugin decides which information has to be stored for each event, fills this data into a vector of 32-bit values and adds an identification header. All packing plugins connect directly to the event builder plugin, which concatenates all single packages and adds a global event header. This event can then be stored to disk. All information necessary to unpack the event data is contained in an automatically created description file. It is planned that, with the use of the description file and a matching interface module, GECKO can also be used to process data offline.

A.2 Extending GECKO

Adding a new module

To add a new module, the following classes are needed. First, a <name>Module class is necessary to manage configuration and data acquisition, and a user interface class <name>UI providing the graphical input elements has to be implemented. To interpret and validate the raw data, a demux plugin Demux<name>Plugin is the third mandatory class. The module class needs to derive from BaseDAqModule and implement the virtual methods setChannels(), acquire(), dataReady(), reset(), configure(), createOutputPlugin(), setBaseAddress(), prepareForNextAcquisition(), saveSettings() and applySettings() of the base classes. The constructor of the module class must also store a pointer to an object of the module's user interface class through the BaseModule::setUI() method. The user interface class derives from BaseUI and needs to implement the virtual methods createUI() and applySettings(). The constructor has to call at least these two methods and set the pointer to the parent module. The demux plugin has to be implemented just like any other plugin, with the exception that only output connectors of type PluginConnectorThreadBuffered are created in the constructor. Processing of the raw data should be done in the BasePlugin::process() method. The resulting data vectors can then be pushed to the outputs using the PluginConnectorThreadBuffered::setData() call.

Adding a new plugin

A plugin consists of only one class, which is named according to the liberal naming scheme <class><name>Plugin. The "class" groups several plugins together semantically (e.g. "dsp", "demux" and "cache" are used so far). In order to create a working plugin, it has to be derived from BasePlugin and implement the virtual methods saveSettings(), applySettings(), userProcess() and createSettings(). The constructor must set default values for all attributes and create all necessary connectors using BasePlugin::addConnector(). The createSettings() method sets up the user interface for each plugin and defines all necessary input controls. The actual data processing is done in userProcess(). First, a pointer to an STL vector must be set to point to the input data. The input data is accessible through the list of input connectors via a call to inputs->at(i)->getData(). Since only a void* is returned, a reinterpret_cast<T*> is necessary, where T is the type of data that is expected
(e.g. const std::vector<uint32_t>). The input data can then be used for processing. Output data of the plugin must be contained in a different std::vector<T>, which is then sent to the output via a call to outputs->at(i)->setData().
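Putting these conventions together, a new processing plugin might look like the following skeleton. The connector construction, the exact method signatures and the example attribute are assumptions for illustration only; the required method names are the ones listed above.

#include <QSettings>
#include <cstddef>
#include <cstdint>
#include <vector>

// Skeleton of a hypothetical dsp plugin following the conventions above.
// Signatures and connector construction are assumptions, not the exact GECKO API.
class DspScaleExamplePlugin : public BasePlugin
{
public:
    DspScaleExamplePlugin()
        : factor(2)   // default value for the example attribute
    {
        // create one input and one output connector here via
        // BasePlugin::addConnector() (constructor arguments omitted in this sketch)
    }

    virtual void userProcess()
    {
        // input arrives as a pointer to a vector of 32-bit values
        const std::vector<uint32_t> *in =
            reinterpret_cast<const std::vector<uint32_t>*>(inputs->at(0)->getData());

        // create the output container and hand it over to the output connector
        std::vector<uint32_t> *out = new std::vector<uint32_t>(in->size());
        for (std::size_t i = 0; i < in->size(); ++i)
            (*out)[i] = (*in)[i] * factor;
        outputs->at(0)->setData(out);
    }

    virtual void createSettings() { /* build the input controls for "factor" */ }
    virtual void applySettings(QSettings *s) { factor = s->value("factor", 2).toInt(); }
    virtual void saveSettings(QSettings *s)  { s->setValue("factor", factor); }

private:
    int factor;
};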

Figure A.1.: Left: Run control page. Right: Run setup page.

Figure A.2.: Left: Add module dialog. Right: Add plugin dialog.

A.3 Quick start guide

Installation

At the time of writing of this thesis, the source code for GECKO is available from an SVN repository. To check out the current version of the source, issue the following command:

svn checkout svn://lsfbr4/svn/taggersoftware --username <user> --password <password>

Before GECKO can be compiled, the dependencies lib_sis3150 and lib_sis3100 have to be satisfied. To do so, change to the directories trunk/sisdev/sis3150_calls and trunk/sisdev/sis3100_calls and type make. Then change to the GECKO source directory trunk/sisdev/gecko and compile the source with make.

Start GECKO

To start GECKO, just issue ./gecko in the installation directory. The application window contains a tree view of all interfaces, modules and plugins. In the beginning these are still unconfigured, and only the "Run Setup" and "Run Control" pages are available. These can be seen in figure A.1. The right area displays the corresponding controls for the item selected in the tree view.

Adding an interface or module

The first configuration step is to add an interface to the list. This has to be done before any other modules can be configured. Since interfaces are special types of modules, the setup of modules is a similar process and will therefore be described only once.

Figure A.3.: Left: Module configuration. Right: Plugin configuration.

From the "Modules" menu, or with a right mouse button click on the tree view, the "Add module..." entry opens a dialog window where the new interface or module can be configured (refer to fig. A.2, left). A "name" has to be chosen for the module, the "type" has to be selected from the list of available modules, and the "base address" of the module has to be set correctly (not for interfaces). The name is arbitrary, but it is advisable to choose a meaningful name for clarity. Once at least one interface has been added, the data acquisition modules can be configured in the same manner. All modules default to using the first configured interface for communication with the hardware.

Adding plugins

Adding plugins to the tree view can be achieved with the "Add plugin..." menu entry (see fig. A.2, right). In the dialog box the "name" and "type" of the plugin can be configured, as well as additional type-dependent "attributes". Currently there is no limit to the number of plugins in the list.

Connecting plugins

When plugins have been added to the tree view, it is possible to connect them to each other and to the data acquisition modules. In the upper part of the configuration area of each plugin (see fig. A.3, right) there are two list boxes showing the "inputs" and "outputs" of the currently selected plugin. With a right mouse button click on an input connector, one of the channels of the configured modules or an output connector of a different plugin can be selected. The connection is established, and the name of the connected output connector is displayed in the list of inputs. Output connectors can be connected to input connectors in a similar fashion. A double click on a connected input or output in the list changes the currently selected plugin to the one that is connected.

Configuring modules

Usually all available configuration options of the acquisition modules have been implemented in the GUI for each module. To keep the user interface tidy, it does not contain much additional information on the effect of each control element. For configuration details of a specific module please refer to the reference manual. As an example, the control interface for the SIS3350 ADC is shown in the left part of figure A.3.

Configuring plugins

If possible, plugins are configured through the available fields in the "settings" area below the connector lists (see figure A.3, right). Every plugin has its own set of settings, so no details are described here.

Configuring and controlling the run

On the "run setup" page, the two lists named "triggers" and "channels" contain entries after the modules have been added. The "triggers" list contains all modules that export a "trigger" channel, which means that they can indicate that new data has arrived. For a successful run at least one "trigger" has to be checked, so that the module is polled by the RunThread. The "channels" list contains all data channels exported by the configured modules. An event is only processed when all checked channels have data. The "run control" page contains controls to configure the name of the current run and a large text box to add "notes" to the current run. The "run name" is the directory on disk where all files from one run are stored. With the "start run" button the current run is started. If text has been added in the "notes" field, it will be written into the start.info file.


More information

ORTEC. Time-to-Amplitude Converters and Time Calibrator. Choosing the Right TAC. Timing with TACs

ORTEC. Time-to-Amplitude Converters and Time Calibrator. Choosing the Right TAC. Timing with TACs ORTEC Time-to-Amplitude Converters Choosing the Right TAC The following topics provide the information needed for selecting the right time-to-amplitude converter (TAC) for the task. The basic principles

More information

High granularity scintillating fiber trackers based on Silicon Photomultiplier

High granularity scintillating fiber trackers based on Silicon Photomultiplier High granularity scintillating fiber trackers based on Silicon Photomultiplier A. Papa Paul Scherrer Institut, Villigen, Switzerland E-mail: angela.papa@psi.ch Istituto Nazionale di Fisica Nucleare Sez.

More information

A new operative gamma camera for Sentinel Lymph Node procedure

A new operative gamma camera for Sentinel Lymph Node procedure A new operative gamma camera for Sentinel Lymph Node procedure A physicist device for physicians Samuel Salvador, Virgile Bekaert, Carole Mathelin and Jean-Louis Guyonnet 12/06/2007 e-mail: samuel.salvador@ires.in2p3.fr

More information