Signal Reconstruction of the ATLAS Hadronic Tile Calorimeter: implementation and performance

G. Usai (on behalf of the ATLAS Tile Calorimeter group)
University of Texas at Arlington
E-mail: giulio.usai@cern.ch

Abstract. TileCal, the central hadronic section of the ATLAS calorimeter, is a sampling calorimeter made of steel and scintillating tiles. The TileCal front-end electronics read out about 10000 photomultipliers at 40 MHz, measuring energies ranging from 30 MeV to 2 TeV. The read-out system is designed to provide the ATLAS High Level Trigger with reconstructed PMT signals within the time budget allowed by the First Level Trigger (LVL1) maximum trigger rate of 75 kHz. The signal amplitude, time and a reconstruction quality factor are obtained for each PMT using Optimal Filtering techniques implemented in Digital Signal Processors (DSPs).

1. Introduction
ATLAS [1] is a general purpose experiment designed to explore the physics landscape in proton-proton collisions at the unprecedentedly high energies of the Large Hadron Collider at CERN. After many years of careful preparation, ATLAS received the first collisions from the LHC in December 2009. After a short overview of the Tile Calorimeter read-out system, we discuss the implementation of the Optimal Filtering algorithms, highlighting the constraints imposed by the use of DSPs. We then report on the validation of the DSP algorithm and present its performance as measured in calibration and collision events.

2. Tile Calorimeter Read-Out
The Tile Calorimeter [2] is required to measure particle energies in a dynamic range corresponding to 16 bits, extending from the typical muon energy deposition of a few hundred MeV to the most energetic jets of particles, which in rare cases can deposit up to two TeV in a single cell. A scheme with a double readout using two independent 10-bit ADCs was chosen to cover this range. The PMT pulse is shaped, then fanned out and amplified in two separate branches with a nominal gain ratio of 64 [3]. The two output pulses, referred to as high gain (HG) and low gain (LG), have a fixed width (FWHM) of about 50 ns and an amplitude proportional to the energy deposited in the cell. The two pulses are digitized simultaneously by two ADCs at 40 MSPS; the time series (samples) for each pulse is stored in the Data Management Unit [4], which also performs some first processing. Each pulse is sampled seven times in physics mode; up to nine samples may be recorded for calibration purposes. The high gain ADC is normally used, unless the time series contains measurements out of the ADC range, which triggers the use of the low gain ADC readout, as in the sketch below. The samples are kept in digital pipelines on the detector and, if the event is accepted by the LVL1 trigger system, are sent to the back-end electronics, the Read Out Driver boards (ROD) [11].
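As a minimal sketch of this gain selection logic, assuming 10-bit ADCs delivering counts in the range 0-1023 (the actual selection is performed in the on-detector electronics [3],[4]; the function name and interface here are hypothetical):

def select_gain(hg_samples, lg_samples, adc_max=1023):
    # Use the high-gain samples unless any of them is out of the
    # 10-bit ADC range, in which case fall back to the low gain.
    if any(s >= adc_max for s in hg_samples):
        return "LG", lg_samples
    return "HG", hg_samples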

Each ROD (see figure 1) receives data from eight calorimeter modules through optical links and synchronizes the data so that all the event fragments are associated with the LHC bunch crossing (BCID) selected by the LVL1 trigger. Digital Signal Processors (DSPs) are used for the implementation of the reconstruction algorithms, as described below. After the reconstruction, the event fragments are partially assembled and sent to the Read Out Buffers (ROBin) for further processing by the Second Level Trigger.

Figure 1. Scheme of the Read Out Driver dataflow.

Figure 2. Pulse functional form showing the ADC samples (dots) and the definition of the reconstructed quantities.

3. Optimal Filtering
Figure 2 shows an analog signal pulse and the ADC measurement samples, and illustrates the main characteristics of the pulse: amplitude, arrival phase and baseline level, or pedestal. Since the LHC is a synchrotron, the phase of the calorimeter signals of interaction events is expected to be synchronized with the LHC clock and constant within very small fluctuations; the residual fluctuations are mainly due to the longitudinal spread of the bunches. The ADC measurement phase can be adjusted to compensate for delays and particle time of flight. Under these conditions the pulse time dependence can be linearized to a good approximation and the signal reconstructed with linear techniques. The Optimal Filtering method was chosen for its expected performance and its simplicity [5]. The algorithm extracts the three main parameters of the shaped signal, the amplitude A, the phase τ and the baseline level p, using linear combinations of the samples S_i with sets of weights a_i, b_i and c_i:

A = \sum_{i=1}^{n} a_i S_i, \qquad A\tau = \sum_{i=1}^{n} b_i S_i, \qquad p = \sum_{i=1}^{n} c_i S_i,   (1)

where n is the total number of samples. The weights are obtained by minimizing the variance of the parameters against the electronic noise and Minimum Bias pile-up fluctuations [6].
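In code, the reconstruction of equation (1) reduces to three dot products over the n samples; a minimal sketch in Python, assuming the weight sets a, b and c have been precomputed as described below:

import numpy as np

def optimal_filter(samples, a, b, c):
    # Equation (1): amplitude, phase and pedestal as linear
    # combinations of the ADC samples S_i.
    S = np.asarray(samples, dtype=float)
    A = np.dot(a, S)                              # amplitude
    tau = np.dot(b, S) / A if A != 0.0 else 0.0   # phase: (A*tau)/A
    p = np.dot(c, S)                              # pedestal (baseline)
    return A, tau, p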

To give an example, the weights a_i used in the amplitude computation are defined by the system of n + 3 equations:

\sum_{i=1}^{n} a_i g_i = 1, \quad \sum_{i=1}^{n} a_i g'_i = 0, \quad \sum_{i=1}^{n} a_i = 0, \quad \sum_{j=1}^{n} a_j R_{ij} - \lambda g_i - \kappa g'_i - \nu = 0 \;\; (i = 1, \dots, n),   (2)

where g_i = g(t_i) is the known signal pulse shape function, g'_i its first derivative, λ, κ, ν are Lagrange multipliers, and R_{ij} is the noise autocorrelation matrix, defined as

R_{ij} = \frac{\langle (n_i - \bar{n}_i)(n_j - \bar{n}_j) \rangle}{\sqrt{\langle (n_i - \bar{n}_i)^2 \rangle \langle (n_j - \bar{n}_j)^2 \rangle}},   (3)

where n_i denotes the samples recorded of the electronic noise or of other fluctuations due to Minimum Bias pile-up. Two similar systems of n + 3 equations hold for the weights b_i and c_i used for the phase and pedestal computation. Note in equation (1) that the baseline level is not subtracted from the samples, but the constraint \sum_{i=1}^{n} a_i = 0 guarantees that any constant added to the samples S_i will not contribute to the amplitude. In order to flag possible reconstruction failures, a reconstruction quality factor is defined as the square sum of the residuals:

QF = \sum_{i=1}^{n} \left[ S_i - \left( A g_i + A\tau g'_i + p \right) \right]^2.   (4)

The pulse shape function g(t_i) used to define the weights and the QF is measured from data. A single pulse shape template is used to describe all the channels in the calorimeter, since channel-by-channel differences were found to be unimportant [7]. Pulse templates extracted at different amplitudes, when rescaled, show a slight deformation of the pulse height in the tail region. The distortion is small (about 1%) and was proven to have a negligible effect on the linearity of the reconstructed amplitude and time [8],[7]. The QF is more sensitive to this distortion and shows a clear increase with the amplitude.

Figure 3. Behaviour of the weight a_4 as a function of the phase.

Figure 4. Behaviour of the weight b_4 as a function of the phase.

4. Implementation
The weights a_i, b_i, c_i, as defined above, depend on the phase used to sample the signals, as shown in figures 3 and 4. The detector has been timed in using calibration (laser) and collision events [9], and the phase offsets measured for each channel are stored in the database. Sets of weights are computed for all phases, covering the needed range in 0.1 ns steps.
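For illustration, the weight computation of equation (2) can be carried out offline by solving a single (n+3)-by-(n+3) linear system; a sketch assuming the pulse shape g, its derivative g', and the autocorrelation matrix R are tabulated at the chosen phase (this is the offline preparation step, not the DSP code, which only applies precomputed weights):

import numpy as np

def amplitude_weights(g, gprime, R):
    # Unknowns: the n weights a_i plus the three Lagrange
    # multipliers lambda, kappa, nu of equation (2).
    n = len(g)
    G = np.column_stack([g, gprime, np.ones(n)])
    M = np.zeros((n + 3, n + 3))
    M[:n, :n] = R          # sum_j a_j R_ij ...
    M[:n, n:] = -G         # ... - lambda*g_i - kappa*g'_i - nu = 0
    M[n:, :n] = G.T        # the three constraints on the a_i
    rhs = np.zeros(n + 3)
    rhs[n] = 1.0           # normalization: sum_i a_i g_i = 1
    return np.linalg.solve(M, rhs)[:n]

Analogous systems, with correspondingly modified right-hand sides, yield the b_i and c_i weights for the phase and pedestal.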

To reconstruct asynchronous data (e.g. cosmic rays), or to avoid relying on an a priori definition of the phases, an iterative method can be used in the reconstruction. In this mode equations (1) are initially evaluated with an arbitrary phase, and the selection of the weights at each subsequent step is based on the result of the previous one. The iterative method is slower and more sensitive to noise fluctuations. It is also worth noting that the sample acquisition window is larger than the design LHC bunch separation, so the iterative algorithm can pick up signals generated in bunch crossings other than the triggered one. For these reasons the default method is the non-iterative one; the iterative method is used only occasionally, for detector commissioning purposes. Due to the execution time limits, the maximum LVL1 rate sustainable using iterations is about 30 kHz, while the non-iterative method is well within the LVL1 trigger rate time requirement.

All the parameters needed by the reconstruction algorithm, such as weights, phases and calibration constants, are downloaded into the ROD/DSPs at configuration time. The reconstructed amplitude provided to the LVL2 trigger algorithms is calibrated to the calorimeter EM scale by applying channel-based calibration factors in the DSP. The DSP reconstruction is necessarily limited by the use of fixed-point arithmetic and by the internal precision available to describe the weights and calibration factors. Since division is a time-consuming operation in the DSP, the phase from equation (1), which requires a division by the amplitude, is computed using a look-up table of pre-defined energy reciprocals stored in the DSP memory. A further limitation is the size of the output data fragments [10]. The reconstruction results are packed in a 32-bit word for each channel: 15 bits are used to write the energy, 11 bits for the time and 4 bits to encode the quality factor; the remaining bits encode the gain and an overall channel reconstruction good/bad status, as in the sketch below. This will be discussed further in the next section.
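To make the output word concrete, here is a sketch of one possible packing with the stated field widths; the actual field order, scales and encodings are those defined in [10], not the ones chosen here:

def pack_channel_word(energy, time, qf, gain, good):
    # Hypothetical layout: 15-bit energy, 11-bit time, 4-bit quality
    # factor, 1-bit gain flag, 1-bit good/bad status = 32 bits.
    e = min(max(int(energy), 0), (1 << 15) - 1)   # clamp to 15 bits
    t = min(max(int(time), 0), (1 << 11) - 1)     # clamp to 11 bits
    q = min(max(int(qf), 0), (1 << 4) - 1)        # clamp to 4 bits
    return (e << 17) | (t << 6) | (q << 2) | ((gain & 1) << 1) | (good & 1)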

5. Validation
The ROD can be configured to send out both the reconstructed quantities and the raw data samples. Due to the output link bandwidth, this configuration is sustainable only up to a LVL1 trigger rate of about 45 kHz. The raw data obtained in this way can be reconstructed using well tested offline algorithms and used to validate the DSP implementation, to study the reproducibility of the results, and to fully commission the detector in the harsh environment of the LHC. As the ATLAS data taking rate increases, only a small fraction of the data will contain samples; data validation at the start is therefore important.

The first step in the validation of the DSP results is to verify the consistency of the online and offline implementations. Figures 5 and 6 show the residuals between the energy reconstructed online by the DSP and that reconstructed offline by an equivalent algorithm, showing the loss of precision due to the DSP limitations discussed above. Note that the degradation is slightly energy dependent. Since this error scales with the channel-level calibration factors, a band is shown covering from the 99% case to the worst case. The worst cases arise in a few PMTs with defective HV settings, whose response needs to be boosted by a large factor to compensate for the gain deficit. The maximum deviation is about 2 MeV in the high gain; for comparison, the electronic noise RMS level is about 30 MeV. The low gain has a precision of about 50 MeV, fully adequate in the range where signals are larger than approximately 8 GeV.

Figure 5. Numerical precision of the energy reconstructed in the ROD/DSP in the HG range, evaluated using collision data.

Figure 6. Numerical precision of the energy reconstructed in the ROD/DSP in the LG range, evaluated using collision data.

Figure 7 shows the difference between the time reconstructed in the DSP and offline as a function of the signal amplitude. Pseudo data are used for this study: data samples are generated according to the expected pulse shape, using a special module that emulates the front-end electronics and covers a wide range of phases and amplitudes; the ROD is interfaced with this module and reconstructs the data as if they were real. The precision of the reconstructed time depends on the energy and phase granularity used in the look-up table; the oscillations apparent in figure 7 are an artifact of this granularity. Figure 8 shows the same difference as a function of the phase of the pulse. The maximum rounding error is about 0.5 ns. The larger differences, up to 3 ns, arise for small amplitudes and very large phases (up to 60 ns); in these conditions the numerical precision is not an issue, since the reconstructed time is highly inaccurate in any case and the linear approximation discussed in section 3 does not hold.

Figure 7. Numerical precision of the time reconstructed in the ROD/DSP as a function of the signal amplitude.

Figure 8. Numerical precision of the time reconstructed in the ROD/DSP as a function of the signal arrival time.

To cross-check the DSP implementation, the fixed-point arithmetic and the precision used for the DSP constants and weights can be emulated in the offline reconstruction (a sketch of the idea is given at the end of this section). The differences in energy and time between the DSP and the offline reconstruction are identically zero when the DSP emulation is used [12].

One important further step in the validation is to evaluate the linearity and the resolution of the online non-iterative algorithm with respect to the offline iterative method. In this way we evaluate all the uncertainties due to numerical precision, to the understanding and description of the detector response and timing, and to the assumptions made in section 2. These studies have just started, and we are not yet able to give fully quantitative results. Figure 9 shows the DSP reconstructed time as a function of the offline time reconstructed with the iterative method. The DSP shows a good linearity for phases within 10 ns around the expected mean time; for larger times the deviation from linearity starts to become important. Figure 10 shows the relative error of the DSP reconstructed energy as a function of the phase. There is a maximum bias of about 1%/ns for pulses arriving out of the expected time. If needed, this bias can easily be corrected online once the understanding of the tails in the time distributions is finalized.

Figure 9. Time reconstructed in the DSP and time reconstructed offline with the iterative method.

Figure 10. Bias in the DSP energy reconstruction as a function of the reconstructed time, and second order correction.
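A minimal sketch of the emulation idea mentioned above: quantize the weights (and, likewise, the calibration constants) to a fixed-point grid before applying them, so that the offline result reproduces the DSP rounding bit for bit. The number of fractional bits used here is a placeholder, not the actual DSP word format:

import numpy as np

def quantize(x, frac_bits=15):
    # Round to the nearest value representable with frac_bits
    # fractional bits (placeholder precision for illustration).
    scale = float(1 << frac_bits)
    return np.round(np.asarray(x, dtype=float) * scale) / scale

def emulated_amplitude(samples, a, frac_bits=15):
    # Amplitude computed with fixed-point weights, emulating the
    # precision loss of the DSP arithmetic.
    return float(np.dot(quantize(a, frac_bits), samples))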

5.1. Handling of unexpected conditions
Failures in the reconstruction, or unexpected results, can trigger the conditional dumping of the raw data samples on a channel-by-channel basis, in order to make the raw data available for further offline processing. Examples of such conditions are ADC saturation, an unexpected reconstructed time, or a bad reconstruction quality. These conditions are defined by a programmable logic based on the comparison of the reconstructed quantities with thresholds and on other external conditions, as in the sketch below.
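A sketch of what such programmable logic amounts to; all threshold values here are illustrative placeholders, since the real conditions are configured in the DSP at run time:

def should_dump_raw_data(time, qf, saturated, time_window_ns=25.0, qf_max=15):
    # Attach the raw samples for this channel if any flag condition
    # fires: ADC saturation, a reconstructed time far from the
    # expected phase, or a poor reconstruction quality factor.
    return saturated or abs(time) > time_window_ns or qf > qf_max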

6. Summary and Conclusions
The energy reconstruction performed online in the Read Out Drivers is an important component of the ATLAS High Level Trigger. The evaluation of the algorithm performance started before collisions, using calibration, cosmic ray and pseudo data. The first collision data at 900 GeV and at 7 TeV have provided an estimate of the performance of the algorithm. The DSP results can be used in the High Level Trigger selection. Since the LVL1 trigger rate is currently lower than the design value, we can run in a transparent mode without losing any information, which can be recorded and made available for offline reprocessing. New results based on increased statistics and a better understanding of the data are expected.

References
[1] ATLAS Detector and Physics Performance: Technical Design Report, CERN-LHCC-99-014 (1999); CERN-LHCC-99-015 (1999).
[2] ATLAS Tile Calorimeter: Technical Design Report, CERN-LHCC-98-015 (1998).
[3] K. Anderson et al. Design of the front-end analog electronics for the ATLAS Tile Calorimeter. NIM A, 551:469-476, 2005.
[4] C. Bohm et al. ATLAS TileCal digitizer test system and quality control. ATLAS Note, ATL-TILECAL-2004-009, 2004.
[5] W. E. Cleland and E. G. Stern. Signal processing considerations for liquid ionization calorimeters in a high rate environment. NIM A, 338:467-497, 1994.
[6] E. Fullana et al. Optimal Filtering in the ATLAS Hadronic Tile Calorimeter. ATLAS Note, ATL-TILECAL-2005-001, 2005.
[7] T. Carli et al. Effect of Pulse-Shape Variations on the Energy Reconstruction in the ATLAS Tile Calorimeter. ATLAS Note, ATL-COM-TILECAL-2009-033, 2009.
[8] K. Anderson et al. Performance and Calibration of the TileCal Fast Readout Using the Charge Injection System. ATLAS Note, ATL-COM-TILECAL-2008-003, 2008.
[9] C. Clement et al. Time Calibration of the ATLAS Hadronic Tile Calorimeter using the Laser System. ATLAS Note, ATL-TILECAL-PUB-2009-003, 2009.
[10] B. Salvachua et al. Online energy and phase reconstruction during the commissioning phase of the ATLAS Tile Calorimeter. ATLAS Note, ATL-TILECAL-2008-004, 2008.
[11] A. Valero et al. ATLAS TileCal read out driver production. JINST, vol. 2, p. P05003, 2007.
[12] See pages 9, 10 and 12 of the slides shown at the conference.