Triggering at ATLAS
Talk by Johannes Haller, Uni Hamburg, at the ATLAS-D Meeting, September 2006
Outline: Trigger Challenge at the LHC; Technical Implementation; Trigger Strategy, Trigger Menus, Operational Model, Physics Analyses and all that
Physics Goals at the LHC
EW symmetry breaking? - search for the Higgs boson
Extensions of the Standard Model? - search for SUSY or other BSM physics
What else? - top, EW, QCD, B-physics
(Feynman diagrams of Higgs production, e.g. HZ with H -> γγ and Z -> µµ, and of SUSY cascade decays omitted)
The trigger question: what events do we need to take?
- physics events: µ, γ, e, τ, jets, ET,miss
  - high-pT objects (un-pre-scaled)
  - low-pT objects (pre-scaled or in exclusive selection)
- monitor events
- calibration events
Simple answers?
Event Rates and Multiplicities
Cross section of p-p collisions: σtot(14 TeV) ~ 100 mb, σinel(14 TeV) ~ 70 mb
Event rate: R = L x σinel = 10^34 cm^-2 s^-1 x 70 mb = 7 x 10^8 Hz
  (L = luminosity = 10^34 cm^-2 s^-1; σinel = inelastic cross section = 70 mb; bunch-crossing interval Δt = 25 ns)
Interactions per bunch crossing: N = R x Δt = 7 x 10^8 s^-1 x 25 x 10^-9 s = 17.5
  = 17.5 x 3564/2808 (not all bunches filled) ~ 23 interactions / bunch crossing (pile-up)
Multiplicities: nch ~ 50 charged particles / interaction
  Nch = nch x 23 ~ 1150 charged particles / BC
  Ntot = Nch x 1.5 ~ 1725 particles / BC
With every bunch crossing, 23 minimum-bias events with ~1725 particles are produced.
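The arithmetic on this slide can be checked with a few lines of Python (all inputs are the slide's own values; the slide rounds the filled-bunch correction up to 23 interactions per crossing):

```python
# Back-of-the-envelope LHC rate and pile-up arithmetic from the slide.
L = 1e34              # design luminosity [cm^-2 s^-1]
sigma_inel = 70e-27   # inelastic cross section: 70 mb = 70e-27 cm^2
R = L * sigma_inel    # interaction rate [Hz] -> 7e8
dt = 25e-9            # bunch-crossing interval [s]
N = R * dt            # mean interactions per bunch crossing -> 17.5
N_filled = N * 3564 / 2808   # only 2808 of 3564 bunches filled -> ~22-23
n_ch = 50                    # charged particles per interaction
N_ch = n_ch * N_filled       # charged particles per BC -> ~1100-1150
N_tot = 1.5 * N_ch           # all particles per BC -> ~1700
print(R, N, N_filled, N_ch, N_tot)
```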
Looking for Interesting Events
(event display: Higgs -> ZZ -> 2e + 2µ superimposed on 23 minimum-bias events)
Another Constraint: ATLAS Event Size
Pile-up and adequate precision need small-granularity detectors:

Detector   Channels    Fragment size [kB]
Pixels     1.4*10^8     60
SCT        6.2*10^6    110
TRT        3.7*10^5    307
LAr        1.8*10^5    576
Tile       10^4         48
MDT        3.7*10^5    154
CSC        6.7*10^4    256
RPC        3.5*10^5     12
TGC        4.4*10^5      6
LVL1       -            28

ATLAS event size: 1.5 MB (140 million channels); at 40 MHz this is ~60 TB/s of raw data
Affordable mass storage: 300 MB/s -> storage rate < 200 Hz -> ~3 PB/year for offline analysis
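The bandwidth and yearly-volume numbers follow directly from the table, assuming roughly 10^7 seconds of running per year:

```python
# Data-volume arithmetic for the event-size slide, using the fragment
# sizes from the table above (in kB).
fragments_kb = {"Pixels": 60, "SCT": 110, "TRT": 307, "LAr": 576,
                "Tile": 48, "MDT": 154, "CSC": 256, "RPC": 12,
                "TGC": 6, "LVL1": 28}
event_size_mb = sum(fragments_kb.values()) / 1000.0   # -> ~1.56 MB ("1.5 MB")
raw_rate_tb_s = event_size_mb * 40e6 / 1e6            # full 40 MHz BC rate
storage_mb_s = 200 * 1.5                              # 200 Hz of 1.5 MB events
yearly_pb = storage_mb_s * 1e7 / 1e9                  # ~10^7 s of running/year
print(event_size_mb, raw_rate_tb_s, storage_mb_s, yearly_pb)
```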
The Trigger Challenge
Total interaction rate: ~1 GHz; BC rate: 40 MHz; storage: ~200 Hz
-> online rejection: 99.9995% (!), crucial for physics and discoveries (!)
A powerful trigger is needed: enormous rate reduction while retaining the rare events in the very tough LHC environment.
Remember: the 0.000005 storage fraction must be shared:
- physics triggers: high-pT physics (un-pre-scaled); low-pT physics (pre-scaled, exclusive)
- technical triggers: monitor triggers, calibration triggers, ...
Technical Implementation
ATLAS Trigger: Overview
3-level trigger system:
1) LVL1 (hardware): decision based on data from the calorimeters and the muon trigger chambers; synchronous at 40 MHz; bunch-crossing identification; latency 2.5 µs
2) LVL2 (software): uses Regions of Interest (identified by LVL1); accesses data (ca. 2% of the event) with full granularity from all detectors; ~10 ms per event
3) Event Filter (software): has access to the full event and can perform more refined event reconstruction; ~1 s per event
LVL1 Trigger Overview
Calorimeter trigger: Pre-Processor (analogue ET) feeds the Cluster Processor (e/γ, τ/h) and the Jet/Energy-sum Processor -> multiplicities of e/γ, τ/h and jets for 8 pT thresholds each; flags for ΣET, ΣET(jets), ETmiss over thresholds
Muon trigger: Muon Barrel Trigger (RPC) and Muon End-cap Trigger (TGC) -> Muon-CTP Interface (MuCTPI) -> multiplicities of µ for 6 pT thresholds
Central Trigger Processor (CTP): forms the L1A signal, distributed via the TTC system
LVL1 latency: 2.5 µs = 100 BC
LVL1 Calorimeter Trigger
Electronic components (installed in the counting room outside the cavern; heavily FPGA-based): Pre-Processor Modules (PPMs), 6 CPMs and 7 JEMs per crate
Example: e/γ algorithm: identify a 2x2 RoI with a local ET maximum; cluster/isolation cuts on various ET sums; goal: good discrimination between e/γ and jets
Available thresholds: EM (e/γ): 8-16; τ/hadron: 0-8; jets: 8; forward jets: 8; ΣET, ΣET(jets), ETmiss: 4 each
Output at 40 MHz: multiplicities for e/γ, jets, τ/had and flags for the energy sums to the Central Trigger Processor (CTP); for accepted events: positions of the objects (RoIs) to LVL2 and additional information to the DAQ
LVL1 Muon Trigger
Dedicated muon chambers with good timing resolution for the trigger: barrel (|η| < 1.0): Resistive Plate Chambers (RPCs); end-caps (1.0 < |η| < 2.4): Thin Gap Chambers (TGCs)
Algorithm: local track finding for LVL1 is done on-detector (ASICs), looking for coincidences in the chamber layers; the programmable width of the coincidence windows determines the pT threshold
Available thresholds: muon: 6
LVL1 Trigger Decision in the CTP
The CTP (one 9U VME64x crate, FPGA-based, located in USA15) is the central part of the LVL1 trigger system.
Inputs: signals from the LVL1 systems (8-16 EM, 0-8 TAU, 8 JET, 8 FWDJET, 4 XE, 4 JE, 4 TE, 6 MUON); other external signals, e.g. MB scintillators; internal signals: 2 random rates, 2 pre-scaled clocks, 8 bunch groups
Calculation of the trigger decision for up to 256 trigger items, e.g. XE70+JET70:
raw trigger bits -> application of pre-scale factors -> actual trigger bits -> application of veto/dead time -> L1A
Note: 2 different dead-time settings: trigger groups with high and low priority will see different luminosities!
All of these steps need to be taken into account in offline data analysis.
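The decision chain (raw item bits, pre-scaling, dead-time veto) can be sketched in Python. This is an illustrative toy, not the real firmware logic: the item names, the counter-based pre-scaler and the single dead-time flag are all simplifications.

```python
# Toy sketch of the CTP decision chain: raw trigger bits per item,
# pre-scale factors, and a common veto/dead-time stage before L1A.
class PrescaledItem:
    def __init__(self, name, condition, prescale):
        self.name, self.condition, self.prescale = name, condition, prescale
        self.counter = 0
    def fires(self, event):
        if not self.condition(event):      # raw trigger bit
            return False
        self.counter += 1                  # keep every N-th raw fire
        return self.counter % self.prescale == 0

items = [
    PrescaledItem("XE70_JET70", lambda ev: ev["xe"] > 70 and ev["jet"] > 70, 1),
    PrescaledItem("MBTS", lambda ev: ev["mbts"], 1000),  # heavily pre-scaled
]

def ctp_decision(event, dead):
    if dead:           # veto/dead time suppresses L1A regardless of the items
        return False
    # Note: the real CTP forms all actual trigger bits in parallel;
    # any() short-circuits and is only meant to illustrate the OR.
    return any(item.fires(event) for item in items)
```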
Interface to HLT: the RoI Mechanism
LVL1 triggers on (high-pT) objects; L1Calo and L1Muon send Regions of Interest (RoIs) to LVL2 for the e/γ/τ/jet/µ candidates above threshold.
LVL2 uses the RoIs as seeds for reconstruction with full granularity; only data in the RoIs are used.
Advantage: the total amount of transferred data is small, ~2% of the total event data, which can be dealt with at 75 kHz.
The EF runs after event building and has full access to the event.
ATLAS Trigger & DAQ Architecture
LVL2 and EF run in large PC farms on the surface; DAQ and HLT are closely coupled.
HLT hardware: DESY, Humboldt.
A pre-series system (corresponding to ~10% of the HLT) is in place.
Staging of HLT Components

Component               2006  2007  2008  2009
L2P (LVL2 PC)             30   270   510   510
SFI (EventBuilder)        32    48    96   128
EFP (EventFilter PC)      93   837  1581  1922
SFO (storage element)      3    10    10    10

(deferred due to financial constraints; the SFOs are non-deferred to allow bandwidth for calibration, debugging, etc.)
Throughput per component: max LVL1 rate per L2P: 150 Hz; EventBuilder rate per SFI: 40 Hz; max EB rate per EFP: 2 Hz; physics storage rate per EFP: 0.1 Hz; storage rate per storage element: 60 MB/s (40 Hz for 1.5 MB events)
Consequences for physics, e.g. in 2007/2008: LVL1 rate ~40 kHz (cf. design: 75/100 kHz); physics storage ~80 Hz (cf. design: 200 Hz)
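The quoted physics consequences follow from multiplying the staged component counts by the per-component rates, e.g. for the 2007 column:

```python
# Rate-budget check for the staged HLT farm, numbers from the table above.
counts = {  # year: (L2P, SFI, EFP, SFO)
    2006: (30, 32, 93, 3),
    2007: (270, 48, 837, 10),
    2008: (510, 96, 1581, 10),
}
per_unit = {"L2P": 150, "SFI": 40, "EFP_eb": 2, "EFP_store": 0.1}  # Hz

for year, (l2p, sfi, efp, sfo) in sorted(counts.items()):
    lvl1 = l2p * per_unit["L2P"]                         # max LVL1 rate
    eb = min(sfi * per_unit["SFI"], efp * per_unit["EFP_eb"])  # EB bottleneck
    storage = efp * per_unit["EFP_store"]                # physics storage rate
    print(year, lvl1 / 1e3, "kHz LVL1;", eb, "Hz EB;", storage, "Hz storage")
```

For 2007 this gives ~40.5 kHz LVL1 and ~84 Hz physics storage, matching the "~40 kHz / ~80 Hz" on the slide.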
Trigger Strategy
HLT Selection Strategy (Example: Dielectron Trigger)
Fundamental principles:
1) Step-wise processing and decision: inexpensive (in data and time) algorithms first, complicated algorithms last.
2) Seeded reconstruction: algorithms use the results from previous steps; the initial seeds for LVL2 are the LVL1 RoIs; LVL2 confirms & refines LVL1; the EF confirms & refines LVL2.
Note: the EF tags accepted events according to the physics selection (-> streams, offline analysis!)
ATLAS trigger terminology: trigger chain; trigger signature (called "item" at LVL1); trigger element
Trigger Chains
The HLT steering enables running of trigger chains in parallel without interference. Trigger chains are independent:
- easy to calculate trigger efficiencies
- easy to operate the trigger (finding problems, predictable behaviour)
- scalable system: in principle an N-level trigger system, but only one pre-scale per chain per level (to be discussed if used in the HLT)
ATLAS follows the early-reject principle: look at the signatures one by one, i.e. do not try to reconstruct the full event upfront; if no signatures are left, reject the event.
This saves resources: it minimizes data transfer and the required CPU power. (Martin zur Nedden, Berlin)
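The early-reject principle can be sketched in a few lines of Python. The chain steps and event fields below are invented for illustration; the real HLT steering is far more elaborate:

```python
# Illustrative early-reject processing: a chain is a list of signature
# tests ordered cheapest-first, and the event is dropped as soon as one
# signature fails, so expensive steps never run on rejected events.
def run_chain(event, steps):
    """steps: list of callables event -> bool, cheapest first."""
    for step in steps:
        if not step(event):    # early reject: stop at the first failure
            return False
    return True                # all signatures satisfied -> chain accepts

# Hypothetical dielectron chain: cheap calorimeter confirmation of the
# LVL1 EM RoIs first, then the more expensive track matching.
dielectron_chain = [
    lambda ev: ev["n_em_clusters"] >= 2,
    lambda ev: ev["n_matched_tracks"] >= 2,
]

def hlt_decision(event, chains):
    # Chains are independent; the event is kept if any chain accepts it.
    return any(run_chain(event, chain) for chain in chains)
```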
Physics Analysis: the Trigger Part
Every physics analysis needs dedicated thought about the trigger:
- the trigger rejects 99.9995% of the events, i.e. it applies more or less hard cuts (in the signal region)
- each trigger has an inefficiency that needs to be corrected for (turn-on curve)
This is similar to the offline reconstruction efficiency, but with an important difference: there is no retrospective optimization. The events are lost forever.
-> trigger optimization (as early as possible) and trigger data quality during data-taking are crucial.
(example: trigger optimisation; typical turn-on curve of L2Calo)
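A turn-on curve like the one referenced here is commonly parametrised with an error function: sharp triggers have a narrow turn-on, and the efficiency well above threshold reaches a plateau below 1. The parameter values in this sketch are invented for illustration, not fitted ATLAS numbers:

```python
# Illustrative trigger turn-on curve: efficiency vs. offline ET, modelled
# as eff(ET) = 0.5 * plateau * (1 + erf((ET - ET_half) / (sqrt(2)*sigma))).
import math

def turn_on(et, plateau=0.98, et_half=22.0, sigma=2.0):
    """Hypothetical efficiency vs offline ET for a nominal ~20 GeV threshold."""
    return 0.5 * plateau * (1.0 + math.erf((et - et_half) / (math.sqrt(2) * sigma)))

# Far below threshold the efficiency is ~0, at et_half it is plateau/2,
# and well above threshold it saturates at the plateau value.
print(turn_on(10.0), turn_on(22.0), turn_on(40.0))
```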
Physics Analysis: the Trigger Part (workflow)
Analysis preparation:
- set up / optimize a trigger for your physics signal (threshold? more exclusive? pre-scaling?); iterate until it is OK
- define a trigger strategy (based on the available resources); convert it into a trigger chain (does one already exist?); determine rates and efficiencies from MC
- define a monitoring strategy: which trigger chain is used to monitor your physics trigger (efficiency from data)? what are the rates of the monitoring trigger (pre-scales?)
- integrate this into the overall trigger menu (done by Trigger Coordination for online running)
Data-taking: use the trigger online (take data); monitor the trigger quality; determine the trigger efficiency (from data); correct your measurement.
Trigger Efficiency from Data
Example: possible monitoring of inclusive lepton triggers:
- reconstruct good Z0 candidates offline (triggered by at least one electron trigger)
- count the second electrons fulfilling the trigger requirement
(plots: reconstructed Z0 peak; trigger efficiency vs. η for electron and positron; time evolution of the accuracy; total efficiency for muons)
Note: selection biases have to be carefully checked! The trigger efficiency may depend on the physics sample (e.g. electrons in W -> eν and in top events) -> investigate in the physics groups. Studies of this kind are important and are just starting in ATLAS.
Other methods: di-object samples (J/Ψ, Z0, Z0+jets); minimum-bias and pre-scaled low-threshold triggers ("bootstrap"); orthogonal selections in the HLT (ID, muon, calo); ...
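The Z0 method above is a tag-and-probe measurement and can be sketched as follows; the event representation is invented for illustration:

```python
# Minimal tag-and-probe sketch: in offline-selected Z0 candidates where
# the "tag" electron fired the trigger, the fraction of "probe" electrons
# that also fired it estimates the per-electron trigger efficiency.
def tag_and_probe_efficiency(z_candidates):
    """z_candidates: iterable of (tag_fired, probe_fired) booleans."""
    probes = trig_probes = 0
    for tag_fired, probe_fired in z_candidates:
        if not tag_fired:        # the event must be triggered by the tag
            continue
        probes += 1
        trig_probes += probe_fired
    return trig_probes / probes if probes else 0.0

# Toy sample: 100 tag-triggered Z0 candidates, 90 with a firing probe.
sample = [(True, True)] * 90 + [(True, False)] * 10 + [(False, True)] * 5
print(tag_and_probe_efficiency(sample))  # -> 0.9
```

In practice the efficiency is binned in pT and η, and the selection bias noted on the slide has to be checked before the numbers are used.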
LVL1 Menu (as of today, TDR)
The general trigger problem: covering as much as possible of the kinematic phase space for physics calls for low trigger thresholds, while keeping the trigger rate low calls for high trigger thresholds -> the trigger menu is a compromise.
The LVL1 rate is dominated by electromagnetic clusters: 78% of the physics triggers.
Note: large uncertainties on the predicted rates. A study of the global aspects is needed: load balancing (e.g. jet triggers).
HLT Menu (as of today, TDR)
The e/γ rate is reduced mainly at LVL2 (full granularity in the RoI).
Note: large uncertainties on the predicted rates (no data!). These menus give a rough impression of what we will select; the details (pre-scales, monitoring, ...) are not yet worked out, but first examples of realistic trigger menus are needed soon.
Towards a more complete Menu
Aim: get concrete examples of more complete and realistic trigger menus for discussion at the next trigger and physics weeks.
An ad-hoc group has started rethinking the trigger menus and invites input from the physics, combined-performance and detector groups.
Slice-wise studies: optimization of cuts (needs distributions of rates, rate vs. efficiency); more realism in the algorithms; detailed studies of the threshold behaviour and noise; consequences for the physics reach.
Global aspects: load balancing (e.g. of the jet triggers); overlap between selections; optimization of the important details of the menu.
Priorities: consolidate work on the menu for 14 TeV and 10^31; in parallel a limited study for 0.9 TeV and 10^29; later look at 10^32 and above.
Open points: monitoring strategy; pre-scaling strategy (dynamic vs. static): triggers in concurrent data-taking (pre-scales) or sequentially (i.e. dedicated runs)? time evolution (luminosity, background, etc.): pre-scale changes a la H1/CDF? technical triggers (bunch groups, etc.); ...
Ideas for early Data-Taking
Conditions of early data-taking: initial luminosity 10^31 (10^29) cm^-2 s^-1, bunch spacing 75 ns (~500 ns) -> BCID not critical, the trigger timing windows can be relaxed -> trigger commissioning; understanding of LVL1 is crucial at startup.
First phase: rates are low and the DAQ can stand 400 MB/s -> LVL1 only, HLT transparent; some pre-scaling needed only for the very low thresholds; HLT selections studied offline.
Second phase: insert the HLT, starting with very simple and basic algorithms.
Minimum-bias events are especially important at the beginning: crucial for timing-in of the experiment; for commissioning of the detectors, trigger and offline selection; for physics, both as background (important for 14 TeV) and per se.
Possible implementations: a BC trigger at LVL1 + selection on LVL2/EF (bias-free at LVL1), needed where interactions per BC << 1; or an MBTS trigger at LVL1 + selection in the HLT (some bias at LVL1: η range, efficiency for MIPs, multiplicity requirements, etc.).
The technical Side: Trigger Configuration
The TrigConf system is under development. In real data-taking the trigger menu can change between runs (optimization, falling luminosity during a fill -> pre-scales, cuts), so book-keeping of all settings is crucial.
The TriggerDB is the central part: it stores all information for the online selection (LVL1 and HLT) and all versions of the trigger settings, identified by a unique key to be stored in the CondDB.
Offline, data analyzers will have to look up the TriggerDB to interpret the trigger result in the events, e.g. to find the settings for their triggers and the corresponding run ranges.
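The book-keeping idea (every menu version gets a unique key, and runs record only the key) can be sketched in Python. The schema, class and item names below are invented for illustration and are not the real TriggerDB interface:

```python
# Toy sketch of keyed, versioned trigger-menu book-keeping: each stored
# menu version gets a unique key; events/runs carry only that key, so an
# offline analyzer can look the settings back up.
class ToyTriggerDB:
    def __init__(self):
        self._menus = {}     # key -> immutable snapshot of the menu
        self._next_key = 1
    def store(self, menu):
        key = self._next_key
        self._next_key += 1
        self._menus[key] = dict(menu)   # snapshot: later edits get a new key
        return key                      # unique key, to be recorded per run
    def lookup(self, key):
        return self._menus[key]

db = ToyTriggerDB()
key_run1 = db.store({"EM25i": {"prescale": 1}, "MU6": {"prescale": 100}})
key_run2 = db.store({"EM25i": {"prescale": 1}, "MU6": {"prescale": 200}})
# An analyzer interpreting run 2 retrieves the pre-scale actually used:
print(db.lookup(key_run2)["MU6"]["prescale"])  # -> 200
```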
The technical Side: Trigger Configuration (cont'd)
A Java front-end for the TriggerDB is under development: the TriggerTool. Three modes are foreseen:
- experts: construct consistent menus in the TriggerDB
- shift crew: choice of predefined options (menus, pre-scale sets)
- offline users: extract menus to a text file for development or simulation; browse the DB to find the settings of triggers and the run ranges
German Contributions
Institutes: Heidelberg, Mainz, DESY/Humboldt/HH, (Siegen), (Wuppertal), (MPI)
Hardware: L1Calo Preprocessor: Heidelberg; L1Calo Jet/Energy Module: Mainz; HLT computing racks: DESY, Humboldt
Technical software around the trigger: Trigger Configuration: DESY/HH; Trigger Monitoring: DESY/Humboldt
Simulation, algorithms, performance: CTP simulation: DESY/HH; MB trigger: DESY/Humboldt; jets, ETmiss: Mainz; B-physics: Siegen (planned), Wuppertal (finished); B-tagging on LVL2: MPI (planned for SLHC); muons: Heidelberg, Mainz
Trigger strategy: operation, HLT steering: DESY/HH; combined trigger menu: Mainz, DESY/HH; pre-scaling: DESY/HH
Summary
Triggering at the LHC is crucial for physics: only 0.000005 of the events are selected, and the cuts and efficiencies affect the results.
Each data analyzer must understand the trigger: choice of trigger, trigger optimization, trigger (in-)efficiency: how to measure it (from data)? how to correct for it?
More complete and realistic trigger menus for (early) data-taking need to be developed.
German contributions in many areas (HW+SW): very good collaboration!