Introduction. Trigger Hands-On Advanced Tutorial Session. A. Avetisyan, Tulika Bose (Boston University)

Transcription:

Introduction. Trigger Hands-On Advanced Tutorial Session. A. Avetisyan, Tulika Bose (Boston University). On behalf of the Trigger HATS team: Juliette Alimena, Len Apanasevich, Inga Bucinskaite, Darren Puigh, Dylan Rankin, Clint Richardson. August 13th, 2014

LHC. [Image placeholders: two figures could not be rendered - a sketch of proton bunches colliding in the LHC and a cartoon of a pp collision producing jets, electrons, Z bosons, a Higgs, SUSY particles, ...] Proton-proton collisions: ~3600 bunches/beam, ~10^11 protons/bunch, beam energy ~6.5 TeV (6.5x10^12 eV), luminosity > 10^34 cm^-2 s^-1. Partons (quarks, gluons) collide to produce the particles of interest: jets, leptons, Higgs, Z bosons, SUSY, ...

Summary of operating conditions: a good event (say, containing a Higgs decay) + ~25 extra "bad" minimum-bias interactions. Beam crossings at LEP, Tevatron & LHC: the LHC has ~3600 bunches (~2800 of them filled) and the same circumference as LEP (27 km). Distance between bunches: 27 km / 3600 = 7.5 m. Time between bunches: 7.5 m / c = 25 ns. Crossing rates and intervals: LEP (e+e-): 30 kHz, 22 µs; Tevatron Run I: 3.5 µs; Tevatron Run II: 396 ns; LHC (pp): 40 MHz, 25 ns.
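
A quick back-of-the-envelope check of these numbers (a minimal sketch; the 3600-bunch figure is the nominal bunch spacing, not the ~2800 bunches actually filled):

```python
# Back-of-the-envelope check of the LHC bunch-spacing numbers quoted above.
C_LHC = 26.7e3          # ring circumference in metres (~27 km)
N_BUNCHES = 3600        # nominal number of bunch slots per beam
SPEED_OF_LIGHT = 3.0e8  # m/s

spacing_m = C_LHC / N_BUNCHES           # distance between bunches
spacing_s = spacing_m / SPEED_OF_LIGHT  # time between bunch crossings
crossing_rate_hz = 1.0 / spacing_s

print(f"bunch spacing: {spacing_m:.1f} m, {spacing_s*1e9:.0f} ns")
print(f"crossing rate: {crossing_rate_hz/1e6:.0f} MHz")
# -> ~7.4 m, ~25 ns, ~40 MHz
```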

pp collisions at 14 TeV at 10^34 cm^-2 s^-1: ~25 minimum-bias events overlap. H → ZZ (Z → µµ): H → 4 muons is the cleanest ("golden") signature. Shown: reconstructed tracks with pT > 25 GeV. And this (not the H, though) repeats every 25 ns.

Physics Selection @ LHC. [Residue of a cross-section vs. mass plot: σ from mb down to fb and rates from GHz down to µHz for masses of 50-2000 GeV, covering inelastic scattering, qq̄, bb̄, W, Z, tt̄, gg→H_SM, qqH_SM, H_SM→γγ, h→γγ and scalar leptoquarks; already-observed processes sit at high rate, potential discoveries at low rate, with the bunch-crossing rate and storage rate marked.] Cross sections for the various processes vary over many orders of magnitude. Bunch-crossing frequency: 40 MHz; storage rate ~400-1000 Hz → online rejection > 99.99% → crucial impact on physics reach. Keep in mind that what is discarded is lost forever.
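
The required rejection follows directly from the rates quoted above (a minimal sketch using the numbers on this slide):

```python
# Required online rejection: 40 MHz of bunch crossings, ~1 kHz written to storage.
crossing_rate_hz = 40e6
storage_rate_hz = 1e3    # upper end of the 400-1000 Hz range quoted above

accepted_fraction = storage_rate_hz / crossing_rate_hz
print(f"accepted fraction: {accepted_fraction:.2e}")              # ~2.5e-05
print(f"required rejection: {(1 - accepted_fraction)*100:.3f} %")  # > 99.99 %
```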

The Challenge @ LHC. The Challenge / The Solution.
Process          σ (nb)     Production rate (Hz)
Inelastic        ~10^8      ~10^9
bb̄               5x10^5     5x10^6
W→ℓν             15         100
Z→ℓℓ             2          20
tt̄               1          10
H (100 GeV)      0.05       0.1
Z′ (1 TeV)       0.05       0.1
g̃g̃ (1 TeV)       0.05       0.1
H (500 GeV)      10^-3      10^-2

The Trigger. The Challenge / The Solution. [Same table of cross sections and production rates as on the previous slide.]

Trigger/DAQ challenges @ LHC:
- Number of channels ~ O(10^7), with ~25-50 interactions every 25 ns → need a large number of connections and an information super-highway.
- Calorimeter information should correspond to tracker information → need to synchronize detectors to better than 25 ns.
- Sometimes detector signal / time of flight > 25 ns → integrate information from more than one bunch crossing; need to correctly identify the bunch crossing.
- Can store data at O(100 Hz) → need to reject most events.
- Selection is done online, in real time → cannot go back and recover events; need to monitor the selection.

Trigger/DAQ Challenges. [Dataflow diagram:] 40 MHz collision rate → Level-1 trigger (charge, time, pattern) on 16 million detector channels with 3 Gigacell buffers → 100-50 kHz accept rate with ~1 MB of event data per crossing → 1 Terabit/s readout over 50,000 data channels (~400 readout memories, 200 GB buffers, 500 Gigabit/s into the switch network) → EVENT BUILDER: a large switching network (400+400 ports) with ~400 Gbit/s total throughput forms the interconnection between the sources (deep buffers) and the destinations (buffers before the farm CPUs) → EVENT FILTER: a set of high-performance commercial processors (~400 CPU farms, ~5 TeraIPS) organized into many farms convenient for online and offline applications → ~300 Hz of filtered events sent at ~Gigabit/s over the service LAN to computing services and a petabyte archive. Challenges: 1 GHz of input interactions; a beam crossing every 25 ns with ~25 interactions produces over 1 MB of data; archival storage at about 300 Hz of 1 MB events.

Triggering

General trigger strategy. Needed: an efficient selection mechanism capable of selecting interesting events - this is the TRIGGER (the needle in a haystack). General strategy:
- The system should be as inclusive as possible, robust, and redundant.
- Need high efficiency for selecting interesting physics processes: the selection should not have biases that affect physics results (understand biases in order to isolate and correct them).
- Need a large reduction of the rate from unwanted high-rate processes: instrumental backgrounds and high-rate physics processes that are not relevant (minimum bias).
This complicated process involves a multi-level trigger system.

Multi-level trigger systems. The L1 trigger selects 1 event out of ~10,000 (maximum output rate ~100 kHz). This is NOT enough: a typical ATLAS or CMS event size is 1 MB, and 1 MB x 100 kHz = 100 GB/s! What is the amount of data we can reasonably store these days? O(100) MB/s. Additional trigger levels are needed to reduce the fraction of less interesting events before writing to permanent storage.
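
In numbers, the bandwidth mismatch that motivates the additional trigger levels (a minimal sketch with the figures from this slide):

```python
# Why Level-1 alone is not enough: compare the L1 output bandwidth to what
# can realistically be written to permanent storage.
event_size_bytes = 1e6      # ~1 MB per ATLAS/CMS event
l1_output_rate_hz = 100e3   # ~100 kHz maximum L1 accept rate
storage_bandwidth = 100e6   # O(100) MB/s to permanent storage

l1_bandwidth = event_size_bytes * l1_output_rate_hz   # bytes/s after L1
extra_rejection = l1_bandwidth / storage_bandwidth

print(f"bandwidth after L1: {l1_bandwidth/1e9:.0f} GB/s")
print(f"additional rejection needed downstream: ~{extra_rejection:.0f}x")
# -> 100 GB/s after L1, so the higher trigger levels must reject another ~1000x
```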

Multi-tiered trigger systems. The Level-1 trigger is an integral part of all trigger systems: it always exists and reduces the rate to ~50-100 kHz. Downstream, further reduction is needed, typically done in 1 or 2 steps. [Diagram: ATLAS uses 3 physical levels (detectors → Lvl-1 with front-end pipelines → Lvl-2 with readout buffers → switching network → Lvl-3 processor farms); CMS uses 2 physical levels (detectors → Lvl-1 with front-end pipelines → readout buffers → switching network → HLT processor farms).]

A multi-tiered trigger system. Traditional 3-tiered system, for an accelerator with x ns between bunch crossings:
- Level 1: in 1/x GHz, out O(10) kHz; pipelined, hardware only, coarse readout, ~few µs latency.
- Level 2: in = L1 output, out O(1) kHz; hardware/software mix, uses L1 inputs, ~100 µs latency.
- Level 3: in = L2 output, out O(100) Hz; CPU farm with access to the full event information, O(1) s/event.
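
As an illustration, the rate reduction through such a chain can be written down directly (a minimal sketch with the representative numbers from this slide, not the exact ATLAS/CMS figures):

```python
# Representative 3-tier trigger chain: each level sees the previous level's
# output rate and applies its own reduction.
bunch_spacing_ns = 25
input_rate_hz = 1e9 / bunch_spacing_ns   # 1/x GHz -> 40 MHz for x = 25 ns

levels = [
    # (name, output rate in Hz, typical processing characteristics)
    ("Level 1", 10e3,  "few us latency, pipelined hardware"),
    ("Level 2", 1e3,   "~100 us latency, hardware/software mix"),
    ("Level 3", 100.0, "~1 s/event, CPU farm, full event data"),
]

rate = input_rate_hz
for name, out_rate, notes in levels:
    print(f"{name}: {rate:.0f} Hz -> {out_rate:.0f} Hz "
          f"(rejection ~{rate/out_rate:.0f}x; {notes})")
    rate = out_rate
```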

Two-tiered system. Two-level processing: reduce the number of building blocks; rely on commercial components for processing and communication.

Comparison. Three physical entities: invest in control logic and specialized processors. Two physical entities: invest in bandwidth and commercial processors.

Level-1 algorithms. Physics concerns: pp collisions produce mainly low-pT hadrons with pT ~ 1 GeV, whereas interesting physics has particles with large transverse momentum: W → eν (M(W) = 80 GeV, pT(e) ~ 30-40 GeV); H(120 GeV) → γγ (pT(γ) ~ 50-60 GeV). Requirements: impose high thresholds, which implies distinguishing particle types; this is possible for electrons, muons and jets, but beyond that complex algorithms are needed. Some typical thresholds from 2012: single muon with pT > 16 GeV; double e/γ trigger with pT > 17, 8 GeV; single jet with pT > 128 GeV. A total of 128 physics algorithms is possible at L1. Candidates carry energy, kinematics, quality, and correlations. See Len's talk.
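
To make the idea concrete, here is a minimal, purely illustrative sketch of applying L1-style pT thresholds to trigger candidates (the thresholds are the 2012 examples quoted above; the event structure is invented, and this is not the actual L1 firmware logic):

```python
# Illustrative L1-style threshold cuts on trigger candidates (not real L1 firmware).
# Each event is a dict of candidate pT lists in GeV.
def l1_accept(event):
    muons = sorted(event.get("muons", []), reverse=True)
    egamma = sorted(event.get("egamma", []), reverse=True)
    jets = sorted(event.get("jets", []), reverse=True)

    single_mu16 = len(muons) >= 1 and muons[0] > 16.0
    double_eg17_8 = len(egamma) >= 2 and egamma[0] > 17.0 and egamma[1] > 8.0
    single_jet128 = len(jets) >= 1 and jets[0] > 128.0

    # The event is kept if any algorithm fires (logical OR over the L1 menu).
    return single_mu16 or double_eg17_8 or single_jet128

event = {"muons": [18.2], "egamma": [12.0, 5.1], "jets": [64.0]}
print(l1_accept(event))  # True: the single-muon threshold is passed
```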

Particle signatures

ATLAS & CMS Level 1: only calorimeter and muon information is used. Calorimeters and muon detectors: simple algorithms, small amounts of data. Tracking: complex algorithms, huge amounts of data, high occupancy in the high-granularity tracking detectors.

High Level Trigger

HLT Processing. High-level triggers (beyond Level 1) are implemented more or less as advanced software algorithms using CMSSW, run on standard processor farms with Linux as the OS (cost-effective, since Linux is free) built from different Intel Xeon generations (2008-2012). HLT filter algorithms are set up in several steps: L1 seeds → L2 unpacking (muon/ECAL/HCAL) → local reconstruction (RecHits) → L2 algorithm → filter → L2.5 unpacking (pixels) → local reconstruction (RecHits) → L2.5 algorithm. Each HLT trigger path is a sequence of modules: a Producer creates/produces a new object (e.g. unpacking, reconstruction); a Filter makes a true/false [pass/fail] decision (e.g. muon pT > X GeV?). Processing of the trigger path stops once a module returns false. See talks by Juliette and Dylan.
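
A minimal sketch of the path model described above - a sequence of producer and filter modules with early exit at the first failing filter (illustrative Python, not actual CMSSW code; the module names are made up):

```python
# Toy model of an HLT path: producers add objects to the event, filters
# return pass/fail, and the path stops at the first filter that fails.
def hlt_path(event, modules):
    for module in modules:
        result = module(event)
        if result is False:   # a filter rejected the event
            return False
    return True               # all filters passed

# Hypothetical modules for a single-muon path.
def unpack_muons(event):
    event["muons"] = event.pop("raw_muons", [])      # producer: creates objects

def l2_muon_filter(event):
    return any(pt > 20.0 for pt in event["muons"])   # filter: pass/fail decision

event = {"raw_muons": [23.5, 7.2]}
print(hlt_path(event, [unpack_muons, l2_muon_filter]))  # True
```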

HLT Menu. Many algorithms run in parallel and are logically independent. Together they determine the trigger decision and how to split the events, online and offline (streams and primary datasets - more on this later).

HLT Guidelines. Strategy/design: use the offline software as much as possible - it is easy to maintain (the software can be easily updated) and uses our best (bug-free) understanding of the detector - but optimize it for running online (~100 times faster than offline): run the fastest algorithms first, reject events as early as possible, use regional unpacking/reconstruction, and reduce combinatorics/pileup. Boundary conditions: have access to the full event data (full granularity and resolution); take advantage of regions of interest to speed up reconstruction. Limitations: CPU time (see Clint's talk); output selection rate ~400-1000 Hz (see Inga's talk); precision of the calibration constants - all while keeping the physics acceptance as high as possible. See talks by Dylan and Darren.

HLT Requirements. Flexible: working conditions at 14 TeV are difficult to evaluate, so prepare for different scenarios. Robust: HLT algorithms should not depend in a critical way on alignment and calibration constants. Inclusive selection: rely on inclusive selections to guarantee maximum efficiency for new physics. Fast event rejection: events that are not selected should be rejected as fast as possible (i.e. early in the processing). Quasi-offline software: offline software used online should be optimized for performance (we need to select events that are "interesting enough").

Trigger Menus. Need to address the following questions: What to save permanently on mass storage? Which trigger streams should be created? What is the bandwidth allocated to each stream? (Usually the bandwidth depends on the status of the experiment and its physics priorities.) What selection criteria to apply? Inclusive triggers (to cover major known or unknown physics channels), exclusive triggers (to extend the physics potential of certain analyses, say b-physics), prescaled triggers, and triggers for calibration & monitoring. General rule: trigger tables should be flexible, extensible (e.g. to different luminosities), and should allow the discovery of unexpected physics.
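
The prescaled triggers mentioned above simply keep only every N-th event that passes a path; a minimal illustrative sketch (the counter-based scheme shown here is a common approach, not necessarily the exact CMS implementation):

```python
# Toy prescale: keep 1 out of every N events that pass a given trigger path.
class PrescaledTrigger:
    def __init__(self, prescale):
        self.prescale = prescale
        self.counter = 0

    def accept(self, path_passed):
        if not path_passed:
            return False
        self.counter += 1
        # Keep only every N-th passing event; the rest are discarded.
        return self.counter % self.prescale == 0

trig = PrescaledTrigger(prescale=100)   # reduces this path's rate by 100x
kept = sum(trig.accept(True) for _ in range(10_000))
print(kept)  # -> 100
```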

Streams. The HLT is responsible for splitting the data into different streams, with different purposes, different event content, and different rates. Stream A collects all the data for physics analysis and is further sub-divided into Primary Datasets (PDs).
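
Conceptually, the primary-dataset assignment is a mapping from the set of fired HLT paths to output datasets; a minimal illustrative sketch (the path and dataset names are invented for the example):

```python
# Toy assignment of an event to primary datasets based on which HLT paths fired.
# Path and dataset names here are invented for illustration.
primary_datasets = {
    "SingleMuon": {"HLT_IsoMu24", "HLT_Mu40"},
    "DoubleEG":   {"HLT_Ele17_Ele8"},
    "JetHT":      {"HLT_PFJet320"},
}

def datasets_for_event(fired_paths):
    # An event can enter more than one primary dataset if it fired paths
    # belonging to several of them (overlaps between PDs are allowed).
    return [pd for pd, paths in primary_datasets.items() if paths & fired_paths]

print(datasets_for_event({"HLT_IsoMu24", "HLT_PFJet320"}))
# -> ['SingleMuon', 'JetHT']
```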

High Level Trigger @ 13 TeV in 2015. The higher collision energy leads to higher cross sections: comparing 8 TeV and 13 TeV MC simulation we observe a factor 1.5-2 for leptons and a factor > 4 for jets; assume an average increase by a factor ~2. Higher luminosity: ~1.4e34 cm^-2 s^-1, a factor ~2 higher than the peak luminosity in 2012. Together this gives a factor ~4 increase in the expected HLT rate. Pileup will be higher too: maximum average pileup ~40 (compared to ~30 in 2012). The HLT rate is fairly robust against pileup, but the HLT timing increases linearly with pileup. Bottom line: we need to make better use of the available bandwidth, improve online reconstruction and calibration, and design smarter and better triggers.
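
The factor-of-four rate estimate is just the product of the two scalings quoted above (a minimal sketch):

```python
# Expected HLT rate scaling from 2012 (8 TeV) to 2015 (13 TeV) conditions.
cross_section_factor = 2.0   # average cross-section increase assumed above
luminosity_factor = 2.0      # ~1.4e34 cm^-2 s^-1 vs the 2012 peak, per the slide

print(f"expected HLT rate increase: ~x{cross_section_factor * luminosity_factor:.0f}")
# -> ~x4
```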

Trigger Coordination. Trigger Coordinators: Tulika Bose, Roberto Carlin. Deputies: Andrea Bocci, Simone Gennai. Groups: Strategy, Trigger Evaluation And Monitoring (Roberta Arcidiacono, Muriel Vander Donckt); Software Tools, Online Release, Menu (Martin Grunewald, Andrea Perrotta); Field Operations Group (Aram Avetisyan, Marina Passaseo). Responsibilities and contacts: Rates & Prescales: I. Bucinskaite, L. Apanasevich, TBD; Menu Development and OpenHLT: Z. Demiragli, H. Gamsizkan; Data & MC Release Validation: D. Puigh, TBD; Offline DQM: D. Puigh, TBD; Menu Integration & Validation: J. Alimena, G. Smith; Framework & Tools: M. Rieger; ConfDB: V. Daponte, S. Ventura; Online Deployment: TBD; Rate/CPU Monitoring: C. Richardson, D. Salerno, Y. Yang; Online DQM: TBD; Calibration/Alignment: J. Fernandez.

POG/PAG Trigger Conveners

TSG Open Positions. FOG: Online Deployment (development of software and tools for DAQ2); on-call expert training and documentation; Online DQM; on-call experts for Run 2. STEAM: Rates & Prescales (rate and timing studies for the overall HLT menu); Validation/DQM (coordinate the validation of new HLT menus, new software releases, and AlCa conditions); maintenance of group software tools.