Run Coordination Summary
C. Gemme, INFN Genova, on behalf of Run Coordination
August 16th, 2016
LHC Cardiogram
[Plot: fills with 2076b, 2173b, 2172b]
- Slow dump of the ATLAS Toroid due to an electrical glitch during work on the network; 7 h recovery in the interfill.
- 1. Dipole A31L2 investigations and mitigations
- 2. PS vacuum leak
LHC Cardiogram
[Plot: 3x3, 2172b, 2220b fills]
- 2220b should be the maximum number of bunches this year.
The problem: two quenches while ramping down RB.A12 from 6 kA at -10 A/s:
- 10 June 2016 @ 547 A
- 3 Aug 2016 @ 295 A → the second event triggered detailed investigations.
Could be explained by an inter-turn short in dipole A31L2.
Follow-up (Wednesday): additional instrumentation, measurement campaign, various types of cycles in the evening and overnight.
Thursday: analysis of measurements. No sign of changes in the A31L2 short.
High-current quenches and Fast Power Aborts must be avoided: they would destroy the magnet.
Mitigations:
✓ Remove global protection mechanism: implemented on Thursday and validated.
✓ Reduce BLM thresholds: changed on Thursday/Friday.
✓ Increase QPS thresholds on A31L2: new board installed on Thursday and successfully validated.
LHC plans
Plan:
- Continue physics with 2220 bunches.
- Slowly increase bunch intensity up to 1.2e11 per bunch.
- Target a restricted range of bunch flattening for LHCb (from the current fill: 0.95-1.15 ns → 0.95-1.05 ns).
In discussion:
- Decrease crossing angle from 375 to 300 urad → ~10% more luminosity; affects the z-length of the luminous region and pile-up.
- Special fill request by CMS: low-mu running.
- Remove week 43 from pp running to have one more week for training 2 sectors to 7 TeV.
- Luminosity levelling test already this year.
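The quoted ~10% luminosity gain from lowering the crossing angle follows from the geometric reduction factor for bunches colliding at an angle. A rough cross-check, with assumed 2016-like beam parameters (the bunch length and IP beam size below are illustrative assumptions, not values from this slide):

```python
import math

def reduction_factor(theta_c, sigma_z, sigma_star):
    """Geometric luminosity reduction factor F = 1/sqrt(1 + phi^2),
    where phi = (theta_c / 2) * sigma_z / sigma_star is the Piwinski angle."""
    phi = (theta_c / 2.0) * sigma_z / sigma_star
    return 1.0 / math.sqrt(1.0 + phi ** 2)

# Assumed 2016-like parameters (illustrative, not from the slide):
sigma_z = 0.08       # RMS bunch length [m]
sigma_star = 12e-6   # transverse RMS beam size at the IP [m]

f_375 = reduction_factor(375e-6, sigma_z, sigma_star)  # current angle
f_300 = reduction_factor(300e-6, sigma_z, sigma_star)  # proposed angle
gain = f_300 / f_375 - 1.0   # relative luminosity gain, ~10-15% here
```

With these assumptions the gain comes out slightly above 10%, consistent with the estimate quoted in the discussion.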
ATLAS Week
Fill 5181: 2076b
Fill 5183: 2173b
Fill 5187: 2172b
Fill 5194: 3b
Fill 5197: 2172b
Fill 5198: 2172b
Fill 5199: 2220b
[Plot annotations: Slow Dump ATLAS Toroid; Cosmics data taking]
TDAQ (Silvia Fressard-Batraneanu, Jeremy Love)
- Smooth running.
- Patch for HLTSV available to fix occasional resource starvation. Patch pending installation; needs at least one run with high rates before putting in production.
- Problematic Tile ROL fibres: in collaboration with Tile, organizing the installation of new spares.
- Weekly physics efficiency: 94.03%.
[Chart: trigger held by system]
Pixel (Marcello Bindi)
IBL timing:
- IBL timing not optimal since the TIM replacement after MD; since then, running with 2 BC.
- Special fill 5183 on Monday with individual bunches; a timing scan of -10 ns/+10 ns performed.
- Timing constants in the TIM have been applied: we recovered most of the hits, but we still seem to have a fraction of clusters on tracks from the neighbouring bins.
- 1 BC on-time hit efficiency > 99% ⇒ 1 BC in the next fill.
IBL calibration:
- IBL calibration was finally recovered after the sw upgrade during MD; re-tuning needed to compensate for TID effects.
- New version of fw for IBL and Layer 2 with debugging capabilities to analyze the TTC-TIM stream.
Best week of the year for Pixel in terms of data-taking efficiency ⇒ dead-time = 0.41%.
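The timing-scan procedure above (vary the readout delay, measure the on-time hit fraction at each setting, then program the best constant into the TIM) can be sketched schematically; the scan values below are made up for illustration, not results from fill 5183:

```python
def best_delay(scan_results):
    """Return the delay setting with the highest measured on-time hit
    fraction (schematic stand-in for deriving new TIM timing constants)."""
    return max(scan_results, key=scan_results.get)

# Hypothetical -10 ns .. +10 ns scan: {delay_ns: on-time hit fraction}
scan = {-10: 0.62, -5: 0.88, 0: 0.995, 5: 0.90, 10: 0.65}
optimal = best_delay(scan)
```

In the real scan the figure of merit is the fraction of clusters landing in the correct bunch-crossing bin rather than in the neighbouring ones.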
SCT/TRT (Chiara Debenedetti, A. Bocci, D. Derendarz, A. Romaniouk)
SCT:
- In general very quiet running.
- In fill 5183, one problematic link not shown as removable by the shifter and not automatically recoverable.
- Set the ROD formatter overflow limit to 1600 (> max number of hits per link, to protect against non-physical patterns); deployed on a few RODs in stable beam, ok.
TRT:
- Stable data taking.
- DAQ: also needs replacement of one TTC board that works correctly during data taking but fails in test pulses.
- FastOR: observed lower rate than expected. Caused by the change of the readout mode for high-threshold bits (from March 2016), from three-bit readout to single middle bit. May be reverted in a future cosmics run.
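The formatter overflow cut is a simple sanity bound: a link reporting more hits than is physically possible indicates a corrupted, non-physical pattern rather than real occupancy. A minimal sketch of the idea (function name and usage are illustrative, not ATLAS code):

```python
OVERFLOW_LIMIT = 1600  # from the slide: above the max physical hits per link

def link_data_is_physical(n_hits):
    """Accept a link's hit count only if it is within the physically
    possible range; anything above the limit is treated as a corrupted
    pattern to be suppressed (sketch only)."""
    return 0 <= n_hits <= OVERFLOW_LIMIT
```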
L1Calo / L1Topo (Kate Whalen)
L1Calo:
- Generally a quiet week for L1Calo. Monitoring improvements.
L1Topo:
- Test of complex deadtime with random triggers: bucket 3 changed from 7/260 -> 14/260 -> 15/260 (not deployed yet, needs more testing).
- Muon items enabled since Saturday morning, fill 5197. Total rate ~15 kHz (L1), 60 Hz (HLT).
- Timing checks: all algorithms are well-timed.
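The bucket settings quoted for the complex deadtime (e.g. 15/260) can be read as a token bucket of depth 15, refilled at one token per 260 bunch crossings; each accepted trigger spends a token and triggers arriving with the bucket empty are vetoed. A simplified model of that semantics (an assumption for illustration, not the actual firmware logic):

```python
def leaky_bucket(trigger_bcids, depth=15, refill_bc=260):
    """Simplified complex-deadtime model: a token bucket of `depth`
    tokens; one token is restored every `refill_bc` bunch crossings,
    and each accepted trigger consumes one token."""
    tokens = float(depth)
    last_bc = 0
    accepted = []
    for bc in sorted(trigger_bcids):
        tokens = min(depth, tokens + (bc - last_bc) / refill_bc)
        last_bc = bc
        if tokens >= 1.0:
            tokens -= 1.0
            accepted.append(bc)
    return accepted
```

In this model a burst of 20 back-to-back triggers would have only the first 15 accepted, while well-spaced triggers all pass, which is the intended effect: limiting instantaneous bursts without vetoing the average rate.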
LAr (Jose Benitez)
HW:
- An HV module (EMEC A) exchanged on Sunday due to 4 problematic channels. To be followed up whether the readings were correct.
- M167.C3 (HEC C): HV decreased due to a new short.
- HEC A cell 0x3b1a5200 (tower 0x4110400) disabled at each run; consider whether to permanently disable this shaper switch. Reprocessing to take it into account.
DQ/Monitoring:
- Under test: online flagging of Mini Noise Bursts (previously offline).
- Added DQMD checks for PU removal: turning yellow if 1-2 PUs are disabled, red if >2, to complement the information in the Shifter Assistant.
TILE link failure (Silvia Fracchia)
- Repeated stopless removals of ROL ROD5 EBC33-36 (4 neighbouring modules, intolerable defect for DQ), starting on Sunday at 21:18 during stable beam, with consequent HLT problems. Caused >3 hours interruption in data taking.
- Several tests and attempts to recover it (power cycles, TTC restarts, turning off affected modules). Finally turned out to be a problem with the ROD-ROS link, similar to what occurred on 13th July in LBA.
- The fibre was eventually replaced with the working one out of two spares; the substitute fibre has low optical power due to the fibre end being misaligned in the connector.
- A reflectometry measurement on Monday spotted a problem in the same location for the downstream links.
- Emergency plan: restore a spare fibre from two bad ones.
- Short-term plan: install a few additional spare fibres (2 per ROD crate).
TILE link failure (Rafał Bielski)
- With the Tile ROL disabled, any chain trying to read data from Tile was sending events to the debug stream at L1 rate.
- 21:25: switched to standby keys to mitigate the HLT backpressure for 40.
- 22:00: disabled all jet/MET chains.
- Recovery completed at 01:00.
Muons (Claudio Luci)
CSC:
- CSC latency is now set using the ATLAS latency (instead of setting it manually and comparing it to the ATLAS latency).
MDT:
- The RCD crash has been fixed. This was also the cause of some failed recoveries.
- The fake chamber drop reported to CHIP by RCD is under study.
RPC:
- Access to the cavern to disconnect a module from an HV channel.
- A few other cases under close monitoring and investigation.
TGC:
- New Sector Logic firmware was deployed to reduce the L1_MU4 rate.
Other subsystems
Trigger:
- New sw release deployed on Tuesday → fine.
- Multiple keys deployed during the week, following LHC programs, including overnight.
Cosmics run:
- Decrease in TRT rate wrt earlier cosmic runs due to the change in readout (March), which can be reverted for the next cosmic run.
Data preparation:
- Everything is working fine on the data processing side; the GM infrastructure is working smoothly too. The main point now is to update references.
Lucid:
- Running smoothly. The only pending issue is the automatic recovery from all SEU occurrences, ongoing.
BCM/BLM/DBM:
- BCM/BLM: some minor hw interventions done or in progress.
- DBM: debug and commissioning when there was no LHC running.
Conclusions
- In general, smooth data taking with increasing efficiency. The only serious problem was the Tile link on Sunday evening.
- Next week: MD2 and 2.5 km commissioning.
- 6/7 weeks of pp running left.