4 THE PILE-UP SYSTEM

4.1 Introduction

The Pile-Up (PU) system was originally designed to detect multiple interactions in the same bunch crossing and to remove crowded events at the hardware trigger level (Pile-Up Veto). Events containing more than one interaction were expected to create a difficult environment for b-physics analyses. The following aspects concerning events with multiple vertices were considered [84]:

- They have a higher track multiplicity and are therefore more difficult to reconstruct, leading to a high rate of ghost tracks¹ and a lower track-finding efficiency.
- They may lead to ambiguities in assigning secondary b-vertices to their primary interaction vertex, resulting in an incorrect decay-time reconstruction.
- They could result in a wrong labelling of b-decay vertices as primary interaction vertices, or vice versa.
- They may reduce the performance of the opposite-side flavour-tagging method².

The Pile-Up Veto was specifically introduced to veto multiple interactions [85, 86]. The decision to apply the veto at the hardware level was taken to free trigger bandwidth for other L0 trigger lines, for instance lines with low thresholds on the transverse energy or momentum of candidates. For this reason the PU is required to provide information to the L0 trigger system at the 40 MHz bunch crossing frequency. This information includes the number of detected pp collision vertices per bunch crossing, which would also allow the PU to be used for an online relative luminosity measurement, based on the observation of single, double or multiple vertices. Moreover, the detector was foreseen to provide a trigger for beam-gas events, being the only subdetector able to give a track (multiplicity) measurement in the negative z-direction. As such, it could trigger beam2-gas events, i.e. collisions between the beam travelling along negative z and gas molecules present in the beam pipe.

At the LHCb design luminosity of 2 × 10³² cm⁻² s⁻¹ and with 1300 bunches, the average number of visible pp interactions per bunch crossing (µ)³ is such that a sizeable fraction of crossings contains more than one interaction. In 2012 the experiment has been running at an operational luminosity of 4 × 10³² cm⁻² s⁻¹, with a correspondingly larger value of µ. Nevertheless, we have not vetoed events based on the PU vertexing decision, since a new efficient trigger selection was introduced in the High Level Trigger and crowded events turned out to be less harmful than expected.

¹ Ghost tracks are random combinations of hits collected in the LHCb detector and wrongly matched during the reconstruction of track patterns.
² Flavour tagging is the identification of the flavour of reconstructed B⁰ and B̄⁰ mesons at production. To achieve this, one of the algorithms implemented at LHCb infers the flavour of the signal B from the identification of the flavour of the other b-hadron (tagging B), and it is therefore named the opposite-side tagging algorithm.
³ See Sec. for a description of the relation between luminosity, bunch scheme and number of visible interactions in LHCb.

Hence, the Pile-Up vertexing information is no longer used according to its original veto design, but the PU has been employed in the following tasks:

- In the L0 trigger, it supplies a hit-multiplicity measurement in the negative z-direction, providing a first-order estimate of the backward track multiplicity. This information was used in the first year of data taking as a global event cut for minimum-bias triggers, and was subsequently implemented in the beam-gas trigger lines (see Sec. 7.1). In fact, the PU hit multiplicity allows beam1-gas and beam2-gas events to be distinguished, where the collision with gas molecules involves the beam travelling along the positive and negative z directions, respectively.
- In the measurement of the offline luminosity, the total backward PU hit multiplicity has been used to cross-check the offline relative luminosity measurement, as described in Sec.
- In the online system, the PU provides an estimate of the number of empty crossings, i.e. crossings with zero reconstructed vertices, at the 40 MHz frequency. This information was used in 2011 to monitor the instantaneous luminosity (see Sec. 7.3).

The detector was successfully commissioned and has been operational since the beginning of LHCb data taking. In this chapter we describe the Pile-Up data-taking architecture, the implementation of the raw-data decoding and the tests performed to tune the system. In Chaps. 5 and 6 we respectively present a study of the sensor alignment and the optimisation of the Pile-Up Veto algorithm.

4.2 Description of the system

The Pile-Up detector consists of two parallel measurement planes, perpendicular to the beam line and positioned upstream of the VELO, in the backward region of the LHCb interaction point. Each plane consists of two 300 µm thick silicon microstrip stations, or halves, displaced by 15 mm in z. A distinctive property of the Pile-Up detector is its double output: similarly to the VELO, it provides analog output data formed by strip (cluster) measurements, but, due to the requirement of being used in the L0 trigger, it also provides digital output data obtained from the OR of four neighbouring strips. In Fig. 17 we show a sketch of a detector half carrying VELO and PU sensors, together with an enlarged schematic view of the z-location of the PU stations. The detector stations on the positive (negative) x-side are placed at z = −300 (−315) mm and z = −220 (−235) mm from the centre of the nominal interaction region; the two stations on each side are thus separated by 80 mm⁴. While each VELO station consists of a pair of R- and φ-sensors mounted back to back on a support, together forming a so-called module, a PU station has only one R-sensor attached to the module support. In this sense, the PU modules are treated as VELO modules without a φ-sensor. Each PU sensor has the same half-disk geometry as a VELO R-sensor and counts 2048 strips, 512 per sector, as shown in the scheme in Fig. 18 (a). The strips are read out by custom-designed Beetle chips [70], visible in Fig. 18 (b). The Beetles are able to simultaneously provide analog signals in VELO-mode readout (i.e. at each positive L0 trigger) and digital signals in Pile-Up-mode readout (i.e. every 25 ns clock cycle). Before describing the readout system in more detail, in Tab. 2 we list the PU module and sensor numbers, as they are defined in the LHCb software, together with the corresponding slot labels.
⁴ The station's z position is defined from the centre plane of the module base.
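As a concrete illustration of the digital output just described, the following sketch groups the 2048 strips of one sensor into the 512 binary channels obtained from the OR of four neighbouring strips. This is hypothetical code written only for this document, not part of the LHCb software; the names strip_hits, N_STRIPS and digital_channels are invented for the example.

    # Minimal sketch, assuming strip_hits is a list of 2048 booleans that are
    # True where the discriminated strip signal is above the comparator threshold.
    N_STRIPS = 2048
    STRIPS_PER_CHANNEL = 4  # four neighbouring strips are OR-ed into one channel

    def digital_channels(strip_hits):
        """Return the 512 binary channel values of one PU sensor."""
        assert len(strip_hits) == N_STRIPS
        return [any(strip_hits[i:i + STRIPS_PER_CHANNEL])
                for i in range(0, N_STRIPS, STRIPS_PER_CHANNEL)]

    # Example: a single strip above threshold lights up exactly one of the
    # 512 digital channels, whose pitch is four times the strip pitch.
    hits = [False] * N_STRIPS
    hits[42] = True
    assert sum(digital_channels(hits)) == 1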

Figure 17. Top: sketch of one half of the VELO detector; the PU modules are the two modules in red and yellow at the back. Bottom: schematic representation of the two PU stations and the first two VELO stations, viewed from the top. The left stations correspond to positive values of the x coordinate in the LHCb global frame (A-side), while the right stations correspond to negative values of x (C-side). The VELO sensors are arranged in pairs of R- and φ-sensors (in blue and red, respectively) mounted back to back on the module support; the cooling block is always attached to the downstream side of the module. Similarly, each PU sensor is mounted on the module support with the z position defined at the module centre plane, even though the corresponding φ-sensor is missing. Here the cooling block is always on the opposite side of the sensor. To be able to use identical designs for cables and modules, half of the PU modules are mounted mirrored, and the two modules closer to the VELO have cooling blocks facing backward.

Figure 18. (a): scheme of a PU (VELO) R-measurement sensor; (b): picture of one of the VELO R-sensors.

Table 2. Module and sensor nomenclature for the Pile-Up detector. The software numbers for modules, Nmod, are even (odd) numbers for modules at positive (negative) x, starting from 0 (1) and increasing with the z position (see Fig. 17). The software numbers for sensors, NSi, are defined as NSi = Nmod + 128. The module slot labels consist of two letters describing the type (namely PU), two digits numbering the slot (starting from 01 for the most upstream) and a last letter identifying the detector half (L = left, R = right). The sensor orientation (t = top, b = bottom) indicates the side where the cables come out of the vacuum tank.

SW sensor nr. (NSi) | SW module nr. (Nmod) | slot label | side and orientation
128 | 0 | PU01L | +x (A), b
129 | 1 | PU01R | −x (C), t
130 | 2 | PU02L | +x (A), t
131 | 3 | PU02R | −x (C), b

4.3 Digital readout of the Pile-Up

The analog output of the Pile-Up detector is processed in the same way as the analog signals provided by a VELO module [87], while a dedicated electronic chain takes care of the digital signal processing. Figure 19 shows an overview of the electronic boards and the system architecture, with both the analog and the digital-related components. Figure 20 focuses instead on the digital components, which are described in detail below.

Hybrids. The four PU hybrids carry the silicon sensors and the Beetle readout chips. The strips are concentric arcs, each covering about 45° in the φ coordinate. They are placed at increasing pitch, from 40 µm at the inner radius of 8 mm to about 100 µm at the outer radius of 41.9 mm [68]. There are 2048 strips per sensor and they are read out by 16 custom-designed Beetle chips with fast comparator outputs. The signals coming from four sequential strips are OR-ed after discrimination and combined into a single digital output. The 2048 strips per sensor then give 512 digital output channels, with an effective digital channel pitch corresponding to four times that of the analog channels.

Figure 19. Overview of the Pile-Up system architecture. The coloured blocks represent boards unique to the PU, while the other boards are also used by other subdetector systems. The dashed boxes represent the locations of the various components.

Optical Transmission (OPTO Tx) Boards. The signals coming from the hybrids are sent to 8 Optical Transmission Boards, two per sensor. The OPTO Tx Boards align the signals in time with respect to the LHC bunch crossing and multiplex the input signals into the Vertex Processing Boards. The multiplexed signals are referred to as hit maps. The OPTO Tx Boards also attach Bunch Crossing IDentifier labels to the data⁵.

Vertex Processing Boards (VEPROBs). There are four Vertex Processing Boards, each connected to the OPTO Tx Boards via 24 high-speed optical links and controlled by a credit-card PC (CCPC). The first VEPROB receives hit maps from one bunch crossing in four consecutive clock cycles, the second VEPROB receives hit maps from the following bunch crossing, again in four consecutive clock cycles, and so on. The bunch-crossing data distribution thus follows a round-robin scheme, and only one VEPROB has valid data for a given bunch crossing⁶. These boards represent the key component of the PU detector, as they carry the FPGAs⁷ running the Pile-Up algorithm, described in Ch. 6. The trigger decision is sent via the Output Board to the Level-0 Decision Unit (L0DU). Moreover, the VEPROBs store the binary data of the PU for each bunch crossing while waiting for a L0-yes decision, and subsequently send them to the Optical Digital TELL1 Board.

Output Board. A single Output Board multiplexes the Pile-Up decisions from the four VEPROBs and sends the trigger information, e.g. the number of primary interactions per bunch crossing⁸, to the Level-0 Decision Unit. Moreover, the board makes histograms of the Pile-Up trigger decisions, which are accessible through the ECS interface.

Optical Digital TELL1 (OPTO DIGI) Board. The VEPROBs send the binary data to a digital TELL1 Board, named OptoDigi, via 8 optical links in total. The OptoDigi is a standard TELL1 Board with optical receiver cards, but with Pile-Up-specific firmware.

⁵ See Sec. 3.1 for more information on the BCID.
⁶ The VEPROB data are sent to the L0DU when a L0-yes is received from ODIN; this happens on average at a 1 MHz frequency.
⁷ FPGAs are Field Programmable Gate Arrays.
⁸ See Sec. 6.1 for a complete description of the trigger information.
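The round-robin assignment of bunch crossings to the VEPROBs can be made concrete with a small sketch. This is hypothetical illustration code, not LHCb firmware or software; only the four-board count and the round-robin idea are taken from the text, everything else is invented.

    # Minimal sketch of the round-robin distribution: consecutive bunch crossings
    # are assigned to the four VEPROBs in turn, so for any given crossing only
    # one board holds valid hit maps.
    N_VEPROBS = 4

    def veprob_with_valid_data(bcid):
        """Index (0-3) of the VEPROB holding valid data for this bunch crossing."""
        return bcid % N_VEPROBS

    # Crossings 0, 1, 2, 3 go to boards 0, 1, 2, 3; crossing 4 returns to board 0.
    assert [veprob_with_valid_data(b) for b in range(5)] == [0, 1, 2, 3, 0]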

Figure 20. Overview of the components dedicated to the PU digital signal processing. OPTO Digi is the Optical Digital TELL1, L0DU is the Level-0 Decision Unit and VEPROBs are the Vertex Processing Boards. A number next to a line indicates how many links/cables are represented by that straight line.

The signals of the receiver cards are sent to four PreProcessor FPGAs (PP-FPGAs), the main processing units on the board, via 24 links. Of these, only 8 are actually connected, of which 2 carry valid data (those from the VEPROB with valid hit maps for the given bunch crossing). The valid data are buffered into the PU zero-suppressed (ZS) digital banks, while the data carried by all 24 links⁹ are buffered into the PU non-zero-suppressed (NZS) digital banks. NZS banks therefore contain partially empty data. We describe both types of PU banks in the next section.

4.4 PU data processing and decoding

In this section we present the overall data-processing flow of the Pile-Up system. After a general overview of the data types, we describe the different data banks generated by the PU. The banks are decoded into a set of software objects in a more human-readable format, used for monitoring applications. We also give a brief description of the decoding algorithms.

4.4.1 PU data and bank types

The signals collected from the front-end chips and processor boards of the Pile-Up are transferred to TELL1 boards. These are LHCb common readout boards which, thanks to the extensive use of large FPGAs, allow different data-collection algorithms to be implemented for the various subdetectors [88]. The output data format of the PU TELL1 boards is described by the RawBank format [89]. The RawBanks of all LHCb subdetectors are collected in a larger bank called RawEvent, which is implemented as a map of all existing banks. Figure 21 shows a scheme of the PU banks and the corresponding data types. The digitised PU analog signals are encoded in banks identical to those of the VELO, while the banks collecting data from the digital output stream are specific to the PU system. Even though the analog stream is not used for reconstruction purposes in LHCb, it is provided to facilitate the tuning of the digital stream, as explained in Sec. For both streams, the transmitted data consist almost entirely of zero-suppressed (ZS) banks, at the 1 MHz L0-trigger frequency. Only 1 Hz of the transmitted data is accumulated in non-zero-suppressed (NZS) mode. For the analog signals, the ZS data stream is obtained from the NZS one by applying pedestal subtraction, common-mode noise subtraction and clustering algorithms; the zero-suppressed data therefore consist of cluster information. Similarly, the binary PU signals are streamed into both NZS and ZS data banks, but in this case they contain the same hit-map information. The main difference between the two lies in the bank format: here zero suppression simply stands for the removal of empty bank words, i.e. words not carrying any information, as explained in detail in Sec. 4.4.3. The zero suppression is hardly beneficial for the PU algorithm, but the binary NZS stream is provided to fulfil the experiment requirements on data transmission. Below we list the existing PU bank types:

- VeloFull, data bank containing the NZS digitised analog signals from the PU.
- Velo, standard bank collecting all analog ZS data after the TELL1 clusterisation process; see Ref. [90] for a description of the algorithm.

⁹ 16 links are empty by default and the other 6 are dummies.

Figure 21. Sketch of the Pile-Up data streams (white boxes) and corresponding data banks (grey boxes).

- L0PUFull, bank containing the NZS signals collected from the digital output of the Pile-Up system.
- L0PU, bank storing the ZS data from the digital output of the PU.
- L0PUError, bank containing information on synchronisation errors on the TELL1 boards, currently disabled.

4.4.2 Processing of the PU data: analog output

Data stored in the VeloFull and Velo RawBanks are decoded by appropriate software packages in the LHCb framework. The Velo bank decoder reads the information directly from the clusters, while the unpacking of the VeloFull bank consists of two main steps: an ordering phase, which arranges the data segments in the proper order, and a decoding phase, which organises the output into software objects called VeloTELL1Data. A TELL1 emulator can be executed to create a Velo bank from a VeloFull one: it uses the VeloTELL1Data and emulates the algorithms implemented on the TELL1 board, in order to correctly tune the algorithm parameters. For a more detailed description of the VELO data processing, see Ref. [91].

4.4.3 Processing of the PU data: digital output

In Figs. 22 and 23 we present the format of the Pile-Up data banks L0PUFull and L0PU: the information is encoded in blocks of 32-bit words, and the size of these blocks depends on the bank type. The PU non-zero-suppressed bank (Fig. 22) has 2 header words and a data body of 4 blocks (PP0-PP3). The bank header contains information on the size, version and type of the bank. All the non-header words carry the event-by-event hit-map information.

Figure 22. Format of the Pile-Up raw data bank L0PUFull; see text for explanation.

Figure 23. Format of the Pile-Up raw data bank L0PU; see text for explanation.

Each bit corresponds to a certain hit position on the detector, in terms of binary channel number and sensor. The four data blocks correspond to the output of the four PP-FPGAs of the OPTO DIGI Board. Every PP-FPGA processes 6 Optical Links (OL); every OL receives 35 words per L0-yes trigger, to which the OptoDigi TELL1 adds one initial word set to 0. Since only the first 8 links are actually connected and only 2 carry useful information (due to the round-robin scheme), part of the NZS bank necessarily contains empty and dummy data. A 5-word section is also placed at the end of each block to encode additional event and error information. In total, the non-header part of the bank consists of 4 sections (221 words each: 6 × 36 data words plus the 5-word trailer).

The PU zero-suppressed bank has instead a simpler structure: again a bank header of two words, identical to the L0PUFull bank header, plus two data sections of 34 words each. These sections contain the non-empty and non-dummy data of the NZS bank, corresponding to the only VEPROB output carrying valid data for a given bunch crossing.

We implemented a decoder in the LHCb framework¹⁰ to unpack the data stored in the raw banks and translate them into software objects with a human-readable format. We first reorder the raw-bank words according to the required specifications [92], then we convert the content of the words into a detector hit map, and finally we store the hits into VeloClusters, such that the output of the algorithm complies with the standard zero-suppressed VELO data format. Since no clustering is performed by the PU TELL1s on the digital side, the purpose of the cluster emulation is simply to give an output compatible with the analog output format. The output of the L0PU and L0PUFull decoder is stored in clusters that are accessible by other algorithms aimed at monitoring the detector performance. A few examples of such monitoring tests are described in the next section.

4.4.4 PU data checks

Several procedures were performed to optimise the efficiency of the detector and tune the comparator thresholds. Firstly, we compared the digital and analog readout, inspecting the single-strip signal collected from the PU analog channels in correspondence with the signal collected from the PU binary channels. In Fig. 24, the red curve shows the ADC value of all PU strips above threshold, while the blue curve shows the ADC value of the strip with the highest signal among four neighbouring strips. As a comparison, the histogram in yellow is filled if a positive digital signal is observed in correspondence with the ADC value. The figure proves that for signals above 16 ADC counts there is a very good match between the analog and binary outputs. Clearly the choice of the binary threshold depends on the desired balance between efficiency and noise rate. The lower the threshold, the higher the efficiency obtained, but for ADC values below 13 counts the binary output starts covering the analog noise shoulder. We performed more detailed comparisons of digital and analog channels, in particular dedicated to the tuning of the binary thresholds; see Sec. 4.5.5 for a description of this study.

Secondly, we compared the zero-suppressed and non-zero-suppressed data of the PU digital output. As expected, this comparison shows a 100% match between the data stored in the L0PU and in the L0PUFull data banks.

Thirdly, we tested the front-back correlation.
Figure 25 shows the correlation between hits recorded by the front Pile-Up sensors, i.e. those closer to the VELO system (on the y-axis), and hits recorded by the back sensors, i.e. those farther away from the VELO (on the x-axis). The two-dimensional histogram is obtained by decoding the L0PU bank for a sample of collision events from 2010 data. The hit count is given on the colour scale: the diagonal shows a higher occupancy, corresponding to correctly correlated hits (i.e. hits belonging to the same track), while the combinatorial hits appear as a quasi-uniform background.

¹⁰ The decoder is implemented in the Velo/VeloDAQ package of the Lbcom library, via the class DecodePileUpData.

Figure 24. The red curve shows the ADC value of the sum of four neighbouring PU strips, while the blue curve shows the ADC value of the highest of those four strips. As a comparison, the histogram filled in yellow is the ADC value collected (before discrimination) by the corresponding PU binary channel.

Figure 25. Correlation between hits recorded by the front Pile-Up sensors, i.e. sensors 130 and 131, closer to the VELO system (on the y-axis), and hits recorded by the back sensors, i.e. sensors 128 and 129, farther away from the VELO (on the x-axis). Panel (a) shows the PU C-side and panel (b) the PU A-side. The hit count is shown on the colour scale. The white columns appearing in histogram (a) correspond to a known group of dead strips on sensor 129.

The white parts in Fig. 25 correspond to masked digital channels on sensor 129. The occupancy along the diagonal has gaps of about 64 strips, exactly matching those hits on the front sensors that have no corresponding hits on the back sensor.

This is due to the different θ-acceptance of the two planes. As illustrated in the sketch in Fig. 26, tracks producing hits on the inner area of a back sensor end up in the hole of the corresponding front sensor, while tracks crossing the outer region of a front sensor leave no hit on the back sensor. The background reveals a pattern which reflects the sensor geometry: for instance, the occupancy peaks at channels 0, 512, 1024 and 1536, which correspond to the strips at the lowest r-coordinate; it is higher for channels in the inner region of the sensor and decreases for channels belonging to the outer area.

Figure 26. Sketch (not to scale) of PU sensors 128 and 130. A track originating from the interaction point V at a small polar angle θ with respect to the horizontal axis (beam line) produces a hit on the inner area of back sensor 128, while the corresponding position on front sensor 130 falls in the central hole of the sensor. Similarly, a track originating from V at a large polar angle α produces a hit on the outer area of front sensor 130; the same track crosses the back plane outside of sensor 128.

In the following section we focus on another subset of system checks and give an insight into the Pile-Up commissioning work accomplished.

4.5 Commissioning

The Pile-Up system was installed in 2007, together with the VELO detector, and has been operational since the beginning of LHCb data taking. It went through a careful commissioning phase covering several activities, including system calibration with test pulses, tests with injection beam data¹¹ and time-alignment procedures. This section describes some of the tests performed during the final part of the PU commissioning, in 2009 and 2010. The focus is on the time alignment of both the analog (Sec. 4.5.2) and digital signals (Secs. 4.5.3, 4.5.4) and on the optimisation of the thresholds for the digital readout (Sec. 4.5.5). All results presented here are currently implemented as configuration parameters of the PU system.

After testing the Pile-Up detector with both calibration pulses and LHC particles, the first collision events were recorded. We used beam collision data to determine the position of the sensors and to reconstruct vertices at the trigger level. The space alignment and vertexing procedures are presented separately, in Chaps. 5 and 6 respectively.

¹¹ These tests used particles resulting from LHC injection tests, commonly referred to as Transfer-line End Dump (TED) data.

4.5.1 Time alignment: introduction

To assign the detector output signal to the correct bunch crossing, we need to time align it to well within 25 ns¹². The PU output consists of an analog and a digital data stream. The tuning of the timing for the analog output is equivalent to the tuning performed for the VELO detector [93], while additional steps are necessary to time align the digital output with respect to the analog one. Finally, both outputs need to be tuned to the LHC clock. Figure 27 shows a scheme of the different clocks contributing to the electronic readout, while the scans performed to tune each delay are schematically presented in Fig. 28.

The main clock signal, common to all LHC experiments, corresponds to the bunch crossing frequency and is received in LHCb by the Timing and Fast Control system (TFC), which controls the entire readout of the detector. Both the Analog TELL1 board and the Control Board receive the clock signal from the TFC via a TTCrq (Timing, Trigger and Control receiver). The Control Board provides the clock signal to the hybrids and the Optical Tx Boards. Moreover, each hybrid receives a clock signal for the analog readout, here named Beetle Clock (1), and a phase-shifted clock for the digital readout, named Comparator (Comp) Clock (2); the clock signal used on the Optical Tx Board, or Opto Clock, is the result of two contributions: the clock delay (3), unique for each Optical Tx Board, and the clock phase (4), tunable per link¹³. The Opto Clock is received by the flip-flops (FF), which synchronise the digital output from the Beetle comparators.

Initially, all relative delays are determined by setting a test pulse on the input of the Beetle and tuning the Beetle Clock to sample on the peak of the signal (test in red in Fig. 28). By injecting test pulses¹⁴, we produce a signal similar to the pulse generated by a particle passing through the sensor. Afterwards, for the analog output of the Beetle, we perform ADC delay scans to tune the phase of the Tell1 Clock (5) with respect to the Beetle Clock and capture the Beetle analog output at the optimal time (test in pink in Fig. 28). For the digital signal path, we need to tune the Comp Clock to find the centre of the digitised pulse and assign the optimal digital sampling time (see delay in blue in Fig. 28). The digitised pulse is obtained from the analog signal by applying a channel comparator threshold; only afterwards is it sampled by the Comp Clock, and its output is OR-ed to form the binary signal. Therefore, we achieve a first Comp Clock tuning by lowering all thresholds, to obtain a long enough signal-over-threshold time; later, we improve the sampling time by using higher thresholds and we equalise the channel responses via threshold scans. Finally, we tune the phase of the Opto Clock with respect to the phase of the Comp Clock. This can also be done when the Comp Clock is not perfectly tuned, by setting a fixed pattern, typically a square wave, on the digital output of the Beetle and performing Opto delay scans (see delay in green in Fig. 28).

Once the internal delays are set, the PU system has to be time aligned to the LHC beam, i.e. the Beetle Clock needs to be tuned to the bunch crossing time. Tuning this last delay requires a corresponding shift in all other delays (2, 3, 5).
¹² The requirement is more stringent for certain delays, as explained later in this section. For instance, the analog pulse has a flat top of about 1 ns, hence the analog output requires a time-alignment fine-tuning to within 2 ns.
¹³ This is a copper link connecting the Beetles to the OPTO Tx Boards, see Fig. 20.
¹⁴ The charge is injected directly into the Beetle amplifier, at a fixed time.

Figure 27. Scheme of the different clocks used in the electronic readout of the Pile-Up system; the circular symbols refer to the tunable delays.

More information on the different scans performed is provided in the next paragraphs.

4.5.2 Time alignment of the analog output: Test Pulse (TP) delay scan and ADC delay scan

The signal obtained from each strip, referred to as the pulse shape, is the result of the original charge collected from the ionising particle and of the subsequent front-end signal amplification and shaping. The pulse shape lasts roughly 100 ns, with a peaking time of 25 ns¹⁵, but it is sampled only once per bunch crossing, i.e. every 25 ns. We first need a proper setting of the time at which to sample on the peak of the pulse, with a precision of a few ns (Beetle Clock tuning). This is initially done with test pulse scans and afterwards, when all the other delays are tuned, with the LHC beam (see Sec. 4.5.6). Once the sampling time is roughly determined, we need to tune the ADC clock phase of the TELL1 (hereafter the Tell1 Clock) with respect to the Beetle Clock. For this purpose we use ADC delay scans with test pulse data. The method applied is the same as developed for the VELO timing and described in Refs. [93, 94].

During an ADC delay scan, we pulse two of the 32 strips forming an analog link with a calibration signal, while varying the ADC sampling time via the Tell1 Clock, in steps of 25/16 ns. A rectangular pulse, ideally 25 ns long, is set on the output of the Beetle, such that we can select the optimal sampling point from the 16 steps of the scan. As an illustrative example, Fig. 29 shows the signal sampled by the ADC during an ADC delay scan. All sampled signals obtained from the scan are superimposed on the histogram, and the alternate-polarity test pulse results in two shapes at higher (lower) ADC values. The signal at the output of the Beetle remains roughly stable for about 20 ns and might be registered by two consecutive ADC samples, (a) and (b). We choose one of the 16 steps of the scan, and the corresponding clock phase, as the optimal sampling time for the channel, typically in the middle of the plateau (step 12 in the figure). Since the plateau is about 20 ns long, small variations of a few ns in the relative phase between the Beetle Clock and the Tell1 Clock are still acceptable [94]. There is only one delay setting per analog link, hence the chosen setting is the average of the optimal sampling times for the two pulsed strips.

4.5.3 Time alignment of the digital output: Opto delay scan

The OPTO Tx Boards are connected to the hybrids via copper links. To obtain well-synchronised output digital signals, we need to account for differences in the lengths of these cables. The first step is tuning the Opto Clock with respect to the Comparator Clock. This is done by fixing a pattern, typically a 40 MHz square wave, on the digital output of the Beetle, and then scanning the Opto Clock via an Opto delay scan. As mentioned before, the Control Board controls both the absolute time delay for the OPTO Tx Board, or clock delay, and the relative delay for each copper link, or clock phase. The overall phase of the Opto Clock is the sum of the clock delay and the clock phase, and both delays are determined from the same scan. For the clock-delay fine tuning, we scan the Opto Clock in 50 steps of 0.5 ns each. This delay is common to all links in one OPTO Tx Board. An additional delay of 0, 3, 6 or 9 ns can be added to each copper link individually, to minimise link-to-link differences. Therefore, from an Opto delay scan over all links of an OPTO Tx Board, we obtain four different signal distributions, one for each value of the clock phase.
Figure 30 shows such a distribution for the two OPTO Tx Boards connected to PU sensor 131, for one clock phase.

¹⁵ We call peaking time the time needed for the signal to rise to its maximum.

Figure 28. Scheme of the different clock signals of the PU electronics, with the corresponding tunable delays and the tests performed: the Beetle Clock (1) is tuned with test-pulse analog scans and with beam, the Comp Clock (2) with test-pulse digital scans, the Tell1 Clock (5) with ADC delay scans, and the Opto Clock (3)+(4) with Opto delay scans. The numbering is the same as used in Fig. 27.

Figure 29. ADC delay scan: signal registered in two consecutive ADC samples while pulsing PU analog channel 24 of sensor 129. The pulse is injected at a fixed time; the Beetle Clock is kept fixed while the ADC sampling time is varied in steps of 25/16 ns. All sampled signals obtained from the data sample are superimposed on the histogram, and the alternate-polarity test pulse results in two shapes at higher (lower) ADC values. The optimal sampling time for this channel is chosen in correspondence with step 12.
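The choice of the optimal sampling point from the plateau, described above for the ADC delay scan, can be sketched as follows. This is hypothetical code written for illustration only; the input list adc_vs_step, the 90% plateau definition and the function name are invented, and the real procedure is the one described in Refs. [93, 94].

    # Minimal sketch: pick the ADC delay-scan step in the middle of the plateau,
    # assuming adc_vs_step[i] is the ADC value sampled at scan step i
    # (16 steps of 25/16 ns each) and defining the plateau as the steps whose
    # ADC value is within 90% of the maximum.
    def optimal_sampling_step(adc_vs_step, plateau_fraction=0.9):
        peak = max(adc_vs_step)
        plateau = [i for i, adc in enumerate(adc_vs_step)
                   if adc >= plateau_fraction * peak]
        return plateau[len(plateau) // 2]  # middle of the plateau

    # Invented 16-step scan whose flat top covers steps 8-15: the chosen step
    # is 12, as in the example of Fig. 29.
    scan = [0, 0, 1, 2, 5, 20, 60, 90, 98, 100, 101, 100, 99, 100, 98, 97]
    assert optimal_sampling_step(scan) == 12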

The strip number divided by 8 gives the copper-link identification number on the x-axis¹⁶, while the applied clock delay is on the y-axis. The colour scale refers to the number of times the signal is registered when sampling the Beetle output 100 times.

Figure 30. Opto delay scan: signal distribution obtained from a scan over all links of PU sensor 131, for clock phase 3 (delay of 9 ns); the strip number divided by 8 gives the copper-link identification number, on the x-axis, while the clock delay is on the y-axis. The colour scale refers to the number of times the signal is registered when sampling the Beetle output 100 times.

Similar histograms are produced for all PU sensors and all possible clock phases. We want to extract the optimal combination of clock delay per OPTO Tx Board and clock phase per link of that board. The fastest way to proceed would be to look at histograms such as the one in Fig. 30 and extract a good combination by hand. For instance, from such a distribution we see that a clock delay of 13 ns would be a good choice for the first OPTO Tx Board of that sensor, together with a clock phase of 9 ns for all links of the board. Nevertheless, the goal is to compare all possible combinations and find the optimal one with an objective procedure which can be quickly repeated in the future. For this reason we set up an algorithm to analyse the Opto delay scan data. It proceeds in the following steps:

- By slicing a distribution such as the one of Fig. 30 along the x-axis, we obtain the signal pulse shape per link over all possible clock delays. An example of a slice is shown in Fig. 31. One clock-delay value is then chosen per link and per clock phase, d_best^link(phase), as the midpoint of the flat top. The midpoint is measured from the positions of the rising and falling edges, d_min^link and d_max^link, of the histogram. Links showing an anomalous behaviour are discarded and analysed separately.
- We estimate the best clock phase per link, this time at a fixed value of the clock delay. To do so, we minimise, with respect to the clock phase, the difference between the value d_best^link(phase) obtained earlier and each possible value of the clock delay d. The corresponding phase is the best clock phase per link given that clock delay, p_best^link(d).
- At this point we know the best clock phase per link, given the clock delay of its OPTO Tx Board, and the best clock delay per link, given the clock phase, so we need to combine the two pieces of information and determine the best clock delay per OPTO Tx Board, as follows.

¹⁶ As shown in Fig. 20, each copper link corresponds to the multiplexed signal of two digital channels, that is 8 strips in total.

Figure 31. Opto delay scan: slice of Fig. 30 at link number 71; the y-axis represents the number of times the link registers a signal when sampling the Beetle output 100 times. The histogram represents the digital signal shape of that link for clock phase 3. The clock delay should be tuned such that it falls on the flat top of the distribution. Similar histograms are produced for all links and all four clock phases.

Firstly, for every link, we calculate the difference between each possible value of the clock delay d and the time of the rising edge of the signal, at the best phase for that clock delay, p_best^link(d). The obtained values are plotted in a distribution over all links of an OPTO Tx Board; we have 50 of these distributions in total, one per possible value of d. Similarly, we calculate the difference between the time of the signal falling edge and the delay d, at the best phase for that delay, and we plot the results over all links. The distributions are fitted with a Gaussian. Figure 32 shows an example of such distributions for all links of the top half of sensor 131 (one OPTO Tx Board), for two different values of d. We select the optimal clock delay per OPTO Tx Board, d_best^board, as the value of d that minimises the difference between the Gaussian means of the two distributions, i.e. the value of d for which the top and bottom histograms align. This is equivalent to assigning d to the average between the mean of all rising-edge times and the mean of all falling-edge times.

We also perform a manual check on the links showing anomalous behaviour: they correspond to links with known connector problems, or links flagged as possibly problematic channels. These links are assigned the same clock phase as that of the neighbouring links.

To summarise, we collect data by fixing a square wave on the digital output of the Beetle and scanning the Opto Clock in 50 steps of 0.5 ns each; we also vary the delay of each single link on the OPTO Tx Board over a range of 4 different values. A data-analysis algorithm was designed to provide an automated procedure extracting an optimal clock-delay value for each OPTO Tx Board and a corresponding optimal clock phase for each link of that board.

Figure 32. Opto delay scan: the histograms show the difference between a given value of the clock delay d and the time of the rising (a) or falling (b) edge of the signal, at the best phase for that clock delay, p_best^link(d), for all links of half of sensor 131 (one OPTO Tx Board). In grey are the plots obtained for a clock delay d = 16 ns and in white those for d = 3 ns. The distributions are fitted with a Gaussian, and the optimal clock delay per OPTO Tx Board is the value of d minimising the difference between the two Gaussian means, i.e. the value of d for which the top and bottom histograms align.

This automation allowed the optimisation to be repeated at different stages of the system commissioning and reduced the mistake rate.
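The core of the automated choice, selecting the board clock delay for which the mean distance to the rising edges equals the mean distance to the falling edges, can be sketched as follows. This is hypothetical illustration code, not the actual analysis program: the per-link edge times are invented, and a simple mean replaces the Gaussian fits used in the real procedure.

    # Minimal sketch, assuming rising_edges[link] and falling_edges[link] hold the
    # measured edge times (ns) of each link's digital signal at its best clock
    # phase; the clock delay is scanned in 50 steps of 0.5 ns, as in the text.
    from statistics import mean

    def best_board_clock_delay(rising_edges, falling_edges, n_steps=50, step_ns=0.5):
        """Clock delay that best centres all links of one OPTO Tx Board on their flat top."""
        candidates = [i * step_ns for i in range(n_steps)]
        def imbalance(d):
            # |mean distance to the rising edges - mean distance to the falling edges|
            return abs(mean(d - r for r in rising_edges) - mean(f - d for f in falling_edges))
        return min(candidates, key=imbalance)

    # Invented edge times for three links of one board: the chosen delay is 13.0 ns,
    # the average of the mean rising-edge and mean falling-edge times.
    assert best_board_clock_delay([5.0, 6.0, 5.5], [20.0, 21.0, 20.5]) == 13.0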

4.5.4 Time alignment of the digital output: Test Pulse (TP) digital scan

We complete the synchronisation of the PU digital output by tuning the delay of the Comparator Clock to find the optimal sampling time of the digitised pulse. As mentioned, this also depends on the comparator threshold, since the signal is digitised before being sampled. The sketch in Fig. 33 illustrates the relation between the test-pulse sampling time and the channel threshold, both crucial for the optimisation of the PU digital output. The width of the digitised pulse corresponds to the time-over-threshold of the signal, so it depends on the value of this threshold, and the optimisation of the pulse sampling time depends on this width. Assuming an already optimal Comp Clock sampling time, a high comparator threshold is preferable while still ensuring a high efficiency on low signals. At the same time, if the threshold is fixed at a very high value, the majority of signals will result in very narrow digital pulses, requiring a Comp Clock accuracy at the ns level and leading to a more difficult tuning. On the other hand, if the threshold is set to a low value, the output after digitisation will generally be wide enough to allow for a less precise tuning. For this reason, although we would prefer to apply a high threshold to find the most accurate sampling time, the easiest way to scan the pulse is to start from a wide digital output, obtainable with a low threshold.

Figure 33. Sketch of a pulse shape, with two different threshold settings applied to extract the digital signal. The figure shows how the choice of the pulse sampling time depends on the threshold. With a high threshold the optimal sampling time falls roughly in the middle of the digital signal; this is not true if the threshold is low, because of the asymmetrical shape of the pulse. Ideally we would need to apply a very high threshold to find the most accurate sampling time, but the easiest way to scan the pulse is to start from a wide digital output, obtainable with a low threshold.

At first, we set all comparator main thresholds¹⁷ to low values: this guarantees a long signal-over-threshold time of the digital signal and helps to easily locate the test-pulse peak.

¹⁷ See Sec. for a description of the comparator threshold.

Then we perform a TP digital scan, for an initial Comp Clock tuning. We inject 100 test pulses for each channel and we scan the Comp Clock over 4 consecutive clock cycles of 25 ns, in steps of 1 ns each, moving the Opto Clock by the same amount. The number of times a hit is registered per channel is used to build a histogram such as the one in Fig. 34: the plot shows the distribution obtained for all channels of PU sensor 128, for a scan with 25 Comp Clock steps of 1 ns each. On the y-axis is the step of the Comp Clock scan in ns, while the colour scale represents the number of entries per bin (number of hits per channel). The bin with the highest occupancy thus counts 100 entries, while the bins with fewer entries correspond to the edges of the time-over-threshold signal.

Figure 34. Test pulse signal distributed over the digital channels of PU sensor 128, shown for a scan with 4 cycles of 25 steps each, every step being 1 ns. On the y-axis, the step of the Comp Clock scan is obtained from the clock cycle Id multiplied by 25 ns and increased by the additional step number. The colour scale represents the number of entries per bin (number of hits per channel).

Given the comparator thresholds, by setting the Comp Clock delay to the value corresponding to the histogram mean in y, the pulse is collected by almost all channels at the same time. We use such a plot to extract one optimal Comp Clock delay per sensor; the selected delay is thus the best compromise for all channels of the sensor. Afterwards, a better Comp Clock tuning can be achieved by raising all main thresholds. In fact, the higher the threshold, the smaller the time over threshold of the pulse to sample, and the more accurate the determination of the peak position. The two steps, raising the thresholds and scanning the Comp Clock, are iterated a few times until the optimal sampling time is fine-tuned. We must note that, when raising the main thresholds, we rely on the channels giving an equal response. This is achieved by determining the channel-by-channel threshold offset before tuning the Comp Clock, with a threshold scan. Later, we perform a second threshold scan to fine-tune the thresholds, as explained in the following paragraph.

4.5.5 Tuning of the digital output: threshold scan

As described in Section 4.2, the digital output of the Pile-Up system is derived from the analog signal at the Beetle front-end, where the signals coming from four input strips are OR-ed and combined into a single digital channel. This process relies on the tuning of the comparator threshold, since digitisation is performed before OR-ing the strip signals.
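The trade-off between comparator threshold and digitised-pulse width discussed above can be illustrated with a small toy model before turning to the threshold scans themselves. The pulse shape below is an invented CR-RC-like curve with a 25 ns peaking time, not the measured Beetle pulse; only the qualitative behaviour (a lower threshold gives a wider, easier-to-sample digital pulse) is the point of the example.

    # Illustrative toy model only: an assumed CR-RC-like pulse, not the real Beetle shape.
    import math

    TAU = 25.0  # ns, peaking time taken from the value quoted in the text

    def pulse(t):
        """Normalised pulse, peaking at t = TAU with amplitude 1."""
        return (t / TAU) * math.exp(1.0 - t / TAU) if t > 0 else 0.0

    def time_over_threshold(threshold, t_max=200.0, dt=0.1):
        """Width (ns) of the digitised pulse for a given comparator threshold."""
        above = [i * dt for i in range(int(t_max / dt)) if pulse(i * dt) >= threshold]
        return above[-1] - above[0] if above else 0.0

    # A low threshold gives a wide digital pulse (coarse Comp Clock tuning suffices);
    # a high threshold gives a narrow pulse centred on the peak (precise tuning needed).
    for thr in (0.2, 0.5, 0.9):
        print(f"threshold {thr:.1f} -> time over threshold {time_over_threshold(thr):.0f} ns")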

Table 3. Offset scan: list of anomalies found while studying the distribution of the signal versus the threshold value for each channel.

Characterisation | Probable cause | Action
S = 0 | dead channel | mask applied
−5 < R < 0 or 32 ≤ R < 36 | channel with extreme offset | threshold set to 0 or 31
R ≤ −5 | channel sensitive to cross-talk from neighbouring channels | mask applied
R ≥ 36 | noisy channel, for instance if bonded to a neighbouring one | mask applied

The per-channel threshold consists of a main threshold, common to all channels of a Beetle, and an individual threshold or local offset, tunable for each channel¹⁸. For this reason, we first need to equalise the response of all channels in a Beetle to the same global offset, by tuning the local offsets. This can initially be done with an offset scan without test pulses, before any time tuning, since the equalisation does not rely on a tuned digital timing. However, it must later be repeated with test pulses, in order to effectively calibrate the thresholds in terms of the number of electrons. In this section we focus on the analysis of threshold scans performed with test pulses, after the digital output has been properly time aligned. The data are processed with a Pile-Up-dedicated monitoring algorithm, implemented in the LHCb software framework¹⁹.

We inject a test pulse, channel by channel, 100 times, and we scan the corresponding local offset over a range of 32 possible values. By counting the number of times that the channel registers a hit for each setting, we collect a distribution of the total number of hits per individual threshold. This distribution is fitted with a complementary error function [95],

f(x) = N ( (2/√π) ∫_{Ax−S}^{∞} e^{−t²} dt + C ),   (55)

where N = 50. The parameter A is a measure of the noise, while the ratio R = S/A is a measure of the local offset, so we set the individual threshold, channel by channel, to its nearest integer value. Figure 35 shows an example of such a fitted distribution for a channel of PU sensor 129.

Since some channels show an anomalous distribution, we check all individual fit parameters. Table 3 lists the most common anomalies found, together with their probable cause and the procedure chosen to cure those channels. Some channels are problematic because they have an extreme offset and are hence assigned a maximum or minimum threshold; others are noisy or dead channels linked to hardware problems, and they need to be masked. Figure 36 shows two examples of anomalous distributions.

After all the local offsets are set for each channel, we complete the tuning by scanning the main threshold per Beetle, in order to obtain an acceptable number of noise hits and to maximise the efficiency. On average, the main threshold is set to a value of about 35% of the signal of a minimum ionising particle, corresponding to 8000 electrons.

¹⁸ The channel-to-channel offsets are compensated by a 5-bit trim DAC.
¹⁹ The algorithm is implemented in the VeloDataMonitor package of the Vetra application, see Sec. 3.6.
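A minimal sketch of such a fit is given below. This is illustration code only: the channel data are invented, scipy is assumed to be available, and the parametrisation follows the reconstruction of Eq. (55) given above; the actual monitoring algorithm is the one implemented in the VeloDataMonitor package mentioned in the footnote.

    # Minimal sketch of the threshold-scan fit with a complementary error function.
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erfc

    N = 50  # normalisation for 100 injected test pulses, as in Eq. (55)

    def hits_vs_threshold(x, A, S, C):
        """Fit model N*(erfc(A*x - S) + C); erfc(z) = (2/sqrt(pi)) * int_z^inf exp(-t^2) dt."""
        return N * (erfc(A * x - S) + C)

    # Invented example: a channel whose response drops around a local offset of about 12.
    thresholds = np.arange(32)
    hits = np.array([100]*9 + [98, 90, 70, 40, 15, 4, 1] + [0]*16, dtype=float)

    (A, S, C), _ = curve_fit(hits_vs_threshold, thresholds, hits, p0=[0.5, 6.0, 0.0])
    print(f"noise A = {A:.2f}, local offset R = S/A = {S/A:.1f}")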

Figure 35. Threshold scan: distribution of the total number of hits collected per threshold value for channel 126 of PU sensor 129. In the absence of noise, the distribution would be a step function. The distribution is fitted with a complementary error function; the parameter A is a measure of the noise, while the ratio R = S/A is a measure of the local channel offset.

Figure 36. Offset scan: examples of distributions of the collected signal versus threshold for problematic channels. Case (a) shows a channel with an extreme offset and (b) a channel sensitive to cross-talk. A more detailed description of the anomalies is given in Tab. 3.

Once the threshold calibration is completed, the comparison between the digital data (L0PU) and the analog zero-suppressed data (Velo bank) gives a close to 100% match, as already shown in Fig. 24. Moreover, the noise hit rate is measured to be at the permil level.

4.5.6 Global time alignment

After all the relative delays and thresholds are tuned, we need to tune the Beetle Clock with respect to the time of the bunch crossing, that is, to align the system timing to the LHC orbit signal received by the TFC. Moreover, we need to correctly label the data with the proper bunch crossing number (BCID). We performed several beam timing scans during LHCb data taking, with one bunch circulating in the machine. For simplicity, data are acquired in so-called Time Aligned Events (TAE) mode, i.e. by reading out the detector for typically 5 consecutive BCIDs. This allows PU data to be recorded over a wide time range around the clock cycle containing the colliding bunch (a time gate of 125 ns, instead of the nominal 25 ns). The Beetle Clock phase is then tuned such that we sample on the peak of the shaped signal. Note that when shifting the phase of the Beetle Clock, all other clock signals are shifted by the same amount, in order to preserve the relative timing of the boards. Finally, we properly label the data using the BCID provided by the machine bunch crossing scheme.

4.6 Conclusion

The Pile-Up system has been operational since the beginning of LHCb data acquisition. It has been employed for the various tasks described in Sec. 4.1, thanks to the success of the tuning and testing exercises accomplished. In particular, both the analog and digital signals have been time aligned with respect to each other, and globally with respect to the LHC clock, well within the 25 ns requirement. The thresholds for the digital readout have been calibrated, achieving a match between digital data and analog zero-suppressed data close to 100%. Additionally, a dedicated study has been performed to optimise the sensor space alignment; this will be described in detail in the following chapter.


More information

Phase 1 upgrade of the CMS pixel detector

Phase 1 upgrade of the CMS pixel detector Phase 1 upgrade of the CMS pixel detector, INFN & University of Perugia, On behalf of the CMS Collaboration. IPRD conference, Siena, Italy. Oct 05, 2016 1 Outline The performance of the present CMS pixel

More information

Pixel hybrid photon detectors

Pixel hybrid photon detectors Pixel hybrid photon detectors for the LHCb-RICH system Ken Wyllie On behalf of the LHCb-RICH group CERN, Geneva, Switzerland 1 Outline of the talk Introduction The LHCb detector The RICH 2 counter Overall

More information

The Compact Muon Solenoid Experiment. Conference Report. Mailing address: CMS CERN, CH-1211 GENEVA 23, Switzerland

The Compact Muon Solenoid Experiment. Conference Report. Mailing address: CMS CERN, CH-1211 GENEVA 23, Switzerland Available on CMS information server CMS CR -2017/402 The Compact Muon Solenoid Experiment Conference Report Mailing address: CMS CERN, CH-1211 GENEVA 23, Switzerland 06 November 2017 Commissioning of the

More information

Operation and Performance of the ATLAS Level-1 Calorimeter and Level-1 Topological Triggers in Run 2 at the LHC

Operation and Performance of the ATLAS Level-1 Calorimeter and Level-1 Topological Triggers in Run 2 at the LHC Operation and Performance of the ATLAS Level-1 Calorimeter and Level-1 Topological Triggers in Run 2 at the LHC Kirchhoff-Institute for Physics (DE) E-mail: sebastian.mario.weber@cern.ch ATL-DAQ-PROC-2017-026

More information

Firmware development and testing of the ATLAS IBL Read-Out Driver card

Firmware development and testing of the ATLAS IBL Read-Out Driver card Firmware development and testing of the ATLAS IBL Read-Out Driver card *a on behalf of the ATLAS Collaboration a University of Washington, Department of Electrical Engineering, Seattle, WA 98195, U.S.A.

More information

The trigger system of the muon spectrometer of the ALICE experiment at the LHC

The trigger system of the muon spectrometer of the ALICE experiment at the LHC The trigger system of the muon spectrometer of the ALICE experiment at the LHC Francesco Bossù for the ALICE collaboration University and INFN of Turin Siena, 09 June 2010 Outline 1 Introduction 2 Muon

More information

Data acquisition and Trigger (with emphasis on LHC)

Data acquisition and Trigger (with emphasis on LHC) Lecture 2! Introduction! Data handling requirements for LHC! Design issues: Architectures! Front-end, event selection levels! Trigger! Upgrades! Conclusion Data acquisition and Trigger (with emphasis on

More information

PoS(VERTEX2015)008. The LHCb VELO upgrade. Sophie Elizabeth Richards. University of Bristol

PoS(VERTEX2015)008. The LHCb VELO upgrade. Sophie Elizabeth Richards. University of Bristol University of Bristol E-mail: sophie.richards@bristol.ac.uk The upgrade of the LHCb experiment is planned for beginning of 2019 unitl the end of 2020. It will transform the experiment to a trigger-less

More information

CMS Silicon Strip Tracker: Operation and Performance

CMS Silicon Strip Tracker: Operation and Performance CMS Silicon Strip Tracker: Operation and Performance Laura Borrello Purdue University, Indiana, USA on behalf of the CMS Collaboration Outline The CMS Silicon Strip Tracker (SST) SST performance during

More information

M.Pernicka Vienna. I would like to raise several issues:

M.Pernicka Vienna. I would like to raise several issues: M.Pernicka Vienna I would like to raise several issues: Why we want use more than one pulse height sample of the shaped signal. The APV25 offers this possibility. What is the production status of the FADC+proc.

More information

Silicon Sensor and Detector Developments for the CMS Tracker Upgrade

Silicon Sensor and Detector Developments for the CMS Tracker Upgrade Silicon Sensor and Detector Developments for the CMS Tracker Upgrade Università degli Studi di Firenze and INFN Sezione di Firenze E-mail: candi@fi.infn.it CMS has started a campaign to identify the future

More information

The Run-2 ATLAS. ATLAS Trigger System: Design, Performance and Plans

The Run-2 ATLAS. ATLAS Trigger System: Design, Performance and Plans The Run-2 ATLAS Trigger System: Design, Performance and Plans 14th Topical Seminar on Innovative Particle and Radiation Detectors October 3rd October 6st 2016, Siena Martin zur Nedden Humboldt-Universität

More information

Beam Condition Monitors and a Luminometer Based on Diamond Sensors

Beam Condition Monitors and a Luminometer Based on Diamond Sensors Beam Condition Monitors and a Luminometer Based on Diamond Sensors Wolfgang Lange, DESY Zeuthen and CMS BRIL group Beam Condition Monitors and a Luminometer Based on Diamond Sensors INSTR14 in Novosibirsk,

More information

Calorimeter Monitoring at DØ

Calorimeter Monitoring at DØ Calorimeter Monitoring at DØ Calorimeter Monitoring at DØ Robert Kehoe ATLAS Calibration Mtg. December 1, 2004 Southern Methodist University Department of Physics Detector and Electronics Monitoring Levels

More information

Level-1 Calorimeter Trigger Calibration

Level-1 Calorimeter Trigger Calibration December 2004 Level-1 Calorimeter Trigger Calibration Birmingham, Heidelberg, Mainz, Queen Mary, RAL, Stockholm Alan Watson, University of Birmingham Norman Gee, Rutherford Appleton Lab Outline Reminder

More information

Study of the ALICE Time of Flight Readout System - AFRO

Study of the ALICE Time of Flight Readout System - AFRO Study of the ALICE Time of Flight Readout System - AFRO Abstract The ALICE Time of Flight Detector system comprises about 176.000 channels and covers an area of more than 100 m 2. The timing resolution

More information

Production of HPDs for the LHCb RICH Detectors

Production of HPDs for the LHCb RICH Detectors Production of HPDs for the LHCb RICH Detectors LHCb RICH Detectors Hybrid Photon Detector Production Photo Detector Test Facilities Test Results Conclusions IEEE Nuclear Science Symposium Wyndham, 24 th

More information

Resolution studies on silicon strip sensors with fine pitch

Resolution studies on silicon strip sensors with fine pitch Resolution studies on silicon strip sensors with fine pitch Stephan Hänsel This work is performed within the SiLC R&D collaboration. LCWS 2008 Purpose of the Study Evaluate the best strip geometry of silicon

More information

VELO: the LHCb Vertex Detector

VELO: the LHCb Vertex Detector LHCb note 2002-026 VELO VELO: the LHCb Vertex Detector J. Libby on behalf of the LHCb collaboration CERN, Meyrin, Geneva 23, CH-1211, Switzerland Abstract The Vertex Locator (VELO) of the LHCb experiment

More information

What do the experiments want?

What do the experiments want? What do the experiments want? prepared by N. Hessey, J. Nash, M.Nessi, W.Rieger, W. Witzeling LHC Performance Workshop, Session 9 -Chamonix 2010 slhcas a luminosity upgrade The physics potential will be

More information

Characterisation of the VELO High Voltage System

Characterisation of the VELO High Voltage System Characterisation of the VELO High Voltage System Public Note Reference: LHCb-2008-009 Created on: July 18, 2008 Prepared by: Barinjaka Rakotomiaramanana a, Chris Parkes a, Lars Eklund a *Corresponding

More information

How different FPGA firmware options enable digitizer platforms to address and facilitate multiple applications

How different FPGA firmware options enable digitizer platforms to address and facilitate multiple applications How different FPGA firmware options enable digitizer platforms to address and facilitate multiple applications 1 st of April 2019 Marc.Stackler@Teledyne.com March 19 1 Digitizer definition and application

More information

The BaBar Silicon Vertex Tracker (SVT) Claudio Campagnari University of California Santa Barbara

The BaBar Silicon Vertex Tracker (SVT) Claudio Campagnari University of California Santa Barbara The BaBar Silicon Vertex Tracker (SVT) Claudio Campagnari University of California Santa Barbara Outline Requirements Detector Description Performance Radiation SVT Design Requirements and Constraints

More information

The upgrade of the LHCb trigger for Run III

The upgrade of the LHCb trigger for Run III The upgrade of the LHCb trigger for Run III CERN Email: mark.p.whitehead@cern.ch The LHCb upgrade will take place in preparation for data taking in LHC Run III. An important aspect of this is the replacement

More information

Track Triggers for ATLAS

Track Triggers for ATLAS Track Triggers for ATLAS André Schöning University Heidelberg 10. Terascale Detector Workshop DESY 10.-13. April 2017 from https://www.enterprisedb.com/blog/3-ways-reduce-it-complexitydigital-transformation

More information

Diamond sensors as beam conditions monitors in CMS and LHC

Diamond sensors as beam conditions monitors in CMS and LHC Diamond sensors as beam conditions monitors in CMS and LHC Maria Hempel DESY Zeuthen & BTU Cottbus on behalf of the BRM-CMS and CMS-DESY groups GSI Darmstadt, 11th - 13th December 2011 Outline 1. Description

More information

A Readout ASIC for CZT Detectors

A Readout ASIC for CZT Detectors A Readout ASIC for CZT Detectors L.L.Jones a, P.Seller a, I.Lazarus b, P.Coleman-Smith b a STFC Rutherford Appleton Laboratory, Didcot, OX11 0QX, UK b STFC Daresbury Laboratory, Warrington WA4 4AD, UK

More information

2008 JINST 3 S Implementation The Coincidence Chip (CC) Figure 8.2: Schematic overview of the Coincindence Chip (CC).

2008 JINST 3 S Implementation The Coincidence Chip (CC) Figure 8.2: Schematic overview of the Coincindence Chip (CC). 8.2 Implementation Figure 8.2: Schematic overview of the Coincindence Chip (CC). 8.2.1 The Coincidence Chip (CC) The Coincidence Chip provides on-detector coincidences to reduce the trigger data sent to

More information

Test Beam Measurements for the Upgrade of the CMS Phase I Pixel Detector

Test Beam Measurements for the Upgrade of the CMS Phase I Pixel Detector Test Beam Measurements for the Upgrade of the CMS Phase I Pixel Detector Simon Spannagel on behalf of the CMS Collaboration 4th Beam Telescopes and Test Beams Workshop February 4, 2016, Paris/Orsay, France

More information

KLauS4: A Multi-Channel SiPM Charge Readout ASIC in 0.18 µm UMC CMOS Technology

KLauS4: A Multi-Channel SiPM Charge Readout ASIC in 0.18 µm UMC CMOS Technology 1 KLauS: A Multi-Channel SiPM Charge Readout ASIC in 0.18 µm UMC CMOS Technology Z. Yuan, K. Briggl, H. Chen, Y. Munwes, W. Shen, V. Stankova, and H.-C. Schultz-Coulon Kirchhoff Institut für Physik, Heidelberg

More information

Multianode Photo Multiplier Tubes as Photo Detectors for Ring Imaging Cherenkov Detectors

Multianode Photo Multiplier Tubes as Photo Detectors for Ring Imaging Cherenkov Detectors Multianode Photo Multiplier Tubes as Photo Detectors for Ring Imaging Cherenkov Detectors F. Muheim a edin]department of Physics and Astronomy, University of Edinburgh Mayfield Road, Edinburgh EH9 3JZ,

More information

`First ep events in the Zeus micro vertex detector in 2002`

`First ep events in the Zeus micro vertex detector in 2002` Amsterdam 18 dec 2002 `First ep events in the Zeus micro vertex detector in 2002` Erik Maddox, Zeus group 1 History (1): HERA I (1992-2000) Lumi: 117 pb -1 e +, 17 pb -1 e - Upgrade (2001) HERA II (2001-2006)

More information

Commissioning Status and Results of ATLAS Level1 Endcap Muon Trigger System. Yasuyuki Okumura. Nagoya TWEPP 2008

Commissioning Status and Results of ATLAS Level1 Endcap Muon Trigger System. Yasuyuki Okumura. Nagoya TWEPP 2008 Commissioning Status and Results of ATLAS Level1 Endcap Muon Trigger System Yasuyuki Okumura Nagoya University @ TWEPP 2008 ATLAS Trigger DAQ System Trigger in LHC-ATLAS Experiment 3-Level Trigger System

More information

The LHCb Silicon Tracker

The LHCb Silicon Tracker Journal of Instrumentation OPEN ACCESS The LHCb Silicon Tracker To cite this article: C Elsasser 214 JINST 9 C9 View the article online for updates and enhancements. Related content - Heavy-flavour production

More information

Status of the LHCb Experiment

Status of the LHCb Experiment Status of the LHCb Experiment Werner Witzeling CERN, Geneva, Switzerland On behalf of the LHCb Collaboration Introduction The LHCb experiment aims to investigate CP violation in the B meson decays at LHC

More information

arxiv: v1 [physics.ins-det] 25 Feb 2013

arxiv: v1 [physics.ins-det] 25 Feb 2013 The LHCb VELO Upgrade Pablo Rodríguez Pérez on behalf of the LHCb VELO group a, a University of Santiago de Compostela arxiv:1302.6035v1 [physics.ins-det] 25 Feb 2013 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

More information

EUDET Pixel Telescope Copies

EUDET Pixel Telescope Copies EUDET Pixel Telescope Copies Ingrid-Maria Gregor, DESY December 18, 2010 Abstract A high resolution beam telescope ( 3µm) based on monolithic active pixel sensors was developed within the EUDET collaboration.

More information

THE LHCb experiment [1], currently under construction

THE LHCb experiment [1], currently under construction The DIALOG Chip in the Front-End Electronics of the LHCb Muon Detector Sandro Cadeddu, Caterina Deplano and Adriano Lai, Member, IEEE Abstract We present a custom integrated circuit, named DI- ALOG, which

More information

Goal of the project. TPC operation. Raw data. Calibration

Goal of the project. TPC operation. Raw data. Calibration Goal of the project The main goal of this project was to realise the reconstruction of α tracks in an optically read out GEM (Gas Electron Multiplier) based Time Projection Chamber (TPC). Secondary goal

More information

First-level trigger systems at LHC. Nick Ellis EP Division, CERN, Geneva

First-level trigger systems at LHC. Nick Ellis EP Division, CERN, Geneva First-level trigger systems at LHC Nick Ellis EP Division, CERN, Geneva 1 Outline Requirements from physics and other perspectives General discussion of first-level trigger implementations Techniques and

More information

PoS(Vertex 2016)071. The LHCb VELO for Phase 1 Upgrade. Cameron Dean, on behalf of the LHCb Collaboration

PoS(Vertex 2016)071. The LHCb VELO for Phase 1 Upgrade. Cameron Dean, on behalf of the LHCb Collaboration The LHCb VELO for Phase 1 Upgrade, on behalf of the LHCb Collaboration University of Glasgow E-mail: cameron.dean@cern.ch Large Hadron Collider beauty (LHCb) is a dedicated experiment for studying b and

More information

Data Acquisition System for the Angra Project

Data Acquisition System for the Angra Project Angra Neutrino Project AngraNote 012-2009 (Draft) Data Acquisition System for the Angra Project H. P. Lima Jr, A. F. Barbosa, R. G. Gama Centro Brasileiro de Pesquisas Físicas - CBPF L. F. G. Gonzalez

More information

LHCb Trigger & DAQ Design technology and performance. Mika Vesterinen ECFA High Luminosity LHC Experiments Workshop 8/10/2016

LHCb Trigger & DAQ Design technology and performance. Mika Vesterinen ECFA High Luminosity LHC Experiments Workshop 8/10/2016 LHCb Trigger & DAQ Design technology and performance Mika Vesterinen ECFA High Luminosity LHC Experiments Workshop 8/10/2016 2 Introduction The LHCb upgrade will allow 5x higher luminosity and with greatly

More information

ATLAS Muon Trigger and Readout Considerations. Yasuyuki Horii Nagoya University on Behalf of the ATLAS Muon Collaboration

ATLAS Muon Trigger and Readout Considerations. Yasuyuki Horii Nagoya University on Behalf of the ATLAS Muon Collaboration ATLAS Muon Trigger and Readout Considerations Yasuyuki Horii Nagoya University on Behalf of the ATLAS Muon Collaboration ECFA High Luminosity LHC Experiments Workshop - 2016 ATLAS Muon System Overview

More information

Upgrade of the ATLAS Thin Gap Chamber Electronics for HL-LHC. Yasuyuki Horii, Nagoya University, on Behalf of the ATLAS Muon Collaboration

Upgrade of the ATLAS Thin Gap Chamber Electronics for HL-LHC. Yasuyuki Horii, Nagoya University, on Behalf of the ATLAS Muon Collaboration Upgrade of the ATLAS Thin Gap Chamber Electronics for HL-LHC Yasuyuki Horii, Nagoya University, on Behalf of the ATLAS Muon Collaboration TWEPP 2017, UC Santa Cruz, 12 Sep. 2017 ATLAS Muon System Overview

More information

arxiv: v1 [physics.ins-det] 5 Sep 2011

arxiv: v1 [physics.ins-det] 5 Sep 2011 Concept and status of the CALICE analog hadron calorimeter engineering prototype arxiv:1109.0927v1 [physics.ins-det] 5 Sep 2011 Abstract Mark Terwort on behalf of the CALICE collaboration DESY, Notkestrasse

More information

ITk silicon strips detector test beam at DESY

ITk silicon strips detector test beam at DESY ITk silicon strips detector test beam at DESY Lucrezia Stella Bruni Nikhef Nikhef ATLAS outing 29/05/2015 L. S. Bruni - Nikhef 1 / 11 Qualification task I Participation at the ITk silicon strip test beams

More information

Traditional analog QDC chain and Digital Pulse Processing [1]

Traditional analog QDC chain and Digital Pulse Processing [1] Giuliano Mini Viareggio April 22, 2010 Introduction The aim of this paper is to compare the energy resolution of two gamma ray spectroscopy setups based on two different acquisition chains; the first chain

More information

A 1.3 Megapixel CMOS Imager Designed for Digital Still Cameras

A 1.3 Megapixel CMOS Imager Designed for Digital Still Cameras A 1.3 Megapixel CMOS Imager Designed for Digital Still Cameras Paul Gallagher, Andy Brewster VLSI Vision Ltd. San Jose, CA/USA Abstract VLSI Vision Ltd. has developed the VV6801 color sensor to address

More information

THE OFFICINE GALILEO DIGITAL SUN SENSOR

THE OFFICINE GALILEO DIGITAL SUN SENSOR THE OFFICINE GALILEO DIGITAL SUN SENSOR Franco BOLDRINI, Elisabetta MONNINI Officine Galileo B.U. Spazio- Firenze Plant - An Alenia Difesa/Finmeccanica S.p.A. Company Via A. Einstein 35, 50013 Campi Bisenzio

More information

L1 Track Finding For a TiME Multiplexed Trigger

L1 Track Finding For a TiME Multiplexed Trigger V INFIERI WORKSHOP AT CERN 27/29 APRIL 215 L1 Track Finding For a TiME Multiplexed Trigger DAVIDE CIERI, K. HARDER, C. SHEPHERD, I. TOMALIN (RAL) M. GRIMES, D. NEWBOLD (UNIVERSITY OF BRISTOL) I. REID (BRUNEL

More information

Efficiency and readout architectures for a large matrix of pixels

Efficiency and readout architectures for a large matrix of pixels Efficiency and readout architectures for a large matrix of pixels A. Gabrielli INFN and University of Bologna INFN and University of Bologna E-mail: giorgi@bo.infn.it M. Villa INFN and University of Bologna

More information

Micromegas calorimetry R&D

Micromegas calorimetry R&D Micromegas calorimetry R&D June 1, 214 The Micromegas R&D pursued at LAPP is primarily intended for Particle Flow calorimetry at future linear colliders. It focuses on hadron calorimetry with large-area

More information

The LHCb Vertex Locator (VELO) Pixel Detector Upgrade

The LHCb Vertex Locator (VELO) Pixel Detector Upgrade Home Search Collections Journals About Contact us My IOPscience The LHCb Vertex Locator (VELO) Pixel Detector Upgrade This content has been downloaded from IOPscience. Please scroll down to see the full

More information

Studies on MCM D interconnections

Studies on MCM D interconnections Studies on MCM D interconnections Speaker: Peter Gerlach Department of Physics Bergische Universität Wuppertal D-42097 Wuppertal, GERMANY Authors: K.H.Becks, T.Flick, P.Gerlach, C.Grah, P.Mättig Department

More information

Gentec-EO USA. T-RAD-USB Users Manual. T-Rad-USB Operating Instructions /15/2010 Page 1 of 24

Gentec-EO USA. T-RAD-USB Users Manual. T-Rad-USB Operating Instructions /15/2010 Page 1 of 24 Gentec-EO USA T-RAD-USB Users Manual Gentec-EO USA 5825 Jean Road Center Lake Oswego, Oregon, 97035 503-697-1870 voice 503-697-0633 fax 121-201795 11/15/2010 Page 1 of 24 System Overview Welcome to the

More information

Characterizing the Noise Performance of the KPiX ASIC. Readout Chip. Jerome Kyrias Carman

Characterizing the Noise Performance of the KPiX ASIC. Readout Chip. Jerome Kyrias Carman Characterizing the Noise Performance of the KPiX ASIC Readout Chip Jerome Kyrias Carman Office of Science, Science Undergraduate Laboratory Internship (SULI) Cabrillo College Stanford Linear Accelerator

More information

Upgrade of the CMS Tracker for the High Luminosity LHC

Upgrade of the CMS Tracker for the High Luminosity LHC Upgrade of the CMS Tracker for the High Luminosity LHC * CERN E-mail: georg.auzinger@cern.ch The LHC machine is planning an upgrade program which will smoothly bring the luminosity to about 5 10 34 cm

More information

Final Results from the APV25 Production Wafer Testing

Final Results from the APV25 Production Wafer Testing Final Results from the APV Production Wafer Testing M.Raymond a, R.Bainbridge a, M.French b, G.Hall a, P. Barrillon a a Blackett Laboratory, Imperial College, London, UK b Rutherford Appleton Laboratory,

More information

Micromegas for muography, the Annecy station and detectors

Micromegas for muography, the Annecy station and detectors Micromegas for muography, the Annecy station and detectors M. Chefdeville, C. Drancourt, C. Goy, J. Jacquemier, Y. Karyotakis, G. Vouters 21/12/2015, Arche meeting, AUTH Overview The station Technical

More information

Performance of a Single-Crystal Diamond-Pixel Telescope

Performance of a Single-Crystal Diamond-Pixel Telescope University of Tennessee, Knoxville From the SelectedWorks of stefan spanier 29 Performance of a Single-Crystal Diamond-Pixel Telescope R. Hall-Wilton V. Ryjov M. Pernicka V. Halyo B. Harrop, et al. Available

More information

A Prototype Amplifier-Discriminator Chip for the GLAST Silicon-Strip Tracker

A Prototype Amplifier-Discriminator Chip for the GLAST Silicon-Strip Tracker A Prototype Amplifier-Discriminator Chip for the GLAST Silicon-Strip Tracker Robert P. Johnson Pavel Poplevin Hartmut Sadrozinski Ned Spencer Santa Cruz Institute for Particle Physics The GLAST Project

More information

The ATLAS Trigger in Run 2: Design, Menu, and Performance

The ATLAS Trigger in Run 2: Design, Menu, and Performance he ALAS rigger in Run 2: Design, Menu, and Performance amara Vazquez Schroeder, on behalf of the ALAS Collaboration McGill University E-mail: tamara.vazquez.schroeder@cern.ch he ALAS trigger system is

More information

Commissioning and operation of the CDF Silicon detector

Commissioning and operation of the CDF Silicon detector Commissioning and operation of the CDF Silicon detector Saverio D Auria On behalf of the CDF collaboration International conference on Particle Physics and Advanced Technology, Como, Italy, 15-19 October

More information

M Hewitson, K Koetter, H Ward. May 20, 2003

M Hewitson, K Koetter, H Ward. May 20, 2003 A report on DAQ timing for GEO 6 M Hewitson, K Koetter, H Ward May, Introduction The following document describes tests done to try and validate the timing accuracy of GEO s DAQ system. Tests were done

More information

http://clicdp.cern.ch Hybrid Pixel Detectors with Active-Edge Sensors for the CLIC Vertex Detector Simon Spannagel on behalf of the CLICdp Collaboration Experimental Conditions at CLIC CLIC beam structure

More information

A MAPS-based readout for a Tera-Pixel electromagnetic calorimeter at the ILC

A MAPS-based readout for a Tera-Pixel electromagnetic calorimeter at the ILC A MAPS-based readout for a Tera-Pixel electromagnetic calorimeter at the ILC STFC-Rutherford Appleton Laboratory Y. Mikami, O. Miller, V. Rajovic, N.K. Watson, J.A. Wilson University of Birmingham J.A.

More information

A Real Time Digital Signal Processing Readout System for the PANDA Straw Tube Tracker

A Real Time Digital Signal Processing Readout System for the PANDA Straw Tube Tracker A Real Time Digital Signal Processing Readout System for the PANDA Straw Tube Tracker a, M. Drochner b, A. Erven b, W. Erven b, L. Jokhovets b, G. Kemmerling b, H. Kleines b, H. Ohm b, K. Pysz a, J. Ritman

More information

arxiv: v2 [physics.ins-det] 13 Oct 2015

arxiv: v2 [physics.ins-det] 13 Oct 2015 Preprint typeset in JINST style - HYPER VERSION Level-1 pixel based tracking trigger algorithm for LHC upgrade arxiv:1506.08877v2 [physics.ins-det] 13 Oct 2015 Chang-Seong Moon and Aurore Savoy-Navarro

More information

Preparing for the Future: Upgrades of the CMS Pixel Detector

Preparing for the Future: Upgrades of the CMS Pixel Detector : KSETA Plenary Workshop, Durbach, KIT Die Forschungsuniversität in der Helmholtz-Gemeinschaft www.kit.edu Large Hadron Collider at CERN Since 2015: proton proton collisions @ 13 TeV Four experiments:

More information

Readout architecture for the Pixel-Strip (PS) module of the CMS Outer Tracker Phase-2 upgrade

Readout architecture for the Pixel-Strip (PS) module of the CMS Outer Tracker Phase-2 upgrade Readout architecture for the Pixel-Strip (PS) module of the CMS Outer Tracker Phase-2 upgrade Alessandro Caratelli Microelectronic System Laboratory, École polytechnique fédérale de Lausanne (EPFL), Lausanne,

More information

Where do we use Machine learning and where do want to improve?

Where do we use Machine learning and where do want to improve? Tracking@LHCb Where do we use Machine learning and where do want to improve? Sascha Stahl, CERN Paul Seyfert, INFN On behalf of LHCb DS@HEP 07.07.2016 The LHCb detector Vertex and track finding Particle

More information

Totem Experiment Status Report

Totem Experiment Status Report Totem Experiment Status Report Edoardo Bossini (on behalf of the TOTEM collaboration) 131 st LHCC meeting 1 Outline CT-PPS layout and acceptance Running operation Detector commissioning CT-PPS analysis

More information