Chapter III



Trigger and Data Acquisition


The LHC collider has been designed to provide high luminosity, in order to study rare physics phenomena and to reduce the statistical error on precision measurements of particle properties --- such as the mass of the top quark. The luminosity is determined by the focusing of the beam, the number of particles per bunch, and the bunch crossing frequency. This last factor has a large influence on the experiments: LHC operates with a bunch crossing rate of 40 MHz (see Section 1.1); with about 23 interactions per bunch crossing, this means that each second about one billion events will take place in the detectors. Storing the data from all events is neither feasible nor desirable: the sheer volume of data would amount to one Petabyte per second, which is impossible to manage with current technology. Moreover, most of the data produced consists of soft interactions, diffractive events and QCD background. These events are not very interesting from the physicists' point of view and need to be filtered out or recorded only at a heavily reduced rate.

In order to identify interesting and rare physics phenomena in the bulk of the data, the ATLAS detector implements a three-level trigger. The task of the trigger is to classify events according to landmark properties --- for example, the presence of high-pT leptons --- and to store events that comply with a given list of such properties --- this list is known as the trigger menu. In this Chapter, after a brief introduction of the LHC beam structure, I will describe the three levels of the ATLAS trigger system, focusing on the trigger items used for the definition of the top sample.

1.1  The LHC beam structure


The LHC collider is the last stage in a series of accelerators: first, the protons are grouped in bunches of 1.8×10¹² and accelerated in a LINAC up to an energy of 1.4 GeV. Then, the bunches are injected into the first synchrotron ring, the PS. The PS shapes the beam into 72 bunches of 1.7×10¹¹ protons, 3.6 ns long and with a spacing of 25 ns, and accelerates the bunches up to 26.4 GeV. The "train" of 72 bunches is injected into the SPS, where 10 PS trains are grouped to form an SPS "batch". Each batch is accelerated up to 450 GeV before being extracted to the LHC collider for the final beam acceleration to 7 TeV. The final beam structure is illustrated in Figure 1.1. For each LHC orbit, the number of bunch crossings available to the experiments will range from 2620 for ALICE and LHCb up to 2808 for ATLAS and CMS. The last two experiments benefit from a tuning of the beam which synchronises gaps in the beam structure [60].

All the LHC experiments use a common timing system, called the Timing Trigger and Control (TTC) system, which distributes the LHC clock to the experiments. The experiments receive two signals from the LHC: the 40 MHz bunch-crossing (BC) clock --- which is used to count the bunch crossings and to identify events belonging to the same crossing --- and the orbit signal, which resets the bunch counter.

The interplay of the TTC system and the ATLAS detector is outlined in Section 1.4.4.


File:Beam.gif

Figure 1.1: Beam structure at LHC: each SPS "batch" is composed of a four-fold replica of a sequence of 3,3,4 PS "trains". Each train is composed of 72 proton bunches, 3.8 ns long and 25 ns apart. The spacings between the trains and the missing bunches at the end of a batch are due, respectively, to the PS→SPS and SPS→LHC transfer mechanisms. The total length of 88.924 μs corresponds to one LHC orbit [60].


1.2  Trigger Menu


ATLAS is provided with a preliminary menu for the initial low-luminosity phase. During this phase, the efficiency of the items on the trigger menu will be evaluated. The evaluation will allow the trigger menu to be refined, before eventually being used at design luminosity. Extra menu items can be added if justified by the physics programme.

The ATLAS trigger menu is composed of four types of triggers:

  • inclusive physics triggers: these menu items are the backbone of the physics programme. They have been tailored to maximise the discovery potential for landmark physics channels --- e.g. the "gold-plated channel" H→4μ;
  • pre-scaled physics triggers: inclusive selections with lowered thresholds and pre-scaling factors. These items are useful for the evaluation of detector performance --- for example to measure minimum bias --- and to allow the discovery of new physics;
  • exclusive physics triggers --- for rare processes;
  • calibration and monitoring triggers: these trigger items are designed to evaluate detector performance and verify the correct functioning of its components.

The menu for inclusive physics triggers is listed in Table 1.1. In the course of this chapter I will only describe the trigger items and algorithms related to the physics channel studied in this thesis --- electroweak single top production. Since the analyses for all three single top production channels require the presence of a high-pT lepton (see Section ??), I require my data sample to be defined by the presence of a high-pT isolated electron (trigger item e25i) or muon (trigger item μ20i).


Selection signature Expected rate (Hz)
e25i 40
2e15i <1
γ60i 25
2γ20i 2
μ20i 40
2μ10 10
j400 10
3j165 10
4j110 10
j70 + xE70 20
t35i + xE45 5
2μ6 + secondary vertex 10
Other 20
Total ~200


Table 1.1: Trigger items and expected rates for the inclusive physics triggers at low luminosity [63]. The trigger codes have the following meaning: the first letter indicates the physics object to be triggered --- e for electrons, γ for photons, μ for muons, t for taus, j for jets, xE for missing transverse energy. If a number precedes the first letter, the trigger requires multiple objects. The number after the object identifier indicates the pT threshold for the triggered object. The trailing letter i, when present, indicates an isolated object --- that is, an object without significant nearby activity.


1.3  Data Acquisition and Trigger Architecture


File:Pauly 2.gif


Figure 1.2: The three-level structure of the ATLAS Data Acquisition and Trigger system [61].


The Data Acquisition and Trigger architecture of the ATLAS detector can be roughly divided into the hardware readout chain and the trigger system (see Figure 1.2). In the readout chain we can identify three main blocks: the Front-End (FE) Electronics, the Read-Out Drivers (RODs) and the Read-Out Buffers (ROBs). The FE electronics block is composed of fast, dedicated modules, mounted as close as possible to the sub-detectors to minimise electronic noise in the signal data. The FE electronics use fast pipeline memories: the data for each bunch crossing are stored in the memories and kept there until they are read out upon a Level 1 Trigger accept. The data from the FE electronics are sent to the RODs, which align data in time and, if necessary, compress the data fragments. The data from the RODs are sent to the ROBs, which are large memory buffers. The ROBs send the data to a PC farm where the final event reconstruction is performed and the data are stored. An example of the readout chain is given in Chapter 4 for the MDT system.

The flow of data in the readout hardware is regulated by the three-level trigger system. At each level, the trigger system analyses the data and allows or inhibits the forwarding of data to the next step of the readout chain: the first level trigger analyses data from dedicated channels of the FE, the second level trigger analyses data from a subset of the ROB system, and the third level analyses data stored in the whole ROB system.

The three trigger levels have different requirements and different implementation strategies:

Level1
The first level trigger (LVL1) is optimised for maximum processing speed. It has to process the detector data every bunch crossing, which means that the input rate is 40 MHz. The maximum decision time (latency) is 2.5 μs; if this limit is exceeded, the pipeline memory buffers of the FE electronics may overflow, resulting in a loss of information or data corruption. The LVL1 processes a subset of the available data: the Calorimeters are read out only with coarse granularity, while the faster Muon Trigger detectors are read out with full granularity. To further reduce processing time, the trigger is implemented with dedicated electronics, located as close to the detector as possible.
Level2
The second level trigger (LVL2) receives input from LVL1 for accepted events. This input consists of Regions of Interest (RoIs); the RoIs signal areas in the sub-detectors recognised by the first level trigger as containing a trigger item. The LVL2 reads out data with full granularity from the RoIs, plus data from the sub-detectors that were not used by the first level trigger (Inner Detector). Unlike LVL1, the LVL2 system is based on software running on 500 rack-mounted dual-CPU PCs. The design input rate from LVL1 is 100 kHz, which limits the maximum processing time for LVL2 algorithms --- running on multiple PCs --- to 10 ms.
Event Filter
The third level trigger is called Event Filter (EF). The EF receives LVL2 decisions with an input rate of 3.5 kHz, then proceeds to read out the full event data from all the sub-detectors. The processing time is 1 second for each reconstructed event, and the result is stored in a permanent repository. The Event Filter is distributed on a large farm of PCs, and the output rate for reconstructed data is ~200 Hz.

The algorithms applied in the trigger system are designed to identify the following objects:

electrons
At LVL1, the trigger identifies calorimeter clusters compatible with electrons, according to the shape of the electromagnetic showers in the LAr samplings (see Section ??). LVL2 analyses the energy deposition pattern in the calorimeters and correlates the calorimeter information with matching charged tracks from the Inner Detector and the e/π likelihood from the TRT (see Section ??). The electron trigger menu covers the pseudorapidity range |η|<2.5.
photons
The LVL1 and LVL2 trigger algorithms are similar to the corresponding electron algorithms, with the difference that in LVL2 a veto against charged tracks is applied. Moreover, an algorithm running over Inner Detector data checks for γ→e+e- conversions.
muons
The LVL1 trigger utilises specialised fast chambers (RPCs, TGCs) in the Muon Spectrometer. The output of the LVL1 is an η-φ patch of the detector where muon candidates can be found. The LVL2 processes data from the precision chambers of this patch and extrapolates the reconstructed muon track back to the Inner Detector, looking for a matching track. The pseudorapidity coverage of the muon trigger is |η|<2.4.
jets
The jet trigger algorithm analyses shower shapes in the calorimeters. The LVL1 jet trigger covers the range |η|<3.2, but the EF can reconstruct jets in the forward calorimeter in the range 3.2<|η|<4.9.
taus
The tau trigger algorithm analyses shower shapes in the calorimeter cells. The LVL2 algorithm uses as discriminating variable the ratio between the energy deposited in a 3×7 cell cluster and the energy deposited in a 7×7 cell cluster. Since clusters of hadronic decay products from taus have a denser core, a high ratio helps separate tau decay products from hadronic jets. In addition to shower shape information, the LVL2 algorithm looks at track multiplicity in the Inner Detector --- tau decays create fewer charged tracks than hadron jets do.
b-tagged jets
The trigger for b-tagged jets operates at LVL2. The selection starts from LVL1 RoIs for jets. The algorithm measures impact parameters of tracks from the Inner Detector, looking for secondary vertices.
missing transverse energy
The missing ET is evaluated by measuring the energy deposition in all cells of the calorimeter, plus the pT of muons. The coverage of the trigger algorithm is |η|<4.9.
total transverse energy
The total transverse energy algorithm sums the energy of all calorimeter cells up to |η|<4.9. Muons are included in the computation.

1.4  The Level 1 Trigger


The Level 1 Trigger is composed of four functional blocks (see Figure 1.3): the Calorimeter Trigger, the Muon Trigger, the Central Trigger Processor and the Timing Trigger and Control (TTC) system. The task of the Calorimeter and Muon systems is to collect data from the detectors with a reduced granularity (for the Calorimeter Trigger) or from dedicated trigger chambers (for the Muon Trigger), process it, and forward it to the Central Trigger Processor. The CTP combines the data from the Calorimeter and Muon systems and checks if the data matches the requirements for any of the items in the trigger menu. If the result is positive, the Level 1 Accept (L1A) signal is sent to the Level 2 trigger. The L1A signal is also sent to the TTC system, which distributes it to the Front-End electronics. The L1A is used by the FE electronics as a command for outputting event data --- see the case of the MDT chambers in Section ??.

All the components of the LVL1 trigger system are synchronised with the 40 MHz LHC clock, and the system architecture makes full use of parallel processing, to reduce computing time.


File:Lvl1.gif


Figure 1.3: The Level 1 Trigger System [62]


1.4.1  Calorimeter Trigger


The Calorimeter Trigger is made of three components: a Front-End Preprocessor, a Cluster Processor and a Jet/Energy Processor (see Figure 1.3). The Preprocessor accepts the analogue input from ~7200 cell towers from the LAr, the Tile and the End-Cap calorimeters, with a granularity of Δη×Δφ=0.1×0.1. The data read out by the Preprocessor consist of the analogue ionisation signal for each of the calorimeter cells. The Preprocessor shapes the analogue signal from the cells and digitises it in 5 samples, each of 25 ns duration. The Preprocessor identifies the time slice where the signal peaks and assigns the event to the correct bunch crossing, since the time difference between a bunch crossing and the ionisation peak is constant [59]. The Preprocessor then applies to the signal a correction factor, taken from a lookup table (LUT), which calibrates the cell energy and corrects for the presence of the pedestal and electronic noise. The corrected result is encoded as an 8-bit energy word, with 1 GeV per count. The digital data are passed to the Cluster and Jet/Energy processors.
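
To make the sequence of operations concrete, the following Python sketch mimics the peak finding and LUT calibration for a single trigger tower. It is only an illustration of the logic described above --- the real Preprocessor is implemented in dedicated electronics --- and the sample values, the peak-to-crossing offset and the lookup-table contents are invented for the example.

    # Simplified sketch of the LVL1 Preprocessor logic for one trigger tower.
    # The real system is implemented in custom electronics; values are illustrative.

    PEAK_TO_BC_OFFSET = 2   # assumed constant delay (in 25 ns samples) between
                            # the bunch crossing and the ionisation peak

    def find_peak_sample(samples):
        """Return the index of the sample where the shaped pulse peaks.
        `samples` is the list of 5 digitised values (one per 25 ns slice)."""
        return max(range(len(samples)), key=lambda i: samples[i])

    def calibrate(adc_value, lut):
        """Convert a raw ADC value to transverse energy using a lookup table
        that subtracts the pedestal and applies the cell gain. The result is
        saturated to 8 bits (1 GeV per count, 0-255 GeV)."""
        et_gev = lut[adc_value]
        return max(0, min(255, int(et_gev)))

    # Example: a pulse peaking in the central sample of the 5-sample window.
    samples = [32, 60, 148, 90, 41]                         # raw ADC counts, pedestal ~32
    lut = {adc: (adc - 32) * 0.25 for adc in range(1024)}   # toy calibration table

    peak = find_peak_sample(samples)
    bcid_of_signal = 1205 + peak - PEAK_TO_BC_OFFSET        # assign event to a bunch crossing
    et_word = calibrate(samples[peak], lut)
    print(bcid_of_signal, et_word)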

The Cluster and Jet/Energy processors apply specific algorithms to identify candidates for electrons, photons, single hadrons, taus, jets and to measure ETmiss. The output of the processors is the multiplicity of the reconstructed objects passing the trigger menu thresholds, and the RoI information. The object multiplicity data is sent by the Calorimeter Trigger to the Central Trigger Processor, which accepts or rejects the event. If the event is accepted, the Calorimeter Trigger forwards the RoI data to the Level 2 Trigger.

The Cluster Processor

The Cluster Processor collects digital data from the Preprocessor and searches for clusters of cell activity. For each calorimeter, the Cluster Processor builds a map of 50×64 cell energies. The maps are scanned by a "sliding window" algorithm, which means that every possible window of 4×4 cells is analysed and the cell energies summed. To reduce computing time, the algorithm runs in parallel on separate electronic modules, one for each cell window. Since the sliding algorithm examines overlapping windows, each cell is examined 16 times by the algorithm. This implies that the data for each cell need a fan-out to the modules that analyse the 16 windows.

The Cluster Processor combines each 4×4 window in the Electromagnetic calorimeter with a topologically connected window in the Hadronic calorimeter. Then, three regions are identified in the combined window (see Figure 1.4):


File:L1-calo-trig.gif


Figure 1.4: Cluster definitions for electron/gamma and hadron/tau candidates in the LVL1 Calorimeter Trigger algorithm [59].


  • a core region where the cells' energy is summed (Horizontal and Vertical Sums) to evaluate the shower ET;
  • a 2×2 RoI cluster where the shower centroid is calculated to fix the position of the RoI;
  • an isolation ring where the cell activity should be minimal.

The definition of the three regions changes slightly between electron/gamma candidates and hadron/tau candidates. For electron/gamma candidates, the Horizontal and Vertical energy sums are performed in four 2×1 cell clusters, located in the centre of the 4×4 window, in the electromagnetic calorimeter cells only. Of these 4 sums, only the highest one is used to evaluate the shower energy. The RoI region is a 2×2 cluster at the centre of the window, restricted to the electromagnetic calorimeter. The isolation ring is composed of the 12 electromagnetic cells surrounding the RoI region, plus all 16 hadronic cells.

For hadron/tau candidates, the cluster definitions are different: the core region used to evaluate the shower energy is defined as one of the four 2×1 cell clusters in the electromagnetic calorimeter, plus the central 2×2 cell cluster in the hadronic calorimeter. The RoI region is composed of the central 2×2 clusters in both the electromagnetic and the hadronic calorimeter, while the isolation ring is defined by the 12+12 cells surrounding the RoI region.
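
As an illustration of the electron/gamma cluster logic, the sketch below evaluates the four 2×1 core sums, the isolation ring and the hadronic veto for a single 4×4 window. The tower energies and the threshold values are invented for the example and do not correspond to real trigger menu settings.

    # Illustrative sketch of the LVL1 electron/gamma cluster sums on one 4x4
    # window of trigger towers (values in GeV, chosen only for the example).

    em  = [[0.2,  0.3,  0.4, 0.1],
           [0.3, 12.5, 10.1, 0.2],
           [0.2,  7.8,  6.9, 0.3],
           [0.1,  0.2,  0.3, 0.2]]        # electromagnetic layer
    had = [[0.1] * 4 for _ in range(4)]   # hadronic layer behind the window

    # The four 2x1 sums of neighbouring EM towers in the central 2x2:
    # two horizontal pairs and two vertical pairs.
    core_sums = [em[1][1] + em[1][2], em[2][1] + em[2][2],   # horizontal
                 em[1][1] + em[2][1], em[1][2] + em[2][2]]   # vertical
    cluster_et = max(core_sums)                              # highest sum -> shower ET

    # Isolation: the 12 EM towers around the central 2x2 plus all 16 hadronic towers.
    em_ring = sum(em[i][j] for i in range(4) for j in range(4)
                  if not (i in (1, 2) and j in (1, 2)))
    had_sum = sum(sum(row) for row in had)

    # Example thresholds (illustrative, not the real trigger menu values).
    passed = cluster_et > 20.0 and em_ring < 4.0 and had_sum < 2.0
    print(cluster_et, em_ring, had_sum, passed)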

The Jet/Energy Processor

The task of the Jet/Energy Processor is to identify hadronic jets, and to measure the missing and the total transverse energy.

The jet identification algorithm is based on a sliding window principle, similar to the algorithm used in the Cluster Processor. Since jets are in general larger than the electromagnetic showers created by electrons or taus, the granularity of the Jet/Energy Processor is 0.2×0.2 in Δη×Δφ, resulting in a total of 30×32 cells. Moreover, since the definition of a jet is not unique, various window sizes can be used in the sliding window algorithm. The LVL1 design supports three window sizes: 2×2, 3×3 and 4×4 [59]. A larger window size guarantees a higher efficiency but suffers from the effects of increased pile-up and electronic noise; thus window sizes can be modified in the course of the experiment, to cope with changing requirements. A window is accepted as a seed for the trigger if the total ET exceeds the trigger thresholds, and if the ET of the RoI cluster --- a 2×2 sub-cluster inside the window --- is a local maximum among neighbouring RoI clusters. No isolation rings are foreseen in the jet trigger windows.
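
A toy version of the jet sliding-window search, including the local-maximum condition on the 2×2 RoI cluster, is sketched below. The grid size, the random ET map and the threshold are illustrative; the real processor evaluates all windows in parallel in hardware rather than looping over them in software.

    # Sketch of the sliding-window jet search with a local-maximum condition
    # on the 2x2 RoI cluster. Grid size and threshold are illustrative.

    import random

    N_ETA, N_PHI = 30, 32
    random.seed(1)
    jet_elements = [[random.expovariate(1.0) for _ in range(N_PHI)]
                    for _ in range(N_ETA)]   # toy ET map (GeV) of 0.2x0.2 elements

    def window_sum(et, i, j, size):
        """Sum of a size x size window with corner (i, j). phi wraps around;
        for simplicity eta wraps too in this toy."""
        return sum(et[(i + a) % N_ETA][(j + b) % N_PHI]
                   for a in range(size) for b in range(size))

    JET_THRESHOLD = 24.0   # illustrative ET threshold, GeV
    candidates = []
    for i in range(N_ETA):
        for j in range(N_PHI):
            roi = window_sum(jet_elements, i, j, 2)        # 2x2 RoI cluster
            # local maximum among the neighbouring 2x2 RoI clusters
            neighbours = [window_sum(jet_elements, i + di, j + dj, 2)
                          for di in (-1, 0, 1) for dj in (-1, 0, 1)
                          if (di, dj) != (0, 0)]
            if roi > max(neighbours):
                et_4x4 = window_sum(jet_elements, i, j, 4)  # 4x4 window ET
                if et_4x4 > JET_THRESHOLD:
                    candidates.append((i, j, round(et_4x4, 1)))

    print(len(candidates), candidates[:3])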

The Jet/Energy processor also performs the sum of the total transverse energy in the calorimeter and computes the missing ET.

1.4.2  Muon Trigger


The Muon Trigger system is composed of three sub-systems: the barrel system, which processes data from the RPC trigger chambers, the end-cap system, which processes data from the TGC chambers (see Section ??), and the Muon-CTP Interface (MUCTPI), which merges the output of the RPC and TGC subsystems and forwards it to the Central Trigger Processor. Three different subsystems are needed because the RPC and TGC operate in different conditions: the TGC chambers are slower than RPC chambers, thus extra electronics is needed to achieve synchronisation of the trigger data with the correct bunch crossing. Moreover, the magnetic field in the end-caps is more complex, as it is generated by different magnetic systems.

Both the RPC and TGC trigger systems measure the muon momentum by searching for a muon track inside a coincidence window. Each hit in the pivot planes --- RPC1 and TGC3 (see Figure 1.5) --- is fitted with a straight track through the interaction point, and the track is extrapolated to the other two chamber stations. Coincidence windows are defined around the intersections of the extrapolated track with the trigger stations. The width of the coincidence window depends on the momentum threshold of the muon trigger --- the lower the threshold, the wider the window --- but it does not exceed a size of 0.1×0.1 in Δη×Δφ. There are two trigger algorithms --- low-pT and high-pT --- with different requirements. The low-pT algorithm analyses hits in two stations, while the high-pT algorithm extends to all three trigger stations (see Figure 1.5).

In the low-pT algorithm, a track is identified as a muon candidate if:

  • there is at least one hit in the coincidence window;
  • all hits belong to the same bunch crossing;
  • at least one of the two stations has hits in both trigger layers.

In the high-pT algorithm, in addition to the above requirements, an extra hit in the coincidence window in the third station is needed.
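
The coincidence logic of the two algorithms can be sketched as follows. The hit format, the station geometry and the window widths are assumptions made for the example; only the logical structure (window matching, bunch crossing check, extra station for the high-pT case) follows the description above.

    # Schematic sketch of the LVL1 muon coincidence logic for one pivot-plane hit.
    # Window widths, station geometry and hit format are assumptions for the example.

    def low_pt_trigger(pivot_hit, station2_hits, window):
        """Return True if the low-pT conditions are met: at least one hit inside
        the coincidence window, all hits in the same bunch crossing, and at least
        one station with hits in both trigger layers."""
        in_window = [h for h in station2_hits
                     if abs(h["eta"] - pivot_hit["eta"]) < window["deta"]
                     and abs(h["phi"] - pivot_hit["phi"]) < window["dphi"]]
        if not in_window:
            return False
        same_bc = all(h["bcid"] == pivot_hit["bcid"] for h in in_window)
        both_layers = ({h["layer"] for h in in_window} == {0, 1}
                       or pivot_hit["both_layers"])
        return same_bc and both_layers

    def high_pt_trigger(pivot_hit, station2_hits, station3_hits, window):
        """High-pT condition: low-pT requirements plus a hit in the third station."""
        if not low_pt_trigger(pivot_hit, station2_hits, window):
            return False
        return any(abs(h["eta"] - pivot_hit["eta"]) < window["deta"]
                   and abs(h["phi"] - pivot_hit["phi"]) < window["dphi"]
                   for h in station3_hits)

    # Narrower windows correspond to higher pT thresholds (toy values).
    window_20gev = {"deta": 0.05, "dphi": 0.05}
    pivot = {"eta": 0.42, "phi": 1.30, "bcid": 101, "layer": 0, "both_layers": True}
    st2 = [{"eta": 0.43, "phi": 1.31, "bcid": 101, "layer": 1}]
    st3 = [{"eta": 0.44, "phi": 1.32, "bcid": 101, "layer": 0}]
    print(low_pt_trigger(pivot, st2, window_20gev),
          high_pt_trigger(pivot, st2, st3, window_20gev))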

The output of the RPC and TGC trigger system consists of multiplicities of muon candidates for six pT thresholds. The multiplicities are passed to the MUCTPI system, which prevents the double-counting of muons by removing overlapping candidates. The MUCTPI then sends the filtered data to the Central Trigger Processor.

The RoI data of the muon system consists of the η and φ coordinates of the muon tracks at the pivot plane. The RoIs are sent to the LVL2 on receiving the LVL1 Accept signal.


File:Mdt-trig.gif


Figure 1.5: The LVL1 Muon Trigger system. The pivot plane for RPC chambers is the innermost station (RPC1), while for TGC chambers it is the outermost wheel (TGC3). The low-pT algorithm searches for hits in the pivot plane and in stations RPC2 and TGC2, while for high-pT triggers extra hits in the stations RPC3 and TGC1 are required [59].


1.4.3  Central Trigger Processor


The Central Trigger Processor is the core of the LVL1 trigger system. The CTP receives up to 160 trigger inputs from various sources [61]:

  • Trigger object multiplicities --- for different pT thresholds --- from Calorimeter and Muon Triggers for electron/photon, hadron/tau, jets, muons.
  • Energy flags from the Calorimeter Trigger: ET, ETmiss, Σ ETjets;
  • External calibration triggers for specific sub-detectors;
  • Specialised triggers from external detectors: for example beam pick-ups for luminosity triggers.

The first task of the CTP is to align in time the trigger inputs. Despite the fact that all LVL1 trigger systems are synchronised with the 40MHz LHC clock, the trigger inputs belonging to different subsystems may arrive at different times. To achieve time alignment, a programmable delay is applied to each input.

The CTP combines trigger inputs according to the trigger menu: up to 256 trigger items (configurable via look-up tables) are made from combinations of conditions on the trigger inputs. Each trigger item in the menu has a mask, a priority and a pre-scale factor. The trigger mask is used to enable/disable trigger items during the course of a run; for example, calibration triggers are foreseen only in coincidence with empty bunches in the LHC batch (see Figure 1.1). The pre-scale factor and the priority are used to prevent the overflow of the FE electronics' buffers: low-priority triggers are pre-scaled to reduce the trigger rate, while high-priority triggers are pre-scaled or switched off only in case of strict necessity. In addition to this safeguard mechanism, the CTP introduces a 100 ns (4 LHC bunch crossings) dead time between consecutive L1A signals; this mechanism is necessary because the calorimeter signal sent from the detector to the FE electronics is composed of an ionisation profile spanning five bunch crossings. The data used for one trigger accept are shifted out of the pipeline memory and cannot be retrieved for subsequent triggers.

The CTP input also includes an external veto signal, that switches off all trigger items, in case of a severe buffer overflow in the system.

The Level 1 Accept signal is the result of a logical OR of all trigger items. The L1A signal is sent to the TTC system, together with an 8-bit word that describes the trigger type. The TTC forwards the L1A to all detectors, which in turn send the data downstream to the ROBs.
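
A minimal sketch of the CTP decision logic is shown below: each trigger item is a condition on the input multiplicities with a mask and a pre-scale factor, and the L1A is the logical OR of all enabled items. The item names, conditions and pre-scale values are invented for the example, and the priority and dead-time handling are omitted.

    # Minimal sketch of the CTP decision: trigger items combine the input
    # multiplicities; masks and pre-scales are applied; L1A is the OR of all items.

    import itertools

    class TriggerItem:
        def __init__(self, name, condition, prescale=1, enabled=True):
            self.name = name
            self.condition = condition     # function of the trigger inputs
            self.prescale = prescale       # keep 1 event out of `prescale`
            self.enabled = enabled         # mask bit
            self._counter = itertools.count(1)

        def fires(self, inputs):
            if not self.enabled or not self.condition(inputs):
                return False
            return next(self._counter) % self.prescale == 0   # apply pre-scale

    menu = [
        TriggerItem("MU20",  lambda x: x["mu20"] >= 1),
        TriggerItem("EM25I", lambda x: x["em25i"] >= 1),
        TriggerItem("MB",    lambda x: True, prescale=1000),             # pre-scaled minimum bias
        TriggerItem("CALIB", lambda x: x["calib_pulse"], enabled=False),  # masked out
    ]

    def central_trigger(inputs):
        fired = [item.name for item in menu if item.fires(inputs)]
        return bool(fired), fired          # L1A = OR of all items, plus the fired items

    l1a, fired_items = central_trigger({"mu20": 1, "em25i": 0, "calib_pulse": False})
    print(l1a, fired_items)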

The CTP has its own dedicated ROD that communicates with the Data Acquisition system: the object multiplicities, the energy flags and the trigger type are stored together with the reconstructed event. The LVL1 data can then be used to study trigger efficiencies or for monitoring trigger rates.

1.4.4  Timing Trigger and Control


The Timing Trigger and Control system fulfils two major tasks: synchronising the FE electronics and distributing the L1A signal. The FE electronics of the ATLAS detector are grouped into ~40 regions, called TTC partitions; each of the partitions must receive trigger and clock signals at the same time, otherwise data from different partitions may become unaligned in time. To achieve this goal, the TTC system uses a tree network of optical links for distributing timing and trigger information. Each partition is served by a separate tree.

The TTC system receives two signals from the LHC accelerator: the 40 MHz clock signal and the orbit signal (see Section 1.1). For each clock cycle received, the TTC increments an internal register, called the bunch crossing counter (BCID). The BCID is used to uniquely identify bunch crossings inside an LHC orbit. At the end of one orbit (88.924 μs), the orbit signal is sent, and the TTC resets the BCID counter. The TTC system fans out the LHC clock and the bunch counter reset (BCR) to the partitions.

The TTC receives from the CTP the L1A signal and the 8-bit trigger type word. Upon receiving L1A, the TTC increments an internal register called the event counter (EVID). The TTC forwards the L1A, the trigger type, the BCID and the EVID to the FE electronics; the BCID is used by the FE to retrieve the correct data from the buffers --- see the case of the MDT chambers in Section ?? --- while the EVID is used to identify triggered events and is stored in the data.
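
The counter handling can be sketched as follows; the class interface is invented for the example, while the number of bunch crossings per orbit (3564) and the orbit period are the LHC values.

    # Sketch of the TTC bunch-crossing and event counters.

    class TTC:
        BUNCHES_PER_ORBIT = 3564

        def __init__(self):
            self.bcid = 0   # bunch crossing counter, reset every orbit
            self.evid = 0   # event counter, incremented on every L1A

        def on_clock(self):
            """Called on every 40 MHz clock tick."""
            self.bcid += 1

        def on_orbit(self):
            """Called on the orbit signal (every 88.924 microseconds)."""
            self.bcid = 0   # bunch counter reset (BCR)

        def on_l1a(self, trigger_type):
            """Called when a Level 1 Accept arrives from the CTP: the current BCID
            and a new EVID are broadcast to the front-end electronics."""
            self.evid += 1
            return {"bcid": self.bcid, "evid": self.evid, "trigger_type": trigger_type}

    ttc = TTC()
    for _ in range(120):
        ttc.on_clock()
    print(ttc.on_l1a(trigger_type=0b00000101))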

All the FE electronics in the TTC partitions can adjust the phase of the received TTC signal; this ensures a time jitter between partitions lower than 30 ps.

1.5  High-Level Trigger and Data Flow


In the ATLAS experiment, the Level 2 Trigger and the Event Filter are grouped under the collective name of High Level Trigger (HLT) [63]. The common denomination stems from the fact that --- unlike LVL1, which is implemented in dedicated electronics --- these two trigger levels are implemented on rack-mounted PCs. The selection algorithms implemented in the LVL2 and the EF compose the Event Selection Software (ESS). The ESS algorithms are adapted from the offline reconstruction software used for physics analysis at ATLAS. The algorithms can be run in consecutive steps; the result of one algorithm is used as "seed" for the following step. Each step can be seen as a Bayesian test on the event data [65]; if an event is rejected in one of the early tests, the event is discarded and the computing resources are freed for another event. In this way the occupancy of the trigger system and the computing time are minimised.

The HLT system relies on the infrastructure of the Data Flow (DF) system. The purpose of the Data Flow is to collect data from the memory buffers and present them to the HLT system upon request. The interplay between the components of the HLT and the DF is shown in Figure 1.6.

The main computing block of the Level 2 trigger system is composed of the Level 2 Processing Units (L2P), where the ESS algorithms are executed. The L2Ps interact with the following Data Flow components:

  • the RoI Builder (RoIB) combines RoI information coming from the LVL1 trigger algorithms into a single event record. This record contains a map of the regions of the detector that may contain interesting data;
  • the Level 2 Supervisors (L2SV) select which Level 2 Processing Units need to process the data. There are about 10 L2SVs, each controlling a subset of the available processing units;
  • the Read-Out Subsystems (ROS) retrieve the data from the read-out buffers of the ATLAS detectors and build data fragments to be sent to the L2Ps.

The Event Filter is composed of a farm of Event Filter Processors (EFP) which execute the ESS algorithms at the Event Filter level and reconstruct the event with the full detector data. The EFPs interact with the following Data Flow components:

  • the Data Flow Manager (DFM) receives the decision from the L2SV and selects the Sub-Farm Inputs for data retrieval. The Sub-Farm Input (SFI) collects the data from the ROSs and sends it via a network switch to one or more Event Filter Processors. The system composed of a DFM and multiple SFIs is collectively known as the Event Builder (EB);
  • the Sub-Farm Outputs (SFO) receive the reconstructed event from the EFPs and write it to the permanent storage.

File:L2-ef.gif


Figure 1.6: The exchange of messages between HLT and DF components [63].


The life cycle of an event can be described as follows (see Figure 1.6): after an event is accepted by LVL1, the RoIB builds the RoI record, which is sent to one of the Level 2 Supervisors. Since each L2SV is connected only to a subset of processing units, the RoIB chooses the L2SV on the basis of the RoI information, following a round-robin scheme to balance the supervisors' load. The L2SV assigns the event to one of the L2Ps and sends the RoI information to it. The L2P then asks a subset of the ROSs to retrieve the event data from the read-out buffers. The retrieved data are processed on the L2P by the LVL2 trigger algorithms (see Section 1.5.1); the output of the algorithms is sent back to the supervisor. Depending on this output, the L2SV receives from the L2P a decision whether to accept or reject the event. If the event is accepted, the L2SV forwards the positive result to the Data Flow Manager. The output of the trigger algorithms is stored on a dedicated ROS --- the pseudo-ROS (pROS). Events that do not pass the LVL2 algorithms' criteria can be retained ("forced accepts") for monitoring purposes or for random triggering [64].


The Data Flow Manager instructs a Sub-Farm Input to retrieve the complete event data from all the ROS systems; the SFI builds the event from the separate ROS fragments and forwards it to an Event Filter Processor. The EFPs are grouped in PC farms connected by a network switch; the Event Builder can instruct the SFI to send the data to an EFP which is not already processing another event, to balance the workload. The chosen EFP executes the EF algorithms (see Section 1.5.2) on the full event data. If the event is accepted by the algorithms, the EFP adds to the event a summary record which lists the reconstructed objects, and sends the event to the Sub-Farm Output, which stores the data permanently.

Unlike the LVL1 trigger, which accesses only part of the ATLAS sub-detectors --- the Muon Spectrometer and the Calorimeters --- the LVL2 and EF make use of data from all sub-detectors, with full granularity.

1.5.1  Level 2 trigger algorithms


T2Calo

T2Calo is a clustering algorithm for electromagnetic showers; it can separate isolated electron/photon candidates from jets by examining the shape of the electromagnetic showers in the calorimeter. The algorithm input seeds are the LVL1 electron/gamma RoI positions (see Section 1.4.1).

The T2Calo algorithm has access to fine granularity data, including the separate samplings of the EM calorimeter (see Section ??). The first step of the algorithm is to refine the LVL1 position by finding the cell with the highest energy in the second sampling of the EM calorimeter. This position (η1,φ1) is later refined by calculating the energy-weighted centroid (ηc,φc) in a window of 3×7 cells (η×φ) centred around (η1,φ1), using as weights the energy in the second sampling.

In order to select electron/photon candidates and reject the jet background, the following shower shape parameters are used:

  • the ratio Rηshape=E3×7/E7×7 between the total ET measured in windows of 3×7 and 7×7 cells in Sample 2 of the LAr Calorimeter. Electrons/photons are expected to have narrow showers, thus Rηshape~1;
  • in the first (strip) sampling of the LAr Calorimeter, the parameter Rηstrip=(E1st-E2nd)/(E1st+E2nd) is computed. Here E1st and E2nd are the energies of the highest and second-highest local maxima. This parameter is useful to reject π0 decays, which produce two nearby maxima in the strips;
  • the total energy E deposited in the EM calorimeter is calculated in a window of 3×7 cells around (η1,φ1), summing over all three samplings;
  • the energy leaking into the hadron calorimeter, Ehad, is calculated in a window of size Δη×Δφ=0.2×0.2 around (ηc,φc).
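
A sketch of how these selection variables could be combined for one RoI is given below. The cut values are placeholders, not the tuned thresholds of the e25i item, and the input energies are invented for the example.

    # Illustrative computation of the T2Calo selection variables for one RoI.
    # The cut values are placeholders, not the tuned e25i thresholds.

    def t2calo_variables(e_3x7, e_7x7, e_strip_1st, e_strip_2nd, e_had):
        r_shape = e_3x7 / e_7x7                                    # narrow shower -> ~1
        r_strip = (e_strip_1st - e_strip_2nd) / (e_strip_1st + e_strip_2nd)
        return r_shape, r_strip, e_had

    def pass_electron_hypothesis(e_3x7, e_7x7, e_strip_1st, e_strip_2nd, e_had,
                                 r_shape_min=0.9, r_strip_min=0.7, e_had_max=1.0):
        r_shape, r_strip, leak = t2calo_variables(e_3x7, e_7x7,
                                                  e_strip_1st, e_strip_2nd, e_had)
        return r_shape > r_shape_min and r_strip > r_strip_min and leak < e_had_max

    # A narrow, well isolated shower with little hadronic leakage (energies in GeV):
    print(pass_electron_hypothesis(e_3x7=27.1, e_7x7=28.0,
                                   e_strip_1st=9.2, e_strip_2nd=0.8, e_had=0.3))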

muFast

The muFast algorithm is a stand-alone LVL2 tracking algorithm for the Muon Spectrometer. The input seed is the RoI given by the LVL1 Muon Trigger; the LVL2 algorithm analyses data from the two ROS towers closest to the RoI to allow for muon track reconstruction across chamber boundaries. The algorithm follows four steps:

  • LVL1 emulation: the RoI information obtained directly from LVL1 is not sufficient for a detailed measurement of the muon track properties. Hence, the algorithm retrieves the RPC or TGC hits that were used for the LVL1 decision and repeats the same procedure of the LVL1 algorithm. The result is a straight "road" around the muon trajectory, crossing three or more MDT chambers.
  • Pattern recognition is performed inside the road. MDT tubes lying within the road are selected and a contiguity algorithm is applied to remove background hits not associated with the muon track.
  • A straight-line track fit is made within each MDT chamber. For each chamber, the fit provides a precision spatial measurement (super point) of the muon track. In the spectrometer barrel, the track sagitta is evaluated by measuring the displacement of the super point in the pivot plane with respect to a straight line fit between the other two super points. In the end-cap, instead, the deflection angle of the muon track is measured.
  • A fast pT estimate of the muon track is made using a Look-Up Table. In the spectrometer barrel, the LUT encodes the following function [67]:
    1/s = A0 · pT + A1
    where s is the measured sagitta, the A0 parameter models the magnetic field of the spectrometer, and A1 models the effects of energy losses in the calorimeter. A0 and A1 are tabulated versus η and φ bins.

The output of this algorithm is a measurement of the muon pT, obtained via the LUT, plus the η and φ position of the muon track at the entrance of the Spectrometer.
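
A sketch of the corresponding pT estimate is given below: the measured sagitta is converted into pT by inverting the tabulated relation for the (η,φ) bin of the candidate. The bin edges and the A0, A1 values are invented for the example; the real LUT is derived from the spectrometer geometry and field map.

    # Sketch of the muFast pT estimate from the measured sagitta, using the
    # relation 1/s = A0*pT + A1 tabulated in (eta, phi) bins. Table values are toys.

    import bisect

    ETA_BINS = [0.0, 0.5, 1.0]                       # lower edges of the eta bins (toy)
    PHI_BINS = [0.0, 1.57, 3.14, 4.71]               # lower edges of the phi bins (toy)
    LUT = {(i, j): (0.12 + 0.01 * i, 0.05 + 0.02 * j)  # (A0, A1) per bin, invented
           for i in range(len(ETA_BINS)) for j in range(len(PHI_BINS))}

    def pt_from_sagitta(sagitta, eta, phi):
        """Invert 1/s = A0*pT + A1 for the bin containing (eta, phi)."""
        i = bisect.bisect_right(ETA_BINS, abs(eta)) - 1
        j = bisect.bisect_right(PHI_BINS, phi % 6.28) - 1
        a0, a1 = LUT[(i, j)]
        return (1.0 / sagitta - a1) / a0             # pT in the units implied by the LUT

    print(pt_from_sagitta(sagitta=0.4, eta=0.3, phi=1.2))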


File:Mufast.gif


Figure 1.7: Efficiency (left) and pT resolution (right) of single muon reconstruction as a function of the muon pT for the EF algorithms MOORE, iPatRec and MuID (stand-alone and combined) [73].



muComb

The muComb algorithm goes a step further than the muFast algorithm. The muComb algorithm uses the muon tracks reconstructed by muFast and searches for matching tracks in the Inner Detector. The track matching makes it possible to identify high-pT prompt muons and to reject fake muons and muons from kaon and pion decays.

The Inner Detector tracks --- reconstructed by LVL2 algorithms not described here --- are extrapolated to the Muon Spectrometer, taking into account the deflection by the central solenoid magnetic field:

Δφ = Qα / (pT − pT0)

where Q is the track charge (as measured by the Inner Detector), α is the field integral and pT0 accounts for the energy loss in the calorimeter. Monte Carlo estimates give the constant parameter pT0 a value of ~1.5 GeV [63]. The pT of combined tracks is measured by a weighted average of the momenta of the ID track and of the muon track, and a χ2 cut is applied. Muons from kaon and pion decays have a large χ2 value --- because of the large pT taken by the decay neutrino --- and are rejected by the algorithm.
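
The matching and combination step can be sketched as follows. The value of the field-integral constant α, the resolutions and the χ2 cut are invented for the example; only pT0 ~ 1.5 GeV follows the text above.

    # Sketch of the muComb matching: extrapolate the Inner Detector track in phi
    # using dphi = Q*alpha/(pT - pT0), compare with the muFast track, and combine
    # the two pT measurements with a weighted average and a chi2 cut.

    ALPHA = 0.5      # field-integral constant (illustrative value)
    PT0 = 1.5        # energy-loss offset in the calorimeter, GeV (from the text)

    def extrapolate_phi(phi_id, charge, pt_id):
        """Expected phi of the ID track at the Muon Spectrometer entrance."""
        return phi_id + charge * ALPHA / (pt_id - PT0)

    def combine(pt_id, sigma_id, pt_ms, sigma_ms, phi_id, charge, phi_ms,
                sigma_phi=0.03, chi2_max=9.0):
        """Weighted-average pT of the ID and MS measurements, or None if the
        match chi2 (pT compatibility + phi residual) fails the cut."""
        chi2 = ((pt_id - pt_ms) ** 2 / (sigma_id ** 2 + sigma_ms ** 2)
                + (extrapolate_phi(phi_id, charge, pt_id) - phi_ms) ** 2 / sigma_phi ** 2)
        if chi2 > chi2_max:
            return None                   # e.g. muon from an in-flight K/pi decay
        w_id, w_ms = 1.0 / sigma_id ** 2, 1.0 / sigma_ms ** 2
        return (w_id * pt_id + w_ms * pt_ms) / (w_id + w_ms)

    print(combine(pt_id=24.0, sigma_id=1.0, pt_ms=22.5, sigma_ms=2.0,
                  phi_id=1.20, charge=+1, phi_ms=1.222))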


1.5.2  Event Filter algorithms


LArClusterRec

LArClusterRec is the reconstruction package for electromagnetic clusters in the calorimeter. The algorithm is organised in two steps: in the first step, towers are created by summing the cells of the electromagnetic calorimeter and the pre-sampler in depth, using the full granularity of 0.025×0.025. In the second step, cell clusters are built by running a sliding window algorithm on the calorimeter towers. The window sizes are optimised to obtain a trade-off between maximum efficiency for shower reconstruction and maximum rejection of fake clusters. For Sample 1 the window size is 1 or 2 strips, depending on the shower position with respect to the strip centre; for Sample 2 the window size is 5×5 (η×φ) in the End-Cap and 3×5 (if looking for unconverted photons) or 3×7 (electrons) in the Barrel. The optimal window sizes are estimated from Monte Carlo simulations.

The energies read out from the calorimeter cells are mapped onto the η-φ plane, and the energies of the cells inside the window are summed; the window slides over the calorimeter map searching for local maxima of energy, which are assumed to coincide with the cores of the electromagnetic showers.

The energy of the reconstructed cluster is corrected for both longitudinal and lateral leakage, and for energy losses in the dead material of the Inner Detector and the cryostat. Longitudinal leakage and losses in the passive material are corrected for by summing the energy from the pre-sampler and the accordion sections, and by assigning different weights to the two sections according to the shower depth, parametrised with respect to the radiation length X0. The weights are extracted from Monte Carlo simulations of the detector or from test-beam measurements. The amount of energy measured in a window of fixed size depends on the position of the shower centre within the window. This effect is corrected by applying a factor

C0 ( 1 + C1(η − ηc − C2)² )

where η − ηc parametrises the distance between the shower centroid and the geometrical centre of the window.
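
A sketch of this position correction is given below. The coefficients C0, C1 and C2 are placeholders chosen for the example; in practice they are derived from simulation and test-beam data for each calorimeter region.

    # Sketch of the shower-position correction described above.

    def position_correction(e_cluster, eta, eta_c, c0=1.01, c1=0.15, c2=0.0):
        """Apply the factor C0*(1 + C1*(eta - eta_c - C2)^2), which compensates
        for the fraction of the shower that leaks outside a fixed-size window
        when the shower centre is offset from the window centre."""
        return e_cluster * c0 * (1.0 + c1 * (eta - eta_c - c2) ** 2)

    # Example: a 48.7 GeV cluster whose centroid sits 0.01 in eta away from the
    # geometrical centre of the window (all values illustrative).
    print(position_correction(48.7, eta=1.2225, eta_c=1.2125))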


Finally, the energy of the cluster has to be corrected for modulation effects. The accordion geometry creates a dependence on φ which can be fitted by a periodic function

a0 ( Σi=1..3 ai cos(2iπ(φabs − Δφ)) + b1 sin(2πφabs) )

which is used to correct for the modulation effects.


egammaRec

The egammaRec algorithm is designed to discriminate electrons and photons from jets. The algorithm merges information from the electromagnetic clusters --- reconstructed with the LArClusterRec algorithm --- and from the tracking system.

The shape of the electromagnetic clusters is analysed in a way similar to the LVL2 T2Calo algorithm. Again, the algorithm exploits the fact that electron/photon showers are narrower than jets.

The algorithm then tries to combine clusters and tracks. For a given cluster, all tracks are examined in a window around the cluster position. In case more than one track is found, the track with the highest pT is retained. The algorithm calculates the ratio E/p between the cluster energy and the track momentum. Since the mass of the electron can be neglected at LHC energies, the expected value of the E/p ratio is about unity; thus a cluster-track match with E/p between 0.5 and 1.5 is required [63]. The information provided by egammaRec can be used in the subsequent particle-identification step. In the case of an electron hypothesis, jets can be rejected by analysis of the shower shape, track quality cuts, E/p matching, and the precision of the match between the cluster and the track positions.
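
The cluster-track matching and the E/p requirement can be sketched as follows; the window size and the track representation are assumptions made for the example, while the E/p acceptance window of 0.5-1.5 follows the text.

    # Sketch of the egammaRec cluster-track matching step: pick the highest-pT
    # track inside a window around the cluster and require E/p in [0.5, 1.5].

    def match_cluster_to_track(cluster, tracks, deta_max=0.05, dphi_max=0.10):
        """Return (matched track, E/p) or (None, None) if no acceptable match."""
        nearby = [t for t in tracks
                  if abs(t["eta"] - cluster["eta"]) < deta_max
                  and abs(t["phi"] - cluster["phi"]) < dphi_max]
        if not nearby:
            return None, None
        track = max(nearby, key=lambda t: t["pt"])     # keep the highest-pT track
        e_over_p = cluster["energy"] / track["p"]
        if 0.5 < e_over_p < 1.5:                       # electron hypothesis
            return track, e_over_p
        return None, None

    cluster = {"eta": 0.82, "phi": 2.10, "energy": 47.3}
    tracks = [{"eta": 0.81, "phi": 2.11, "pt": 31.0, "p": 45.0},
              {"eta": 0.84, "phi": 2.05, "pt": 4.2,  "p": 6.0}]
    print(match_cluster_to_track(cluster, tracks))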

MOORE and MuID

MOORE is an offline track reconstruction package for the Muon Spectrometer. The offline algorithm has been adapted to run in "seeded mode" in the HLT framework.

The algorithm calculates the drift distances in the MDT tubes from the drift times. The results are corrected to compensate for the time of flight, the propagation of the signals along the MDT wires, and the Lorentz effect. Then, MOORE performs pattern recognition on the MDT hits. The algorithm uses the RoI seed to select the φ slices containing the hits. Since the magnetic field of the Muon Spectrometer has no bending power in the φ coordinate, hits with roughly the same φ are assumed to belong to the same muon track. For each MDT chamber with tube hits, two hits are taken from each multi-layer, and four track segments are built from the lines tangential to the corresponding two drift circles. Hits that are within a given distance from the segment are added iteratively to the segment, and a straight-line fit is performed. The track segment with the lowest χ2 is chosen as the MDT segment. All the MDT segments of the outer station are combined with those of the middle station. The position of the fitted tracks is specified at the first measured point inside the Muon Spectrometer.

In order to be used for physics studies, an extrapolation of the track position to the interaction point is needed. To accomplish this task a different offline package, Muon Identification (MuID), is used. MuID models the multiple scattering of the muon track in the calorimeters with two scattering planes, while the energy loss is evaluated from the calorimeter measurements or from a parametrisation as a function of η and the muon momentum. With the scattering plane parameters and the lost energy, MuID is able to extrapolate the track to the interaction point. In the next step, tracks from the Muon Spectrometer and from the Inner Detector are combined, and a χ2 fit of the muon and Inner Detector track parameters with their summed covariance is performed [66]. All matches giving a satisfactory combined fit are retained as identified muons.
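
The final combination can be illustrated with a one-parameter toy of the covariance-weighted fit: the Muon Spectrometer and Inner Detector measurements of q/pT are averaged with inverse-variance weights and a match χ2 is computed. The numerical values and the χ2 cut are invented for the example; the real MuID fit uses the full five-parameter track covariance.

    # One-parameter toy of the MuID-style combination of the Muon Spectrometer
    # and Inner Detector measurements of q/pT.

    def combine_measurements(q_over_pt_ms, sigma_ms, q_over_pt_id, sigma_id):
        """Return (combined q/pT, its uncertainty, match chi2)."""
        w_ms, w_id = 1.0 / sigma_ms ** 2, 1.0 / sigma_id ** 2
        combined = (w_ms * q_over_pt_ms + w_id * q_over_pt_id) / (w_ms + w_id)
        sigma_comb = (w_ms + w_id) ** -0.5
        chi2 = (q_over_pt_ms - q_over_pt_id) ** 2 / (sigma_ms ** 2 + sigma_id ** 2)
        return combined, sigma_comb, chi2

    # A ~20 GeV muon measured in both systems (units 1/GeV, values illustrative):
    comb, sigma, chi2 = combine_measurements(q_over_pt_ms=0.052, sigma_ms=0.004,
                                             q_over_pt_id=0.049, sigma_id=0.002)
    accepted = chi2 < 9.0          # retain matches giving a satisfactory fit
    print(round(1.0 / comb, 1), accepted)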

1.6  Trigger Efficiencies

The performance of the HLT trigger algorithms has been tested with Monte Carlo data. The ATLAS detector was divided into detector "slices", and for each slice the complete trigger chain LVL1→LVL2→EF was simulated.

1.6.1  Electron selection performance

The efficiency of the selection for the electron trigger item e25i was evaluated for the LVL2 and EF algorithms, while the LVL1 efficiency was assumed to be 100%. Two types of Monte Carlo samples were used:

  • monochromatic samples with electrons of fixed energy ranging from 7 to 80 GeV and pile-up;
  • a realistic sample simulating Z→e+e-, plus pile-up.

The performance of the trigger algorithms was evaluated using calorimeter information only, and combining calorimeter clusters with tracks from the Inner Detector. The use of charged tracks helps reduce the rate of pions misidentified as electrons and of electrons from photon conversions.

The total efficiency for monochromatic electrons of ET=25 GeV, with a threshold of 22 GeV is 79.8% (see Table 1.2). This threshold corresponds to the trigger item e25i (see Table 1.1). The effective threshold is lower than the nominal energy of the trigger item, in order to achieve the maximum efficiency at the nominal energy. The efficiency curve for the electron trigger slice is shown in Figure 1.8.


Trigger Step Rate (Hz) Efficiency (%)
LVL1 8600 100
LVL2 Calo 1900 97.3
LVL2 Calo + ID 169 91.0
EF Calo 124 90.0
EF ID 76 84.9
EF Calo + ID 38 79.8


Table 1.2: Efficiencies and rates for the electron trigger slice, evaluated for monochromatic electrons with ET=25 GeV. The LVL1 efficiency is assumed to be 100% [71].




File:Ef-elec-effi.gif


Figure 1.8: Efficiency of the electron trigger system (all three levels combined) as a function of electron ET in the low luminosity regime. The effective threshold used in the e25i trigger item is ET=22 GeV [71].



1.6.2  Muon selection performance

The performance of the Muon Trigger slice was evaluated by generating single muon events with muon momentum up to 100 GeV. The LVL1 hardware for the barrel of the Muon Spectrometer was simulated by a software algorithm in the ATHENA package. The LVL1 trigger selection was performed using three low-pT thresholds of 6, 8 and 10 GeV and two high-pT thresholds of 20 and 40 GeV. The LVL1 selection algorithm has an efficiency of about 80% at the nominal threshold value, for all the thresholds considered. The efficiency curves for the two high-pT thresholds are shown in Figure 1.9. The low efficiency of the LVL1 trigger is mainly due to the geometric acceptance of the Muon Spectrometer. A plot of the efficiency of the algorithm in the η-φ space (see Figure 1.10) shows an "X-ray scan" of the Spectrometer Barrel, with the uninstrumented crack at η=0 and the magnet support structures.

The results of the LVL1 algorithm were used to seed the LVL2 algorithms muFast and muComb. The muFast algorithm achieves an efficiency of nearly 100% above the nominal thresholds of 6 and 20 GeV (see Figure 1.11). The muComb algorithm refines the muFast results by matching tracks from the Muon Spectrometer with tracks from the Inner Detector. The extrapolation performed by muComb matched tracks with a residual of 27 mrad in the azimuthal angle φ and 3 cm in z [73].

Finally, the LVL2 results were used with the EF algorithms MOORE and MuID. In addition to the single muon signal, background samples were generated. The background samples contained single muons plus minimum bias and cavern background; the background levels were generated at the nominal low luminosity levels and multiplied by safety factors 2, 5 and 10.

The background reduces the efficiency of the muon trigger algorithms to a level of about 85% with respect to the LVL2 result (see Figure 1.12); a ×10 background level also increases the rate of fake muon tracks to 20%. The reconstruction efficiency of MOORE and MuID, under nominal background levels, is 99% and 98.7% respectively [74].

The momentum resolution σpT/pT of the EF muon algorithms is about 4%. The benefit of combining muon tracks with Inner Detector tracks is evident for low-pT muons, which are better measured in the Inner Detector (see Figure 1.12). Minimum bias and cavern background have a negligible effect on the momentum resolution of true muon tracks.


File:L1-mu.gif   File:L1-mu-ineff.gif


Figure 1.9: Efficiency curves of the LVL1 Muon Trigger barrel algorithm, for the high-pT thresholds of 20 and 40 GeV [72].


Figure 1.10: Inefficiency of the LVL1 Muon Trigger algorithm. The plot shows the efficiency of the algorithm, in one half of the η-φ space spanned by the Muon Spectrometer Barrel. Dark areas indicate high inefficiency zones. The eight-fold pattern is caused by the support structure for the Spectrometer magnet. The dark area at η~0.75, φ~4.5 is caused by the presence of an access elevator in the spectrometer [72].



File:Mufast-effi.gif


Figure 1.11: Efficiency of the LVL2 algorithm muFast for pT thresholds of 6 and 20 GeV. The plotted efficiency is relative to the LVL1 algorithm [73].




File:Muid-eff.gif   File:Muid-comb v2.gif


Figure 1.12: Efficiency of the MuID algorithm at nominal low-luminosity background level and with safety factors 2, 5 and 10 [74].


Figure 1.13: Resolution of single muon reconstruction as a function of the muon pT for the EF muon algorithms MOORE and MuID and the Inner Detector algorithm iPatRec. The resolution of MuID at low pT improves when combining muon tracks with Inner Detector tracks [75].