

Chapter 1  The Top Quark

During the past thirty years, physicists have been building comprehensive and accurate models to describe the universe at sub-nuclear level. The result is an elegant set of theories collectively known as the Standard Model (usually abbreviated as SM). These theories provide powerful tools; their predictive power has been confirmed in many experiments worldwide.

The inspiring principle of the Standard Model is gauge symmetry. This principle states that the description of all physical phenomena does not vary if the Lagrangian equations describing the phenomena are subjected to a special set of local transformations --- the local gauge transformations. In analogy with classical mechanics, local gauge invariance in the interaction Lagrangian is associated with conserved currents and boson fields, which act as the medium of the interaction. A great success for gauge theories (and thus for the Standard Model) was the prediction of the properties of intermediate vector bosons W± and Z0, which mediate the weak interaction, and which were discovered at CERN in 1983 [2, 3, 4, 5].


Despite its glorious career, the Standard Model is far from writing the final word in the book of physics: there are several puzzles that still need to be solved and may prove to be a limit to the validity of the Standard Model. The most obvious puzzle is the existence of mass: the electroweak gauge symmetry applies only if the intermediate vector bosons are massless; this of course is contradicted by experimental measurements, which assign to W± and Z0 a mass of 80.425±0.034 and 91.1875±0.0021 GeV respectively [6]. In order to reconcile gauge symmetry and the existence of mass, a spontaneous symmetry breaking mechanism --- known as the Higgs mechanism --- is introduced in the Standard Model. The Higgs mechanism postulates the existence of at least one scalar, electrically neutral boson field --- the Higgs field --- which breaks the electroweak symmetry and interacts with particles. The coupling strength of the interaction of a particle with the Higgs field is proportional to the mass of the particle itself.


The carrier of the Higgs field --- the Higgs boson --- still needs to be found before the Higgs mechanism can be validated; however, the task of finding this elusive particle is rendered more difficult by the fact that the Higgs theory does not predict a value for MH, the mass of the Higgs boson. Despite this fact, it is still possible to constrain the allowed values of MH: since the Higgs field is responsible for generating the mass of the electroweak bosons, by measuring precisely the masses of the W± and Z0 one can derive an expectation value for MH.


The Higgs mass can be constrained by analyzing the one-loop corrections to the mass of the W±; these corrections are proportional to Δr:

$$\Delta r = \Delta r(\alpha, M_Z, M_W, m_t, M_H) = \Delta\alpha - \frac{c_W^2}{s_W^2}\,\Delta\rho + (\Delta r)_{\mathrm{rem}}$$

where Δα contains the leading logarithmic contributions from the light fermion loops, Δρ contains the mt² dependence from top/bottom loops and (Δr)rem contains the non-leading terms in which MH plays a role [7]. Thus, a precise measurement of MW and mt --- the other physical parameters α and MZ have a smaller impact --- can give us, through the evaluation of Δr, an estimate of MH, as shown in Figure 1.1.
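
For orientation, the quadratic mt dependence enters through the familiar one-loop expression for Δρ (quoted here from standard electroweak theory as a reminder, not taken from this chapter):

$$\Delta\rho \simeq \frac{3\,G_F\,m_t^2}{8\sqrt{2}\,\pi^2},$$

so a shift in mt feeds quadratically into the radiative corrections probed by MW.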


[Figure: figs/tmass_higgs.eps]


Figure 1.1: χ²-likelihood for the mass of the Higgs boson, depending on the value of the top quark mass. The most probable value corresponds to the minimum χ². The graph shows the shift in MH for the new Tevatron analyses (dotted line) compared to the previous Tevatron measurements (full line). The band around the full line shows the uncertainties in the theoretical model. Figure taken from [6].


Although the mass of the W± has been measured with a precision of a few tens of MeV by several experiments over the past twenty years, a measurement of the mass of the top quark with a comparable precision has not been performed yet. Precise determinations of the masses of both the W± and the top quark are necessary to obtain a good estimate of the Higgs mass.

The top quark was discovered at Fermilab in 1995 [8], and its mass was measured by two experiments, CDF and DØ. The average top mass in Run I was mt = 178.0 ± 2.7 ± 3.3 GeV, and the limiting factors in the mass resolution were the uncertainty in the jet energy scale and the limited statistics [7]. Hopefully, the statistics will improve once the LHC accelerator starts operating, pushing top quark physics from the discovery phase to the precision measurement phase. The LHC, with its high luminosity, will become a true top factory, producing up to 80 million tt̄ pairs per year and allowing the top mass measurement to be refined to a precision of ~1 GeV [7].
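
As a rough cross-check of that number, one can combine the NLO cross-section of about 800 pb quoted later in Table 1.1 with an assumed integrated luminosity of roughly 100 fb⁻¹ per year at design luminosity (the luminosity figure is an assumption of this sketch, not a value quoted in this chapter):

    # Back-of-the-envelope estimate of the yearly number of tt-bar pairs at the LHC.
    sigma_ttbar_pb = 800.0      # ~NLO tt-bar cross-section in pb (cf. Table 1.1)
    lumi_per_year_fb = 100.0    # assumed integrated luminosity per year in fb^-1

    n_pairs = sigma_ttbar_pb * lumi_per_year_fb * 1000.0   # 1 fb^-1 = 1000 pb^-1
    print(f"tt-bar pairs per year: {n_pairs:.1e}")          # ~8e7, i.e. ~80 million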


Although statistically important, pair production is not the only process by which top quarks can be produced at the LHC; there is also so-called single top production, where a single top quark is present among the final state particles. This process may play an important role in the measurement of mt: as only one top is present in the final state, the assignment of decay particles is simplified, whereas in pair production one has to find the correct assignment of decay products for both the t and the t̄. Hence the systematic error on the top mass due to the incorrect assignment of decay products may be reduced, resulting in a more precise measurement.

Another interesting property of electroweak single top production is its cross-section, which is proportional to the square of the CKM matrix element Vtb. A precise measurement of the cross-section for this class of processes may confirm the unitarity of the CKM matrix, or give indirect evidence for the existence of a fourth quark generation mixing with the (t,b)L doublet.


1.1  Properties of the top quark


The top quark was postulated in the Standard Model to complement the bottom quark in the third generation doublet; the model predicts an up-like quark with spin 1/2, weak isospin 1/2 and charge +2/3. Results from b-physics at LEP --- for example, the precision measurement of the Z → bb̄ partial width --- constrain the top mass to a value around 170 GeV [10].


Several properties of the top quark were studied during the first run at the Tevatron:

  • the tt̄ production cross section;
  • the kinematics of the tt̄ pair;
  • the top mass;
  • the nature of the V-A electroweak coupling, by determining the helicity of the W produced in the decay of the top quark;
  • spin correlations in tt̄ production;
  • electroweak production of top quarks;
  • τ decays of the top quark;
  • exotic decays;
  • decay channels involving Flavour Changing Neutral Currents.

Results at the Tevatron were hampered by low statistics. These studies will continue, with evident benefits from the increased luminosities, both in Tevatron Run II and at the LHC.

Among the Tevatron studies, the most interesting concerns the mass of the top quark: the combined measurements [9] of the CDF and DØ experiments at the Tevatron in Run I resulted in a top mass of 178.0 ± 4.3 GeV (see Figure 1.2). Such a large mass is quite extraordinary in the quark hierarchy: the top quark is roughly 35 times more massive than the b quark.


[Figure: figs/world_avg.eps]


Figure 1.2: Combined measurement of the top quark mass at the Tevatron during Run I. The Run I average includes measurements from lepton-plus-jets and di-leptonic decay channels for both CDF and DØ, plus the CDF jets-only channel. Figure taken from [9].


In the Higgs model, the top quark (like any other fermion) acquires mass via the Yukawa coupling to the Higgs field:

$$m_t = y_t\,\frac{v}{\sqrt{2}}$$

where yt is the strength of the Yukawa coupling between the top quark and the Higgs field, and v is the vacuum expectation value, i.e. the energy associated with the Higgs field in the vacuum. The Yukawa coupling factor of the top quark, yt, is close to unity; this suggests that the top quark may play an important role in models describing the fermionic masses. The Higgs model, however, cannot (yet) explain why the Yukawa coupling of the top quark is so large compared to the other quarks; Technicolor models, instead, provide both a dynamical breaking of the electroweak symmetry and a theory of fermion masses. One of these models, Topcolor-assisted Technicolor (usually referred to as TC2), predicts the existence of a new interaction that couples only to the third quark family [19] and generates the large top mass.
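
A quick numerical check of the statement that yt is close to unity, assuming the standard electroweak vacuum expectation value v ≈ 246 GeV (a number not quoted in this chapter):

    import math

    # Yukawa coupling of the top quark: y_t = sqrt(2) * m_t / v.
    m_top = 178.0    # GeV, Run I average quoted above
    v_higgs = 246.0  # GeV, assumed electroweak vacuum expectation value

    y_top = math.sqrt(2.0) * m_top / v_higgs
    print(f"y_t ~ {y_top:.2f}")   # ~1.02, i.e. close to unity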


This large mass opens a large phase space for the top decay, which results in a lifetime of about 4×10⁻²⁵ s, an order of magnitude shorter than the characteristic QCD hadronisation time of 28×10⁻²⁵ s (a short numerical estimate of the lifetime is sketched after the list below). This has some interesting experimental and theoretical aspects:

  • with the exception of the 1S ground state, toponium resonances cannot be formed because of the short lifetime of the top quark [16];
  • hadronisation will not degrade the mass measurement of the top quark; however, this advantage is partly spoiled by the fact that the majority of the decay products do hadronise;
  • since the time constant for spin-flip via gluon emission is longer than the lifetime of the top quark, decay products --- in particular the W boson --- retain the spin information of the original top quark [7];
  • pair production at threshold is not affected by non-perturbative QCD contributions, thus the threshold shape can be predicted by perturbative QCD [17].
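
As a simple sanity check of the quoted lifetime, one can take τ = ħ/Γ with an assumed Standard Model top width of roughly 1.5 GeV (the width is an assumption of this sketch, not a value given in the text):

    # Top-quark lifetime from its decay width: tau = hbar / Gamma.
    hbar_GeV_s = 6.582e-25   # hbar in GeV*s
    gamma_top = 1.5          # GeV, assumed SM top width for m_t ~ 175-180 GeV

    tau_top = hbar_GeV_s / gamma_top
    print(f"tau_t ~ {tau_top:.1e} s")   # ~4.4e-25 s, consistent with the value quoted above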

Top quark production in tt̄ pairs (see Section 1.2) has a larger cross-section than electroweak single top production (see Section 1.3).

1.2  Top pair production


At the tree level, the cross section for QCD tt̄ production is expressed as:

$$\sigma(s, m_t) = \sum_{i,j} \int_0^1 dx_1 \int_0^1 dx_2 \; f_i(x_i, \mu_f^2)\, f_j(x_j, \mu_f^2)\; \hat{\sigma}_{ij}(\hat{s}, m_t, \alpha_s(\mu_r^2)) \qquad (1.1)$$


This equation descends from the assumption that the QCD scattering process can be expressed in terms of independent --- factorised --- contributions. Consider the scattering of two protons: they are complex objects, made of several components called partons. The fractional momentum x carried by the partons inside the protons is described by probability densities called Parton Distribution Functions (PDF), indicated by f(x, µf²) in the equation. When we consider the scattering process, however, we assume that the scattering partons are independent of the protons that contained them; we have then factorised the matrix element for the scattering of the two partons from the PDFs of the protons that contained them. By doing so, we choose an energy scale µf, the factorisation scale, that separates the description of the parton as a statistical variable from its description as a pointlike particle in the scattering process. In this factorisation scheme, the partonic cross section σ̂ for tt̄ production depends only on the square of the partonic center of mass energy ŝ = xi xj s, the top mass mt and the running strong coupling constant αs(µr²). The coupling constant is evaluated at the renormalisation scale µr, which sets the energy limit above which the hard scattering is assumed to be independent of hadronisation effects.

Although the cross-section should be independent of the factorisation and renormalisation scales, the calculation of the scattering matrix element up to finite order introduces an unphysical dependence. At Leading Order (LO) the tt̄ cross section is usually evaluated with µf = µr = mt, and has an uncertainty of about 50%. The scale dependence can be reduced by performing Next-to-Leading Order (NLO) calculations of the same cross section: the expected cross-section at the LHC energy scale increases by about 30%, and the factorisation scale dependence is reduced to 12% [7].

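To make the factorised structure of Eq. (1.1) concrete, the following toy sketch assembles a "cross-section" as a double integral over parton momentum fractions. Everything in it is an invented placeholder --- the falling "PDF" shape and the step-function partonic cross-section are not real physics inputs --- so only the structure of the formula is illustrated:

    import numpy as np

    rng = np.random.default_rng(0)

    SQRT_S = 14000.0   # assumed pp center-of-mass energy in GeV
    M_TOP = 175.0      # GeV

    def toy_pdf(x):
        """Placeholder parton density (NOT a real PDF): steeply falling in x."""
        return (1.0 - x) ** 5 / (x + 1e-3)

    # Monte Carlo estimate of  sigma = int dx1 dx2 f(x1) f(x2) sigma_hat(x1*x2*s),
    # with a toy sigma_hat equal to 1 above the tt-bar threshold and 0 below it.
    n = 200_000
    x1, x2 = rng.random(n), rng.random(n)
    s_hat = x1 * x2 * SQRT_S**2
    above_threshold = s_hat > (2.0 * M_TOP) ** 2
    weights = toy_pdf(x1) * toy_pdf(x2) * above_threshold
    print(f"toy factorised cross-section (arbitrary units): {weights.mean():.3g}")
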
NLO calculations, however, are still affected by the problem of resummation: truncating the calculation of the cross section at some fixed order n in αs gives reliable results only if the physics processes included in the calculation happen at roughly the same energy scale. When two or more very different energy scales Q, Q1 are involved in the calculation, the effect of logarithmic terms of the type (αs ln(Q/Q1))^{n+1} has to be included in the computation [7]. The inclusion of these logarithms in the cross-section is called resummation.

There are several classes of logarithms that need to be resummed to calculate the cross-sections of heavy quark production processes:

  • small-x logarithms; these appear in the cross-section calculations when the center of mass energy of the colliding partons is several orders of magnitude larger than the energy scale Q of the hard scattering; the extrapolation of the PDFs between the two energy scales results in large logarithms ln(s/Q);
  • bremsstrahlung logarithms; these are connected to the emission of soft collinear gluons by the scattered particles;
  • threshold logarithms of the type ln(1-x); these appear when the final state particles carry a large fraction of the center of mass energy. These logarithms have a sizeable effect for tt̄ production at the LHC: this process receives its main contribution from gluon-gluon fusion (see Figure 1.3), and the gluon PDFs reach large values at small x, such as at the tt̄ threshold, x = 2mt/√s ≈ 0.025;
  • transverse momentum logarithms, which occur in the transverse momentum distribution of high-mass systems that are produced with vanishing pT in the LO process.

Resummation of the logarithms is performed by introducing a space conjugate to the phase space and transforming the cross-section equation into that conjugate space. With a proper choice of the conjugate space, the logarithms of the transformed cross-section can be summed into an exponential form factor; applying the inverse transformation to this form factor yields the correction to the fixed-order cross section.

Resummations performed on the tt̄ cross-section [7] show that Next-to-Leading Logarithm (NLL) corrections applied to the NLO diagrams further reduce the factorisation scale dependence by 6% (see Table 1.1). It is important to note that resummations affect not only the absolute value of the cross-section but also the kinematical properties of the process. For example, transverse momentum logarithms are associated with the emission of soft gluons in the initial state; a comparison of the pT spectrum of low-pT tt̄ pairs between NLO predictions and Monte Carlo shower algorithms --- which reproduce soft and collinear gluon radiation faithfully --- can point out whether resummation is needed or not.

  Factorisation scale (µf = µr)    NLO (pb)    NLO+NLL (pb)
  mt/2                             890         883
  mt                               796         825
  2mt                              705         782

Table 1.1: Resummation corrections to the total tt̄ cross-section (in pb) and residual factorisation scale dependence. Numbers taken from [7].

Top pair production at hadron colliders proceeds via the QCD processes qq̄ → tt̄ and gg → tt̄ (see Figure 1.3). The two processes have a different relative importance at the Tevatron and at the LHC: when we consider tt̄ production at threshold, the colliding partons need a minimum fractional momentum x = 2mt/√s in order to produce a tt̄ pair. Substituting the center of mass energies of the two colliders into this expression, one obtains x ≈ 0.2 for the Tevatron and x ≈ 0.025 for the LHC; collisions at the LHC therefore occur in a region where the colliding partons carry a small fraction of the momentum of the incoming particles. Small-x regions of the Parton Distribution Functions are mainly populated by gluons, hence at the LHC tt̄ production occurs mainly via gluon-gluon fusion, while at the Tevatron quark/anti-quark annihilation is the most important process.

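A one-line check of those two threshold values (collider center of mass energies assumed here as 1.8 TeV for Tevatron Run I and 14 TeV for the LHC):

    # Fractional momentum needed to produce a tt-bar pair at threshold: x = 2*m_t / sqrt(s).
    m_top = 175.0                      # GeV (approximate)
    for name, sqrt_s in [("Tevatron", 1800.0), ("LHC", 14000.0)]:
        x = 2.0 * m_top / sqrt_s
        print(f"{name}: x ~ {x:.3f}")  # ~0.19 at the Tevatron, ~0.025 at the LHC
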
Figure 1.3: Leading order Feynman diagrams for tt̄ production via the strong interaction. Diagram (a), quark-antiquark annihilation, is dominant at the Tevatron, while diagrams (b) and (c), gluon-gluon fusion, give the largest cross-section at the LHC.

1.3  Single top production

Single top production probes the weak coupling of the top quark to the down-type quarks (d, s, b); at LHC energies the cross section for single top production is about one third of that for tt̄ pair production, thus providing the opportunity to obtain adequate statistics for precision measurements in the electroweak sector, which cover the following topics:

  • the cross-section of the single top production processes is proportional to the square of the CKM element Vtb; a direct measurement of this parameter has not been performed yet, and violations of the unitarity of the 3×3 CKM matrix may point to the existence of a fourth quark generation;
  • the lower particle multiplicity in the final state of single top processes reduces combinatorial effects in the reconstruction of the top quark, giving a precision mass measurement complementary to the one obtained from tt̄ processes;
  • single top processes constitute a background to other processes of interest, such as tt̄ production or Higgs production;
  • single top quarks are produced with almost 100% spin polarisation; by measuring the spin polarisation of the top decay products, the V-A coupling of the Wtb vertex can be evaluated.

There are three dominant single top production mechanisms: the s-channel, the t-channel and associated production, illustrated in Figure 1.4.

At the Tevatron, s-channel production has the largest cross-section, while associated production is negligible; at the LHC the situation is reversed: because of the large gluon density at small x, both associated production and gluon-boson fusion have a larger cross-section than the s-channel process, with gluon-boson fusion having the largest cross-section of the three (see Table 1.2).

  Process                  Tevatron Run 1    Tevatron Run 2    LHC (t)        LHC (t̄)
  σ(s-channel) NLO (pb)    0.380 ± 0.002     0.447 ± 0.002     6.55 ± 0.03    4.07 ± 0.02
  σ(t-channel) NLO (pb)    0.702 ± 0.003     0.959 ± 0.002     152.6 ± 0.6    90.0 ± 0.5
  σ(associated) LL (pb)    -                 0.093 ± 0.024     31 +8/-2       31 +8/-2

Table 1.2: Single top quark production cross sections. Table taken from [10].

Figure 1.4: Tree-level Feynman diagrams for the single top production processes: (a) s-channel, (b, c) t-channel, (d, e) associated production with a W.

1.3.1  Single top production in the s-channel

The s-channel process (Figure 1.4.a) proceeds via a virtual time-like W boson that decays into a tb̄ pair. This process probes the kinematic region q² ≥ (mt + mb)². The cross-section for this channel at the LHC is much smaller than for the t-channel process; however, it is known to better precision, since the initial state involves quarks and anti-quarks, whose PDFs have been accurately measured.

Moreover, the quark luminosity can be constrained by measuring the similar Drell-Yan process qq̄ → W* → ℓν [7].

Calculations of the NLO cross-section have been performed, which show a dependence on the factorisation and renormalisation scales of about 2%; resummation effects add another 3% to the uncertainty, while the Yukawa corrections from loop diagrams involving the Higgs field are negligible. It has been shown, however, that an uncertainty of ±5 GeV on the measured mt results in an uncertainty of 10% on the cross section. Overall, taking into account the predicted statistical errors and the theoretical uncertainties, the measurement of the s-channel cross-section presents the most favourable method of evaluating the CKM matrix element Vtb.

1.3.2  Gluon-boson fusion

The t-channel process is also known as gluon-boson fusion, since in this process a b-quark from a gluon splitting interacts with a space-like W to produce the top quark. Gluon-boson fusion is the channel with the largest cross-section for single top production at the LHC, about 23 times larger than that of the s-channel. At Next-to-Leading Order the process is composed of two diagrams, shown in Figure 1.4(b, c). Both diagrams depict a b-quark interacting with a W boson --- emitted by the colliding parton --- to produce a top quark; in diagram (b) the b-quark comes from the quark sea inside the proton, while diagram (c) is a NLO correction to diagram (b) and is relevant if we instead consider the b-quark in the initial state as the product of a splitting gluon, where the split bb̄ pair has a non-vanishing transverse momentum. When the bb̄ pair is collinear with the emitting gluon, diagram (c) becomes a non-perturbative process that can be included in the b-quark PDF; the NLO corrections in this kinematical region have to be subtracted from the computation to avoid double counting of this diagram. The two diagrams (b) and (c) have the same experimental signature --- a forward scattered light quark, a W and a b-quark --- since the additional b-quark from diagram (c) has pT < 20 GeV in 75% of the events and is thus hardly observable [10].

The NLO cross-section for the t-channel has a worse scale dependence than the s-channel, with an uncertainty of about 5%. The top mass uncertainty contributes 3% when the top mass is varied by ±5 GeV. Yukawa corrections are small (of the order of 1%) [10].

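A quick check of the "about 23 times" statement against the NLO numbers in Table 1.2, summing the t and t̄ columns for the LHC:

    # Ratio of t-channel to s-channel single top cross-sections at the LHC (Table 1.2).
    t_channel = 152.6 + 90.0   # pb, t + t-bar
    s_channel = 6.55 + 4.07    # pb, t + t-bar
    print(f"t-channel / s-channel ~ {t_channel / s_channel:.0f}")   # ~23
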
1.3.3  Associated production

In this production channel the single top quark is created together with a real W boson. Two Feynman diagrams --- depicted in Figure 1.4(d, e) --- contribute to this channel. The t-channel diagram (e), however, gives a smaller contribution, since it describes the splitting of a gluon into a tt̄ pair and is mass-suppressed; its initial state is further penalised by the low gluon density at high x values. The s-channel diagram (d) dominates associated production; the 1/s scaling of this process, combined with the small b-quark density, results in a negligible cross-section at the Tevatron, while at the LHC it contributes about 20% of the total single top production.

A subset of the NLO diagrams has been computed for this production channel. Gluons in the initial state splitting into a collinear bb̄ pair have been included in the b-quark PDF, similarly to the NLO corrections to the t-channel. It must be remarked that one of the corrections corresponds to the strong production process gg → tt̄, followed by the decay of the top quark. This diagram represents a background to the associated production channel, and should be subtracted from the cross-section computation [7]. The cross-section has a strong dependence on both the PDFs and the renormalisation scale, and the total uncertainty is of the order of 30% (see Table 1.2).

1.4  Top quark decay

In the Standard Model, the top quark decays predominantly into a b quark and a W, with a branching ratio of 0.998. ALEPH and OPAL conducted searches for Flavour Changing Neutral Current (FCNC) decays, which resulted in upper limits for t → γq and t → Zq of 0.17 and 0.137 respectively [13]. Other SM-allowed decays to down-type quarks are very difficult to disentangle from the QCD background, as opposed to a b-tagged jet. Non-SM decays, however, may provide suitable experimental signatures.

An extension of the SM Higgs sector could induce new channels for the top decay. In the so-called two-Higgs-doublet models (2HDM), the Higgs sector is composed of two neutral scalars (h, H), a neutral pseudo-scalar (A) and two charged scalars (H±) [10]. In this hypothesis, the top quark could decay into a charged Higgs: t → bH⁺. Both CDF and DØ have performed indirect searches in Run I data. No evidence has been found, but the searches will continue in Run II. A direct measurement of this channel may be performed by searching for the signature H± → τν, while a heavier charged Higgs decaying to quarks would suffer from the QCD jet background.

The 2HDM usually postulates that the charged Higgs couples preferentially to third-generation quarks because of their large mass. When this assumption is relaxed, new decay channels of the top quark involving Flavour Changing Neutral Currents may emerge: t → cVi⁰Vj⁰ at tree level and t → cV⁰ at one loop, where V⁰ indicates a γ, a Z or a gluon. The signatures of these channels are, however, very hard to disentangle from the QCD background.

1.5  Top detection

Top quarks in the SM decay almost exclusively into Wb.

Because of fermion universality in the electroweak interactions, the W boson decays 1/3 of the time into a lepton/neutrino pair and 2/3 of the time into a quark/anti-quark pair. Since two real W bosons are present in tt̄ events, the signatures of the events are classified according to the decay channels of the W bosons:

  • all-jets channel: both Ws decay into a quark/anti-quark pair. The event has at least six high-pT jets, two of which have to be b-tagged. Despite having the highest branching ratio (44%), this decay channel suffers heavily from the QCD background and from ambiguities in the assignment of jets to the originating Ws.
  • lepton+jets channel: one W decays into a lepton-neutrino pair, the other W into a quark/anti-quark pair. One isolated lepton, four jets (two of them b-tagged) and missing energy characterise the event. The branching ratio is about 30%.
  • di-lepton channel: both Ws decay into a lepton-neutrino pair. For practical purposes only e and µ are considered, since τ decays are difficult to distinguish from the QCD background. The events have two high-pT leptons, two jets (at least one of which is b-tagged) and missing energy due to the neutrinos. This signature is quite clear, being affected mainly by electroweak background. The only drawback of this decay channel is its low branching ratio (5%).

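The quoted branching ratios follow directly from counting W decay modes (each lepton flavour 1/9, hadronic decays 6/9), with only e and µ counted as leptons in the lepton+jets and di-lepton channels; a short check:

    from fractions import Fraction

    # W branching fractions from lepton universality: e, mu, tau each 1/9, hadrons 6/9.
    lep = Fraction(1, 9)
    had = Fraction(6, 9)

    all_jets    = had * had                  # both Ws hadronic
    lepton_jets = 2 * (2 * lep) * had        # one W -> e or mu, the other hadronic
    di_lepton   = (2 * lep) ** 2             # both Ws -> e or mu

    for name, br in [("all jets", all_jets), ("lepton+jets", lepton_jets), ("di-lepton", di_lepton)]:
        print(f"{name:12s} {float(br):.1%}") # ~44.4%, ~29.6%, ~4.9%
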
1.6  Event selection and backgrounds

Top quark pairs are produced near threshold and have low kinetic energy, and thus present little or no boost along the beam direction. Since the decay products of the top quark have a much smaller mass than the top quark itself, they typically carry a large transverse momentum and cross the central region of the detector (|η| < 2.5); the low boost of the decaying top accounts for the good angular separation of the decay products. If the di-lepton or lepton+jets channels are considered, a large missing transverse energy ETmiss is part of the signature. Experimental cuts on pT and ETmiss alone are sufficient to strongly reduce the QCD background, which has an exponentially falling ET spectrum and small ETmiss [10]. Tagging one or more b-jets (either by secondary vertex or by soft muon tagging) further reduces the QCD background.

In addition to the above cuts, further selections can be performed according to the topological features of top production and of its decay channels: for example, semi- and fully-leptonic decays present one or two high-pT isolated leptons; topological variables such as HT (the scalar sum of the ET of all observed objects), sphericity (S) and aplanarity (A) can be employed to discriminate against the QCD background.

1.6.1  Top mass determination in tt̄ events

The selection cuts used at CDF in the lepton+jets sample require [23]:

  • one isolated lepton with pT > 20 GeV;
  • missing energy ETmiss > 20 GeV;
  • at least three jets with pT > 15 GeV and |η| < 2 and one jet with pT > 8 GeV and |η| < 2, with at least one b-tagged jet; or
  • at least four jets with pT > 15 GeV and |η| < 2 and no b-tagging requirement.

The selected events are subjected to a kinematical fit with the constraints Mjj = Mℓν = MW and Mt = Mt̄; of the 24 possible combinations --- 12 if b-tagging is included --- the one with the lowest χ² is chosen (a toy illustration of this combinatorial step is sketched at the end of this subsection). The reconstructed top masses are histogrammed and fitted with signal and background templates, where the signal templates vary according to the mass; the mass that provides the best likelihood in the fit determines the final result.

Top mass measurements in the lepton+jets channel at DØ employ both a reconstruction technique similar to the one described above and a likelihood method. This method examines the kinematical features of each event and compares them with templates based on sample events generated at tree level by the simulation package VECBOS, both for tt̄ production (signal) and for W+4j (background), convoluted with a transfer function that models fragmentation and detector effects. The probability for each event to be a background or a signal event is then used to compute the likelihood for the top quark to have a given mass.

Selection cuts at ATLAS require [21]:

  • one isolated lepton with pT > 20 GeV and |η| < 2.5;
  • missing energy ETmiss > 20 GeV;
  • at least four jets with pT > 40 GeV and |η| < 2.5, of which at least two are b-tagged.

Two of the non-tagged jets are used to reconstruct the W, with the constraint |Mjj - MW| < 20 GeV; the reconstructed W is then combined with one of the two b-jets to form the top quark. Of all the possible jjb combinations, either the one which gives the highest pT to the reconstructed top, or the one with the largest angular separation between the b-jet and the other jets, is assumed to represent the top quark [21]. The efficiency of this reconstruction is estimated to be 5%, with a top mass resolution of 11.9 GeV.

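Below is a minimal, self-contained sketch of the combinatorial lowest-χ² assignment described above for the lepton+jets fit. Everything in it is a toy: the χ² definition, the W/top resolutions and the input four-vectors are illustrative assumptions, not the experiments' actual fitting machinery:

    import itertools
    import numpy as np

    M_W = 80.4                     # GeV
    SIGMA_W, SIGMA_T = 10.0, 15.0  # GeV, toy resolutions (assumptions)

    def inv_mass(*p4s):
        """Invariant mass of a set of (E, px, py, pz) four-vectors."""
        e, px, py, pz = np.sum(p4s, axis=0)
        return np.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))

    def best_assignment(jets, lepton, neutrino):
        """Try every assignment of four jets to (b_had, b_lep, q1, q2) and keep
        the lowest chi^2, built from M(q1 q2) ~ M_W and M(q1 q2 b_had) ~ M(l nu b_lep)."""
        best = None
        for b_had, b_lep, q1, q2 in itertools.permutations(range(len(jets)), 4):
            if q1 > q2:            # swapping the two W jets gives the same chi^2
                continue
            m_w_had = inv_mass(jets[q1], jets[q2])
            m_t_had = inv_mass(jets[q1], jets[q2], jets[b_had])
            m_t_lep = inv_mass(lepton, neutrino, jets[b_lep])
            chi2 = ((m_w_had - M_W) / SIGMA_W) ** 2 + ((m_t_had - m_t_lep) / SIGMA_T) ** 2
            if best is None or chi2 < best[0]:
                best = (chi2, (b_had, b_lep, q1, q2), m_t_had)
        return best                # (chi^2, assignment, reconstructed hadronic top mass)

Fed with measured jet, lepton and (ETmiss-derived) neutrino four-vectors, such a routine returns the preferred pairing and its reconstructed top mass, which would then be histogrammed and compared with templates as described above.
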
1.6.2  Top mass determination in single top events

Gluon-boson fusion and s-channel production were studied at the Tevatron during Run I, while associated production has a negligible cross-section at the Tevatron (see Table 1.2).

CDF performed an inclusive single top analysis, searching for single top events in the W+jets sample and requiring the invariant mass of the lepton, ETmiss and highest-pT jet to lie between 140 and 210 GeV. This was followed by a likelihood fit of the total transverse energy HT. This technique gave an upper limit on the inclusive single top cross section of 14 pb. CDF also performed two separate searches for s- and t-channel production, which resulted in upper limits of 18 and 13 pb respectively. DØ used neural networks, obtaining upper limits of 17 pb for s-channel and 22 pb for t-channel production.

Since single top experimental signatures have a lower jet multiplicity than pair production, stringent cuts are required to isolate single top events from the QCD background; moreover, since each of the three single top production processes is a background to the other two, the cuts need to be tailored to each of the three channels.

The ATLAS reconstruction technique entails a set of pre-selection cuts to reduce the QCD background; this set includes [21]:

  • at least one isolated lepton with pT > 20 GeV;
  • at least two jets with pT > 30 GeV;
  • at least one b-tagged jet with pT > 50 GeV.

The net effect of these cuts is to select the leptonic decay products of the single top. On top of the pre-selection cuts, each channel adds its own set of selection cuts.

s-channel selection cuts

  • exactly two jets within |η| < 2.5; this cut reduces the tt̄ background, which has a higher jet multiplicity;
  • both jets must be b-tagged and have pT > 75 GeV; this cut reduces both the W+jets and the gluon-boson fusion backgrounds, where the second b-jet is either missing or has low pT;
  • HT > 175 GeV and total invariant mass greater than 200 GeV; these cuts reduce the Wjj background, which tends to have smaller total transverse energy and smaller invariant mass;
  • reconstructed top mass between 150 and 200 GeV.

[Figure: figs/sign-schan.eps]
Figure 1.5: Experimental signature of single top production in the s-channel.

Gluon-boson fusion selection cuts

  • exactly two jets with pT > 30 GeV; this cut reduces the tt̄ background;
  • one of the two jets must have |η| > 2.5 and pT > 50 GeV; this cut selects the forward light quark, which is the trademark of the gluon-boson fusion process;
  • the other jet must be b-tagged and have pT > 50 GeV; this cut reduces the Wjj background;
  • HT > 200 GeV and total invariant mass greater than 300 GeV; these cuts reduce the Wjj background;
  • reconstructed top mass between 150 and 200 GeV.

[Figure: figs/sign-wgluon.eps]
Figure 1.6: Experimental signature of single top production in the gluon-boson fusion channel.

Associated production selection cuts

  • exactly three jets with pT > 50 GeV; this cut reduces the tt̄ background;
  • one of the three jets must be b-tagged; this cut reduces the Wjj background, and further reduces the tt̄ background;
  • total invariant mass less than 300 GeV; this cut reduces the tt̄ background;
  • invariant mass of the two non-b jets between 65 and 95 GeV; this cut enhances the probability that a real W boson is present in the final state.

[Figure: figs/sign-wt.eps]
Figure 1.7: Experimental signature of single top production in the associated production channel.

  Efficiencies (%)    Wg             W*          Wt           tt̄     Wjj
  Pre-selection       20.0           27.0        25.5         44.4    0.667
  Selection           1.64           1.67        1.27
  Events / 30 fb⁻¹    26800 ± 1000   1106 ± 40   6828 ± 269

Table 1.3: Efficiencies of the selection cuts for gluon-boson fusion (Wg), s-channel production (W*) and associated production (Wt) at ATLAS. The pre-selection efficiencies for the two most important background processes, tt̄ and Wjj, are given for comparison. Table abridged from [21].

1.7  Performance requirements for top physics

Studying electroweak top production processes at hadron colliders involves the correct identification of the decay chain of the top quark, which may contain the following "ingredients":

  • highly energetic isolated leptons;
  • highly energetic hadronic jets;
  • one or more b-tagged jets;
  • missing energy from the leptonic decay of the W.

To each item on this list correspond precise requirements on the performance of the detector: good detector performance results in a higher efficiency for the event analyses and in a lower systematic error on the measurements of the parameters involved in the physical process under study.

These requirements include:

  • correct identification of electrons, photons and pions;
  • calibrated calorimeters for precise jet energy measurements;
  • efficient track reconstruction for high-pT leptons;
  • tagging of jets originated by b-quarks;
  • detector coverage up to high pseudorapidity for accurate missing transverse energy measurements.

In the past years, the design of collider detectors has been optimised to fulfil these requirements. The typical detector is composed of several concentric subdetectors, each performing a specific task, and all or part of the detector is immersed in one or more magnetic fields in order to evaluate particle momenta.

The innermost layer of a detector is usually instrumented with silicon trackers. These detectors track the path of charged particles close to the interaction point. The reconstructed tracks can be extrapolated back to the interaction point to evaluate the impact parameters or, if the extrapolation leads to a different point, to reveal a secondary vertex. Secondary vertices are a signal of long-lived unstable particles, such as hadrons containing a b-quark.

Outside the silicon trackers lies the calorimetry system. The antiquated term "calorimetry" reflects the purpose of this part of the detector: much like the Dewar bottles with which good old Mister Joule was entertaining himself during his honeymoon, today's calorimeters have to contain and measure the total energy of the interaction. In addition, modern calorimeters make it possible to measure individually the energy of each of the particles created in the hard interaction, and to measure their direction. There are two types of calorimeters, electromagnetic and hadronic, which exploit different physical phenomena to measure the energy of the incoming particles: the first type usually deals with electrons, photons and the occasional soft pion, while the second type deals with hadrons. A well-designed detector utilises both types.

The calorimeters successfully contain all types of particles but two: neutrinos --- which can be detected by measuring an imbalance of energy in the calorimeters --- and muons. Muons deposit a minimal amount of energy in the calorimeters, so an alternative method is needed for measuring their energy. For this reason, the typical detector is provided with a muon spectrometer, located in the outermost layer, outside the calorimeters. The spectrometer uses gaseous detectors, such as drift chambers, to track muons and measure their momentum.

ATLAS is a general-purpose detector designed for the LHC; thus it adheres to the detector design described above.

Apart from the performance requirements related to the physics program, the ATLAS detector is required to cope with the harsh environment created by the LHC accelerator: the detector must operate efficiently in all luminosity regimes, from an initial low luminosity period of 2×10^33 cm^-2 s^-1 up to the nominal LHC luminosity of 10^34 cm^-2 s^-1. High luminosity poses a double threat: at each bunch crossing about 20 proton-proton interactions take place, most of which are soft interactions (minimum bias events) that constitute a background for the physics processes of interest. High luminosity also means high levels of radiation around the interaction points and in the forward regions of the detector: these detector elements need to be radiation tolerant in order to keep an acceptable level of performance during the years of operation. In the next sections I will outline the measurement techniques used at ATLAS and the performance they can provide, while the description of the detector and the evaluation of its performance in real-life tests will be laid out in Chapter 2.

1.7.1  Vertex identification and b-tagging

The tagging of b-quarks is a very important tool in the study of top decay processes. Given the high branching ratio for a top to decay into Wb, the requirement of a b-tagged jet is one of the most important analysis cuts to reject backgrounds which have a high jet multiplicity but a low b-jet content (such as W+jets). The b-tagging needs to combine a high rejection power with a high efficiency --- otherwise it would lower the overall efficiency of the analysis.

The b-tagging algorithm in ATLAS is based on the long lifetime of the b-quark: these quarks, on average, decay about 470 µm away from the primary interaction vertex; by measuring the tracks of the decay products it is possible to reconstruct the b-decay vertex and tag the jet formed by the decay products as a b-jet.

For each hadronic jet found in the calorimeter, all tracks measured in the Inner Detector (see Section 2.2) with pT > 1 GeV and inside a cone of radius ΔR < 0.4 around the jet axis are assigned to the jet. Each track yields two parameters: the impact parameter a0 and the longitudinal impact parameter z0. The impact parameter a0 is the distance between the extrapolated track and the primary interaction vertex in the transverse plane; this parameter is signed: the sign is positive if the extrapolated track intersects the beam axis between the primary vertex and the jet, and negative if the track intersects the beam axis behind the primary vertex --- that is, if the primary vertex is located between the intersection point and the jet. The longitudinal impact parameter z0 is the z coordinate of the intersection point, and is signed in the same way as a0.
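To make the sign convention concrete, the following Python fragment is a rough sketch that assigns a sign to the transverse impact parameter from the track's point of closest approach and the jet direction; the simplified two-dimensional geometry, the function name and the example numbers are illustrative assumptions, not the ATLAS reconstruction code.

    import math

    def signed_a0(pca_xy, jet_phi):
        """Toy signed transverse impact parameter.

        pca_xy  : (x, y) of the track's point of closest approach to the
                  primary vertex (assumed to sit at the origin), in mm.
        jet_phi : azimuthal angle of the jet axis.

        The magnitude is the transverse distance of closest approach; the
        sign is taken positive when the point of closest approach lies on
        the jet side of the primary vertex, negative otherwise.
        """
        x, y = pca_xy
        a0 = math.hypot(x, y)
        # Projection of the closest-approach point onto the jet direction
        # decides the sign.
        proj = x * math.cos(jet_phi) + y * math.sin(jet_phi)
        return a0 if proj >= 0.0 else -a0

    # Example: a track whose closest approach lies ~80 µm from the vertex,
    # on the jet side, gets a positive a0.
    print(signed_a0((0.06, 0.05), jet_phi=0.7))   # ~ +0.078 mm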
Montecarlo data have been generated to study the distribution of the impact parameters for u- and b-quark jets; both distributions show a peak at zero (no secondary vertex) and a small tail at negative values (given by particles wrongly assigned to the jet); however, the b-jet distributions for a0 and z0 also show a tail on the positive side, corresponding to a secondary vertex. The original algorithm --- illustrated in the ATLAS Technical Design Report (TDR) [20] --- used only the impact parameter a0 (hence it is referred to as the 2D algorithm), while the current algorithm combines the two impact parameters (hence it is called the 3D algorithm).

The a0 and z0 parameters of each track are used to compute a significance value:

S(a_0, z_0) = \frac{a_0}{\sigma(a_0)} \oplus \frac{z_0}{\sigma(z_0)}

where σ(a0) and σ(z0) are the resolutions on the impact parameters.

The distributions of the significances for u-jets and b-jets are shown in Figure 1.8.a. Each track is assigned a value given by the ratio of the significance distributions for the two flavours, and the jet is assigned a weight given by the logarithmic sum of the ratios of the tracks matching the jet:

W_{jet} = \sum_{tracks} \ln\left( \frac{b(S)}{u(S)} \right) .

The weight is a likelihood measure for a given jet to originate from a b-quark; by applying a cut on this weight --- optimised to reach 50% efficiency on b-tagging --- it is possible to separate light jets from b-jets.
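To make the two formulas above concrete, here is a minimal Python sketch of the weight computation; the significance templates b(S) and u(S) are toy stand-ins for the Montecarlo distributions of Figure 1.8, and all names and numbers are illustrative.

    import math

    def significance(a0, sigma_a0, z0, sigma_z0):
        """Combine the two impact-parameter significances in quadrature."""
        return math.hypot(a0 / sigma_a0, z0 / sigma_z0)

    def jet_weight(tracks, b_pdf, u_pdf):
        """Sum of log-likelihood ratios over the tracks assigned to the jet.

        tracks       : iterable of (a0, sigma_a0, z0, sigma_z0) tuples
        b_pdf, u_pdf : callables returning the significance distributions
                       for b-jets and light jets respectively.
        """
        w = 0.0
        for a0, sa0, z0, sz0 in tracks:
            s = significance(a0, sa0, z0, sz0)
            w += math.log(b_pdf(s) / u_pdf(s))
        return w

    # Toy shapes standing in for the Montecarlo templates: light jets peak
    # at small significance, b-jets have a longer positive tail.
    u_pdf = lambda s: math.exp(-0.5 * s * s) + 1e-6
    b_pdf = lambda s: 0.7 * math.exp(-0.5 * s * s) + 0.3 * math.exp(-abs(s) / 4.0) + 1e-6

    tracks = [(0.12, 0.03, 0.20, 0.10), (0.02, 0.03, 0.05, 0.10)]
    print(jet_weight(tracks, b_pdf, u_pdf))   # large positive weight -> b-like jet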
Though it is not possible to investigate the performance of b-tagging algorithms in a test setup, it is possible to apply Montecarlo tools to study detector effects. The likelihood distributions for light and b-jets obtained from the Montecarlo simulation are shown in Figure 1.8.b.

[figs/bsig.eps]

Figure 1.8: a) significance of the impact parameter a0 for u-jets and b-jets. b) jet weights for u-jets and b-jets [25].

In a recent study [25], the comparison of the b-tagging performance in a realistic ATLAS environment against the TDR results includes:

- changes in the layout of the Pixel Detector (see Section 2.2.1): the initial layout has only 2 pixel layers, and the intermediate pixel layer will be installed at a later date; the b-layer has been moved further away from the beam with respect to the TDR layout;
- an increase of the pixel size in the η direction from 300 to 400 µm;
- the presence of "ganged" pixels (see Section 2.2.1) in the layout;
- an increase of dead material in the Pixel Detector due to a redesign of the detector services;
- the staging of the wheels of the Transition Radiation Detector (see Section 2.2.3);
- the simulation of detector inefficiencies: on top of the standard 3% inefficiency for Pixels, SCT strips and TRT straws, the effects of 1% and 2% dead pixel modules are added;
- the effects of misalignment between the Pixel Detector and the Semiconductor Tracker (see Section 2.2.2);
- the addition of minimum bias pile-up events.

The study used samples from tt, ttH, WH (MH=120 GeV) and WH (MH=400 GeV) to evaluate the efficiency of the b-tagging algorithm for b-jets over a wide range of pseudorapidity and pT. The Inner Detector was simulated with GEANT3, while the jets were reconstructed using both ATLFAST and the full detector simulation --- the differences were found to be marginal. The results of the study, summarised in Table 1.4, are the following:

- changes in the detector layout amount to a reduction in rejection power by a factor 0.5±0.2, mainly due to the increase of material in the Inner Detector;
- staging of the intermediate layer of the Pixel Detector amounts to a reduction by a factor 0.7±0.1;
- pile-up events at low luminosity, realistic detector inefficiencies, and misalignment between the Pixel Detector and the SCT during the detector commissioning stages amount to a factor 0.75±0.05;
- changing the pixel size to 400 µm in the b-layer amounts to a 10% decrease in rejection;
- improved track fitting algorithms increase the rejection power by a factor 1.8; the improved algorithms perform well with the high track multiplicity coming from pile-up events at high luminosity: the rejection power is degraded by only 10% at high luminosity;
- using the new 3D algorithm instead of the old 2D algorithm increases the rejection power by a factor 1.9; a factor 2.8 can be reached by an improved algorithm [25] which combines the 3D likelihood with other discriminating variables (such as the invariant mass of the tracks from the secondary vertex).
Despite the decrease in rejection caused by the new layout, improvements in the tracking and tagging algorithms can still provide a rejection factor of about 150 for a b-tagging efficiency εb=60%, higher than the nominal TDR value of R=100 at εb=50%. The b-tagging algorithm can realistically achieve R=100 at εb=70% in the low luminosity regime, increasing the efficiency of all physics analyses based on the identification of b-jets [25].

This study also produced a new parametrization of b-tagging, depending on pT and η, to be used in conjunction with ATLFAST data.

Algorithm       TDR        Initial layout
                perfect    perfect    +pile-up    +400µm    +Ineff 1/2%
2D  εb=50%      300±10     204±6      203±5       200±5     156±4
2D  εb=60%      83±1       62±1       60±1        58±1      49±1
3D  εb=50%      650±31     401±15     387±14      346±12    261±8
3D  εb=60%      151±3      112±2      109±2       97±2      79±1

Table 1.4: Rejection power of the 2D and 3D algorithms, both for the TDR and for the current detector layout. "Perfect" refers to the detector without inefficiencies, no pile-up and with the original pixel size in the b-layer. Realistic effects are added from left to right, degrading the rejection power. Table taken from [25].

1.7.2  Jet reconstruction

Top production events always include two or more high energy hadronic jets; for a better comprehension of the underlying physics phenomena, it is necessary to understand the connection between the jets and the particles that generated them. However, the definition of a jet is not unique, as different reconstruction algorithms may produce signatures with incompatible characteristics (jet multiplicity, energy, etc.).
Thus, the choice of the jet algorithm introduces a systematic effect in the measurement.

A jet algorithm should not only give a good estimate of the properties of the originating particles, but should also be consistent both at the experimental and at the theoretical level, in order to facilitate the comparison between experimental results and theoretical or Montecarlo predictions. The ideal jet reconstruction algorithm should have these characteristics [26]:

Infrared safety: the result of the jet finding algorithm is not affected by soft radiation;
Collinear safety: the jet finding algorithm is not sensitive to the emission of collinear particles;
Invariance under boosts: the algorithm result is independent of boosts along the beam axis;
Stability with luminosity: the algorithm is not affected by the presence of minimum bias events and multiple hard scatterings;
Detector independence: the algorithm performance is independent of the type of detector that provides the data, and the algorithm does not degrade the intrinsic resolution of the detector;
Ease of calibration: the kinematical properties of the jet are well defined and allow calibration.

Jet reconstruction algorithms parse a list of clusters --- which can be made of calorimeter towers in a real experiment or of particle clusters in a Montecarlo simulation --- and try to merge neighbouring clusters into jet candidates. The properties of the merged clusters are processed, according to a "recombination scheme", to produce the kinematical variables of the jet candidate.

There are two types of algorithms for merging clusters: cone algorithms and KT algorithms.

Cone algorithms

In cone algorithms, a 2-dimensional map of calorimeter cells is scanned, looking for clusters which have a local maximum of deposited energy. These maxima are used as "seeds" for the jet search and stored in a list, ordered by decreasing ET. For each seed in the list, the algorithm sequentially adds clusters which lie within a distance R from the center of the starting seed --- the distance in the detector frame of reference is defined as ΔR = √(Δη² + Δφ²), and it describes a cone centered on the interaction point.

At each step of the sequence, the energies of the merged cluster and of the jet candidate are summed, and the centroid of the jet is recalculated by weighting the coordinates of the clusters with their transverse energy ET. The algorithm stops either when there are no more clusters available or when the cone is "stable", which means that the centroid is aligned with the cone axis [26]. In order to reduce the jet multiplicity, a cutoff on the minimum jet energy Emin can be introduced: jets with energy lower than Emin are rejected by the algorithm.
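A heavily simplified version of this seeded cone procedure can be sketched in a few lines of Python; the seed threshold, the cone radius and the treatment of overlaps (seeds already inside a found jet are simply skipped) are illustrative choices, not the parameters of any real experiment.

    import math

    def delta_r(eta1, phi1, eta2, phi2):
        dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
        return math.hypot(eta1 - eta2, dphi)

    def cone_jets(clusters, R=0.4, et_seed=2.0, et_min=5.0):
        """Very simplified seeded cone algorithm on (et, eta, phi) clusters."""
        seeds = sorted((c for c in clusters if c[0] > et_seed),
                       key=lambda c: c[0], reverse=True)
        jets = []
        for _, seed_eta, seed_phi in seeds:
            # Skip seeds already absorbed by a found jet (a crude stand-in
            # for the split/merge treatment discussed below).
            if any(delta_r(seed_eta, seed_phi, je, jp) < R for _, je, jp in jets):
                continue
            eta, phi, et = seed_eta, seed_phi, 0.0
            for _ in range(20):               # iterate until the cone is stable
                members = [c for c in clusters if delta_r(c[1], c[2], eta, phi) < R]
                et = sum(c[0] for c in members)
                if et == 0.0:
                    break
                new_eta = sum(c[0] * c[1] for c in members) / et
                new_phi = sum(c[0] * c[2] for c in members) / et
                if delta_r(new_eta, new_phi, eta, phi) < 1e-4:   # stable cone
                    break
                eta, phi = new_eta, new_phi
            if et >= et_min:
                jets.append((et, eta, phi))
        return jets

    towers = [(30.0, 0.10, 1.00), (12.0, 0.25, 1.15), (1.0, 2.0, -2.0)]
    print(cone_jets(towers))   # one jet of ET ~ 42 near (0.14, 1.04)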
There are two main disadvantages in the use of cone algorithms. First of all, the use of seeds makes the algorithm sensitive to infrared radiation and collinear effects. In the case of infrared emission, the emitted radiation creates additional energy clusters which can be used as jet seeds, biasing the result of the algorithm. A similar thing occurs in the case of collinear radiation, where the bias is caused by the jet energy being spread over different towers by collinear particles. These effects can be avoided by seedless cone algorithms [26]. This class of algorithms is however computer-intensive, since the algorithm treats all clusters as possible jet initiators. Another solution is to include in the list of initiators the midpoints between the seeds.

The second disadvantage of cone algorithms is the occurrence of overlapping jets. This problem can be solved by introducing the following policy [26]: jets which share a fraction f of their total energy are merged --- typically f=50% --- while for lower fractions the shared clusters are split between the two jets, according to the nearest distance.

KT algorithms

In the KT scheme, the algorithm starts with a list of preclusters. This is a list of clusters which have been preliminarily merged, for two reasons: to reduce the number of input objects to the algorithm, hence reducing the computation time, and to reduce detector-dependent effects (for example, by merging clusters across calorimeter cracks and uninstrumented regions).

Preclustering can be achieved in several ways: at CDF, cells from the hadronic and electromagnetic calorimeters are combined only if the pT of the resulting precluster is larger than 100 MeV; at DØ, preclusters are formed by summing cells until the precluster pT is positive(1) and larger than 200 MeV.

What about ATLAS?

For each precluster combination i,j from the list, the KT algorithm computes [26]:

d_{ij} = \min(p_{T,i}^2, p_{T,j}^2)\,\frac{\Delta R_{ij}^2}{D^2}
       = \min(p_{T,i}^2, p_{T,j}^2)\,\frac{(y_i - y_j)^2 + (\phi_i - \phi_j)^2}{D^2}

where y_i and φ_i are the rapidity and the azimuthal angle of cluster i. The parameter D is a cutoff parameter that regulates the maximum allowed distance for two preclusters to be merged. For D ≈ 1 and ΔR_ij² ≪ 1, d_ij corresponds to the (squared) relative transverse momentum k⊥ of the two preclusters i,j.

The algorithm computes the minimum among all the d_ij and the squared momenta of the preclusters p²_{T,i}. If the minimum is a d_ij, the two corresponding preclusters are removed from the list and replaced with a merged cluster. If the minimum is a p²_{T,i}, the precluster i is identified as a jet and removed from the list. The algorithm proceeds until either the precluster list is empty or the minimum rises above a threshold value d_cut.
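The clustering sequence just described can be sketched as follows; the pT-weighted recombination scheme and the choice to run inclusively until the precluster list is empty (i.e. without a d_cut) are simplifying assumptions made only for illustration.

    import math

    def kt_cluster(preclusters, D=1.0):
        """Inclusive k_T clustering on a list of (pt, y, phi) preclusters.

        At each step the minimum of all d_ij and d_i = pt_i^2 is found:
        a d_ij merges preclusters i and j (pt-weighted recombination,
        chosen here for simplicity), a d_i promotes precluster i to a jet.
        """
        objs = [list(p) for p in preclusters]
        jets = []
        while objs:
            best, merge = min((o[0] ** 2, i) for i, o in enumerate(objs)), None
            for i in range(len(objs)):
                for j in range(i + 1, len(objs)):
                    dphi = (objs[i][2] - objs[j][2] + math.pi) % (2 * math.pi) - math.pi
                    dr2 = (objs[i][1] - objs[j][1]) ** 2 + dphi ** 2
                    dij = min(objs[i][0], objs[j][0]) ** 2 * dr2 / D ** 2
                    if dij < best[0]:
                        best, merge = (dij, i), j
            i = best[1]
            if merge is None:                  # minimum is a d_i: promote to jet
                jets.append(tuple(objs.pop(i)))
            else:                              # minimum is a d_ij: merge i and j
                pi, pj = objs[i], objs[merge]
                pt = pi[0] + pj[0]
                y = (pi[0] * pi[1] + pj[0] * pj[1]) / pt
                phi = (pi[0] * pi[2] + pj[0] * pj[2]) / pt
                objs[merge] = [pt, y, phi]
                objs.pop(i)
        return jets

    # Two nearby preclusters are merged into one jet, the isolated one is
    # promoted to a jet on its own.
    print(kt_cluster([(40.0, 0.0, 1.0), (10.0, 0.1, 1.1), (25.0, -1.5, -2.0)]))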
1.7.3  Jet energy calibration

The calorimeters need to cover the maximum possible η range and have the largest feasible absorption length, to avoid particles escaping detection in non-instrumented areas or "punching through" the detector; good detector hermeticity allows for the accurate measurement of imbalances in the azimuthal distribution of energy, giving an estimate of the missing transverse energy E_T^miss carried by non-interacting neutral particles, such as the neutrino or its supersymmetric counterpart. Calorimetry is very important for muon detection too, since it can complement the tracking of the Muon Spectrometer with a measurement of the momentum lost by the muons while crossing the calorimeters.

All of the ATLAS calorimeters are segmented in cells over the R, φ, η coordinates: each cell is read out independently of the others. The segmentation makes it possible to study the characteristics of the particle showers generated by the primary particle entering the detector; the position of a shower gives an indication of the direction of the primary particle, while the spatial properties of the shower permit the identification of the primary particle: electrons and photons typically produce shorter and narrower showers, hadron jets produce broader and longer showers, while muons deposit only a modest amount of their initial energy.

The calorimetry system of the ATLAS experiment is divided into an electromagnetic and a hadronic section (see Figure 2.7). The electromagnetic calorimeter is best suited to contain the particle showers created by electrons and photons, while the hadronic calorimeter deals with single hadrons --- such as pions, kaons, etc. --- or hadronic jets.

The energy resolution of all calorimetry systems is expressed by the following formula:

\frac{\sigma}{E} = \frac{a}{\sqrt{E}} \oplus \frac{b}{E} \oplus c .

The term a is due to the stochastic fluctuations in the number of particles in the shower created by the primary particle and to fluctuations in the amount of energy that cannot be measured by the calorimeter. This "invisible" energy includes energy absorbed in the passive material of sampling calorimeters and energy dissipated by physical processes that are difficult to detect in the calorimeter (such as neutron production, nuclear fission or nuclear excitation); the effect of the term a decreases with increasing energy. The term b is the noise term and includes both electronic and detector noise and pile-up events; these are soft (pT < 500 MeV) scattering events which accompany the hard collision --- around 20 pile-up events per bunch crossing are expected at design luminosity --- and deposit a low amount of energy in the calorimeter.
The term c is the constant systematic error, which becomes predominant at high energies, and is composed of several contributions:

- the difference in the electromagnetic/hadronic response of non-compensating calorimeters;
- inhomogeneities in the response of the calorimeter;
- miscalibration of the conversion factor between the charge signal measured by the calorimeter and the energy of the primary particle;
- environmental effects: temperature gradients across the calorimeter, aging of the detector elements due to irradiation;
- leakage of the particle shower through the detector;
- degradation of the energy resolution by loss of energy of the primary particle outside the calorimeter (cabling, cooling pipes, support structures, upstream detectors).

While the stochastic term and the noise term can be modelled by simulation, the systematic term can be reduced in magnitude only by a better understanding of the real detector. The aim of test beam studies of the calorimetry systems is to evaluate the impact of all three terms on the calorimeter performance, and to obtain a better energy resolution.

1.7.4  e/γ identification

The leptonic decay of the W in single top events generates a high energy lepton providing a clear signature for the process. However, there is a chance that electrons may be mistaken for photons and vice versa. A high-pT electron may lose a fair amount of energy via bremsstrahlung, making its track reconstruction impossible; an energy deposit in the calorimeter combined with no visible charged track would result in the electron being mistakenly identified as a photon. On the other hand, a photon may convert early into an electron/positron pair, and one of the two tracks may not be detected, so that the surviving track is wrongly identified as a prompt electron instead of a conversion product.

A technique for the correct reconstruction of electrons and photons was explored in the ATLAS TDR [20]: for each cluster in the Electromagnetic Calorimeter, tracks are searched for in a cone of radius ΔR < 0.1 around the direction of the cluster. The xKalman track reconstruction algorithm is used to identify tracks; this algorithm treats bremsstrahlung emission as a soft continuous correction, so it cannot cope with hard photon emissions which cause a kink in the track. If tracks are found in the search cone, they are passed to the xConver algorithm, which scans for opposite-charge track pairs that may come from a photon conversion. If no conversion is found, the cluster is identified as an electron. If xKalman does not find any track, a second algorithm, PixlRec, which can cope with hard bremsstrahlung, is invoked; the tracks found by PixlRec are again passed to xConver. If no tracks are found by either xKalman or PixlRec, or if xConver flags a track as the result of a conversion, the cluster is identified as a photon. The electron efficiency of this sequence of algorithms is estimated to be 99.8% with a rejection factor against mistagged photons of 18. The photon efficiency is 94.4% with a rejection factor against electrons of about 500 [20].
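The decision logic of this sequence can be summarised in a short sketch; the three callables stand in for xKalman, PixlRec and xConver and are assumptions of this illustration, not the actual ATLAS tools.

    def classify_em_cluster(cluster, xkalman_tracks, pixlrec_tracks, is_conversion):
        """Sketch of the e/gamma decision logic described above.

        xkalman_tracks(cluster) : tracks found by a bremsstrahlung-soft fitter
                                  in a cone of dR < 0.1 around the cluster
        pixlrec_tracks(cluster) : tracks found by a fitter tolerant to hard
                                  bremsstrahlung (tried only if the first fails)
        is_conversion(tracks)   : True if an opposite-charge pair compatible
                                  with a photon conversion is found
        (All three callables are placeholders for the real algorithms.)
        """
        tracks = xkalman_tracks(cluster)
        if not tracks:
            tracks = pixlrec_tracks(cluster)
        if not tracks:
            return "photon"              # no track at all
        if is_conversion(tracks):
            return "photon"              # track pair from a conversion
        return "electron"                # matched track, no conversion flag

    # Toy usage: a cluster with one clean matched track is called an electron.
    print(classify_em_cluster(
        cluster=None,
        xkalman_tracks=lambda c: ["track"],
        pixlrec_tracks=lambda c: [],
        is_conversion=lambda ts: False,
    ))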
1.7.5  e/π identification

A low energy pion can create a shower in the electromagnetic calorimeter while leaving no signal in the hadronic calorimeter. If such a shower is associated by mistake with a charged track in the silicon trackers, the pion can be misidentified as an electron. The detection of Transition Radiation can prevent such an occurrence. Transition Radiation is emitted when a particle traverses a medium with a varying dielectric constant. The variation in the constant creates an oscillating dipole that emits X-rays with energies varying from 2 to 20 keV. Detecting X-rays in conjunction with a charged track makes particle identification possible: in order to emit transition radiation, the impinging particle needs to travel in the ultra-relativistic regime, with γ ≳ 1000 [13] --- that is, 0.5 GeV for electrons and 140 GeV for pions. Thus, the presence of an X-ray signal along the particle track is more likely to indicate an electron track than a pion track. It should be pointed out that pions can generate high-energy signals through δ-rays, but the choice of a threshold level of 5 keV --- above the expected energy deposition of a δ-ray --- minimises pion misidentification [40].

1.7.6  Muon momentum measurement

The measurement of the momentum of a charged particle in a magnetic field is based upon the measurement of the deflection of the particle track by means of the Lorentz force: a charged particle with momentum p in a magnetic field B travels along a helical path; projecting the helix onto a plane normal to B we obtain an arc whose radius r is

r = \frac{|p|}{k\,|B|}, \qquad k = 0.3\ \mathrm{GeV\,m^{-1}\,T^{-1}} .

Measuring the chord L of the arc and the sagitta s, we have

L = 2 r \sin\theta \simeq 2 r \theta , \qquad
s = r (1 - \cos\theta) \simeq \frac{1}{2} r \theta^2 = \frac{L^2}{8 r}

where θ is half the angle subtended by the arc at the center of the circle. From these equations we obtain the relation between the particle momentum and the arc:

p = 0.3\,B\,r \simeq \frac{0.3\,B L^2}{8 s} .

Thus, by measuring the chord and the sagitta of the arc travelled by a charged particle in a known magnetic field, we can determine the particle's momentum.

The ATLAS Muon Spectrometer (see Section 2.4) is located in the outermost region of the detector. The muon momentum is measured using two different techniques:

- in the barrel region (|η| < 2.5), three concentric layers of precision drift chambers measure the track of crossing muons at three points in space. The chambers are in a 4 T toroidal magnetic field. The sagitta of the bending track is calculated by measuring the position of the track point in the middle layer with respect to a straight line connecting the track points in the inner and outer layers;
- at high rapidities, three layers of precision chambers are also used, but only the first layer is in a magnetic field; thus the particle momentum is evaluated using the "point and angle" technique, which consists of measuring the deflection angle Δα between the track segment reconstructed in the first layer and the track segment reconstructed in the outer layers:

p = \frac{0.3\,B L}{\Delta\alpha}

where L denotes here the length of the path travelled by the particle inside the magnetic field (a numerical check of both momentum formulas is sketched below).
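The two momentum formulas are easy to check numerically; the following sketch evaluates them for invented values of field, chord, sagitta and deflection angle (the numbers are chosen only to make the arithmetic visible and are not ATLAS parameters).

    def p_from_sagitta(B, L, s):
        """Momentum [GeV] from chord L [m] and sagitta s [m] in a field B [T]."""
        return 0.3 * B * L ** 2 / (8.0 * s)

    def p_from_angle(B, L, dalpha):
        """Momentum [GeV] from the deflection angle dalpha [rad] over a path L [m]."""
        return 0.3 * B * L / dalpha

    # A 100 GeV track crossing a 5 m chord in a 1 T field bends with a
    # sagitta of roughly 9.4 mm:
    B, L, p = 1.0, 5.0, 100.0
    s = 0.3 * B * L ** 2 / (8.0 * p)
    print(s * 1e3)                                    # ~9.4 (mm)
    print(p_from_sagitta(B, L, s))                    # recovers ~100 GeV
    print(p_from_angle(B=0.8, L=1.5, dalpha=0.0036))  # ~100 GeV, 'point and angle'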
The resolution of the momentum measurement in the Muon Spectrometer is influenced by two independent factors:

- a term depending on the spatial measurement of the track points,

\left( \frac{\Delta p}{p} \right)_{sag} \propto \frac{p}{B L^2}\,\sigma

where σ is the intrinsic resolution of the precision chambers; this term is dominant for high momentum muons (pT > 300 GeV);

- a term due to the multiple scattering of muons,

\left( \frac{\Delta p}{p} \right)_{MS} \propto \frac{1}{B \sqrt{L X_0}} .

This term is dominant for muons with momenta 30 < pT < 300 GeV.
The total momentum resolution is given by the quadrature sum of the above two terms:

\left( \frac{\Delta p}{p} \right)^2_{tot} =
\left( \frac{\Delta p}{p} \right)^2_{sag} +
\left( \frac{\Delta p}{p} \right)^2_{MS} .

The resolution of the Muon Spectrometer was estimated at the test-beam in 2004. The "point and angle" technique was used, due to the lack of a magnetic field large enough to cover three layers of chambers. The results of the tests are summarised in Section 2.4.1.

(1) At the DØ calorimeter, the energy deposition from preceding bunch crossings may cause a slightly negative voltage signal in the calorimeter cells.


\sigma_{pp \to t\bar{t}}(s, m_t) = \sum_{i,j} \int dx_i\, dx_j\; f_i(x_i, \mu_f^2)\, f_j(x_j, \mu_f^2)\; \hat{\sigma}_{ij \to t\bar{t}}\big(\hat{s}, m_t, \alpha_s(\mu_r^2)\big)

This equation descends from the following assumption: the QCD scattering process can be expressed by independent --- factorised --- terms. Consider the scattering of two protons: they are complex objects, made of several components called partons. The fractional momentum x carried by the partons inside the protons is described by probability densities called Parton Distribution Functions --- PDF, indicated by f(x, μ_f²) in the equation. However, when we consider the scattering process, we assume that the scattering partons are independent of the protons that contained them; we have then factorised the matrix element of the scattering of the two partons from the PDFs of the protons that contained them.
By doing so, we choose an energy scale μ_f, the factorisation scale, that separates the description of the parton as a statistical object from its description as a pointlike particle in the scattering process. In the factorisation scheme, the partonic cross section for tt production σ̂ depends only on the squared partonic center of mass energy ŝ = x_i x_j s, on the top mass m_t and on the running strong coupling constant α_s(μ_r²). The coupling constant is evaluated at the renormalisation scale μ_r, which sets the energy limit above which the hard scattering is assumed to be independent of hadronisation effects.

Although the cross-section should be independent of the factorisation and renormalisation scales, the calculation of the scattering matrix element up to a finite order introduces an unphysical scale dependence. At Leading Order (LO) the tt cross section is usually evaluated with μ_f = μ_r = m_t, and has an uncertainty of about 50%. The scale dependence of the cross section can be reduced by performing Next-to-Leading Order (NLO) calculations of the same cross section: the expected cross-section at the LHC energy scale increases by 30%, and the factorisation scale dependence reduces to 12% [7].

NLO calculations, however, are still affected by the problem of resummations: truncating the calculation of the cross section at some fixed order n of α_s gives reliable results only if the physics processes included in the calculation happen at roughly the same energy scale. When two or more very different energy scales Q, Q_1 are involved in the calculation, the effect of logarithmic terms of the type (α_s ln(Q/Q_1))^{n+1} has to be included in the computation [7]. The inclusion of these logarithms in the cross-section is called resummation.

There are several classes of logarithms that need to be resummed to calculate the cross-sections of heavy quark production processes:

- small-x logarithms: these logarithms appear in the cross-section calculations when the center of mass energy s of the colliding partons is several orders of magnitude larger than the energy scale Q of the hard scattering; the extrapolation of the PDFs between the two energy scales results in large logarithms ln(s/Q);
- bremsstrahlung logarithms: these are connected to the emission of soft collinear gluons by the scattered particles;
- threshold logarithms of the type ln(1-x): these appear when the final state particles carry a large fraction of the center of mass energy.
These logarithms have a sizeable effect for tt production at the LHC: this process obtains its main contribution from gluon-gluon fusion (see Figure 1.3), and the gluon PDFs reach large values at small x, such as at the tt threshold,

x = \frac{2 m_t}{\sqrt{s}} \simeq 0.025 ;

- transverse momentum logarithms, which occur in the distribution of the transverse momentum of high-mass systems that are produced with a vanishing pT in the LO process.

Resummation of the logarithms is performed by introducing a space conjugate to the phase space and transforming the cross-section equation into the conjugate space. By a proper choice of the conjugate space, the logarithms of the transformed cross-section can be summed into an exponential form factor. Applying the inverse transformation to this form factor yields the correction to the fixed-order cross section.

Resummations performed on the tt cross-section [7] show that Next-to-Leading Logarithm (NLL) corrections applied to the NLO diagrams further reduce the factorisation scale dependence by 6% (see Table 1.1). It is important to note that resummations do not only affect the absolute value of the cross-section, but the kinematical properties of the process as well. For example, transverse momentum logarithms are associated with the emission of soft gluons in the initial state; a comparison of the pT spectrum of low-pT tt pairs between NLO predictions and Montecarlo shower algorithms --- which reproduce faithfully soft and collinear gluon radiation --- can point out whether resummation is needed or not.

Factorisation scale (μf=μr)    NLO    NLO+NLL
mt/2                           890    883
mt                             796    825
2mt                            705    782

Table 1.1: Resummation correction to the total tt cross-section (pb) and residual factorisation scale dependence. Numbers taken from [7].

Top pair production at hadron colliders proceeds via the QCD processes qq → tt and gg → tt (see Figure 1.3).
The two processes have a different relative importance at the Tevatron and at the LHC: when we consider tt production at threshold, the colliding partons need a minimum fractional momentum x = 2m_t/√s in order to produce a tt pair. Substituting in this expression the center of mass energies of the two colliders, one obtains x ≈ 0.2 for the Tevatron and x ≈ 0.025 for the LHC; collisions at the LHC thus occur in a region where the colliding partons carry a small fraction of the momentum of the incoming particles. Small-x regions of the Parton Distribution Functions (see below) are mainly populated by gluons, hence at the LHC tt production occurs mainly via gluon-gluon fusion, while at the Tevatron quark/anti-quark annihilation is the most important process.
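The quoted values of x follow directly from x = 2m_t/√s; a short numerical check, assuming m_t ≈ 175 GeV and center of mass energies of 1.96 TeV (Tevatron Run 2) and 14 TeV (LHC), gives:

    m_t = 175.0          # GeV, approximate top mass used for the estimate
    for name, sqrt_s in [("Tevatron (1.96 TeV)", 1960.0), ("LHC (14 TeV)", 14000.0)]:
        x = 2.0 * m_t / sqrt_s
        print(f"{name}: x ~ {x:.3f}")
    # Tevatron (1.96 TeV): x ~ 0.179   (the x ~ 0.2 quoted above)
    # LHC (14 TeV):        x ~ 0.025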
Figure 1.3: Leading order Feynman diagrams for tt production via the strong interaction. Diagram (a), quark-antiquark annihilation, is dominant at the Tevatron, while diagrams (b) and (c), gluon-gluon fusion, give the largest cross-section at the LHC.

1.3  Single top production

Single top production probes the weak coupling of the top quark with the down-type quarks (d, s, b); at LHC energies the cross section for single top production is about one third of that for tt pair production, thus providing the opportunity to obtain adequate statistics for precision measurements in the electroweak sector, which cover the following topics:

- the cross-section of single top production processes is proportional to the square of the CKM element Vtb; a direct measurement of this parameter has not been performed yet, and violations of the unitarity of the 3×3 CKM matrix may point to the existence of a fourth quark generation;
- the lower particle multiplicities in the final state of single top processes reduce combinatorial effects in the reconstruction of the top quark, giving a precision mass measurement complementary to the one obtained from tt processes;
- single top processes constitute a background to other processes of interest, such as tt production or Higgs production;
- single top quarks are produced with almost 100% spin polarisation; by measuring the spin polarisation of the top decay products, the V-A coupling of the Wtb vertex can be evaluated.

There are three dominant single top production mechanisms: the s-channel, the t-channel and the associated production, illustrated in Figure 1.4.

At the Tevatron the s-channel process gives a sizeable contribution, while associated production is negligible; at the LHC the situation is reversed: because of the large gluon density at small x, both associated production and gluon-boson fusion have a larger cross-section than the s-channel process, with gluon-boson fusion having the largest cross-section of all three processes (see Table 1.2).

Process                 Tevatron Run 1   Tevatron Run 2   LHC (t)      LHC (t̄)
σ_s-chann^NLO (pb)      0.380±0.002      0.447±0.002      6.55±0.03    4.07±0.02
σ_t-chann^NLO (pb)      0.702±0.003      0.959±0.002      152.6±0.6    90.0±0.5
σ_assoc.^LL (pb)        -                0.093±0.024      31 +8/-2     31 +8/-2

Table 1.2: Single top quark production cross sections --- table taken from [10].

Figure 1.4: Tree-level Feynman diagrams for single top production processes: (a) s-channel, (b,c) t-channel, (d,e) associated production with a W.

1.3.1  Single top production in the s-channel

The s-channel process (Figure 1.4.a) proceeds via a virtual time-like W boson that decays into a tb pair. This process probes the kinematic region q² ≥ (m_t + m_b)². The cross-section for this channel at the LHC is much smaller than for the t-channel process; however, it is known to a better precision, since the initial state involves quarks and anti-quarks, whose PDFs have been accurately measured.
Moreover, the quark luminosity can be constrained by measuring the similar Drell-Yan process qq → W* → ℓν [7].

Calculations of the NLO cross-section have been performed, which show a dependence on the factorisation and renormalisation scales of about 2%; resummation effects add another 3% to the uncertainty, while the Yukawa corrections from loop diagrams involving the Higgs field are negligible. It has been shown, however, that an uncertainty of ±5 GeV in the measurement of m_t results in an uncertainty of 10% in the cross section. Overall, taking into account the predicted statistical errors and the theoretical uncertainties, the measurement of the s-channel cross-section is the most favourable method for evaluating the CKM matrix element Vtb.

1.3.2  Gluon-boson fusion

The t-channel process is also known as gluon-boson fusion, since in this process a b-quark from a gluon splitting interacts with a space-like W to produce the top quark. Gluon-boson fusion is the channel with the largest cross-section for single top production at the LHC, about 23 times larger than the cross-section of the s-channel. At Next-to-Leading Order the process is composed of two diagrams, shown in Figure 1.4(b,c). Both diagrams depict a b-quark interacting with a W boson --- emitted by the colliding parton --- to produce a top quark; diagram (b) takes the b-quark from the quark sea inside the proton, while diagram (c) is a NLO correction to diagram (b) and is relevant if we consider instead the b-quark in the initial state as the product of a splitting gluon, where the resulting bb pair has a non-vanishing transverse momentum. When the bb pair is collinear with the emitting gluon, diagram (c) becomes a non-perturbative process that can be included in the b-quark PDF; the NLO corrections in this kinematical region have to be subtracted from the computation to avoid double counting of this diagram. The two diagrams (b) and (c) have the same experimental signature --- a forward scattered light quark, a W and a b-quark --- since in 75% of the events the additional b-quark from diagram (c) has pT < 20 GeV and is thus hardly observable [10].

The NLO cross-section for the t-channel has a worse scale dependence than the s-channel, with an uncertainty of about 5%. The top mass uncertainty contributes about 3% when the top mass is varied by ±5 GeV. Yukawa corrections are small (of order 1%) [10].

1.3.3  Associated production

In this production channel the single top quark is created together with a real W boson. Two Feynman diagrams --- depicted in Figure 1.4(d,e) --- contribute to this channel. However, the t-channel diagram (e) gives a smaller contribution, since this diagram describes the splitting of a gluon into a tt pair and is mass-suppressed; thus the initial state is affected by the low gluon density at high-x values.
The s-channel diagram (d) dominates the associated production; the 1/<I>s</I> scaling of this process, combined with the small <I>b</I>-quark density, results in a negligible cross-section at the Tevatron, while at the LHC it contributes about 20% of the total single top production.<BR>
<BR>
A subset of NLO diagrams has been computed for this production channel. Gluons in the initial state splitting into a collinear <I>bb̄</I> pair have been included in the <I>b</I>-quark PDF, similarly to the NLO corrections to the t-channel. It must be remarked that one of the corrections corresponds to the strong production process <I>gg</I> → <I>tt̄</I>, followed by the top quark decay. This diagram represents a background for the associated production channel, and should be subtracted from the cross-section computation [7]. The cross-section has a strong dependence on both the PDFs and the renormalisation scale, and the total uncertainty is of the order of 30% (see Table&nbsp;1.2).<BR>
<BR>
<H2>1.4&nbsp;&nbsp;Top quark decay</H2>
In the Standard Model, the top quark predominantly decays into a <I>b</I> quark and a <I>W</I>, with a branching ratio of 0.998. ALEPH and OPAL conducted searches for Flavour Changing Neutral Current (<B>FCNC</B>) decays, which resulted in upper limits for <I>t</I> → γ<I>q</I> and <I>t</I> → <I>Zq</I> of 0.17 and 0.137 respectively [13]. Other SM-allowed decays into down-type quarks are very difficult to disentangle from the QCD background, since, unlike the dominant decay, they do not provide a b-tagged jet. Non-SM decays, however, may provide suitable experimental signatures.<BR>
<BR>
An extension of the SM Higgs sector could induce new channels for the top decay. In the so-called two-Higgs-doublet models (2HDM), the Higgs sector is composed of two neutral scalars (<I>h</I>, <I>H</I>), a neutral pseudo-scalar (<I>A</I>) and two charged scalars (<I>H</I><SUP>±</SUP>) [10]. In this hypothesis, the top quark could decay into a charged Higgs: <I>t</I> → <I>bH</I><SUP>+</SUP>. Both CDF and DØ have performed indirect searches in Run&nbsp;I data. No evidence has been found, but searches will continue at Run&nbsp;II. A direct measurement of this channel may be performed by searching for the signature <I>H</I><SUP>±</SUP> → τν, while a heavier charged Higgs decaying to quarks will suffer from the QCD jet background.<BR>
<BR>
The 2HDM usually postulates that the charged Higgs couples preferentially to third-generation quarks because of their large mass. When this assumption is relaxed, new decay channels of the top quark involving Flavour Changing Neutral Currents may emerge: <I>t</I> → <I>cV</I><SUB>i</SUB><SUP>0</SUP><I>V</I><SUB>j</SUB><SUP>0</SUP> at tree level and <I>t</I> → <I>cV</I><SUP>0</SUP> at one loop, where <I>V</I><SUP>0</SUP> indicates either γ, <I>Z</I> or <I>g</I>. However, the signatures of these channels are very hard to disentangle from the QCD background.<BR>
<BR>
<H2>1.5&nbsp;&nbsp;Top detection</H2>
Top quarks in the SM decay almost exclusively into <I>Wb</I>.
Because of fermion universality in electroweak interactions, the <I>W</I> boson decays 1/3 of the time into a lepton/neutrino pair and 2/3 of the time into a <I>qq̄</I> pair. Since two real <I>W</I> bosons are present in <I>tt̄</I> events, the signatures of the events are classified according to the decay channel of the <I>W</I> bosons:
<DL COMPACT=compact>
<DT><B>all jets channel</B><DD> Here both <I>W</I>s decay into a quark/anti-quark pair. The event has at least six high-<I>p<SUB>T</SUB></I> jets, two of which have to be b-tagged. Despite having the highest branching ratio (44%), this decay channel suffers heavily from the QCD background and from ambiguities in the assignment of jets to the originating <I>W</I>s.
<DT><B>lepton+jets channel</B><DD> In this decay channel one <I>W</I> decays into a lepton-neutrino pair, the other <I>W</I> into a quark/anti-quark pair. One isolated lepton, four jets (two of them b-tagged) and missing energy characterise the event. The branching ratio is about 30%.
<DT><B>di-lepton channel</B><DD> In this decay channel both <I>W</I>s decay into a lepton-neutrino pair. For practical purposes only <I>e</I>, µ are considered, since τ decays are difficult to distinguish from the QCD background. The events have two high-<I>p<SUB>T</SUB></I> leptons, two jets (at least one of which is b-tagged) and missing energy due to the neutrinos. This signature is quite clean, being affected mainly by electroweak background. The only drawback of this decay channel is its low branching ratio (5%).
</DL>
<H2>1.6&nbsp;&nbsp;Event selection and backgrounds</H2>
Top quark pairs are produced near threshold and have low kinetic energy, thus presenting little or no boost in the beam direction. Since the decay products of the top quark have a much smaller mass than the top quark itself, they typically carry large transverse momentum and cross the central region of the detector (|η|&lt;2.5); the low boost of the decaying top accounts for the good angular separation of the decay products. If the di-lepton or lepton+jets channels are considered, a large missing transverse energy <I>E</I><SUB>T</SUB><SUP>miss</SUP> is part of the signature. Experimental cuts on <I>p<SUB>T</SUB></I> and <I>E</I><SUB>T</SUB><SUP>miss</SUP> alone are sufficient to strongly reduce the QCD background, which has an exponentially falling <I>E<SUB>T</SUB></I> spectrum and small <I>E</I><SUB>T</SUB><SUP>miss</SUP> [10]. Tagging one or more b-jets (either by secondary vertex or by soft muon tagging) further reduces the QCD background.<BR>
<BR>
In addition to the above cuts, further selections can be performed according to the topological features of top production and its decay channels: for example, semi- and fully leptonic decays present one or two high-<I>p<SUB>T</SUB></I> isolated leptons; topological variables such as <I>H<SUB>T</SUB></I> (the scalar sum of the <I>E<SUB>T</SUB></I> of all observed objects), sphericity (<I>S</I>) and aplanarity (<I>A</I>) can be employed to discriminate against the QCD background.
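This kind of topological selection can be illustrated with a short sketch. The event structure below (lists of jets and leptons with <I>p<SUB>T</SUB></I>, isolation and b-tag information) is purely illustrative and does not correspond to any ATLAS data format; the cut values are placeholders in the same range as those quoted in the following sections.
<PRE>
# Illustrative sketch of a lepton+jets style topological selection.
# The event dictionary and all thresholds are assumptions for the example.

def h_t(objects):
    """Scalar sum of the transverse momenta of all observed objects."""
    return sum(o["pt"] for o in objects)

def passes_lepton_plus_jets(event,
                            lep_pt_min=20.0,   # GeV, isolated lepton
                            met_min=20.0,      # GeV, missing transverse energy
                            jet_pt_min=15.0,   # GeV
                            n_jets_min=4,
                            n_btags_min=1,
                            ht_min=175.0):     # GeV, scalar sum of transverse energies
    leptons = [l for l in event["leptons"]
               if l["pt"] > lep_pt_min and l["isolated"]]
    jets = [j for j in event["jets"] if j["pt"] > jet_pt_min]
    btags = [j for j in jets if j["btag"]]
    return (len(leptons) >= 1
            and event["met"] > met_min
            and len(jets) >= n_jets_min
            and len(btags) >= n_btags_min
            and h_t(event["jets"] + event["leptons"]) > ht_min)
</PRE>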
<H3>1.6.1&nbsp;&nbsp;Top mass determination in <I>tt̄</I> events</H3>
The selection cuts used at CDF in the lepton+jets sample require [23]:
<UL><LI>one isolated lepton with <I>p<SUB>T</SUB></I> &gt; 20&nbsp;GeV;
<LI>missing transverse energy <I>E</I><SUB>T</SUB><SUP>miss</SUP> &gt; 20&nbsp;GeV;
<LI>at least three jets with <I>p<SUB>T</SUB></I> &gt; 15&nbsp;GeV and |η| &lt; 2 and one jet with <I>p<SUB>T</SUB></I> &gt; 8&nbsp;GeV and |η| &lt; 2, with at least 1 <I>b</I>-tagged jet;<BR><BR> or<BR><BR>
<LI>at least four jets with <I>p<SUB>T</SUB></I> &gt; 15&nbsp;GeV and |η| &lt; 2 and no <I>b</I>-tagging requirement.
</UL>
The selected events are subjected to a kinematical fit, with the constraints <I>M<SUB>jj</SUB></I> = <I>M</I><SUB>ℓν</SUB> = <I>M<SUB>W</SUB></I> and <I>M<SUB>t</SUB></I> = <I>M<SUB>t̄</SUB></I>; of the possible 24 combinations (12 if <I>b</I>-tagging is included) the one with the lowest χ<SUP>2</SUP> is chosen; the reconstructed top masses are histogrammed and fitted with signal and background templates, where the signal templates vary according to the mass; the mass that provides the best likelihood in the fit determines the final result.<BR>
<BR>
Top mass measurements in the lepton+jets channel at DØ employ both a reconstruction technique similar to the one described above and a likelihood method. This method examines the kinematical features of each event and compares them with templates based on sample events generated at tree level by the simulation package VECBOS, both for <I>tt̄</I> production (signal) and for <I>W</I>+4<I>j</I> (background), convolved with a transfer function that models fragmentation and detector effects. The probability for each event to be a background or a signal event is then used to compute the likelihood for the top quark to have a given mass.<BR>
<BR>
Selection cuts at ATLAS require [21]:
<UL><LI>one isolated lepton with <I>p<SUB>T</SUB></I> &gt; 20&nbsp;GeV and |η| &lt; 2.5;
<LI>missing transverse energy <I>E</I><SUB>T</SUB><SUP>miss</SUP> &gt; 20&nbsp;GeV;
<LI>at least four jets with <I>p<SUB>T</SUB></I> &gt; 40&nbsp;GeV and |η| &lt; 2.5, of which at least 2 are <I>b</I>-tagged jets.
</UL>
Two of the non-tagged jets are used to reconstruct the <I>W</I>, with the constraint |<I>M<SUB>jj</SUB></I> - <I>M<SUB>W</SUB></I>| &lt; 20&nbsp;GeV; the reconstructed <I>W</I> is combined with one of the two <I>b</I>-jets to form the top quark. Of all the possible <I>jjb</I> combinations, either the one which gives the highest <I>p<SUB>T</SUB></I> to the reconstructed top, or the one with the largest angular separation between the <I>b</I>-jet and the other jets, is assumed to represent the top quark [21].
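A minimal sketch of this combinatorial <I>jjb</I> reconstruction, using the highest-<I>p<SUB>T</SUB></I> criterion, is given below; the four-momenta are plain (E, px, py, pz) tuples in GeV, and the helper functions are illustrative rather than part of any ATLAS software.
<PRE>
from itertools import combinations
import math

M_W = 80.4  # GeV, W mass used in the |M_jj - M_W| window

def add(p, q):
    """Sum of two four-momenta given as (E, px, py, pz) tuples."""
    return tuple(a + b for a, b in zip(p, q))

def mass(p):
    e, px, py, pz = p
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

def pt(p):
    return math.hypot(p[1], p[2])

def reconstruct_top(untagged_jets, b_jets, window=20.0):
    """Return the jjb top candidate with the highest pT, or None."""
    best = None
    for j1, j2 in combinations(untagged_jets, 2):
        w = add(j1, j2)
        if abs(mass(w) - M_W) > window:      # |M_jj - M_W| < 20 GeV
            continue
        for b in b_jets:
            top = add(w, b)
            if best is None or pt(top) > pt(best):
                best = top
    return best
</PRE>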
The efficiency of this reconstruction is estimated to be 5%, with a top mass resolution of 11.9&nbsp;GeV.<BR>
<BR>
<H3>1.6.2&nbsp;&nbsp;Top mass determination in single top events</H3>
Gluon-boson fusion and s-channel production were studied at the Tevatron during Run&nbsp;I, while associated production has a negligible cross-section at the Tevatron (see Table&nbsp;1.2).<BR>
<BR>
CDF performed an inclusive single top analysis, searching for single top in the <I>W</I>+<I>jets</I> sample and requiring the invariant mass of the lepton, <I>E</I><SUB>T</SUB><SUP>miss</SUP> and highest-<I>p<SUB>T</SUB></I> jet to lie between 140 and 210&nbsp;GeV. This was followed by a likelihood fit of the total transverse energy <I>H<SUB>T</SUB></I>. This technique gave an upper limit on the inclusive single-top cross section of 14&nbsp;pb. CDF also performed two separate searches for s- and t-channel production, which resulted in upper limits of 18 and 13&nbsp;pb respectively. DØ used neural networks, giving upper limits of 17&nbsp;pb for s-channel and 22&nbsp;pb for t-channel production.<BR>
<BR>
Since single top experimental signatures show a lower jet multiplicity compared to pair production, stringent cuts are required to isolate single top events from the QCD background; moreover, since each of the three single top production processes is a background for the other two, the cuts need to be tailored to each of the three channels.<BR>
<BR>
The ATLAS reconstruction technique entails a set of pre-selection cuts to reduce the QCD background; this set includes [21]:
<UL><LI>at least one isolated lepton with <I>p<SUB>T</SUB></I> &gt; 20&nbsp;GeV;
<LI>at least two jets with <I>p<SUB>T</SUB></I> &gt; 30&nbsp;GeV;
<LI>at least 1 <I>b</I>-tagged jet with <I>p<SUB>T</SUB></I> &gt; 50&nbsp;GeV.
</UL>
The net effect of these cuts is to select the leptonic decay products of the single top. On top of the pre-selection cuts, each channel adds its own set of selection cuts.<BR>
<BR>
<H4>s-channel selection cuts</H4>
<UL><LI>exactly two jets within |η| &lt; 2.5; this cut reduces the <I>tt̄</I> background, which has a higher jet multiplicity;
<LI>the two jets need to be <I>b</I>-tagged and have <I>p<SUB>T</SUB></I> &gt; 75&nbsp;GeV; this cut reduces both the <I>W</I>+jets and the gluon-boson fusion backgrounds, where the second <I>b</I>-jet is either missing or has low <I>p<SUB>T</SUB></I>;
<LI><I>H<SUB>T</SUB></I> &gt; 175&nbsp;GeV and total invariant mass greater than 200&nbsp;GeV; these cuts reduce the <I>Wjj</I> background, which tends to have smaller total transverse energy and smaller invariant mass;
<LI>reconstructed top mass between 150 and 200&nbsp;GeV.
</UL>
<BLOCKQUOTE>[figs/sign-schan.eps]<BR>
Figure 1.5: Experimental signature of single top production in the s-channel.</BLOCKQUOTE>
<H4>Gluon-boson fusion selection cuts</H4>
<UL><LI>exactly two jets with <I>p<SUB>T</SUB></I> &gt; 30&nbsp;GeV; this cut reduces the <I>tt̄</I> background;
<LI>one of the two jets with |η| &gt; 2.5 and <I>p<SUB>T</SUB></I> &gt; 50&nbsp;GeV; this cut selects the forward light quark, which is the trademark of the gluon-boson fusion process;
<LI>the other jet needs to be <I>b</I>-tagged and have <I>p<SUB>T</SUB></I> &gt; 50&nbsp;GeV; this cut reduces the <I>Wjj</I> background;
<LI><I>H<SUB>T</SUB></I> &gt; 200&nbsp;GeV and total invariant mass greater than 300&nbsp;GeV; these cuts reduce the <I>Wjj</I> background;
<LI>reconstructed top mass between 150 and 200&nbsp;GeV.
</UL>
<BLOCKQUOTE>[figs/sign-wgluon.eps]<BR>
Figure 1.6: Experimental signature of single top production in the gluon-boson fusion channel.</BLOCKQUOTE>
<H4>Associated production selection cuts</H4>
<UL><LI>exactly three jets with <I>p<SUB>T</SUB></I> &gt; 50&nbsp;GeV; this cut reduces the <I>tt̄</I> background;
<LI>one of the three jets needs to be <I>b</I>-tagged; this cut reduces the <I>Wjj</I> background, and further reduces the <I>tt̄</I> background;
<LI>total invariant mass less than 300&nbsp;GeV; this cut reduces the <I>tt̄</I> background;
<LI>the invariant mass of the two non-b jets between 65 and 95&nbsp;GeV; this cut enhances the probability that a real <I>W</I> boson is present in the final state.
</UL>
<BLOCKQUOTE>[figs/sign-wt.eps]<BR>
Figure 1.7: Experimental signature of single top production in the associated production channel.</BLOCKQUOTE>
<BLOCKQUOTE>
<TABLE BORDER=1 CELLSPACING=0 CELLPADDING=2>
<TR><TD>Efficiencies (%)</TD> <TD ALIGN=center><I>Wg</I></TD> <TD ALIGN=center><I>W</I><SUP>*</SUP></TD> <TD ALIGN=center><I>Wt</I></TD> <TD ALIGN=center><I>tt̄</I></TD> <TD ALIGN=center><I>Wjj</I></TD></TR>
<TR><TD>Pre-selection</TD> <TD ALIGN=center>20.0</TD> <TD ALIGN=center>27.0</TD> <TD ALIGN=center>25.5</TD> <TD ALIGN=center>44.4</TD> <TD ALIGN=center>0.667</TD></TR>
<TR><TD>Selection</TD> <TD ALIGN=center>1.64</TD> <TD ALIGN=center>1.67</TD> <TD ALIGN=center>1.27</TD> <TD ALIGN=center>&nbsp;</TD> <TD ALIGN=center>&nbsp;</TD></TR>
<TR><TD>Events/30&nbsp;fb<SUP>-1</SUP></TD> <TD ALIGN=center>26800 ± 1000</TD> <TD ALIGN=center>1106 ± 40</TD> <TD ALIGN=center>6828 ± 269</TD> <TD ALIGN=center>&nbsp;</TD> <TD ALIGN=center>&nbsp;</TD></TR>
</TABLE>
<BR>Table 1.3: Efficiencies of the selection cuts for gluon-boson fusion (<I>Wg</I>), s-channel production (<I>W</I><SUP>*</SUP>) and associated production (<I>Wt</I>) at ATLAS. The pre-selection efficiencies for the two most important background processes, <I>tt̄</I> and <I>Wjj</I>, are given for comparison. Table abridged from [21].
</BLOCKQUOTE>
<H2>1.7&nbsp;&nbsp;Performance requirements for top physics</H2>
Studying electroweak top production processes at hadron colliders involves the correct identification of the decay chain of the top quark, which may contain the following "ingredients":
<UL><LI>highly energetic isolated leptons;
<LI>highly energetic hadronic jets;
<LI>one or more <I>b</I>-tagged jets;
<LI>missing energy from the leptonic decay of the <I>W</I>.
</UL>
To each item on this list correspond precise requirements on the performance of the detector: a good detector performance results in a higher efficiency for the event analyses and in a lower systematic error on the measurements of the parameters involved in the physical process under study.<BR>
<BR>
These requirements include:
<UL><LI>correct identification of electrons, photons and pions;
<LI>a calibrated calorimeter for precise jet energy measurements;
<LI>efficient track reconstruction for high-<I>p<SUB>T</SUB></I> leptons;
<LI>tagging of jets originating from b-quarks;
<LI>detector coverage up to high pseudorapidity for accurate missing transverse energy measurements.
</UL>
In the past years, the design of detectors employed at colliders has been optimised in order to fulfil these requirements. The typical detector is composed of several concentric subdetectors, each performing a specific task, and all or part of the detector is immersed in one or more magnetic fields in order to evaluate particle momenta.<BR>
<BR>
The innermost layer of a detector is usually instrumented with silicon trackers. These detectors fulfil the task of tracking the path of charged particles close to the interaction point. The reconstructed tracks can be extrapolated back to the interaction point to evaluate the impact parameters or, if the extrapolation leads to a different point, reveal a secondary vertex. Secondary vertices are a signal for long-lived unstable particles, such as the hadrons containing a b-quark.<BR>
<BR>
The calorimetry system is located outside the silicon trackers. The antiquated term "calorimetry" reflects the purpose of this part of the detector: much like the Dewar bottles with which good old Mister&nbsp;Joule was entertaining himself during his honeymoon, today's calorimeters have to contain and measure the total energy of the interaction. In addition, modern calorimeters make it possible to measure individually the energy of each of the particles created in the hard interaction, and to measure their direction. There are two types of calorimeters: electromagnetic calorimeters and hadronic calorimeters. These two types exploit different physical phenomena to measure the energy of the incoming particles: the first type usually deals with electrons, photons and the occasional soft pion, while the second type deals with hadrons. A well-designed detector utilises both types of calorimeters.<BR>
<BR>
The calorimeter successfully contains all types of particles but two: neutrinos, which can be detected by measuring an imbalance of energy in the calorimeters, and muons. Muons deposit a minimal amount of energy in the calorimeters, thus an alternative method is needed for measuring their energy. For this reason, the typical detector is provided with a Muon Spectrometer, located on the outermost layer, outside the calorimeters. The spectrometer uses gaseous detectors, such as drift chambers, to track muons and measure their momentum.<BR>
<BR>
ATLAS is a general-purpose detector designed to be used at the LHC; thus it adheres to the detector design described above.
Apart from the performance requirements related to the physics program, the ATLAS detector is required to cope with the harsh environment created by the LHC accelerator: the detector must operate efficiently in all luminosity regimes, from an initial low luminosity period at 2×10<SUP>33</SUP>&nbsp;cm<SUP>-2</SUP>s<SUP>-1</SUP> up to the nominal LHC luminosity of 10<SUP>34</SUP>&nbsp;cm<SUP>-2</SUP>s<SUP>-1</SUP>. High luminosity poses a double threat: for each bunch crossing, about 20 proton-proton interactions, most of which are soft inelastic collisions (minimum bias events), constitute a background for the physics processes of interest. High luminosity also means high levels of radiation around the interaction points and in the forward regions of the detector: these detector elements need to be radiation tolerant in order to keep an acceptable level of performance during the years of operation. In the next sections I will outline the measurement techniques used at ATLAS and the performance they can provide, while the description of the detector and the evaluation of its performance in real-life tests will be laid out in Chapter&nbsp;2.<BR>
<BR>
<H3>1.7.1&nbsp;&nbsp;Vertex identification and b-tagging</H3>
The tagging of b-quarks is a very important tool in the study of top decay processes. Given the high branching ratio for a top to decay into <I>Wb</I>, the requirement of a b-tagged jet is one of the most important analysis cuts to reject backgrounds which have a high jet multiplicity but a low b-jet content (such as <I>W</I>+jets). The b-tagging needs to combine a high rejection power with a high efficiency, otherwise it would lower the overall efficiency of the analysis.<BR>
<BR>
The b-tagging algorithm in ATLAS is based on the long lifetime of the b-quark: b-hadrons decay, on average, about 470&nbsp;µm away from the primary interaction vertex; by measuring the tracks of the decay products it is possible to reconstruct the b-decay vertex and to tag the jet formed by the decay products as a b-jet.<BR>
<BR>
For each hadronic jet found in the calorimeter, all tracks measured in the Inner Detector (see Section&nbsp;2.2) with <I>p<SUB>T</SUB></I> &gt; 1&nbsp;GeV and inside a cone of radius ΔR &lt; 0.4 around the jet axis are assigned to the jet. Each track yields two parameters: the impact parameter <I>a</I><SUB>0</SUB> and the longitudinal impact parameter <I>z</I><SUB>0</SUB>. The impact parameter <I>a</I><SUB>0</SUB> is the distance between the extrapolated track and the primary interaction vertex in the transverse plane; this parameter is signed: the sign is positive if the extrapolated track intersects the beam axis between the primary vertex and the jet, while it is negative if the track intersects the beam axis behind the primary vertex, that is, if the primary vertex is located between the intersection point and the jet. The longitudinal impact parameter <I>z</I><SUB>0</SUB> is the <I>z</I> coordinate of the intersection point, and is signed in the same way as <I>a</I><SUB>0</SUB>.
Monte Carlo data has been generated to study the distribution of the impact parameters for <I>u</I>- and <I>b</I>-quark jets; both distributions show a peak at zero (no secondary vertex) and a small tail at negative values (given by particles wrongly assigned to the jet); the b-jet distributions for <I>a</I><SUB>0</SUB> and <I>z</I><SUB>0</SUB>, however, show a tail on the positive side, corresponding to a secondary vertex. The original algorithm, illustrated in the ATLAS Technical Design Report (<B>TDR</B>) [20], used only the impact parameter <I>a</I><SUB>0</SUB> (hence it is referred to as the 2D algorithm), while the current algorithm combines the two impact parameters (hence it is called the 3D algorithm).<BR>
<BR>
The <I>a</I><SUB>0</SUB> and <I>z</I><SUB>0</SUB> parameters of each track are used to compute a significance value:
<DIV ALIGN=center><I>S</I>(<I>a</I><SUB>0</SUB>, <I>z</I><SUB>0</SUB>) = <I>a</I><SUB>0</SUB>/σ(<I>a</I><SUB>0</SUB>) ⊕ <I>z</I><SUB>0</SUB>/σ(<I>z</I><SUB>0</SUB>)</DIV>
where σ(<I>a</I><SUB>0</SUB>) and σ(<I>z</I><SUB>0</SUB>) are the resolutions on the impact parameters and ⊕ denotes the sum in quadrature.<BR>
<BR>
The distribution of the significances for u-jets and b-jets is shown in Figure&nbsp;1.8.a. Each track is assigned the ratio of the significance likelihoods for the two flavours, <I>b</I>(<I>S</I>)/<I>u</I>(<I>S</I>), and the jet is assigned a weight given by the logarithmic sum of the ratios of the tracks matching the jet:
<DIV ALIGN=center><I>W<SUB>jet</SUB></I> = Σ<SUB>tracks</SUB> ln( <I>b</I>(<I>S</I>)/<I>u</I>(<I>S</I>) ).</DIV>
The weight is a likelihood measure for a given jet to originate from a b-quark; by applying a cut on this weight, optimised to reach 50% efficiency on b-tagging, it is possible to separate light jets from b-jets.
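As an illustration, the sketch below computes such a jet weight from the track impact parameters. The functions <CODE>b_pdf</CODE> and <CODE>u_pdf</CODE> stand in for the normalised significance distributions of b-jet and light-jet tracks (in practice taken from Monte Carlo histograms); they, the track format and the cut value are assumptions made for the example.
<PRE>
import math

# Minimal sketch of the likelihood-ratio jet weight defined above.
# b_pdf(s) and u_pdf(s) are placeholder callables returning the probability
# density of the significance s for b-jet and light-jet tracks respectively.

def significance(a0, sigma_a0, z0, sigma_z0):
    """Combined impact parameter significance S(a0, z0) (quadrature sum)."""
    return math.hypot(a0 / sigma_a0, z0 / sigma_z0)

def jet_weight(tracks, b_pdf, u_pdf):
    """Sum of log likelihood ratios over the tracks matched to the jet."""
    w = 0.0
    for a0, sigma_a0, z0, sigma_z0 in tracks:
        s = significance(a0, sigma_a0, z0, sigma_z0)
        w += math.log(b_pdf(s) / u_pdf(s))
    return w

def is_b_tagged(tracks, b_pdf, u_pdf, weight_cut=3.0):
    """Tag the jet if its weight exceeds a cut tuned to the desired efficiency."""
    return jet_weight(tracks, b_pdf, u_pdf) > weight_cut
</PRE>
The value of <CODE>weight_cut</CODE> is illustrative; in the study quoted below it would be chosen to give the desired b-tagging efficiency (50-60%).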
Though it is not possible to investigate the performance of the b-tagging algorithms in a test setup, it is possible to use Monte Carlo tools to study the detector effects. The likelihood distributions for light and b-jets obtained from the Monte Carlo simulation are shown in Figure&nbsp;1.8.b.
<BLOCKQUOTE>[figs/bsig.eps]<BR>
Figure 1.8: a) significance of the impact parameter <I>a</I><SUB>0</SUB> for u-jets and b-jets. b) jet weights for u-jets and b-jets [25].</BLOCKQUOTE>
In a recent study [25], the comparison of the performance of b-tagging in a realistic ATLAS environment against the TDR results includes:
<UL><LI>changes in the layout of the Pixel Detector (see Section&nbsp;2.2.1): the initial layout has only 2 pixel layers, and the intermediate pixel layer will be installed at a later date; the b-layer has been moved further away from the beam with respect to the TDR layout;
<LI>the pixel size in the η direction is increased from 300 to 400&nbsp;µm;
<LI>"ganged" pixels (see Section&nbsp;2.2.1) are present in the layout;
<LI>an increase of dead material in the Pixel Detector due to a redesign of the detector services;
<LI>staging of the wheels of the Transition Radiation Detector (see Section&nbsp;2.2.3);
<LI>simulation of detector inefficiencies: on top of the standard 3% inefficiency for Pixels, SCT strips and TRT straws, the effects of 1% and 2% dead pixel modules are added;
<LI>effects of misalignment between the Pixel Detector and the Semiconductor Tracker (see Section&nbsp;2.2.2);
<LI>addition of minimum bias pile-up events.
</UL>
The study used samples from <I>tt̄</I>, <I>tt̄H</I>, <I>WH</I> (<I>M<SUB>H</SUB></I>=120&nbsp;GeV) and <I>WH</I> (<I>M<SUB>H</SUB></I>=400&nbsp;GeV) to evaluate the efficiency of the b-tagging algorithm for b-jets over a wide range of pseudorapidity and <I>p<SUB>T</SUB></I>. The Inner Detector was simulated with GEANT3, while the jets were reconstructed using both ATLFAST and the full detector simulation; the differences were found to be marginal. The results of the study, summarised in Table&nbsp;1.4, are the following:
<UL><LI>the changes in the detector layout amount to a reduction in rejection power by a factor 0.5±0.2, mainly due to the increase of material in the Inner Detector;
<LI>the staging of the intermediate layer in the Pixel Detector amounts to a reduction by a factor 0.7±0.1;
<LI>pile-up events at low luminosity, realistic detector inefficiencies, and the misalignment between the Pixel Detector and the SCT during the detector commissioning stages amount to a factor 0.75±0.05;
<LI>changing the pixel size to 400&nbsp;µm in the b-layer amounts to a 10% decrease in rejection;
<LI>improved track fitting algorithms increase the rejection power by a factor 1.8; the improved algorithms perform well with the high track multiplicity coming from pile-up events at high luminosity: the rejection power is degraded by only 10% at high luminosity;
<LI>using the new 3D algorithm instead of the old 2D algorithm increases the rejection power by a factor 1.9; a factor 2.8 can be reached by an improved algorithm [25] which combines the 3D likelihood with other discriminating variables (such as the invariant mass of the tracks from the secondary vertex).
</UL>
Despite the decrease in rejection caused by the new layout, the improvements in the tracking and tagging algorithms can still provide a rejection factor of about 150 for a b-tagging efficiency ε<SUB>b</SUB>=60%, higher than the nominal TDR value of R=100 at ε<SUB>b</SUB>=50%. The b-tagging algorithm can realistically achieve R=100 at ε<SUB>b</SUB>=70% in the low luminosity regime, increasing the efficiency of all physics analyses based on the identification of b-jets [25].<BR>
<BR>
This study also produced a new parametrisation of the b-tagging performance, depending on <I>p<SUB>T</SUB></I> and η, to be used in conjunction with ATLFAST data.
<BLOCKQUOTE>
<TABLE BORDER=1 CELLSPACING=0 CELLPADDING=2>
<TR><TD>Algorithm</TD> <TD ALIGN=center>TDR</TD> <TD ALIGN=center COLSPAN=4>Initial layout</TD></TR>
<TR><TD>&nbsp;</TD> <TD ALIGN=center>perfect</TD> <TD ALIGN=center>perfect</TD> <TD ALIGN=center>+pile-up</TD> <TD ALIGN=center>+400&nbsp;µm</TD> <TD ALIGN=center>+Ineff 1/2%</TD></TR>
<TR><TD><B>2D</B> ε<SUB>b</SUB>=50%</TD> <TD ALIGN=center>300±10</TD> <TD ALIGN=center>204±6</TD> <TD ALIGN=center>203±5</TD> <TD ALIGN=center>200±5</TD> <TD ALIGN=center>156±4</TD></TR>
<TR><TD>2D ε<SUB>b</SUB>=60%</TD> <TD ALIGN=center>83±1</TD> <TD ALIGN=center>62±1</TD> <TD ALIGN=center>60±1</TD> <TD ALIGN=center>58±1</TD> <TD ALIGN=center>49±1</TD></TR>
<TR><TD><B>3D</B> ε<SUB>b</SUB>=50%</TD> <TD ALIGN=center>650±31</TD> <TD ALIGN=center>401±15</TD> <TD ALIGN=center>387±14</TD> <TD ALIGN=center>346±12</TD> <TD ALIGN=center>261±8</TD></TR>
<TR><TD>3D ε<SUB>b</SUB>=60%</TD> <TD ALIGN=center>151±3</TD> <TD ALIGN=center>112±2</TD> <TD ALIGN=center>109±2</TD> <TD ALIGN=center>97±2</TD> <TD ALIGN=center>79±1</TD></TR>
</TABLE>
<BR>Table 1.4: Rejection power for the 2D and 3D algorithms, both for the TDR and for the current detector layout. "Perfect" refers to the detector without inefficiencies, with no pile-up and with the original pixel size in the b-layer. Realistic effects are added from left to right, degrading the rejection power. Table taken from [25].
</BLOCKQUOTE>
<H3>1.7.2&nbsp;&nbsp;Jet reconstruction</H3>
Top production events always include two or more high energy hadronic jets; for a better comprehension of the underlying physics phenomena, it is necessary to understand the connection between the jets and the particles that generated them. However, the definition of a jet is not unique, as different reconstruction algorithms may produce signatures with incompatible characteristics (jet multiplicity, energy, etc.).
Thus, the choice of the jet algorithm introduces a systematic effect in the measurement.<BR>
<BR>
A jet algorithm should not only give a good estimate of the properties of the originating particles, but it should also be consistent both at the experimental and at the theoretical level, in order to facilitate the comparison between experimental results and theoretical or Monte Carlo predictions. The ideal jet reconstruction algorithm should have these characteristics [26]:
<DL COMPACT=compact><DT><B>Infrared safety</B><DD> the result of the jet finding algorithm is not affected by soft radiation;
<DT><B>Collinear safety</B><DD> the jet finding algorithm is not sensitive to the emission of collinear particles;
<DT><B>Invariance under boosts</B><DD> the algorithm result is independent of boosts along the beam axis;
<DT><B>Stability with luminosity</B><DD> the algorithm is not affected by the presence of minimum bias events and multiple hard scatterings;
<DT><B>Detector independence</B><DD> the algorithm performance is independent of the type of detector that provides the data, and the algorithm does not degrade the intrinsic resolution of the detector;
<DT><B>Ease of calibration</B><DD> the kinematical properties of the jet are well-defined and allow calibration.
</DL>
Jet reconstruction algorithms parse a list of clusters, which can be calorimeter towers in a real experiment or particle clusters in a Monte Carlo simulation, and try to merge neighbouring clusters into jet candidates. The properties of the merged clusters are processed, according to a "recombination scheme", to produce the kinematical variables of the jet candidate.<BR>
<BR>
There are two types of algorithms for merging clusters: cone algorithms and <I>K<SUB>T</SUB></I> algorithms.<BR>
<BR>
<H4>Cone algorithms</H4>
In cone algorithms, a 2-dimensional map of the calorimeter cells is scanned, looking for clusters which contain a local maximum of deposited energy. These maxima are used as "seeds" for the jet search and stored in a list, ordered by decreasing <I>E<SUB>T</SUB></I>. For each seed in the list, the algorithm sequentially adds the clusters which lie within a distance <I>R</I> from the centre of the starting seed; the distance in the detector frame of reference is defined as ΔR = √(Δη<SUP>2</SUP> + Δφ<SUP>2</SUP>), and it describes a cone centred on the interaction point.<BR>
<BR>
At each step of the sequence, the energies of the merged cluster and of the jet candidate are summed, and the centroid of the jet is calculated by weighting the coordinates of the clusters with their transverse energy <I>E<SUB>T</SUB></I>. The algorithm stops either when there are no more clusters available or when the cone is "stable", which means that the centroid is aligned with the cone axis [26]. In order to reduce the jet multiplicity, a cutoff on the minimum jet energy <I>E<SUB>min</SUB></I> can be introduced: jets with energy lower than <I>E<SUB>min</SUB></I> are rejected by the algorithm.<BR>
<BR>
There are two main disadvantages in the use of cone algorithms: first of all, the use of seeds makes the algorithm sensitive to infrared radiation and to collinear effects.
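Before detailing these two issues, the seeded cone procedure described above can be summarised by the following sketch. The cluster format, the convergence criterion and the thresholds are illustrative assumptions; the treatment of overlapping cones (split/merge) is omitted here and discussed next.
<PRE>
import math

# Schematic seeded cone algorithm (illustration only).  Clusters are
# dictionaries with transverse energy 'et' and coordinates 'eta', 'phi'.

def delta_r(eta1, phi1, eta2, phi2):
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def cone_jets(clusters, r_cone=0.4, seed_et=2.0, e_min=10.0):
    jets = []
    seeds = sorted((c for c in clusters if c["et"] > seed_et),
                   key=lambda c: c["et"], reverse=True)
    for seed in seeds:
        eta, phi = seed["eta"], seed["phi"]
        et = 0.0
        for _ in range(20):                        # iterate until the cone is stable
            members = [c for c in clusters
                       if delta_r(eta, phi, c["eta"], c["phi"]) < r_cone]
            if not members:
                break
            et = sum(c["et"] for c in members)
            # ET-weighted centroid (the naive phi average ignores wrap-around)
            new_eta = sum(c["et"] * c["eta"] for c in members) / et
            new_phi = sum(c["et"] * c["phi"] for c in members) / et
            if abs(new_eta - eta) < 1e-3 and abs(new_phi - phi) < 1e-3:
                break                              # centroid aligned with the cone axis
            eta, phi = new_eta, new_phi
        if et > e_min:                             # reject jets softer than E_min
            jets.append({"et": et, "eta": eta, "phi": phi})
    return jets
</PRE>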
In the case of infrared emission, the emitted radiation creates additional energy clusters which can be used as jet seeds, biasing the result of the algorithm. A similar effect occurs in the case of collinear radiation, where the bias is caused by the jet energy being spread over different towers by the collinear particles. These effects can be avoided by <EM>seedless cone algorithms</EM> [26]. This class of algorithms is however computationally intensive, since the algorithm treats all clusters as possible jet initiators. Another solution is to include in the list of initiators the midpoints between the seeds.<BR>
<BR>
The second disadvantage of cone algorithms is the occurrence of overlapping jets. This problem can be solved by introducing the following policy [26]: jets which share a fraction <I>f</I> of their total energy are merged (typically <I>f</I>=50%), while for lower fractions the shared clusters are split between the two jets, according to the nearest distance.<BR>
<BR>
<H4><I>K<SUB>T</SUB></I> algorithms</H4>
In the <I>K<SUB>T</SUB></I> scheme, the algorithm starts with a list of <EM>preclusters</EM>. This is a list of clusters which have been preliminarily merged, for two reasons: to reduce the number of input objects to the algorithm, hence reducing the computation time, and to reduce detector-dependent effects (for example, by merging clusters across calorimeter cracks and uninstrumented regions).<BR>
<BR>
Preclustering can be achieved in several ways: at CDF, cells from the hadronic and electromagnetic calorimeters are combined only if the <I>p<SUB>T</SUB></I> of the resulting precluster is larger than 100&nbsp;MeV; at DØ, preclusters are formed by summing cells until the precluster <I>p<SUB>T</SUB></I> is positive<SUP>1</SUP> and larger than 200&nbsp;MeV.<BR>
<BR>
<B>What about ATLAS?</B><BR>
<BR>
For each precluster combination <I>i</I>, <I>j</I> from the list, the <I>K<SUB>T</SUB></I> algorithm computes [26]:
<DIV ALIGN=center><I>d<SUB>ij</SUB></I> = min(<I>p</I><SUB>T,i</SUB><SUP>2</SUP>, <I>p</I><SUB>T,j</SUB><SUP>2</SUP>) ΔR<SUB>ij</SUB><SUP>2</SUP> / <I>D</I><SUP>2</SUP> = min(<I>p</I><SUB>T,i</SUB><SUP>2</SUP>, <I>p</I><SUB>T,j</SUB><SUP>2</SUP>) [(<I>y<SUB>i</SUB></I>-<I>y<SUB>j</SUB></I>)<SUP>2</SUP>+(φ<SUB>i</SUB>-φ<SUB>j</SUB>)<SUP>2</SUP>] / <I>D</I><SUP>2</SUP></DIV>
where <I>y<SUB>i</SUB></I> and φ<SUB>i</SUB> are the rapidity and the azimuthal angle of the cluster <I>i</I>. The parameter <I>D</I> is a cutoff that regulates the maximum allowed distance for two preclusters to be merged. For <I>D</I> ≈ 1 and ΔR<SUB>ij</SUB><SUP>2</SUP> « 1, <I>d<SUB>ij</SUB></I> is the relative transverse momentum <I>k</I><SUB>⊥</SUB> between the two preclusters <I>i</I>, <I>j</I>.<BR>
<BR>
The algorithm computes the minimum among all the <I>d<SUB>ij</SUB></I> and the squared transverse momenta of the preclusters <I>p</I><SUB>T,i</SUB><SUP>2</SUP>. If the minimum is a <I>d<SUB>ij</SUB></I>, the two corresponding preclusters are removed from the list and replaced with a merged cluster. If the minimum is a <I>p</I><SUB>T,i</SUB><SUP>2</SUP>, then the precluster <I>i</I> is identified as a jet and removed from the list. The procedure is repeated until either the precluster list is empty or the minimum rises above a threshold value <I>d<SUB>cut</SUB></I>.<BR>
<BR>
<H3>1.7.3&nbsp;&nbsp;Jet energy calibration</H3>
The calorimeters need to cover the maximum possible η range and to have the largest feasible absorption length, to avoid particles escaping detection in non-instrumented areas or "punching through" the detector; a good detector hermeticity allows for an accurate measurement of the imbalances in the azimuthal distribution of energy, giving an estimate of the missing energy <I>E</I><SUB>T</SUB><SUP>miss</SUP> carried by non-interacting neutral particles, such as the neutrino or its supersymmetric counterpart. Calorimetry is very important for muon detection too, since it can complement the tracking of the Muon Spectrometer with a measurement of the momentum lost by the muons while crossing the calorimeters.<BR>
<BR>
All of the ATLAS calorimeters are segmented in cells over the <I>R</I>, φ, η coordinates: each cell is read out independently from the others.
The segmentation makes it possible to study the characteristics of the particle showers generated by the primary particle entering the detector: the position of the showers gives an indication of the direction of the primary particle, while the spatial properties of the shower permit the identification of the primary particle; electrons and photons typically produce shorter and narrower showers, hadron jets produce broader and longer showers, while muons deposit only a modest amount of their initial energy.<BR>
<BR>
The calorimetry system of the ATLAS experiment is divided into an electromagnetic and a hadronic section (see Figure&nbsp;2.7). The electromagnetic calorimeter is more suited to contain the particle showers created by electrons and photons, while the hadronic calorimeters deal with single hadrons (such as pions, kaons, etc.) or hadronic jets.<BR>
<BR>
The energy resolution of all calorimetry systems is expressed by the following formula:
<DIV ALIGN=center>σ/<I>E</I> = <I>a</I>/√<I>E</I> ⊕ <I>b</I>/<I>E</I> ⊕ <I>c</I>.</DIV>
The term <I>a</I> is due to the stochastic fluctuations of the number of particles in the shower created by the primary particle and to the fluctuations of the amount of energy that cannot be measured by the calorimeter. This "invisible" energy includes the energy absorbed in the passive material of sampling calorimeters and the energy dissipated by physical processes that are difficult to detect in the calorimeter (such as neutron production, nuclear fission or nuclear excitation); the effect of the term <I>a</I> decreases for increasing energies. The term <I>b</I> is the noise term and includes electronic and detector noise as well as pile-up events; the latter are soft (<I>p<SUB>T</SUB></I> &lt; 500&nbsp;MeV) scattering events which accompany the hard collision (around 20 pile-up events are expected for each bunch crossing at design luminosity) and deposit a low amount of energy in the calorimeter.
The term <I>c</I> is the constant systematic error, which becomes predominant at high energies, and is composed of several contributions:
<UL><LI>the difference in the electromagnetic/hadronic response for non-compensating calorimeters;
<LI>inhomogeneities in the response of the calorimeter;
<LI>miscalibration of the conversion factor between the charge signal measured by the calorimeter and the energy of the primary particle;
<LI>environmental effects: temperature gradients across the calorimeter, aging of the detector elements due to irradiation;
<LI>leakage of the particle shower through the detector;
<LI>degradation of the energy resolution by the loss of energy of the primary particle outside the calorimeter (cabling, cooling pipes, support structures, upstream detectors).
</UL>
While the stochastic term and the noise term can be modelled by simulation, the systematic term can be reduced in magnitude only by a better understanding of the real detector. The aim of the test beam studies of the calorimetry systems is to evaluate the impact of all three terms on the calorimeter performance, and to obtain a better energy resolution.<BR>
<BR>
<H3>1.7.4&nbsp;&nbsp;<I>e</I>/γ identification</H3>
The leptonic decay of the <I>W</I> in single top events generates a high energy lepton providing a clear signature for the process. However, there is a chance that electrons may be mistaken for photons and vice versa. A high-<I>p<SUB>T</SUB></I> electron may lose a fair amount of energy via bremsstrahlung, making its track reconstruction impossible; an energy deposition in the calorimeter combined with no visible charged track would result in the electron being mistakenly identified as a photon. On the other hand, a photon may convert early into an electron/positron pair, and one of the two tracks may not be detected, so that the surviving track is wrongly identified as a prompt electron instead of a conversion product.<BR>
<BR>
A technique for the correct reconstruction of electrons and photons was explored in the ATLAS TDR [20]: for each cluster in the Electromagnetic Calorimeter, tracks are searched for in a cone of radius ΔR &lt; 0.1 around the direction of the cluster. The xKalman track reconstruction algorithm is used to identify the tracks; this algorithm treats bremsstrahlung emission as a soft continuous correction, thus it cannot cope with hard photon emissions which cause a kink in the track. If tracks are found in the search cone, they are passed to the xConver algorithm, which scans for oppositely charged track pairs that may come from a photon conversion. If no conversion is found, the cluster is identified as an electron. If xKalman does not find any track, a second algorithm, PixlRec, which can cope with hard bremsstrahlung, is invoked. The tracks found by PixlRec are again passed to xConver. If no tracks are found by either xKalman or PixlRec, or if xConver flags a track as the result of a conversion, the cluster is identified as a photon. The electron efficiency of this sequence of algorithms is estimated to be 99.8%, with a rejection factor against mistagged photons of 18. The photon efficiency is 94.4%, with a rejection factor against electrons of about 500 [20].
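The decision flow just described can be summarised with a short sketch. The callables <CODE>xkalman_tracks</CODE>, <CODE>pixlrec_tracks</CODE> and <CODE>xconver_finds_conversion</CODE> merely stand in for the corresponding reconstruction algorithms and are not real ATLAS software interfaces.
<PRE>
# Illustrative sketch of the e/gamma classification strategy described above.

def classify_em_cluster(cluster, xkalman_tracks, pixlrec_tracks,
                        xconver_finds_conversion, cone=0.1):
    """Return 'electron' or 'photon' for an electromagnetic cluster."""
    tracks = xkalman_tracks(cluster, cone)        # default track reconstruction
    if not tracks:
        tracks = pixlrec_tracks(cluster, cone)    # recovers hard bremsstrahlung
        if not tracks:
            return "photon"                       # no charged track at all
    if xconver_finds_conversion(tracks):
        return "photon"                           # track pair from a conversion
    return "electron"
</PRE>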
<H3>1.7.5&nbsp;&nbsp;<I>e</I>/π identification</H3>
A low energy pion can create a shower in the electromagnetic calorimeter while leaving no signal in the hadronic calorimeter. If such a shower is associated by mistake with a charged track in the silicon trackers, the pion can be mistakenly identified as an electron. The detection of Transition Radiation can prevent such an occurrence. Transition Radiation is emitted when a particle traverses a medium with a varying dielectric constant. The variation of the constant creates an oscillating dipole that emits X-rays with energies ranging from 2 to 20&nbsp;keV. Detecting X-rays in conjunction with a charged track makes it possible to perform particle identification: in order to emit transition radiation, the impinging particle needs to travel in the ultra-relativistic regime, with γ ≳ 1000 [13], that is, above about 0.5&nbsp;GeV for electrons and 140&nbsp;GeV for pions. Thus, the presence of an X-ray signal along the particle track is more likely to indicate an electron track rather than a pion track. It should be pointed out that pions can generate high-energy signals through δ-rays, but the choice of a threshold level of 5&nbsp;keV, above the expected energy deposition of a δ-ray, minimises the pion misidentification [40].<BR>
<BR>
<H3>1.7.6&nbsp;&nbsp;Muon momentum measurement</H3>
The measurement of the momentum of a charged particle in a magnetic field is based upon the measurement of the deflection of the particle track by means of the Lorentz force: a charged particle with momentum <I>p</I> in a magnetic field <I>B</I> travels along a helical path; projecting the helix onto a plane normal to <I>B</I> we obtain an arc whose radius <I>r</I> is:
<DIV ALIGN=center><I>r</I> = |<I>p</I>| / (<I>k</I>·|<I>B</I>|), &nbsp;&nbsp;&nbsp; <I>k</I> = 0.3&nbsp;GeV&nbsp;m<SUP>-1</SUP>&nbsp;T<SUP>-1</SUP>.</DIV>
Measuring the chord <I>L</I> of the arc and the sagitta <I>s</I>, we have:
<DIV ALIGN=center><I>L</I> = 2<I>r</I>&nbsp;sin&nbsp;θ ≈ 2<I>r</I>θ, &nbsp;&nbsp;&nbsp; <I>s</I> = <I>r</I>(1-cos&nbsp;θ) ≈ (1/2)<I>r</I>θ<SUP>2</SUP> = <I>L</I><SUP>2</SUP>/(8<I>r</I>)</DIV>
where θ is half the angle subtended by the arc at its centre. From these equations we obtain the relation between the particle momentum and the arc:
<DIV ALIGN=center><I>p</I> = 0.3&nbsp;<I>B</I>&nbsp;<I>r</I> ≈ 0.3&nbsp;<I>B</I><I>L</I><SUP>2</SUP>/(8<I>s</I>).</DIV>
Thus, by measuring the chord and the sagitta of the arc travelled by a charged particle in a known magnetic field, we can determine the particle's momentum.<BR>
<BR>
The ATLAS Muon Spectrometer (see Section&nbsp;2.4) is located in the outermost region of the detector. The muon momentum is measured using two different techniques:
<UL><LI>in the barrel region (|η|&lt;2.5) three concentric layers of precision drift chambers measure the track of the crossing muons at three points in space. The chambers are immersed in a 4&nbsp;T toroidal magnetic field. The sagitta of the bending track is calculated by measuring the position of the track point in the middle layer with respect to a straight line connecting the track points in the inner and outer layers.
<LI>at high rapidities, three layers of precision chambers are also used, but only the first layer is in a magnetic field; the particle momentum is therefore evaluated using the "point and angle" technique, which consists in measuring the deflection angle Δα between the track segment reconstructed in the first layer and the track segment reconstructed in the outer layers:
<DIV ALIGN=center><I>p</I> = 0.3&nbsp;<I>B</I><I>L</I> / Δα</DIV>
where <I>L</I> denotes here the length of the path travelled by the particle inside the magnetic field.
The resolution of the momentum measurement in the Muon Spectrometer is determined by two independent contributions:

* a term arising from the spatial measurement of the track points, <math>\left(\Delta p/p\right)_{sag} \propto p\,\sigma/(B L^{2})</math>, where σ is the intrinsic resolution of the precision chambers; this term is dominant for high-momentum muons (p<sub>T</sub> > 300 GeV);
* a term due to the multiple scattering of muons in the material of the spectrometer, <math>\left(\Delta p/p\right)_{MS} \propto 1/\left(B\sqrt{L\,X_{0}}\right)</math>; this term is dominant for muons with momenta 30 < p<sub>T</sub> < 300 GeV.

The total momentum resolution is given by the sum in quadrature of the two terms:

<math>\left(\frac{\Delta p}{p}\right)^{2}_{tot} = \left(\frac{\Delta p}{p}\right)^{2}_{sag} + \left(\frac{\Delta p}{p}\right)^{2}_{MS}.</math>
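The way the two contributions combine can be sketched as follows. The coefficients in this Python snippet are placeholders chosen only to reproduce the qualitative behaviour described above (a term growing linearly with p overtaking a roughly constant multiple-scattering term at a few hundred GeV); they are not measured ATLAS numbers.

<pre>
import math

def dp_over_p_sagitta(p_gev, coeff=1.0e-4):
    """Spatial-measurement term: grows linearly with p (placeholder coefficient)."""
    return coeff * p_gev

def dp_over_p_ms(coeff=0.02):
    """Multiple-scattering term: roughly independent of p (placeholder coefficient)."""
    return coeff

def dp_over_p_total(p_gev):
    """Quadrature sum of the two independent contributions."""
    return math.hypot(dp_over_p_sagitta(p_gev), dp_over_p_ms())

for p in (30.0, 100.0, 300.0, 1000.0):
    print(f"p = {p:6.0f} GeV  ->  dp/p = {dp_over_p_total(p)*100:.1f}%")
# With these placeholder values the spatial term equals the multiple-scattering
# term near p ~ 200 GeV and takes over above a few hundred GeV, matching the
# qualitative picture given in the text.
</pre>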
The resolution of the Muon Spectrometer was estimated at the 2004 test-beam. The "point and angle" technique was used, since no magnetic field large enough to cover three layers of chambers was available. The results of the tests are summarised in Section 2.4.1.

----
1. At the DØ calorimeter, the energy deposition from preceding bunch crossings may cause a slightly negative voltage signal in the calorimeter cells.

--[[User:Barison|Barison]] 17:32, 18 Jul 2005 (MET DST)