Mono-Photon Dark Matter Long Exercise - CMSDAS 2016

 
Abstract

 
The mono-photon analysis is one of a family of analyses collectively known as initial state radiation (ISR) searches. ISR searches are useful because they can provide sensitivity to phenomena which otherwise might not satisfy trigger criteria and might go entirely overlooked. For example, assume that new physics is present which gives rise to two electrically neutral, weakly interacting particles which are stable at least on the order of the crossing time of the CMS detector. The production cross section for this sort of process could be large, but because of the lack of visible energy in the final state, these events would never be triggered: the two particles would escape unseen, leaving events which appear to be filled with ordinary minimum bias activity. However, if the initial state, be it a quark-antiquark annihilation or gluon fusion, radiates an initial state particle, then the two invisible final state particles recoil off of something visible. This allows one to regain sensitivity to these elusive events, at the cost in cross section of effectively searching for a next-to-leading order process.

CMS and ATLAS have both successfully pursued ISR searches in the mono-jet, mono-photon, and more recently the mono-W and mono-Z channels (also mono-top, but that is a bit outside the scope of this discussion). Depending on the requirements placed on the final state, these searches are sensitive to dark matter production, large extra spatial dimensions, SUSY with low mass splittings, and many other elusive signs of new physics.

The mono-photon final state presents a unique challenge: it is the only ISR search in which there is a signature in only one detector, the electromagnetic calorimeter (ECAL). Photons are considerably rarer than hadronic jets, which means that trigger thresholds for the mono-photon search are in general much lower than those for the mono-jet search. However, for physics that originates from strong production (gluon fusion), the cross section for mono-jet production is much larger. Nevertheless, the mono-photon analysis is an important member of the ISR analysis family, all of which in principle should be sensitive to the same beyond the standard model (BSM) physics.

Datasets

# Data
# Luminosity = 2.26 fb^-1
/wk3/cmsdas/store/user/cmsdas/2016/LONG_EXERCISES/Monophoton/Data_2015D_v3/
/wk3/cmsdas/store/user/cmsdas/2016/LONG_EXERCISES/Monophoton/Data_2015D_v3_0/
/wk3/cmsdas/store/user/cmsdas/2016/LONG_EXERCISES/Monophoton/Data_2015D_v4_0/
/wk3/cmsdas/store/user/cmsdas/2016/LONG_EXERCISES/Monophoton/Data_2015D_v4_1/

MC Samples
/wk3/cmsdas/store/user/cmsdas/2016/LONG_EXERCISES/Monophoton/ZNuNuGJets_MonoPhoton_PtG-130_TuneCUETP8M1_13TeV-madgraph/ (Z(nunu) + gamma)
/wk3/cmsdas/store/user/cmsdas/2016/LONG_EXERCISES/Monophoton/WGJets_MonoPhoton_PtG-130_TuneCUETP8M1_13TeV-madgraph/ (W + gamma)
/wk3/cmsdas/store/user/cmsdas/2016/LONG_EXERCISES/Monophoton/ZLLGJets_MonoPhoton_PtG-130_TuneCUETP8M1_13TeV-madgraph/ (Z(ll) + gamma)
/wk3/cmsdas/store/user/cmsdas/2016/LONG_EXERCISES/Monophoton/WToTauNu_M-100_TuneCUETP8M1_13TeV-pythia8/
/wk3/cmsdas/store/user/cmsdas/2016/LONG_EXERCISES/Monophoton/WToMuNu_M-100_TuneCUETP8M1_13TeV-pythia8/
/wk3/cmsdas/store/user/cmsdas/2016/LONG_EXERCISES/Monophoton/Gjets_HT-*/ (gamma+jet)

Signal MC Samples
/wk3/cmsdas/store/user/cmsdas/2016/LONG_EXERCISES/Monophoton/ADDmonoPhoton_MD-1_d-3
/wk3/cmsdas/store/user/cmsdas/2016/LONG_EXERCISES/Monophoton/ADDmonoPhoton_MD-2_d-3
/wk3/cmsdas/store/user/cmsdas/2016/LONG_EXERCISES/Monophoton/ADDmonoPhoton_MD-3_d-3

Skeleton code creation

root -l

// attach at least one ntuple file so MakeClass can see the tree structure
// (adjust the file pattern to the dataset area you are using)
TChain *chain = new TChain("ggNtuplizer/EventTree");
chain->Add("/wk3/cmsdas/store/user/cmsdas/2016/LONG_EXERCISES/Monophoton/Data_2015D_v3/*.root");
chain->MakeClass("DMAnalysis");   // writes DMAnalysis.h and DMAnalysis.C
.q

vim DMAnalysis.C
# implement your cuts & selections in DMAnalysis::Loop()

root -l
.L DMAnalysis.C
DMAnalysis a;
a.Loop();

Backgrounds

G stands for gamma

Irreducible background

Def.: an SM process whose final state looks the same as the signal.

  • Z+gamma, with Z -> nunu (about 20% branching ratio)
We cannot estimate this from data → MC is needed. We have to choose the best MC sample (the one that best agrees with the data), produced by POWHEG, PYTHIA, MadGraph, etc.

Reducible background

  • Z+jets
    • a jet can fake a photon
    • how to reduce: with cuts and photon ID
  • W+jets (W -> munu or W -> enu, plus the jets)
    • a jet can also fake a photon
    • how to reduce: with cuts and photon ID
  • QCD multijets
    • the large MET comes from detector noise plus a photon fake
    • how to reduce: this is a quite small contribution; with cuts and photon ID
  • Z+W
    • how to reduce: this is a quite small contribution; with cuts
  • W+gamma (e nu gamma, mu nu gamma)
    • how to reduce: lepton (electron/muon) veto (if the event has an electron or muon, we kill it)
    • this is only suppression, not a final 'killing'
  • gamma + jet
    • much larger than the QCD background
    • how to reduce: with cuts
  • cosmics (muon bremsstrahlung in the ECAL)
    • how to reduce: timing is used (a ±3 ns window between the two hits)
    • neglected here
  • beam halo (beam-halo-induced photons/background)
    • how to reduce: timing is used (a ±3 ns window between the two hits)
    • neglected here

Cuts

Basic idea: using the cut-and-count method, find the best set of cuts s.t. you suppress the background while still keeping a good signal efficiency. Check which of the Loose, Medium, and Tight ID sets gives you the best signal-over-sqrt(background) (S/sqrt(B)) ratio.

Trigger

  • Photon175
    • rejects photons with E_T lower than 175 GeV
  • Photon165_HE10
    • rejects photons with E_T lower than 165 GeV
    • and requires H/E < 0.1
    • where H = the energy detected in the HCAL
    • after that you can apply an offline cut at 175 GeV
  • MET trigger?
    • we don't use the MET trigger by itself, always with something else (cross trigger: photon+MET)
    • advantages of the cross trigger: less data recorded, easier to reconstruct
    • the bandwidth is 5 Hz, thus we can lower the photon pT threshold to 135 GeV instead of 175 GeV (for an analysis we can ask for around 5 Hz, ideally not beyond 10 Hz)

Code comes here
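For example, a minimal sketch of what could go here. It assumes the photon trigger decisions are stored as a bitmask in an HLTPho branch (the ggNtuplizer convention); the bit position used below is a placeholder, so check the trigger documentation of your ntuple:

// inside DMAnalysis::Loop(), per event:
const int kPho165HE10Bit = 7;                  // placeholder bit position for Photon165_HE10
bool passTrigger = (HLTPho >> kPho165HE10Bit) & 1;
if (!passTrigger) continue;                    // keep only triggered events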

Non-collision background cuts

These are effects caused by cosmic rays or by the beam halo.

  • the shower size must be small
  • the photon has to have no pixel seed (pixel-seed veto); see the sketch below
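A minimal sketch of these requirements, assuming ggNtuplizer-style branches (phoSigmaIEtaIEtaFull5x5, phohasPixelSeed, phoSeedTime); the threshold values are placeholders to tune:

// inside the photon loop, for candidate i:
if ((*phoSigmaIEtaIEtaFull5x5)[i] > 0.0102) continue; // small shower size (placeholder value)
if ((*phohasPixelSeed)[i] != 0) continue;             // pixel-seed veto
if (fabs((*phoSeedTime)[i]) > 3.) continue;           // timing: within ±3 ns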

Photon

  • usual analyses cut on |eta| smaller than what the tracker allows (1.5)
    • because the endcap has poor efficiency and we have high-energy photons
  • H over E cut
  • cut on the medium photon ID
    • PF charged-hadron isolation
    • rho-corrected PF neutral-hadron isolation
    • rho-corrected PF photon isolation
    • Iso' = Iso - EA * rho, where EA is the effective area

https://twiki.cern.ch/twiki/bin/view/CMS/EgIdentification

Code comes here
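A minimal sketch of the photon candidate selection, inside DMAnalysis::Loop(). Branch names follow the ggNtuplizer convention (phoEt, phoSCEta, ...); the numerical thresholds are illustrative, so take the official medium-ID working-point values from the EgIdentification TWiki linked above. EAch/EAneu/EApho stand for hypothetical effective-area lookup functions you would have to provide:

int candIdx = -1;
for (int i = 0; i < nPho; ++i) {
  if ((*phoEt)[i] < 175.) continue;                  // offline E_T cut above the trigger
  if (fabs((*phoSCEta)[i]) > 1.4442) continue;       // barrel only
  if ((*phoHoverE)[i] > 0.05) continue;              // H/E (placeholder value)
  double abseta = fabs((*phoSCEta)[i]);
  // rho-corrected isolations: Iso' = max(Iso - EA * rho, 0)
  double chIso  = TMath::Max((*phoPFChIso)[i]  - EAch(abseta)  * rho, 0.);
  double neuIso = TMath::Max((*phoPFNeuIso)[i] - EAneu(abseta) * rho, 0.);
  double phoIso = TMath::Max((*phoPFPhoIso)[i] - EApho(abseta) * rho, 0.);
  if (chIso > 1.37 || neuIso > 1.06 || phoIso > 0.28) continue; // placeholder WP values
  candIdx = i;
  break;
}
if (candIdx < 0) continue;   // no good photon -> skip the event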

MET

MET (note that every event will have some missing E_T)
  • MET > 140 GeV

Code comes here
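A minimal sketch, using the pfMET branch from the ntuple:

// MET requirement, inside the event loop
if (pfMET < 140.) continue;   // keep events with MET > 140 GeV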

Lepton veto

The CMS experiment is very efficient at reconstructing the associated track that is produced when an electron deposits energy through ionization on its way through the tracker. However, bremsstrahlung energy loss can lead to changes in the curvature of the track, severe enough in some cases that the track is lost. The small fraction of the time this happens is the source of the electron background to our mono-photon signature.

  • electron/muon misidentification
  • standard-candle triggers for electrons: W and Z events

  • lepton pT > 10 GeV
  • dR(lepton, photon) > 0.5
  • cluster size (eleSigmaIEtaIEtaFull5x5)
  • H over E for the electron should be small
  • the muon should be a global muon
  • the muon should have more than one muon hit
  • the muon should have more than one pixel hit

Code comes here
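A minimal sketch of the veto, inside the event loop; candIdx is the photon candidate index from the photon selection above. Branch names follow the ggNtuplizer convention, and the muon-quality bullets (global muon, >1 muon hit, >1 pixel hit) are not spelled out here because the corresponding branches vary between ntuple versions; treat them as assumptions to adapt:

bool hasLepton = false;
for (int i = 0; i < nEle && !hasLepton; ++i) {
  if ((*elePt)[i] < 10.) continue;
  double dEta = (*eleEta)[i] - (*phoEta)[candIdx];
  double dPhi = TVector2::Phi_mpi_pi((*elePhi)[i] - (*phoPhi)[candIdx]);
  if (sqrt(dEta*dEta + dPhi*dPhi) > 0.5) hasLepton = true;  // electron not overlapping the photon
}
for (int i = 0; i < nMu && !hasLepton; ++i) {
  if ((*muPt)[i] < 10.) continue;
  double dEta = (*muEta)[i] - (*phoEta)[candIdx];
  double dPhi = TVector2::Phi_mpi_pi((*muPhi)[i] - (*phoPhi)[candIdx]);
  if (sqrt(dEta*dEta + dPhi*dPhi) > 0.5) hasLepton = true;
}
if (hasLepton) continue;     // veto events with an extra electron or muon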

DPhi cut

We need almost back-to-back events: the phi difference between the photon and the MET should be greater than 2 radians.

Code comes here
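A minimal sketch, assuming the photon candidate index candIdx from the photon selection above and the pfMETPhi branch:

// back-to-back requirement between the photon and the MET
double dPhiGamMET = fabs(TVector2::Phi_mpi_pi((*phoPhi)[candIdx] - pfMETPhi));
if (dPhiGamMET < 2.0) continue;   // keep dPhi(photon, MET) > 2 rad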

Data vs MC comparison

What are the good parameters?

What changes at 13 TeV? Which are the good plots to compare?

  • photon eta (PhoEta)
  • transverse energy of the photons (PhoET)
  • missing E_T (MET, pfMET)

To be able to perform the comparison, the normalizations must match, so we normalize the MC to the data luminosity.
Each MC event is weighted.

Number of events = cross section * luminosity
w_i = cross section * luminosity / (total number of events in the sample)
k-factor = NLO cross section / LO cross section
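A sketch of the per-sample weight; the cross section is a placeholder that must be looked up for each sample, and hMET is a hypothetical histogram:

// normalize an MC sample to the data luminosity
double lumi    = 2260.;                        // pb^-1 (= 2.26 fb^-1, from the dataset list)
double xsec    = 123.4;                        // pb -- placeholder, sample dependent
double kfactor = 1.0;                          // NLO/LO k-factor, if available
double nTotal  = (double)fChain->GetEntries(); // total generated events in the sample
double weight  = kfactor * xsec * lumi / nTotal;
// fill every histogram with this weight, e.g. hMET->Fill(pfMET, weight);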

Plotting

Match the MC distributions to the data distribution. The MC is drawn filled with colors (THStack); the most dominant contribution is conventionally plotted in green. In the end, stack the MC plots for comparison; the data should be drawn as black solid dots (with error bars).
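A minimal plotting sketch (the histogram names hZnunuG, hWG, hGJets, hData are hypothetical):

// stacked MC vs. data comparison
THStack *stack = new THStack("stack", ";pfMET [GeV];Events");
hZnunuG->SetFillColor(kGreen);       // dominant background in green
hWG->SetFillColor(kOrange);
hGJets->SetFillColor(kBlue);
stack->Add(hGJets);
stack->Add(hWG);
stack->Add(hZnunuG);                 // added last, drawn on top of the stack
stack->Draw("HIST");
hData->SetMarkerStyle(20);           // black solid dots
hData->Draw("E SAME");               // with error bars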

DM model

ADD model: the large-extra-dimensions model of Arkani-Hamed, Dimopoulos, and Dvali [13-17].
[13] N. Arkani-Hamed, S. Dimopoulos, and G. Dvali, “The hierarchy problem and new dimensions at a millimeter”, Phys. Lett. B 429 (1998) 263, doi:10.1016/S0370-2693(98)00466-3, arXiv:hep-ph/9803315.
[14] N. Arkani-Hamed, S. Dimopoulos, and G. Dvali, “Phenomenology, astrophysics and cosmology of theories with submillimeter dimensions and TeV scale quantum gravity”, Phys. Rev. D 59 (1999) 086004, doi:10.1103/PhysRevD.59.086004, arXiv:hep-ph/9807344.
[15] I. Antoniadis, K. Benakli, and M. Quiros, “Direct collider signatures of large extra dimensions”, Phys. Lett. B 460 (1999) 176, doi:10.1016/S0370-2693(99)00764-9, arXiv:hep-ph/9905311.
[16] G. Giudice, R. Rattazzi, and J. Wells, “Quantum gravity and extra dimensions at high-energy colliders”, Nucl. Phys. B 544 (1999) 3, doi:10.1016/S0550-3213(99)00044-9, arXiv:hep-ph/9811291.
[17] E. Mirabelli, M. Perelstein, and M. Peskin, “Collider signatures of new large space dimensions”, Phys. Rev. Lett. 82 (1999) 2236, doi:10.1103/PhysRevLett.82.2236, arXiv:hep-ph/9811337.

Background estimation

MC cannot really describe the data at high pT, so we have to estimate this background from data. The method is called the template method.

Signal template

A good control sample: Z -> mumu + gamma (constrained to low photon pT, roughly 10-30 GeV).

For high pT we trust the MC signal template, corrected by a data/MC factor:
gamma in data = gamma in MC * (e in data / e in MC),
where (e in data / e in MC) is measured using Z -> ee events.

Background template

We want a side band (SB), i.e. a background-rich region. To do this we perform a two-component fit; within the signal region, the signal-to-background ratio is then calculated.
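One way to implement such a two-component template fit is with RooFit; a sketch only, where the histograms hSigTemplate, hBkgTemplate, hDataCandidates and the choice of fit variable are assumptions:

// two-component (signal + background) template fit in a shape variable
RooRealVar sieie("sieie", "#sigma_{i#etai#eta}", 0.005, 0.02);
RooDataHist sigHist("sigHist", "signal template", RooArgList(sieie), hSigTemplate);
RooDataHist bkgHist("bkgHist", "background template", RooArgList(sieie), hBkgTemplate);
RooHistPdf  sigPdf("sigPdf", "sig pdf", RooArgSet(sieie), sigHist);
RooHistPdf  bkgPdf("bkgPdf", "bkg pdf", RooArgSet(sieie), bkgHist);
RooRealVar  fSig("fSig", "signal fraction", 0.5, 0., 1.);
RooAddPdf   model("model", "sig+bkg", RooArgList(sigPdf, bkgPdf), fSig);
RooDataHist data("data", "candidates", RooArgList(sieie), hDataCandidates);
model.fitTo(data);                   // fSig now gives the fitted signal fraction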

Estimating the systematics

  • hardware (detector) uncertainties
  • method uncertainties

Set limits on the cross section of the DM

The Higgs analysis "combine" tool is used for that.
The parameters (e.g. the systematics) need to be set.
N_B comes from the e -> gamma and jet -> gamma fake estimates.
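A minimal single-bin datacard sketch for combine; all names and numbers are placeholders to be replaced by your own yields:

# hypothetical datacard: monophoton.txt
imax 1  number of channels
jmax 2  number of backgrounds
kmax 1  number of nuisance parameters
------------
bin         monophoton
observation 100
------------
bin         monophoton  monophoton  monophoton
process     ADD         ZnunuG      fakes
process     0           1           2
rate        10.0        80.0        20.0
------------
lumi  lnN   1.027       1.027       1.027

It can then be run with, e.g., combine -M Asymptotic monophoton.txt to get the expected and observed limits.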


-- TamasVami - 2016-02-21