Useful Tricks



/afs/ = TrackerAlignment_mp2853_forAnnealingStud

321832 (a while after annealing) [	 jobs]: crab_AnnealingStud_Run321832_SingleMuon | on eos with LA and reso, cleaned, give to Tanja |
--> copied

# ---------------------------------------------------------------------------------------------------------------------------------------------------
320841 (just after annealing) [212 jobs]: crab_AnnealingStud_Run320841_SingleMuon | LA is copied to eos |
rsync -azv --update LA*
--> LA copied
rsync -azv --update Re*
--> Residuals copied

# ---------------------------------------------------------------------------------------------------------------------------------------------------
319992 (Run-C, before annealing) [248 jobs]: crab_AnnealingStud_Run319992_SingleMuon_v3 | crab outputted
rsync -azv --update LA*
--> LA copied
rsync -azv --update Re*
--> Residuals copied

# ---------------------------------------------------------------------------------------------------------------------------------------------------
319528, Run-C [265 jobs]: crab_AnnealingStud_Run319528_SingleMuon_v2 | crab outputted
rsync -azv --update LA*
--> LA copied
rsync -azv --update Re*
--> Residuals copied

# ---------------------------------------------------------------------------------------------------------------------------------------------------
317512, Run-B [440 jobs]: crab_AnnealingStud_Run317512_SingleMuon_v2 | crab outputted
rsync -azv --update LA*
--> LA copied
rsync -azv --update Re*
--> Residuals copied

# ---------------------------------------------------------------------------------------------------------------------------------------------------
316879, Run-A [190 jobs]: crab_AnnealingStud_Run316879_SingleMuon_v1 | crab outputted
rsync -azv --update LA*
--> LA copied
rsync -azv --update Re*
--> Residuals copied

# ---------------------------------------------------------------------------------------------------------------------------------------------------
316457, Run-A [1042 jobs]: crab_AnnealingStud_Run316457_SingleMuon_v1 | crab outputted
rsync -azv --update LA*
--> LA copied
rsync -azv --update Re*
--> Residuals copied

So far I use 320688 --> so that is the worst situation.
We need a run after 320812 --> PCL is applied,
one after they upload the ML alignment,
and one after all conditions are fixed ...
so four altogether.

ls -l . | awk '{if  ($5 < 6000000) print "rm "$9}'
scp LA_*
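The awk one-liner above prints an rm command for every file smaller than 6 MB (field $5 of ls -l is the size in bytes, $9 the name). A small Python sketch of the same idea, split so the logic is testable (the 6 MB threshold is simply the value used above):

```python
import os

def rm_commands(entries, threshold=6_000_000):
    """Mirror of the awk one-liner above: given (name, size) pairs,
    return "rm" commands for files smaller than threshold bytes."""
    return ["rm " + name for name, size in entries if size < threshold]

def scan_dir(path="."):
    """Collect (name, size) pairs for the regular files in *path*."""
    return [(e.name, e.stat().st_size) for e in os.scandir(path) if e.is_file()]

# Equivalent of:  ls -l . | awk '{if ($5 < 6000000) print "rm "$9}'
print("\n".join(rm_commands(scan_dir("."))))
```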

Alignment tags 2018

with updated tracker alignment and APE conditions for the Summer18 ReReco of the eras ABC(D). The alignment constants are the result of a dedicated alignment campaign; details and validation results were presented at the tracker alignment meeting on September 4, 2018 [*]. The tags also contain the previous offline alignment history. [*]

Template insights from Morris

The 1D template position estimation is insensitive to smallish changes of the charge scale [+-10% should be no problem]. The actual algorithm does truncate pixel charges that are too large, to avoid delta-ray effects [so it is formally sensitive to the charge scale]. The shape fits use the pixel-by-pixel charges to estimate the uncertainties in the denominators of the chi-square function. Finally, the estimated hit uncertainties and probabilities do depend very much on the charge scale [few-percent changes are noticeable]. Lots of cluster charge indicates lots of delta-ray activity and larger errors. Each template calibration should remove the scale effects.

How to do HipPy

For the startup, this is probably the best object:
it's the first one we uploaded to the GT this year.

Instructions for running hippy:

We work in this directory:
The easiest way to start is to cmsenv here, even if that's not the release you eventually want to use (later the script will make a new CMSSW area for you with the proper release):

Then you can run this script. foldername --scram-arch (...) --cmssw (...) --merge-topic hroskes:hippy-scripts --merge-topic hroskes:common-tkal-dataset

This takes a while and makes a folder /afs/, which contains your CMSSW release and folders called Jobs/ and run/.  Jobs/ is where the jobs eventually happen.  run/ is where all the scripts are.

Inside run/ there are a few things:
DataFiles - here you have to set up a txt file with a list of data files to run on.  To get the list, you can run this script. -d /.../.../ALCARECO --first-run ... --last-run ... --hippy myfilelist.txt -j neventsperjob -m maxevents

For collisions, neventsperjob=10000 typically makes each iteration take about 20 minutes.  I suggest starting with that and then checking how many jobs there will be (by doing wc -l myfilelist.txt).  If it's too many you can either increase neventsperjob or set maxevents.
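The job-count bookkeeping above is simple ceiling division; a toy sketch with made-up numbers:

```python
def n_jobs(total_events, nevents_per_job):
    """Rough number of jobs (what `wc -l myfilelist.txt` would report),
    assuming the splitter fills each job with nevents_per_job events."""
    return -(-total_events // nevents_per_job)  # ceiling division

print(n_jobs(1_000_000, 10_000))  # -> 100
```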

Then you can get rid of the COSMICS and CDCs line in data_example.lst and set the minbias line to point to your txt file.

IOV - you want a file that points to the first run of your dataset

Configurations - the only thing you have to touch here is the "common" file.  Here you set the global tag, the conditions to override (templates, maybe a starting alignment), and the alignables.  To align at ladder level, uncomment the lines "TrackerP1PXBLadder,111111" and "TrackerP1PXECBlade,111111".  The six digits 111111 refer to x, y, z, theta_x, theta_y, theta_z.  It's probably better to turn off z in FPIX, so change it to 110111.

You can also change minimumNumberOfHits if you want.  That's about all we normally change.

Then, in the main folder, the last thing to change is  You just have to edit the variables that start out commented.
hpnumber is the ID number of the alignment.  It refers to the name of the folder in /afs/  You can just increment the last number by 1.
common, lstfile, IOVfile - these are the names of the files you set up
alignmentname - this is the name of a folder in Jobs where it gets run
niterations - typically we do 10
Then the script requires you to git commit what you've done.  It's a little git repository with all these scripts.

Then finally you can start running.  This has to be in a screen session.  I usually use this script to start the session:
it prints out how to get back to the cmsenv and directory and emails me which lxplus I'm on so that I can get back to it.

Inside the screen you just do ./ and it should run.

Checking int lumi

brilcalc lumi -u /nb --begin 6469 --end 
6469 is the first fill in 2018 (with 6-digit run numbers, a 4-digit --begin/--end value is a fill number). To install, have a look at

New geometry to sqlite files

in the file, modify
    geomXMLFiles = cms.vstring(
         # World volume creation and default CMS materials
then go to
modify the Geometry line to

toPut = cms.VPSet(
    cms.PSet(record = cms.string('IdealGeometryRecord'), tag = cms.string('TKRECO_Geometry_Phase2Telescope')),
    cms.PSet(record = cms.string('PGeometricDetExtraRcd'), tag = cms.string('TKExtra_Geometry_Phase2Telescope')),
    cms.PSet(record = cms.string('PTrackerParametersRcd'), tag = cms.string('TKParameters_Geometry_Phase2Telescope'))
)

What I learned from 2D Templates

cmsrel CMSSW_10_1_0_pre2
cd CMSSW_10_1_0_pre2/src/
git cms-merge-topic 22458

To trigger the new CPE, uncomment last two lines in

To get the correct label, modify
and change the line
std::string label = "";
to
std::string label = "denominator";

scram b
SingleMuPt10_pythia8_cfi --conditions 101X_upgrade2018_realistic_Candidate_2018_03_15_16_26_46 -n 10 --era Run2_2017 --eventcontent FEVTDEBUG --relval 25000,100 -s GEN,SIM --datatier GEN-SIM --beamspot Realistic25ns13TeVEarly2017Collision --geometry DB:Extended
step2 --conditions 101X_upgrade2018_realistic_Candidate_2018_03_15_16_26_46 -s DIGI:pdigi_valid,L1,DIGI2RAW,HLT:@relval2017 --datatier GEN-SIM-DIGI-RAW -n 10 --geometry DB:Extended --era Run2_2017 --eventcontent FEVTDEBUGHLT --filein file:SingleMuPt10_pythia8_cfi_GEN_SIM.root
step3 --conditions 101X_upgrade2018_realistic_Candidate_2018_03_15_16_26_46 -n 10 --era Run2_2018 --eventcontent RECOSIM,MINIAODSIM,DQM --runUnscheduled -s RAW2DIGI,L1Reco,RECO,RECOSIM,EI,PAT,VALIDATION:@standardValidation+@miniAODValidation,DQM:@standardDQM+@ExtraHLT+@miniAODDQM --datatier GEN-SIM-RECO,MINIAODSIM,DQMIO --geometry DB:Extended --filein file:step2_DIGI_L1_DIGI2RAW_HLT.root --no_exec

open file
and include the line 

for data:
step3 --conditions 101X_dataRun2_Prompt_Candidate_2018_03_26_19_48_11 -n 10 --era Run2_2018 --eventcontent RECOSIM,MINIAODSIM,DQM --runUnscheduled -s RAW2DIGI,L1Reco,RECO,RECOSIM,EI,PAT,VALIDATION:@standardValidation+@miniAODValidation,DQM:@standardDQM+@ExtraHLT+@miniAODDQM --datatier GEN-SIM-RECO,MINIAODSIM,DQMIO --geometry DB:Extended --filein root:// --no_exec --data
RECO -s RAW2DIGI,L1Reco,RECO --data --scenario pp --conditions 101X_dataRun2_Prompt_Candidate_2018_03_26_19_48_11 --era Run2_2017 --process NTUPLE --eventcontent RECO --datatier RECO --filein root:// --runUnscheduled -n 10 --no_exec

Workflow tricks

A typical workflow for the relval of the 2017 workflow (using the phase1 pixel; this is also true for 2018 and 2019, where the only relevant change is for HCAL) can be found like this, say for the example of SingleMuPt10: -n | grep 2017 | grep SingleMuPt10
--> if you type this command yourself, you will see that the workflow number is 10007

Therefore, if you finally type -l 10007 -ne
--> it gives you the cmsDriver commands for this particular workflow, which are actually used in pull request tests and in relvals.
--> by typing the command above you can see that the era actually used is --era Run2_2017

Inserting the name of this era in the github search leads you to the config listing the various detector eras that build up the 2017 CMS era:
please look here:
where you can see that for the pixel it is phase1Pixel

So it looks like what you did was good (I mean adding the modifier).

For the workflow you were testing, 11624 (2019 configuration), you can try the same and find out that the era is --era Run3; therefore you could look at:
which is loading
from Configuration.Eras.Era_Run2_2018_cff import Run2_2018
which in turn loads the 2017 one, so we are in the same config for the pixel as before.

Something to think about on the pixel team side is whether we would need to deploy eras according to the year (as HCAL is doing). So far the 2017 pixel is only different from the <2016 pixel (for obvious reasons :) ) but the same as in 2018 and 2019. Perhaps one should think about having different reconstruction (or simulation) configurations/parameters depending on the year, even for the same pixels. Maybe this is irrelevant, but in that case the era mechanism would be easy to implement.

Config files tricks

# Description: example of how to set up many input files. After 255 files, put the next files into the .extend part
import FWCore.ParameterSet.Config as cms
myfilelist = cms.untracked.vstring()
#  ])
FileNames = myfilelist
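A hedged sketch of the full pattern (file names are hypothetical placeholders; this is a CMSSW config fragment, so it needs FWCore to run): older Python limits a single call to 255 arguments, so only the first 255 file names can go into the vstring() constructor and the rest are appended with .extend().

```python
# Sketch only -- the file names below are hypothetical placeholders.
import FWCore.ParameterSet.Config as cms

myfilelist = cms.untracked.vstring(
    '/store/data/.../file_1.root',
    # ... up to 255 entries here ...
)
myfilelist.extend([
    '/store/data/.../file_256.root',
    # ... the remaining entries here ...
])

FileNames = myfilelist
```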

DQM tricks

A way to get just the histograms you are interested in from the DQMGUI.
From lxplus, do:
cmsrel CMSSW_9_4_0
cd CMSSW_9_4_0/src/
dqm-access -w -c -f '/PixelPhase1/Phase1_MechanicalView/PXBarrel/digi_occupancy_*_PXLayer_*' -e "match('/ZeroBias1/Run2017G-PromptReco-v.*/DQMIO',dataset)" -s

dqm-access --help | less

The last command can be generalized, e.g.:

  • If you want to fetch histograms from all periods, use Run2017.* instead of Run2017G (this is a plain regular expression, you can assemble the one you like)
  • If you are interested in the forward too, I'd suggest making a second query (if you repeat the same command with a different histogram target, the output file in your local directory will be overwritten...)
  • if you are interested only in some specific run, you could use: -e "run == XYZ and match...."

LA tree production

Pixel tree production

Setup procedure is explained here
(instructions.txt is deprecated so ignore it please)

After you finish setup, open and set these four lines
(CMSSW is not mandatory but is ok for bookkeeping.)

After you did all that, just run the script. It will create a bunch of scripts (in the batch folder) ready to be sent to batch.
Before sending, you can open one python script and check that everything is set up correctly. You
can even change the number of events to e.g. 5 and run the script interactively.

How does the script actually create jobs?
There is the template script which is used for making jobs.
(Note that there are many different templates, but only the one which is called is actually used;
the other templates are for different configurations, i.e. cosmics, VCAL, etc. The most recent ones are
those which have phase1 in their name. If you want to use another template, it must be called

If you are going to send jobs, they will be automatically stored on EOS.
I don't know if you have permissions to store there. In any case, you need to manually create the directory
(from line 33) on EOS. In case you make files on EOS, please record in the google doc what you did so
that I can follow what is going on.

Right now, sending jobs is not configurable via a variable, so you need to (un)comment
in order to send the jobs.

LA plots production

root -l -b -q  'LALayer.C("input.list",1,"outfile.root")'

The input list is a txt file with the list of files containing the LA trees (it can also be a single root file; in that case the second argument should be 0).
Third argument is the output root file with the histograms.
The 2017 LA trees are stored in the eos directory:

Linux tricks

du -hsx * | sort -rh | head -10
locate "*.root" | grep "/data/vami/backup/vami/projects/" > rootFiles.txt

Changing payloads

# ----------------------------------------------------------------------
process.GlobalTag.toGet = cms.VPSet(
    cms.PSet(record = cms.string("#RecordName1"),
             tag = cms.string("#TagName1"),
             connect = cms.untracked.string("frontier://FrontierProd/CMS_CONDITIONS")),
    cms.PSet(record = cms.string("#RecordName2"),
             tag = cms.string("#TagName2"),
             connect = cms.untracked.string("frontier://FrontierProd/CMS_CONDITIONS"))
)
# ----------------------------------------------------------------------

edm Tricks

edmConfigDump >

CRAB notes

If we want to resubmit the unprocessed jobs in another task, we run crab report, which creates a file in the results/ directory that should be added as a lumiMask in the next task.
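For example, a hypothetical follow-up configuration (the task name and the JSON file name are assumptions; use whatever crab report actually wrote into your results/ directory):

```python
# Hypothetical crabConfig.py fragment for the follow-up task.
# The lumi-mask path below is an assumption: point it at the JSON that
# `crab report -d crab_projects/crab_MyOldTask` wrote into that task's results/.
from CRABClient.UserUtilities import config
config = config()
config.Data.lumiMask = 'crab_projects/crab_MyOldTask/results/notFinishedLumis.json'
```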

config.JobType.pyCfgParams = ['globalTag=80X_mcRun2_...']
['valami=1', 'masik=2', 'harmadik=3']

AlCaDB contact notes


process.GlobalTag.DumpStat = cms.untracked.bool(True)


set storage=/afs/

foreach era (`/bin/ls $storage | grep Run | grep -v sh`)
    #echo $era
    foreach subfolder (`/bin/ls $storage/$era | grep 3`)
	echo $era $subfolder
	echo $storage/$era/$subfolder/promptCalibConditions.db
	conddb_import -c sqlite_file:SiPixelQualityFromDbRcd_other_Ultralegacy2018_v0_mc.db    -f sqlite_file:$storage/$era/$subfolder/promptCalibConditions.db -i SiPixelQualityFromDbRcd_other    -t SiPix
	conddb_import -c sqlite_file:SiPixelQualityFromDbRcd_stuckTBM_Ultralegacy2018_v0_mc.db -f sqlite_file:$storage/$era/$subfolder/promptCalibConditions.db -i SiPixelQualityFromDbRcd_stuckTBM -t SiPix
	conddb_import -c sqlite_file:SiPixelQualityFromDbRcd_prompt_Ultralegacy2018_v0_mc.db   -f sqlite_file:$storage/$era/$subfolder/promptCalibConditions.db -i SiPixelQualityFromDbRcd_prompt   -t SiPix

Modifying an IOV boundary in a local sqlite file:
$ sqlite3 TrackerSurfaceDeformations_v9_offline.db 
SQLite version 3.22.0 2018-01-22 18:45:57
Enter ".help" for usage hints.
sqlite> .q
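The session above only opens and closes the file; the actual edit is an SQL UPDATE at the sqlite> prompt. A toy, self-contained reproduction of that edit (the IOV table and SINCE column names are assumptions modeled on the conditions schema; inspect your own file with .tables / .schema first, and the run numbers here are made up):

```python
import sqlite3

# Toy stand-in for the .db file; the real conditions schema is richer.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE IOV (TAG_NAME TEXT, SINCE INTEGER)")
con.execute("INSERT INTO IOV VALUES "
            "('TrackerSurfaceDeformations_v9_offline', 294034)")

# Move the IOV boundary: the same UPDATE you would type at the sqlite> prompt.
con.execute("UPDATE IOV SET SINCE = 1 WHERE SINCE = 294034")
print(con.execute("SELECT SINCE FROM IOV").fetchone()[0])  # -> 1
```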

All DB test config

MinBias -s GEN,SIM,DIGI,L1,DIGI2RAW,RAW2DIGI,L1Reco,RECO --evt_type MinBias_13TeV_pythia8_TuneCUETP8M1_cfi --conditions auto:phase1_2017_realistic --era Run2_2017 --geometry DB:Extended --fileout file:GENSIMRECO_MinBias.root --runUnscheduled -n 10

Options to run this:
cmsRun phase1/ saveRECO=1 useTemplates=0 useLocalLASim=1 useLocalLA=1 useLocalGenErr=1 useLocalTemplates=1 maxEvents=10000 noMagField=1 outputFileName=gen_phase1_91X_MC0T_10k.root 
cmsRun phase1/ saveRECO=1 useTemplates=1 useLocalLASim=1 useLocalLA=1 useLocalGenErr=1 useLocalTemplates=1 maxEvents=10000 noMagField=1 outputFileName=tem_phase1_91X_MC0T_10k.root

cmsRun phase1/ saveRECO=1 useTemplates=0 useLocalLASim=1 useLocalLA=1 useLocalGenErr=1 useLocalTemplates=1 maxEvents=10000 outputFileName=gen_phase1_91X_Data38T_10k.root
cmsRun phase1/ saveRECO=1 useTemplates=1 useLocalLASim=1 useLocalLA=1 useLocalGenErr=1 useLocalTemplates=1 maxEvents=10000 outputFileName=tem_phase1_91X_Data38T_10k.root

Template&GenErr DB resolutions


Geometrical coordinates are defined in:
Exceptions are to be defined from rawID:

Root notes

Standard things

TCanvas *c2_2 = new TCanvas("c2_2", "2_2",0,0,1355,523);

TH2D (const char *name, const char *title, 
      Int_t nbinsx, Double_t xlow, Double_t xup, 
      Int_t nbinsy, Double_t ylow, Double_t yup)

Custom canvas

TH2* h, std::string canname, 
      int gx = 0, int gy = 0,
      int histosize_x = 500, int histosize_y = 500,
      int mar_left = 80, int mar_right = 120,
      int mar_top = 20, int mar_bottom = 60,
      int title_align = 33, float title_y = 1.0, float title_x = 1.0,
      std::string draw="COLZ", bool norm=false, bool log=false
Some explanations:
  • title_align can be 11, 12, 13, 31, 32, 33 (upper right). For horizontal alignment the following convention applies: 1 = left adjusted, 2 = centered, 3 = right adjusted.
    For vertical alignment the following convention applies: 1 = bottom adjusted, 2 = centered, 3 = top adjusted.
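The two-digit code is just 10*horizontal + vertical; a small sketch of the convention described above:

```python
def decode_align(code):
    """Decode a ROOT text-alignment code: tens digit = horizontal,
    units digit = vertical (1/2/3 as listed above)."""
    horiz = {1: "left", 2: "centered", 3: "right"}
    vert = {1: "bottom", 2: "centered", 3: "top"}
    return horiz[code // 10], vert[code % 10]

print(decode_align(33))  # -> ('right', 'top'), the upper-right case above
```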

prelim_lat_(double xmin, double xmax, double ymin, double ymax, bool in, double scale = -1)
draw_lat_(250, 198.0, "Module X", 1, 0.04, 0.0, 21);

-- TamasVami - 2016-01-14

Topic attachments
  LALayer.C (9.7 K, 2017-07-21, TamasVami)
  SiPixelLorentzAngleTree_.C (1.5 K, 2017-07-21, TamasVami)
  SiPixelLorentzAngleTree_.h (7.2 K, 2017-07-21, TamasVami)
  tdrstyle.C (4.9 K, 2017-07-21, TamasVami)
Topic revision: r29 - 2020-02-21 - TamasVami