Disclaimer: sorry, this log is not always kept up to date
Pilot Blade Reconstruction
Link to the git repository
Log entry on 2016-12-05
We are on for a WBC scan in global during this fill. Will is at P5.
Procedure:
0) Get one red-recycle in before starting the running if we can. The FED error FIFOs are stuck and this should reset them. If not favored by the RC, we can still proceed.
1) Ask the DAQ shifter to pause the run at a new LS boundary and note the LS; DQM works better if the LS boundary is of the form N*10+1, e.g. the LS range 71-80 (see the sketch after this list)
2) use the pixel supervisor for the pilot
http://vmepc-s2b18-08-01.cms:1970/urn:xdaq-application:lid=51
and configure using a new Key from the list below
3) once reconfigured (~5-10 s), tell the DAQ shifter to resume
4) wait ~10 LS and repeat
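A trivial helper for picking the LS at which to resume, so that each scan point fills aligned 10-LS DQM bins; this is just a sketch, not part of the official procedure:

# Sketch: pick the next lumisection of the form N*10+1, so that a scan point
# resumed there covers an aligned 10-LS DQM range (e.g. LS 71-80).
def next_dqm_boundary(current_ls):
    """Return the first LS of the form N*10 + 1 strictly after current_ls."""
    return (current_ls // 10 + 1) * 10 + 1

print(next_dqm_boundary(67))  # -> 71, i.e. the scan point covers LS 71-80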
Log entry on 2016-11-22
From Karl:
http://cmsonline.cern.ch/cms-elog/960640
Indeed this is troubling. Looking into log files[1] from the run where the timing scan happened, I see pixel FEC programming errors reported during the pause/reconfigure/resume steps. For the fine timing scan we are not actually changing the WBC (on the ROCs), but the pause/resume steps do reprogram the ROC DACs in order to disable the ROCs and raise the thresholds during PAUSE, and to re-enable them during RESUME.
A possible explanation for the bad behavior we see is that for some ROCs the re-enable and/or threshold change did not happen as intended due to a transmission error. Either of these should affect the entire ROC; the DQM hit map looked consistent with this.
Many more of these errors happen for BmI than for BmO. We've attributed this to marginal signal quality on the FEC fibers to the pilot blades. Similar things happened during WBC scans.
A suggestion is to consider repeating the scan while watching the PixelFECSupervisor log file for errors (sketched below). We may be able to Pause/Resume again to get good programming. We could also try to bypass the disable during the timing scan; that would give three attempts to get the programming correct (Pause, Reconfigure, Resume) rather than one.
Once the uTCA FECs are ready for running (next week?) we will want to repeat the timing scan. We could possibly do it sooner with VME.
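A minimal sketch of what watching the PixelFECSupervisor log could look like; the matched keywords are an assumption, since the exact error strings are not quoted here:

import re
import sys

# Assumed keywords; adjust to the actual PixelFECSupervisor error messages.
ERROR_RE = re.compile(r"error|fail", re.IGNORECASE)

def scan_log(path):
    """Print every line of the supervisor log that matches the error pattern."""
    with open(path) as log:
        for number, line in enumerate(log, start=1):
            if ERROR_RE.search(line):
                print(f"{number}: {line.rstrip()}")

if __name__ == "__main__":
    scan_log(sys.argv[1])  # e.g. python scan_fec_log.py PixelFECSupervisor.log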
Log entry on 2016-11-14
htemp->GetXaxis()->SetRangeUser(-1.7, 1.7); // zoom the x axis of htemp (the temporary histogram from TTree::Draw) to [-1.7, 1.7]
Log entry on 2016-10-27
TCanvas *c2_2 = new TCanvas("c2_2", "2_2", 0, 0, 1355, 523); // canvas at (0,0), 1355x523 px
Log entry on 2016-10-10
wget https://raw.githubusercontent.com/tvami/cmssw80X-PilotBlade/PilotBladeStudy/DPGAnalysis/PilotBladeStudy/other/Makefile
wget https://raw.githubusercontent.com/tvami/cmssw80X-PilotBlade/PilotBladeStudy/DPGAnalysis/PilotBladeStudy/other/Makefile.arch
wget https://raw.githubusercontent.com/tvami/cmssw80X-PilotBlade/PilotBladeStudy/DPGAnalysis/PilotBladeStudy/other/PilotHistoMaker.C
wget https://raw.githubusercontent.com/tvami/cmssw80X-PilotBlade/PilotBladeStudy/DPGAnalysis/PilotBladeStudy/other/PilotHistoMaker.h
Log entry on 2016-10-09
Should ntuplize this if it makes sense (the geometry is probably not good at all)
/ExpressPhysics/tvami-PilotBlade_pp_data_RECO_Filtered_FEVT_ExpressPhysics-2016H_Runs282650_CMSSW8020_v1-657b52994e7bb8e0d2e0208b75894c9b/USER
Log entry on 2016-10-03
ToDo: copy the v2 to the 0RootFiles
Wait until 161004_204255:tvami_crab_PilotBlade_data_Ntuplizer_pp_Runs281602_CMSSW809_ZeroBias17_v1 is done and download it
Log entry on 2016-10-01
PBClustersMod_344130820
TCanvas *c4_2 = new TCanvas("c4_2", "4_2",252,47,468,180);
PBClustersMod_344131076
TCanvas *c2_2 = new TCanvas("c2_2", "2_2",252,47,468,180);
PBClustersMod_344132100
TCanvas *c3_2 = new TCanvas("c3_2", "3_2",252,47,468,180);
root All_PBClusterCharge.C
root All_PBClusterFEDErrType.C
root All_PBClusterFEDErrTypePerEvent.C
root All_PBClusterSize.C
mv All_PBClusterCharge.C Cs/.
mv All_PBClusterFEDErrType.C Cs/.
mv All_PBClusterFEDErrTypePerEvent.C Cs/.
mv All_PBClusterSize.C Cs/.
Log entry on 2016-09-13
# when creating the local maps, one can remove the stats box using:
PBClustersMod_344134148__10->SetStats(0);
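To do this for every histogram in a file rather than one by one, a PyROOT loop along these lines should work (the input file name is hypothetical):

import ROOT

f = ROOT.TFile.Open("PBClusters.root")  # hypothetical file holding the local maps
for key in f.GetListOfKeys():
    obj = key.ReadObj()
    if obj.InheritsFrom("TH1"):  # covers TH1/TH2, e.g. the PBClustersMod_* maps
        obj.SetStats(False)      # equivalent to SetStats(0) above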
Log entry on 2016-09-12
miniDAQ:
http://cmsonline.cern.ch/cms-elog/947901
MiniDAQ runs:
BmI modules had WBC=166.
Runs:
dataset: /MiniDaq/Run2016G-v1/RAW
LHC Fill 5287
280329,280331,280332,280333,280334,280335,280336,280337,280338,280339,280341,280342,280343,280346
Run 280329 w/ BmI_D3_B3_P2
Run 280331 w/ BmI_D3_B3_P2
Run 280332 w/ BmI_D3_B3_P2
Run 280333 w/ BmI_D3_B3_P2
Run 280334 w/ BmI_D3_B3_P2
Run 280335 w/ BmI_D3_B3_P2
Run 280336 w/ BmI_D3_B3_P2
Run 280337 w/ BmI_D3_B3_P2
Run 280338 w/ BmI_D3_B2_P2
Run 280339 w/ BmI_D3_B3_P1
Run 280341 w/ BmI_D3_B2_P1
Run 280342 w/ All 4 BmI modules
Run 280343 w/ All 4 BmI modules
Run 280346 w/ All 4 BmI modules
Log entry on 2016-09-11
Global Runs: 280383, 280384, 280385
BmO modules had WBC=167.
Output dataset: /ZeroBias/tvami-PilotBlade_pp_data_RECO_August_Filtered_RAW_ZeroBias-2016G_Runs280383-280385_v1-ffd4a2cc6f6b3c98894eea447cc8098f/USER
Log entry on 2016-09-02
Output dataset: /ZeroBias/tvami-PilotBlade_pp_data_RECO_August_Filtered_RAW_ZeroBias-2016G_Runs279853-279865_v1-ffd4a2cc6f6b3c98894eea447cc8098f/USER
Log entry on 2016-09-01
New global runs: 279853-279865
WBC moved by 3 BX for the BmI modules and by 4 BX for the BmO modules.
See Bora's talk, page 34.
Log entry on 2016-08-27
Working on the plot script.
3 scenarios on crab:
/data/vami/projects/pilotBlade/pp2016ReProcessing_v3/CMSSW_8_0_8/src/crab/Ntuple/August-2/crab_PilotBlade_data_Ntuplizer_pp_August_Runs279071-279073-aligned_v1
/data/vami/projects/pilotBlade/pp2016ReProcessing_v3/CMSSW_8_0_8/src/crab/Ntuple/August-2/crab_PilotBlade_data_Ntuplizer_pp_August_Runs279071-279073-aligned_fiducial_v1
/data/vami/projects/pilotBlade/pp2016ReProcessing_v3/CMSSW_8_0_8/src/crab/Ntuple/August-3/crab_PilotBlade_data_Ntuplizer_pp_August_Runs279071-279073-fiducial_v1
Log entry on 2016-08-26
The cluster sizes depend upon: the pixel dimensions [actually the same as before, 100 um in local x and 150 um in local y, because the local x-y conventions were changed, which also messed up the reconstruction code], the pixel orientation [low-pT tracks now curve mostly in x instead of y], the distribution of the local angles cot(alpha) and cot(beta) [completely different due to the orientation and the different z position], the thresholds [?], and the Lorentz angle [is the pilot blade tilted at 20 deg and operated at 300 V? if not, it is probably different too]. When Viktor fixes the geometrical description of the Phase I FPix [wrong in many, many details], there will be a simulation to compare with.
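As a rough illustration of the geometry dependence, the projected track path plus the Lorentz drift gives an expected cluster size; the thickness and Lorentz shift below are assumed values, not measured pilot-blade numbers:

# Back-of-the-envelope cluster size along local x.
THICKNESS_UM = 285.0  # assumed sensor thickness
PITCH_X_UM = 100.0    # pixel pitch along local x (150 um along local y)

def expected_size_x(cot_alpha, lorentz_shift_um=0.0):
    """Geometric estimate of the cluster size (in pixels) along local x."""
    projected = abs(THICKNESS_UM * cot_alpha + lorentz_shift_um)
    return 1.0 + projected / PITCH_X_UM

print(expected_size_x(0.4))  # a track with cot(alpha)=0.4 spans ~2.1 pixels in x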
Log entry on 2016-08-24
Talk on Pixel Offline
https://tvami.web.cern.ch/tvami/presentations/20160824_PilotBladeAugustRuns_PixelOffline.pdf
Log entry on 2016-08-23
Ntuplizing + plotting
Log entry on 2016-08-22
RECOing the global runs
Log entry on 2016-08-21
Global run: 279045
Global run: 279071, 279072, 279073
Run 278971 - Data Transfer ON
LHC Fill 5211
w/ 3 modules
Duration: ~4h30min
Run 278972 - Data Transfer ON
LHC Fill 5211
w/ 5 modules
Duration: ~1h45min
Run 278880 - ~4 min run. No errors from ch 3, 4, 7, 8. ~10 TOs (timeouts) from ch 33, 34.
Run 278881 - 43K events
Run 278882 - 34K events
Run 278883 - 5.5K events
Run 278884 - 20K events
Run 278885 - 35 LS -> 60M events - 1 TO from ch 7, 8 and 23 TOs from ch 33, 34
Log entry on 2016-08-15 (holidays)
Key 2042 - fedcard/28 - Master delay:2 - 48 clock cycles.
Key 2043 - fedcard/29 - Master delay:3 - 64 clock cycles.
Key 2044 - fedcard/30 - Master delay:0 - 0 clock cycles.
/////////////////////////////////////////////////////////////////////////
Runs w/ Key 2042
LHC: stable beams - L1A~94kHz
Run 278355 - duration:~1min
Run 278356 - 150K events
Run 278357 - 87K events
Run 278358 - 6K events
Run 278360 - duration:~40s
For a quick offline analysis Run 278358 should be used.
For a full LS analysis either Run 278355 or Run 278360 should be used.
/////////////////////////////////////////////////////////////////////////
Runs w/ Key 2043
LHC: stable beams - L1A~94kHz
Run 278361 - duration: ~6min
Run 278362 - 42K events
Run 278363 - 39K events
Run 278364 - 51K events
Run 278367 - duration: ~3-4min
Run 278368 - duration: ~3-4min
For a quick offline analysis Runs 278362, 278363 or 278364 should be used.
For a full LS analysis either Run 278361, 278367 or 278368 should be used.
///////////////////////////////////////////////////////////////////////////
Run w/ Key 2044
LHC: stable beams - L1A~82kHz
Run 278370 - 100K events - 3 OOS (out-of-sync errors)
278372 1 ~800K
278373 10 ~1.9M
278374 20 ~2.0M
278375 30 ~3.3M
278376 40 ~2.4M
firmware from August 3rd
Run 278772 -> 10 LS
Run 278773 -> 5.5K events
Run 278774 -> 41K events
Run 278775 -> 40K events
Run 278776 -> 12 LS
firmware from August 8th
Run 278777 -> 8 LS
Run 278778 -> 45K events
Run 278779 -> 300K events
Run 278780 -> 38K events
Run 278781 -> 6K events
Run 278782 -> 10 LS
firmware from August 9th
Run 278783 -> 20 LS
Run 278784 -> 6K events
Run 278785 -> 100K events
Run 278786 -> 26K events
Run 278787 -> 6K events
Run 278788 -> 24 LS
Log entry on 2016-08-06
miniDAQ run numbers 278242-278245
Log entry on 2016-07-28
miniDAQ RECO... there is no disk replica, so I have to do it interactively
Quality study meanwhile
Scenario 1)-2)
/ZeroBias/tvami-pp_data_RECO_275891-275914_v2-6c4af66f622f9a43efa69f953d3091db/USER
Scenario 3)
Log entry on 2016-07-21
Dataset: /MiniDaq/Run2016E-v2/RAW
Run 277150
- 3 modules in
BmI_D3_BLD3_PNL2 (ch 33, 34)
BmO_D3_BLD10_PNL1 (ch 7, 8)
BmO_D3_BLD11_PNL2 (ch 3, 4)
- LHC Stable Beams w/ L1~85kHz
TOs followed by ENEs (event number errors) from ch 3, 4, 8, 33, 34.
For the first 20 min: 1 OOS in 3 min.
For the next 5 min: 1 OOS in less than a min.
Stopped the run after 25 min.
I didn't notice any change in the LHC/CMS conditions in the last 5 min.
Run 277151 & 277152
- 5 modules in (ROC0 from ch27 is masked)
BmI_D3_BLD2_PNL2 (ch 27, 28)
BmI_D3_BLD3_PNL1 (ch 31, 32)
BmI_D3_BLD3_PNL2 (ch 33, 34)
BmO_D3_BLD10_PNL1 (ch 7, 8)
BmO_D3_BLD11_PNL2 (ch 3, 4)
- LHC Stable Beams w/ L1~85kHz
Same as before: TOs from ch 27 in every event. The run gets blocked after ~200 triggers.
At the beginning of each run we issue TBM resets for each channel and should see them in the error dumps, but I never see a TBM reset for ch 27.
Run 277153
- 1 channel in
BmO_D3_BLD10_PNL1 (ch 7)
- LHC Stable Beams w/ L1~85kHz
1 OOS in a 1-hour run. 1 TO, some ENEs.
A lot of NOR errors; the rest are PKAMs.
Log entry on 2016-07-21
New runs with T0 transfer ON.
Dataset: /MiniDaq/Run2016E-v2/RAW
Run 277108
- 2 modules in
BmI_D3_BLD3_PNL2 (ch 33, 34)
BmO_D3_BLD10_PNL1 (ch 7, 8)
- w/ collisions L1~65kHz
This was a long run (>3h). The error dumps were no different from those of Run 277107, which had the same configuration.
Run 277109
- 1 channel in
BmO_D3_BLD11_PNL2 (ch 3)
- w/ collisions L1~65kHz
TO -> burst of ENEs -> OOS
Run 277110
- 1 channel in
BmI_D3_BLD3_PNL1 (ch 31)
- w/ collisions L1~65kHz
TO -> burst of ENEs -> OOS
Run 277111
- 1 channel in
BmI_D3_BLD3_PNL2 (ch 33)
- w/ collisions L1~65kHz
TO -> burst of ENEs -> OOS
No luck running with only one channel for these modules.
Log entry on 2016-07-12
Timing scan 1&2 Ntuplizer
The resulting ntuples are located here:
/data/vami/projects/pilotBlade/0RootFiles/Ntuple/TimingScan1onZeroBias/*.root
/data/vami/projects/pilotBlade/0RootFiles/Ntuple/TimingScan2onZeroBias/*.root
Log entry on 2016-07-11
If I download now, I kill the work of Marton, Janos and Adam... so I'm working on the Phase 1 quality problem instead
Log entry on 2016-07-10 (Sunday)
Timing scan 1 Ntuplizer
root://cms-xrd-global.cern.ch//store/user/tvami/PBTimingScans/First/*
Log entry on 2016-07-09 (Saturday)
Timing scan 1 Ntuplizer
on dataset:
/ZeroBias1/tvami-PilotBlade_pp_data_RECO_FirstScan_Filtered_RAW_ZB1-2016A_v1-1aa7bafc2c84e866fe85ed4ed7a53e27/USER
/ZeroBias1/tvami-PilotBlade_pp_data_RECO_FirstScan_Filtered_RAW_ZB1-2016B_v1-1aa7bafc2c84e866fe85ed4ed7a53e27/USER
/ZeroBias2/tvami-PilotBlade_pp_data_RECO_FirstScan_Filtered_RAW_ZeroBias2-2016A_v1-1aa7bafc2c84e866fe85ed4ed7a53e27/USER
/ZeroBias2/tvami-PilotBlade_pp_data_RECO_FirstScan_Filtered_RAW_ZeroBias2-2016B_v1-1aa7bafc2c84e866fe85ed4ed7a53e27/USER
/ZeroBias3/tvami-PilotBlade_pp_data_RECO_FirstScan_Filtered_RAW_ZeroBias3-2016A_v1-1aa7bafc2c84e866fe85ed4ed7a53e27/USER
/ZeroBias3/tvami-PilotBlade_pp_data_RECO_FirstScan_Filtered_RAW_ZeroBias3-2016B_v1-1aa7bafc2c84e866fe85ed4ed7a53e27/USER
/ZeroBias4/tvami-PilotBlade_pp_data_RECO_FirstScan_Filtered_RAW_ZeroBias4-2016B_v1-1aa7bafc2c84e866fe85ed4ed7a53e27/USER
/ZeroBias5/tvami-PilotBlade_pp_data_RECO_FirstScan_Filtered_RAW_ZeroBias5-2016A_v1-1aa7bafc2c84e866fe85ed4ed7a53e27/USER
/ZeroBias5/tvami-PilotBlade_pp_data_RECO_FirstScan_Filtered_RAW_ZeroBias5-2016B_v1-1aa7bafc2c84e866fe85ed4ed7a53e27/USER
/ZeroBias6/tvami-PilotBlade_pp_data_RECO_FirstScan_Filtered_RAW_ZeroBias6-2016A_v1-1aa7bafc2c84e866fe85ed4ed7a53e27/USER
/ZeroBias6/tvami-PilotBlade_pp_data_RECO_FirstScan_Filtered_RAW_ZeroBias6-2016B_v1-1aa7bafc2c84e866fe85ed4ed7a53e27/USER
/ZeroBias7/tvami-PilotBlade_pp_data_RECO_FirstScan_Filtered_RAW_ZeroBias7-2016A_v1-1aa7bafc2c84e866fe85ed4ed7a53e27/USER
/ZeroBias7/tvami-PilotBlade_pp_data_RECO_FirstScan_Filtered_RAW_ZeroBias7-2016B_v1-1aa7bafc2c84e866fe85ed4ed7a53e27/USER
/ZeroBias8/tvami-PilotBlade_pp_data_RECO_FirstScan_Filtered_RAW_ZeroBias8-2016A_v1-1aa7bafc2c84e866fe85ed4ed7a53e27/USER
/ZeroBias8/tvami-PilotBlade_pp_data_RECO_FirstScan_Filtered_RAW_ZeroBias8-2016B_v1-1aa7bafc2c84e866fe85ed4ed7a53e27/USER
Log entry on 2016-07-07
Working in folder pp2016ReProcessing_v3
RECO of all the RAW data, but only the Z-minus side is reconstructed
/data/vami/projects/pilotBlade/pp2016ReProcessing_v3/CMSSW_8_0_8/src/crab/RECO/
Timing scan 2 Ntuplizer
The ntuple files are located here:
root://cms-xrd-global.cern.ch//store/user/tvami/PBTimingScans/Second/*
and
/data/vami/projects/pilotBlade/0RootFiles/Ntuple/TimingScan2onZeroBias/
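The remote copies above can be opened directly over xrootd, e.g. from PyROOT; the concrete file name below is hypothetical:

import ROOT

# Open one ntuple over xrootd; list the directory first with e.g.
# `xrdfs cms-xrd-global.cern.ch ls /store/user/tvami/PBTimingScans/Second`.
f = ROOT.TFile.Open(
    "root://cms-xrd-global.cern.ch//store/user/tvami/PBTimingScans/Second/Ntuple_1.root")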
Log entry on 2016-07-08
Timing scan 1: config.Data.runRange = '271087-272008' (921 runs, in Run2016A-v1 and Run2016B-v1)
Timing scan 2: config.Data.runRange = '274000-274954' (954 runs, in Run2016B-v2)
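For reference, a minimal CRAB3 fragment using these run ranges might look like the sketch below; only the runRange value and the dataset names come from this log, the splitting settings are illustrative assumptions:

from CRABClient.UserUtilities import config

config = config()
config.Data.inputDataset = '/ZeroBias1/Run2016A-v1/RAW'  # one of the datasets listed below
config.Data.runRange = '271087-272008'                   # timing scan 1
config.Data.splitting = 'LumiBased'                      # assumed splitting mode
config.Data.unitsPerJob = 50                             # assumed job size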
Timing scan 1 RECO
on datasets:
/ZeroBias1/Run2016A-v1/RAW
/ZeroBias2/Run2016A-v1/RAW
/ZeroBias3/Run2016A-v1/RAW
/ZeroBias4/Run2016A-v1/RAW
/ZeroBias5/Run2016A-v1/RAW
/ZeroBias6/Run2016A-v1/RAW
/ZeroBias7/Run2016A-v1/RAW
/ZeroBias8/Run2016A-v1/RAW
/ZeroBias1/Run2016B-v1/RAW
/ZeroBias2/Run2016B-v1/RAW
/ZeroBias3/Run2016B-v1/RAW
/ZeroBias4/Run2016B-v1/RAW
/ZeroBias5/Run2016B-v1/RAW
/ZeroBias6/Run2016B-v1/RAW
/ZeroBias7/Run2016B-v1/RAW
/ZeroBias8/Run2016B-v1/RAW