
1) The document outlines the aims and results of an ST analysis meeting, including performance monitoring of the detector and the code release procedure. 2) Key results: many problems were solved but new ones keep occurring; masked channels and bad Beetles were identified; and pseudo-header thresholds were tuned. 3) Performance over time is tracked with the STPerformanceMonitor tool, and regular problem detection by the ST shift crew was suggested for the future.


ST Analysis: Introduction

M. Needham
EPFL

Outline
Aims of the meeting
Releasing code
Performance Monitoring
Results (IT and TT):
Active fraction, noise rates
Problems
Header thresholds

Aims
To get an overview of how well the detector is working
Communicate information/known problems
Focus on analysis part
Discuss/understand plots
This meeting is dedicated to setting thresholds/Tell1 issues
What we are doing now
Future meetings driven by needs as we get data, e.g.:
Alignment
Understanding the signal (S/N, cluster shapes)

Releasing Code
The following is the official release procedure:
All code should be tested and run before release
Test for floating-point exceptions (FPEs)
Document the code with Doxygen/JavaDoc comments
Update the release notes
Provide python support:
Update dictionaries
Test that you can instantiate and use all code in a python script
Tag the code and add to tag collector
https://twiki.cern.ch/twiki/bin/view/LHCb/CVSUsageGuidelines

Send email to lhcb-st-soft (and maybe also lhcb-project-trackingsoft)
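The "test that you can instantiate and use all code in a python script" step could be sketched generically. Everything below (the module walk, default construction) is an illustrative assumption, not the official LHCb release test:

```python
import inspect

def check_instantiable(module):
    """Try to default-construct every public class exported by a module.

    Returns two lists of class names: those that could be instantiated,
    and those that raised (missing dictionary, required arguments, ...).
    """
    ok, failed = [], []
    for name, cls in inspect.getmembers(module, inspect.isclass):
        if name.startswith("_"):
            continue
        try:
            cls()  # construct it, as a python user of the bindings would
            ok.append(name)
        except Exception:
            failed.append(name)
    return ok, failed
```

Running this over each released module and failing the release when `failed` is non-empty would catch missing or broken bindings before they reach lhcb-soft-talk.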

Releasing Code
If you don't follow the official procedure
Expect to be named and shamed on lhcb-soft-talk
The python support: a Catch-22
We are not obliged to provide tested python bindings for all code
But python people expect to be able to call all C++ code from python
Only way to avoid emails to lhcb-soft-talk:
Provide python bindings/dicts for all code
Test that they actually work [using the class in python is the only way]

"That's some catch, that catch-22," he observed.


"It's the best there is," Doc Daneeka agreed.

Releasing Code
It's a pain, and Python has many problems:
Syntax changes very often [e.g. see the material scan script]
Many bindings are missing
Debugging problems is difficult for non-super-gurus
See the many emails on lhcb-soft-talk
Remark:
jobOptions have moved to python: we all have to use python now

Monitoring Performance
Perform basic checks on all data we take:
Run ZS, NZS, error decoding (already give lots of info)
Basic performance checks: STClusterMonitor, STPerformanceMonitor
See next slides
Keep track of all problems we have
For now, just a web page for each run I look at:
http://lphe.epfl.ch/~mneedham/problems/index.php
Future? Savannah, something else?

Monitoring Performance
STPerformanceMonitor:
Simple representation of where we are
Simple C++ algorithm
For now, two numbers (each versus time):
Fraction of the detector that is active (i.e. could give a cluster):
the fraction of the detector where a bank was found and decoded,
and not flagged as dead in the DetectorElement
Occupancy [= noise rate for now]
Possible extension: plot showing this per detector element
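The two numbers above could be computed along these lines. The per-sector record (bank_decoded, flagged_dead, strip counts) is an assumed, simplified stand-in for the real decoding status and DetectorElement flags, not the actual STPerformanceMonitor code:

```python
def active_fraction(sectors):
    """Fraction of the detector that could give a cluster: a bank was
    found and decoded AND the sector is not flagged dead."""
    if not sectors:
        return 0.0
    active = sum(1 for s in sectors
                 if s["bank_decoded"] and not s["flagged_dead"])
    return active / len(sectors)

def occupancy(sectors):
    """Fired strips over total strips; with no beam this is the noise rate."""
    total = sum(s["n_strips"] for s in sectors)
    fired = sum(s["n_fired"] for s in sectors)
    return fired / total if total else 0.0
```

Filling one entry of each number per run would give exactly the "two numbers versus time" history the slide describes.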

Results: IT
(Plot annotations: new firmware errors properly handled; IT3C not configuring; Vfs 1000; Vfs 400; Vfs 400 with lower threshold, S/N ~ 4.5; firmware bug)

Results: IT Problems
Loss of sync in source ID 76, pp 2, Beetle 2 in some runs
Sometimes IT3C is missing as it doesn't configure
Source ID 41: pseudo-header errors at low rate
Solve by tuning the discrimination thresholds
Invalid + duplicate clusters: seems solved?
At least in ~100k events

Results: IT
(Plot annotations: simple C++ algorithm; forbidden region 112-144, guess; masked channels; forbidden region 112-137, guess)
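One reading of the "forbidden region" annotations is a simple window check on header/threshold values. The bounds below are the guessed 112-137 quoted on the slide, and the function is purely illustrative, not code from the analysis:

```python
# Bounds guessed from the slide annotations (marked "(guess)" there too).
FORBIDDEN_LO, FORBIDDEN_HI = 112, 137

def outside_forbidden_region(value, lo=FORBIDDEN_LO, hi=FORBIDDEN_HI):
    """True if a pseudo-header value avoids the forbidden window [lo, hi]."""
    return value < lo or value > hi
```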

Results: TT
(Plot annotations: bad ports/Beetles masked; new firmware errors properly handled; many PCN errors?; HV off; new firmware bug)

Results: TT Problems
Source ID 106, sync RAM full: pp 3, Beetle 5
Source ID 32, TLK link loss: pp 2, Beetle 5
Source ID 42, TLK link loss: pp 2, Beetle 4
Source IDs 2, 3, 68: some Beetles out of sync
Invalid + duplicate clusters: solved or not?
Some may be fixed by the digitizer board replacement this week

(Plot annotations, run 28790: 96.5%; 92.9%; lost boards; 3, 32, 42 are the problem boards)

Results: TT
(Plot annotations: masked channels; bad Beetles in source IDs 32, 42, 106; tuned to 112 and 137, cf. IT; a little low? (IT has some ports this low); a little high (IT has some ports this high))

Summary
STPerformanceMonitor is a useful problem-detecting tool
Tests described here could be run by the ST shift crew?
Many problems solved, many new problems occurring
History of performance versus time
Pseudo-header thresholds tuned
Lots of useful information in the Beetle headers
e.g. masked and problem channels
