(draft) BEMC status/ped tables
[[Note: this is a draft of the documentation for generating the BEMC status/ped tables. It will be posted to the BEMC Drupal area when I have finished editing it.]]
Directions for generating the status/pedestal tables for the BEMC
You can take a look at the summary webpage for the 2012 pp500 run to get an idea of what you will be doing: http://online.star.bnl.gov/emcStatus2012/pp500/ . If you click on any of the links that say "BTOW" you will get a PDF document showing the ADC spectrum for every tower for that fill. You will be generating these spectra and then checking whether each spectrum looks "normal" - i.e. whether the tower is functioning properly or not. You will see that some histograms are drawn in red; those towers were marked bad for some reason. You will also be calculating the location of the pedestal, which is the peak at the far left of the spectrum. (Looking at one of the heavy-ion runs might give you a better idea of what "bad" towers look like, since those histograms have more counts in them... try looking at: http://online.star.bnl.gov/emcStatus2012/UU193/ )
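To make "location of the pedestal" concrete, here is a minimal sketch of finding the pedestal peak in a tower's ADC spectrum. This is not the actual l2status.py logic; the input format (a plain list of per-ADC-bin counts) and the search window are my assumptions for illustration:

```python
def find_pedestal(counts, search_max=50):
    """Locate the pedestal as the highest bin in the low-ADC region.

    counts: per-ADC-bin hit counts for one tower (a hypothetical input
    format, not the actual l2status.py internals). Only ADC values
    below search_max are scanned, since the pedestal sits at the far
    left of the spectrum.
    """
    window = counts[:search_max]
    if sum(window) == 0:
        return None  # empty spectrum -- tower is likely dead
    return window.index(max(window))

# toy spectrum: pedestal bump peaking at ADC=8, then a sparse physics tail
adc = [0] * 300
adc[5:12] = [10, 40, 120, 300, 150, 60, 15]
for i in range(30, 300):
    adc[i] = 1
print(find_pedestal(adc))  # -> 8
```

A real fit would use the peak's width as well (a wide pedestal is itself a bad-channel signature), but the peak position is the number that goes into the pedestal tables.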
The code that performs these calculations is located on onl05.starp.bnl.gov here: /ldaphome/onlmon/emcstatus2013/pp500.l2status.py
All you need to do to execute the code is:
source star_env
python l2status.py
You can monitor the progress of the script by looking at l2status.log (I usually just open another terminal and run 'tail -n 50 l2status.log' periodically, or something like that). As the script progresses you will see the summary PDF files posted to the webpage http://online.star.bnl.gov/emcStatus/pp500/ ; the status and pedestal tables will be written as text files into db/bemc and db/eemc; the actual ROOT histograms will be written into the histoFiles directory; and the results will be written into the database file l2status2013.sqlite3. The statuses and pedestals should be generated once for every good fill.
If you want to clear everything and start fresh: clean out the folders db/bemc/ and db/eemc/, remove the log file, remove runList.list, and run 'cp empty.sqlite3 l2status2013.sqlite3'.
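If you find yourself resetting often, those cleanup steps could be scripted. This is just a convenience sketch assuming the directory layout described above, not part of the actual tools:

```python
import glob
import os
import shutil

def start_fresh(base="."):
    """Reset the working area so l2status.py starts from scratch.

    Follows the manual reset directions: empty db/bemc/ and db/eemc/,
    remove the log file and runList.list, and restore a blank database
    from empty.sqlite3. Paths are assumptions; adjust if yours differ.
    """
    # clear out the status/pedestal table directories
    for d in ("db/bemc", "db/eemc"):
        for f in glob.glob(os.path.join(base, d, "*")):
            os.remove(f)
    # remove the log file and the run list
    for f in ("l2status.log", "runList.list"):
        path = os.path.join(base, f)
        if os.path.exists(path):
            os.remove(path)
    # overwrite the working database with the empty template
    shutil.copyfile(os.path.join(base, "empty.sqlite3"),
                    os.path.join(base, "l2status2013.sqlite3"))
```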
In a perfect world you would run the code over the entire dataset once and have the status tables, which are then uploaded to the STAR database. However, it's usually not that simple. There are often problems with a handful of channels that aren't caught by the status-checking code, or some that are flagged as bad but shouldn't be. My suggestion would be to run the code over all the files first, and then we can use the information in the PDFs and in l2status2013.sqlite3 to isolate problem channels. You may need to tweak some of the parameters in the code to improve the status tables, and you may have to hard-code a bad status for some channels for a period of time.
So as a first step, let l2status.py run for a while (it will take time; I often run it in a screen session so that I don't have to keep a terminal open). We would like to look at all the runs from pp500, which means you can kill it (ctrl+c) around Run 14183023 or so. You're welcome to stop it early if you would like to look at things. When you start it running again it should just pick up (approximately) where it left off.
The code only computes status/pedestal tables if there are enough hits to get good-quality calibrations: in a given fill, the median number of hits above pedestal must exceed a threshold (at the moment the threshold is set to 25, which is quite low, so we may want to change this parameter). When this happens you will see messages in the log file like
2012-06-07 00:26:36 PID: 21394 INFO medianHits = 635
2012-06-07 00:26:36 PID: 21394 INFO begin status computation
2012-06-07 00:28:05 PID: 21394 INFO end status computation -- found 122 bad channels
2012-06-07 00:28:05 PID: 21394 INFO begin endcap status computation
2012-06-07 00:28:05 PID: 21394 INFO 04TB07 status=136 nonzerohists=22
2012-06-07 00:28:05 PID: 21394 INFO 06TA07 status=136 nonzerohists=18
2012-06-07 00:28:05 PID: 21394 INFO 08TB07 status=136 nonzerohists=21
2012-06-07 00:28:06 PID: 21394 INFO 11TA12 status=0 nonzerohists=60
2012-06-07 00:28:06 PID: 21394 INFO end status computation -- found 11 bad endcap channels
2012-06-07 00:28:10 PID: 21394 INFO current state has been saved to disk
2012-06-07 00:28:10 PID: 21394 INFO creating PostScript file
2012-06-07 00:29:09 PID: 21394 INFO calling pstopdf
2012-06-07 00:29:42 PID: 21394 INFO removing ps file
2012-06-07 00:29:43 PID: 21394 INFO creating endcap PostScript file
2012-06-07 00:29:52 PID: 21394 INFO calling pstopdf
2012-06-07 00:29:57 PID: 21394 INFO removing ps file
2012-06-07 00:29:58 PID: 21394 INFO Finished writing details webpage for F16732_R13108069_R13108080
2012-06-07 00:30:00 PID: 21394 INFO goodnight -- going to sleep now
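The gate described above (compute statuses only when the median hit count clears the threshold) boils down to something like the following sketch. The real check lives inside l2status.py; the input format here is hypothetical:

```python
from statistics import median

def enough_statistics(hits_per_tower, threshold=25):
    """Decide whether a fill has enough hits to compute statuses.

    hits_per_tower: above-pedestal hit counts, one entry per tower
    (a hypothetical input format). Gating on the median means a few
    dead or hot towers do not bias the decision for the whole fill.
    Returns (passes, medianHits) -- the medianHits value is what gets
    logged, e.g. "medianHits = 635" above.
    """
    m = median(hits_per_tower)
    return m >= threshold, m
```

Using the median rather than the mean is what makes this robust: a single hot tower with millions of counts would drag the mean up and let an otherwise-empty fill through.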
Once the status/ped tables are generated, the way I do the QA is in two parts:
1) I spot check the pdf files by eye. I pick about 5 or 6 fills evenly spaced throughout the run and go through each page of the pdf files looking for any strange-looking towers (for example, towers with stuck bits are pretty easy to see, and they don't always get caught by the algorithm). Yes, this takes a while, but we don't have to look at the pdfs for every fill!
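Picking "5 or 6 fills evenly spaced throughout the run" can be done by hand, but if the fill list is long, a small helper saves fiddling (the list format is whatever you have on hand; this is just a convenience sketch):

```python
def pick_spotcheck_fills(fills, n=6):
    """Choose n fills evenly spaced through the run for PDF spot checks.

    fills: chronologically ordered fill numbers. Always includes the
    first and last fill so the endpoints of the run get looked at.
    """
    if len(fills) <= n:
        return list(fills)
    step = (len(fills) - 1) / float(n - 1)
    return [fills[int(round(i * step))] for i in range(n)]
```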
For examples of the bad channels you are looking for, have a look at Suvarna's nice QA of the tables last year:
https://drupal.star.bnl.gov/STAR/blog/sra233/2012/dec/05/bemc-statuscuau200hard-coded-towers
https://drupal.star.bnl.gov/STAR/blog/sra233/2012/aug/15/bemc-statuspp200towers-with-status-marked-oscillating
2) The status and pedestal information is stored in the sqlite database file, and we can spin over this quickly in order to look at the data over many runs/fills. I have attached a script, statusCheckfill2.py, which is used to analyze the database file l2status2013.sqlite3. The first step is to download l2status2013.sqlite3 somewhere *locally* where you can look at it, and to save statusCheckfill2.py in the same place. In statusCheckfill2.py you should change line 27 to the run range you are analyzing, and change 2012-->2013 if necessary. Then all you need to do to run the code is something like:
setenv PYTHONPATH $ROOTSYS/lib
python statusCheckfill2.py
This Python code allows you to look at the statuses and pedestals from run to run. At the moment I make lists of the statuses and pedestals for each tower (in the variables u and y -- sorry for my horrible naming scheme!), and then I can print these lists to the screen or graph them. Right now the code prints the statuses for any tower that has status 1 some, but not all, of the time (line 138). This script can be used to find channels which change status frequently over the course of the run. For example, sometimes there are channels which sit right on the edge of the criteria for being marked "cold", so their status alternates 1 18 1 1 18 18 1 1 18 1 18 etc. We can then look at the PDF files to see if the tower always looks cold, or if its behavior really does change frequently. If it is truly cold, we can either adjust the criteria for being marked as cold, or hard-code the channel as status 18 (I typically just hard-code it).
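The same "status 1 some, but not all, of the time" query can be done directly against the sqlite file. The sketch below is not statusCheckfill2.py; in particular, the table and column names (bemcStatus, fill, softId, status) are guesses at the layout of l2status2013.sqlite3 -- check the actual schema with '.schema' in the sqlite3 shell and adjust:

```python
import sqlite3
from collections import defaultdict

def flaky_towers(dbfile, table="bemcStatus"):
    """Find towers whose status is 1 for some fills but not all.

    Returns {softId: [status, status, ...]} in fill order, keeping
    only towers that flip between good (1) and something else.
    NOTE: the table/column names are hypothetical -- verify them
    against the real l2status2013.sqlite3 schema before use.
    """
    conn = sqlite3.connect(dbfile)
    history = defaultdict(list)
    for fill, soft_id, status in conn.execute(
            "SELECT fill, softId, status FROM %s ORDER BY fill" % table):
        history[soft_id].append(status)
    conn.close()
    return {sid: hist for sid, hist in history.items()
            if any(s == 1 for s in hist) and not all(s == 1 for s in hist)}
```

A tower returned with a history like [1, 18, 1, 18, 18] is exactly the oscillating-cold case discussed above.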
I can also look at the histograms histoPedRatio and histoPedRatioGood (which are saved out in histogramBEMCfill.root), which plot the ratio of the pedestal for a given run to the pedestal of the first run as a function of tower ID. histoPedRatio is filled for every tower in every run (unless the pedestal for that tower in the first run is zero), and histoPedRatioGood is filled only if the tower's status is 1. I haven't needed to cut out any towers based on these plots, but I think they would be a good way to find towers whose pedestals fluctuate wildly over time (or maybe you would want to plot the difference, not the ratio).
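The quantity behind those histograms is simple; here is a sketch of the same calculation in plain Python, with the pedestal data assumed to be a {run: {softId: pedestal}} mapping (my invention for illustration, not the ROOT-file layout):

```python
def ped_ratio(peds_by_run, first_run):
    """Ratio of each run's pedestal to the first run's, per tower.

    peds_by_run: {run: {softId: pedestal}} -- a hypothetical structure
    mirroring what histoPedRatio encodes. Towers whose first-run
    pedestal is zero are skipped, as in the histogram filling.
    Returns {softId: {run: ratio}}.
    """
    ref = peds_by_run[first_run]
    ratios = {}
    for run, peds in peds_by_run.items():
        for sid, ped in peds.items():
            if ref.get(sid, 0) != 0:
                ratios.setdefault(sid, {})[run] = ped / float(ref[sid])
    return ratios
```

Switching to differences instead of ratios, as suggested above, would just mean replacing the division with a subtraction.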
So take a look at the code and play around with it, so that you can do the QA that you think is best.
If you look at l2status.py, you can see where I've hard-coded a bunch of channels I thought were bad (the lines which hard-code bad channels are commented out right now because I prefer to start from scratch each year). I've pasted the code here too:
## hard code a few bad/hot channels
#if int(tower.softId) == 939: ##hard code hot channel
#    tower.status |= 2
#if int(tower.softId) in (3481, 3737): ##hard code wide ped
#    tower.status |= 36
#if int(tower.softId) in (220, 2415, 1612, 4059): ##hard code stuck bit
#    tower.status |= 72
if int(tower.softId) in (671, 1612, 2415, 4059): ##hard code stuck bit (not sure which bit, or stuck on/off)
    tower.status |= 8
if (int(self.currentFill) > 16664 and
        int(tower.softId) in (1957, 1958, 1977, 1978, 1979, 1980,
                              1997, 1998, 1999, 2000,
                              2017, 2018, 2019, 2020)): ##hard code stuck bit
    tower.status |= 8
if int(tower.softId) in (410, 504, 939, 1221, 1409, 1567, 2092): ##hard code cold channel
    tower.status |= 18
if int(tower.softId) in (875, 2305, 2822, 3668, 629, 2969, 4006): ##either cold or otherwise looks weird
    tower.status |= 18
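Note that the status values assigned above are bitmasks rather than plain codes (e.g. 18 = 16|2, and the 136 seen in the endcap log lines = 128|8). This small helper just lists which bits are set in a status word; I am not asserting the official meaning of each bit here, so check the BEMC status-table conventions for that:

```python
def decode_status(status):
    """List the bits set in a tower's status word.

    The status is a bitmask combined with |=, so a single tower can
    carry several problem flags at once. This only reports which bits
    are set; their official definitions come from the BEMC status-table
    conventions, not from this sketch.
    """
    return [1 << b for b in range(8) if status & (1 << b)]

# e.g. the "cold" status used above:
print(decode_status(18))  # -> [2, 16]
```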
Some of these channels are persistently problematic, so I expect that your list of bad channels will look similar to mine from previous years.
It may take a few iterations of generating the tables, finding bad towers, tweaking the code or hard-coding the bad channels, regenerating the tables, etc., before you are satisfied with the quality of the tables. Once you have run l2status.py one last time and are happy with the tables you have generated, they are ready to be uploaded to the database by the software coordinator.
- aohlson's blog