[SDU] How to start a STAR data analysis -- software

This page is an introduction for beginners analyzing STAR data. 

The goal is to demonstrate how to create an analyzer and submit jobs with the STAR scheduler.

The page provides example code based on the STAR Run18 27 GeV Au+Au data, focusing on TPC and EPD information.

 

1. Connect to STAR cluster

First connect to the BNL gateway; the "-Y" flag enables X11 forwarding so you can make plots interactively:

ssh -Y user@rssh.rhic.bnl.gov

Then log in to one of the STAR interactive nodes, where xx ranges from 05 to 15:

ssh -Y rcas60xx
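For example, to log in to node 10:

ssh -Y rcas6010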

 

2. Set STAR software version

Setting the library version is needed to properly read picoDst data made with a given production release:

starver SL19b

SL19b is the version used for the Run18 27 GeV Au+Au production. A list of available production versions is printed when you log in to a STAR node; pick the one matching your data set.
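To confirm which version is active (assuming the standard STAR login environment, which points the $STAR variable at the active release):

echo $STAR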

3. Make your own analyzer

First copy the example analyzer into your own work area:

cp -r /gpfs01/star/pwg/zhchen/AuAu27/AnaExample YOURPATH

All analyzers live in the StRoot folder. In this example there are two packages: StDeCorrSPEPDTreeMaker demonstrates how to determine the centrality and how to read TPC tracks and EPD information.

StRefMultCorr is the package providing the official centrality definition; it should be left untouched.

In StDeCorrSPEPDTreeMaker, StDeCorrSPEPDTreeMaker.cxx is the main analysis source, StDeCorrSPEPDTreeMaker.h is the corresponding header, and StEpdGeom.cxx and StEpdGeom.h handle the EPD geometry.

For your own analyzer, modify StDeCorrSPEPDTreeMaker.cxx and StDeCorrSPEPDTreeMaker.h and leave the other files intact.

Both files contain explanatory comments; read them to learn which part does what. The overall structure follows the sketch below.
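An analyzer of this kind derives from StMaker: Init() is called once before the event loop, Make() once per event, and Finish() once at the end. The following is a minimal sketch of that pattern; apart from the StMaker interface itself, the class and member names are placeholders, not the real StDeCorrSPEPDTreeMaker.

// Sketch only: placeholder names, not the actual example code.
#include "StMaker.h"
class StPicoDstMaker;

class MySketchMaker : public StMaker {
public:
    MySketchMaker(const char* name, StPicoDstMaker* picoMaker)
        : StMaker(name), mPicoDstMaker(picoMaker) {}
    virtual Int_t Init()   { /* book histograms/trees here, once */   return kStOK; }
    virtual Int_t Make()   { /* per event: cuts, tracks, fill output */ return kStOK; }
    virtual Int_t Finish() { /* write and close the output file */    return kStOK; }
private:
    StPicoDstMaker* mPicoDstMaker;  // hands us the current picoDst event
    ClassDef(MySketchMaker, 1)
};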

 

4. Compile your analyzer

Every new analyzer, and any change to an existing one, must be compiled before use:

cd YOURPATH/AnaExample

cons

The example code generates a number of warnings about unused variables; these can be ignored.
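If the build ever gets into an inconsistent state, a common remedy (assuming the usual cons setup, which puts the build output into a hidden platform-named directory, e.g. .sl73_gcc485 on the current nodes) is to remove that directory and recompile from scratch:

rm -rf .sl73_gcc485
cons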

 

5. Interactive running of the analyzer

Before submitting jobs over the full data set, make sure to test locally that your code does what you need:

root4star -q -b  doEvent_test.C

Check that the output is what you expect.
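For orientation, a steering macro for a picoDst analysis typically follows the sketch below; the loaded libraries and the maker constructor arguments are assumptions for illustration, and the doEvent_test.C shipped with the example is the authoritative version.

void doEvent_test(const char* inputList = "test.list",
                  const char* outFile = "test.root")
{
    // Load the shared libraries the analysis depends on
    // (the real macro may load more, e.g. StRefMultCorr).
    gSystem->Load("StPicoEvent");
    gSystem->Load("StPicoDstMaker");
    gSystem->Load("StDeCorrSPEPDTreeMaker");

    StChain* chain = new StChain();
    // StPicoDstMaker in read mode serves events from the file list.
    StPicoDstMaker* picoMaker =
        new StPicoDstMaker(StPicoDstMaker::IoRead, inputList, "picoDst");
    // Hypothetical constructor arguments, for illustration only.
    StDeCorrSPEPDTreeMaker* anaMaker =
        new StDeCorrSPEPDTreeMaker("ana", picoMaker, outFile);

    chain->Init();
    Long64_t nEvents = picoMaker->chain()->GetEntries();
    for (Long64_t i = 0; i < nEvents; ++i) {
        chain->Clear();
        if (chain->Make() != kStOK) break;  // stop on error or end of input
    }
    chain->Finish();
}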

 

6. Submit jobs with STAR scheduler

Now you are ready to run over the full data set by submitting jobs. You need four ingredients: the file list "AuAu27_full_good.list", which contains all run numbers of the Run18 27 GeV data; the ROOT macro that runs your analyzer, "doEvent.C"; the XML job description for the STAR scheduler, "Analysis.xml"; and the script that submits the jobs, "condorAuAu27.pl".

For the details of the scheduler XML configuration, please refer to https://www.star.bnl.gov/public/comp/Grid/scheduler/manual.htm#4.8
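To give a flavor of what such a job description contains, here is a minimal sketch following the patterns in that manual; all paths and the doEvent.C argument list are placeholders, and the Analysis.xml in the example directory is the authoritative version.

<?xml version="1.0" encoding="utf-8" ?>
<!-- Sketch only: substitute real paths; see the example's Analysis.xml -->
<job maxFilesPerProcess="20" filesPerHour="10">
  <command>
    starver SL19b
    root4star -q -b doEvent.C\(\"$FILELIST\",\"$JOBID\"\)
  </command>
  <!-- $FILELIST and $JOBID are filled in by the scheduler for each job -->
  <stdout URL="file:YOURPATH/log/$JOBID.log" />
  <input URL="filelist:YOURPATH/AuAu27_full_good.list" />
  <output fromScratch="*.root" toURL="file:YOURPATH/output/" />
</job>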

Before submitting jobs, go through all the files mentioned above and replace "YOURPATH" with the proper paths.

To submit the jobs:

./condorAuAu27.pl
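If needed, a single job description can also be submitted by hand with the scheduler's own command (assuming the standard installation on the STAR nodes):

star-submit Analysis.xml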

NOTE: test with a small number of jobs (~10) before submitting over the entire data set; the number of jobs can be controlled by editing the loop in condorAuAu27.pl. Make sure all jobs finish properly and produce the desired output.

You can check your job status with

condor_q

Remove all of your jobs with

condor_rm username

If your jobs end up in the HOLD state, check the reason with

condor_q -hold -af HoldReason

 

7. Enjoy!