The Data Carousel is the tool STAR users should use to retrieve data from HPSS. Its purpose is to organize user requests to retrieve data and to prevent chaos. User file requests are stored in a MySQL database. The Data Carousel consists of a collection of Perl scripts (written by Jérôme Lauret) which provide the user with a simple way to submit requests. Keeping track of the number of requests made by the physics analysis and/or hardware/software groups ("accounting") is done by a server which takes care of submitting requests according to needs.
In addition, the Data Carousel will warn you if a file you are requesting has already been requested by another user, so you do not waste bandwidth restoring something which is already there.
IMPORTANT note: if you have an AFS HOME directory, you will NOT be able to use the Data Carousel tool due to poor Unix-to-AFS handshaking (authentication failure). The solution: move back to NFS! We keep discovering, on a monthly basis, more tools which do not work with AFS home directories ...
The client Perl scripts can be executed from any of the rcas nodes.
To do this, you need to create a file containing your requests from HPSS. There are several formats you may use. All of the examples below assume you want to restore files from HPSS into the disk path /star/rcf/test/carousel/. You can of course restore files only to directories you actually have access to, so adapt the examples accordingly.
HPSSFile
For example, this file may contain the following lines:
% echo "/home/starreco/hpsslogs/logfile01_9903260110" >file.lis
% echo "/home/starreco/hpsslogs/logfile01_9903260110" >>file.lis
Note at this stage that the HPSS file name may be specified as a relative path. The default prepended path is /home/starreco, and the above file could have been written like this:
% echo "hpsslogs/logfile01_9903260110" >file.lis
% echo "hpsslogs/logfile01_9903260110" >>file.lis
However, it is a good idea to write the HPSS file names without any ambiguity.
HPSSFile TargetFiles
For example, file2.lis may contain the following request:
/home/starreco/reco/central/P01hb/2000/08/st_physics_1235013_raw_0002.dst.root /star/rcf/test/carousel/New/jeromel/bla.dst
In this example, the input and output file names do not necessarily match. It is entirely your choice how you want to organize and name the saved files. Also note that the current version of the Data Carousel will actually create the target directory if it does not exist. So beware of the potential mess you may create if you mistype the output file names ...
As another example, file3.lis contains 4 files to be transferred:
/home/starreco/reco/central/P01hb/2000/08/st_physics_1235013_raw_0001.event.root /star/rcf/test/carousel/New/jeromel/physics/st_physics_1235013_raw_0001.event.root
/home/starreco/reco/central/P01hb/2000/08/st_physics_1235013_raw_0001.hist.root /star/rcf/test/carousel/New/jeromel/physics/st_physics_1235013_raw_0001.hist.root
/home/starreco/reco/central/P01hb/2000/08/st_physics_1235013_raw_0001.runco.root /star/rcf/test/carousel/New/jeromel/physics/st_physics_1235013_raw_0001.runco.root
/home/starreco/reco/central/P01hb/2000/08/st_physics_1235013_raw_0001.tags.root /star/rcf/test/carousel/New/jeromel/physics/st_physics_1235013_raw_0001.tags.root
Note that the full documentation specifies that the TargetFiles should be given in the form pftp://user@node.domain.zone/UnixVisiblePathFromThatNode/Filename, where user is your user name and node.domain.zone is the node where the file should be restored. However, a later version of the Data Carousel (V01.150 or above) adds some "smartness" in choosing the node to connect to in order to access the disk you want. If you use the full syntax (actually the preferred, most robust method), you MUST specify the machine where the disk physically sits. Be aware that doing otherwise will create unnecessary NFS traffic and will slow down your file restoration.
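For illustration, a fully specified request line could look like the following (the user name and node here are hypothetical; substitute the node where the target disk physically resides):

/home/starreco/reco/central/P01hb/2000/08/st_physics_1235013_raw_0002.dst.root pftp://myuser@mynode.rcf.bnl.gov/star/rcf/test/carousel/New/myuser/bla.dst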
Now you are ready to execute hpss_user.pl (this should already be in your path at this stage). You should execute it from a cas node.
Following our examples, you would issue one of the following
% hpss_user.pl -r /star/rcf/test/carousel/New/ -f file.lis
% hpss_user.pl /home/starreco/hpsslogs/logfile01_9903271502 /star/rcf/test/carousel/New/logfile01_9903271502
% hpss_user.pl -f file2.lis
% hpss_user.pl -f file3.lis
The first and second lines are equivalent. They both restore the file /home/starreco/hpsslogs/logfile01_9903271502 into /star/rcf/test/carousel/New/; the first uses a file list, the second a request fully specified on the command line. Whenever -r is specified, this option MUST appear before the -f option.
NOTE: the hpss_user.pl script will alter your ~/.shosts and ~/.netrc files. If these files do not already exist in your area, they will be created automatically; if they do exist, they will simply be updated. The .shosts file must contain an entry allowing starrdat to access your account remotely, and the .netrc file allows you to access HPSS (via pftp) without being prompted for a username and password.
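For reference, the ~/.shosts entry looks approximately like the line below (the server host name is taken from the server description that follows; you should normally never need to edit these files by hand, since the script maintains them):

rmds05.rcf.bnl.gov starrdat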
On the server end (rmds05), starrdat runs the server script, which is invoked every 10 minutes by a cron job. The server script inspects the MySQL database, feeds a list of input and output files to the ORNL batch system script (adapted by Tom Throwe), and reports the operation status in its own internal accounting tables. In short, this process restores the files from HPSS.
In order to inspect the submissions, you may use a script which inspects the content of the Accounting table. The last column of that table reflects either the success or the reason for a failure.
The Data Carousel may call user scripts, or user hooks, at each file transfer. The mechanism works as follows:
Those user hooks have been provided to help users execute commands/actions before and after the file transfer. Amongst the possibilities are actions like
There are a certain number of variables you may use in both of those scripts which will be known globally; the examples below make use of $inFile, $outFile, $fsize, $dirname, and $HOME.
An example of a beforeFtp.pl script follows. Here, we check the available space on the target disk and rebuild a list of the files we cannot restore due to space restrictions. This list will be palatable to the Data Carousel for a later re-submission ...
# Solaris only. You should write a fancier script for a
# foolproof remaining-disk-size procedure. Note that the awk
# positional variable must be escaped so Perl does not
# interpolate it inside the backticks.
$result = `/bin/df -k $dirname | awk '{print \$2}' | tail -1`;
if ($fsize == -1){
    # Here, we return 0 but you can also take a guesstimate
    # of the minimal required space and save time.
    return 0;
} else {
    # The file size is known ...
    if ($result < $fsize){
        # will skip this file, keep the list though
        # so we can re-submit later on ...
        open(FO,">>$HOME/skipped.lis");
        print FO "$inFile $outFile\n";
        close(FO);
        return 1;
    } else {
        return 0;
    }
}
In this afterFtp.pl script, we keep track of what is restored (note that only successful restores call this user hook):
if ( open(FO,">>$HOME/success.lis") ){
    # keep track of files restored
    print FO "$outFile restored on ".localtime()."\n";
    close(FO);
}
chmod(oct(775),$outFile);  # make this file group rw :-)
1;  # always return success - be aware that failure <=> retry
Feel free to try and correct those examples yourself ...
Note that those hooks are there to help you accomplish tasks which would otherwise require external scripts. But please, think carefully about the scripts you are writing and keep in mind that they will be executed for each file restored ... For example, a script doing a du -k instead of a df would be disastrous for the NFS server's load. A script touching or doing a stat() of all files in a directory tree for each file restored from HPSS would be equally bad and could be considered sabotage ... In other words, keep your user hooks lightweight.
% klist
If this command shows lines like
Ticket cache: FILE:/tmp/krb5cc_5650
Default principal: xxx@RHIC.BNL.GOV

Valid starting     Expires            Service principal
10/13/14 14:55:56  10/18/14 14:55:53  krbtgt/RHIC.BNL.GOV@RHIC.BNL.GOV
10/13/14 14:56:01  10/18/14 14:55:53  afs@RHIC.BNL.GOV

then you have Kerberos credentials and can proceed to execute
% aklog
to get an AFS token. If it shows instead
klist: No credentials cache found (ticket cache FILE:/tmp/krb5cc_5650)
your session does not have any Kerberos credentials and you will need to use instead
% kinit
{enter your Kerberos password at the prompt}
% aklog
% tokens
lists your current AFS tokens.
% kpasswd
changes your Kerberos password.
To list a directory's ACLs:
% fs la directory
To set directories' ACLs:
% fs sa -dir directories -acl ACLentry
Each ACLentry has two parts separated by a space: a user or group name and the access control rights (for example, star rlidw). Type a combination of the seven letters representing the rights, or one of the four shorthand words.
Access Control Rights:
r: read
l: lookup
i: insert
d: delete
w: write
k: lock
a: administer
Shorthand Notation:
write = rlidwk
read = rl
all = rlidwka
none = removes entry
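For example, to grant the group star read and lookup rights on a directory of yours (the path shown is hypothetical), either of these equivalent commands would do:

% fs sa -dir /star/u/myuser/mydir -acl star rl
% fs sa -dir /star/u/myuser/mydir -acl star read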
% fs flushvolume -path path
will trigger an AFS cache flush for the AFS volume that includes the specified file. Using 'flush' rather than 'flushvolume' will flush the file only. Sometimes, when using AFS to access remote disks, this can be necessary to avoid getting outdated material (I don't know why AFS fails to take care of updating itself).
For more information:
AFS is particularly convenient for STAR computing at non-RCF sites: the offsite user is presented with the identical STAR AFS directory tree hierarchy and files. Because of the remote access capability, AFS requires additional security measures beyond the fairly lax procedures associated with standard UNIX access restrictions. Under AFS, access to the managed directories and files is restricted via ACLs to users who are members of specific AFS groups. A given AFS directory may have different levels of access privileges and restrictions for one or more AFS groups. These groups are comprised of STAR AFS users who need access to the particular library directories. ACL info is above.
/packages/repository - CVS repository for STAR software. Includes source code, idl files, kumacs, etc. but not the compiled binaries and executables.
/packages/SL* - Built software releases including sources, libraries, executables
/packages/dev, /new, /pro, /frozen, /old - Official release versions; generally links to SL*
/bin - SOFI tools (binary executables)
/doc/www - Web documentation files
/group - Setup and login scripts (based on the HEPiX system)
Based on Tom Nguyen's original of March 1996.
This tutorial is very specific to the cases where you have a process running (batch, typically) and one of the two conditions below is happening and you do not know why (but eventually, would like to :-) ):
Slow programs are still programs in execution mode. The best way to see what is happening is to use "strace".
[An example will be provided soon]
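A hypothetical session might look like this (the PID and the system calls shown are purely illustrative); a slow but still-alive process keeps emitting system calls, so strace output keeps scrolling:

% strace -p 12345
Process 12345 attached - interrupt to quit
read(19, "\0\0\1\364"..., 4096)         = 4096
lseek(19, 4096, SEEK_CUR)               = 73728
read(19, "\0\0\2\10"..., 4096)          = 4096
...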
If your program is really stuck, you will not be able to strace it as explained in the previous section. In fact, here is an example of what you would see under such conditions:
% ps -ef | grep root4star
starreco 11209 10955 97 May20 ? 1-07:30:59 root4star -b -q bfc.C(100000,"DbV20080512 ry2007 in tpc_daq tpc fcf svt_daq SvtD Physics Cdst Kalman l0 tags Tree evout l3onl emcDY2 fpd ftpc trgd ZDCvtx -dstout CMuDst hitfilt Corr4 OSpaceZ2 OGridLeak3D","st_upc_8097121_raw_1150008.daq")
starreco 9178 8953 0 19:39 pts/0 00:00:00 grep root4star
% strace -p 11209
Process 11209 attached - interrupt to quit
...
strace does not show any unrolling call stack but simply waits there forever. You know for sure you have a "stuck" process and need to revert to the use of gdb. For the same process, this is what to do:
% gdb
GNU gdb Red Hat Linux (6.3.0.0-1.143.el4rh)
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB. Type "show warranty" for details.
This GDB was configured as "i386-redhat-linux-gnu".
(gdb) attach 11209
Attaching to process 11209
Reading symbols from /afs/rhic.bnl.gov/star/packages/release10/SL08b/.sl44_gcc346/OBJ/asps/rexe/root4star.3...done.
Using host libthread_db library "/lib/tls/libthread_db.so.1".
Reading symbols from /afs/rhic.bnl.gov/star/ROOT/5.12.00/.sl44_gcc346/root/lib/libCore.so...done.
Loaded symbols for /afs/rhic.bnl.gov/star/ROOT/5.12.00/.sl44_gcc346/root/lib/libCore.so
...
there will be a lot of shared libraries loaded and finally
Reading symbols from /lib/libnss_files.so.2...done.
Loaded symbols for /lib/libnss_files.so.2
0x43f00fa1 in StarMagField::Search (this=0x54e16008, N=37, Xarray=0x54ec64c4, x=-nan(0x400000), low=@0x43f0a4c4) at .sl44_gcc346/OBJ/StRoot/StarMagField/StarMagField.cxx:1084
1084        if ( (Int_t)( x >= Xarray[middle] ) == ascend )
#0  0x43f00fa1 in StarMagField::Search (this=0x54e16008, N=37, Xarray=0x54ec64c4, x=-nan(0x400000), low=@0x43f0a4c4) at .sl44_gcc346/OBJ/StRoot/StarMagField/StarMagField.cxx:1084
#1  0x43f0071b in StarMagField::Interpolate3DBfield (this=0x54e16008, r=-nan(0x400000), z=-2.99321151e+33, phi=-nan(0x400000), Br_value=@0x45, Bz_value=@0x45, Bphi_value=@0x45) at .sl44_gcc346/OBJ/StRoot/StarMagField/StarMagField.cxx:844
#2  0x43efeafb in StarMagField::B3DField (this=0x54e16008, x=0xbffc8e70, B=0xbffc8e60) at .sl44_gcc346/OBJ/StRoot/StarMagField/StarMagField.cxx:482
#3  0x4d00c203 in StFtpcTrack::MomentumFit (this=0x1b5d72d8, vertex=0xbffc9610) at .sl44_gcc346/OBJ/StRoot/StFtpcTrackMaker/StFtpcTrackingParams.hh:344
#4  0x4d00b391 in StFtpcTrack::Fit (this=0x1b5d72d8, vertex=0xbffc9610, max_Dca=100, primary_fit=true) at .sl44_gcc346/OBJ/StRoot/StFtpcTrackMaker/StFtpcTrack.cc:575
#5  0x4d0257cc in StFtpcTracker::Fit (this=0xbffc93f0, primary_fit=Variable "primary_fit" is not available.) at .sl44_gcc346/OBJ/StRoot/StFtpcTrackMaker/StFtpcTracker.cc:751
#6  0x4d01ee7e in StFtpcTrackMaker::Make (this=0xc9cf538) at .sl44_gcc346/OBJ/StRoot/StFtpcTrackMaker/StFtpcTracker.hh:64
#7  0x4285b0c0 in StMaker::Make (this=0xc9c58c0) at .sl44_gcc346/OBJ/StRoot/StChain/StMaker.cxx:965
#8  0x4285b0c0 in StMaker::Make (this=0xaf3e4c0) at .sl44_gcc346/OBJ/StRoot/StChain/StMaker.cxx:965
#9  0x4285426b in StChain::Make (this=0xaf3e4c0) at .sl44_gcc346/OBJ/StRoot/StChain/StChain.cxx:105
#10 0x4296022d in StBFChain::Make (this=0xaf3e4c0) at .sl44_gcc346/OBJ/StRoot/StBFChain/StBFChain.h:78
#11 0x4285522d in StMaker::IMake (this=0xaf3e4c0, number=275) at .sl44_gcc346/OBJ/StRoot/StChain/StMaker.h:110
#12 0x4295fd89 in StBFChain::Make (this=0xaf3e4c0, number=275) at .sl44_gcc346/include/StChain.h:49
#13 0x428545b0 in StChain::EventLoop (this=0xaf3e4c0, jBeg=1, jEnd=100000, outMk=0x0) at .sl44_gcc346/OBJ/StRoot/StChain/StChain.cxx:165
#14 0x4286d4a2 in G__StChain_Cint_541_9_0 () at /afs/rhic.bnl.gov/star/ROOT/5.12.00/.sl44_gcc346/root/include/TString.h:248
#15 0x40855c0c in G__ExceptionWrapper () from /afs/rhic.bnl.gov/star/ROOT/5.12.00/.sl44_gcc346/root/lib/libCint.so
#16 0x40911cda in G__call_cppfunc () from /afs/rhic.bnl.gov/star/ROOT/5.12.00/.sl44_gcc346/root/lib/libCint.so
#17 0x408ffd40 in G__interpret_func () from /afs/rhic.bnl.gov/star/ROOT/5.12.00/.sl44_gcc346/root/lib/libCint.so
#18 0x408ebb95 in G__getfunction () from /afs/rhic.bnl.gov/star/ROOT/5.12.00/.sl44_gcc346/root/lib/libCint.so
---Type <return> to continue, or q <return> to quit---q
Quit
(gdb)
Within this stack, you see that the reason this particular process is stuck is
0x43f00fa1 in StarMagField::Search (this=0x54e16008, N=37, Xarray=0x54ec64c4, x=-nan(0x400000), low=@0x43f0a4c4) at .sl44_gcc346/OBJ/StRoot/StarMagField/StarMagField.cxx:1084
1084        if ( (Int_t)( x >= Xarray[middle] ) == ascend )
Your bug reporting could begin here. Or, if this is your code, now starts the debugging process: editing code, setting breakpoints, stepping through the call stack, inspecting the variables and finding why/when they are assigned NaN, and so on. Your logic may simply be obviously flawed, so your first step is definitely editing the code and understanding its logic.
Conditions: to use the Fortran subroutines and functions, 3 pre-conditions have to be met:
It is provided by "cons" for all source files with the upper case "*.F" extension
It has to be written "by hand"
The Fortran run-time libraries are provided by "root4star". They are not provided by the "plain" root.exe.
The STAR "cons" automatically creates the RootCint dictionary for all header files it discovers.
To use a Fortran subroutine from a ROOT C++ macro, you are advised to use the STAR build environment as follows.
Let's assume you need to call the subroutine:
SUBROUTINE FRAGMF(IPART,X,Q2,XGLUE,XUQ,XDQ,XSQ,XUSEA,XDSEA,XSSEA,
& XCHARM,XCHARMS,XBEAUTY,XBEAUTYS)
from C++ code (including ROOT macro).
You should:
1. Create Fragmf.h, which defines the C++ interface for your Fortran code:
#ifndef STAR_FRAGMF_H
#define STAR_FRAGMF_H
#include "TObject.h"
class Fragmf {
public:
void operator()
(int IPART, double &X, double &Q2, double &XGLUE
, double &XUQ, double &XDQ, double &XSQ
, double &XUSEA,double &XDSEA,double &XSSEA
, double &XCHARM, double &XCHARMS, double &XBEAUTY
, double &XBEAUTYS) const;
ClassDef(Fragmf,0)
};
#endif
2. Create Fragmf.cxx, the implementation of your C++ wrapper:
#include "Fragmf/Fragmf.h"
ClassImp(Fragmf)
extern "C" {
// definition of the FORTRAN subroutine interface
void fragmf_(int *, double *X, double *Q2, double *XGLUE
, double *XUQ, double *XDQ, double *XSQ
, double *XUSEA, double *XDSEA, double *XSSEA
, double *XCHARM, double *XCHARMS,double *XBEAUTY
, double *XBEAUTYS);
}
void Fragmf::operator()
(int IPART, double &X, double &Q2, double &XGLUE
, double &XUQ, double &XDQ, double &XSQ
, double &XUSEA,double &XDSEA,double &XSSEA
, double &XCHARM, double &XCHARMS, double &XBEAUTY
, double &XBEAUTYS) const
{
// definition of the C++ wrapper to simplify the FORTRAN
// subroutine invocation
int i = IPART;
fragmf_(&i, &X, &Q2, &XGLUE, &XUQ
, &XDQ, &XSQ, &XUSEA, &XDSEA
, &XSSEA,&XCHARM, &XCHARMS,&XBEAUTY,&XBEAUTYS);
}
3. You can add a ROOT C++ macro, StRoot/Fragmf/Fragmf.C, to test your interface:
{
gSystem->Load("Fragmf");
int IPART =8;
double X,Q2,XGLUE, XUQ, XDQ, XSQ;
double XUSEA,XDSEA,XSSEA, XCHARM, XCHARMS, XBEAUTY;
double XBEAUTYS;
Fragmf fragmf;
// use "operator()" to provide Fortran "Look and Feel"
fragmf(IPART,X,Q2,XGLUE, XUQ, XDQ, XSQ
,XUSEA,XDSEA,XSSEA, XCHARM, XCHARMS, XBEAUTY
,XBEAUTYS);
printf(" The values from Fortran:\n"
" %f %f,%f, %f, %f, %f \n"
" %f,%f,%f, %f, %f, %f %f\n"
, X,Q2,XGLUE, XUQ, XDQ, XSQ
, XUSEA,XDSEA,XSSEA, XCHARM, XCHARMS, XBEAUTY
, XBEAUTYS);
}
Now, you are ready to create the shared library with "cons"
> cons
and execute it with ROOT
> root.exe -q StRoot/Fragmf/Fragmf.C
or
> root4star -q StRoot/Fragmf/Fragmf.C
You need root4star as soon as your Fortran code requires Fortran run-time functions, for example Fortran I/O or PRINT statements.
In real life, you do not need to create a dedicated subdirectory StRoot/Fragmf; you can add the 3 files in question to any existing STAR offline package.
To run pythia6:
$ cvs co StRoot/StarGenerators/macros/starsim.pythia6.standalone.C
$ ln -s StRoot/StarGenerators/macros/starsim.pythia6.standalone.C starsim.C
$ root4star -q -b starsim.C
To run pythia8:
$ cvs co StRoot/StarGenerators/macros/starsim.pythia8.standalone.C
$ ln -s StRoot/StarGenerators/macros/starsim.pythia8.standalone.C starsim.C
$ root4star -q -b starsim.C
To run hijing:
$ cvs co StRoot/StarGenerators/macros/starsim.hijing.standalone.C
$ ln -s StRoot/StarGenerators/macros/starsim.hijing.standalone.C starsim.C
$ root4star -q -b starsim.C
STAR uses CVS, the Concurrent Versions System, to manage the STAR software repositories. The repositories currently in use in STAR are:
Repository | Location | Description
---|---|---
STAR offline repository | $CVSROOT/asps/ $CVSROOT/kumac/ $CVSROOT/mgr/ $CVSROOT/OnlTools/ $CVSROOT/pams/ $CVSROOT/QtRoot/ $CVSROOT/StarDb/ $CVSROOT/StDb/ $CVSROOT/StRoot/ | This repository contains most of the STAR software used for data reconstruction, simulation and analysis. All codes in this category are subject to nightly builds.
STAR offline generic code repository | $CVSROOT/offline/ | This area contains a mix of (a) a set of test codes and Makers which are not yet ready for inclusion under StRoot, (b) project-specific codes such as xrootd and the data carousel, (c) the paper area containing raw materials for STAR notes and published-paper documentation, and (d) a generic user area.
STAR online repository | $CVSROOT/online/ | An area containing codes for detector sub-systems, aimed to be used online; this area typically does not contain shared codes such as OnlTools (aka Online Plots or PPlots).
STAR login and script areas | $CVSROOT/scripts/ | Several areas containing scripts and configurations used in support of STAR's global login, general production-related scripts, and CGI- or analysis-related supported commands and configurations.
STAR ROOT patches | $CVSROOT/root/ $CVSROOT/root3/ $CVSROOT/root5/ | Areas containing patches for ROOT in support of STAR software.
One should never ever go to, browse in, touch, edit, or otherwise interfere with a CVS repository itself. Use the web browser, do a CVS checkout of the software of interest, make your changes and commit.
The current value of $CVSROOT is /afs/rhic.bnl.gov/star/packages/repository. Write access to STAR's CVS repository is hence subject to AFS ACLs in addition to the CVS karma mechanism. Only STAR-authenticated users have write access.
The STAR offline CVS repository employs an access control mechanism that allows only authorized accounts to modify parts of the hierarchy. Authorization control is for commits only; everyone can check out any part of the repository, as long as you have access to AFS.
If you get a message Insufficient karma (yes, it's a weird message), it is an indication that you do not have write access to the area you're trying to commit to.
The access list is $CVSROOT/CVSROOT/avail. A line like
avail |jeromel,didenko |scripts
means that jeromel and didenko can write (that is, modify and add items) to any directory under scripts/.
Note that you will not be able to commit a brand-new package to the repository (unless you have a lot of karma) because there will be no entry giving you access to the area you're trying to create. You can create sub-directories under directories you have access to. Please DO NOT attempt to add packages or directories yourself; send a request to the starsofi list instead.
cvs co module = check out a copy of 'module' into your working directory (you only need to do this once)
cvs co -r 1.12 module = check out a copy of 'module' at version 1.12
cvs co -r SL09a module = check out a copy of 'module' as it was tagged for SL09a
cvs update = update all the files in the current directory
cvs -n update = tells what it would update; the -n means it doesn't actually do it
cvs -n update -A = get the list of what it would update (-n means to not do it); the -A is needed if you originally checked out a particular tag - it tells the update to override the tag
cvs commit = commit all of the files you have changed to the main repository
cvs ci -m "message" = same as above, but enter the explanatory message on the command line
cvs add file1 = alert CVS that file1 should be added to the repository at the next 'cvs commit' invocation
cvs rm file1 = alert CVS that file1 should be removed from the repository at the next 'cvs commit' invocation
cvs status -v module = get the list of all versions tagged
Whenever you check out, files may appear with a flag. As listed, the meanings are:
U = file was updated in your area
M = file was modified in your area (revisit)
? = cvs doesn't know about it
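For instance, a dry-run update might print something like this (the file names shown are illustrative):

% cvs -n update
U StRoot/StChain/StMaker.cxx
M StRoot/StChain/StChain.cxx
? mytest.log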
cvs history -Ta -n module = show all tags on 'module'
cvs diff -r HEAD file1 = show the difference between my checked-out file1 and the most recent version in CVS
cvs rtag tagname module = tag 'module' in CVS with 'tagname' (NOTE: this creates a history entry; the 'cvs tag' command does not)
CVS/Entries = this file shows you the tag and RCS version of what you checked out
Please see at the bottom of this page for a quick reference card.
If you want to check out a certain version of the code, it is not enough to do "starpro" and then "cvs co pams/tpc"!!! The same applies to any specific version of the STAR code you may want to use. Whenever you do a simple "cvs co", you will still get the dev version of pams/tpc.
To check out a specific version, you must first find out which one it is. In our example, it is SL14b. Then you would do
% cvs co -r SL14b pams/tpc
Now you must still do
% starver SL14b
and now you're fully in the environment set for SL14b, with the code checked out based on this library tag.
If you want to know what tags have been used for a particular file (for example pams/tpc/tpt.F), then you just need to type
% cvs status -v $STAR_ROOT/dev/pams/tpc/tpt/tpt.F
To check out something for the first time use a command like this
% cvs co pams/tpc/tss
CVS will create directories called 'pams/tpc/tss' in your current working directory and copy all of the files and sub-directories associated with the module 'tss' into it. All of the remaining CVS commands should be issued from the 'tss' directory.
If you have done nothing special when you checked out the code, this process will be straightforward. You can edit any of the files you want in your working directory, and when you are ready to commit your changes to the master repository, do
% cvs commit {filename|*}
CVS will start an editor (vi, pico or emacs, depending on your EDITOR environment variable) and ask you to enter a text that will be stored as part of the history of changes. If this is successful, your versions of the files will be copied back to the main site so that everyone can access them. Note that you can also, for a single file, use a form like
% cvs commit -m "My relevant message explaining the change" tfs_ini.F
to commit the changes you made to tfs_ini.F while adding the relevant comment at the same time. Please always think of adding a comment, and remember that you are not (or will not be) the only one working on this code; courtesy toward other developers is important.
If someone else changes the files while you are working on your own copy, you must update your copy with the command cvs update to get the new versions. You can choose to update your files at any time. Before committing any files, CVS will check whether you need to do an update. If you do, CVS prints a message and aborts the commit. If someone else has changed a file that you have also changed, CVS will attempt to merge the new changes into your file. If it is unsuccessful, it will alert you to a 'conflict'.
If there is a conflict, the new file will contain sections that have both your changes and the other person's changes. These sections will be flagged by lines like
<<<<<<< and >>>>>>> (with a ======= separator between the two versions)
The user must then look at the conflicting regions, choose what is correct and edit the file.
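For illustration, a conflicting region might look like this (the file name and revision shown are hypothetical); your version appears between the opening marker and the ======= separator, the repository version between the separator and the closing marker:

<<<<<<< tfs_ini.F
      iopt = 2
=======
      iopt = 3
>>>>>>> 1.8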
If you create new files, you have to tell CVS to put them into the repository. First do,
% cvs add newfile1 newfile2
then do a 'cvs commit' (do not forget the initial message / comment). The 'add' command alerts CVS to commit the named files at the next invocation of cvs commit.
Every time you commit a file to CVS, it increments the revision number. The first commit is 1.1, then 1.2, then 1.3, etc. ... (I don't know if it ever gets to 2.1 by itself though). To revert to an old version of a file you can use the revision number as a tag, as in
% cvs co -r 1.7 tfs_init.F
to check out version 1.7 of tfs_init.F.
You could also use the update command
% cvs update -r 1.7 tfs_init.F
but be careful that you don't commit the old version on top of the current version (in the above example, you would). To see what version number the current file is, do
% cvs status tfs_init.F
You can also revert to files that were tagged as a particular version by the user. For example
% cvs co -r sl97btpc pams/tpc/tfs
would check out all of the files that were tagged by somebody as version sl97btpc. To see what user tags have been created do
% cvs history -Ta -n pams/tpc/tss
You may also use commands that use the revision differences (-j for join) and merge them into your current code. Let us say that you have version 1.5 of a file named filename and would like to revert it to version 1.4. What you would need to do then is
% cvs update -j 1.5 -j 1.4 filename
% cvs commit filename
The order of the versions is very important here. What you've asked cvs to do in the above command is to take the difference between versions 1.5 and 1.4 and apply it to your working copy. Because you apply to the current directory the differences from 1.5 to 1.4 (and you have 1.5 in your current directory), you have effectively asked cvs to apply the changes necessary to revert your code to version 1.4.
cvs can branch the repository into a separate fork of development until such a time as you are satisfied with it and ready to re-merge. Usually, we do not use this feature in STAR, so everyone works from the same code base. Besides, if too many branches are created, re-merging code may create conflicts that will take time to sort out (see the above note on code conflicts while merging).
Sticky tags are used to create branches. To create a branch named BranchName, simply go into a working directory containing the code that is in the repository and issue the command:
% cvs tag -b BranchName {Dir|File}
In the above, {Dir|File} indicates a choice of either Dir or File: Dir represents the directory tree you would like to tag, and File a single file you would like to tag (probably not the greatest idea to generate too many branches for individual files but, noted for consistency).
If unspecified, the whole code tree will be tagged. More generally, you WILL WANT to do this instead
% cvs tag Revisionname {Dir|File}
% cvs tag -r Revisionname -b BranchName {Dir|File}
What this latter form will do is create a normal tag Revisionname first, then attach the branch named BranchName to that tag. This method may later facilitate merging, or seeing the differences made since a tagged version (otherwise, you may have a hard time figuring out the changes you made since you created the branch). Note that CVS has a special branch named MAIN which always corresponds to the CVS HEAD (the latest version of the code).
To work with branches, it is simple ...
% cvs co -r BranchName Dir
would check out the code from the tree Dir that is in the branch named BranchName into your working directory. From this point on, you have nothing else to do: just modify your code and cvs commit as usual. By default, the checkout command with a branch specification will create a "sticky tag" in your working directory, that is, cvs will persistently remember that you are working from that branch. You may also use "cvs update -r BranchName Dir" for a similar effect (you would essentially update your working directory Dir and make it move to the branch BranchName).
*** WARNING ***
You may destroy the sticky tag by using the following command
% cvs update -A Dir
If you do this, you are asking CVS to update while ignoring all sticky tags, based on the MAIN branch. The command will remove all 'stickiness' from your working directory. Further commits will then go into the MAIN branch (or CVS HEAD). If you want to ensure you commit into the branch you intended, you will need to specify the branch name in your commit command as follows
% cvs commit -r BranchName Dir
Before discussing merging ...
Hint #1: It was noted that the most convenient way to create a sticky tag (or branch) is to use a regular tag and branch tag combination (the second set of commands in our cvs tag example above). This is because, when you are ready to re-merge, you may find that you need to merge several branches together, and those create conflicts you may have a hard time sorting out. You may be able to resolve those conflicts by issuing cvs diff commands against the tag Revisionname where you branched in the first place;
% cvs diff -r Revisionname -r BranchName {Dir|File}
will do that for you. But if you did NOT create a tag and branched from CVS HEAD, it will not be easy to diff against the "base" you started with when you branched out (since CVS HEAD keeps changing).
You can also compare two branches directly by doing
% cvs diff -c -r BranchName1 -r BranchName2 >diffs.log
and inspecting what you get. If you are the only person modifying those codes though, the advantage will be minimal.
Now, imagine you are satisfied with your changes and want to re-merge them into MAIN. Then the fun begins ... An easy way to do this (if there has not been any other code divergence in MAIN, hence no potential additional conflicts) is
% cvs update -j BranchName Dir
where Dir here is the location of the code you want to update by "joining" the branch BranchName to it. If no conflict appears, you are set to go (with a cvs commit and voila! the code is re-merged).
Generally, if you want to merge two branches together, here are a few examples. Using checkout on a brand new directory tree ...
% cvs checkout -j BranchName1 -j BranchName2 {Dir|File}
this command will locate the differences between BranchName1 and BranchName2. Of the lines that differ between them, the lines in BranchName2 will be patched, or merged, into the latest revision on the main trunk of {Dir|File}. Any conflicts must be resolved manually by editing the file and verifying that everything compiles and runs properly.
Even more optimistic: you may re-merge two branches into yet a third, different branch. Here is an example building on the above
% cvs checkout -r BranchToMergeTo -j BranchName1 -j BranchName2 {Dir|File}
However, files that were created between BranchName1 and BranchName2 do not get created automatically.
Hint #2: When working with branches, cvs status is your friend. See why in the example below
% cvs status StRoot/StiPxl/StiPxlChairs.cxx
===================================================================
File: StiPxlChairs.cxx          Status: Up-to-date

   Working revision:    1.1     Sat Feb 1 19:19:34 2014
   Repository revision: 1.1     /afs/rhic.bnl.gov/star/packages/repository/StRoot/StiPxl/StiPxlChairs.cxx,v
   Sticky Tag:          StiHFT_1b (branch: 1.1.4)
   Sticky Date:         (none)
   Sticky Options:      (none)
In the above example, it is clear that the code checked out in the working directory Dir = StRoot HAS a sticky tag StiHFT_1b. In contrast, the same command in a directory without a sticky tag would show
% cvs status StRoot/StiPxl/StiPxlChairs.cxx
===================================================================
File: StiPxlChairs.cxx          Status: Up-to-date

   Working revision:    1.1     Sat Feb 1 19:19:34 2014
   Repository revision: 1.1     /afs/rhic.bnl.gov/star/packages/repository/StRoot/StiPxl/StiPxlChairs.cxx,v
   Sticky Tag:          (none)
   Sticky Date:         (none)
   Sticky Options:      (none)
Sticky Tag: (none) cannot be clearer.
The following command, issued either from an empty directory or from a directory containing (ONLY!) the material you want added as the new module,
% cvs import -m "Message" repo_name vendor_tag release_tag
will create a new module called repo_name containing all the files in your directory. The repo_name can be just a directory name if the module sits at the top of the repository, or a path if the module lives further down in the repository directory tree. The release_tag is the initially assigned tag (e.g. 'V1-0'; note that CVS tag names may not contain periods).
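As a hypothetical illustration, importing a small tool into the generic user area could look like this (the module path and tags are invented for the example; "start" is a conventional release tag):

% cd MyTool
% cvs import -m "Initial import of MyTool" offline/users/myuser/MyTool myuser start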
Be aware that in STAR, you may be able to add a new module to CVS, but you will not be able to add files to it until you are granted the appropriate karma. This additional step keeps STAR's CVS repository clean, as the layout presented at the beginning of this document should be followed.
Before you can do anything you need an RCF account and an AFS account. Visit the Getting a computer account in STAR page for information. When you obtain an account, you are automatically given an AFS account as well. The contact person in the last part of the information must be your council representative, who must vouch for you. Please remember to bug your council representative so they can also inform Liz Mogavero of your recent arrival in STAR and send her the updated list of active members from your institution.
Once you have an account you have access to the RHIC AFS cell and the STAR directories and files mounted thereon. You must get an AFS token with the command aklog before accessing any AFS area.
See the Computing Environment page for information on STAR and RHIC computers and printers. If you're bringing a computer of your own to BNL, contact Jerome Lauret or Wayne Betts regarding an IP address.
This login environment is partly related to HEPiX, a standard within the HENP community used at many labs. Starting in the middle of 2001, the standard STAR login became a stand-alone minimal login instead of the default HEPiX login and stack suite (which had become unmaintainable and cumbersome). Older users should follow the above instructions and update their .cshrc and .login files if not already done.
The STAR group on the RCF machines is rhstar. The group directory with scripts for the STAR environment is GROUP_DIR = /afs/rhic.bnl.gov/rhstar/group. There is also a link /afs/rhic.bnl.gov/rhstar -> /afs/rhic.bnl.gov/star. Note that the initial /afs/rhic.bnl.gov/rhstar AFS volume needed to change name in mid-2003 to comply with the Kerberos 5 authentication and token-passing mechanism. It is however a link toward the rhic.bnl.gov volume, and its use should be discontinued.
If your Unix group is different from star or rhstar, you should first ask the RCF team to add you to the rhstar group (submit a ticket; see the Software Infrastructure page for more instructions). Several possibilities/scenarios are now described:
% cp /afs/rhic.bnl.gov/star/group/templates/cshrc ~/.cshrc
% cp /afs/rhic.bnl.gov/star/group/templates/login ~/.login
Then you can customize both .cshrc and .login to your taste (or source separate personal setups from them).
When you next logout and log back in, your PATH, MANPATH, and LD_LIBRARY_PATH will include the proper STAR directories. Your environment variables will also include several STAR variables (below).
The group environment variables are defined in the group_env.csh script. Some of the more important environment variables set up by this script follow. The variables in blue, if defined before the login scripts are loaded, will not be superseded. Variables in green are likely already defined if the optional component has been deployed on your machine. Variables in red are fixed values on purpose, to ensure compatibility; the strict install path MUST be present for STAR's environment to load properly.
Other variables are
The following environment variables may, if defined, affect the run time of our STAR code; DO NOT set them yourself unless permission is granted.
The PATH environment variable is appended with directories containing executables which are needed by the STAR computing environment. For example:
While at the RCF /opt/star is a soft link to /afs/rhic.bnl.gov/opt/star/, there is a misleading component to the path /afs/rhic.bnl.gov/opt/star ... The real path is /afs/rhic.bnl.gov/@sys/opt/star and depends on the result of the translation of the AFS @sys set on YOUR client. The translation has so far been as follows:
Linux release support | Notes or Also supports | Result of fs sysname MUST be
---|---|---
RH 6.1 | (obsolete) | i386_redhat61
RH 7.2 | (obsolete) | i386_linux24
RH 8 | (obsolete) | i386_redhat80
SL 3.0.2 | (obsolete - was for gcc 3.2.3, SL 3.0.4, rh3 but no build node available at BNL - EOL was end of 2008) | i386_sl302
SL 3.0.5 | (obsolete - was gcc 3.2.3 - SL 3.0.6/7/8/9, rh3) | i386_sl305
SL 4.4 | gcc 3.4.6 - SL 4.5, rh4 | i386_sl4
SL 5 | gcc 4.3.2 - SL 5.3 native, sl/rh 5.5, 5.6 & 5.7 with gcc 4.3.2 | x8664_sl5 (32 / 64 bits), i386_sl5 (32 bits only)
SL 6 | gcc 4.4.6 - SL 6.2 native | x8664_sl6 (32 and 64 bits)
A typical problem for off-site facilities is deploying a version of Linux with no match for sysname (or a wrong match). For example, a RedHat 8.0 system set with i386_linux24 will pick up programs from the wrong AFS area; a RedHat 9.0 system would be equally problematic as it is currently not supported. There is some backward-compatibility support for other Linux OS versions, documented in the second column. If your OS does not appear in this table, you can send a support request to the offsites or starsofi Hypernews fora.
The sysname is configured in /etc/sysconfig/afs. It uses a default value, or is set via a line similar to the following:
AFS_POST_INIT="/usr/bin/fs sysname -newsys i386_sl5"
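You can check the current translation on your client with fs sysname; the output (the value shown here is illustrative) looks like:

% fs sysname
Current sysname is 'i386_sl5'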
This web page is an access point to documentation for STAR Offline Software libraries with special needs: for example, when the documentation is written as a LaTeX file and needs to be converted to a PostScript or PDF file.
This tutorial was imported from KDE's Umbrello tutorial and gives a nice overview of the UML elements.
Class Diagrams show the different classes that make up a system and how they relate to each other. Class Diagrams are said to be “static” diagrams because they show the classes, along with their methods and attributes as well as the static relationships between them: which classes “know” about which classes or which classes “are part” of another class, but do not show the method calls between them.
Umbrello UML Modeller showing a Class Diagram.
A Class defines the attributes and the methods of a set of objects. All objects of this class (instances of this class) share the same behaviour, and have the same set of attributes (each object has its own set). The term “Type” is sometimes used instead of Class, but it is important to mention that these two are not the same, and Type is a more general term.
In UML, Classes are represented by rectangles, with the name of the class, and can also show the attributes and operations of the class in two other “compartments” inside the rectangle.
Visual representation of a Class in UML
In UML, Attributes are shown with at least their name, and can also show their type, initial value and other properties. Attributes can also be displayed with their visibility:
+
Stands for public attributes
#
Stands for protected attributes
-
Stands for private attributes
Operations (methods) are also displayed with at least their name, and can also show their parameters and return types. Operations can, just as Attributes, display their visibility:
+
Stands for public operations
#
Stands for protected operations
-
Stands for private operations
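If it helps to relate this to code, the three visibility markers map directly onto the C++ access specifiers, as in this small illustrative class (the class and its members are invented for the example):

class Track {
public:                  // "+" public: visible to everybody
    double GetMomentum() const { return mMomentum; }
protected:               // "#" protected: visible to this class and derived classes
    void SetMomentum(double p) { mMomentum = p; }
private:                 // "-" private: visible to this class only
    double mMomentum;
};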
Classes can relate (be associated with) to each other in different ways:
Inheritance is one of the fundamental concepts of Object Oriented programming, in which a class “gains” all of the attributes and operations of the class it inherits from, and can override/modify some of them, as well as add more attributes and operations of its own.
In UML, a Generalisation association between two classes puts them in a hierarchy representing the concept of inheritance of a derived class from a base class. In UML, Generalisations are represented by a line connecting the two classes, with an arrow on the side of the base class.
Visual representation of a generalisation in UML
An association represents a relationship between classes, and gives the common semantics and structure for many types of “connections” between objects.
Associations are the mechanism that allows objects to communicate with each other. An association describes the connection between different classes (the connection between the actual objects is called an object connection, or link).
Associations can have a role that specifies the purpose of the association and can be uni- or bidirectional (indicating whether the two objects participating in the relationship can send messages to each other, or if only one of them knows about the other). Each end of the association also has a multiplicity value, which dictates how many objects on this side of the association can relate to one object on the other side.
In UML, associations are represented as lines connecting the classes participating in the relationship, and can also show the role and the multiplicity of each of the participants. Multiplicity is displayed as a range [min..max] of non-negative values, with a star (*) on the maximum side representing infinity.
Visual representation of an Association in UML
Aggregations are a special type of association in which the two participating classes don't have an equal status, but make a “whole-part” relationship. An Aggregation describes how the class that takes the role of the whole is composed of (has) other classes, which take the role of the parts. For Aggregations, the class acting as the whole always has a multiplicity of one.
In UML, Aggregations are represented by an association that shows a rhomb on the side of the whole.
Visual representation of an Aggregation relationship in UML
Compositions are associations that represent very strong aggregations. This means Compositions also form whole-part relationships, but the relationship is so strong that the parts cannot exist on their own: they exist only inside the whole, and if the whole is destroyed, the parts die too.
In UML, Compositions are represented by a solid rhomb on the side of the whole.
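In code, the difference between aggregation and composition often shows up as ownership. A hedged C++ sketch (the class names are invented for illustration):

class Engine { };

class Car {
public:
    explicit Car(Engine* e) : mEngine(e) { }  // aggregation: the Car uses an Engine
                                              // it does not own; the part can outlive the whole
private:
    Engine* mEngine;               // aggregation: the whole holds a reference to the part
    struct Wheel { };
    Wheel mWheels[4];              // composition: the wheels are created and
                                   // destroyed together with the Car
};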
Class diagrams can contain several other items besides classes.
Interfaces are abstract classes, which means instances cannot be created directly from them. They can contain operations but no attributes. Classes can inherit from interfaces (through a realisation association), and instances can then be made of these classes.
Datatypes are primitives which are typically built into a programming language. Common examples include integers and booleans. They cannot have relationships to classes, but classes can have relationships to them.
Enums are a simple list of values. A typical example is an enum for the days of the week. The options of an enum are called Enum Literals. Like datatypes, they cannot have relationships to classes, but classes can have relationships to them.
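Taking the days-of-the-week example literally, such an enum could be written in C++ as:

enum DayOfWeek { kMonday, kTuesday, kWednesday, kThursday,
                 kFriday, kSaturday, kSunday };  // each name is an enum literal
DayOfWeek today = kWednesday;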
Program | Path | Comment
---|---|---
awk | /bin/awk | MAC has it in /usr/bin
basename | /bin/basename | MAC has it in /usr/bin
cat | /bin/cat |
cp | /bin/cp |
cut | /usr/bin/cut |
crontab | /usr/bin/crontab |
df | /bin/df |
date | /bin/date |
domainname | /bin/domainname |
find | /usr/bin/find |
grep | /bin/grep | On Solaris, /usr/xpg4/bin/grep allows extended patterns while the default does not (for example, -E works only with xpg4). MAC has it in /usr/bin
hostname | /bin/hostname |
id | /usr/bin/id | (checked only on Solaris & Linux)
mkdir | /bin/mkdir |
mkfifo | /usr/bin/mkfifo |
netstat | /bin/netstat (Linux/Solaris), /usr/sbin/netstat (True64) |
nm | | This program does not have a standard location
ps | /bin/ps |
pwd | /bin/pwd |
rm | /bin/rm |
sed | /bin/sed | MAC has it in /usr/bin
sort | /bin/sort |
tail | /usr/bin/tail |
test | /usr/bin/test | On MAC it is /bin/test, but on Linux there is no such file
uname | /bin/uname | MAC has it in /usr/bin
uniq | /usr/bin/uniq |
touch | /bin/touch |
uptime | /usr/bin/uptime |
vmstat | /usr/bin/vmstat |
wc | /usr/bin/wc |
xargs | /usr/bin/xargs |