= AMA on Stoomboot 14.2.22 =
== Running at Nikhef on SARA ESDs ==
Below is a (personal) recipe to run on ESDs at Nikhef. The instructions assume an interactive login to Stoomboot; instructions for submitting a job to the '''Stoomboot batch''' can be found at the bottom of the page.
From your local desktop log in to Stoomboot:
<pre>qsub -X -I -q qlong</pre>
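Here -I requests an interactive session, -X enables X11 forwarding, and -q qlong selects the qlong queue (these are the standard PBS/Torque qsub options).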
Go to your working area and make a working directory
<pre>
bash
cd /project/atlas/users/egge/testarea/
mkdir twiki
cd twiki
mkdir 14.2.22
</pre>
Create a requirements file, for example
<pre>
set CMTSITE STANDALONE
set SITEROOT /data/atlas/offline/14.2.22
macro ATLAS_TEST_AREA /project/atlas/users/egge/testarea/twiki
macro ATLAS_DIST_AREA ${SITEROOT}
macro SITE_PROJECT_AREA ${SITEROOT}
macro EXTERNAL_PROJECT_AREA ${SITEROOT}
apply_tag opt
apply_tag setup
apply_tag simpleTest
apply_tag 14.2.22
apply_tag 32
apply_tag runtime
use AtlasLogin AtlasLogin-* $(ATLAS_DIST_AREA)
set CMTCONFIG i686-slc4-gcc34-opt
</pre>
Then
<pre>
source /data/atlas/offline/14.2.22/CMT/v1r20p20080222/mgr/setup.sh
cmt config
</pre>
Every subsequent time, the following two lines are all that is needed to set up 14.2.22:
<pre>
cd /project/atlas/users/egge/testarea/twiki/14.2.22/
source setup.sh
</pre>
To be able to check out AMA from CERN, get a Kerberos ticket:
<pre>/usr/kerberos/bin/kinit -5 egge@CERN.CH</pre>
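If you want to check that the ticket was obtained, klist should work from the same directory (an optional extra step, not part of the original recipe):
<pre>
/usr/kerberos/bin/klist
</pre>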
Then actually check out AMA:
<pre>
cmt co -r AMACore-00-00-02-01 PhysicsAnalysis/AnalysisCommon/AMA/AMACore
cmt co -r AMAAthena-00-00-07 PhysicsAnalysis/AnalysisCommon/AMA/AMAAthena
</pre>
and compile
<pre>
cd PhysicsAnalysis/AnalysisCommon/AMA/AMAAthena/cmt
cmt br gmake
</pre>
----
In '''ANOTHER''' shell, set up dq2:
<pre>
source /project/atlas/nikhef/dq2/dq2_setup.sh.NIKHEF
voms-proxy-init -voms atlas
</pre>
Check that everything is OK with
<pre>voms-proxy-info -all</pre>
To see what SARA destinations exist
<pre>dq2-destinations | grep "SARA"</pre>
To my knowledge, SARA-MATRIX_DATADISK is the one to use (experts, please comment). To see the available (complete) datasets on SARA-MATRIX_DATADISK:
<pre>dq2-list-dataset-site -c SARA-MATRIX_DATADISK</pre>
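The listing can be long; to narrow it down you can pipe it through grep, e.g. for the cosmag datasets used below:
<pre>
dq2-list-dataset-site -c SARA-MATRIX_DATADISK | grep cosmag
</pre>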
Let's take "data08_cosmag.00090721.physics_MBTS_BCM_LUCID.recon.ESD.o4_f70" as an example
<pre>
cd /project/atlas/users/egge/testarea/twiki/14.2.22/PhysicsAnalysis/AnalysisCommon/AMA/AMAAthena/
define_dq2_sample -n cosmag90721 data08_cosmag.00090721.physics_MBTS_BCM_LUCID.recon.ESD.o4_f70 SARA-MATRIX_DATADISK
</pre>
This creates the sample definition file "samples/cosmag90721.def" that the job options below refer to.
----
'''BACK''' in the Stoomboot shell, set up the Grid here as well:
<pre>
cd /project/atlas/users/egge/testarea/twiki/14.2.22/PhysicsAnalysis/AnalysisCommon/AMA/AMAAthena/
source /project/atlas/nikhef/dq2/dq2_setup.sh.NIKHEF
voms-proxy-init -voms atlas
</pre>
''The warning "bash: /afs/cern.ch/atlas/offline/external/GRID/ddm/DQ2Clients/latest/setup.sh: No such file or directory" seems to be harmless.''
AFTER setting up the Grid on Stoomboot, ALWAYS do the following. (Unsetting PYTHONPATH presumably clears the Python paths left behind by the dq2 setup, which would otherwise clash with the Athena environment.)
<pre>
unset PYTHONPATH
source /project/atlas/users/egge/testarea/twiki/setup.sh
export TMPDIR=/tmpdir
</pre>
To make sure you have the correct (14.2.22) database settings, please do
<pre>
cp /project/atlas/users/egge/testarea/14.2.22/PhysicsAnalysis/AnalysisCommon/AMA/AMAAthena/setupNikhefDB.sh .
source setupNikhefDB.sh
</pre>
Then, modify the following in "share/AMAAthena_jobOptionsESD.py"
<pre>
EvtMax = 1000
sampleFile = "samples/cosmag90721.def" # File with input collections
</pre>
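EvtMax is the number of events to process; the usual Athena convention is that EvtMax = -1 runs over all events.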
Then copy "share/input_FileStagerRFCP" to "share/input_FileStager.py" and apply the following settings in "share/input_FileStager.py":
<pre>
#stagetool.CpCommand = "rfcp"
#stagetool.CpArguments = []
#stagetool.OutfilePrefix = ""
stagetool.checkGridProxy = False
...
svcMgr.EventSelector.SkipBadFiles = True
</pre>
Then all should be ready to run the job
<pre>
cd /project/atlas/users/egge/testarea/twiki/14.2.22/PhysicsAnalysis/AnalysisCommon/AMA/AMAAthena
athena -s share/AMAAthena_jobOptionsESD.py
</pre>
=== Submitting in Stoomboot batch mode ===
As noted in the instructions above, setting up the Grid on Stoomboot results in an error:
''"bash: /afs/cern.ch/atlas/offline/external/GRID/ddm/DQ2Clients/latest/setup.sh: No such file or directory"''
This can be avoided by sourcing only the following line from /project/atlas/nikhef/dq2/dq2_setup.sh.NIKHEF:
<pre>source /global/ices/lcg/current/etc/profile.d/grid_env.sh</pre>
When submitting a job to Stoomboot you need your Grid proxy to be available on Stoomboot. To accomplish this, do the following (thanks to Max: [https://twiki.cern.ch/twiki/bin/view/Main/FileStager FileStager twiki]) '''in a clean shell on your local desktop''':
<pre>
source /global/ices/lcg/current/etc/profile.d/grid_env.sh
voms-proxy-init -voms atlas -out $HOME/.globus/gridproxy.cert
export X509_USER_PROXY=${HOME}/.globus/gridproxy.cert
</pre>
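To verify the proxy file you just created (an optional extra check; -file points voms-proxy-info at a specific proxy instead of the default location):
<pre>
voms-proxy-info -file ${HOME}/.globus/gridproxy.cert
</pre>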
And then start your submit script with
<pre>
#!/bin/sh
## script for restarting grid proxy certificate
source /global/ices/lcg/current/etc/profile.d/grid_env.sh
export X509_USER_PROXY=${HOME}/.globus/gridproxy.cert
voms-proxy-init -voms atlas -noregen

#plus rest of instructions
</pre>
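The -noregen option makes voms-proxy-init attach the VOMS attributes to the existing proxy instead of generating a new one, so the batch job does not prompt for a passphrase.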
As a personal example, I used the following submit.sh:
<pre>
#!/bin/sh

source /global/ices/lcg/current/etc/profile.d/grid_env.sh
export X509_USER_PROXY=${HOME}/.globus/gridproxy.cert
voms-proxy-init -voms atlas -noregen

cd /project/atlas/users/egge/testarea/14.2.22/
unset PYTHONPATH
source setup.sh

export TMPDIR=/tmpdir
mkdir -p /tmpdir/egge   # -p: do not fail if the directory already exists

cd /project/atlas/users/egge/testarea/14.2.22/PhysicsAnalysis/AnalysisCommon/AMA/AMAAthena/

source setupNikhefDB.sh

athena -s AMAAthena_jobOptionsESD.py
</pre>
This is submitted (to the qlong queue!) from your local desktop with
<pre>qsub -V -q qlong submit.sh</pre>
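To follow the job you can use the standard PBS commands (assuming the usual Torque setup on Stoomboot):
<pre>
qstat -u $USER   # list your queued and running jobs
</pre>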
Have fun!