
Running analyses at Nikhef

General information

This page is an index of technical and computing information for the ALICE group.

If you want to launch your analysis on the Nikhef batch farm and you are connecting from outside the Nikhef domain, first connect to the login server:

ssh <username>@login.nikhef.nl

Please note that you should never run anything on this machine (not even a browser), since it is the entry point to the Nikhef domain for everybody. For light tasks (e.g. browsing a web page, downloading a paper) connect to one of the desktops of our group:

vesdre, romanche, hamme, blavet, mella, mulde, olona, sacco, luhe
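For example, from the login server (or from within the Nikhef domain) you can reach one of these desktops with ssh; the full hostname below is an assumption, adjust it to the machine you actually use:

ssh <username>@vesdre.nikhef.nl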

If you need to run an analysis interactively that accesses data stored on the common disk, connect to one of the interactive nodes:

stbc-i1, stbc-i2, stbc-i4
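If you are coming from outside Nikhef, you can jump through the login server in one go. This is only a sketch, assuming the node names resolve as written below and your ssh client supports the -J option:

ssh -J <username>@login.nikhef.nl <username>@stbc-i1.nikhef.nl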

Storage information

  • Each user is allocated a limited amount of space under the home directory. Please use this space only for lightweight files, e.g. macros, pdf files, ...
  • As a group we are given 300GB under /project/alice, which is used to install our common software. Please do not use this space to store large files, e.g. root files. This directory can be accessed by all machines, i.e. both the desktops (see the names above) and the interactive and batch nodes of Nikhef.
  • The current common storage at Nikhef is based on dCache, an efficient system for storing and retrieving large files. It is accessible only from the interactive and batch nodes. As a group we have ~300TB of disk space under /dcache/alice. This is currently the place where you can store your files for analysis, production, etc. If you want access to it, please send a mail to Panos Christakoglou. Note that this storage is not visible to AliEn; it is reserved for local usage. It can, however, see the GRID file catalogue, which makes it possible to copy productions locally (the typical use case of this storage for the group).
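To get a quick impression of these areas from an interactive node, plain shell commands suffice, for instance:

df -h /project/alice /dcache/alice   # free space on the group areas
ls /dcache/alice                     # browse the group's dCache area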

ALICE software and how to access it

The ALICE software is no longer installed locally at Nikhef. If you still require a specific version to be installed locally (e.g. for debugging or development), please send a mail to Panos Christakoglou, indicating the tag you need.

The ALICE software can instead be accessed via cvmfs, which is centrally installed on all machines at Nikhef. To get it working, first add the following line to your .bashrc (or whichever shell startup script you use):

source /cvmfs/alice.cern.ch/etc/login.sh

To list the available modules type:

alienv q | grep AliPhysics

To load the ALICE environment on your Nikhef node, use:

alienv enter VO_ALICE@AliPhysics::vAN-20160905-1

replacing the AliPhysics tag with the one you want to use.
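Putting the pieces together, a typical interactive session could look like this; the macro name is only a placeholder:

source /cvmfs/alice.cern.ch/etc/login.sh          # once per shell, or from your .bashrc
alienv q | grep AliPhysics                        # list the available AliPhysics versions
alienv enter VO_ALICE@AliPhysics::vAN-20160905-1  # enter the environment
aliroot -b -q myAnalysis.C                        # run your macro (placeholder name)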

Running analyses on the Stoomboot cluster

We have two local computer clusters: the Stoomboot batch cluster at Nikhef, described in this section, and the Quark cluster at Utrecht University (see the last section).

A new storage system based on dCache has been deployed at Nikhef. Overall, close to 300TB of disk space are available to the group. This storage is intended for local batch analysis (i.e. using Stoomboot) of data samples that have been moved to dCache. Currently the LHC10h (AOD160) data and the low interaction rate runs of the LHC15o Pb-Pb period are stored under /dcache/alice/panosch/alice/data/. Below you can find a template of a script, called submit.sh (it can certainly be written better), that launches a series of batch jobs to analyze either the 2010 or the 2015 data. You run it from one of the Nikhef desktops with

source submit.sh lhc10h.txt 2010

where the text file contains the run numbers of the 2010 data copied to Nikhef (one run number per line) and 2010 indicates the production year.
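For illustration, the run list is a plain text file with one run number per line; the numbers below are examples only, not necessarily the runs copied to Nikhef:

139510
139507
139505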

#!/bin/bash
# Usage: source submit.sh <run list> <year>   e.g.  source submit.sh lhc10h.txt 2010
SCRIPT="runAnalysis.sh"
while IFS= read -r runNumber || [ -n "$runNumber" ]; do
    echo "Adding run number from file: $runNumber"

    # write the job script for this run
    (
        echo "#!/bin/bash"
        echo "source /cvmfs/alice.cern.ch/etc/login.sh"
        # escape the $ so that alienv is evaluated on the batch node, not here
        echo "eval \$(alienv printenv VO_ALICE@AliPhysics::vAN-20161005-1)"
        echo "which aliroot || exit 1"
        if [ "$2" == "2010" ]; then
            echo "cd /dcache/alice/panosch/alice/data/2010/LHC10h/AOD160/$runNumber"
        elif [ "$2" == "2015" ]; then
            echo "cd /dcache/alice/panosch/alice/data/2015/LHC15o/000$runNumber"
            echo "cd pass2_lowIR"
        else
            exit
        fi
        echo "pwd"
        echo "if [ -f AnalysisResults.root ]; then"
        echo "    rm -rf AnalysisResults.root"
        echo "fi"
        echo "if [ ! -f runFlowPIDSPTask.C ]; then"
        echo "    ln -s /user/panosch/ALICE/Flow/HigherHarmonics/Stoomboot/runFlowPIDSPTask.C ."
        echo "fi"
        echo "exec aliroot -b -q runFlowPIDSPTask.C"
    ) > "$SCRIPT"

    qsub -q stbcq "$SCRIPT"

done < "$1"
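Once the jobs are submitted you can follow them with the standard Torque/PBS commands from an interactive node, for example:

qstat -u $USER    # list your queued and running Stoomboot jobs
qdel <job id>     # remove a job from the queue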

Using singularity to run Jetscape at Nikhef

On the Nikhef stoomboot cluster and login nodes, singularity is available to run code in 'containers' (an isolated system-within-a-system environment), which is useful, for example, for running the Jetscape generator.

The singularity executable is available at:

/cvmfs/oasis.opensciencegrid.org/mis/singularity/bin/singularity 

You can use the full path, or define an alias in your .bashrc file.
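For example, a line like the following in your .bashrc makes the command available without the full path (the alias name is simply a choice):

alias singularity='/cvmfs/oasis.opensciencegrid.org/mis/singularity/bin/singularity'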

The steps to obtain and run Jetscape are:

1) Download/checkout Jetscape from github following Step 2, point 1, of the Jetscape Docker instructions.

NB: it is better not to use your home directory (~/jetscape-docker) but to create a dedicated directory on the project disk (/project/alice/users/$USER), as sketched below.
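A minimal sketch of this step, assuming the directory layout from the Docker instructions (a jetscape-docker directory holding the JETSCAPE checkout):

mkdir -p /project/alice/users/$USER/jetscape-docker
cd /project/alice/users/$USER/jetscape-docker
git clone https://github.com/JETSCAPE/JETSCAPE.git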

2) 'Pull' the docker container that is mentioned under point 2:

/cvmfs/oasis.opensciencegrid.org/mis/singularity/bin/singularity pull docker://jetscape/base:v1.4
This will probably fail with a 'Disk quota exceeded' message. To fix this, move your singularity cache to the project disk:

mv ~/.singularity/cache /project/alice/users/$USER/singularity-cache
ln -s /project/alice/users/$USER/singularity-cache ~/.singularity/cache

(Another way to achieve this is by setting the SINGULARITY_CACHEDIR environment variable.)

Then try again. The pull downloads some cache files and produces a singularity image file named base_v1.4.sif.
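If you prefer the environment-variable route, a line like this in your .bashrc has the same effect:

export SINGULARITY_CACHEDIR=/project/alice/users/$USER/singularity-cache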

3) Enter the Jetscape container:

/cvmfs/oasis.opensciencegrid.org/mis/singularity/bin/singularity run --home /project/alice/users/$USER/jetscape-docker:/home/jetscape-user base_v1.4.sif

You still have to do 'cd' once to get to the home directory inside the container.

4) Compile Jetscape, following step 2.3 of the Jetscape instructions (see the sketch below).

You now have a container that is ready to run Jetscape. For running instructions, see the Jetscape summer school material.
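A minimal sketch of the build step, assuming the standard CMake build described in the Jetscape instructions and the checkout from step 1 (run these inside the container):

cd ~/JETSCAPE     # the checkout from step 1, visible under the container home
mkdir -p build
cd build
cmake ..
make -j4          # adjust the number of parallel jobs to the machine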

Quark cluster at Utrecht University

If you have a Utrecht University computing (Solis) account, you can get access to the Quark cluster in Utrecht. Discuss with your supervisor whether this is useful and who to ask to activate your account. Further information can be found on the UU GRASP computing page.