Qsub-tunnel
You can submit jobs from the login systems (lena and tevere) or the stoomboot interactive nodes directly to the NDPF grid clusters using the qsub command. This allows for limited out-scaling of local analysis jobs to the grid clusters, benefiting from more compute power and better connectivity to the storage systems. We hope you find this facility useful and appreciate feedback. The initial stage will consist of a limited end-user trial in which we aim to identify configuration errors and inconsistencies.

= Current status =
This is an experimental service: there is no service level defined for the Qsub tunnel, and it is not guaranteed to be available or to suit any purpose. For the time being, this service is open to selected users in Atlas and LHCb in order to evaluate its usefulness and to identify missing features and bugs. To gain access, users must send a request to email@example.com and ask for their account to be enabled for this service.
- Shared file system: is at /data/tunnel, with a user/name directory and a group/experiment directory for installing your programs.
- Job submission: can be done via qsub and some magic, but nsub (/global/ices/toolset/bin/nsub) has all the magic built in.
- What about my home directory? Your home directory on your desktop and windows box is not available to your job. If it were, all files over there could be read by anyone running a grid job! So, you have a dedicated home directory on the grid cluster (/data/tunnel/user/name).
- Running out of space? Your 'tunnel' directory is for the exchange of log files and stdout/stderr only. Use the lcg-cr tools to access the fast grid storage from your job and the desktop, or from your desktop use rfio (with your grid proxy) to read/write from disk at Nikhef.
- But rfdir is not found in my path on the desktop! Source /global/ices/lcg/current/etc/profile.d/grid-env.sh and get a valid proxy with voms-proxy-init -voms experiment. Then, try e.g. rfdir /dpm/nikhef.nl/home/atlas/users/, or rfdir /dpm/nikhef.nl/home/lhcb (see the sketch after this list).
- Questions? Contact firstname.lastname@example.org, or drop by in H1.50 - H1.59!
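As an illustration of the last two answers, a minimal sketch for the desktop side (the atlas VO is used as an example; substitute your own experiment):

 source /global/ices/lcg/current/etc/profile.d/grid-env.sh   # put the grid tools in your PATH
 voms-proxy-init -voms atlas                                  # obtain a VOMS proxy for your experiment
 rfdir /dpm/nikhef.nl/home/atlas/users/                       # browse the grid storage via rfio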
The NDPF qsub service has the following features:
- scaling to a larger number of nodes
- improved connectivity to storage at Nikhef (up to 160 Gbps, or almost 20 GByte/s). For Atlas, this includes the AODs and local group disk
- dedicated connectivity to storage at SARA (10 Gbps). For Atlas, this includes ESD and RAW data
- default installation of the experiment-recommended software has already been done for you (in /data/esia/exp/, as pointed to by $VO_exp_SW_DIR)
- all grid tools available by default
- fast local disk space is available on each worker node. Use $TMPDIR for all your data storage and cache needs -- and not your home! The size of the TMPDIR is 30 GByte per job slot (see the sketch after this list).
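For example, a fragment of a job script that works in the local scratch area and picks up the experiment software might look like this (a sketch only; the setup.sh under $VO_ATLAS_SW_DIR is a hypothetical example, your experiment documents its own setup procedure):

 cd $TMPDIR                          # fast local scratch, purged when the job ends
 source $VO_ATLAS_SW_DIR/setup.sh    # hypothetical experiment setup script
 # ... run your analysis here, writing all intermediate files to $TMPDIR ...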
Due to the nature of the setup, the following limitations apply:
- your home directory on your desktop is NOT available in the NDPF grid clusters (making it available would make your directory effectively world readable!)
- the only shared file system is /data/tunnel, which appears on both sides (desktop network and NDPF) in the same way. The shared file system has a user area ".../user/uid/" and a group area for installing software ".../group/experiment/".
- you must not use the shared file system to transfer files in and out. Use the grid storage for that purpose!
- the shared file system is a data disk: there is no backup of the files stored there
- for users that do not have their own home directory on the NDPF grid UI, the home directory is set to the tunnel cache area.
- no more than 3 GByte of RAM and 30 GByte of TMPDIR per core, please
If you try to abuse the shared file system for data-intensive work, you will kill the file server and impede all other users of the cluster. So please don't!
= Example job script =
On the 'NDPF' side of the cluster, you will see the tunnel file system and the grid tools (including the experiment software distributions that have been installed for you by your experiment computing coordinators or VO operations manager).
The 'shared file system' (/data/tunnel/) is not intended for transfer of data files, and should only be used for log files, standard input and output, and for hosting job scripts and your own programs. Quota on this file system is intentionally limited (to 256 MByte per user). If you were to use it for data transfer, you would just kill the server and prevent all other users from doing any work.
To store and retrieve files, use the following:
- when submitting, make sure you have a valid grid proxy. Use voms-proxy-init -voms yourVO to get a proxy.
- to store temporary or large data files, use the directory pointed to by $TMPDIR. This is local to each worker node and on a very fast disk. It will be purged on job completion
- to retrieve data, use your experiment's data management tools, or the lcg-cp command
- to write result data at the end of the job, do the same: write the results to grid storage using your data management tools or lcg-cp and retrieve them on your desktop or laptop. Note that you cannot overwrite files on grid storage -- select unique names or remove your old data with lcg-del (see the sketch after this list).
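As an illustration, a minimal sketch of fetching a result file onto your desktop and removing old data (the SURL shown is just the example path from the job script below; use your own path and VO):

 voms-proxy-init -voms atlas
 lcg-cp srm://tbn18.nikhef.nl/dpm/nikhef.nl/home/pvier/davidg/goodbye.txt file:$PWD/goodbye.txt
 lcg-del srm://tbn18.nikhef.nl/dpm/nikhef.nl/home/pvier/davidg/goodbye.txt   # remove old data, since grid files cannot be overwritten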
A typical job script looks like:
 #! /bin/sh
 #
 # @(#)/user/davidg/ndpf-example.sh
 #
 #PBS -q email@example.com
 #
 cd $TMPDIR
 lcg-cp srm://tbn18.nikhef.nl/dpm/nikhef.nl/home/pvier/davidg/welcome.txt file:`pwd`/welcome.txt
 cat welcome.txt > goodbye.txt
 lcg-cp file:`pwd`/goodbye.txt srm://tbn18.nikhef.nl/dpm/nikhef.nl/home/pvier/davidg/goodbye.txt
 dt=`date`
 nodes=`cat $PBS_NODEFILE`
 echo "Copied the welcome to the goodbye file on $dt!"
 echo "and used hosts $nodes for that"
which you would submit using the nsub wrapper described below, for example:
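(assuming the script above has been saved as ndpf-example.sh in your working directory):

 nsub ndpf-example.sh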
For more complex data movement scenarios, contact firstname.lastname@example.org for suggestions.
= Queues and PBS server =
The NDPF PBS server is (currently) called korf.nikhef.nl and has one queue enabled for local job submission. This queue is called medium and has a maximum wall clock time of 36 hours. By default, the nsub utility will submit to this medium queue.
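Should you want to name the queue explicitly, the nsub wrapper described in the next section accepts a -q option, for example (myjob.sh is a placeholder for your own job script):

 nsub -q medium myjob.sh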
= nsub wrapper (/global/ices/toolset/bin/nsub) =
The nsub utility submits batch scripts from hosts on the Nikhef network to the NDPF using the shared file system. The jobs can be submitted from selected hosts on the Nikhef network and will run on the NDPF clusters.

 Usage: nsub [-o output] [-e error] [-q queue] [-v] [--quiet] [-I] [-N name] [-f proxyfile] [-c rcfile] <job> ...

 Arguments:
   -o outputfile        destination of stdout, must be in sharedfs
   -e errorfile         destination of stderr, must be in sharedfs
   -q queue             queue to which to submit on stro.nikhef.nl
   -v/--quiet           be more verbose/do not warn about relocations
   -N name              name of the job in the pbs queue
   -I                   interactive jobs are NOT SUPPORTED
   -n                   do not actually submit job (dryrun)
   -c rcfile            additional definitions and defaults
   -f file | --noproxy  use proxy in <file>, or ignore proxies altogether
   <job> ...            more than one job may be submitted to PBS
                        (no -o or -e possible if more than one job given)

 Defaults:
   shared file system   /data/tunnel
   user directory       /data/tunnel/user/davidg
   pbs server           stro.nikhef.nl
   queue                infra
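A typical invocation, writing stdout and stderr to the shared file system, might look like this (the job name, script name, and output paths are illustrative only):

 nsub -N myanalysis \
      -o /data/tunnel/user/davidg/myanalysis.out \
      -e /data/tunnel/user/davidg/myanalysis.err \
      myanalysis.sh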
When using grid tools on your desktop or on stoomboot, you may have references to the desktop grid installation (/global/ices/lcg/current/...) set in your environment variables. These will not work on the NDPF, where the grid tools have been installed as part of the operating system. For consistency reasons, the nsub wrapper will remove such variables from your environment as you submit your job, unless you have them pointing explicitly to the shared file system in /data/tunnel.
If you submit jobs manually, check or remove the following variables:
 LCG_GFAL_INFOSYS  GLITE_LOCATION  GLOBUS_TCP_PORT_RANGE  GLITE_LOCATION_VAR
 GLITE_SD_SERVICES_XML  EDG_LOCATION  EDG_WL_LOCATION  GLITE_SD_PLUGIN
 GLITE_WMS_LOCATION  GLITE_LOCAL_CUSTOMIZATION_DIR  X509_CERT_DIR  X509_VOMS_DIR
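For manual submission, a minimal sketch of clearing these variables in a Bourne-style shell before calling qsub (assuming you do not need them pointing into /data/tunnel):

 # unset the desktop grid environment variables listed above
 for v in LCG_GFAL_INFOSYS GLITE_LOCATION GLOBUS_TCP_PORT_RANGE GLITE_LOCATION_VAR \
          GLITE_SD_SERVICES_XML EDG_LOCATION EDG_WL_LOCATION GLITE_SD_PLUGIN \
          GLITE_WMS_LOCATION GLITE_LOCAL_CUSTOMIZATION_DIR X509_CERT_DIR X509_VOMS_DIR
 do
   unset $v
 done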