Difference between revisions of "Qsub-tunnel"

From PDP/Grid Wiki

Revision as of 14:05, 1 June 2010

Examples

Example job script

In the 'NDPF' side of the cluster, you will see the tunnel file system and the grid tools (including the experiment software distributions that have been installed for you by your experiment computing coordinators or VO operations manager).

The 'shared file system' (/data/tunnel/) is not intended for the transfer of data files, and should only be used for log files, standard input and output, and for hosting job scripts and your own programs. Quota on this file system is intentionally limited (to 256 MByte per user). If you were to use it for data transfer, you would just kill the server and prevent all other users from doing any work.
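To see how close you are to that 256 MByte limit, you can check the size of your tunnel directory with du. A minimal sketch; the TUNNEL_USER_DIR variable is an assumption for this example, and the default path follows the per-user userdirectory convention used on this file system:

```shell
#! /bin/sh
# Report usage of your tunnel directory against the 256 MByte quota.
# TUNNEL_USER_DIR is an example override; the default path is an assumption
# based on the per-user layout of /data/tunnel.
dir="${TUNNEL_USER_DIR:-/data/tunnel/user/$USER}"
used_kb=$(du -sk "$dir" 2>/dev/null | cut -f1)
used_kb=${used_kb:-0}
quota_kb=$((256 * 1024))
echo "using ${used_kb} of ${quota_kb} kB in $dir"
if [ "$used_kb" -gt "$quota_kb" ]; then
    echo "over quota: move data to grid storage instead" >&2
fi
```

If the reported usage approaches the quota, move the large files to grid storage rather than leaving them on the shared file system.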

To store and retrieve files, use the following:

- when submitting, make sure you have a valid grid proxy. Use voms-proxy-init -voms yourVO to get one.
- to store temporary or large data files, use the directory pointed to by $TMPDIR. This is local to each worker node and on a very fast disk; it will be purged on job completion.
- to retrieve data, use your experiment's data management tools, or the lcg-cp command.
- to write result data at the end of the job, do likewise: write the results to grid storage using your data management tools or lcg-cp and retrieve them on your desktop or laptop.

A typical job script looks like:

#! /bin/sh
#
# @(#)/user/davidg/ndpf-example.sh
#
#PBS -q medium@stro.nikhef.nl
#
# work in the node-local scratch directory
cd $TMPDIR
# fetch the input file from grid storage to the local disk
lcg-cp srm://tbn18.nikhef.nl/dpm/nikhef.nl/home/pvier/davidg/welcome.txt file:`pwd`/welcome.txt
# do the actual work
cat welcome.txt > goodbye.txt
# write the result back to grid storage
lcg-cp file:`pwd`/goodbye.txt srm://tbn18.nikhef.nl/dpm/nikhef.nl/home/pvier/davidg/goodbye.txt

which you would submit like:

/global/ices/toolset/ibin/nsub /user/davidg/ndpf-example.sh

For more complex data movement scenarios, contact grid.support@nikhef.nl for suggestions.


Documentation

nsub wrapper

The nsub utility submits batch scripts from ikohefnet hosts to the
NDPF using the shared file system. The jobs can be submitted from selected
hosts on the ikohef network and will run on the NDPF clusters.
Usage: nsub [-o output] [-e error] [-q queue] [-v] [--quiet] [-I]
      [-N name] [-f proxyfile] [-c rcfile] <job> ...
Arguments:
  -o outputfile            destination of stdout, must be in sharedfs
  -e errorfile             destination of stderr, must be in sharedfs
  -q queue                 queue to which to submit on stro.nikhef.nl
  -v/--quiet               be more verbose/do not warn about relocations
  -N name                  name of the job in the pbs queue
  -I                       interactive jobs are NOT SUPPORTED
  -n                       do not actually submit job (dryrun)
  -c rcfile                additional definitions and defaults
  <job> ...                more than one job may be submitted to PBS,
                           (no -o or -e possible if more than one job given)
Defaults:
  shared file system       /data/tunnel
  userdirectory            /data/tunnel/user/davidg
  pbs server               stro.nikhef.nl
  queue                    infra
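Putting these options together, a dry run might look as follows. This is a sketch: the queue name, job name, and log file paths are illustrative assumptions, and the -n flag keeps nsub from actually submitting anything. The snippet only assembles and prints the command line so you can inspect it first:

```shell
#! /bin/sh
# Assemble an example nsub command line; queue, job name, and log paths
# are illustrative, not required values.  The -n flag makes nsub do a
# dry run, so this invocation would not actually submit the job to PBS.
set -- -n -q medium -N example-job \
    -o /data/tunnel/user/davidg/example.out \
    -e /data/tunnel/user/davidg/example.err \
    /user/davidg/ndpf-example.sh
# print the full command line for inspection before running it
echo /global/ices/toolset/ibin/nsub "$@"
```

Note that -o and -e must point inside the shared file system (/data/tunnel), and that -o and -e cannot be used when more than one job script is given.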