Stoomboot

From Atlas Wiki
 
Revision as of 14:12, 8 December 2008

What is stoomboot

Stoomboot is a batch farm for local use at NIKHEF. It is in principle open to all NIKHEF users, but a login account does not automatically grant access to stoomboot. Contact helpdesk@nikhef.nl to gain access.

Hardware

Stoomboot consists of 16 nodes (stbc-01 through stbc-16), each equipped with two quad-core Intel Xeon E5335 2.0 GHz processors and 16 GB of memory. The total number of cores is 128.

Software & disk access

All stoomboot nodes run Scientific Linux 4.7. All NFS-mountable disks at NIKHEF are visible (/project/* and /data/*). Stoomboot does not run AFS, so AFS directories, including /afs/cern.ch, are not visible.

How to use stoomboot

Submitting batch jobs

Stoomboot is a batch-only facility; jobs are submitted through the PBS qsub command:

unix> qsub test.sh
9714.allier.nikhef.nl

The argument passed to qsub is a script that will be executed in your home directory. The returned string is the job identifier; it can be used to look up the status of the job, or to manipulate it later. Jobs can be submitted from any Linux desktop at NIKHEF as well as from login.nikhef.nl. If you cannot submit jobs from your local desktop, contact helpdesk@nikhef.nl to have the batch client software installed.

The output of the job appears in files named <jobname>.o<number>, e.g. test.sh.o9714 in the example above. The following default settings apply when you submit a batch job:

  • The job runs in your home directory ($HOME).
  • The job starts with a clean shell: environment variables from the shell from which you submit are not transferred to the batch job. E.g. if you need the ATLAS software setup, it should be done in the submitted script.
  • Job output (stdout) is sent to a file in the directory from which the job was submitted; stderr is sent to a separate file. In the example above, test.sh.o9714 contains stdout and test.sh.e9714 contains stderr. If there is no stdout or stderr, an empty file is created.
  • A mail is sent to you if the output files cannot be created.
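Given the defaults above, a job script must be self-contained. A minimal sketch (the echo lines are illustrative, not from the original; the script corresponds to the test.sh submitted earlier):

```shell
#!/bin/sh
# Minimal sketch of a batch job script: it starts in $HOME with a
# clean environment, so any setup (e.g. ATLAS software) must be
# done here, not in the submitting shell.
echo "job started on $(hostname)"    # lands in test.sh.o<number>
echo "diagnostic message" >&2        # lands in test.sh.e<number>
```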

Here is a list of frequently desired changes to the default behavior and the corresponding qsub options:

  • Merge stdout and stderr into a single file: add option -j oe to the qsub command (a single *.o* file is written)
  • Choose a batch queue: add option -q <queuename> to the qsub command. Right now there are two queues: test (30 min) and qlong (48 h)
  • Choose a different output file for stdout: add option -o <filename> to the qsub command
  • Pass all environment variables of the submitting shell to the batch job (with the exception of $PATH): add option -V to the qsub command
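These options can be combined on one command line. A sketch, assembled here as a string so the pieces are explicit (the script name myanalysis.sh and log name are hypothetical):

```shell
# Sketch: a qsub invocation combining several of the options above:
# 48 h queue, stdout and stderr merged, custom output file name.
# 'myanalysis.sh' and 'myanalysis.log' are hypothetical examples.
cmd="qsub -q qlong -j oe -o myanalysis.log myanalysis.sh"
echo "$cmd"
```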

A full list of options can be obtained from man qsub.

Examining the status of your jobs

The qstat command shows the status of all your jobs. Status code 'C' indicates completed, 'R' indicates running and 'Q' indicates queued.

unix> qstat
Job id              Name             User            Time Use S Queue
------------------- ---------------- --------------- -------- - -----
9714.allier         test.sh          verkerke        00:00:00 C test

The qstat command only shows your own jobs, not those of other users. Completed jobs are listed with status 'C' only for the first 10 minutes after completion. The output of jobs that completed longer ago is kept, but they are simply no longer listed in the status overview.

To see the activity of other users on the system you can use the lower-level Maui command showq, which shows the jobs of all users. The showq command works without arguments on login.nikhef.nl; on any other host, add --host=allier to run it successfully.

The general level of activity on stoomboot is monitored graphically at http://www.nikhef.nl/grid/stats/stbc/

Suggestions for debugging and troubleshooting

If you want to debug a problem that occurs in a stoomboot batch job, or you want to make a short trial run before launching a larger series of batch jobs, there are two ways to gain interactive login access to stoomboot:

  • You can log in directly to node stbc-16 (and to this node only) to test and/or debug your problem. Keep CPU consumption and testing time to a minimum, as regularly scheduled batch jobs run on this machine too.
  • You can request an 'interactive' batch job through qsub -q qlong -X -I. In this mode you can consume as much CPU as the queue to which the interactive job was submitted allows. The 'look and feel' of interactive batch jobs is nearly identical to that of ssh; the main exception is that when no free job slot is available, the qsub command will hang until one becomes available.

Scratch disk usage and NFS disk access

When running on stoomboot, be sure to place all local 'scratch' files in the directory pointed to by the environment variable $TMPDIR, and not in /tmp. The latter is very small (a few GB) and, when filled up, causes all kinds of problems for you and other users. The disk pointed to by $TMPDIR is typically 200 GB. Here too, be sure to clean up when your job ends to avoid filling up the disk.
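A sketch of how a job script might follow this advice: it creates a per-job subdirectory under $TMPDIR and uses a trap so the scratch area is removed even if the job fails partway. The directory and file names are hypothetical; the fallback to /tmp is only for running the sketch outside the batch system.

```shell
#!/bin/sh
# Keep scratch files on the local disk pointed to by $TMPDIR,
# not in /tmp. Fallback to /tmp here is only for illustration
# outside the batch system, where $TMPDIR may be unset.
SCRATCH="${TMPDIR:-/tmp}/myjob.$$"
mkdir -p "$SCRATCH"
trap 'rm -rf "$SCRATCH"' EXIT        # clean up even on failure
echo "intermediate results" > "$SCRATCH/work.dat"
wc -c < "$SCRATCH/work.dat"          # prints the file size in bytes
```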

Scheduling policies and CPU quota