The Stoomboot cluster is the local batch computing facility at Nikhef. It is available to users from the scientific groups for tasks such as data analysis and Monte Carlo calculations.
The cluster consists of 93 nodes with 8 cores each, running Scientific Linux CERN 6 as their operating system. For a limited time, there are also 15 nodes (8 cores per node) running Scientific Linux CERN 5, to ease the transition to SLC6.
The Stoomboot cluster uses a combination of the Torque resource manager and the Maui scheduler. To interact with the batch system (e.g. submit a job, query a job's status or delete a job), you need to log in to a Linux machine (either at the console or via SSH). Machines that can be used include desktops managed by the CT department, the login hosts and the interactive Stoomboot nodes.
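As a quick reference, the three basic interactions map onto three Torque client commands; the job ID and script name below are placeholders, not real values:

```shell
# Submit a job script; qsub prints the ID assigned to the job.
#   qsub myjob.sh
# Query the status of one job, or of all jobs of the current user.
#   qstat 1234567
#   qstat -u "$USER"
# Delete (cancel) a job, whether it is still queued or already running.
#   qdel 1234567
echo "batch commands: qsub / qstat / qdel"
```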
The command qsub is used to submit jobs to the cluster. A typical use of this command is:
qsub [-q <queue>] [-l resource_name[=[value]][,resource_name[=[value]],...]] [script]
The optional argument script is the user-provided script that does the work. If no script is provided, the input is read from the console (STDIN).
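To make this concrete, a minimal job script could look like the sketch below; the file name myjob.sh is illustrative, and the echoed lines stand in for real analysis commands. Torque executes the script on a worker node and returns the job's stdout and stderr as files in the directory the job was submitted from.

```shell
# Create a minimal job script (the name myjob.sh is a placeholder).
cat > myjob.sh <<'EOF'
#!/bin/sh
# Replace these lines with the real analysis or Monte Carlo commands.
echo "running on node: $(hostname)"
echo "job finished"
EOF
chmod +x myjob.sh
# On the cluster, submit it with e.g.:
#   qsub myjob.sh
```

Since qsub reads from STDIN when no script is given, the same commands could also be piped directly into qsub.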
Please read the section Queues below for more information about available queues and their properties. It is recommended to specify a queue.
For simple jobs, it is usually not necessary to provide the option -l with its resource list. However, if the job needs more than one core or node, or if a wall-time limit (the maximum elapsed time the job may run) should be specified, this option should be used. Examples:
- -l nodes=1:ppn=4 requests 4 cores on 1 node
- -l walltime=32:10:05 requests a wall time of 32 hours, 10 minutes and 5 seconds
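Such resource requests can also be embedded in the job script itself as #PBS directive lines, which qsub reads at submission time; options given on the command line take precedence over directives in the script. A sketch, with the script name again a placeholder:

```shell
# Job script with embedded resource requests; because qsub parses the
# #PBS lines, the options need not be repeated on the command line.
cat > myjob.sh <<'EOF'
#!/bin/sh
#PBS -l nodes=1:ppn=4
#PBS -l walltime=32:10:05
echo "requested 4 cores on 1 node and a 32h10m05s wall-time limit"
EOF
# Count the embedded directives:
grep -c '^#PBS' myjob.sh
# → 2
```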
More detailed information can be found in the manual page for qsub (man qsub).