Stoomboot cluster
The Stoomboot cluster is the local batch computing facility at Nikhef. It is accessible to users from the scientific groups for tasks such as data analysis and Monte Carlo calculations.
  
The cluster consists of 93 nodes with 8 cores each, running Scientific Linux CERN 6 (SLC6) as the operating system. For a limited time, there are also 15 nodes (8 cores per node) running Scientific Linux CERN 5, to ease the transition to SLC6.
The Stoomboot cluster uses a combination of the Torque resource manager and the Maui scheduler. To interact with the batch system (e.g. to submit a job, query a job's status or delete a job), you need to log in to a Linux machine, either at the console or via ssh. Machines that can be used include desktops managed by the CT department, the login hosts and the interactive Stoomboot nodes.
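For example, a session could start as sketched below; the host name is only a placeholder, since the actual names of the login hosts and interactive Stoomboot nodes are not listed here.

 # log in to one of the interactive Stoomboot nodes (host name is a placeholder)
 ssh <your-username>@<interactive-stoomboot-node>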
== Job Submission ==
The command qsub is used to submit jobs to the cluster. A typical use of this command is:
qsub [-q <queue>] [-l resource_name[=[value]][,resource_name[=[value]],...]] [script]
The optional argument ''script'' is the user-provided script that does the work. If no ''script'' is provided, the job script is read from standard input (STDIN).
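As a minimal sketch (the script name and its contents are made up for this example), a job script and the two ways of submitting it could look like this:

 #!/bin/bash
 # myjob.sh -- a minimal example job script (name and contents are illustrative)
 echo "Job running on $(hostname)"
 # ... run your analysis or Monte Carlo program here ...

 # submit the script as a batch job
 qsub myjob.sh
 # or, without a script, let qsub read the commands from STDIN
 echo "hostname" | qsub

On success, qsub prints the identifier of the new job; this identifier can later be used to query or delete the job.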
Please read the section Queues below for more information about available queues and their properties. It is recommended to specify a queue.
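For example, to submit a job to a specific queue (the queue name below is a placeholder; substitute one of the queues from the Queues section):

 qsub -q <queue-name> myjob.sh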
For simple jobs, it is usually not necessary to provide the option -l with its resource list. However, if the job needs more than one core or node, or if the wall time limit of the job (the maximum time the job is allowed to run) should be specified, this option should be used.
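As an example of the Torque resource syntax, a job that needs a full 8-core node and a wall time limit of 24 hours could be submitted as follows (queue and script names are again placeholders):

 # request one node with 8 cores and a maximum wall time of 24 hours
 qsub -q <queue-name> -l nodes=1:ppn=8,walltime=24:00:00 myjob.sh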
More detailed information can be found in the manual page for qsub:
man qsub
== Job Status ==
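The status of jobs can be queried with the Torque client tools; a minimal sketch, assuming qstat is available on the machine you submit from:

 # list your own jobs and their state (Q = queued, R = running, C = completed)
 qstat -u $USER
 # show detailed information about a single job
 qstat -f <job-id>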
== Queues ==
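The queues configured on the cluster, together with their limits, can be listed with the Torque client tools; a minimal sketch:

 # list the available queues and their limits
 qstat -q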
