Grid@Nikhef

== Overview ==

The EGI / WLCG grid infrastructure can be accessed from any Nikhef-managed machine running a variant of Red Hat Enterprise Linux 6 or higher (e.g. CentOS 6 or 7) or from a Grid UI (i.e., bosui.nikhef.nl or boslook.nikhef.nl).

To access the Grid from a Nikhef-managed machine, you need to get the grid middleware tools first:

 source /cvmfs/grid.cern.ch/etc/profile.d/setup-cvmfs-ui.sh        ## bash/ksh/zsh
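
To quickly verify that the environment has been picked up, you can check that the client tools are now on your PATH (a minimal check; the exact output will vary with the middleware version):

 which voms-proxy-init
 voms-proxy-init --version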

Csh and tcsh users will first have to run

 /bin/bash

or, perhaps even better, change their login shell to bash or zsh. (If you are accessing the Grid from a UI, you can skip this step.)

Sourcing the setup script will set up all required environment variables for running the middleware commands.

Once you have sourced the grid middleware tools or have logged into the UI, type

 voms-proxy-init --voms <YOUR-VO>

to create a voms proxy. Sample output:

 $ voms-proxy-init -voms pvier
 Enter GRID pass phrase:
 Your identity: /O=dutchgrid/O=users/O=nikhef/CN=Some User
 Creating temporary proxy .......................................... Done
 Contacting  voms.grid.sara.nl:30000
 [/O=dutchgrid/O=hosts/OU=sara.nl/CN=voms.grid.sara.nl] "pvier" Done
 Creating proxy
 ................................................................... Done
 Your proxy is valid until Fri Dec  7 00:08:49 2007

or you could use the arcproxy tool to generate a proxy:

 $ arcproxy --voms <YOUR-VO>
 Enter Password for PKCS12 certificate:
 Your identity: /DC=org/DC=terena/DC=tcs/C=NL/O=Nikhef/CN=<Some User>
 Contacting VOMS server (named pvier): voms.grid.sara.nl on port: 30000
 Proxy generation succeeded
 Your proxy is valid until: 2020-11-18 04:32:41
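
Whichever tool you used, you can inspect the resulting proxy (remaining lifetime, VOMS attributes) with voms-proxy-info, for example:

 voms-proxy-info --all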
 

Congratulations! You are now ready to use grid middleware tools.

== Submitting your job ==

NOTICE: If you previously submitted jobs with the glite-ce-* commands, you will need a new set of commands to submit your jobs to the ARC-CEs.

We therefore ask users to submit job description files that are compatible with the ARC-CEs.

Information about how to write job description files for an ARC-CE can be found in the xRSL reference manual: http://www.nordugrid.org/arc/arc6/users/xrsl.html

Please be sure to specify a queue in your job description with

 ("queue" = "short|medium|long|..." )

Once you have created a voms or arc proxy and have your job description file ready, submitting and managing your job can look something like this:

 # Submit your job to an ARC endpoint, with your xrsl or adl file specified
 arcsub -c brug.nikhef.nl [YOUR XRSL OR ADL FILE]
 
 # Check the status of all your jobs; add -l for a long description of each job
 arcstat -a
 
 # Or check a single job by its unique job ID:
 arcstat [gsiftp|https]://brug.nikhef.nl:443/[jobs|arex]/[UNIQUE JOB ID]
 
 # Fetch the output, logs etc. of all your finished jobs (or pass a single job ID instead of -a)
 arcget -a
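
Putting the pieces together, a minimal end-to-end session might look like the sketch below (the file name and job ID are placeholders; arccat, which prints the stdout of a job that is already running, is part of the same ARC client suite):

 arcproxy --voms pvier
 arcsub -c brug.nikhef.nl myjob.xrsl
 arcstat -a
 arccat [UNIQUE JOB ID]      # inspect the job's stdout while it runs
 arcget [UNIQUE JOB ID]      # retrieve the output once the job has finished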




General queues and walltimes available:

{| class="wikitable"
|-
! Queue Name || Max. Walltime (hh:mm:ss) || Allowed VOs
|-
| short || 04:00:00 || alice atlas dans projects.nl pvier virgo dune lsgrid lofar tutor enmr.eu bbmri.nl xenon.biggrid.nl chem.biggrid.nl drihm.eu
|-
| medium || 36:00:00 || alice atlas dans projects.nl pvier virgo dune lsgrid lofar tutor enmr.eu bbmri.nl xenon.biggrid.nl chem.biggrid.nl drihm.eu
|-
| long || 96:00:00 || alice atlas dans projects.nl pvier virgo dune lsgrid lofar tutor bbmri.nl xenon.biggrid.nl chem.biggrid.nl drihm.eu
|}

For more information, or to find other queues, use lcg-info or lcg-infosites, which will tell you what is available to your VO. For example:

 lcg-infosites --vo pvier -f NIKHEF-ELPROD all
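
To list only the computing elements (and their queues) that accept a given VO, you can use the ce option instead of all, for example:

 lcg-infosites --vo pvier ce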

More information is also available on the SURFsara wiki: http://doc.grid.surfsara.nl/en/latest/Pages/Service/system_specifications/gina_specs.html#queues

== Specifying Job Requirements ==

The default values for memory, nodes, CPUs and local scratch space may not be adequate for your use case. It is possible to specify the requirements for your jobs in the XRSL file, which will then be translated into requirements on the local batch system. These requirements will either match suitable resources, or match nothing at all if they exceed what is available. If this is the first time you need to specify additional requirements, please ask the site administrators for advice.

=== Memory requirements ===

The amount of main memory (RAM) required for the job can be passed by adding this line to the XRSL file:

 (memory=8192)

This example requests 8 GB of RAM (the unit is megabytes). Be aware that exceeding the requested amount in the actual job may result in termination of the job by the batch system.

=== Multi-core jobs ===

XRSL parameters:

 (count=4)
 (countpernode=1)


This example requests 4 cores on 1 node. See also [[Enabling_multicore_jobs_and_jobs_requesting_large_amounts_of_memory]].
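
As an illustration, a job description that combines the parameters above with a memory request and a queue might look like the following sketch (the script name, job name and values are placeholders, and memory accounting for multi-core jobs can depend on the site configuration, so ask the site administrators when in doubt):

 &( executable = "runme.sh" )
  ( jobname = "multicore-test" )
  ( queue = "medium" )
  ( count = 4 )
  ( countpernode = 1 )
  ( memory = 2048 )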