GRAM5 in EGI
A computing element (CE) in EGI is far more than an interface to submit jobs: the ecosystem makes many implicit assumptions about what a CE is supposed to do, the set of services it should run, the information it should publish in a variety of formats, and the programs that have to be installed on the CE itself to 'play nice' with the other EGI services. In this writeup, we attempt to document what needs to be done to turn the basic GRAM5 CE into a service that is integrated with EGI. To do this, we use the GRAM5 service from the Initiative for Globus in Europe (IGE), as well as other components distributed via the EGI UMD repository.
Goals of the EGI GRAM5 CE
The GRAM5 service we have in mind should support the following:
- provide a GRAM5/GRAM2 compatible submission interface (what you get from IGE gram5 proper)
- resemble the LCG-CE application interface
- be visible to end-users using lcg-infosites
- be visible to the EMI Workload Management System (WMS) and 'automatically' attract jobs for supported VOs
- interoperate with GLUE2 information system clients
- support legacy GLUE1.3 (mainly for the WMS and lcg-infosites)
- support all IGTF CAs and honour CRLs
- fully support VOMS FQAN mappings and poolaccounts
- also allow local accounts to be used in conjunction with (and override) VOMS FQANs
- be configured for scalability using the Scheduler Event Generator (SEG)
- be resilient to users: use TMPDIR as default home, and find executables using PATH
- support accounting output in NDPF (CREAMy) compatible form
- log details in conformance to the CSIRT requirements
Requisite software installation
Add the repository at http://software.nikhef.nl/temporary/umd-gram5/rhel5/RPMS/ to your yum repos, preferably using a local mirror:
[NIKHEF-UMD-GRAM5]
name=Nikhef UMD GRAM5 extras
baseurl=http://software.nikhef.nl/temporary/umd-gram5/rhel5/RPMS/
gpgcheck=0
enabled=1
Base installation
Install either "ige-meta-globus-gram5-2.1-1.el5.noarch.rpm" and then remove the superfluous packages (listed further down), or install the list of RPMs for a basic GRAM5. Be sure to install your favourite scheduler, though, including MAUI if you use Torque, as these are needed for the dynamic scheduler information providers:
torque torque-client maui maui-client
Information system
- Add the BDII server from UMD by adding/installing the following RPMs (for Torque)
"bdii","5.2.5-2.el5","noarch" "glite-yaim-bdii","4.3.4-1.el5","noarch" "glite-yaim-torque-utils","5.0.0-1.sl5","noarch"
- Install Yaim to be able to configure BDII and the Glue1.3 information providers, if it was not already pulled in by Yum as a dependency of glite-yaim-bdii
"glite-yaim-core","5.0.2-1.sl5","noarch"
- Install the GLUE schemas and basic information providers (in GIP style) from gLite/EMI
"glue-schema","2.0.8-1.el5","noarch" "glite-info-provider-service", "1.7.0-1.el5", "noarch"
- Install "globus-gram5-glue2-info-providers" from the Nikhef repo (source https://ndpfsvn.nikhef.nl/repos/pdpsoft/trunk/nl.nikhef.ndpf.tools/globus-gram5-glue2-info-providers/) or the RPM http://software.nikhef.nl/temporary/umd-gram5/rhel5/RPMS/noarch/globus-gram5-glue2-info-providers-0.2-2.noarch.rpm
- Install "globus-yaim-gram5" (source https://ndpfsvn.nikhef.nl/repos/pdpsoft/trunk/nl.nikhef.ndpf.tools/globus-yaim-gram5/) to get the GLUE1.3 providers which are written entirely by Yaim, and the configuration for the Glue2 providers. The RPM is at http://software.nikhef.nl/temporary/umd-gram5/rhel5/RPMS/noarch/globus-yaim-gram5-0.3-3.noarch.rpm
- Install the dynamic infoproviders from LCG, which are used for GLUE1.3 information (GLUE2 only has static bits for the moment):
"lcg-info-dynamic-pbs", "2.0.0-1.sl5", "noarch" "lcg-info-dynamic-scheduler-pbs", "2.2.1-1.sl5", "noarch" "lcg-info-dynamic-scheduler-generic", "2.3.5-1.sl5", "noarch"
Mkgridmap support for local users
To support local users, either dynamically generated from LDAP or VOMS or via local additions, install the mkgridmap tool:
"edg-mkgridmap", "4.0.0-1", "noarch"
CRL support
Install fetch-crl version 3 from EPEL for RHEL5:
"fetch-crl3", "3.0.7-1.el5", "noarch"
WMS support
The gLite WMS assumes that all kinds of Logging and Bookkeeping (LB) components are already magically installed on every CE in the world. To make the WMS happy, add:
"glite-lb-logger", "2.2.6-1.sl5", "x86_64" "glite-lb-client", "5.0.8-1.sl5", "x86_64"
Scalability and accounting fixes for the PBS job manager
This applies only if you use Torque/PBS. Install the Nikhef pbs job manager with Yum from the Nikhef NDPF repository to get TMPDIR relocation, proper accounting log files, and VOMS FQAN logging for accounting:

"globus-gram-job-manager-pbs-nikhef", "0.2-1", "noarch"

This package obsoletes the default Globus pbs.pm job manager and replaces it with a modified version. The source is at https://ndpfsvn.nikhef.nl/repos/pdpsoft/trunk/nl.nikhef.ndpf.tools/globus-gram-job-manager-pbs-nikhef/ and the RPM at http://software.nikhef.nl/temporary/umd-gram5/rhel5/RPMS/noarch/globus-gram-job-manager-pbs-nikhef-0.2-2.noarch.rpm
Optional package: VOMS support
To get proper accounting information in the log files, install the voms-clients package, which the Nikhef pbs jobmanager needs to write accounting data:
voms-clients-2.0.7-1.el5
If voms-proxy-info is available and "vomsinfo" is set in the /etc/globus/globus-pbs.conf file, the Nikhef pbs jobmanager will use it to extract the primary and secondary FQANs and write these to a specific accounting log entry in syslog. Accounting post-processing tools and collectors can then use these entries to link up with the PBS/Torque job records and generate the proper usage records using e.g. NDPFAccounting.
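To illustrate the primary/secondary FQAN convention the jobmanager relies on (the first FQAN listed by "voms-proxy-info -fqan" is the primary one), here is a sketch using canned sample output rather than a real proxy:

```shell
# Sample of the kind of output 'voms-proxy-info -fqan' produces
# (canned data for illustration, not taken from a real proxy)
fqans='/atlas/Role=production/Capability=NULL
/atlas/Role=NULL/Capability=NULL'

# The first line is the primary FQAN; any remaining lines are secondary
primary=$(printf '%s\n' "$fqans" | head -n 1)
secondary=$(printf '%s\n' "$fqans" | tail -n +2)
echo "primary:   $primary"
echo "secondary: $secondary"
```

An accounting collector would store the primary FQAN with the job record, since that FQAN determined the pool account mapping.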
Dependencies
Installing the above packages via Yum, with the EPEL and UMD repositories enabled, will automatically download and install all dependencies. For Quattor, use the "checkdeps" tool.
Configuration
Configuring the job managers
Globus provides a set of commands "globus-gram-job-manager-LRMS-setup-type" to enable and configure the various job back-end systems such as Torque/PBS. These are fine if you run them interactively or use a script-driven install; otherwise just make the appropriate symlinks yourself. Pick "poll" for the fork job manager and "SEG" for the Torque/PBS one, and make the pbs jobmanager the default:
/etc/grid-services/jobmanager-fork -> available/jobmanager-fork-poll
/etc/grid-services/jobmanager-pbs -> available/jobmanager-pbs-seg
/etc/grid-services/jobmanager -> jobmanager-pbs
/etc/globus/scheduler-event-generator/fork -> available/fork
/etc/globus/scheduler-event-generator/pbs -> available/pbs
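The same layout can be created by hand. A sketch that first builds the tree in a scratch root so it can be inspected safely; on the real CE, drop the mktemp line and set ETC=/etc instead:

```shell
# Build the symlink layout in a scratch root for inspection;
# on the real CE, use ETC=/etc instead of a temporary directory.
ETC=$(mktemp -d)
mkdir -p "$ETC/grid-services/available" \
         "$ETC/globus/scheduler-event-generator/available"

# Job manager services: poll for fork, SEG for pbs, pbs as the default
ln -sfn available/jobmanager-fork-poll "$ETC/grid-services/jobmanager-fork"
ln -sfn available/jobmanager-pbs-seg   "$ETC/grid-services/jobmanager-pbs"
ln -sfn jobmanager-pbs                 "$ETC/grid-services/jobmanager"

# Enable the scheduler event generator modules
ln -sfn available/fork "$ETC/globus/scheduler-event-generator/fork"
ln -sfn available/pbs  "$ETC/globus/scheduler-event-generator/pbs"
```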
You must configure the fork jobmanager if you want to support gLite WMS submission, as this service insists on running grid-manager processes on the CE.
Jobmanager pbs enhanced version
The "Nikhef" pbs jobmanager can collect VOMS FQAN data for accounting, and adds a single configuration variable for this in globus-pbs.conf:
# Generated by Quattor
log_path="/var/spool/pbs/server_logs"
pbs_default="stro.nikhef.nl"
mpiexec=yes
qsub="/usr/bin/qsub"
qstat="/usr/bin/qstat"
qdel="/usr/bin/qdel"
cluster="1"
remote_shell="no"
cpu_per_node="1"
softenv_dir=
vomsinfo="/usr/bin/voms-proxy-info"
Note that softenv_dir is not used in EGI: some VOs use the lcg-VOmanager tag system (not yet available for IGE GRAM5, but simple to add by installing the scripts and configuring a gridftp server on the CE), while others just write their software to the experiment software area and will 'just work'.
Split Torque/CE servers
In all but the most trivial cases, the Torque server and the CEs are on different systems. The 'advertised' solution for GRAM5 (just like for CREAM) is to rely on NFS and share the PBS server logs between all machines -- and make them readable to the GRAM5 SEG. This may work fine, but since we don't like NFS on critical servers, we use a frequent "rsync --append", initiated by the Torque server, to mimic this behaviour. This keeps the Torque server stand-alone and not subject to arbitrary load from other services -- and thus hopefully more stable. Every minute we run:
/usr/bin/rsync -azqx --delete-after --partial --append /var/spool/pbs/server_logs/* --password-file=/etc/rsyncc.secrets dissel.nikhef.nl::pbslogfiles
and on the GRAM5 CE run the rsync daemon in a sandbox. In /etc/rsyncd.conf (with an appropriate secrets file):
uid = accuser
gid = accuser
use chroot = yes
max connections = 10
syslog facility = daemon
pid file = /var/run/rsyncd.pid

[pbslogfiles]
path = /var/spool/pbs/server_logs
read only = no
list = false
secrets file = /etc/rsyncd.secrets
hosts deny = 0.0.0.0/0
hosts allow = torqueserver.nikhef.nl
auth users = accuser
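The matching secrets file uses rsync's standard "user:password" format; the password below is of course a placeholder:

```
# /etc/rsyncd.secrets -- must be owned by root and mode 0600,
# or rsyncd will refuse to use it
accuser:replace-with-a-long-random-secret
```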
Yaim: site-info.def example
The following site-info.def example must be tuned to your local environment:
#
# Yaim configuration file managed by quattor ncm-yaim.
# Please do not edit - all manual changes will be overwritten!
#
BATCH_SERVER="stro.nikhef.nl"
BATCH_VERSION="2.3.8"
BDII_HOST="bdii03.nikhef.nl"
BDII_LIST="kraal.nikhef.nl:2170,bdii03.nikhef.nl:2170,bdii.grid.sara.nl:2170,bdii2.grid.sara.nl:2170"
CE_CAPABILITY="CPUScalingReferenceSI00=2493 Share=alice:0 Share=atlas:0 Share=lhcb:0 glexec"
CE_HOST="dissel.nikhef.nl"
CE_OTHERDESCR="Cores=4.218009478672986,Benchmark=11.08-HEP-SPEC06"
CREAM_CE_STATE="Production"
CRON_DIR="/etc/cron.d"
FUNCTIONS_DIR="/opt/glite/yaim/functions"
GLITE_HOME_DIR="/var/local/glite"
GLITE_USER="glite"
GLOBUS_TCP_PORT_RANGE="20000,25000"
GRIDMAP_AUTH=""
GROUPS_CONF="/etc/siteinfo/groups.conf"
JOB_MANAGER="pbs"
MON_HOST="still.required.by.lcg-ce"
QUEUES="ekster gratis infra medium short"
SE_LIST="tbn18.nikhef.nl"
SE_MOUNT_INFO_LIST="none"
SITE_NAME="NIKHEF-ELPROD"
TORQUE_VAR_DIR="/var/spool/pbs"
USERS_CONF="/etc/siteinfo/users.conf"
VO_SW_DIR="/data/esia"
YAIM_HOME="/opt/glite/yaim"
YAIM_VERSION="4.0"
#
# section CE configuration
#
CE_BATCH_SYS="pbs"
CE_CPU_MODEL="IA32"
CE_CPU_SPEED="2500"
CE_CPU_VENDOR="intel"
CE_INBOUNDIP="FALSE"
CE_LOGCPU="3560"
CE_MINPHYSMEM="8192"
CE_MINVIRTMEM="4096"
CE_OS="CentOS"
CE_OS_ARCH="x86_64"
CE_OS_RELEASE="5.8"
CE_OS_VERSION="Final"
CE_OUTBOUNDIP="TRUE"
CE_PHYSCPU="844"
CE_RUNTIMEENV="LCG-2 LCG-2_7_0 GLITE-3_2_0 NIKHEF NIKHEFLCG2ELPROD LCG_SC3 nl.vl-e.poc-release-3 nl.vl-e.poc-release-3.0 "
CE_SF00="1327"
CE_SI00="2240"
CE_SMPSIZE="8"
#
# Queue configuration
#
QUEUES="infra gratis ekster short medium"
EKSTER_GROUP_ENABLE="vo.gear.cern.ch"
GRATIS_GROUP_ENABLE="biomed "
INFRA_GROUP_ENABLE="ops pvier ops.biggrid.nl"
MEDIUM_GROUP_ENABLE="atlas /atlas/Role=production /atlas/Role=pilot pvier vo.gear.cern.ch ops.biggrid.nl bbmri.nl xenon.biggrid.nl"
SHORT_GROUP_ENABLE="enmr.eu pvier bbmri.nl xenon.biggrid.nl"
#
# VO configuration
#
VOS="bbmri.nl biomed xenon.biggrid.nl pvier ops.biggrid.nl"
#
# LFC configuration
#
LFC_CENTRAL="xenon.biggrid.nl"
#
# section free configuration
#
CONFIG_USERS="no"
SPECIAL_POOL_ACCOUNTS="1"
MAUI_KEYFILE="/etc/maui-key"
GRIDMAPDIR="/share/gridmapdir"
Lists and templates
RPM package lists
Superfluous packages in IGE meta-package via UMD
The following packages are not needed for proper functioning once you already have a working batch system client setup. You do not need client and server tools for each and every scheduler in the world, which is what you get once you do dependency resolution through UMD: UMD ships all kinds of batch system clients that may not be the version you want. Install your own favourite version of a batch system and be content. Also: only install the batch system plugins for the job manager that you really need:
fedora-usermgmt fedora-usermgmt-core fedora-usermgmt-default-fedora-setup fedora-usermgmt-shadow-utils gridengine libtorque munge munge-libs torque torque-client
and after installing your own batch system client, pick one of
globus-gram-job-manager-condor globus-gram-job-manager-pbs globus-gram-job-manager-sge
or they'll complain about missing dependencies.
Specific Packages
globus-gram-job-manager-pbs-nikhef
Replaces the standard pbs job manager, adding VOMS-based accounting info, applying the VDT and EDG patches on executable validation, and forcing cwd to $TMPDIR by default to forestall inadvertent home directory use by the end-users. The configuration file /etc/globus/globus-pbs.conf is extended with a single variable setting
vomsinfo=path-to-voms-proxy-info
Tar-ball: http://software.nikhef.nl/temporary/umd-gram5/tgz/globus-gram-job-manager-pbs-nikhef-0.2.src.tgz
Example config:
log_path="/var/spool/pbs/server_logs"
pbs_default="stro.nikhef.nl"
mpiexec=yes
qsub="/usr/bin/qsub"
qstat="/usr/bin/qstat"
qdel="/usr/bin/qdel"
cluster="1"
remote_shell="no"
cpu_per_node="1"
softenv_dir=
vomsinfo="/usr/bin/voms-proxy-info"
globus-gram5-glue2-info-providers
The BDII-GIP style information providers for static information in GLUE2 format for the gatekeeper. It makes some arbitrary choices for a number of the GLUE2 values.
Tar-ball: http://software.nikhef.nl/temporary/umd-gram5/tgz/globus-gram5-glue2-info-providers-0.2.src.tgz
globus-yaim-gram5
The GIP configuration function, and the gram5 node type that also configures the use of LCAS/LCMAPS, the VOMSES information, and GIP. The node type is full-fledged, including:
gram5_FUNCTIONS="
  config_vomsdir
  config_vomses
  config_users
  config_vomsmap
  config_mkgridmap
  config_lcas_lcmaps_gt4
  config_gip_gram5_glue2
  config_gip_gram5_glue13
"
Source: https://ndpfsvn.nikhef.nl/repos/pdpsoft/trunk/nl.nikhef.ndpf.tools/globus-yaim-gram5/
Tar-ball: http://software.nikhef.nl/temporary/umd-gram5/tgz/globus-yaim-gram5-0.3.src.tgz