GRAM5 In EGI
A computing element (CE) in the EGI is far more than an interface to submit jobs: the ecosystem has a lot of implicit assumptions about what a CE is supposed to do, the set of services it should run, the information it should publish in a variety of formats, and the programs that have to be installed on the CE itself to 'play nice' with the other EGI services. In this writeup, we attempt to document what needs to be done to turn the basic GRAM5 CE into a service that integrates with EGI. To do this, we use the GRAM5 service from the Initiative for Globus in Europe (IGE), as well as other components distributed via the EGI UMD repository.
Goals of the EGI GRAM5 CE
The GRAM5 service we have in mind should support the following:
- provide a GRAM5/GRAM2 compatible submission interface (what you get from IGE gram5 proper)
- resemble the LCG-CE application interface
- be visible to end-users using lcg-infosites
- be visible to the EMI Workload Management System (WMS) and 'automatically' attract jobs for supported VOs
- interoperate with GLUE2 information system clients
- support legacy GLUE1.3 (mainly for the WMS and lcg-infosites)
- support all IGTF CAs and honour CRLs
- fully support VOMS FQAN mappings and poolaccounts
- also allow local accounts to be used in conjunction with (and override) VOMS FQANs
- be configured for scalability using the Scheduler Event Generator (SEG)
- be resilient to users: use TMPDIR as default home, and find executables using PATH
- support accounting output in NDPF (CREAMy) compatible form
- log details in conformance to the CSIRT requirements
Requisite software installation
Add the repository at http://software.nikhef.nl/temporary/umd-gram5/rhel5/RPMS/ to your yum repos, preferably using a local mirror:
 [NIKHEF-UMD-GRAM5]
 name=Nikhef UMD GRAM5 extras
 baseurl=http://software.nikhef.nl/temporary/umd-gram5/rhel5/RPMS/
 gpgcheck=0
 enabled=1
Base installation
Install either "ige-meta-globus-gram5-2.1-1.el5.noarch.rpm" and then remove the superfluous packages, or install the list of RPMs for basic GRAM5. Be sure to install your favourite scheduler, though, including MAUI if you use Torque, as these are needed for the dynamic scheduler information providers:
 torque
 torque-client
 maui
 maui-client
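With the repository above enabled, a minimal sketch of the base installation (assuming you take the meta-package route, use Torque, and have Torque and MAUI builds available from a repository of your own) is:

 yum install ige-meta-globus-gram5
 yum install torque torque-client maui maui-client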
JobManager from 5.2.1
If you install IGE2.0 (UMD<=1.8), then you must update the globus-jobmanager package. The one shipped with GT5.2.0 has a rare race condition that may prevent longish proxies, in particular ones with VOMS extensions, from working and results in a hanging job manager. Pick the job manager from GT5.2.1, at least version 13.33:
 rpm -Uvh http://www.globus.org/ftppub/gt5/5.2/5.2.1/packages/rpm/redhat/5Server/x86_64/globus-gram-job-manager-13.33-1.x86_64.rpm
 rpm -Uvh http://www.globus.org/ftppub/gt5/5.2/5.2.1/packages/rpm/redhat/5Server/x86_64/globus-gram-job-manager-doc-13.33-1.x86_64.rpm
or add them as updates to your package management system.
Information system
- Add the BDII server from UMD by adding/installing the following RPMs (for Torque)
"bdii","5.2.5-2.el5","noarch" "glite-yaim-bdii","4.3.4-1.el5","noarch" "glite-yaim-torque-utils","5.0.0-1.sl5","noarch"
- Install Yaim to be able to configure BDII and the Glue1.3 information providers, if not already installed via Yum because of glite-yaim-bdii
"glite-yaim-core","5.0.2-1.sl5","noarch"
- Install the GLUE schemas and basic information providers (in GIP style) from gLite/EMI
"glue-schema","2.0.8-1.el5","noarch" "glite-info-provider-service", "1.7.0-1.el5", "noarch"
- Install "globus-gram5-glue2-info-providers" from the Nikhef repo (source https://ndpfsvn.nikhef.nl/repos/pdpsoft/trunk/nl.nikhef.ndpf.tools/globus-gram5-glue2-info-providers/) or the RPM http://software.nikhef.nl/temporary/umd-gram5/rhel5/RPMS/noarch/globus-gram5-glue2-info-providers-0.2-2.noarch.rpm
- Install "globus-yaim-gram5" (source https://ndpfsvn.nikhef.nl/repos/pdpsoft/trunk/nl.nikhef.ndpf.tools/globus-yaim-gram5/) to get the GLUE1.3 providers which are written entirely by Yaim, and the configuration for the Glue2 providers. The RPM is at http://software.nikhef.nl/temporary/umd-gram5/rhel5/RPMS/noarch/globus-yaim-gram5-0.3-3.noarch.rpm
- Install the dynamic infoproviders from LCG, which are used for GLUE1.3 information (GLUE2 only has static bits for the moment):
"lcg-info-dynamic-pbs", "2.0.0-1.sl5", "noarch" "lcg-info-dynamic-scheduler-pbs", "2.2.1-1.sl5", "noarch" "lcg-info-dynamic-scheduler-generic", "2.3.5-1.sl5", "noarch"
Mkgridmap support for local users
To support local users, either dynamically generated from LDAP or VOMS or via local additions, install the mkgridmap tool:
"edg-mkgridmap", "4.0.0-1", "noarch"
CA support
Assuming you want the EGI package, see https://wiki.egi.eu/wiki/EGI_IGTF_Release:
- Install http://repository.egi.eu/sw/production/cas/1/current/repo-files/EGI-trustanchors.repo in /etc/yum.repos.d/
- Install the EGI meta-package: yum install ca-policy-egi-core
- Install the WLCG meta-package if your policy says so: yum install ca-policy-lcg.
- Add any additional local/national CAs if your policy says so
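Taken together, on a stock EL5 host this boils down to something like the sketch below (adjust if you fetch the repository file differently):

 wget -O /etc/yum.repos.d/EGI-trustanchors.repo \
     http://repository.egi.eu/sw/production/cas/1/current/repo-files/EGI-trustanchors.repo
 yum install ca-policy-egi-core
 yum install ca-policy-lcg    # only if your policy requires the WLCG set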
CRL support
Install fetch-crl version 3 from EPEL for RHEL5:
"fetch-crl3", "3.0.7-1.el5", "noarch"
WMS support
The gLite WMS assumes that all kinds of L&B components are already magically installed on every CE in the world. Let's make the WMS happy, and add:
"glite-lb-logger", "2.2.6-1.sl5", "x86_64" "glite-lb-client", "5.0.8-1.sl5", "x86_64"
Scalability and accounting fixes for the PBS job manager
This only applies if you use Torque/PBS: install the Nikhef pbs job manager to get TMPDIR relocation, proper accounting log files, and VOMS FQAN logging for accounting. Install it with Yum from the Nikhef repo:
"globus-gram-job-manager-pbs-nikhef", "0.2-1", "noarch"
from the Nikhef NDPF repository. This package obsoletes the default Globus pbs.pm job manager and replaces it with a modified version. The source is at https://ndpfsvn.nikhef.nl/repos/pdpsoft/trunk/nl.nikhef.ndpf.tools/globus-gram-job-manager-pbs-nikhef/ and the RPM at http://software.nikhef.nl/temporary/umd-gram5/rhel5/RPMS/noarch/globus-gram-job-manager-pbs-nikhef-0.2-2.noarch.rpm
Optional package: VOMS support
To get proper accounting information in the log files, install the voms-clients package (needed in case you want the Nikhef-pbs-jobmanager to write accounting data):
voms-clients-2.0.7-1.el5
If voms-proxy-info is available and "vomsinfo" is set in the /etc/globus/globus-pbs.conf file, the Nikhef pbs jobmanager will use it to extract the primary and secondary FQANs and write these to a specific accounting log entry in syslog. Accounting post-processing tools and collectors can then use that entry to link it up to the PBS/Torque job records and generate the proper usage records using e.g. NDPFAccounting.
Dependencies
Installing the above packages via Yum, with the EPEL and UMD repositories enabled, will automatically download and install all dependencies. For Quattor, use the "checkdeps" tool.
Configuration
Configure torque
- Add the proper server_name to the torque client configuration, so that the PBS jobmanager can get job information and submit new jobs. The user accounts must be matched between CE and torque server.
- Install the MAUI client
- Add the information-system user (usually edguser or edginfo) to the list of ADMIN3 users for MAUI (to get the ERT), and add the users "ldap" (BDII), "edginfo", and "edguser" to the operators group in torque (needed for both static GLUE content and ERT); a sketch follows below.
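A sketch of what this amounts to (the hostnames are the Nikhef examples used elsewhere on this page; adjust to your site):

 # on the CE: point the torque client at the batch server (the file lives in TORQUE_VAR_DIR)
 echo "stro.nikhef.nl" > /var/spool/pbs/server_name
 # on the torque server: let the information-system users query all jobs
 qmgr -c "set server operators += edguser@dissel.nikhef.nl"
 qmgr -c "set server operators += edginfo@dissel.nikhef.nl"
 qmgr -c "set server operators += ldap@dissel.nikhef.nl"

and in maui.cfg on the scheduler host, a line like:

 ADMIN3            edguser edginfo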
Configuring the job managers
Globus provides a set of commands "globus-gram-job-manager-LRMS-setup-type" to enable and configure the various job back-end systems like Torque/PBS. These are fine if you run interactively or use a script-driven install; otherwise just make the appropriate symlinks yourself. Pick "poll" for the fork JM and "SEG" for the Torque/PBS one, and make pbs the default, as the symlinks below show:
 /etc/grid-services/jobmanager-fork -> available/jobmanager-fork-poll
 /etc/grid-services/jobmanager-pbs -> available/jobmanager-pbs-seg
 /etc/grid-services/jobmanager -> jobmanager-pbs

 /etc/globus/scheduler-event-generator/fork -> available/fork
 /etc/globus/scheduler-event-generator/pbs -> available/pbs
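If you create the links by hand, that translates into something like:

 cd /etc/grid-services
 ln -s available/jobmanager-fork-poll jobmanager-fork
 ln -s available/jobmanager-pbs-seg   jobmanager-pbs
 ln -s jobmanager-pbs                 jobmanager
 cd /etc/globus/scheduler-event-generator
 ln -s available/fork fork
 ln -s available/pbs  pbs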
You must configure the fork jobmanager if you want to support gLite WMS submission, as this service insists on running grid-manager processes on the CE.
Jobmanager pbs enhanced version
The "Nikhef" pbs jobmanager can collect VOMS FQAN data for accounting, and adds a single configuration variable for this in globus-pbs.conf:
 # Generated by Quattor
 log_path="/var/spool/pbs/server_logs"
 pbs_default="stro.nikhef.nl"
 mpiexec=yes
 qsub="/usr/bin/qsub"
 qstat="/usr/bin/qstat"
 qdel="/usr/bin/qdel"
 cluster="1"
 remote_shell="no"
 cpu_per_node="1"
 softenv_dir=
 vomsinfo="/usr/bin/voms-proxy-info"
Note that softenv_dir is not used in EGI: some VOs use the lcg-VOmanager tag system (not yet available for IGE GRAM5, but simple to add by installing the scripts and configuring a gridftp server on the CE), while others just write their tags to the experiment software area and will 'just work'.
Split Torque/CE servers
In all but the most trivial cases, the Torque server and the CEs are on different systems. The 'advertised' solution for GRAM5 (just like for CREAM) is to rely on NFS and share the PBS server logs between all machines -- and make them readable to the GRAM5 SEG. This may work fine, but since we don't like NFS on critical servers we use a frequent "rsync --append", initiated by the torque server, to mimic this behaviour. This makes the torque server stand-alone and not subject to arbitrary load from other services -- and thus hopefully more stable. Every minute we run:
/usr/bin/rsync -azqx --delete-after --partial --append /var/spool/pbs/server_logs/* --password-file=/etc/rsyncc.secrets dissel.nikhef.nl::pbslogfiles
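A minimal cron fragment for this on the torque server might look like the sketch below (the file name is hypothetical, and we assume the sync runs as root):

 # /etc/cron.d/pbslog-sync
 * * * * * root /usr/bin/rsync -azqx --delete-after --partial --append /var/spool/pbs/server_logs/* --password-file=/etc/rsyncc.secrets dissel.nikhef.nl::pbslogfiles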
On the GRAM5 CE, run the rsync daemon in a sandbox. In /etc/rsyncd.conf (with an appropriate secrets file):
 uid = accuser
 gid = accuser
 use chroot = yes
 max connections = 10
 syslog facility = daemon
 pid file = /var/run/rsyncd.pid

 [pbslogfiles]
 path = /var/spool/pbs/server_logs
 read only = no
 list = false
 secrets file = /etc/rsyncd.secrets
 hosts deny = 0.0.0.0/0
 hosts allow = torqueserver.nikhef.nl
 auth users = accuser
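The secrets file holds the rsync user and password, and the daemon can be started stand-alone; a sketch (the password is of course a placeholder, and you will want the daemon in your boot sequence):

 # /etc/rsyncd.secrets, mode 0600, owned by root
 accuser:some-long-random-password

 rsync --daemon --config=/etc/rsyncd.conf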
Fixup configuration for the PBS SEG module
In /etc/sysconfig/globus-scheduler-event-generator write:
 # Generated by Quattor
 GLOBUS_SEG_PIDFMT="/var/run/globus-scheduler-event-generator-%s.pid"
 GLOBUS_SEG_LOGFMT="/var/lib/globus/globus-seg-%s"
 GLOBUS_SEG_LRM_DIR="/etc/globus/scheduler-event-generator"
Configure the GRAM5 job manager
Add the following to /etc/globus/globus-gram-job-manager.conf to:
- set the (site-specific) TCP port range (default: 20000-25000)
- preserve key environment variables for accounting and security logging
- also send logs to syslog so they survive a break-in or crash
 -globus-toolkit-version 5.2.0
 -log-pattern /var/log/globus/gram_$(LOGNAME).log
 -audit-directory /var/lib/globus/gram-audit
 -usagestats-targets statistics.ige-project.eu:4810
 -save-logfile on_error
 -globus-gatekeeper-host thishostname.nikhef.nl
 -globus-tcp-port-range 20000,25000
 -globus-tcp-source-range 20000,25000
 -extra-envvars GRID_ID,GATEKEEPER_PEER,GATEKEEPER_JM_ID
 -enable-syslog
Mute the verbose logging of LCAS/LCMAPS
By default, in UMD1 the LCAS and LCMAPS security components are rather verbose. Limit their talkative nature by adding to /etc/sysconfig/globus-gatekeeper:
 # Force writing of a separate gatekeeper logfile
 GLOBUS_GATEKEEPER_LOG="/var/log/globus-gatekeeper.log"
 # fix for old GT5.2.0 packaging bug
 GLOBUS_THREAD_MODEL=none; export GLOBUS_THREAD_MODEL
 # fix a verbose 1.4 version LCAS/LCMAPS framework
 LCMAPS_LOG_LEVEL=0; export LCMAPS_LOG_LEVEL
 LCMAPS_DEBUG_LEVEL=0; export LCMAPS_DEBUG_LEVEL
 LCAS_LOG_LEVEL=0; export LCAS_LOG_LEVEL
 LCAS_DEBUG_LEVEL=0; export LCAS_DEBUG_LEVEL
Host certificates
Get yourself a host certificate from a reputable CA, and install it in two places, with the key mode 0600:
 /etc/grid-security/host{cert,key}.pem    (owned by root, for use with the gatekeeper)
 /etc/grid-security/glite{cert,key}.pem   (owned by user glite, for use with the L&B server)
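A sketch of the installation steps, assuming the certificate and key arrive as hostcert.pem and hostkey.pem (the world-readable certificate mode and the "glite" group are the usual choices, not mandated here):

 install -o root  -g root  -m 0644 hostcert.pem /etc/grid-security/hostcert.pem
 install -o root  -g root  -m 0600 hostkey.pem  /etc/grid-security/hostkey.pem
 install -o glite -g glite -m 0644 hostcert.pem /etc/grid-security/glitecert.pem
 install -o glite -g glite -m 0600 hostkey.pem  /etc/grid-security/glitekey.pem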
Ban list
Put a basic ban list in /etc/lcas/ban_users.db if your install framework does not do so.
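A sketch of a minimal /etc/lcas/ban_users.db, assuming the usual lcas_userban.mod format of one (quoted) subject DN per line; a file with only comments means nobody is banned:

 # banned user DNs, one per line
 "/DC=org/DC=example-ca/O=bad-actors/CN=Some Banned Person"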
BDII Generic Information Provider (GIP) framework
The GIP framework that we use from the BDII runs as a separate user. Create this user "edginfo" as a system account if you don't have it already.
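A sketch for creating it (home directory, comment, and shell are local choices):

 useradd -r -m -d /var/lib/edginfo -c "EDG info system user" edginfo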
CA security configuration
Having installed fetch-crl version 3, make it faster by adding a /etc/fetch-crl3.conf (on EL6+: /etc/fetch-crl.conf):
 infodir = /etc/grid-security/certificates
 agingtolerance = 24
 parallelism = 5
 nosymlinks
 nowarnings
Configuring NDPFAccounting
If you use NDPF Accounting, the syslog messages (in BLAH style) can be picked up automatically and put in a database for forwarding to APEL via SSM or the old OpenWire system. Only the CE "joiner" runs on the GRAM5 CE, and it uses the existing database records generated on the batch system server to find the jobs. Periodically (e.g. once a day) run the joiner (with its configuration in /etc/cejoiner.cfg):
/usr/local/sbin/ndpf-cejoiner.pl /var/log/messages /var/log/messages.1 /var/log/messages.1.gz
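For a once-a-day run, a cron fragment along these lines will do (the file name and time of day are hypothetical):

 # /etc/cron.d/ndpf-cejoiner
 30 4 * * * root /usr/local/sbin/ndpf-cejoiner.pl /var/log/messages /var/log/messages.1 /var/log/messages.1.gz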
Configuring L&B
To support gLite WMS submission, you need a local L&B instance running on the CE. Fortunately, this is a very simple service which needs no database or the like. It runs as the "glite" user, and needs a host cert (see above). The /etc/glite.conf file contains these settings:
 GLITE_USER=glite; export GLITE_USER;
 GLITE_HOST_CERT=/etc/grid-security/glitecert.pem; export GLITE_HOST_CERT;
 GLITE_HOST_KEY=/etc/grid-security/glitekey.pem; export GLITE_HOST_KEY;
 LL_PIDFILE=/var/local/glite/glite-lb-logd.pid; export LL_PIDFILE;
 IL_PIDFILE=/var/local/glite/glite-lb-interlogd.pid; export IL_PIDFILE;
And make sure it starts:
chkconfig glite-lb-locallogger on && service glite-lb-locallogger start
Yaim: site-info.def example
The following site-info.def example must be tuned to your local environment:
 #
 # Yaim configuration file managed by quattor ncm-yaim.
 # Please do not edit - all manual changes will be overwritten!
 #
 BATCH_SERVER="stro.nikhef.nl"
 BATCH_VERSION="2.3.8"
 BDII_HOST="bdii03.nikhef.nl"
 BDII_LIST="kraal.nikhef.nl:2170,bdii03.nikhef.nl:2170,bdii.grid.sara.nl:2170,bdii2.grid.sara.nl:2170"
 CE_CAPABILITY="CPUScalingReferenceSI00=2493 Share=alice:0 Share=atlas:0 Share=lhcb:0 glexec"
 CE_HOST="dissel.nikhef.nl"
 CE_OTHERDESCR="Cores=4.218009478672986,Benchmark=11.08-HEP-SPEC06"
 CREAM_CE_STATE="Production"
 CRON_DIR="/etc/cron.d"
 FUNCTIONS_DIR="/opt/glite/yaim/functions"
 GLITE_HOME_DIR="/var/local/glite"
 GLITE_USER="glite"
 GLOBUS_TCP_PORT_RANGE="20000,25000"
 GRIDMAP_AUTH=""
 GROUPS_CONF="/etc/siteinfo/groups.conf"
 JOB_MANAGER="pbs"
 MON_HOST="still.required.by.lcg-ce"
 QUEUES="ekster gratis infra medium short"
 SE_LIST="tbn18.nikhef.nl"
 SE_MOUNT_INFO_LIST="none"
 SITE_NAME="NIKHEF-ELPROD"
 TORQUE_VAR_DIR="/var/spool/pbs"
 USERS_CONF="/etc/siteinfo/users.conf"
 VO_SW_DIR="/data/esia"
 YAIM_HOME="/opt/glite/yaim"
 YAIM_VERSION="4.0"
 #
 # section CE configuration
 #
 CE_BATCH_SYS="pbs"
 CE_CPU_MODEL="IA32"
 CE_CPU_SPEED="2500"
 CE_CPU_VENDOR="intel"
 CE_INBOUNDIP="FALSE"
 CE_LOGCPU="3560"
 CE_MINPHYSMEM="8192"
 CE_MINVIRTMEM="4096"
 CE_OS="CentOS"
 CE_OS_ARCH="x86_64"
 CE_OS_RELEASE="5.8"
 CE_OS_VERSION="Final"
 CE_OUTBOUNDIP="TRUE"
 CE_PHYSCPU="844"
 CE_RUNTIMEENV="LCG-2 LCG-2_7_0 GLITE-3_2_0 NIKHEF NIKHEFLCG2ELPROD LCG_SC3 nl.vl-e.poc-release-3 nl.vl-e.poc-release-3.0 "
 CE_SF00="1327"
 CE_SI00="2240"
 CE_SMPSIZE="8"
 #
 # Queue configuration
 #
 QUEUES="infra gratis ekster short medium"
 EKSTER_GROUP_ENABLE=""
 GRATIS_GROUP_ENABLE=""
 INFRA_GROUP_ENABLE=""
 MEDIUM_GROUP_ENABLE=""
 SHORT_GROUP_ENABLE=""
 #
 # VO configuration
 #
 VOS="bbmri.nl biomed xenon.biggrid.nl pvier ops.biggrid.nl"
 #
 # LFC configuration
 #
 LFC_CENTRAL="xenon.biggrid.nl"
 #
 # section free configuration
 #
 CONFIG_USERS="no"
 SPECIAL_POOL_ACCOUNTS="1"
 MAUI_KEYFILE="/etc/maui-key"
 GRIDMAPDIR="/share/gridmapdir"
 EKSTER_GROUP_ENABLE="vo.gear.cern.ch"
 GRATIS_GROUP_ENABLE="biomed "
 INFRA_GROUP_ENABLE="ops pvier ops.biggrid.nl"
 MEDIUM_GROUP_ENABLE="atlas /atlas/Role=production /atlas/Role=pilot pvier vo.gear.cern.ch ops.biggrid.nl bbmri.nl xenon.biggrid.nl"
 SHORT_GROUP_ENABLE="enmr.eu pvier bbmri.nl xenon.biggrid.nl"
A VO definition file looks like this:
 #
 # section VO xenon.biggrid.nl configuration
 #
 DEFAULT_SE="tbn18.nikhef.nl"
 LFC="central"
 MAP_WILDCARDS="yes"
 SW_DIR="/data/esia/xenon.biggrid.nl"
 UNPRIVILEGED_MKGRIDMAP="true"
 VOMSES="'xenon.biggrid.nl voms.grid.sara.nl 30008 /O=dutchgrid/O=hosts/OU=sara.nl/CN=voms.grid.sara.nl xenon.biggrid.nl' "
 VOMS_CA_DN="'/C=NL/O=NIKHEF/CN=NIKHEF medium-security certification auth' "
 VOMS_SERVERS="'vomss://voms.grid.sara.nl:8443/voms/xenon.biggrid.nl?/xenon.biggrid.nl/' "
The users.conf file (if you only have a single poolaccount, or have these accounts managed elsewhere) looks a bit like:
 0:pxensm00:0,0:pxenonsm,pxenon:xenon.biggrid.nl:sgm:
 0:pxeno000:0:pxenon:xenon.biggrid.nl::
and the groups.conf file:
"/xenon.biggrid.nl/Role=sgm":::sgm: "/xenon.biggrid.nl"::::
Running Yaim
Populate the rest of /etc/siteinfo with the proper files, just like you would do for any other gLite service. In particular the users.conf and groups.conf files must be there, and the VO definitions configured in /etc/siteinfo/vo.d/.
Then, run yaim:
/opt/glite/yaim/bin/yaim -c -s /etc/siteinfo/lcg-quattor-site-info.def -n gram5
Starting and configuring the gatekeeper and scheduler event generators (SEG)
Make sure the gatekeeper and SEG are started on system boot:
 chkconfig globus-scheduler-event-generator on && service globus-scheduler-event-generator start
 chkconfig globus-gatekeeper on && service globus-gatekeeper start
Starting BDII
After running Yaim, both the GLUE1.3 and GLUE2 information templates will have been created and configured. Starting the BDII just needs
chkconfig bdii on && service bdii start
and wait a few seconds for the infosys to appear at ldap://FQDN:2170/. The DIT for GLUE1.3 starts at "o=grid", the DIT for GLUE2 starts at "o=glue".
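A quick check from any host with the OpenLDAP client tools (replace FQDN with the CE host name) is a pair of base searches against the two DITs:

 ldapsearch -x -H ldap://FQDN:2170 -b o=grid
 ldapsearch -x -H ldap://FQDN:2170 -b o=glue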
Lists and templates
RPM package lists
Superfluous packages in IGE meta-package via UMD
The following packages are not needed for proper functioning, once you already have a functioning batch system client setup. You will not need client and server tools for each and every scheduler in the world, which is what you get when you do dependency resolution through UMD (see the removal sketch at the end of this section). UMD ships all kinds of batch system clients that may not be the version you want; install your own favourite version of a batch system and be content. Also: only install the batch system plugin for the jobmanager that you really need:
 fedora-usermgmt
 fedora-usermgmt-core
 fedora-usermgmt-default-fedora-setup
 fedora-usermgmt-shadow-utils
 gridengine
 libtorque
 munge
 munge-libs
 torque
 torque-client
and after installing your own batch system client, pick one of
 globus-gram-job-manager-condor
 globus-gram-job-manager-pbs
 globus-gram-job-manager-sge
or they'll complain about missing dependencies.
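If the meta-package already pulled in the superfluous set, a removal sketch (double-check the list first and keep whatever your own batch system setup actually provides):

 yum remove fedora-usermgmt fedora-usermgmt-core fedora-usermgmt-default-fedora-setup \
     fedora-usermgmt-shadow-utils gridengine libtorque munge munge-libs torque torque-client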
Specific Packages
globus-gram-job-manager-pbs-nikhef
Replaces the standard pbs job manager, adding VOMS-based accounting info, applying the VDT and EDG patches for executable validation, and forcing cwd to $TMPDIR by default to forestall inadvertent home directory use by end-users. The configuration file /etc/globus/globus-pbs.conf is extended with a single variable setting:
vomsinfo=path-to-voms-proxy-info
Tar-ball: http://software.nikhef.nl/temporary/umd-gram5/tgz/globus-gram-job-manager-pbs-nikhef-0.2.src.tgz
Example config:
log_path="/var/spool/pbs/server_logs" pbs_default="stro.nikhef.nl" mpiexec=yes qsub="/usr/bin/qsub" qstat="/usr/bin/qstat" qdel="/usr/bin/qdel" cluster="1" remote_shell="no" cpu_per_node="1" softenv_dir= vomsinfo="/usr/bin/voms-proxy-info"
globus-gram5-glue2-info-providers
The BDII-GIP style information providers for static information in GLUE2 format for the gatekeeper. It makes somewhat arbitrary choices for some of the GLUE2 values.
Tar-ball: http://software.nikhef.nl/temporary/umd-gram5/tgz/globus-gram5-glue2-info-providers-0.2.src.tgz
globus-yaim-gram5
The GIP configuration function, and the gram5 node type that also configures the use of LCAS/LCMAPS, the VOMSES information, and GIP. The node type is full-fledged, including:
 gram5_FUNCTIONS="
 config_vomsdir
 config_vomses
 config_users
 config_vomsmap
 config_mkgridmap
 config_lcas_lcmaps_gt4
 config_gip_gram5_glue2
 config_gip_gram5_glue13
 "
Source: https://ndpfsvn.nikhef.nl/repos/pdpsoft/trunk/nl.nikhef.ndpf.tools/globus-yaim-gram5/
Tar-ball: http://software.nikhef.nl/temporary/umd-gram5/tgz/globus-yaim-gram5-0.3.src.tgz
Notes and caveats
Gatekeeper fails to start on IPv4 only systems
If your system has IPv6 disabled (which is different from it being not configured), then the gatekeeper may fail to start. Re-enable IPv6 in /etc/sysctl.conf:
 net.ipv6.conf.all.disable_ipv6 = 0
 net.ipv6.conf.default.disable_ipv6 = 0
and while you're at it, make sure you have all your nodes IPv6-ready ;-)!