GLExec


gLExec is a program to make the required mapping between the grid world and the Unix notion of users and groups, and it has the capacity to enforce that mapping by modifying the uid and gids of running processes. Based on LCAS and LCMAPS, it can both act as a light-weight 'gatekeeper' replacement and be used on the worker node in late-binding (pilot job) scenarios. Through the LCMAPS SCAS client, a central mapping and authorization service (SCAS, or any interoperable SAML2-XACML2 service) can be used.

For a service running under a 'generic' uid, such as a web-services container, gLExec provides a way to escape from that container uid. It may be used similarly by externally managed services run at a site's edge. Lastly, in a late-binding scenario, the identity of the workload owner can be set at the instant the job starts executing.

The description, design and caveats are given in the paper presented at the CHEP conference.

Local services, in particular computing services offered on Unix [5] and Unix-like platforms, use a different native representation of the user and group concepts. In the Unix domain, these are expressed as (numeric) identifiers, where each user is assigned a user identifier (uid) and one or more group identifiers (gids). At any one time, a single gid will be the 'primary' gid (pgid) of a particular process; this pgid is initially used for group-level process (and batch system) accounting. The uid and gid representation is local to each administrative domain.

Batch system interoperability

When used on a worker node (in a late binding pilot job scenario), gLExec attempts really hard to be neutral to its OS environment. In particular, gLExec will not break the process tree, and will accumulate CPU and system usage times from the child processes it spawns. We recognize that this is particularly important in the gLExec-on-WN scenario, where the entire process (pilot job and target user processes) should be managed as a whole by the node-local batch system daemon.
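
As a small illustration of the underlying Unix mechanism (this is not gLExec code): a parent that waits for its children has their CPU and system times rolled up into its own accounting, which getrusage(2) exposes via RUSAGE_CHILDREN.

/* demonstrate child CPU-time accumulation in the parent's accounting */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <sys/resource.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        /* child: burn a little CPU as a stand-in for the target job */
        volatile unsigned long x = 0;
        unsigned long i;
        for (i = 0; i < 200000000UL; i++) x += i;
        _exit(0);
    }
    if (pid < 0) { perror("fork"); return 1; }

    /* parent: wait for the child, then read the usage of terminated children */
    if (waitpid(pid, NULL, 0) < 0) { perror("waitpid"); return 1; }

    struct rusage ru;
    if (getrusage(RUSAGE_CHILDREN, &ru) != 0) { perror("getrusage"); return 1; }
    printf("children: user %ld.%06ld s, system %ld.%06ld s\n",
           (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec,
           (long)ru.ru_stime.tv_sec, (long)ru.ru_stime.tv_usec);
    return 0;
}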

You are encouraged to verify OS and batch system interoperability. In order to do that, you have two options:

  • Comprehensive testing: Ulrich Schwickerath has defined a series of (partially CERN-specific) tests to verify that gLExec does not break the batch system setup of a site. He has extensively documented his efforts on the Wiki at https://twiki.cern.ch/twiki/bin/view/FIOgroup/FsLSFGridglExec. Note that the Local Tools section is CERN-specific. If you use other tools to clean up the user's work area (such as the $tmpdir facility of PBSPro and Torque), or use the PruneUserproc utility to remove stray processes, you are not affected by this.
  • Basic OS and batch-system testing can be done even without installing gLExec, by just compiling a simple C program with one hard-coded uid for testing; a minimal sketch of such a program is given after this list. This is the fastest solution for testing, but it only verifies that your batch system reacts correctly, not that your other grid-aware system scripts will work as you expect.
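
A minimal sketch of such a test program (the uid and gid values and the default payload are placeholders to adapt to your site):

/* switch to a hard-coded test uid/gid and run a payload in the same
   process, so the process tree stays intact and the batch system can
   be checked for correct accounting and job-kill behaviour */
#include <stdio.h>
#include <unistd.h>
#include <grp.h>
#include <sys/types.h>

int main(int argc, char *argv[])
{
    uid_t target_uid = 40001;   /* hypothetical test uid: adjust for your site */
    gid_t target_gid = 40001;   /* hypothetical test gid: adjust for your site */

    /* drop supplementary groups first, then the gid, then the uid */
    if (setgroups(0, NULL) != 0) { perror("setgroups"); return 1; }
    if (setgid(target_gid) != 0) { perror("setgid"); return 1; }
    if (setuid(target_uid) != 0) { perror("setuid"); return 1; }

    /* exec the payload under the new identity, without forking */
    if (argc > 1) {
        execvp(argv[1], &argv[1]);
        perror("execvp");
        return 1;
    }

    /* default payload: sleep long enough to inspect and kill the job */
    execlp("sleep", "sleep", "600", (char *)NULL);
    perror("execlp");
    return 1;
}

Build this on a single test node, make it setuid root there (it needs privilege to change the uid), submit a job that runs it, and verify that the batch system still accounts the payload's CPU time to the job and kills the full process tree when you delete the job.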

The following batch systems are known to be compatible with gLExec-on-the-Worker-Node:

  • Torque, all versions
  • OpenPBS, all versions
  • Platform LSF, all versions
  • BQS, all versions
  • Condor, all versions

If you notice any anomalies after testing, e.g. a job that will not die, please notify the developers at grid dash mw dash security at nikhef dot nl.

Deploying gLExec on the worker node

Using generic per-node pool accounts or a shared map database

The preferred way to deploy gLExec on the worker node is by using (VO-agnostic) generic pool accounts that are local to each worker node. This way, you can be sure that a gLExec'ed job does not "escape" from the node, and it limits the number of pool accounts needed. For this configuration, you:

  • create at least as many pool accounts as you have job slots on a WN
  • assign a worker node local gridmapdir (suggestion: /var/local/gridmapdir)
  • create local pool accounts with a local home directory (suggestion: account names wnpool00 etc., and home directories in a local file system that has enough space, e.g., /var/local/home/wnpool00, etc.)
  • configure the lcmaps.db configuration used by gLExec to refer to this gridmapdir (a minimal sketch is given after this list)
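
For the last step, a minimal sketch of the relevant lcmaps.db fragment, assuming the standard verify-proxy, pool-account and posix-enforcement plugins; the module path, plugin names and options should be checked against your installed LCMAPS version:

path = /opt/glite/lib64/modules

verify_proxy = "lcmaps_verify_proxy.mod"
               " -certdir /etc/grid-security/certificates/"

pool_account = "lcmaps_poolaccount.mod"
               " -gridmap /etc/grid-security/grid-mapfile"
               " -gridmapdir /var/local/gridmapdir"

posix_enf = "lcmaps_posix_enf.mod"

# policies
glexec_get_account:
verify_proxy -> pool_account
pool_account -> posix_enf

The -gridmapdir option is what points LCMAPS at the node-local gridmapdir suggested above.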

Note that the /var/run/glexec directory is used to maintain the mapping between the target and the originator account, for easy back-mapping of running jobs. This information is of course also logged via syslog(3).

If you prefer shared pool accounts, you can use a shared atomic state database (implemented as an NFS directory) to host the gridmapdir. All operations on the gridmapdir are atomic, even over NFS, and this scales well (NFS is still the file-sharing mechanism of choice for many large installations).
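
The atomicity comes from standard POSIX semantics: creating a hard link either succeeds or fails as a whole, also over NFS. A small illustrative sketch of that style of claim (the file names and the encoded user key are made up; LCMAPS manages the real gridmapdir layout):

/* try to lease a pool-account entry by hard-linking it to a per-user
   name in the same gridmapdir; link(2) is atomic, also over NFS, so
   two nodes can never complete the same claim twice */
#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>

static int claim(const char *pool_entry, const char *user_key)
{
    if (link(pool_entry, user_key) == 0)
        return 0;                      /* lease acquired */
    if (errno == EEXIST)
        return 1;                      /* this user key is already linked */
    fprintf(stderr, "link: %s\n", strerror(errno));
    return -1;                         /* other error */
}

int main(void)
{
    /* both names live in the (possibly NFS-shared) gridmapdir */
    int rc = claim("/var/local/gridmapdir/wnpool00",
                   "/var/local/gridmapdir/%2fdc%3dexample%2fcn%3dsomeuser");
    printf("claim returned %d\n", rc);
    return rc < 0 ? 1 : 0;
}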

Detailed documentation is given at http://www.nikhef.nl/grid/lcaslcmaps/glexec/glexec-install-procedure.html.

Using the SCAS

If you prefer to use LCMAPS with the SCAS service, add the scas-client plugin to the set of RPMs, and configure the SCAS client. You would add to /opt/glite/etc/lcmaps/lcmaps-glexec.db:

scasclient = "lcmaps_scas_client.mod"
            " -capath /etc/grid-security/certificates/"
            " -endpoint https://graszaad.nikhef.nl:8443"
            " -resourcetype wn"
            " -actiontype execute-now"

and the following policy execution flow at the end:

# policies
glexec_get_account:
verify_proxy  -> scasclient
scasclient -> posix_enf


Using gLExec in a pilot job framework

When you use gLExec with transient directories and input sandboxes, it is important that you create a writable directory for the target job, and that you do this in a safe and portable way. We provide a proof-of-principle implementation of how to create such a directory, and clean up after yourself, here:
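
As an illustration only (this is not the proof-of-principle implementation referred to above), a minimal sketch that assumes it is run by the wrapper gLExec has already started under the target identity, so the new directory is owned by the target uid from the start:

/* create a private, writable work directory for the target job and
   remove it again when done; mkdtemp(3) creates it with mode 0700 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    const char *base = getenv("TMPDIR");   /* honour the batch system's scratch area */
    if (base == NULL || *base == '\0')
        base = "/tmp";

    char template[4096];
    snprintf(template, sizeof(template), "%s/glexec_job_XXXXXX", base);

    if (mkdtemp(template) == NULL) {
        perror("mkdtemp");
        return 1;
    }
    printf("%s\n", template);   /* hand the path to the payload, e.g. as its cwd */

    /* ... run the payload with its working directory set to 'template' ... */

    /* clean up after yourself; a real wrapper would remove the tree recursively */
    if (rmdir(template) != 0)
        perror("rmdir");
    return 0;
}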

Manual and documentation