Adding a new VO

== Collect information about the new VO ==

Go to the VO ID card at the operations portal ([http://operations-portal.egi.eu]) to collect information about the new VO.

Relevant information includes the VOMS server(s), the content for the vomses file, the VOMS roles and groups to be supported, an indication of the number of required pool accounts, etc.

== Create pool accounts and the gridmapdir ==

Find a free Unix group ID and user ID range for the pool accounts. This can be done with an ldapsearch query or, more easily, with the LDAP browser LBE, which is available on the Nikhef desktops via <tt>/global/ices/toolset/bin/lbe</tt>.
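
A minimal sketch of such a query, assuming the LDAP client defaults (<tt>/etc/openldap/ldap.conf</tt>) point at the site directory and using a placeholder base DN (substitute the site's own; otherwise also add <tt>-H ldap://''server''</tt>):

 # list the group and user IDs already in use, sorted, to spot a free range
 ldapsearch -x -LLL -b "dc=example,dc=org" '(objectClass=posixGroup)' gidNumber \
     | awk '/^gidNumber:/ {print $2}' | sort -n | uniq
 ldapsearch -x -LLL -b "dc=example,dc=org" '(objectClass=posixAccount)' uidNumber \
     | awk '/^uidNumber:/ {print $2}' | sort -n | uniq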

Create pool accounts, home directories for the pool accounts and gridmapdir entries using the procedure described at [http://www.nikhef.nl/pub/projects/grid/gridwiki/index.php/Creating_Pool_Accounts_With_LDAP Creating Pool Accounts With LDAP].

The LFC no longer needs a gridmapdir. The Resource Brokers, WMS servers and DPM servers (head node and disk servers) use a dynamic pool account range (<tt>dynXXXXX</tt>) that is independent of the VO. The gridmapdir on those hosts is managed by Quattor and does not need modification when adding a new VO.

== Create a software installation area ==

This section is only needed if the VO requires a software installation area.

The software installation areas are located under <tt>/export/data/esia</tt> on host hoeve. The areas must be created manually, as user root on hoeve:

 mkdir /export/data/esia/''voname''
 chgrp ''unixgroup'' /export/data/esia/''voname''
 chmod g+wrs /export/data/esia/''voname''
 chmod +t /export/data/esia/''voname''

If the VO has a group of pool accounts for sgm users, ''unixgroup'' should match the group of the sgm users.
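
For example, for a hypothetical VO ''newvo'' whose sgm accounts are in Unix group ''newvosgm'', the end result should look like this:

 mkdir /export/data/esia/newvo
 chgrp newvosgm /export/data/esia/newvo
 chmod g+wrs /export/data/esia/newvo
 chmod +t /export/data/esia/newvo
 ls -ld /export/data/esia/newvo    # expect: drwxrwsr-t, owner root, group newvosgm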

== Add the VO configuration to Quattor profiles ==

All modifications to the Quattor setup are located in the template hierarchy under the directory <tt>$L/cfg</tt>, where <tt>$L</tt> points to the <tt>conf-ns</tt> directory under the Quattor root directory. The basic VO definition is (by default) independent of the facility; the rest of the configuration lives in the facility-specific cluster templates.

'''grid/vo/params/''voname''.tpl'''

This template configures the VO settings: the VOMS server, the Unix pool account and group for a single user, the various FQANs for groups and roles, the location of the software installation directory, the default storage element, etc. '''The FQANs with special roles or groups must be defined in this list before the plain VO name.''' Otherwise there is a risk that the VO name with wild cards is the first match when LCMAPS is executed, which implies that all users are mapped to regular pool accounts and the special pool accounts are never used. It is recommended to copy an existing template, rename it and customize its contents. Note that for every VOMS server referenced, a template '''grid/vo/voms/''voms-server''.tpl''' must exist!
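
To illustrate the ordering requirement only, a minimal sketch is shown below; the field names are invented for illustration, so copy the actual structure from an existing template under grid/vo/params/:

 # sketch only -- the field names are hypothetical; take a real template as the model
 structure template grid/vo/params/newvo;
 'name' = 'newvo';
 'pool_prefix' = 'newvo';        # base name of the regular pool accounts
 'unix_group' = 'newvo';         # primary Unix group of the pool accounts
 # special FQANs first, so LCMAPS matches them before the plain VO name
 'fqans' = list(
     '/newvo/Role=lcgadmin',     # sgm accounts (software installation)
     '/newvo/Role=production',   # production accounts
     '/newvo'                    # plain VO last: regular pool accounts
 );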

'''grid/vo/voms/''voms-server''.tpl'''

Definition of the DN of the host certificate of the server, the port number (usually 8443) and the DN of the CA certificate that was used to sign the VOMS server's certificate.
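
A minimal sketch of such a template, again with invented field names (use an existing template under grid/vo/voms/ for the real structure):

 # sketch only -- copy an existing grid/vo/voms template for the actual field names
 structure template grid/vo/voms/voms.example.org;
 'host' = 'voms.example.org';                           # VOMS server host name
 'port' = 8443;                                         # usually 8443
 'cert_dn' = '/DC=org/DC=example/CN=voms.example.org';  # DN of the host certificate
 'ca_dn' = '/DC=org/DC=example/CN=Example CA';          # DN of the signing CA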

'''cluster/''facility-name''/site/config/global.tpl''' (was: local/pro_config_lcg2_site.tpl)

This file defines the variable <tt>VOS</tt>, the list of all supported VO names.
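
Adding the new VO then amounts to extending this list, for example (the VO names shown are placeholders):

 variable VOS ?= list( "alice", "atlas", "newvo" );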

'''cluster/''facility-name''/site/config/torque/queues.tpl''' (was: local/pro_config_queues.tpl)

Add the name of the new VO (or one or more of its VOMS groups/roles) to the variable <tt>TORQUE_QUEUE_ACCESS</tt>, which is the source for generating the Torque ACLs. If a VO name is given, all Unix groups for that VO get access to the queue.

'''cluster/''facility-name''/site/config/maui.tpl''' (was: local/pro_config_maui.tpl)

Add a line to the Maui configuration to specify the fair share and priority of the VO or its groups.
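
The resulting entry in the generated maui.cfg typically has the following form (the group name, fair-share target and priority are only illustrative values):

 GROUPCFG[newvo]  FSTARGET=5  PRIORITY=100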

'''cluster/''facility-name''/site/config/ce.tpl''' (''optional''; was: local/pro_ce_config.tpl)

If VO views are needed for VOMS groups/roles on the CEs, the FQANs can be added to the variable <tt>VO_VIEW</tt> (which is the source for setting the Yaim variables <QUEUE>_GROUP_ENABLE).

Note: it is no longer necessary to add user and group information to Yaim; the Yaim users and groups files are now derived from the VO parameters defined above. Via the variable <tt>USE_DYNAMIC_POOL_ACCOUNTS=true</tt>, all VOs are mapped to the same dynamic pool account range (dyn00000 and up, group dynamic).

=== Supporting a subset of the VOs ===

It is possible to let a host support only a subset of the VOs by defining the variable <tt>VOS</tt> in its object template before including the machine type template, as in the following example:

 # restrict access to the WMS because it is too popular with the gratis VOs
 variable VOS ?= list( "vo1", "vo2", "vo3" );
 include { 'machine-types/wms' };

This trick is currently used on some WMS and storage systems.


=== Enable support for the CreamCE ===

The CreamCEs require access to an NFS share, <tt>/data/sourcream</tt>, hosted by schuur. For every Unix group supported by a CreamCE, there must be a corresponding directory with the name of the group, owned by user tomcat and belonging to that Unix group. The permissions on these directories must be 0770.

These directories must be manually created by user root:

 mkdir /project/share/cream/popsnl
 chmod 0770 /project/share/cream/popsnl
 chown tomcat:popsnl /project/share/cream/popsnl
 # ls -ld /project/share/cream/popsnl
 drwxrwx--- 3 tomcat popsnl 4096 Mar 16 09:24 /project/share/cream/popsnl
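
When several groups have to be enabled at once, a small shell loop avoids repeating the commands (the group list below is only an example):

 # create a CreamCE share directory for each supported Unix group
 for grp in newvo newvosgm newvoprd; do
     mkdir -p /project/share/cream/$grp
     chown tomcat:$grp /project/share/cream/$grp
     chmod 0770 /project/share/cream/$grp
 done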

In the absence of the directory, job submission to the CreamCE will fail, and the CreamCE's log file will contain an error like the following:

 org.glite.ce.creamapi.cmdmanagement.CommandWorker (CommandWorker.java:119) -
 (Worker Thread 36) Worker Thread 36 command failed: the reason is =
 /bin/mkdir: cannot create directory `/data/sourcream/popsnl': Permission
 denied
