Adding a new VO

== Collect information about the new VO ==

Go to the VO ID card at the operations portal ([http://operations-portal.egi.eu]) to collect information about the new VO.

Relevant information includes the VOMS server(s), the content for the vomses file, possible VOMS roles and groups to be supported, an indication of the number of required pool accounts, etc.
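
A vomses file entry consists of five quoted fields: alias, VOMS host, port, host certificate DN, and VO name. As an illustration only (all values below are hypothetical; take the real line from the VO ID card):

 "newvo" "voms.example.org" "15000" "/O=example/CN=voms.example.org" "newvo"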

== Create pool accounts and the gridmapdir ==

Find a free Unix group ID and user ID range for the pool accounts. This can be done with an ldapsearch query, or more easily with the LDAP browser LBE, which is available on the Nikhef desktops at /global/ices/toolset/bin/lbe.
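
For example, the following query lists the group IDs that are already allocated, so a free range stands out (the LDAP server URI and base DN are hypothetical; substitute the site's values):

 # list all gidNumbers currently in use, in ascending order
 ldapsearch -x -LLL -H ldap://ldap.example.nl -b "dc=example,dc=nl" \
     '(gidNumber=*)' gidNumber | awk '/^gidNumber:/ {print $2}' | sort -n | uniq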

Create pool accounts, home directories for the pool accounts and gridmapdir entries using the procedure described on the following page: [2].

The LFC no longer needs a gridmapdir. The Resource Brokers, WMS servers and DPM servers (head node and disk servers) use a dynamic pool account range (dynXXXXX) that is independent of the VO. This gridmapdir is managed by Quattor and does not need modification when adding a new VO.

== Create a software installation area ==

This section is only needed if the VO requires a software installation area.

The software installation areas are located under /export/data/esia on host hoeve. The areas should be created manually, as user root on hoeve.

 # create the installation area (voname and unixgroup are placeholders)
 mkdir /export/data/esia/voname
 # hand the directory over to the VO's Unix group
 chgrp unixgroup /export/data/esia/voname
 # group read/write/execute plus setgid, so new files inherit the group
 chmod g+wrs /export/data/esia/voname
 # sticky bit: users can only remove their own files
 chmod +t /export/data/esia/voname

If the VO has a group of pool accounts for sgm users, unixgroup should match the group of those sgm users.

== Add the VO configuration to Quattor profiles ==

All modifications to the Quattor setup are located in the template hierarchy under directory $L/cfg, where $L points to the conf-ns directory under the Quattor root directory. The basic VO definition is (by default) independent of the facility; the rest of the configuration is facility-specific.
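
In summary, the templates touched when adding a VO are laid out as follows (placeholders in angle brackets; each file is described below):

 $L/cfg/grid/vo/params/<voname>.tpl
 $L/cfg/grid/vo/voms/<voms-server>.tpl
 $L/cfg/cluster/<facility-name>/site/config/global.tpl
 $L/cfg/cluster/<facility-name>/site/config/torque/queues.tpl
 $L/cfg/cluster/<facility-name>/site/config/maui.tpl
 $L/cfg/cluster/<facility-name>/site/config/ce.tpl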

'''grid/vo/params/''voname''.tpl'''

Configuration of VO settings such as the VOMS server, the Unix pool account and group for a user, the various FQANs for groups and roles, the location of the software installation directory, the default storage element, etc. The FQANs with special roles or groups must be defined in this list before the plain VO name; otherwise there is the risk that the VO name with wild cards is the first match when LCMAPS is executed, implying that all users are mapped to regular pool accounts and the special pool accounts are never used. It is recommended to copy an existing template, rename it and customize its contents. Note that for every VOMS server referenced, a template '''grid/vo/voms/''voms-server''.tpl''' must exist!
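
To illustrate the ordering requirement, consider the mapping rules that are eventually generated from this list; LCMAPS evaluates them top to bottom, so the wildcard entry must come last (all names below are hypothetical):

 # special role first: maps VO software managers to the sgm accounts
 "/newvo/ROLE=lcgadmin" .newvosgm
 # catch-all last: maps everyone else to the regular pool accounts
 "/newvo/*" .newvo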

'''grid/vo/voms/''voms-server''.tpl'''

Definition of the DN of the host certificate of the server, the port number (usually 8443) and the DN of the CA certificate that was used to sign the VOMS server's certificate.
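
As a sketch only (the key names and all values below are hypothetical; the actual schema is defined by the site's Pan templates):

 structure template grid/vo/voms/voms.example.org;
 "host" = "voms.example.org";
 "port" = 8443;
 "cert" = "/O=example/CN=voms.example.org";  # DN of the host certificate
 "ca"   = "/O=example/CN=Example Root CA";   # DN of the CA that signed it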

'''cluster/''facility-name''/site/config/global.tpl''' (was: local/pro_config_lcg2_site.tpl)

This file defines variable VOS, which is a list of all supported VO names.
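
For example, mirroring the list syntax used elsewhere on this page (VO names are hypothetical):

 variable VOS ?= list("alice", "atlas", "newvo");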

'''cluster/''facility-name''/site/config/torque/queues.tpl''' (was: local/pro_config_queues.tpl)

Add the name of the new VO (or one or more of its VOMS groups/roles) to the variable TORQUE_QUEUE_ACCESS, which is the source for generating the Torque ACLs. If a VO name is given, all Unix groups for that VO get access to the queue.
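
The exact structure of TORQUE_QUEUE_ACCESS is defined by the site templates; as a hypothetical sketch, assuming it maps queue names to the VOs (or FQANs) admitted to them:

 # hypothetical: admit the new VO to the "medium" queue
 variable TORQUE_QUEUE_ACCESS = nlist(
     "medium", list("atlas", "newvo")
 );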

'''cluster/''facility-name''/site/config/maui.tpl''' (was: local/pro_config_maui.tpl)

Add a line to the Maui configuration to specify the fair share and priority of the VO or its groups.
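
In Maui, fair share and priority are typically set per group; the resulting maui.cfg line could look like this (group name and values are hypothetical):

 GROUPCFG[newvo] FSTARGET=5 PRIORITY=100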

'''cluster/''facility-name''/site/config/ce.tpl''' (''optional''; was: local/pro_ce_config.tpl)

If VO views are needed for VOMS groups/roles on the CEs, the FQANs can be added to the variable VO_VIEW (which is the source for setting Yaim variables <QUEUE>_GROUP_ENABLE).
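
The resulting Yaim variable carries the queue name as a prefix and lists the VO name and/or FQANs; a hypothetical example:

 MEDIUM_GROUP_ENABLE="newvo /newvo/ROLE=lcgadmin"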

Note: it is no longer necessary to add user and group information to Yaim; the contents of those files are now derived from the VO parameters defined above. With the variable USE_DYNAMIC_POOL_ACCOUNTS=true, all VOs are mapped to the same dynamic pool account range (dyn00000 and up, group dynamic).

=== Supporting a subset of the VOs ===

It is possible to let a host support only a subset of the VOs by defining the variable VOS in the object template before including the machine type template, as in the following example:

 # restrict access to the WMS because it is too popular with the gratis VOs
 variable VOS ?= list( "vo1", "vo2", "vo3" );
 include { 'machine-types/wms' };

This trick is currently used on some WMS and storage systems.


=== Enable support for the CreamCE ===

The CreamCEs require access to an NFS share /data/sourcream hosted by schuur. For every Unix group supported by a CreamCE, there must be a corresponding directory with the name of the group, owned by user tomcat and belonging to that Unix group. The permissions on these directories must be 0770.

These directories must be created manually by user root:

 # popsnl is an example group name; /project/share/cream is presumably the
 # server-side path of the share (mounted as /data/sourcream on the CreamCEs)
 mkdir /project/share/cream/popsnl
 # owner and group may read/write/traverse; no access for others
 chmod 0770 /project/share/cream/popsnl
 # owned by user tomcat, belonging to the VO's Unix group
 chown tomcat:popsnl /project/share/cream/popsnl
 # ls -ld /project/share/cream/popsnl
 drwxrwx--- 3 tomcat popsnl 4096 Mar 16 09:24 /project/share/cream/popsnl

In the absence of the directory, job submission to the CreamCE will fail, and the CreamCE's log file will contain the following error:

 org.glite.ce.creamapi.cmdmanagement.CommandWorker (CommandWorker.java:119) -
 (Worker Thread 36) Worker Thread 36 command failed: the reason is =
 /bin/mkdir: cannot create directory `/data/sourcream/popsnl': Permission
 denied
