Adding a new VO
Collect information about the new VO
Go to the CIC portal ([1]) to collect information about the new VO.
Relevant information includes the VOMS server(s), the contents for the vomses file, the VOMS roles and groups to be supported, an indication of the number of required pool accounts, etc.
Create pool accounts and the gridmapdir
Find a free Unix group ID and user ID range for the pool accounts. This can be achieved via an ldapsearch query, or more easily by using the LDAP browser LBE. LBE is available on the Nikhef desktops as /global/ices/toolset/bin/lbe.
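For example, a query along the following lines lists the user and group IDs that are already in use; the LDAP host and base DN below are placeholders and must be replaced by the actual directory settings:

 # List the numeric user and group IDs currently in use (host and base DN are placeholders)
 ldapsearch -x -LLL -H ldap://ldap.example.nikhef.nl -b "dc=nikhef,dc=nl" \
     '(objectClass=posixAccount)' uidNumber gidNumber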
Create pool accounts, home directories for the pool accounts and gridmapdir entries using the procedure described at the following page: [[2]].
The LFC no longer needs a gridmapdir. The Resource Brokers and DPM servers (head node and disk servers) use a dynamic pool account range (dynXXXXX) that is independent of the VO. This gridmapdir is managed by Quattor and does not need modification when adding a new VO.
Create a software installation area
This section is only needed if the VO requires a software installation area.
The software installation areas are located under /export/data/esia on host hoeve. The areas should be created manually, as user root on hoeve:
 mkdir /export/data/esia/voname
 chgrp unixgroup /export/data/esia/voname
 chmod g+wrs /export/data/esia/voname
 chmod +t /export/data/esia/voname
If there is a group of pool accounts for the sgm users of the VO, unixgroup should match the group of those sgm users.
Add the VO configuration to Quattor profiles
All modifications to the Quattor setup are located in the template hierarchy under directory $L/cfg, where $L points to the conf-ns directory under the Quattor root directory. The basic VO definition is (by default) independent of the facility; the rest of the configuration is specific to each facility.
grid/vo/params/voname.tpl
Configuration of VO settings such as the VOMS server, the contents of the vomses file, the location of the software installation directory, the default storage element, etc. It is recommended to copy an existing template, rename it and customize its contents.
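As a rough sketch only (the variable names and structure below are assumptions; take the real ones from the copied template), such a template sets values along these lines:

 # Hypothetical grid/vo/params template for a new VO "newvo"
 template grid/vo/params/newvo;

 variable VO_NAME ?= 'newvo';
 variable VO_VOMS_SERVER ?= 'voms.example.org';      # VOMS server host (placeholder)
 variable VO_VOMSES_PORT ?= 15000;                   # port for the vomses file entry (placeholder)
 variable VO_SW_DIR ?= '/export/data/esia/newvo';    # software installation area
 variable VO_DEFAULT_SE ?= 'se.example.org';         # default storage element (placeholder)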
facility/facility-name/local/pro_config_lcg2_site.tpl
This file defines the variable VOS, which is a list of all supported VO names. Also, the name of the VOMS server can be added to the variable VOMS_SERVER_CERTIFICATES to install the certificate of the VOMS server.
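For illustration only (VO and host names are placeholders; in practice the existing definitions in the file are extended rather than replaced):

 variable VOS = list('atlas', 'newvo');                          # existing VOs plus the new VO
 variable VOMS_SERVER_CERTIFICATES = list('voms.example.org');   # VOMS server of the new VO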
facility/facility-name/local/pro_config_yaim_users.tpl
Add one line per pool account group associated with the VO to the definition of the variable USERSCONF. This line should refer to the first pool account in the group only (because we do not use Yaim to create these users).
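A hedged example in the usual Yaim users.conf notation (UID:LOGIN:GID(s):GROUP(s):VO:FLAG:), with all numeric IDs and account names as placeholders; note that only the first account of each pool is listed:

 40001:newvo001:40000:newvo:newvo::
 40101:newvosgm001:40100,40000:newvosgm,newvo:newvo:sgm: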
facility/facility-name/local/pro_config_yaim_groups.tpl
Add all supported VOMS roles and groups to variable GROUPSCONF.
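For example, in the usual Yaim groups.conf notation (the FQANs below are placeholders for whatever groups and roles the VO actually supports):

 "/newvo"::::
 "/newvo/ROLE=lcgadmin":::sgm:
 "/newvo/ROLE=production":::prd: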
facility/facility-name/local/pro_config_queues.tpl
Add the name of the new VO to the appropriate queues in the variables QUEUE_ACCESS and QUEUE_GROUP_ENABLE (both of type nlist). The former defines the ACLs for Torque; the latter defines the VO views in the information system and may also contain specific VOMS groups or roles.
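A hedged sketch (queue names, VO names and FQANs are placeholders; extend the existing definitions rather than replacing them):

 variable QUEUE_ACCESS = nlist(
     'short', list('atlas', 'newvo'),
     'long',  list('atlas', 'newvo')
 );
 variable QUEUE_GROUP_ENABLE = nlist(
     'short', list('atlas', 'newvo'),
     'long',  list('atlas', 'newvo', '/newvo/ROLE=production')
 );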
facility/facility-name/local/pro_config_maui.tpl
Add a line to the Maui configuration to specify the fair share and priority of the VO.
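A typical Maui fair-share entry looks like the line below; the fair-share target and priority values are placeholders and should be chosen consistently with the existing entries in the file:

 GROUPCFG[newvo] FSTARGET=5 PRIORITY=100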
In some cases, e.g. when adding the new VO to some of our RBs and/or DPMs, additional steps have to be taken
facility/facility-name/profiles/profile_RB-Name.tpl
Add the name of the new VO to the list variable VOS.
facility/facility-name/local/pro_config_yaim_users_dynamic.tpl
Add the line 0:dyn00000:2050:dynamic:VO-Name:: (where VO-Name is the name of the new VO).