RCauth Delegation Server & MasterPortal - Ansible scripts
Introduction

This page describes the ansible scripts we provide for setting up either a Master Portal or a Delegation Server, both part of the RCauth landscape. Details on how to use the ansible scripts can be found in the README files of the respective github repositories. The sections below instead describe the overall layout and design of the scripts.

General Structure

The ansible deployment scripts are structured into what ansible calls plays. In our deployment scripts we use a single play to configure a single host of a specific kind. Ansible plays are made up of roles; we use roles as a means of grouping tasks that logically belong together.
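
As an illustration, a play of this shape is essentially a host pattern plus an ordered list of roles; a minimal sketch (the play name and host group are illustrative, not the literal contents of the repositories):

  - name: Configure a credential store host
    hosts: credstore        # host group from the inventory file
    become: true            # the roles need root privileges
    roles:
      - common              # load shared variables
      - basic               # generic host setup
      - credstore           # component-specific tasks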

Input is given to the deployment scripts by overriding existing variables. Each ansible role contains two sets of default variables:

  • roles/x/vars/main.yml holds OS specific variables (like package names, repositories, service names, etc.). These are only used within a role (x), and they should not be overridden by variables defined inside a play (see variable precedence: http://docs.ansible.com/ansible/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable). However, in order to support different OS versions, there are separate OS specific variable files under roles/x/vars/, like rh7.yml or rh6.yml. Depending on the OS version you are using for your deployment, define roles/x/vars/main.yml as a symlink to one of these OS specific variable files, as shown after this list.
  • roles/x/defaults/main.yml holds application specific variables (like configuration options, paths, etc.). These are only used within a role (x), and they may be overridden by variables defined inside a play.
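
The vars/main.yml symlink mentioned in the first item can be set with a single command, for example for a CentOS 7 deployment (shown here for the basic role; repeat for each role you use):

  ln -sf rh7.yml roles/basic/vars/main.yml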

The variables set in roles/x/defaults/main.yml are overridden by files in a subdirectory of the top-level config/ directory; see config/PLACEHOLDER/x_env.yml for example files for each role (x). For a specific deployment, a new subdirectory needs to be created and defined as configdir in the inventory file; see hosts.inventory.PLACEHOLDER for an example.
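
A sketch of what such an inventory file might contain (hostname and configdir value are placeholders, and whether the config/ prefix is part of the value is an assumption, so follow hosts.inventory.PLACEHOLDER as the authoritative example):

  [masterportal]
  mp.example.org

  [all:vars]
  # subdirectory of config/ holding your site-specific *_env.yml files
  configdir=mysite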

Generating Secrets

Both the Master Portal and the Delegation Server setups require several different passwords for different components (database, rsync, keystore). To ease the pain of making up new passwords we provide an extra play that can be executed to generate the set of required passwords. The passwords are created with the openssl 'rand' command, with special characters omitted. The play generating new passwords runs on your local machine and creates a file called secrets_env.yml in the current directory. This file is then used as input for the Master Portal play and the Delegation Server play respectively. If you do not want to use our password generator, make sure to fill secrets_env.yml manually with all the required passwords!
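
The generated secrets_env.yml is plain YAML mapping variable names to random strings; a hypothetical sketch (the key names here are assumptions, the roles define the actual ones):

  # secrets_env.yml -- illustrative keys and values only
  mysql_password: "A3f9kQ7xL2mP"
  rsync_password: "Zt6wR8nB4vY1"
  keystore_password: "Qp5sK9jH2dX7"

Each value is in the spirit of something like openssl rand -base64 12 with the special characters stripped.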

For further details on how to use this play, see the README files.

Master Portal

The ansible scripts for deploying a Master Portal can be found on github as part of the rcauth-eu organization. You can use these scripts to deploy a Master Portal, Credential Store and SSH host on separate hosts, or on a single host. They assume a basic CentOS 6 or 7 installation.

git clone https://github.com/rcauth-eu/aarc-ansible-master-portal.git

Refer to the README there for instructions on how to use the set of scripts. The remainder of this section contains background information on the layout of the scripts.

Roles

common

This 'role' only loads variables common to all roles; it does not contain actual tasks.

basic

The basic role tries to cover the general environment setup that is needed for all components. The tasks belonging to the basic role cover the following configurations:

  • basic host setup (hostname, selinux, sshd)
  • install / update required packages and repositories (repos, yum)
  • configure access to the machine and ntp (network, iptables, access, ntpd)
  • deploy host credentials and grid related services (fetchcrl, hostcreds, vomses)
  • enable/disable services

credstore

The credstore role takes care of deploying and configuring any Credential Store specific software. The tasks belonging to the credstore role cover the following:

  • install and configure MyProxy Server as a Credential Store (myproxy-credstore)
  • install the relevant IGTF CA distributions (deploy-igtf-distrib)
  • add myproxy_purge for expired/revoked certificates (purger)
  • start/restart myproxy-server

masterportal

The masterportal role takes care of deploying and configuring the Master Portal, VO-portal, SSH-portal and their dependencies. The tasks belonging to the masterportal role cover the following:

  • install and configure java, tomcat, apache-httpd (java, tomcat, httpd)
  • install and configure mysql or mariadb (mysql); on RH6 mysql is used, while on RH7 mariadb is used instead. Note that postgres is not yet supported but might work.
  • install mysql backup script and cronjob (mysql-backup)
  • install keyutil for creating key- and truststores (keyutil)
  • install and configure Master Portal Client and Server parts (oa4mp-client, oa4mp-server)
  • install and configure SSH portal for uploading SSH keys (optional; can be disabled by leaving sshkey_portal undefined)
  • install and configure VO Portal for testing proxy retrieval (optional; can be disabled by leaving vo_portal undefined, see the sketch after this list)
  • start/restart the relevant services (tomcat, httpd and mariadb/mysql)
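
Since the optional portals are toggled by whether their variable is defined, a hedged sketch of the relevant fragment of a site-specific masterportal_env.yml override (the variable values shown are illustrative):

  # config/mysite/masterportal_env.yml -- fragment
  # leave these commented out (undefined) to disable the portals:
  #sshkey_portal: ssh-portal
  #vo_portal: vo-portal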

sshhost

This role takes care of setting up the host that will run the ssh server used for retrieving proxy certificates on the command line. The tasks belonging to the sshhost role cover the following:

  • create the proxy user account (proxyuser)
  • configure the hostcert/key for the proxy user (proxycreds)
  • copy the AuthorizedKeysCommand (authz_cmd) and myproxy_cmd scripts and their dependencies (scripts)
  • update the sshd config file and restart sshd (sshd); see the sketch after this list
  • update the access.conf file (access)
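
Conceptually, the sshd part boils down to a few lines in /etc/ssh/sshd_config; a hedged sketch, assuming the proxy account is called proxyuser and the AuthorizedKeysCommand script is installed as /usr/local/bin/authz_cmd (both names are assumptions, the role's templates are authoritative):

  # /etc/ssh/sshd_config -- fragment, illustrative names and paths
  Match User proxyuser
      AuthorizedKeysCommand /usr/local/bin/authz_cmd
      AuthorizedKeysCommandUser nobody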

Plays

The ansible scripts include plays for each host type.
Each play will first apply the common and basic roles, followed by either the credstore, masterportal or sshhost role.
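
Each play is run with ansible-playbook against your inventory file, for example (the inventory and play file names depend on your setup):

  ansible-playbook -i hosts.inventory.mysite masterportal.yml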

Delegation Server

The ansible scripts for deploying a Delegation Server can be found on github as part of the rcauth-eu organization. Note that these ansible scripts will only deploy the Delegation Server and optionally a demo Online CA; if you want to deploy a proper Online CA as well, take a look at the deployment scripts at http://ndpfsvn.nikhef.nl/viewvc/pdpsoft/trunk/eu.rcauth.pilot-ica/CA/. These scripts are consequently less general than those for the Master Portal.

git clone https://github.com/rcauth-eu/aarc-ansible-delegation-server.git

Refer to the README there for instructions on how to use the set of scripts. The remainder of this section contains background information on the layout of the scripts.

Physical layout

It is recommended to dedicate a full physical machine with two separate network interfaces to the Delegation Server. One of the interfaces, which we call the public_interface, is the outward-facing interface for incoming requests. The second, the private_interface, is a dedicated connection to the back-end CA. It is assumed that both the Delegation Server and the back-end CA are kept in a safe and secure environment, with nothing between them other than a direct network connection. Consult the RCauth.eu CP/CPS (https://www.rcauth.eu/) for more details on recommended safety measures.
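
To make this concrete: besides public_interface and private_interface named above, the private link is described by a handful of companion variables. A sketch with illustrative values (only the two interface variables are named in the text above; the other names are plausible companions, so treat them as assumptions and set everything in your site-specific configuration):

  public_interface: enp4s0           # outward-facing NIC
  private_interface: enp8s0          # NIC wired to the back-end CA
  private_network: 192.168.1.0/24    # dedicated CA link
  private_ds_address: 192.168.1.254  # Delegation Server end of the link
  private_ca_address: 192.168.1.1    # back-end CA end of the link
  private_domain: canet
  private_ds_hostname: frontend
  private_ca_hostname: ca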

Roles

common

This 'role' only loads variables common to all roles; it does not contain actual tasks.

basic

The basic role tries to cover the general environment setup that is needed for all components. The tasks belonging to the basic role cover the following configurations:

  • basic host setup (hostname, sshd)
  • install / update required packages and repositories (epelrepo, yum)
  • configure access to the machine and ntp (network, iptables, access, ntpd)
  • enable/disable services

cafrontend

The cafrontend role takes care of deploying a basic environment for the Delegation Server in the context of it being a frontend for the back-end CA. The tasks belonging to the cafrontend role cover the following:

  • configure the internal network (ca-network)
  • configure services related to the internal network (dnsmasq, postfix, squid, ca-sshd)
  • enable services

democa

This role sets up a fully functional but demonstration-only back-end CA. When using a 'real' HSM-based CA, this is the place to configure it (i.e. after the cafrontend role). The tasks belonging to the democa role cover the following:

  • configure basic services (hosts, nameserver, yum_squid, postfix, rsyslog)
  • setup private CA for TLS on private net (canet_ca)
  • setup backup with rsync over ssh via cronjob (rsync_sshkey, backup)
  • configure myproxy-server as CA, export relevant parts to frontend (myproxy, hostcreds, democa_tarball)

delegserver

The delegserver role takes care of deploying and configuring the Delegation Server and its dependencies once the rest is set up. The tasks belonging to the delegserver role cover the following:

  • finalize setup for communication with back-end CA (ca-sshd, ca-backup, ca_checker)
  • install and configure main services making up the Delegation Server (java, tomcat, httpd)
  • install and configure mariadb (mysql). Note that postgres is not yet supported but might work.
  • install and configure Shibboleth service provider (shib)
  • install and configure backups (rsyncd, mysql-backup)
  • install and configure the Delegation Server tomcat servlet and cli (oa4mp-server)
  • configure the web page, e.g. for CRL production (webroot)
  • configure the credentials, certificate directories etc. (hostcreds-tomcat, hostcreds-apache)
  • start/restart relevant services

Plays

There are three main ansible scripts: two for setting up the delegation server and one for setting up a demo CA. The delegation server is set up in two stages: one part before the back-end CA is set up, and one part afterwards.

  • The cafrontend play, being the first part of the Delegation Server setup, will apply the common, basic and cafrontend roles.
  • The demoCA play will apply the common, basic (unless co-hosted with the Delegation Server, which is ok for a demo CA) and democa roles.
  • The delegserver play, being the second part of the Delegation Server setup, will apply only the common and delegserver roles, since the basic role has already been applied in the cafrontend play.
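
Schematically, and assuming the playbooks are named after the plays (the actual filenames in the repository may differ), a full demo deployment is then:

  ansible-playbook -i hosts.inventory.mysite cafrontend.yml
  ansible-playbook -i hosts.inventory.mysite democa.yml       # demo CA only
  ansible-playbook -i hosts.inventory.mysite delegserver.yml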

In the case of a full back-end CA, you would not run the second (democa) playbook, but would instead install the actual CA at that point.