RCauth Delegation Server & MasterPortal - Ansible scripts
 
= Introduction =
 
This page describes the ansible scripts we provide for setting up either a MasterPortal or Delegation Server, both part of the RCauth landscape. Details on how to use the ansible scripts can be found in the README files in the respective github repositories. These sections describe instead the overall layout and design of the scripts.
  
== General Structure ==
 
 
The ansible deployment scripts are structured into what ansible calls [http://docs.ansible.com/ansible/playbooks.html plays]. In our deployment scripts we use a single play to configure a single host of a specific kind. Ansible plays are made up of [http://docs.ansible.com/ansible/playbooks_roles.html#roles roles]. We use roles as a means of grouping tasks that logically belong together.
  
Input is given to the deployment scripts by overriding existing variables. Each ansible role contains two sets of default variables:
  
* '''roles/x/vars/main.yml''' holds OS specific variables (like package names, repositories, service name, etc...). These are only used within a role (x), and they '''should not''' be overridden by variables defined inside a play (see [http://docs.ansible.com/ansible/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable variable precedence]). However, in order to support different OS versions, we have separate OS specific variable files under roles/x/vars/, like rh7.yml or rh6.yml. Depending on the OS version that you are using for your deployment, you should define the roles/x/vars/main.yml as a symlink to one of the existing OS specific variable files.
  
* '''roles/x/defaults/main.yml''' holds application specific variables (like configuration options, paths, etc...). These are only used within a role (x), and they '''may''' be overridden by variables defined inside a play.
  
The variables set in roles/x/defaults/main.yml are overridden by files in a subdirectory of the top-level config/ directory. See config/PLACEHOLDER/x_env.yml for example files for each role (x). For a specific deployment a new subdirectory needs to be created and defined as configdir in the inventory file, see hosts.inventory.PLACEHOLDER for an example.
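As an illustration, overriding a role default could look as follows (the variable name and the deployment directory here are made up for the example; the real variables are listed in the PLACEHOLDER files):

 # roles/x/defaults/main.yml -- role defaults, may be overridden
 example_admin_email: root@localhost

 # config/mydeployment/x_env.yml -- deployment-specific override
 example_admin_email: grid.support@example.org

The override file only needs to list the variables that differ from the role defaults.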
  
 
== Generating Secrets ==
 
 
Both the Master Portal and the Delegation Server setup require several different passwords for different components (database, rsync, keystore). To ease the pain of making up new passwords we provide an extra play that can be executed to generate a set of required passwords. The passwords are created with the openssl 'rand' command with special characters omitted. The play generating new passwords runs on your local machine and creates a file called '''secrets_env.yml''' in the current directory. This file is then used as input for the [[#Plays | Master Portal play]] and the [[#Plays_2 | Delegation Server play]] respectively. In case you don't want to use our password generator, make sure to fill secrets_env.yml manually with all the required passwords!
 
For further details on how to use this play, see the README files.
 
 
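For instance, running the play locally

 ansible-playbook secrets.yml

leaves a flat YAML file of name/value pairs in the current directory, along these lines (the variable names shown are illustrative; the secrets.yml play defines the actual list):

 # secrets_env.yml (illustrative variable names)
 mysql_password: "..."
 rsync_password: "..."
 keystore_password: "..."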
 
  
 
= Master Portal =
 
The ansible scripts for deploying a Master Portal can be found on [https://github.com/rcauth-eu/aarc-ansible-master-portal github] as part of the rcauth-eu organization. You can use these scripts to deploy a Master Portal, Credential Store and SSH host on separate hosts, or on a single host. They assume a basic CentOS 6 or 7 installation.
  
 
  git clone https://github.com/rcauth-eu/aarc-ansible-master-portal.git
 
  
Refer to the README there for instructions on how to use the set of scripts. The remainder of this section contains background information on the layout of the scripts.
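To give an idea of the layout, a two-host inventory might look roughly like this (group and host names are illustrative; the hosts.inventory file in the repository shows the actual expected structure):

 # hosts.inventory (illustrative)
 [masterportal]
 mp.example.org

 [credstore]
 cs.example.org

 [all:vars]
 configdir=config/mydeployment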
 
 
 
  
 
== Roles ==
 
  
=== common ===

This 'role' only loads variables common to all roles; it does not contain actual tasks.

=== basic ===

The basic role covers the general environment setup that is needed for all components. The tasks belonging to the basic role cover the following configurations:

* basic host setup (hostname, selinux, sshd)
* install / update required packages and repositories (repos, yum)
* configure access to the machine and ntp (network, iptables, access, ntpd)
* deploy host credentials and grid related services (fetchcrl, hostcreds, vomses)
* enable/disable services
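The host-certificate trust anchor, for example, is driven by variables in the role's environment file. A sketch (the variable names are those used by these scripts; the values are examples only):

 # basic_env.yml (example values)
 # package installing the trust anchor of the CA that issued the host cert
 hostcred_trust_anchor_package: ca_TERENA-eScience-SSL-CA-3
 # or, for a CA not available as a package, PEM + signing policy files
 # placed under roles/basic/files:
 #hostcred_trust_anchor_pem: myca.pem
 #hostcred_trust_anchor_signing_policy: myca.signing_policy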
  
 
=== credstore ===
 
 
The credstore role takes care of deploying and configuring any Credential Store specific software. The tasks belonging to the credstore role cover the following:
 
  
* install and configure MyProxy Server as a Credential Store (myproxy-credstore)
* install the relevant IGTF CA distributions (deploy-igtf-distrib)
* add myproxy_purge for expired/revoked certificates (purger)
* start/restart myproxy-server
 
  
=== masterportal ===

The masterportal role takes care of deploying and configuring the Master Portal, VO-portal, SSH-portal and their dependencies. The tasks belonging to the masterportal role cover the following:

* install and configure java, tomcat, apache-httpd (java, tomcat, httpd)
* install and configure mysql or mariadb (mysql). Note: postgres is not yet supported but might work. On RH6 mysql is used, while on RH7 mariadb is used instead.
* install mysql backup script and cronjob (mysql-backup)
* install keyutil for creating key- and truststores (keyutil)
* install and configure the Master Portal Client and Server parts (oa4mp-client, oa4mp-server)
* install and configure the SSH portal for uploading SSH keys (optional, can be disabled by undefining sshkey_portal)
* install and configure the VO Portal for testing proxy retrieval (optional, can be disabled by undefining vo_portal)
* start/restart the relevant services: tomcat, httpd and mariadb/mysql
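For example, the optional portals are toggled by whether their variables are defined in the play's environment file (the values here are placeholders; check the role's variable files for the real semantics):

 # masterportal_env.yml (illustrative)
 vo_portal: "vo-portal"        # define to deploy the VO Portal
 #sshkey_portal: "ssh-portal"  # leave undefined to skip the SSH portal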
  
=== sshhost ===

This role takes care of installing the host that will run the ssh server for getting proxy certificates on the command line. The tasks belonging to the sshhost role cover the following:

* create the proxy user account (proxyuser)
* configure the hostcert/key for the proxy user (proxycreds)
* copy the AuthorizedKeysCommand (authz_cmd) and myproxy_cmd scripts and their dependencies (scripts)
* update the sshd config file and restart sshd (sshd)
* update the access.conf file (access)
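The sshd part boils down to pointing OpenSSH at the AuthorizedKeysCommand script; a sketch of the resulting sshd_config fragment (the paths and user name are illustrative, the role's templates contain the real configuration):

 # sshd_config fragment (illustrative)
 Match User proxyuser
     AuthorizedKeysCommand /usr/local/bin/authz_cmd
     AuthorizedKeysCommandUser proxyuser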
 
  
 
== Plays ==
 
The ansible scripts include plays for each host type. Each play will first apply the [[#common | common]] and [[#basic | basic]] roles, followed by either the [[#credstore | credstore]], [[#masterportal | masterportal]] or [[#sshhost | sshhost]] role.
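With the inventory prepared and the per-role input in place, a deployment then consists of running the plays, e.g.:

 ansible-playbook -i hosts.inventory credstore.yml
 ansible-playbook -i hosts.inventory masterportal.yml

It is safe to re-execute the plays (e.g. to roll out a new release); when Master Portal and Credential Store share a single host, run the credstore play before the masterportal play.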
 
  
 
= Delegation Server =
 
The ansible scripts for deploying a Delegation Server can be found on [https://github.com/rcauth-eu/aarc-ansible-delegation-server github] as part of the rcauth-eu organization. Note that these ansible scripts will only deploy the Delegation Server and optionally a ''demo'' Online CA; if you want to deploy a proper Online CA as well, take a look at [http://ndpfsvn.nikhef.nl/viewvc/pdpsoft/trunk/eu.rcauth.pilot-ica/CA/ these] deployment scripts. These scripts are also less general than those for the [[#Master_Portal | Master Portal ]].
  
  git clone https://github.com/rcauth-eu/aarc-ansible-delegation-server.git
  
Refer to the README there for instructions on how to use the set of scripts. The remainder of this section contains background information on the layout of the scripts.
  
== Physical layout ==
  
It is recommended to dedicate a full physical machine with two separate network interfaces for the Delegation Server. One of the network interfaces, which we call '''public_interface''', will be the outward-facing interface for incoming requests. The second interface, which we call '''private_interface''', is a dedicated connection to the back-end CA. It is assumed that both the Delegation Server and the back-end CA are kept in a safe and secure environment with nothing between the Delegation Server and back-end CA other than a direct network connection. Consult the [https://www.rcauth.eu/ RCauth.eu] CP/CPS for more details on recommended safety measures.
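The private link is described through host-specific variables, along these lines ('ds' and 'ca' stand for Delegation Server and back-end CA; the addresses and interface names are examples from a typical setup, adapt them to your hardware):

 private_network: 192.168.1.0/24
 private_ds_address: 192.168.1.254
 private_ca_address: 192.168.1.1

 private_interface: enp8s0
 public_interface: enp4s0

 private_domain: canet
 private_ds_hostname: frontend
 private_ca_hostname: ca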
  
== Roles ==

=== common ===

This 'role' only loads variables common to all roles; it does not contain actual tasks.
  
=== basic ===

The basic role covers the general environment setup that is needed for all components. The tasks belonging to the basic role cover the following configurations:

* basic host setup (hostname, sshd)
* install / update required packages and repositories (epelrepo, yum)
* configure access to the machine and ntp (network, iptables, access, ntpd)
* enable/disable services
  
=== cafrontend ===

The cafrontend role takes care of deploying a basic environment for the Delegation Server in the context of it being a frontend for the back-end CA. The tasks belonging to the cafrontend role cover the following:
  
* configure the internal network (ca-network)
* configure services related to the internal network (dnsmasq, postfix, squid, ca-sshd)
* enable services
  
=== democa ===

This role sets up a fully functional but demonstration-only back-end CA. When using a 'real' HSM-based CA, this is the place to configure it (i.e. after the cafrontend role). The tasks belonging to the democa role cover the following:

* configure basic services (hosts, nameserver, yum_squid, postfix, rsyslog)
* setup a private CA for TLS on the private net (canet_ca)
* setup backup with rsync over ssh via a cronjob (rsync_sshkey, backup)
* configure myproxy-server as CA, export relevant parts to the frontend (myproxy, hostcreds, democa_tarball)
 
  
 
=== delegserver ===
 
  
The delegserver role takes care of deploying and configuring the Delegation Server and its dependencies once the rest is set up. The tasks belonging to the delegserver role cover the following:
  
* finalize setup for communication with the back-end CA (ca-sshd, ca-backup, ca_checker)
* install and configure the main services making up the Delegation Server (java, tomcat, httpd)
* install and configure mariadb (mysql). Note: postgres is not yet supported but might work.
* install and configure the Shibboleth service provider (shib)
* install and configure backups (rsyncd, mysql-backup)
* install and configure the Delegation Server tomcat servlet and cli (oa4mp-server)
* configure the web page, e.g. for CRL production (webroot)
* configure the credentials, certificate directories etc. (hostcreds-tomcat, hostcreds-apache)
* start/restart relevant services
 
== Plays ==
 
There are three main ansible plays: two for setting up the Delegation Server and one for setting up a demo CA.

The Delegation Server is set up in two stages, one part before the back-end CA is set up, one part afterwards.

* The cafrontend play, being the first part of the Delegation Server setup, will apply the [[#common | common]], [[#basic | basic]] and [[#cafrontend | cafrontend]] roles.
* The demoCA play will apply the [[#common | common]], [[#basic | basic]] (unless co-hosted with the Delegation Server, which is ok for a demo CA) and [[#democa | democa]] roles.
* The delegserver play, being the second part of the Delegation Server setup, will apply only the [[#common | common]] and [[#delegserver | delegserver]] roles, since the basic role has already been applied in the cafrontend play.

In case of a full back-end CA, you would not run the demo CA play, but would instead run an installation of the actual CA at that point in time.

Latest revision as of 08:48, 5 September 2019

Introduction

This page describes the ansible scripts we provide for setting up either a MasterPortal or Delegation Server, both part of the RCauth landscape. Details on how to use the ansible scripts can be found in the README files in the respective github repositories. These sections describe instead the overall layout and design of the scripts.

General Structure

The ansible deployment scripts are structured into what ansible calls plays. In our deployment scripts we use a single play to configure a single host of a specific kind. Ansible plays are made up of roles, which we use to group tasks that logically belong together.
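
As an illustration, a play in this layout could look like the following sketch (the file name and host group are hypothetical; the role names are the ones described in the sections below):

```yaml
# Sketch of a play: configure one kind of host by applying its roles.
# File name, host group and role list are illustrative only, not copied
# from the actual repositories.
- hosts: masterportal
  become: yes
  roles:
    - common        # load shared variables
    - basic         # generic host setup
    - masterportal  # the component itself
```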

Giving input to the deployment scripts is done through overwriting existing variables. Each ansible role contains two sets of default variables:

  • roles/x/vars/main.yml holds OS specific variables (like package names, repositories, service name, etc...). These are only used within a role (x), and they should not be overridden by variables defined inside a play (see variable precedence). However, in order to support different OS versions, we have separate OS specific variable files under roles/x/vars/, like rh7.yml or rh6.yml. Depending on the OS version that you are using for your deployment, you should define the roles/x/vars/main.yml as a symlink to one of the existing OS specific variable files.
  • roles/x/defaults/main.yml holds application specific variables (like configuration options, paths, etc...). These are only used within a role (x), and they may be overridden by variables defined inside a play.
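
Selecting the OS-specific variable file then amounts to creating one symlink, for example for the basic role on a CentOS/RH 7 host. The commands below are a self-contained sketch that recreates a scratch copy of the directory layout; in a real checkout the files already exist:

```shell
# Sketch: point roles/<role>/vars/main.yml at the OS-specific variable file
# (here rh7.yml for the 'basic' role). The mkdir/touch lines only recreate
# the layout so this example runs stand-alone.
mkdir -p roles/basic/vars
touch roles/basic/vars/rh7.yml             # shipped with the scripts
ln -sfn rh7.yml roles/basic/vars/main.yml  # main.yml -> rh7.yml
readlink roles/basic/vars/main.yml         # prints: rh7.yml
```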

The variables set in roles/x/defaults/main.yml are overridden by files in a subdirectory of the top-level config/ directory. See config/PLACEHOLDER/x_env.yml for example files for each role (x). For a specific deployment a new subdirectory needs to be created and defined as configdir in the inventory file, see hosts.inventory.PLACEHOLDER for an example.
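
For example, an inventory file derived from hosts.inventory.PLACEHOLDER could look like this sketch (the host name and site name are made up):

```ini
# Hypothetical inventory: one Master Portal host, with configdir pointing
# at config/mysite/ where the *_env.yml override files live.
[masterportal]
mp.example.org

[all:vars]
configdir=mysite
```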

Generating Secrets

Both the Master Portal and the Delegation Server setup require several different passwords for different components (database, rsync, keystore). To ease the pain of making up new passwords we provide an extra play that can be executed to generate the set of required passwords. The passwords are created with the 'openssl rand' command, with special characters omitted. The play generating new passwords runs on your local machine and creates a file called secrets_env.yml in the current directory. This file is then used as input for the Master Portal play and the Delegation Server play respectively. In case you don't want to use our password generator, make sure to fill secrets_env.yml manually with all the required passwords!

For further details on how to use this play, see the README files.
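
Per generated password, the approach is roughly equivalent to the following sketch (the exact invocation and password length used by the play may differ):

```shell
# Roughly what the secret generator does for each password (a sketch, not
# the actual play): take random bytes from 'openssl rand', base64-encode
# them, strip the special characters (+ / =) and keep a fixed length.
PASS=$(openssl rand -base64 48 | tr -d '+/=\n' | cut -c1-16)
echo "${#PASS}"   # prints: 16
```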

Master Portal

The ansible scripts for deploying a Master Portal can be found on github as part of the rcauth-eu organization. You can use these scripts to deploy a Master Portal, Credential Store and SSH host on separate hosts, or on a single host. They assume a basic CentOS 6 or 7 installation.

git clone https://github.com/rcauth-eu/aarc-ansible-master-portal.git

Refer to the README there for instructions on how to use the set of scripts. The remainder of this section contains background information on the layout of the scripts.

Roles

common

This 'role' only loads variables common to all roles; it does not contain actual tasks.

basic

The basic role tries to cover the general environment setup that is needed for all components. The tasks belonging to the basic role cover the following configurations:

  • basic host setup (hostname, selinux, sshd)
  • install / update required packages and repositories (repos, yum)
  • configure access to the machine and ntp (network, iptables, access, ntpd)
  • deploy host credentials and grid related services (fetchcrl, hostcreds, vomses)
  • enable/disable services

credstore

The credstore role takes care of deploying and configuring any Credential Store specific software. The tasks belonging to the credstore role cover the following:

  • install and configure MyProxy Server as a Credential Store (myproxy-credstore)
  • install the relevant IGTF CA distributions (deploy-igtf-distrib)
  • add myproxy_purge for expired/revoked certificates (purger)
  • start/restart myproxy-server

masterportal

The masterportal role takes care of deploying and configuring the Master Portal, VO-portal, SSH-portal and their dependencies. The tasks belonging to the masterportal role cover the following:

  • install and configure java, tomcat, apache-httpd (java, tomcat, httpd)
  • install and configure mysql or mariadb (mysql): mysql is used on RH6, mariadb on RH7. Note that postgres is not yet supported but might work.
  • install mysql backup script and cronjob (mysql-backup)
  • install keyutil for creating key- and truststores (keyutil)
  • install and configure Master Portal Client and Server parts (oa4mp-client, oa4mp-server)
  • install and configure SSH portal for uploading SSH keys (optional, can be disabled by undefining sshkey_portal)
  • install and configure VO Portal for testing proxy retrieval (optional, can be disabled by undefining vo_portal)
  • start/restart relevant services tomcat, httpd and mariadb/mysql

sshhost

This role takes care of installing the host that will run the ssh server for getting proxy certificates on the command line.

  • create the proxy user account (proxyuser)
  • configure the hostcert/key for the proxy user (proxycreds)
  • copy the AuthorizedKeysCommand (authz_cmd) and myproxy_cmd scripts and their dependencies (scripts)
  • update the sshd config file and restart sshd (sshd)
  • update the access.conf file (access)
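
As an illustration, the sshd part amounts to a configuration fragment along these lines (the account name and script paths are hypothetical; the role installs its own scripts and settings):

```
# Sketch of an sshd_config fragment for the proxy user: look up authorized
# keys dynamically via the AuthorizedKeysCommand and force the
# proxy-retrieval command on login. All values are examples only.
Match User proxyuser
    AuthorizedKeysCommand /usr/local/bin/authz_cmd
    AuthorizedKeysCommandUser nobody
    ForceCommand /usr/local/bin/myproxy_cmd
```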

Plays

The ansible scripts include plays for each host type.
Each play will first apply the common and basic roles followed by either the credstore, masterportal or sshhost role.

Delegation Server

The ansible scripts for deploying a Delegation Server can be found on github as part of the rcauth-eu organization. Note that these ansible scripts will only deploy the Delegation Server and optionally a demo Online CA. If you want to deploy a proper Online CA as well, you can take a look at these deployment scripts. As a result, these scripts are less general than those for the Master Portal.

git clone https://github.com/rcauth-eu/aarc-delegation-server.git

Refer to the README there for instructions on how to use the set of scripts. The remainder of this section contains background information on the layout of the scripts.

Physical layout

It is recommended to dedicate a full physical machine with two separate network interfaces to the Delegation Server. One of the network interfaces, which we call the public_interface, is the outward-facing interface for incoming requests. The second interface, which we call the private_interface, is a dedicated connection to the back-end CA. It is assumed that both Delegation Server and back-end CA are kept in a safe and secure environment, with nothing between the Delegation Server and back-end CA other than a direct network connection. Consult the RCauth.eu CP/CPS for more details on recommended safety measures.
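
On CentOS, the private_interface could be configured along these lines (the device name and addresses are invented for the sketch; a small point-to-point subnet suffices for the direct link):

```ini
# Hypothetical /etc/sysconfig/network-scripts/ifcfg-eth1 on the
# Delegation Server: dedicated /30 link towards the back-end CA.
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.1.1
NETMASK=255.255.255.252
ONBOOT=yes
```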

Roles

common

This 'role' only loads variables common to all roles; it does not contain actual tasks.

basic

The basic role tries to cover the general environment setup that is needed for all components. The tasks belonging to the basic role cover the following configurations:

  • basic host setup (hostname, sshd)
  • install / update required packages and repositories (epelrepo, yum)
  • configure access to the machine and ntp (network, iptables, access, ntpd)
  • enable/disable services

cafrontend

The cafrontend role deploys the basic environment that the Delegation Server needs in its function as a frontend to the back-end CA. The tasks belonging to the cafrontend role cover the following:

  • configure the internal network (ca-network)
  • configure services related to the internal network (dnsmasq, postfix, squid, ca-sshd)
  • enable services

democa

This role sets up a fully-functional but demonstration-only back-end CA. When using a 'real' HSM-based CA, this is the place to configure it (i.e. after the cafrontend role). The tasks belonging to the democa role cover the following:

  • configure basic services (hosts, nameserver, yum_squid, postfix, rsyslog)
  • setup private CA for TLS on private net (canet_ca)
  • setup backup with rsync over ssh via cronjob (rsync_sshkey, backup)
  • configure myproxy-server as CA, export relevant parts to frontend (myproxy, hostcreds, democa_tarball)
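
The backup tasks boil down to something like the following /etc/cron.d sketch (the host alias, key name and paths are examples, not the actual values used by the role):

```
# Hypothetical cron entry on the demo CA: nightly rsync of CA state to the
# frontend over ssh, using the dedicated key set up by the rsync_sshkey task.
30 2 * * * root rsync -a -e "ssh -i /root/.ssh/rsync_sshkey" /var/lib/myproxy/ frontend:/srv/backup/democa/
```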

delegserver

The delegserver role takes care of deploying and configuring the Delegation Server and its dependencies once the rest is set up. The tasks belonging to the delegserver role cover the following:

  • finalize setup for communication with back-end CA (ca-sshd, ca-backup, ca_checker)
  • install and configure main services making up the Delegation Server (java, tomcat, httpd)
  • install and configure mariadb (mysql). Note postgres is not yet supported but might work.
  • install and configure Shibboleth service provider (shib)
  • install and configure backups (rsyncd, mysql-backup)
  • install and configure the Delegation Server tomcat servlet and cli (oa4mp-server)
  • configure the web page, e.g. for CRL production (webroot)
  • configure the credentials, certificate directories etc. (hostcreds-tomcat, hostcreds-apache)
  • start/restart relevant services

Plays

There are three main ansible scripts: two for setting up the delegation server and one for setting up a demo CA. The delegation server is set up in two stages: one part before the back-end CA is set up, and one part afterwards.

  • The cafrontend play, being the first part of the Delegation Server setup, will apply the common, basic and cafrontend roles.
  • The demoCA play will apply the common, basic (unless co-hosted with the Delegation Server, which is ok for a demo CA) and democa roles.
  • The delegserver play, being the second part of the Delegation Server setup, applies only the common and delegserver roles, since the basic role has already been applied by the cafrontend play.

In case of a full back-end CA, you would not run the second playbook, but instead run an installation of the actual CA at that point in time.