RCauth Delegation Server & MasterPortal - Ansible scripts

Introduction

After experimenting with Jenkins as a deployment method for the CILogon Pilot we quickly concluded that Jenkins was not meant for such deployments. Although it can spin up machines and run arbitrary configuration scripts on them, it lacks the configuration control and templating capabilities we were looking for. We therefore decided to move away from it and use one of the more widespread tools for deployment and configuration management.

Our choice fell on Ansible, mainly for the following reasons:

  • support for configuration templating
  • easy to use modules to interact with systems
  • no client software required (runs over SSH)
  • good documentation

General Structure

The Ansible deployment scripts are structured into what Ansible calls plays. In our deployment scripts a single play configures a single host of a specific kind. Ansible plays are made up of roles; we use roles to group tasks that logically belong together.

Input is given to the deployment scripts by overriding existing variables. Each Ansible role contains two sets of variables:

  • roles/x/vars/main.yml holds OS-specific variables (package names, repositories, service names, etc.). These are only used within a role (x), and they cannot be overridden by variables defined inside a play (see variable precedence). However, in order to support different OS versions, there are separate OS-specific variable files under roles/x/vars/, such as rh7.yml and rh6.yml. Depending on the OS version you are deploying on, make roles/x/vars/main.yml a symlink to one of these OS-specific variable files.
  • roles/x/defaults/main.yml holds application-specific variables (configuration options, paths, etc.). These are only used within a role (x), and they can be overridden by variables defined inside a play. For each defined role 'x' you will find a corresponding variables file named 'x_env.yml' in the top-level directory; it is used together with the 'x' role inside a play. Use this file to override the defaults! Note that not ALL default variables are meant to be overridden (paths, for example, can usually stay the same). The 'x_env.yml' template provided for each role tells you which variables you SHOULD override (such as passwords); see the sketch after this list.
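
As an illustration, an override in such an 'x_env.yml' file might look like the lines below. This is only a sketch: the variable names are hypothetical and merely show the pattern, so take the real names from the template of the role you are deploying.

# masterportal_env.yml (sketch; variable names are illustrative only)
masterportal_hostname: masterportal.example.org
mysql_password: choose-a-strong-password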

Master Portal

The ansible scripts for deploying a Master Portal can be found under Nikhef subversion, here. You can use these scripts to deploy a Master Portal and Credential Store on two separate hosts, or on a single host. Before you begin executing plays, decide whether your deployment will use two separate hosts for the Master Portal and the Credential Store, or a single host. Fill in your machine hostname(s) in the hosts.inventory file accordingly. The hostname(s) set in the inventory file will be set on the target machine(s).

These scripts expect you to have a basic CentOS 6 or 7 installation ready. Since the two OS versions differ slightly (for example in repository addresses), there is a separate set of variables for each OS version. Once you have decided which OS version to use, check the symlinks called roles/*/vars/main.yml. These symlinks can point to either rh6.yml or rh7.yml in the same directory; set them according to the OS of your choice.
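
For example, selecting the CentOS 7 variable set for every role could be done as follows. This is a sketch, run from the top-level directory of the scripts, assuming the OS-specific files live next to main.yml as described in the General Structure section:

# Point every role's OS-specific variable symlink at the CentOS 7 file.
for f in roles/*/vars/main.yml; do
    ln -sf rh7.yml "$f"
done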

Roles

basic

The basic role tries to cover the general environment setup that is needed for both Master Portal and Credential Store. The tasks belonging to the basic role cover the following configurations:

  • install / update required packages and repositories
  • configure access to the machine (iptables, ssh)
  • deploy host credentials (see input 1.)
  • enable required services, disable not required services

Before applying the basic role make sure to provide the following inputs for the ansible scripts:

  1. You need to have host certificates ready to apply this role. Place your PEM formatted certificate and key file under the 'roles/basic/files/' directory. Name these credentials 'hostname'.crt and 'hostname'.key, where 'hostname' is the same name that you provided in the hosts.inventory file of your play (see the sketch after this list). It is assumed that your host certificates are issued by 'TERENA eScience SSL CA 3'. If this is not the case, you will have to make some modifications to these scripts to install the right trust root!
  2. Overwrite the recommended default variables from basic_env.yml to match your environment.
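
A sketch of step 1, assuming the inventory hostname is mp.example.org (hypothetical) and the PEM credentials are in the current working directory:

# Place the host certificate and key where the basic role expects them;
# the file names must match the hostname used in hosts.inventory.
cp hostcert.pem roles/basic/files/mp.example.org.crt
cp hostkey.pem  roles/basic/files/mp.example.org.key
chmod 600 roles/basic/files/mp.example.org.key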

credstore

The credstore role takes care of deploying and configuring any Credential Store specific software. The tasks belonging to the credstore role cover the following:

  • install and configure MyProxy Server as a Credential Store
  • deploy trusted online CA (see input 1.)
  • configure iptables rules for Credential Store (open myproxy port)
  • add additional services (myproxy_purge for expired/revoked certificates)
  • start/restart relevant services (myproxy-server)

Before applying the credstore role make sure to provide the following inputs for the ansible scripts:

  1. MyProxy Server only stores credentials that it can verify, therefore it is very important to have the Online CA (which will issue the user certificates) as a trusted certificate on the Credential Store machine. Make a tarball of the Online CA certificate in PEM format, together with its subject_hash link and signing_policy. Do not forget the signing policy, since MyProxy will not work without it! The tarball should contain these files at the top level, without any directory structure, and it should be placed as onlineca.tar.gz under the 'roles/credstore/files/' directory (see the sketch after this list).
  2. Overwrite the recommended default variables from credstore_env.yml to match your environment.
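
A sketch of assembling the tarball for step 1. The input file names (onlineca.pem, onlineca.signing_policy) are assumptions for illustration; use whatever names your Online CA setup produced.

# Assemble onlineca.tar.gz: the CA certificate in PEM format, its
# subject_hash link and the signing_policy, all at the tarball's top level.
mkdir onlineca-files
cp onlineca.pem onlineca-files/
cd onlineca-files
hash=$(openssl x509 -noout -subject_hash -in onlineca.pem)
ln -s onlineca.pem "${hash}.0"
cp ../onlineca.signing_policy "${hash}.signing_policy"
cd ..
tar czf roles/credstore/files/onlineca.tar.gz -C onlineca-files .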

masterportal

The masterportal role takes care of deploying and configuring the Master Portal and its dependencies. The tasks belonging to the masterportal role cover the following:

  • install and configure dependencies : java, tomcat, httpd
  • add database support through either mysql or mariadb (postgres is not yet supported). Note: if you're on RH6 use mysql; if you're on RH7 stick to mariadb.
  • install and configure Master Portal Server and Master Portal Client
  • install and configure VO Portal (optional). You can enable/disable the execution of this task in roles/masterportal/tasks/main.yml
  • configure iptables rules for Master Portal (open https port)
  • start/restart relevant services (tomcat, httpd)

Before applying the masterportal role make sure to provide the following inputs for the ansible scripts:

  1. Add the compiled Master Portal wars to the deployment. If you don't have a compiled Master Portal you can build it yourself. Add the compiled war files (mp-oa2-client.war and mp-oa2-server.war) into the 'roles/masterportal/files' directory (see the sketch after this list).
  2. Add the compiled Master Portal CLI to the deployment. You will need this CLI tool to approve and manage your Master Portal clients! If you don't have a compiled CLI you can build it yourself. Place the compiled jar (oa2-cli.jar) into the 'roles/masterportal/files' directory.
  3. In case you are also deploying the VO Portal, make sure to also put the compiled VO Portal war file (vo-portal.war) into the 'roles/masterportal/files' directory.
  4. Overwrite the recommended default variables from masterportal_env.yml to match your environment.
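
A sketch of steps 1-3, assuming the compiled artifacts are in the current working directory:

# Copy the compiled artifacts to where the masterportal role picks them up.
cp mp-oa2-client.war mp-oa2-server.war roles/masterportal/files/
cp oa2-cli.jar roles/masterportal/files/
cp vo-portal.war roles/masterportal/files/    # only when deploying the VO Portal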

Plays

These ansible scripts include two plays:

  • credstore.yml to deploy a Credential Store. This will apply the basic and the credstore roles.
  • masterportal.yml to deploy a Master Portal. This will apply the basic and the masterportal roles.

You can execute these plays, after you've given the required deployment input for the roles, by calling one of:

ansible-playbook -i hosts.inventory credstore.yml
ansible-playbook -i hosts.inventory masterportal.yml

It is safe to re-execute these plays multiple times; you can therefore also use these scripts to update a Master Portal to a new release!

Note! If you're doing a single host deployment (Master Portal and Credential Store on one host) make sure to always execute the credstore play before the masterportal play. In case you re-execute credstore, make sure to re-execute the masterportal play as well. If you fail to do so you will end up with a broken firewall setup!

Delegation Server

The ansible scripts for deploying a Delegation Server can be found under Nikhef subversion, here. Note that these ansible scripts will only deploy the Delegation Server and not the Online CA itself. If you want to deploy an Online CA as well, you should take a look at these deployment scripts. The Online CA is expected to be up and running before the deployment of the Delegation Server. See the integration section for more details.

These scripts are not as general as the Master Portal scripts, because they contain more Nikhef-specific infrastructure configuration. This guide will tell you where these specific configurations are, so you know what to tweak in case you are deploying a Delegation Server somewhere else. See the Nikhef specific section for this.

As a first step, make sure to configure your own inventory file with the hostname of the target machine on which you want a Delegation Server deployed. Look into the nikhef inventory file for inspiration.
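
A sketch of a minimal inventory file. The file name and the group name below are illustrative assumptions; compare with the nikhef inventory file to see which group the plays actually target.

# mysite.inventory (hypothetical; model it on the 'nikhef' inventory file)
[delegserver]
ds.example.org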

Integrating with Online CA

It is recommended to dedicate a full physical machine to the Delegation Server, with two separate network interfaces. One of the network interfaces, which we call public_interface, is the outward-facing interface for incoming requests. The second interface, which we call private_interface, is a dedicated connection to the Online CA. It is assumed that both the Delegation Server and the Online CA are kept in a safe and secure environment, with nothing between the Delegation Server and the Online CA other than a simple network connection. Consult the RCauth.eu policy for more details on recommended safety measures.

After configuring your inventory file with the right 'hostname', you will have to configure some additional host-specific variables. Create a new file under 'host_vars/' with the name 'hostname'. Enter the following host-specific variables into the created file:

private_network: 192.168.1.0/24
private_ds_address: 192.168.1.254
private_ca_address: 192.168.1.1

private_interface: enp8s0
public_interface: enp4s0

private_domain: canet
private_ds_hostname: frontend
private_ca_hostname: ca

Tweak the variables according to your own setup environment. The 'ds' and 'ca' are abbreviations for 'Delegation Server' and 'Online CA' respectively.

The communication between the Delegation Server and the Online CA through the private_network is secured via host certificates. If you followed the suggested Online CA setup you will see that the Online CA creates a dedicated root certificate and two host certificates: one for itself and one for the Delegation Server. The Online CA setup scripts will package the host certificate belonging to the Delegation Server and copy it onto the Delegation Server machine as /root/frontend.canet.tgz.

If you're setting up the Delegation Server for the first time (not updating it), check that /root/frontend.canet.tgz exists and contains the host certificates created by the Online CA. The ansible scripts will convert these credentials from PEM into the PKCS12 format understood by the Delegation Server. They will also take care of removing the original /root/frontend.canet.tgz, in order to avoid leaving unprotected duplicate credentials lying around.
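
To inspect the packaged credentials before running the play, and to illustrate the kind of conversion the scripts perform, a sketch follows. The PEM file names and the PKCS12 output name are assumptions; the actual names and locations are determined by the scripts.

# List the contents of the tarball delivered by the Online CA setup.
tar tzf /root/frontend.canet.tgz

# Roughly the conversion the ansible scripts perform (illustrative only):
# bundle the PEM certificate and key into a single PKCS12 file.
openssl pkcs12 -export -in hostcert.pem -inkey hostkey.pem \
    -out hostcred.p12 -name "frontend.canet"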

If you're updating the Delegation Server with some new configuration or version, without any change to the private communication link, you can simply ignore this file. In its absence the ansible scripts skip this part of the setup and assume that the credentials have already been converted to PKCS12.

Nikhef specific configurations

Roles

cafrontend

delegserver

Plays