Jenkins Setup

Revision as of 15:46, 20 January 2015

The installation guide for Jenkins can be found at https://wiki.jenkins-ci.org/display/JENKINS/Installing+Jenkins+on+Red+Hat+distributions. We deployed Jenkins on a VM running CentOS 6 x64. This wiki page describes the customizations made to the default setup, including OpenStack integration and Shibboleth authentication.

OpenStack Integration

OpenStack can be used to provision Jenkins slaves on demand. A Jenkins slave is an executor node that runs a Jenkins agent and gets jobs from the Jenkins master.

To set up this integration a Jenkins plugin called JClouds was used. Installing Jenkins plugins is a fairly straightforward job that can be done from the web interface through Manage Jenkins -> Manage Plugins. The JClouds plugin is designed to work with multiple cloud platforms, so it has to be configured for OpenStack specifically. The plugin configuration can be found under Manage Jenkins -> Configure System, or alternatively can be tweaked in /var/lib/jenkins/config.xml. The parameters to configure are:

Profile          : <profile name>
Provider Name    : openstack-nova
End Point URL    : http://<ip>:5000/v2.0/
Identity         : tenant:user
Credentials      : <password>
RSA Private Key  : <private key>
Public Key       : <public key>

The endpoint should point to the machine where you have the openstack-nova-api set up. In case of our all-in-one type of setup, there is not much to choose from. The identity and credentials are needed for authentication at OpenStack. If you want to keep the Jenkins slaves separate from other projects running on OpenStack, you can create a new tenant/user pair in OpenStack as follows:

source keystonerc_admin
keystone user-create --name=jenkins --pass=<pass>
keystone tenant-create --name=jenkins-testbed --description="Jenkins Testbed"
keystone user-role-add --user=jenkins --role=_member_ --tenant=jenkins-testbed

Next you should create a key pair that Jenkins will use to allow ssh access into the slaves.
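
A key pair can be generated with ssh-keygen; the file name and the empty passphrase below are only an example, adjust them to your setup:

# generate an RSA key pair that Jenkins will use to ssh into the slaves
ssh-keygen -t rsa -b 2048 -f ~/.ssh/jenkins-key -N ""

The private key (jenkins-key) goes into the 'RSA Private Key' field of the plugin configuration and the public key (jenkins-key.pub) into the 'Public Key' field.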

Note! The public key of the created key pair should be uploaded into OpenStack, so that it can be injected into the VM's authorized_keys file. Do this using:

nova keypair-add --pub-key <pub key> jenkins-key

You can test your setup with the Test Connection button at the bottom of the page.

Template Configuration

In order to be able to start slaves, you first need to define Cloud Instance Templates. The parameters to configure are:

Name                    : <name>
Labels                  : <label1> <label2>
Hardware Options 
   Specify Hardware ID  : RegionOne/2 (m1.small)
Image/OS Options
   Specify Image ID     : RegionOne/<image ID>
General Options  
   Admin Username       : <admin username>
   Networks             : <network ID>
   Security Groups      : <secgroup>
Open Stack Options
   Key Pair Name        : jenkins-key

After naming the template you should also give it a couple of labels. Labels work in Jenkins like tags. It is useful to group templates of the same kind, for example with 'slave' for generic templates and with more specialized keywords for other templates. These labels can later be used in job configurations, and they are the main way of restricting the type of node on which a job is allowed to run.

Next comes a collection of OpenStack-specific options. You should choose one of the provided hardware types (m1.tiny, m1.small...) and image IDs. You can find image IDs by executing glance image-list on your OpenStack node. The 'Admin Username' expects the username with sudo capabilities on the image that you are about to boot. This varies from image to image (for example, the CentOS 6 cloud image has 'root', while the CentOS 7 image has 'centos'). This is required because Jenkins will attempt to ssh into a newly created VM to set up its environment. The network ID has to be the ID of an already existing network within OpenStack, and can be viewed with nova network-list. (Note! This network should be associated with the same tenant that you created for the jenkins user.) You can provide specific, already existing security groups to use, but we observed that Jenkins tends to create its own security group in OpenStack with a single rule that opens up port 22. Last, you have to provide the name of the key pair that you installed in the previous step while setting up the connection. These options should be sufficient to start Jenkins slaves.
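
For reference, the IDs and flavors mentioned above can be listed on the OpenStack node (this assumes the keystonerc_admin file from the earlier step; use a corresponding rc file for the jenkins tenant if you created one):

source keystonerc_admin
nova flavor-list      # hardware IDs/flavors for 'Specify Hardware ID'
glance image-list     # image IDs for 'Specify Image ID'
nova network-list     # network IDs for 'Networks'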

Other useful options that can be tweaked are:

Init Script         : executes a series of bash commands after the VM is created (see the example below)
Allow Sudo          : Jenkins creates a UNIX user (named 'jenkins' by default) to run its agent. This enables sudo on it. Useful when you want to install something through the Init Script.
Install Private Key : copy the private key provided in the setup, such that you can ssh from one slave into another.
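
As an illustration of the Init Script option (the package names are only an example, and this assumes a yum-based image together with 'Allow Sudo'):

#!/bin/bash
# example init script: install a JDK and git so the slave can run typical builds
sudo yum install -y java-1.7.0-openjdk git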

Using the plugin

After you install the plugin two separate options will appear in every job configuration under Build Environment: JClouds Instance Creation and JClouds Single-Use Slave.

JClouds Instance Creation allows you to create a specified number of VMs from a template. Note that this will not create a VM on which to run the job; rather, it creates the VMs as part of the job (this means that you should have at least one matching executor where this job can run). The IPs of the newly created VMs are then listed in the environment variable called JCLOUDS_IPS. You can use this to establish connections to the VMs from the job.
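
A build step (for example an 'Execute shell' step) could then use this variable; the comma separator below is an assumption, so check the actual contents of JCLOUDS_IPS in your job's environment:

# loop over the IPs of the freshly created VMs
for ip in $(echo "$JCLOUDS_IPS" | tr ',' ' '); do
    # replace 'root' with the admin user of your image
    ssh -o StrictHostKeyChecking=no root@$ip hostname
done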

Note! By default these VMs will be terminated automatically once the job finishes. There is, however, an option called 'Stop on Terminate' which just stops them instead of terminating. This option is implemented such that it issues a 'suspend' command to OpenStack, which currently fails ('VM_MISSING_PV_DRIVERS'). As a result the VM is neither terminated nor suspended, so it keeps running.

JClouds Single-Use Slave creates a single-use VM matching the node restrictions of the job, runs the job on the VM and terminates it.

Note! This option seems to hang the job occasionally. The VM is created and the slave is installed on it as expected (the logs confirm this), but the state is not refreshed, so the job tends to stay in the queue until a manual status refresh of the slaves. Moreover, after job completion the single-use slave is marked for deletion, but it is never actually removed until manual intervention (delete the node directly from OpenStack).

Authentication through the Nikhef SSO

The motivation behind this setup was to have Jenkins set up as a Nikhef service, only available to known members. Jenkins can be set up to run in a container, or can be run on its own as a daemon (https://wiki.jenkins-ci.org/display/JENKINS/Containers). In our setup Jenkins runs as a daemon with apache in front of it. Apache serves as a reverse proxy which only lets requests through if they have been authenticated by the Nikhef SSO. The authentication relies on Shibboleth (https://wiki.shibboleth.net/confluence/display/SHIB2/UnderstandingShibboleth).

Apache reverse proxy setup

The OS that was running Jenkins came with apache 2.2, but the Jenkins reverse proxy required an option (namely AllowEncodedSlashes NoDecode) which is only available in later releases of apache. You can read about this phenomenon and the requirements of Jenkins at https://wiki.jenkins-ci.org/display/JENKINS/Jenkins+says+my+reverse+proxy+setup+is+broken and https://wiki.jenkins-ci.org/display/JENKINS/Running+Jenkins+behind+Apache. We decided to switch to apache 2.4:

wget -O /etc/yum.repos.d/epel-httpd24.repo  http://repos.fedorapeople.org/repos/jkaluza/httpd24/epel-httpd24.repo
yum install httpd24 httpd24-mod_ssl httpd24-httpd-devel

Note! This will install an additional apache server (2.4) next to the already existing one in CentOS 6 (2.2). Make sure to shut down the previous one before firing up this one. The newly installed apache (2.4) is rooted in /opt/rh/httpd24/root with all of its configuration, so the apache config files will also be in there, not in /etc/httpd. The development package is needed to build the Shibboleth RPM from source, as described below.
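
On CentOS 6 the old apache can be stopped and disabled with the usual service commands (the service names match the ones used elsewhere on this page):

sudo service httpd stop
sudo chkconfig httpd off
sudo chkconfig httpd24-httpd on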

Next you need to configure apache to act as a reverse proxy. Set up a virtual host on port 8008 with ProxyPass and ProxyPassReverse pointing to the AJP (http://en.wikipedia.org/wiki/Apache_JServ_Protocol) endpoint of Jenkins listening on port 8009. Create a new configuration file /opt/rh/httpd24/root/etc/httpd/conf.d/jenkinsproxy.conf containing:

<VirtualHost *:8008>

  ServerName <hostname>
  UseCanonicalName On
  UseCanonicalPhysicalPort On

  SSLEngine On
  SSLCertificateFile       <ssl.crt>
  SSLCertificateKeyFile    <ssl.key>
  #only if applicable
  SSLCertificateChainFile  <ca.crt>

  ProxyRequests     Off
  ProxyPass         /  ajp://127.0.0.1:8009/ nocanon
  ProxyPassReverse  /  ajp://127.0.0.1:8009/
  AllowEncodedSlashes NoDecode

</VirtualHost>

You should fill in the server hostname and the location of the key and certificate files you are planning to use for HTTPS. In our setup we run Jenkins without a prefix, so the server can be accessed at /. If you run Jenkins with a prefix, make sure to include it after the / in both ProxyPass and ProxyPassReverse.
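
As an example only: if Jenkins were run with the prefix /jenkins (typically set through JENKINS_ARGS="--prefix=/jenkins" in /etc/sysconfig/jenkins), the proxy directives would become:

  ProxyPass         /jenkins  ajp://127.0.0.1:8009/jenkins nocanon
  ProxyPassReverse  /jenkins  ajp://127.0.0.1:8009/jenkins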

Make sure that apache is listening on port 8008 by editing /opt/rh/httpd24/root/etc/httpd/conf/httpd.conf, and add a rule for port 8008 to your iptables (both changes are sketched below). After starting apache you should be able to access the Jenkins Dashboard via the apache reverse proxy.
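
A sketch of both changes (the firewall part assumes the stock CentOS 6 iptables service):

# in /opt/rh/httpd24/root/etc/httpd/conf/httpd.conf
Listen 8008

# open port 8008 and persist the rule
sudo iptables -I INPUT -p tcp --dport 8008 -j ACCEPT
sudo service iptables save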

sudo service httpd24-httpd start

Note! In this setup the communication between apache and Jenkins goes over AJP unencrypted, which is fine as long as Jenkins only listens on localhost. Additional steps can be taken to make this connection use SSL.

Note! Make sure to close all listening ports (HTTP and HTTPS) on Jenkins, and only leave the AJP port (8009) open for incoming connections on localhost. If you fail to do so, the reverse proxy can be bypassed! Change the following parameters in /etc/sysconfig/jenkins:

JENKINS_PORT="-1"
JENKINS_LISTEN_ADDRESS=""

#JENKINS_HTTPS_PORT=""
#JENKINS_HTTPS_LISTEN_ADDRESS="0.0.0.0"

JENKINS_AJP_PORT="8009"
JENKINS_AJP_LISTEN_ADDRESS="127.0.0.1"
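
After changing these parameters, restart Jenkins so that only the AJP port remains open:

sudo service jenkins restart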

Shibboleth setup