Jenkins Setup
The installation guide for Jenkins can be found at [1]. We deployed Jenkins on a VM running CentOS 6 x64. This wiki page describes the customizations made to the default setup, including OpenStack integration and Shibboleth authentication.
OpenStack Integration
OpenStack can be used to provision Jenkins slaves on demand. A Jenkins slave is an executor node that runs a Jenkins agent and gets jobs from the Jenkins master.
To set up this integration a Jenkins plugin was used, called JClouds. Installing Jenkins plugins is a fairly straightforward job that can be done from the web interface through Manage Jenkins -> Manage Plugins. The JClouds plugin is designed to work with multiple cloud platforms, so it has to be configured for OpenStack specifically. The plugin configuration can be found under Manage Jenkins -> Configure System, or alternatively tweaked directly in /var/lib/jenkins/config.xml. The parameters to configure are:
Cloud Profile   : <profile name>
Provider Name   : openstack-nova
End Point URL   : http://<ip>:5000/v2.0/
Identity        : tenant:user
Credentials     : <password>
RSA Private Key : <private key>
Public Key      : <public key>
The endpoint should point to the machine where you have the openstack-nova-api set up. In the case of our all-in-one setup, there is not much to choose from. The identity and credentials are needed for authentication at OpenStack. If you want to keep Jenkins slaves separate from other projects running on OpenStack, you can create a new tenant/user pair in OpenStack as follows:
source keystonerc_admin
keystone user-create --name=jenkins --pass=<pass>
keystone tenant-create --name=jenkins-testbed --description="Jenkins Testbed"
keystone user-role-add --user=jenkins --role=_member_ --tenant=jenkins-testbed
Next you should create a key pair that Jenkins will use to allow ssh access into the slaves.
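For example, the key pair could be generated with ssh-keygen (a sketch; the file name 'jenkins-key' is our choice here, matching the upload command below):

ssh-keygen -t rsa -b 2048 -N "" -f jenkins-key

This produces the private key in jenkins-key and the public key in jenkins-key.pub.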
Note! The public key of the created key pair should be uploaded into OpenStack, so that it can be injected into the VM's authorized_keys file. Do this using:
nova keypair-add --pub-key <pub key> jenkins-key
You can test your setup with the Test Connection button at the bottom of the page.
Template Configuration
In order to be able to start slaves, you first need to define Cloud Instance Templates. The parameters to configure are:
Name : <name>
Labels : <label1> <label2>
Hardware Options
    Specify Hardware ID : RegionOne/2 (m1.small)
Image/OS Options
    Specify Image ID : RegionOne/<image ID>
General Options
    Admin Username : <admin username>
    Networks : <network ID>
    Security Groups : <secgroup>
Open Stack Options
    Key Pair Name : jenkins-key
After naming the template you should also give it a couple of labels. Labels work in Jenkins like tags: they are useful for grouping templates of the same kind, for example with a generic 'slave' label for general-purpose templates and more specialized keywords for others. These labels can later be used in job configurations, and they are the main way of restricting the type of node on which jobs are allowed to run.
Next comes a collection of OpenStack-specific options. You should choose one of the provided hardware types (m1.tiny, m1.small, ...) and image IDs. You can find image IDs by executing glance image-list on your OpenStack node. 'Admin Username' expects a username with sudo capabilities on the image that you are about to boot; this varies from image to image (for example, the CentOS 6 cloud image has 'root', while the CentOS 7 image has 'centos'). It is required because Jenkins will attempt to ssh into a newly created VM to set up its environment. The network ID has to be the ID of an already existing network within OpenStack, and can be viewed with nova network-list. (Note! This network should be associated with the same tenant that you created for the jenkins user.) You can provide specific, already existing security groups to use, but we observed that Jenkins tends to create its own security group in OpenStack with a single rule that opens up port 22. Last, you have to provide the name of the key pair that you installed in the previous step, while setting up the connection. These options should be sufficient to start Jenkins slaves.
Other useful options that can be tweaked are:
Init Script : executes a series of bash commands after the VM is created (see the sketch below)
Allow Sudo : Jenkins creates a UNIX user (named 'jenkins' by default) to run its agent; this enables sudo on it. Useful when you want to install something through the Init Script.
Install Private Key : copies the private key provided in the setup, so that you can ssh from one slave into another.
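As an illustration, an Init Script is just a bash snippet run on the freshly booted VM; the packages below are placeholders, not something the plugin requires:

#!/bin/bash
# a sketch of an Init Script: prepare the new slave before jobs run on it
# ('Allow Sudo' is needed if the admin user is not root)
sudo yum -y install git wget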
Using the plugin
After you install the plugin, two separate options will appear in every job configuration under Build Environment: JClouds Instance Creation and JClouds Single-Use Slave.
JClouds Instance Creation allows you to create a specified number of VMs from a template. Note that this will not create a VM on which to run the job; rather, it creates the VMs as part of the job (this means that you should have at least one matching executor where this job can run). The IPs of the newly created VMs are then listed in the environment variable called JCLOUDS_IPS. You can use these to establish connections to the VMs from the job.
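For instance, a shell build step could pick up the addresses like this (a sketch; we assume JCLOUDS_IPS is a comma-separated list, so check the format your plugin version actually produces):

#!/bin/bash
# iterate over the VMs created by JClouds Instance Creation;
# <private key> and <admin username> are the values from the template setup
for ip in ${JCLOUDS_IPS//,/ }; do
    ssh -i <private key> -o StrictHostKeyChecking=no <admin username>@"$ip" uptime
done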
Note! By default these VMs will be terminated automatically once the job finishes. There is an option, however, called 'Stop on Terminate' which just stops them instead of terminating. This option is implemented such that it issues a 'suspend' command to OpenStack, which currently fails ('VM_MISSING_PV_DRIVERS'). As a result the VM is neither terminated nor suspended, so it keeps running.
JClouds Single-Use Slave creates a single-use VM matching the node restrictions of the job, runs the job on the VM and terminates it.
Note! This option seems to hang the job occasionally. It seems that the VM is created and the slave is installed on it as expected (the logs confirm this), but the state is not refreshed, so the job tends to stay in the queue until a manual status refresh of the slaves. Moreover, after job completion the single-use slave is marked for deletion, but it is never actually removed without manual intervention (deleting the node directly from OpenStack).
Authentication through the Nikhef SSO
The motivation behind this setup was to have Jenkins set up as a Nikhef service, only available to known members. Jenkins can be set up to run in a container, or can run on its own as a daemon [2]. In our setup Jenkins is running as a daemon with apache in front of it. Apache serves as a reverse proxy which only lets requests through if they have been authenticated by the Nikhef SSO. The authentication relies on Shibboleth.
Apache reverse proxy setup
CentOS 6 came with apache 2.2, but the Jenkins reverse proxy required an option (namely AllowEncodedSlashes NoDecode) which was only available in later releases of apache. You can read about this phenomenon and the requirements of Jenkins here and here. We decided to switch to apache 2.4:
wget -O /etc/yum.repos.d/epel-httpd24.repo http://repos.fedorapeople.org/repos/jkaluza/httpd24/epel-httpd24.repo
yum install httpd24 httpd24-mod_ssl httpd24-httpd-devel
Note! This will install an additional apache server (2.4) next to the already existing one in CentOS 6 (2.2). Make sure to shut down the previous one before firing up this one (see the commands below). The newly installed apache (2.4) will be rooted in /opt/rh/httpd24/root with all of its configuration, so the apache config files will also be in there, not in /etc/httpd. The development package is needed to build the shibboleth RPM from source, as described here.
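On CentOS 6 the stock apache can be stopped and disabled like this:

sudo service httpd stop
sudo chkconfig httpd off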
Next you need to configure apache to act as a reverse proxy. Set up a virtual host on port 8008 with ProxyPass and ProxyPassReverse pointing to the AJP endpoint of Jenkins listening on port 8009. Create a new configuration file /opt/rh/httpd24/root/etc/httpd/conf.d/jenkinsproxy.conf containing:
<VirtualHost *:8008>
    ServerName <hostname>
    UseCanonicalName On
    UseCanonicalPhysicalPort On

    SSLEngine On
    SSLCertificateFile <ssl.crt>
    SSLCertificateKeyFile <ssl.key>
    #only if applicable
    SSLCertificateChainFile <ca.crt>

    ProxyRequests Off
    ProxyPass / ajp://127.0.0.1:8009/ nocanon
    ProxyPassReverse / ajp://127.0.0.1:8009/
    AllowEncodedSlashes NoDecode
</VirtualHost>
You should fill in the server hostname and the locations of the key and certificate files you are planning to use for HTTPS. In our setup we run Jenkins without a prefix, so the server can be accessed at /. If you run Jenkins with a prefix, make sure to include it after the / in both ProxyPass and ProxyPassReverse (see the sketch below).
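A sketch of the two directives, assuming a hypothetical /jenkins prefix:

ProxyPass /jenkins ajp://127.0.0.1:8009/jenkins nocanon
ProxyPassReverse /jenkins ajp://127.0.0.1:8009/jenkins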
Make sure that apache is listening on port 8008 by editing /opt/rh/httpd24/root/etc/httpd/conf/httpd.conf, and add a rule for port 8008 to your iptables (see the sketch below). After starting apache you should be able to access the Jenkins Dashboard via the apache reverse proxy.
sudo service httpd24-httpd start
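For reference, the Listen directive and a matching iptables rule could look roughly like this (a sketch; adapt the iptables command to your own chain layout):

# in /opt/rh/httpd24/root/etc/httpd/conf/httpd.conf:
Listen 8008

# open the port in iptables and persist the rule:
iptables -I INPUT -p tcp --dport 8008 -j ACCEPT
service iptables save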
Note! In this setup the communication between apache and Jenkins over AJP is unencrypted, which is fine as long as Jenkins only listens on localhost. Additional steps can be taken to make this connection use SSL.
Note! Make sure to close all listening ports (HTTP and HTTPS) on Jenkins, and only leave the AJP port (8009) open for incoming connections on localhost. If you fail to do so, the reverse proxy can be bypassed! Change the following parameters in /etc/sysconfig/jenkins:
JENKINS_PORT="-1"
JENKINS_LISTEN_ADDRESS=""
#JENKINS_HTTPS_PORT=""
#JENKINS_HTTPS_LISTEN_ADDRESS="0.0.0.0"
JENKINS_AJP_PORT="8009"
JENKINS_AJP_LISTEN_ADDRESS="127.0.0.1"
Shibboleth setup
We configured HTTPS access to the Jenkins server through an apache reverse proxy, but so far anybody can use Jenkins unrestricted. To set up authentication using the Nikhef SSO we used Shibboleth. The Shibboleth SP (Service Provider) component is made up of a daemon called shibd and an apache module that helps configure apache to send authentication requests. Unfortunately, the prepackaged RPMs decide which version of the apache module to install based on the system's native apache, which in our case is apache 2.2. This is not what we want, since we are using apache 2.4. To install the right apache module we had to build our own RPM from a SRPM as outlined here. Since the Shibboleth package also comes with some dependencies, what we ended up doing was to install Shibboleth with yum (this takes care of the right dependencies), then remove the shibboleth package, build it from source and install it. This avoids building the rest of the dependencies from source.
wget -O /etc/yum.repos.d/shib.repo http://download.opensuse.org/repositories/security://shibboleth/CentOS_CentOS-6/security:shibboleth.repo
yum install shibboleth.x86_64
rpm -e --nodeps shibboleth
rpmbuild --rebuild --clean --without builtinapache -D 'shib_options \
    --enable-apache-24 \
    --with-apxs24=/opt/rh/httpd24/root/usr/bin/apxs \
    --with-apr=/opt/rh/httpd24/root/usr/bin/apr-1-config \
    --with-apu=/opt/rh/httpd24/root/usr/bin/apu-1-config' SRPMS/shibboleth-2.5.3-1.1.src.rpm
rpm -iv RPMS/x86_64/shibboleth-2.5.3-1.1.x86_64.rpm
Note! In order for the rpmbuild to succeed you might have to install the development packages of the shibboleth dependencies.
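The exact list depends on what the build complains about; as an unverified example, the -devel counterparts of the Shibboleth libraries from the same repository might look like this (the package names here are assumptions, check your repo):

yum install libxerces-c-devel libxml-security-c-devel \
    libxmltooling-devel liblog4shib-devel libsaml-devel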
Note! The rpmbuild also failed multiple times complaining about rpaths and about a deprecated dependency. The rpath warnings were fixed with 'export QA_RPATHS=$[ 0x0001|0x0010 ]', while the dependency error turned out to be a wrongly configured library dependency in /opt/rh/httpd24/root/usr/bin/apxs. A nasty hack made apxs look in /opt/rh/httpd24/root/usr/lib64 for its dependencies rather than in /usr/lib64. The diff between the old and new apxs:
35c35
< my $libdir = `pkg-config --variable=libdir apr-1`;
---
> my $libdir = "/opt/rh/httpd24/root/usr/lib64/";
You can verify if you have the right apache module installed with:
rpm -q -l shibboleth-2.5.3-1.1.el6.x86_64 | grep mod_shib_24
Shibboleth with Apache
The guides at [3] and [4] helped in configuring apache and shibboleth to work together. Shibboleth comes with a sample apache configuration file which loads the module. You should copy this file into the right place in the httpd24 tree; otherwise you will have to load the module manually somewhere else.
cp /etc/shibboleth/apache24.config /opt/rh/httpd24/root/etc/httpd/conf.d/
The Shibboleth configuration file can be found at /etc/shibboleth/shibboleth2.xml. You have to adapt this to your own setup.
...
<ApplicationDefaults entityID="<entityID>" REMOTE_USER="eppn persistent-id targeted-id">
    <Sessions lifetime="28800" timeout="3600" relayState="ss:mem"
              checkAddress="true" handlerSSL="true" cookieProps="https">
        <SSO entityID="https://sso.nikhef.nl/sso/saml2/idp/metadata.php">
            SAML2 SAML1
        </SSO>
        <Logout>SAML2 Local</Logout>
        <Handler type="MetadataGenerator" Location="/Metadata" signing="false"/>
        <Handler type="Status" Location="/Status" acl="127.0.0.1 ::1"/>
        <Handler type="Session" Location="/Session" showAttributeValues="false"/>
        <Handler type="DiscoveryFeed" Location="/DiscoFeed"/>
    </Sessions>
    <MetadataProvider type="XML" uri="https://sso.nikhef.nl/sso/saml2/idp/metadata.php"
                      backingFilePath="sso.nikhef.nl-metadata.xml" reloadInterval="7200">
    </MetadataProvider>
    ...
</ApplicationDefaults>
...
Make sure to set the 'entityID' in ApplicationDefaults to a unique ID; in our case it's simply the hostname of the service that we're setting up. The 'entityID' in the SSO element has to match the ID of the IdP (the Nikhef SSO). Last, a MetadataProvider has to be registered, pointing to the right endpoint of the Nikhef SSO.
Note! We disabled the signature verification on the metadata that shibboleth downloads from the IdP. We did this because the metadata seemed to be signed with the certificate that is provided through the metadata itself; verification would require having that certificate beforehand, which is not the case.
Before you can use the Nikhef SSO as your IdP you have to make sure to register the metadata provided by your service with the IdP. You can generate this metadata by looking at the https://<jenkins>/Shibboleth.sso/Metadata endpoint of your service.
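For example, you could dump the generated metadata to a file and hand that over for registration (a sketch; -k is only needed while your certificate is not yet trusted):

curl -k https://<jenkins>/Shibboleth.sso/Metadata > sp-metadata.xml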
Next you can configure apache to restrict access to a subtree of your choice only to privileged users by redirecting them to the Nikhef SSO. In our case this will be the whole tree starting from /. Add the following section into your virtual host configuration at /opt/rh/httpd24/root/etc/httpd/conf.d/jenkinsproxy.conf:
<Location />
    SSLRequireSSL
    AuthType shibboleth
    ShibRequestSetting requireSession 1
    Require valid-user
</Location>
After restarting httpd24-httpd and shibd you should see that when accessing Jenkins, you get redirected to the SSO page to log in.
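For reference, the two restart commands (service names as installed by the packages above):

sudo service httpd24-httpd restart
sudo service shibd restart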
Jenkins Tweaks
Login
Using the Shibboleth SSO redirect we managed to restrict access to Jenkins to known users only, but so far Jenkins has no way of knowing who the authenticated user is. Because of this every authenticated user appears under an 'anonymous' alias, which makes it impossible to track who did what. By default the settings in Manage Jenkins -> Configure Global Security do not support this type of login, but you can download a plugin called Reverse Proxy Auth Plugin. This plugin introduces another access control mode called 'HTTP Header by reverse proxy'. This option delegates the responsibility of authentication to the reverse proxy, and expects the authenticated user's ID in an HTTP header called 'X-Forwarded-User'.
Apache has to be set up to forward HTTP headers to Jenkins, and to set X-Forwarded-User for the authenticated user. You should add the following into the jenkinsproxy.conf file:
# prevent the client from setting this header
RequestHeader unset X-Forwarded-User

<Location />
    ...
    # enable using headers
    # needed in order to pass X-Forwarded-User in the header to jenkins
    ShibUseHeaders On

    RewriteEngine On
    RewriteCond %{REMOTE_USER} (.+)
    # this actually doesn't rewrite anything. what we do here is to set RU to the match above
    RewriteRule .* - [E=RU:%1]
    RequestHeader set X-Forwarded-User %{RU}e
</Location>
Note! For this to work you have to have the mod_rewrite module installed in your apache.
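On the httpd24 layout this usually comes down to a LoadModule line like the following somewhere in the apache configuration (path relative to the server root):

LoadModule rewrite_module modules/mod_rewrite.so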
Apache extracts the value of X-Forwarded-User from the environment variable REMOTE_USER. The value given to this variable can be tweaked in Shibboleth [5] [6]. After authentication the Shibboleth SP receives a collection of user attributes, which can be mapped to variables. Depending on the IdP that you're talking to, these can have different names and formats. In our setup the Nikhef SSO provides uid, which is mapped in /etc/shibboleth/attribute-map.xml into a variable called Shib-uid. Make sure that you have the following (or similar, in case of a different IdP) line present:
<Attribute name="urn:mace:dir:attribute-def:uid" nameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic" id="Shib-uid"/>
Now all that's left to do is to tell Shibboleth to use the value of Shib-uid to set the REMOTE_USER environment variable. You can do this in /etc/shibboleth/shibboleth2.xml by adding 'Shib-uid' to the front of the list of values that REMOTE_USER can take:
<ApplicationDefaults entityID="<entityID>" REMOTE_USER="Shib-uid eppn persistent-id targeted-id">
Now you should see your username appear in the upper right corner of the Jenkins Dashboard once you have logged in at your SSO. For debugging purposes you can check https://<jenkins>/whoAmI/ to see whether the right variable is set in the request header that Jenkins receives.
Logout
Jenkins seemed to be missing the Logout button once a user was logged in. To solve this we added a plugin that creates an extra option in the sidebar. The logout link used is https://<jenkins>/Shibboleth.sso/Logout. This endpoint is responsible for executing a global SSO logout by issuing a logout request to the SLO endpoint specified by the IdP, and also for doing a local logout by removing the session from the local cache.
Note! During a global logout Shibboleth sends a logout request to the IdP and expects a signed logout response, which in our case was not present. You can tell Shibboleth to skip the security checks by using a NullSecurity PolicyRule (see the sketch below), although this is not advised for production use.
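A rough sketch of what that could look like in /etc/shibboleth/security-policy.xml, based on our reading of the linked documentation (treat the exact element placement as an assumption, and never use this in production):

<!-- disables signature/security checks on incoming messages; testing only -->
<Policy id="default" validate="false">
    ...
    <PolicyRule type="NullSecurity"/>
</Policy>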
Note! We experienced first-hand some of the difficulties of single (global) logout [7]. Accessing Jenkins after logging in at the Nikhef SSO through a different Nikhef service works as expected, but logging out results in an error message. What we ended up doing was to redirect users who wish to log out from Jenkins to the Nikhef SSO page.