OpenStack Cluster

In order to provision virtual machines on demand for various use cases (such as Jenkins slaves), a small OpenStack installation has been deployed on a XenServer pool of four physical machines.

Architecture and Components

OpenStack defines its own terminology for its components, see [1]. It distinguishes between three types of nodes that govern the functioning of the system: the Controller Node (controller), the Compute Node (compute) and the Network Node (network). The deployment guides treat these as separate nodes in order to support high scalability, but given the small size of our cluster we decided to go ahead with an all-in-one design that incorporates the functionality of all three nodes into a single node.

OpenStack provides a clear separation between its components and leaves it up to the user to decide which ones to install, depending on their needs. The components that we selected are:

  • keystone - identity service for users and for other openstack components alike
  • nova - compute service, governs VM management
  • glance - image service, stores and provisions cloud images to nova
  • cinder - volume service, provides persistent storage volumes to VMs
  • horizon - web-based UI

We decided to drop the use of neutron (the networking service) in favor of the legacy nova-network networking service because of its simplicity.
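
Once these components are installed and configured, a quick sanity check is to list the registered services and the running nova services (a minimal sketch, assuming the command line clients of that era and an admin credentials file, here called keystonerc_admin as an example):

# load admin credentials (file name is just an example)
. /root/keystonerc_admin

# the service catalog should list keystone, nova, glance and cinder
keystone service-list

# the nova services (scheduler, conductor, compute, network, ...) should report state "up"
nova service-list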

OpenStack and XenServer

The guide at [2] offers some insight into how OpenStack should be set up together with XenServer. Our XenServer installation consists of four physical machines with a pool master. The XenServer pool master is required because of the iSCSI backend used for VM volume provisioning: to avoid race conditions between XenServer machines, only the pool master is allowed to allocate space, and thus only the pool master can create new VMs (a VM can still be created on any XenServer host, but the request will be routed to the pool master). Having a XenServer pool master forced us to deviate from the deployment guide in some of the cases pointed out below.
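
For reference, the pool master can be identified from dom0 with the xe CLI (a short sketch):

# uuid of the pool master
xe pool-list params=master

# map that uuid back to a host name
xe host-list uuid=<master uuid> params=name-label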

Before installing the OpenStack packages, the following steps from [3] were carried out in dom0:

  • Storage: we went ahead with the LVM based iSCSI storage, against the advice of the guide. (The only limitation noticed so far is the broken cinder deployment.)
  • XenAPI Plugins: these python plugins have been copied into /etc/xapi.d/plugins/ on the XenServer pool master (this is crucial, because otherwise nova-compute cannot communicate with XenServer; see the sketch after this list)
  • VIF Isolation Rules: not yet installed
  • 'resize' functionality: not supported yet
  • python 2.6 and packages: not yet installed, stayed with python 2.4
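
Copying the XenAPI plugins is a one-off action; a minimal sketch is given below (the source directory is the location in the nova source tree and may differ for packaged installations):

# copy the XenAPI plugins that ship with nova to dom0 on the pool master
scp plugins/xenserver/xenapi/etc/xapi.d/plugins/* root@<pool master>:/etc/xapi.d/plugins/

# on the pool master: the plugins have to be executable
chmod a+x /etc/xapi.d/plugins/*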

DomU Setup

Our all-in-one OpenStack node is installed as a paravirtualized VM running in a domU of XenServer. Since there was a 64 bit CentOS 6 VM template with 1 VCPU and 1 GB of memory already configured in XenServer, this became the OS of choice for the all-in-one node. In retrospect, a more powerful VM should have been used (at least one with more memory, because the 1 GB is nearly all reserved by nova).
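
For reference, creating such a VM from the template and raising its memory limit can be done from dom0 roughly as follows (a sketch only; the template and VM names are examples):

# create a VM from the existing CentOS 6 template
xe vm-install template="CentOS 6 (64-bit)" new-name-label=openstack-aio

# give it more memory than the template default, e.g. 4 GiB
xe vm-memory-limits-set uuid=<vm id> static-min=4GiB dynamic-min=4GiB dynamic-max=4GiB static-max=4GiB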

Regarding the networking part, we wanted to follow the FlatDHCP [4] design, where a dedicated VM network is created with the openstack node serving as gateway and DHCP server for it. Since the host machines only have two physical network interfaces (one for management and one for public traffic), we created a dedicated VLAN for the openstack VM network. A simplified view of the new network configuration on the host machines looks something like this:

 eth0 (management network)
   |
   --- xenbr0 (virtual bridge)
   |
   --- xapi5 (iscsi network)

 eth1
   |
   --- xenbr1 (virtual bridge)  
   |
   --- xapi17 (openstack VM network) *new*
   |
   --- xapi14 (tw public network) 
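
The dedicated VLAN network (xapi17 above) was created on the XenServer side; with the xe CLI this boils down to something like the following (a sketch; the VLAN tag is site-specific and only a placeholder below):

# create a new network object for the openstack VM network
xe network-create name-label="openstack VM network"

# look up the PIF of eth1 on the pool master
xe pif-list device=eth1 params=uuid,host-name-label

# create the tagged VLAN on top of eth1 on every host in the pool
xe pool-vlan-create pif-uuid=<eth1 pif uuid on the master> network-uuid=<network uuid> vlan=<tag>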

Note that the routers behind the host machines have to be configured to forward packets in the newly created VLAN. Next, we create virtual network interfaces (VIFs) for the openstack node, one on every required network. A new VIF can be created with:

# increment device= for each additional VIF (0, 1, 2, ...)
xe vif-create vm-uuid=<vm id> network-uuid=<network id> device=0 mac=random
xe vif-plug uuid=<vif id>
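
The uuids needed above can also be looked up with the xe CLI, for example:

# uuid of the openstack VM
xe vm-list name-label=<openstack vm name> params=uuid --minimal

# uuids, names and bridges of the available networks (xenbr0, xapi5, xapi14, xapi17)
xe network-list params=uuid,name-label,bridge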

If you cannot see the newly created interfaces in the VM, restart the VM. In the end, there should be 4 network interfaces present in the openstack node as follows:

 # OpenStack communicates with XenServer over this network
 eth0 (management network)                                                          -> plugged in xenbr0
 
 # to the internet
 eth1 (public network)                                                              -> plugged in xapi14

 # This interface is configured with a static ip, 
 # the first ip chosen from 172.22.192.0/18 (VM network)
 # Acts as DHCP server and internet gateway for the VM network
 eth2 (VM network) 172.22.192.1 (static ip)                                         -> plugged in xapi17	
 
 # (optional)
 # While experimenting with cinder we needed access into our iSCSI backend. 
 eth3 (iscsi network)                                                               -> plugged in xapi5		

Make sure to configure eth1 to have access to the internet; in our case we dedicated a public IP address to it.
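
With this interface layout, the nova-network part of /etc/nova/nova.conf on the all-in-one node ends up looking roughly like the snippet below (a sketch only; the bridge and interface values are assumptions matching the layout above):

[DEFAULT]
# legacy nova-network in FlatDHCP mode
network_manager = nova.network.manager.FlatDHCPManager
# interface attached to the VM network (eth2, 172.22.192.1)
flat_interface = eth2
# XenServer bridge backing the VM network
flat_network_bridge = xapi17
# interface with access to the internet, used for NAT towards the outside
public_interface = eth1

The 172.22.192.0/18 range itself is registered as the nova fixed network, roughly with (the label is an example):

nova-manage network create --label=vmnet --fixed_range_v4=172.22.192.0/18 --bridge=xapi17 --bridge_interface=eth2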

In order to support the full functionality of this paravirtualized system, the XenServer Tools first have to be installed on it [5]. We still had the xen-tools.iso available on the XenServer hosts in the list of VDIs (Virtual Disk Images), so it just needed to be attached to the newly created VM:

xe vbd-create vm-uuid=<vm id> device=xvdd vdi-uuid=<xen-tools.iso vdi id> bootable=false mode=RO type=Disk 
xe vbd-plug uuid=<new vbd id>
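
The uuid of the tools ISO can be found by listing the VDIs by name (the exact name-label of the ISO may differ between XenServer versions):

xe vdi-list name-label=xen-tools.iso params=uuid --minimal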

Once logged into the VM, mount the ISO and run the installer:

mount /dev/xvdd /mnt
/mnt/Linux/install.sh
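
After the installer has finished (and the VM has been rebooted), it can be verified from dom0 that the PV drivers are recognised (a short sketch):

xe vm-param-get uuid=<vm id> param-name=PV-drivers-version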