OpenStack Cluster

In order to have a system to provision virtual machines on demand for various use cases (such as Jenkins slaves), a small OpenStack installation has been deployed on a XenServer pool of four physical machines.

Architecture and Components

OpenStack defines a terminology for its components, see [http://docs.openstack.org/icehouse/install-guide/install/yum/content/ch_overview.html]. It distinguishes between three types of nodes which govern the functioning of the system: the Controller Node (controller), the Compute Node (compute) and the Network Node (network). The deployment guides treat these as separate nodes in order to support high scalability, but given the small size of our cluster we decided to go ahead with an all-in-one design that incorporates the functionality of all three nodes into one.

OpenStack provides clear separation between its components and leaves it up to the user which ones to install, depending on their needs. The components that we selected are:

  • keystone - identity service for users and for other OpenStack components alike
  • nova - compute service, governs VM management
  • glance - image service, stores and provisions cloud images to nova
  • cinder - volume service, provides persistent storage volumes to VMs
  • horizon - web-based UI
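
As a quick sanity check that the selected services are running and registered, something like the following can be run on the all-in-one node (a minimal sketch; the credentials file name admin-openrc.sh is an assumption, use whatever admin rc file was created during setup):

source admin-openrc.sh    # load admin credentials (file name is an assumption)
keystone service-list     # identity, compute, image and volume services should be listed
nova service-list         # nova services should report state 'up'
glance image-list         # returns the (initially empty) image list
cinder list               # returns the (initially empty) volume list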

We decided to drop the use of neutron (networking service) in favor of the legacy nova-network networking service because of its simplicity.
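
For reference, switching nova to legacy networking on an Icehouse node roughly amounts to settings like the ones below in /etc/nova/nova.conf (a sketch only; the bridge and interface names are assumptions and have to match the local network layout):

openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.api.API
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api nova
openstack-config --set /etc/nova/nova.conf DEFAULT network_manager nova.network.manager.FlatDHCPManager
openstack-config --set /etc/nova/nova.conf DEFAULT flat_network_bridge xenbr0
openstack-config --set /etc/nova/nova.conf DEFAULT flat_interface eth0
openstack-config --set /etc/nova/nova.conf DEFAULT public_interface eth0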

OpenStack and XenServer

The guide at [https://wiki.openstack.org/wiki/XenServer/XenAndXenServer] offers some insight into how OpenStack should be set up together with XenServer. Our XenServer instance consists of four physical machines with a pool master. The XenServer pool master is required because of the iSCSI backend used for VM volume provisioning. To avoid race conditions between XenServer machines, only the pool master is allowed to allocate space, and thus only the pool master can create new VMs (a VM can still be created on any XenServer host, but the request will be routed to the pool master). Having a XenServer pool master forced us to deviate from the deployment guide in some cases, pointed out below.
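
Which of the four hosts is the pool master can be checked with xe on any of them; a minimal sketch:

xe pool-list params=uuid,name-label,master    # the 'master' field holds the UUID of the pool master host
xe host-list params=uuid,name-label,address   # match the master UUID against this list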

Before installing the OpenStack packages, the following steps from [https://wiki.openstack.org/wiki/XenServer/PostInstall] have been taken on the dom0:

  • Storage: we went ahead with the LVM based iSCSI storage, against the advice. (The only limitation noticed so far is the broken cinder deployment)
  • XenAPI Plugins: these python plugins have been copied into /etc/xapi.d/plugins/ on the XenServer pool master (this is crucial, because otherwise nova-compute cannot communicate with XenServer; see the sketch after this list)
  • VIF Isolation Rules: not yet installed
  • 'resize' functionality: not supported yet
  • python 2.6 and packages: not yet installed, stayed with python 2.4
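
A sketch of the plugin copy step, assuming the plugins were taken from the Icehouse nova source tree (under plugins/xenserver/xenapi/etc/xapi.d/plugins/) and that the pool master is reachable as root:

scp plugins/xenserver/xenapi/etc/xapi.d/plugins/* root@<pool master>:/etc/xapi.d/plugins/
ssh root@<pool master> "chmod a+x /etc/xapi.d/plugins/*"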

Our all-in-one OpenStack node is installed in a paravirtualized VM running in a domU of XenServer. Since there was a CentOS 6 template already configured in XenServer, this became the OS of choice for the all-in-one node. In order to support the full functionality of this paravirtualized system, the XenServer Tools first have to be installed on it [http://support.citrix.com/proddocs/topic/xencenter-62/xs-xc-vms-installtools.html]. We still had the xen-tools.iso ISO lying around on the XenServer hosts in the list of VDIs (Virtual Disk Images), so it just needed to be mounted on the newly created VM:

xe vbd-create vm-uuid=<VM_ID> device=xvdd vdi-uuid=<xen-tools.iso VDI_ID> bootable=false mode=RO type=Disk 
xe vbd-plug uuid=<new VBD_ID>
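
The <VM_ID> and <xen-tools.iso VDI_ID> placeholders above can be looked up with xe (the name-labels are examples, adjust to the actual VM and ISO names):

xe vm-list name-label=<all-in-one VM name> params=uuid --minimal
xe vdi-list name-label=xen-tools.iso params=uuid --minimal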

Then, on the VM:

mount /dev/xvdd /mnt
cd /mnt/Linux
./install.sh
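
Whether the tools were picked up can be verified from the XenServer side after rebooting the VM; a sketch, using the VM UUID from above:

xe vm-param-get uuid=<VM_ID> param-name=PV-drivers-version
xe vm-param-get uuid=<VM_ID> param-name=PV-drivers-up-to-date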