OpenStack Cluster

In order to have a system that provisions virtual machines on demand for various use cases (such as Jenkins slaves), a small OpenStack installation has been deployed on a XenServer pool of four physical machines.

Architecture and Components

OpenStack defines its own terminology for its components; see [1]. It distinguishes between three types of nodes which are used to govern the functioning of the system: Controller Node (controller), Compute Node (compute) and Network Node (network). In the deployment guides these are treated as separate nodes in order to support high scalability, but given the small size of our cluster, we decided to go with an all-in-one design that incorporates the functionality of all three nodes into one.

OpenStack provides clear separation between its components and leaves it up to the user which ones to install, depending on their needs. The components that we selected are:

  • keystone - identity service for users and for other openstack components alike
  • nova - compute service, governs VM management
  • glance - image service, stores and provisions cloud images to nova
  • cinder - volume service, provides persistent storage volumes to VMs
  • horizon - web-based ui

We decided to drop the use of neutron (networking service) in favour of the nova-network legacy networking service because of its simplicity.

OpenStack and XenServer

The guide at [2] offers some insight into how OpenStack should be set up together with XenServer. Our XenServer installation consists of four physical machines with a pool master. The XenServer pool master is required because of the iSCSI backend used for VM volume provisioning. To avoid race conditions between XenServer machines, only the pool master is allowed to allocate space, and thus only the pool master can create new VMs (a VM can still be created on any XenServer host, but the request will be routed to the pool master). Having a XenServer pool master forced us to deviate from the deployment guide in some cases, pointed out below. OpenStack talks to the hypervisor using the XenAPI, which is the same API that the XenCenter management application uses.
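
A quick way to check that the OpenStack node can actually reach the XAPI endpoint on the pool master (a hedged sketch; the address is a placeholder, and XAPI serves HTTPS on its management interface):

# -k skips certificate verification; any HTML response means XAPI is reachable
curl -k https://<pool master ip on management network>/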

Before installing the OpenStack packages the following steps have been taken from [3] on the dom0 :

  • Storage: we went ahead with the LVM-based iSCSI storage, against the guide's advice. (The only limitation noticed so far is the broken cinder deployment.)
  • XenAPI Plugins: these python plugins have been copied into /etc/xapi.d/plugins/ on the XenServer pool master (this is crucial, because otherwise nova-compute cannot communicate with XenServer; see the sketch after this list)
  • VIF Isolation Rules: not yet installed
  • 'resize' functionality: not supported yet
  • python 2.6 and packages: not yet installed, stayed with python 2.4
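
A sketch of how the plugin installation can be done from the OpenStack node (hedged: the path to the plugins shipped with your nova version is an assumption, so locate it first; the pool master address is a placeholder):

# copy the XenAPI plugins shipped with nova into dom0 on the pool master
scp /path/to/nova/plugins/xenserver/xenapi/etc/xapi.d/plugins/* root@<pool master ip>:/etc/xapi.d/plugins/
# the plugins must be executable by xapi
ssh root@<pool master ip> 'chmod a+x /etc/xapi.d/plugins/*'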

DomU Setup

Our all-in-one OpenStack node is installed in a paravirtualized VM running in a domU of XenServer. Since there was a 64 bit CentOS 6 VM template with 1 VCPU and 1 GB of memory already configured in XenServer, this became the OS of choice for the all-in-one node. In retrospect, a more powerful VM would have been better (at least with more memory, because the 1 GB is nearly all reserved by nova).

Regarding networking, we wanted to follow the FlatDHCP[4] design, where a dedicated VM network is created with the openstack node serving as its gateway and DHCP server. Since the actual host machines only have two network interfaces (one for management and one for public traffic), we created a dedicated VLAN for the openstack VM network. A simplified view of the resulting network configuration on the host machines looks like this:

 eth0 (management network)
   |
   --- xenbr0 (virtual bridge)
   |
   --- xapi5 (iscsi network)

 eth1
   |
   --- xenbr1 (virtual bridge)  
   |
   --- xapi17 (openstack VM network) *new*
   |
   --- xapi14 (tw public network) 
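
For reference, a VLAN network such as xapi17 can be created on top of eth1 across the whole pool roughly as follows (a hedged sketch run on the pool master; the name-label and VLAN tag are placeholders):

# find the PIF corresponding to eth1
xe pif-list device=eth1
# create the network and put it on a VLAN on top of eth1 on every host in the pool
xe network-create name-label=openstack-vm-network
xe pool-vlan-create network-uuid=<network id> pif-uuid=<eth1 pif id> vlan=<vlan tag>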

Note that the routers behind the host machines have to be configured to forward packets in the newly created VLAN. Next, we create virtual network interfaces (VIFs) connecting the openstack node to every required network. You can create a new VIF with:

# repeat for every required network; use a different device number for each VIF
xe vif-create vm-uuid=<vm id> network-uuid=<network id> device=<device number> mac=random
xe vif-plug uuid=<vif id>

If you cannot see the newly created interfaces in the VM, restart the VM. In the end, there should be 4 network interfaces present in the openstack node as follows:

 # OpenStack communicates with XenServer over this network
 eth0 (management network)                                                          -> plugged in xenbr0
 
 # to the internet
 eth1 (public network)                                                              -> plugged in xapi14

 # This interface is configured with a static ip, 
 # the first ip chosen from 172.22.192.0/18 (VM network)
 # Acts as a DHCP server and gateway for the internet for the VM network
 eth2 (VM network) 172.22.192.1 (static ip)                                         -> plugged in xapi17	
 
 # While experimenting with cinder we needed access to our iSCSI backend.
 # We ended up needing it for the glance backend as well.
 eth3 (iscsi network)                                                               -> plugged in xapi5		

Make sure to configure eth1 to have access to the internet. In our case we dedicated a public IP to it.
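
On a CentOS guest this typically comes down to a static interface configuration along these lines (the addresses are placeholders for our public IP assignment):

# /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=static
IPADDR=<public ip>
NETMASK=<netmask>
GATEWAY=<public gateway>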

In order to support the full functionality of this paravirtualized system, the XenServer Tools first have to be installed on it [5]. We still had the xen-tools.iso lying around on the XenServer hosts in the list of VDIs (Virtual Disk Images), so it just needed to be mounted on the newly created VM:

xe vbd-create vm-uuid=<vm id> device=xvdd vdi-uuid=<xen-tools.iso vdi id> bootable=false mode=RO type=Disk 
xe vbd-plug uuid=<new vbd id>

and once logged into the VM

mount /dev/xvdd /mnt
/mnt/Linux/install.sh

Deploying OpenStack

Before starting the installation make sure to update the OS to its newest version and set up ntp. Installing the EPEL repository is also recommended. OpenStack can be installed and set up manually[6] or in a more automated way. No matter which method you choose, there is a dependency to look out for which might or might not be mentioned in the guide you follow. This dependency is a python library called XenAPI.py, and it is needed to support the XenServer hypervisor. You can get it with:

# the pip package is called python-pip and comes from EPEL
yum install python-pip
pip install XenAPI
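
You can quickly verify that the library is importable (python 2 on CentOS 6):

python -c 'import XenAPI; print XenAPI.__file__'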

We chose an automated deployment method using PackStack[7]. You can use the packstack command to generate an answer file (packstack config file) and modify it to your needs before the installation. This takes care of generating most of the required settings and passwords for every component. The important parameters to change are the ones deciding whether to install a service or not, the *_HOST parameters pointing at where to install services, and your choice of database (MariaDB for us) and message queue service (qpid for CentOS). Do not lose or post the answer file, since it contains the passwords for all the openstack services.

Note! You should take special consideration when deploying a multi-node system. The OpenStack components are using the message queue to communicate with each other which is set up insecurely (without ssl) by default. This is ok in an all-in-one setup, because messages are not leaving the machine, but in other cases the message queue security has to be set up.

packstack --gen-answer-file my_answers.txt
# Modifications made to the my_answers.txt file
# The default is to install every service, so here we mark unwanted services with 'n'
> CONFIG_NEUTRON_INSTALL=n
> CONFIG_SWIFT_INSTALL=n
> CONFIG_CEILOMETER_INSTALL=n
> CONFIG_NAGIOS_INSTALL=n
> CONFIG_DEBUG_MODE=y
> CONFIG_CONTROLLER_HOST=<public ip>
> CONFIG_COMPUTE_HOSTS=<public ip>
> CONFIG_NETWORK_HOSTS=<public ip>
> CONFIG_STORAGE_HOST=<public ip>
> CONFIG_RH_OPTIONAL=n
> CONFIG_AMQP_BACKEND=qpid
> CONFIG_AMQP_HOST=<public ip>
> CONFIG_MARIADB_HOST=<public ip>
> CONFIG_NOVA_COMPUTE_PRIVIF=eth2
> CONFIG_NOVA_NETWORK_PUBIF=eth1
> CONFIG_NOVA_NETWORK_PRIVIF=eth0
> CONFIG_NOVA_NETWORK_FIXEDRANGE=<ip range> (172.22.192.0/18) 
> CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=novapool
> CONFIG_NOVA_NETWORK_VLAN_START=
> CONFIG_CINDER_VOLUMES_CREATE=n
packstack --answer-file=my_answers.txt

Grab a cup of coffee. This should set up most of the required software to run OpenStack. The packstack installation also creates two openstack user accounts: 'admin' and 'demo'. It also creates two files called 'keystonerc_admin' and 'keystonerc_demo' which can easily be sourced to become one user or the other. You can use these useful commands to find out more about your installation:

Note! Packstack installs the libvirt package, which is not really needed if you're setting up OpenStack with XenServer via the xenapi. To avoid confusion we removed it.

openstack-status
openstack-service [list|start|stop]
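
Regarding the unused libvirt mentioned in the note above: at minimum you can stop and disable its daemon. Removing the package outright may drag in other packages through RPM dependencies, so check what yum proposes to remove first.

service libvirtd stop
chkconfig libvirtd off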

Configuring OpenStack

Glance

Glance comes almost completely configured; the only setting that you might want to change is the backend used for storing the images. This defaults to a local filesystem store, which tends to fill up the local disk quickly, so it is not a good idea.

We experimented with using cinder as a backend storage for glance, but after numerous tries this was abandoned. We ended up using the filesystem store as a backend after all, but using a bigger logical volume via iSCSI. To manage iSCSI we downloaded the Linux SCSI target framework (tgt)[8]. The way to set this up to work as a glance backend is as follows[9]:

service tgtd start
tgtadm --lld iscsi --op new --mode target --tid 1 -T prefix.example:server
# you can verify if the connection works with
tgtadm --lld iscsi --op show --mode target
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/sda

# create a logical volume to mount (a filesystem is needed before mounting)
pvcreate /dev/sda
vgcreate vg_glance /dev/sda
lvcreate -L <size> -n lv_glance vg_glance
mkfs.ext4 /dev/vg_glance/lv_glance

mount /dev/vg_glance/lv_glance /mnt

Now all that is left to do is to change the following variable in /etc/glance/glance-api.conf:

filesystem_store_datadir=/mnt
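
and restart the glance API service so that it picks up the new datadir (service name as deployed by packstack on CentOS):

service openstack-glance-api restart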

Nova

The main nova configuration file can be found in /etc/nova/nova.conf.

nova-compute

The important bits of this configuration are the ones regarding XenServer, and are as follows:

compute_driver=xenapi.XenAPIDriver
xenapi_connection_url=https://<XenServer pool master host ip on management network>
xenapi_connection_username=root
xenapi_connection_password=<password>

Another important flag is:

sr_matching_filter=default-sr:true

This flag defaults to 'default-sr:true', which is what we need in our case. This flag will make OpenStack provision disk space for its VMs through XenServer, using the SR (Storage Repository) configured as default. In our setup the default SR in XenCenter is an 'lvmoiscsi' type backend.

nova-network

After numerous attempts we finally managed to set up the FlatDHCP network using the VM network created earlier. The difficulty lies in attaching the newly created VMs to the right bridge in dom0. Note that OpenStack is not aware of the VLAN configuration set up earlier; it is transparent to OpenStack. The setup is as follows:

my_ip=<public ip from eth1>

network_driver=nova.network.linux_net
network_manager=nova.network.manager.FlatDHCPManager
public_interface=eth1
dns_server=<valid dns server>
force_snat_range=0.0.0.0/0
send_arp_for_ha=True
flat_network_bridge=xapi17
flat_injected=false
flat_interface=eth2
fixed_range=172.22.192.0/18
force_dhcp_release=True 

It is important to choose the right flat_network_bridge because this determines which bridge the new VMs get connected to. IP injection is not required because we are using a DHCP server handing out IPs from the provided fixed_range. Traffic from the VM network is SNAT-ed through the public_interface.

Note! Do not set multi_host to true, unless you really have multiple network hosts.

Note! Do not set share_dhcp_address to true in an all-in-one setup! This flag makes it possible to have multiple gateways with seemingly the same IP address for high availability. As a side effect it also adds some rules into ebtables that filter out all traffic coming from the VMs. This led to a scenario where VM instances could ping each other, but not the gateway.

vnc console

If you want OpenStack to provide vnc consoles through the browser for you:

novncproxy_base_url=http://<public_ip>:6080/vnc_auto.html
xvpvncproxy_base_url=http://<public_ip>:6081/console

vncserver_listen=<XenServer_ip>
vncserver_proxyclient_address=<XenServer_ip>

vnc_enabled=True
vnc_keymap=en-us

xvpvncproxy_host=<public_ip>
xvpvncproxy_port=6081
novncproxy_host=<public_ip>
novncproxy_port=6080

This will enable nova to generate an address containing a token with which you can get console access to an instance. Encryption, however, is not enabled on the connection by default. Note that most cloud images do not come with a predefined password for their users, in which case you cannot log in using the console unless you set a password beforehand.
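
After editing nova.conf, the nova services have to be restarted for any of the above settings to take effect. A hedged sketch using the standard RDO service names (adjust to whatever 'openstack-service list' shows on your node):

for svc in openstack-nova-api openstack-nova-compute openstack-nova-network openstack-nova-novncproxy; do
    service $svc restart
done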

Cinder

Only experimental. Coming Soon!

Horizon

Only experimental. Coming Soon!

Firewall

Be careful with setting custom firewall rules, since whenever you restart the OpenStack services (namely nova-network) they will add their own chain on top of the filter. If you decide to go ahead and craft your own rules, here are the ports on which OpenStack expects connections:

nova-api            8773 (for EC2 API)
                    8774 (for openstack API)
                    8775 (metadata port)
                    3333 (when accessing S3 API)
nova-novncproxy     6080
                    5800/5900 (VNC)
cinder              8776
glance              9191 (glance registry)
                    9292 (glance api)
keystone            5000 (public port)
                    35357 (admin port)
http                80 (dashboard)
MySQL               3306
AMQP                5672

Some of these ports might not be listening, depending on your setup. Amazon-specific ports, like 8773 (for the EC2 API), can be closed from nova.conf. Moreover, if you have an all-in-one setup, there is no need to allow outside access to either the MySQL or the AMQP ports. In our setup we created a set of rules which can be restored manually if needed from /etc/sysconfig/iptables.custom.
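
As an illustration (hedged: the exact port selection depends on which services you want to expose), such a custom rule set can be built and saved roughly like this:

# allow the dashboard and the public API/console ports from outside
iptables -A INPUT -p tcp -m multiport --dports 80,5000,6080,8774,9292 -j ACCEPT
# save the current rules
iptables-save > /etc/sysconfig/iptables.custom
# later, restore them manually after a service restart has clobbered them
iptables-restore < /etc/sysconfig/iptables.custom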

Operating OpenStack

Commands issued to OpenStack using the API have to be authenticated with the right credentials. By default, the OpenStack installer deploys two user accounts, 'demo' and 'admin', with their corresponding rc files 'keystonerc_demo' and 'keystonerc_admin'. If you do not have these files, you can create them according to the following template:

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=<demo password>
export OS_AUTH_URL=http://<ip>:5000/v2.0/
export PS1='[\u@\h \W(keystone_demo)]\$ '
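
Usage is simply a matter of sourcing the file and running a client command; if the credentials are correct the command returns without an authentication error:

source keystonerc_demo
nova list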

Starting VMs

Source the 'keystonerc_demo' file, so that all the following commands get authenticated.

Adding images

In order to start a VM, first you have to make sure you have images configured in glance that can be used to boot. We only experimented with specific cloud images provided by CentOS and Fedora. These images can simply be imported into glance with:

glance image-create --name=<name> --disk-format=raw --container-format=bare --location <image url>

Note that glance will save a local copy of the image, so it is advised to have a backend configured that is large enough. The 'container-format' is a deprecated option still required by this release [10]. The 'disk-format' which worked for us was 'raw'. Many images come in the compressed qcow2 format, but unfortunately we could not boot these because of XenServer incompatibilities. Images can be converted from qcow2 to raw format using the 'qemu-img' tool, as shown below. You could also, in theory, build your own bootable images, but we have not tested this yet.
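
The qcow2 to raw conversion referred to above is a one-liner (file names are placeholders):

qemu-img convert -f qcow2 -O raw <image>.qcow2 <image>.raw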

Create Network

Next, you need a predefined network to which the new VM will be attached. We call this the 'tenant-network'. To create a network from the command line you need admin privileges, so source 'keystonerc_admin'. You can create a new network with:

nova network-create --fixed-range-v4 172.22.192.0/18 --project-id <projectID> tenant-network

This command will use the values of the network configuration defined above as defaults, which can be overridden with explicit options. OpenStack projects/tenants are a form of user grouping. By default, the demo account should be part of a demo project. You can get its id using:

keystone tenant-list

The network created by this command will only have the fixed IPs set up. Floating IP setup is still planned for the future.

Note! Before you create the network you should configure a static IP for the interface that you're going to use for the tenant-network (flat_interface in nova.conf). This is the IP that the tenant network will take as the gateway address.

Note! Whenever you're trying to recreate a network with the same IP range, first you have to tear down the old one. Existing networks can be deleted by disassociating them from their projects first, as follows:

nova-manage project scrub --project <projectID>
nova net-delete <networkID>

To avoid conflicts and confusion between separate networks you should also stop the running 'dnsmasq' processes and remove the existing network bridges created by OpenStack using 'brctl delbr <bridge>'.
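
A hedged cleanup sketch (the bridge name is whatever nova-network created for the old network):

# stop the DHCP servers started by nova-network
pkill dnsmasq
# a bridge has to be down before it can be deleted
ifconfig <bridge> down
brctl delbr <bridge>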

Booting VM

Set up a keypair to use for ssh interaction and register it with OpenStack. Now you can switch back to the demo user to start a VM:

nova keypair-add --pub-key <pub-key> demo-key
nova boot --flavor m1.small --image centOS-7-x86_64 --nic net-id=<network id> --security-group default --key-name demo-key centos7-instance
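
Once the instance reaches the ACTIVE state you can look up its fixed IP and log in with the key. The login user depends on the image; for the CentOS cloud images it is 'centos':

nova list
nova console-log centos7-instance
ssh -i <private key> centos@<fixed ip>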

Creating custom cloud images

Oftentimes you would like to have your own cloud images ready to boot, with your specialized software stack already installed on them. You can find a guide on how to do this at [11]. Here are some additional notes on how to build such a custom image yourself. There are multiple ways of building your own image, but the easiest is to build a virtual machine using the hypervisor and take a snapshot of it, which you import into glance afterwards. With XenServer you would go ahead and start a VM like this:

xe vm-install template=<template-id> new-name-label='snapshot'

If you are performing an OS installation from an online repository you will have to plug the VM into a network where it can reach the resource. If you don't have any alternative networks to use, you can plug it into the OpenStack VM network created before. In this case you have to configure the OS installer with a static IP (from the OpenStack network) and gateway (OpenStack node).

xe vm-param-set uuid=<vm id> other-config:install-repository=<os repository>
xe vif-create vm-uuid=<vm id> network-uuid=<network id> device=0 mac=random

Next you have to start the instance, get console access to it and follow the installation instructions.

xe vm-start uuid=<vm id>
xe console name-label=<vm name>

After the installation is completed, you can go ahead with customizing the image for your own needs, installing packages and modifying configurations. There are a couple of important packages that should be present in your instance for it to work correctly, such as acpid and cloud-init. cloud-init is used for early VM configuration on first boot. Among other things, it is responsible for connecting to the OpenStack metadata server and downloading the preconfigured ssh public key into the VM's authorized_keys file. You can find examples and guides on how to configure cloud-init at [12]. Additionally, you might want to install your hypervisor-specific guest tools, in order to improve how the VMs you create and your hypervisor interact. In the case of XenServer you should install the XenServer Tools.
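
Inside the image this typically amounts to the following (a sketch for a CentOS guest; cloud-init comes from EPEL):

yum install acpid cloud-init
chkconfig acpid on
# cloud-init installs several init scripts (cloud-init-local, cloud-init, cloud-config, cloud-final);
# verify that they are enabled with: chkconfig --list | grep cloud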

Note! If you don't have cloud-init configured correctly to pull the ssh public key you will not be able to ssh into your instance!

Note! If you don't install the XenServer guest tools you will be unable to use the 'suspend' feature in OpenStack.

Once you have configured the VM to fit your needs you have to export a snapshot of it. This proved to be a bit tricky with XenServer, because its CLI export functionality only allows for exports in .xva format (Xen Virtual Appliance). OpenStack, on the other hand, does not recognize this and expects something like a .vhd file. The only possible way to convert from .xva to .vhd is to use Citrix XenConverter, a tool offered only for Windows. Alternatively you can try to export OVA/OVF images directly using Citrix XenCenter, a manager application for XenServer (this option failed for me).

Note! Recent versions of XenConverter don't have the desired conversion functionality. In order to convert from .xva to .vhd (OVA/OVF) you need version 2.3.1. Moreover, do not use the 'Compress OVF' option while converting, because you will have problems with qemu-img recognizing it.

The exported .vhd needs a little bit of polishing before you can add it to glance, namely removing persistent networking information, old log files, temp files and old user accounts. There is a tool called virt-sysprep which can help you do this in an automated manner, but it only operates on raw format disks, so you will have to convert the .vhd into a raw disk image using qemu-img.

qemu-img convert -f vpc -O raw <VHD> <RAW>
virt-sysprep -a <RAW>

Note! It can happen that networking information still persists even after running virt-sysprep, in which case you have to remove it manually. The easiest way to go about this is to mount the disk image and remove static IP addresses and HW addresses from /etc/sysconfig/network-scripts/ifcfg-eth0, as sketched below.
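
If you have the libguestfs tools available, a hedged way of doing this without booting the image looks like this:

# mount the raw image read-write and inspect the OS inside it
guestmount -a <RAW> -i --rw /mnt
# drop hardware addresses and static addressing left over from the build VM
sed -i '/^HWADDR/d;/^IPADDR/d;/^GATEWAY/d' /mnt/etc/sysconfig/network-scripts/ifcfg-eth0
# also remove persistent udev network naming rules if present
rm -f /mnt/etc/udev/rules.d/70-persistent-net.rules
# unmount (guestmount is FUSE-based)
fusermount -u /mnt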

Now you can go ahead and import your image into glance, so that it will be available to boot:

glance image-create --disk-format raw --container-format bare --file <RAW> --name 'custom image'