Agile testbed/Cloud

There is an effort to move our [[Agile testbed]] to a cloud-based infrastructure. It is based on [http://opennebula.org/ OpenNebula] and is currently in development.

[[Agile testbed/Cloud/Installation notes|Installation notes]]


==Networking==

* Plug'n'play networking
** MAC and IP addresses handed out by OpenNebula
** An OpenNebula hook will be added to register the machine's name with DNS dynamically
* There will be three networks, initially (a template sketch follows the list):
** Closed: no internet connection (10.198.5.0/24, domain <tt>.closed</tt>)
** Private: internet access, masqueraded to the outside world (10.198.6.0/24, <tt>.private</tt>)
** Public: public IP addresses, bridged (domain <tt>.nikhef.nl</tt>)
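Each of these would be defined as an OpenNebula virtual network. A minimal sketch for the Private network (the bridge name <tt>br1</tt> is an assumption; OpenNebula derives the MAC addresses from the leased IPs):

<pre>
# private.net -- sketch of an OpenNebula virtual network template
NAME            = "Private"
TYPE            = RANGED
BRIDGE          = br1              # bridge on the physical hosts (assumed name)
NETWORK_ADDRESS = 10.198.6.0
NETWORK_SIZE    = 254
</pre>

Registering it with <tt>onevnet create private.net</tt> makes OpenNebula hand out MAC/IP pairs from this range to newly instantiated VMs.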

These networks are all connected via the hosts' network interfaces, so that the networks on different physical hosts can reach each other. Each host masquerades the internal range for internet connectivity itself, in order to divide the load. Bleek runs a nameserver on 10.198.x.240 that serves the dynamic DNS in addition to being a DNS cache.
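On each host the masquerading comes down to a NAT rule along these lines (a sketch; <tt>eth0</tt> as the uplink interface is an assumption):

<pre>
# masquerade the Private range on its way out; the Closed range gets
# no such rule and thus stays without internet access
iptables -t nat -A POSTROUTING -s 10.198.6.0/24 -o eth0 -j MASQUERADE
</pre>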

[http://www.semicomplete.com/articles/dynamic-dns-with-dhcp/ Dynamic DNS] [http://www.cameratim.com/computing/linux/using-bind-as-a-local-dns-server DDNS on Fedora] [http://mperedim.wordpress.com/2011/02/17/opennebula-dhcpd-contextualization-magic/ auto DHCP leases]
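In the style of the articles above, the dhcpd/BIND coupling would look roughly as follows (the key, zone file location, and secret are placeholders; <tt>10.198.x.240</tt> is left exactly as in the text above):

<pre>
# /etc/dhcpd.conf (fragment) -- placeholder key and zone
ddns-update-style interim;
key ddns-key { algorithm hmac-md5; secret "REPLACE_ME"; }
zone private. { primary 10.198.x.240; key ddns-key; }

// /etc/named.conf (fragment) on the nameserver
zone "private" {
    type master;
    file "dynamic/private.zone";
    allow-update { key ddns-key; };
};
</pre>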

==Authentication==

For now this uses OpenNebula's simple (password-based) authentication. An improvement may be to use a security token such as the ssh authentication backend uses, based on either OpenNebula's own user database, LDAP, or a public key.
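With the simple backend the credentials live in a plain <tt>one_auth</tt> file; a minimal sketch (user name and password are placeholders):

<pre>
# create a cloud user (run as oneadmin) and store its credentials
$ oneuser create jdoe secretpass
$ echo 'jdoe:secretpass' > ~/.one/one_auth   # the default ONE_AUTH location
</pre>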

==Contextualisation==

When a cloud machine is instantiated from a base image from the repository, the machine should still be configured for the specific instance. This is done by an init script that is present in all base images, which sets up the network and runs any machine-specific initialisation. The OpenNebula [http://www.opennebula.org/documentation:rel2.2:cong contextualisation] features will be used for this.
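Concretely, a CONTEXT section in the VM template puts the variables in a <tt>context.sh</tt> on a small ISO image attached to the VM, which the init script in the base image can source. A sketch (the <tt>init.sh</tt> path and mount point are assumptions):

<pre>
# VM template fragment -- file paths are assumptions
CONTEXT = [
  hostname   = "$NAME",
  ip_private = "$NIC[IP, NETWORK=\"Private\"]",
  files      = "/var/cloud/context/init.sh",
  target     = "hdc"
]

# inside the base image at boot: mount the context CD and use it
mount -t iso9660 /dev/hdc /mnt/context
. /mnt/context/context.sh            # defines HOSTNAME, IP_PRIVATE, ...
hostname "$HOSTNAME"
/mnt/context/init.sh                 # machine-specific initialisation
</pre>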

==Services in the Cloud==

===Continuous Integration: Hudson===

A virtual machine will run [http://www.hudson-ci.org/ Hudson] with the [http://wiki.hudson-ci.org/display/HUDSON/Amazon+EC2+Plugin EC2 Plugin] so that builds are done on dynamic virtual machines. The current CruiseControl service for VL-e builds will migrate to this (see also [http://www.rpm.org/max-rpm/s1-rpm-anywhere-different-build-area.html RPM build area customisation]).
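The RPM build area customisation linked above comes down to giving each build user its own <tt>%_topdir</tt>, so build slaves need not write to the system-wide build area; for example (the path is an assumption):

<pre>
# ~/.rpmmacros on a build slave
%_topdir /home/hudson/rpmbuild
</pre>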

Authentication will preferably be done using a custom client-side SSL certificate plugin, on which some work has already been done.

==Future directions==

When the cloud is properly set up, future directions can be explored, such as:

* Secure networking: ebtables hooks that only allow the MAC address allocated to a node (sketched after this list)
* Using virtual machines from within Hudson
** for on-demand build slaves (multiple platforms)
** for tests running on virtual machines
* Moving other parts of the testbed to the cloud, if it turns out to be an improvement.
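Such an ebtables hook could install, for every virtual network interface the host creates, a rule along these lines (the tap device name and MAC address are hypothetical examples):

<pre>
# drop frames entering from the VM's tap device that do not carry
# the MAC address OpenNebula allocated to it
ebtables -A FORWARD -i vnet0 -s ! 02:00:0a:c6:06:0a -j DROP
</pre>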

==Links==

* [http://www.cse.nd.edu/~ccl/research/papers/psempoli-cloudcom.pdf A Comparison and Critique of Eucalyptus, OpenNebula and Nimbus]
* [http://mperedim.wordpress.com/2010/09/26/opennebula-zfs-and-xen-part-1-get-going/ OpenNebula, ZFS and Xen] - uses snapshot and file sharing capabilities of ZFS