[Image: diagram of the agile testbed]

The state of the testbed is going to change, as we are planning to integrate several more machines and change the overall setup of systems and services. See Testbed_Update_Plan.


The testbed currently consists of five physical machines: bleek, toom, kudde, span and ent.

name   Serial No  type      chipset                  #cores  mem   OS          disk                                          remarks
bleek  CQ9NK2J    PE1950    Intel 5150 @2.66GHz      4       8GB   CentOS4-64  software RAID1, 2×500GB disks                 High Availability, dual power supply
toom   DC8QG3J    PE1950    Intel E5440 @2.83GHz     8       16GB  CentOS5-64  hardware RAID1, 2×715GB disks
kudde  CC8QG3J    PE1950    Intel E5440 @2.83GHz     8       16GB  CentOS5-64  hardware RAID1, 2×715GB disks
span   FP1BL3J    PE2950    Intel E5440 @2.83GHz     8       24GB  CentOS5-64  hardware RAID10 on 4×470GB disks (950GB net)  DHCP, DNS, NFS, LDAP
ent               Mac Mini  Intel Core Duo @1.66GHz  2       2GB   OS X 10.6   SATA 80GB                                     OS X virtual machines using Parallels

Ent, the Mac OS X host

To enable testing on Mac OS X, Apple hardware is required, since Mac OS X Server may only be run virtualized on Apple machines. Please don't use ent for anything else, since it's a mere Mac Mini. Its configuration is a little different from the other machines: Parallels Desktop is the virtualization solution (VMware Fusion doesn't appear to work because the CPU is too old), and the ipfw firewall is used.

For remote GUI access, Vino runs as a system service on localhost:5900 without a password. This requires the Parallels guest tools to be installed (otherwise OSXvnc-server fails with "screen format not supported"). To access it, log in on ent with your ssh identity and forward the VNC port. Note that the default local user is always logged on, running Parallels.
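
A minimal sketch of such a session, assuming a generic VNC viewer (the account name is a placeholder):

ssh -L 5900:localhost:5900 <username>@ent

and then, from another terminal on your workstation:

vncviewer localhost:5900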


The network between these machines is a bit particular: they all live in the same VLAN, but each also has an alias interface in a separate address range. The Xen DomUs on each of the Xen machines that live in that address range are given connectivity to the other DomUs in the same VLAN without using NAT, and connectivity to the outside with SNAT. Here's an example of the iptables POSTROUTING chain on span:

Chain POSTROUTING (policy ACCEPT 58M packets, 3693M bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 ACCEPT     all  --  any    any    
  436 63986 ACCEPT     all  --  any    any       
    1   190 SNAT       all  --  any    any        anywhere            to:

So, to the outside world, all traffic from a DomU on span appears to come from span itself.
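
A SNAT rule of this shape would typically be created along the following lines; the subnet, interface, and address here are placeholders, not the testbed's actual values:

iptables -t nat -A POSTROUTING -s <domU-subnet> -o eth0 -j SNAT --to-source <span-address>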

Note that DomUs with interfaces in the public address range do not need SNAT at all; they simply connect to the host's Xen bridge.
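
In a DomU configuration file that is just a matter of attaching the virtual interface to the bridge, for example (a sketch, assuming the default Xen bridge name xenbr0):

vif = [ 'bridge=xenbr0' ]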

A separate network is attached to each machine (except ent) to allow IPMI management and Serial-over-LAN.

Software Installation

The central machine in the network is span; it runs

  • dnsmasq for DNS and DHCP, based on /etc/hosts and /etc/ethers (see the sketch below)
  • an NFS server for the home directories and the ssh and pem host keys
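
A minimal dnsmasq configuration for such a setup could look as follows; the domain name and subnet are hypothetical placeholders, not span's actual settings:

# /etc/dnsmasq.conf (sketch)
expand-hosts                   # qualify bare names from /etc/hosts with the domain
domain=testbed.example         # hypothetical domain name
read-ethers                    # static DHCP leases from the MAC/host pairs in /etc/ethers
dhcp-range=192.168.0.0,static  # hypothetical subnet; only hand out addresses known from /etc/ethers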

The other Xen machines, toom and kudde, run Xen 3.1. On these machines the creation and destruction of virtual machines is best left to the generate-machine and destroy-machine scripts, part of the nl.vl-e.poc.ctb.mktestbed software package.
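
For a quick look at what is running, the standard Xen 3.1 xm toolstack can be used next to those scripts:

# xm list                  (list domains with their ID, memory and state)
# xm console <domU-name>   (attach to a domain's console; detach with Ctrl-])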

Remote Access and Management

The testbed is accessible only from within the Nikhef domain (including VPN). Login through ssh is by public key only.

Management of the machines can be done in one of three ways:

  1. ssh root@localhost, again using your ssh key.
  2. Through IPMI:
    1. toom, span and kudde have modern cards with a web interface; browse to the card's address to manage them.
    2. ipmitool can be used for Serial-over-LAN and other low-level operations (e.g. power cycling).
  3. KVM switch. You need the DELL remote console switch software. All machines are connected through the switch, except ent, which is connected separately.

To connect to the serial console, you can do e.g.

ipmitool -I lanplus -H <ipmi-address> -U <username> sol activate
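
To end the SOL session, type ipmitool's escape sequence ~. (a tilde followed by a period).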

To set up a username/password for IPMI on a particular machine, log onto that machine as root and load the drivers:

/etc/init.d/ipmi start

Now you can use the ipmitool commands directly, e.g. to show the users:

# ipmitool user list 1
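
Creating or changing a user follows the same pattern; a sketch, with the user ID and name as placeholders:

# ipmitool user set name 2 <username>
# ipmitool user set password 2
# ipmitool user priv 2 4 1
# ipmitool user enable 2

Here 4 is the ADMINISTRATOR privilege level and 1 the LAN channel; check the user list output for the IDs actually in use.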

Virtual machine management

To set up/tear down virtual machines on the Xen hosts, use the mktestbed scripts.