Hardware
The testbed currently consists of five physical machines: bleek, toom, kudde, span and ent.
name | type | #cores | mem | OS | disk | remarks |
---|---|---|---|---|---|---|
bleek | Intel Xeon 5150 @2.66GHz | 4 | 8GB | CentOS4-64 | software RAID1, 2×500GB disks | high availability, dual power supply |
toom | Intel Xeon E5440 @2.83GHz | 8 | 16GB | CentOS5-64 | hardware RAID1, 2×715GB disks | |
kudde | Intel Xeon E5440 @2.83GHz | 8 | 16GB | CentOS5-64 | hardware RAID1, 2×715GB disks | |
span | Intel Xeon E5440 @2.83GHz | 8 | 24GB | CentOS5-64 | hardware RAID10 on 4×470GB disks (950GB net) | DHCP, DNS, NFS, LDAP |
ent | Intel Core Duo @1.66GHz | 2 | 2GB | OS X 10.6 | SATA 80GB | OS X virtual machines using Parallels |
Network
The network between these machines is a bit particular: they all live in the same VLAN (194.171.96.16/28), but each also has an alias interface in the 10.198.0.0/16 range. The Xen DomUs that live in that address range get connectivity to the other DomUs in the same VLAN without NAT, and connectivity to the outside via SNAT. Here's an example of the iptables NAT rules on span:
Chain POSTROUTING (policy ACCEPT 58M packets, 3693M bytes)
 pkts bytes target prot opt in  out source         destination
    0     0 ACCEPT all  --  any any 10.198.0.0/16  194.171.96.16/28
  436 63986 ACCEPT all  --  any any 10.198.0.0/16  10.198.0.0/16
    1   190 SNAT   all  --  any any 10.198.0.0/16  anywhere          to:194.171.96.28
So, seen from the outside, all traffic from a DomU on span appears to come from span itself.
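For reference, rules like the ones above could be recreated along these lines (a sketch, not span's actual setup script; each host would substitute its own public address in the SNAT rule):

# traffic between DomUs and the testbed networks passes untouched;
# everything else from 10.198.0.0/16 is source-NATed to the host's public address
iptables -t nat -A POSTROUTING -s 10.198.0.0/16 -d 194.171.96.16/28 -j ACCEPT
iptables -t nat -A POSTROUTING -s 10.198.0.0/16 -d 10.198.0.0/16 -j ACCEPT
iptables -t nat -A POSTROUTING -s 10.198.0.0/16 -j SNAT --to-source 194.171.96.28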
Note that DomUs with interfaces in the public address range do not need SNAT at all; they simply connect to the host's Xen bridge.
A separate network is attached to each machine (except ent) for IPMI management and Serial-over-LAN.
Software Installation
The central machine in the network is span; it runs
- dnsmasq for DNS and DHCP, based on /etc/hosts and /etc/ethers (a configuration sketch follows this list)
- an NFS server for the home directories and the ssh and pem host keys
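A minimal dnsmasq configuration providing both services could look like this (a sketch, not span's actual configuration; the domain name is a placeholder):

# /etc/dnsmasq.conf (sketch)
read-ethers                   # take MAC-to-IP DHCP mappings from /etc/ethers
expand-hosts                  # qualify bare names from /etc/hosts with the domain
domain=testbed.nikhef.nl      # placeholder domain name
dhcp-range=10.198.0.0,static  # answer DHCP only for hosts listed in /etc/ethers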
The other Xen machines, toom and kudde, run Xen 3.1. On these machines the creation and destruction of virtual machines is best left to the generate-machine and destroy-machine scripts, part of the nl.vl-e.poc.ctb.mktestbed software package.
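For inspecting the DomUs, the standard Xen 3.1 xm tool can still be used directly, e.g. (the DomU name here is hypothetical):

xm list              # show all domains with their state and memory
xm console mydomu    # attach to a DomU's console
xm shutdown mydomu   # cleanly shut a DomU down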
Remote Access and Management
The testbed is accessible only from within the Nikhef domain (including VPN). Login through ssh is by public key only.
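A ~/.ssh/config entry along these lines saves some typing (a sketch; the FQDN is an assumption):

Host span
    HostName span.nikhef.nl      # assumed FQDN within the Nikhef domain
    User <username>
    IdentityFile ~/.ssh/id_rsa   # the key registered on the testbed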
Management of the machines can be done in one of three ways:
- ssh root@localhost (i.e. first log in as yourself, then ssh to root on the same machine), again using your ssh key.
- Through IPMI:
- toom, span and kudde have modern IPMI cards with a web interface; go to e.g. https://span.ipmi.nikhef.nl/ to manage.
- ipmitool can be used for Serial-over-LAN and other low-level tasks (e.g. power cycling).
- KVM switch. You need the Dell Remote Console Switch software. All machines are connected through drcs-1.ipmi.nikhef.nl, except ent, which is connected through drcs-2.ipmi.nikhef.nl.
To connect to the serial console, you can do e.g.
ipmitool -I lanplus -H bleek.ipmi.nikhef.nl -U <username> sol activate
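Power control over IPMI works the same way; chassis power is a standard ipmitool subcommand, and any of the machine names can be substituted:

ipmitool -I lanplus -H toom.ipmi.nikhef.nl -U <username> chassis power status
ipmitool -I lanplus -H toom.ipmi.nikhef.nl -U <username> chassis power cycle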
To set up a username/password for IPMI, run
/etc/init.d/ipmi start
locally and use the ipmitool commands as root, e.g. to show the users:
# ipmitool user list 1
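Adding or changing a user works similarly (a sketch; user slot 2 is just an example):

ipmitool user set name 2 <username>   # assign a login name to user slot 2
ipmitool user set password 2          # prompts for the new password
ipmitool user enable 2                # allow this user to log in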
Ent, the Mac OS X host
Testing on Mac OS X requires Apple hardware; this allows Mac OS X Server to be run in virtual machines. Please don't use ent for anything else, since it's a mere Mac mini. Its configuration is a little different from that of the other machines. Parallels Desktop is the virtualization solution (VMware Fusion doesn't appear to work because the CPU is too old).
The Mac OS X firewall is ipfw; see the ipfw(8) man page, and natd(8) for NAT.
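For a quick look at the ruleset on ent, standard ipfw usage applies (the example rule is only an illustration):

sudo ipfw list                                               # show the current rules
sudo ipfw add 1000 allow tcp from 194.171.96.16/28 to me 22  # e.g. allow ssh from the testbed VLAN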