[[Image:P4ctb.svg|thumb|Diagram of the agile test bed]]

'''The state of the testbed is going to change, as we are planning to integrate several more machines and change the overall setup of systems and services. See [[Testbed_Update_Plan]].'''

== Hardware ==

[[NDPF_Node_Functions#P4CTB|The testbed]] currently consists of five physical machines: bleek, toom, kudde, span and ent.

{| class="wikitable" border="0" cellpadding="8"
 
{| class="wikitable" border="0" cellpadding="8"
 
|-style="background-color: #ccc;"
 
|-style="background-color: #ccc;"
 
! name
 
! name
 +
! Serial No
 
! type
 
! type
 +
! chipset
 
! #cores
 
! #cores
 
! mem
 
! mem
Line 13: Line 23:
 
|-
 
|-
 
| bleek
 
| bleek
| Intel 5150  @ 2.66GHz
+
| CQ9NK2J
 +
| PE1950
 +
| Intel 5150  @2.66GHz
 
| 4
 
| 4
 
| 8GB
 
| 8GB
Line 21: Line 33:
 
|-
 
|-
 
| toom
 
| toom
| Intel E5440  @ 2.83GHz
+
| DC8QG3J
 +
| PE1950
 +
| Intel E5440  @2.83GHz
 
| 8
 
| 8
 
| 16GB
 
| 16GB
Line 28: Line 42:
 
|-
 
|-
 
| kudde
 
| kudde
| Intel E5440  @ 2.83GHz
+
| CC8QG3J
 +
| PE1950
 +
| Intel E5440  @2.83GHz
 
| 8
 
| 8
 
| 16GB
 
| 16GB
Line 35: Line 51:
 
|-
 
|-
 
| span
 
| span
| Intel  E5440  @ 2.83GHz
+
| FP1BL3J
 +
| P2950
 +
| Intel  E5440  @2.83GHz
 
| 8
 
| 8
 
| 24GB
 
| 24GB
Line 41: Line 59:
 
| Hardware raid10 on 4×470GB disks (950GB net)  
 
| Hardware raid10 on 4×470GB disks (950GB net)  
 
| DHCP,DNS,NFS,LDAP
 
| DHCP,DNS,NFS,LDAP
 +
|-
 +
| ent
 +
|
 +
| Mac Mini
 +
| Intel Core Duo  @1.66GHz
 +
| 2
 +
| 2GB
 +
| OS X 10.6
 +
| SATA 80GB
 +
| OS X virtual machines using Parallels
 
|}
 
|}
  
=== Ent, the Mac OS X host ===

To enable testing on Mac OS X, Apple hardware is required, as Mac OS X Server may only be run virtualized on Apple machines. Please don't use ent for anything else, since it's a mere Mac mini. Its configuration is a little different from the other machines: [http://www.parallels.com/eu/products/desktop/ Parallels Desktop] is the virtualization solution ([http://www.vmware.com/products/fusion/ VMWare Fusion] doesn't appear to work because the CPU is too old), and the <tt>[http://developer.apple.com/mac/library/documentation/Darwin/Reference/ManPages/man8/ipfw.8.html ipfw]</tt> firewall is used (see also [http://www.ibiblio.org/macsupport/ipfw/ here] and [http://www.macshadows.com/kb/index.php?title=Firewall_Tunning_on_Mac_OS_X here], and [http://osxfaq.com/ReaderReports/NAT_PPP/index.ws something] on <tt>[http://developer.apple.com/mac/library/documentation/Darwin/Reference/ManPages/man8/natd.8.html natd]</tt>).

For remote GUI access, [http://www.testplant.com/osxvnc Vine Server (OSXvnc)] is used as a system service, running on localhost:5900 without a password. This requires the Parallels guest tools to be installed (otherwise <tt>OSXvnc-server</tt> will fail with "screen format not supported"). To access it, log in on ent with your ssh identity, using a VNC port forward. Note that the default local user is always logged on, running Parallels.
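A minimal sketch of such a port forward (the short host name <tt>ent</tt> is an assumption; use whatever name resolves on the Nikhef network):

 ssh -L 5901:localhost:5900 <username>@ent
 # now point a VNC viewer on your own machine at localhost:5901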
== Network ==

The network between these machines is a bit particular: they all live in the same [[NDPF_System_Functions#P4CTB|VLAN]] (194.171.96.16/28), but each also has an alias interface in the 10.198.0.0/16 range. The Xen DomUs that live in that address range are given connectivity to the other DomUs in the same VLAN without NAT, and connectivity to the outside world via SNAT. Here's an example of the iptables nat table on span:
 
 Chain POSTROUTING (policy ACCEPT 58M packets, 3693M bytes)
  pkts bytes target     prot opt in     out     source               destination
     0     0 ACCEPT     all  --  any    any     10.198.0.0/16        194.171.96.16/28
   436 63986 ACCEPT     all  --  any    any     10.198.0.0/16        10.198.0.0/16
     1   190 SNAT       all  --  any    any     10.198.0.0/16        anywhere            to:194.171.96.28

So all traffic from a DomU on span will, to the outside, appear to have come from span itself.
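For reference, a sketch of the commands that would produce the rules listed above (reconstructed from the listing itself; the actual boot scripts may differ):

 iptables -t nat -A POSTROUTING -s 10.198.0.0/16 -d 194.171.96.16/28 -j ACCEPT
 iptables -t nat -A POSTROUTING -s 10.198.0.0/16 -d 10.198.0.0/16 -j ACCEPT
 iptables -t nat -A POSTROUTING -s 10.198.0.0/16 -j SNAT --to-source 194.171.96.28

The two ACCEPT rules exempt intra-testbed traffic from rewriting; everything else coming from 10.198.0.0/16 is source-NATed to span's public address.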
  
 
Note that DomUs with interfaces in the public address range do not need SNAT at all; they simply connect to the host's Xen bridge.

There is a separate network attached to each machine (except ent) to allow IPMI management and Serial-over-LAN.
== Software Installation ==

The central machine in the network is span; it runs:
* dnsmasq for DNS and DHCP, based on /etc/hosts and /etc/ethers (see the sketch below)
* an NFS server for the home directories and the ssh and pem host keys
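A minimal sketch of the relevant dnsmasq configuration (all file contents below are made-up illustrations, not the real testbed data):

 # /etc/dnsmasq.conf (sketch)
 domain=testbed                             # hypothetical local domain
 expand-hosts                               # qualify short names from /etc/hosts
 read-ethers                                # static DHCP leases from /etc/ethers
 dhcp-range=10.198.0.10,10.198.255.250,12h  # hand out testbed addresses
 #
 # /etc/ethers (sketch): MAC address, then host name as known in /etc/hosts
 00:16:3e:aa:bb:cc vm01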
The other Xen machines, toom and kudde, run Xen 3.1. On these machines the creation and destruction of virtual machines is best left to the generate-machine and destroy-machine scripts, part of the [https://gforge.vl-e.nl/plugins/scmcvs/cvsweb.php/nl.vl-e.poc.ctb.mktestbed/?cvsroot=build nl.vl-e.poc.ctb.mktestbed] software package.
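Independently of those scripts, the standard Xen tools can be used to inspect a host, e.g. to list the running DomUs:

 # xm list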
== Remote Access and Management ==

The testbed is accessible only from within the Nikhef domain (including VPN). Login through ssh is by public key ''only''.

Management of the machines can be done in one of three ways:

# ssh root@localhost, again using your ssh key (see the example below).
# Through IPMI:
## toom, span and kudde have modern cards with a web interface, so go to e.g. https://span.ipmi.nikhef.nl/ to manage.
## ipmitool can be used for [[Serial_Consoles|Serial-over-LAN]] and other low-level tasks (e.g. power cycling).
# KVM switch. You need the [[Remote_usage_of_the_Dell_console_switches|DELL remote console switch software]]. All machines are connected through drcs-1.ipmi.nikhef.nl, except ent, which is connected through drcs-2.ipmi.nikhef.nl.
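For the first option, a minimal example (the short host name <tt>span</tt> is an assumption; use whatever name resolves for you, and note this presumes your public key is also in root's <tt>authorized_keys</tt>):

 ssh <username>@span     # log in as yourself, with your public key
 ssh root@localhost      # then become root, again with your key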
To connect to the serial console, you can do e.g.
 ipmitool -I lanplus -H bleek.ipmi.nikhef.nl -U <username> sol activate
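To leave the SOL session again, type ipmitool's escape sequence <tt>~.</tt> at the start of a line.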
To set up a username/password for IPMI on a particular machine, log onto that machine as root and load the drivers:
 /etc/init.d/ipmi start
Now you can use the ipmitool commands directly, e.g. to show the users:
 # ipmitool user list 1
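A sketch of creating such a user (the user ID 3 and the channel number 1 are assumptions; pick a free ID from the listing above and the channel your LAN interface actually uses):

 # ipmitool user set name 3 <username>
 # ipmitool user set password 3
 # ipmitool user enable 3
 # ipmitool channel setaccess 1 3 callin=on ipmi=on link=on privilege=4
 # ipmitool sol payload enable 1 3

Here <tt>privilege=4</tt> corresponds to the IPMI ADMINISTRATOR level.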
== Virtual machine management ==

To set up or tear down virtual machines on the Xen hosts, use the mktestbed scripts (see [[#Software Installation|Software Installation]] above).