Testbed Update Plan

Planning the update of the middleware/development test bed

There are a number of tasks involved in bringing the testbed to where we would like it to be. We also need to agree on a timeframe in which we would like to see these things accomplished.

Inventory of current services

This section should list the current services we run and use on the testbeds. For each service, we should explain what we would like to do with it (keep, drop, change?).

Service | System | keep/move/lose | Comments
LDAP | span | lose | to be discontinued after migration to central LDAP
DHCP | span | move | by dnsmasq, /etc/hosts and /etc/ethers. Should migrate elsewhere
Cruisecontrol | bleek | move | build system for VL-e and BiG Grid
Hudson | kudde | move | continuous integration, currently for jGridstart but could serve others
Home directories | everywhere | move | should be merged onto single NFS server
X509 host keys and pre-generated SSH keys | span | move | all in /var/local/hostkeys
Robot certificate (etoken) | kudde | keep/move | hardware token is plugged into kudde to generate vlemed robot certificates using cronjob + software in /root/etoken. Token can be moved to another machine but it should remain within the P4CTB
VMWare Server 1.0 | bleek | move | If bleek is destined to be home directory server, VMWare should be put on other hardware.

Data plan for precious data

Precious means anything that took effort to put together, but not anything that already lives in version control elsewhere. Think home directories, system configurations, pre-generated ssh host keys, X509 host certs, etc.

One idea is to put all of this on a box that is not involved in regular experimentation and messing about, and to arrange backups from this box to beerput. Once this is arranged we can begin to migrate precious data from all the other machines here, leaving the boxen in a state where we don't get sweaty palms over scratching and reinstalling them.
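A hedged sketch of what such a backup job could look like, assuming a nightly rsync; the backup account, the target path on beerput and the exact source directories are only illustrations:

# run nightly from cron on the box holding the precious data;
# -R keeps the full source paths under the destination, -H preserves hard links
rsync -aHxR --delete /home /etc /var/local/hostkeys \
    backup@beerput.nikhef.nl:/srv/backup/$(hostname -s)/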

Hardware inventory

Perhaps this should be done first. Knowing what hardware we have is a prerequisite to making sensible choices about what we try to run where.

Changes here should probably also go to NDPF System Functions.

name | ipmi name* | type | chipset | #cores | mem | OS | disk | remarks
bleek | bleek | PE1950 | Intel 5150 @2.66GHz | 4 | 8GB | CentOS4-64 | software raid1 2×500GB disks | High Availability, dual power supply, to be renamed
toom | toom | PE1950 | Intel E5440 @2.83GHz | 8 | 16GB | CentOS5-64 | Hardware raid1 2×715GB disks |
kudde | kudde | PE1950 | Intel E5440 @2.83GHz | 8 | 16GB | CentOS5-64 | Hardware raid1 2×715GB disks | Contains hardware token/robot proxy for vlemed
span | span | PE2950 | Intel E5440 @2.83GHz | 8 | 24GB | CentOS5-64 | Hardware raid10 on 4×470GB disks (950GB net) | DHCP, DNS, NFS, LDAP
melkbus | hals | PEM600 | Intel E5450 @3.00GHz | 8 | 32GB | CentOS5-64 | 2x 320GB SAS disks | to be renamed to hals
odin | kop | PE1950 | Intel 5150 @2.66GHz | 4 | 8GB | CentOS5-64 | software raid1 2×500GB disks | High Availability, dual power supply; to be renamed to kop
put | put | PE2950 | | | | | | former garitxako
autana | blade-14 | PEM610 | | | | | | to be renamed to ren
arauca | blade-13 | PEM610 | | | | | | to be renamed to mient
arrone | voor | PE1950 | | | | | | to be renamed to voor
aulnes | wiers | PE1950 | | | | | | to be renamed to wiers
ent | (no ipmi) | Mac Mini | Intel Core Duo @1.66GHz | 2 | 2GB | OS X 10.6 | SATA 80GB | OS X box (no virtualisation)
  • *ipmi name is used for IPMI access; use <name>.ipmi.nikhef.nl.
  • System details such as serial numbers can be retrieved from the command line with dmidecode -t 1.
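To fill in details like serial numbers for the whole table in one go, something along these lines could be used; the host list is copied from the table above (ent is left out since it runs OS X) and ssh access as root is assumed:

# print the service tag (serial number) of each machine
for h in bleek toom kudde span melkbus odin put autana arauca arrone aulnes; do
    echo -n "$h: "
    ssh root@$h "dmidecode -t 1 | awk -F': ' '/Serial Number/ {print \$2}'"
done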

IPMI serial-over-LAN

  • For details, see Serial Consoles.
  • SOL can be activated with ipmitool -I lanplus -H name.ipmi.nikhef.nl -U user sol activate.
  • SOL access needs to be activated in the BIOS once, by setting console redirection through COM2.

For older systems that do not have a web interface for IPMI, the command-line version can be used. Install the OpenIPMI service so root can use ipmitool. Here is a sample of commands to add a user and give SOL access.

# enable user slot 5 and give it the name 'ctb'
ipmitool user enable 5
ipmitool user set name 5 ctb
ipmitool user set password 5 '<blah>'
# allow this user to use IPMI over LAN on channel 1
ipmitool channel setaccess 1 5 ipmi=on
# make the user administrator (4) on channel 1.
ipmitool user priv 5 4 1
ipmitool channel setaccess 1 5 callin=on ipmi=on link=on
# enable the serial-over-LAN payload for user 5 on channel 1
ipmitool sol payload enable 1 5
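A few read-only commands can be used to check the result (assuming user id 5 and channel 1 as in the example above):

# list the users on channel 1 and verify privilege level and SOL settings
ipmitool user list 1
ipmitool channel getaccess 1 5
ipmitool sol info 1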

Network plan

All of the machines should be put in the P4CTB VLAN (vlan 2), which is covered by ACLs to prevent public access. This is a first line of defence against intrusions. In some cases we may want to run virtual machines in the open/experimental network (vlan 8); the trick there is to create a second bridge with a tagged ethernet device in vlan 8, see /etc/sysconfig/network-scripts/ifcfg-eth0.8:

VLAN=yes
DEVICE=eth0.8
BOOTPROTO=static
ONBOOT=yes
TYPE=Ethernet
IPV6INIT=no
IPV4INIT=no

Then run ifup eth0.8 and:

# create the bridge and attach the tagged vlan 8 device to it
brctl addbr broe
brctl addif broe eth0.8

Unfortunately, the IPV6INIT=no doesn't help; the interface gets an IPv6 address anyway. This bridge can then be used to add virtual network devices for machines that live in open/experimental.
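To avoid recreating the bridge by hand after every reboot, a sketch of the corresponding network-scripts configuration; this assumes the CentOS 5 initscripts support TYPE=Bridge, and keeps the bridge name broe used above:

# /etc/sysconfig/network-scripts/ifcfg-broe
DEVICE=broe
TYPE=Bridge
BOOTPROTO=static
ONBOOT=yes

# and in /etc/sysconfig/network-scripts/ifcfg-eth0.8, attach the VLAN device to it:
BRIDGE=broe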


All systems have at least one 1Gb/s interface, but put has two, which may be trunked. This could be useful for serving machine images. The blade systems have extra interfaces and may be capable of offloading iSCSI to the NIC.
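If we decide to trunk put's two interfaces, a bonding setup could look roughly like this; the interface names and bonding mode are assumptions, and on older initscripts the bonding options may have to go into /etc/modprobe.conf instead of BONDING_OPTS:

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BOOTPROTO=static
ONBOOT=yes
BONDING_OPTS="mode=802.3ad miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth0 (and likewise ifcfg-eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes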

TODO: draw a network lay-out.


IPv4 space is limited, and until the network upgrade (planned 2011Q1-3?) we're stuck with that. The current scheme of SNATting may help us out for a while.
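For reference, the kind of rule the SNATting boils down to on the gateway; both addresses below are placeholders, not our actual ranges:

# map the private testbed range onto a single public address on the way out
PUBLIC_IP=198.51.100.1              # placeholder (RFC 5737 documentation address)
iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j SNAT --to-source $PUBLIC_IP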

LDAP migration

We're going to ditch our own directory service (it served us well, may it rest in peace) in favour of the central Nikhef service. This means changing user ids in some (all?) cases, which should preferably be done in a single swell foop.
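Per machine, changing a user id mostly comes down to fixing file ownership; a minimal sketch, with the old and new uids as placeholders:

# remap all files of one account from the old local uid to the uid in the
# central LDAP; run per filesystem (-xdev) to avoid crossing NFS mounts
OLD_UID=1234    # placeholder: old local uid
NEW_UID=56789   # placeholder: uid in the central Nikhef LDAP
find / -xdev -uid $OLD_UID -exec chown -h $NEW_UID {} +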

We should request to add a testbed 'service' to LDAP with ourselves as managers, so we can automatically populate /root/.ssh/authorized_keys.
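A sketch of how that population could work once the testbed service exists; the server name, base DN, group name and the sshPublicKey attribute are all assumptions about the central LDAP:

# rebuild root's authorized_keys from the public keys of the testbed managers
ldapsearch -x -LLL -H ldap://ldap.nikhef.nl -b "ou=users,dc=nikhef,dc=nl" \
    "(memberOf=cn=testbed,ou=groups,dc=nikhef,dc=nl)" sshPublicKey \
  | awk '/^sshPublicKey: / { sub(/^sshPublicKey: /, ""); print }' \
  > /root/.ssh/authorized_keys.new \
  && mv /root/.ssh/authorized_keys.new /root/.ssh/authorized_keys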