Testbed Update Plan
== Planning the update of the middleware/development test bed ==
The upgrade of the P4CTB has taken place; all information on this page has been merged into [[Agile testbed]].
 
 
 
 
There are a number of tasks involved in bringing the testbed to where we would like it to be. We also need to agree on a timeframe in which we would like to see these things accomplished.
 
 
 
=== inventory of current services ===
 
 
 
This section lists the '''current''' services we run and use on the testbeds. For each service, we explain what we would like to do with it (keep, move, or lose?).
 
 
 
{| class="wikitable" border="0" cellpadding="8"
 
|-style="background-color: #f81;"
 
! Service
 
! System
 
! keep/move/lose
 
! Comments
 
|-
 
| LDAP
 
| span
 
| lose
 
| to be discontinued after migration to central LDAP
 
|-
 
| DHCP
 
| span
 
| move
 
| by dnsmasq, /etc/hosts and /etc/ethers. Should migrate elsewhere
 
|-
 
| Cruisecontrol
 
| bleek
 
| move
 
| build system for VL-e and BiG Grid
 
|-
 
| Hudson
 
| kudde
 
| move
 
| continuous integration, currently for jGridstart but could serve others
 
|-
 
| Home directories
 
| everywhere
 
| move
 
| should be merged onto single NFS server
 
|-
 
| X509 host keys and pre-generated SSH keys
 
| span
 
| move
 
| all in /var/local/hostkeys
 
|}
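
The DHCP row above refers to dnsmasq reading static data from /etc/hosts and /etc/ethers. A minimal sketch of what that configuration looks like, should it have to be recreated elsewhere (the interface name and domain are assumptions, not a record of the actual setup on span):

 # /etc/dnsmasq.conf -- hypothetical sketch, not the actual span configuration
 interface=eth0                  # serve DHCP/DNS on the testbed VLAN only (assumed device)
 read-ethers                     # take static MAC-to-IP mappings from /etc/ethers
 expand-hosts                    # qualify plain names from /etc/hosts ...
 domain=testbed.nikhef.nl        # ... with this domain (assumed)
 dhcp-range=192.168.2.0,static   # only hand out the static leases defined in /etc/ethers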
 
 
 
=== Data plan for precious data ===
 
 
 
Precious means anything that took effort to put together but does not live in version control elsewhere: think home directories, system configurations, pre-generated SSH host keys, X509 host certificates, etc.
 
 
 
One idea is to put all of this on a box that is not involved in regular experimentation and messing about, and to arrange backups from this box to beerput. ''After'' this is arranged we can begin to migrate precious data from all the other machines, leaving the boxen in a state where we don't get sweaty palms over scratching and reinstalling them.
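
As a sketch of what the backup arrangement could look like once the data is collected: a nightly rsync over SSH from the collection box to beerput (the paths, schedule, and direct root access are assumptions for illustration):

 # hypothetical root crontab entry on the collection box
 # min hr dom mon dow  command
 30 3 * * * rsync -a --delete /srv/precious/ root@beerput:/srv/backup/precious/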
 
 
 
=== Hardware inventory ===
 
 
 
Perhaps this should be done first: knowing what hardware we have is a prerequisite for making sensible choices about what to run where.
 
 
 
{| class="wikitable" border="0" cellpadding="8"
 
|-style="background-color: #ccc;"
 
! name
 
! ipmi name*
 
! type
 
! chipset
 
! #cores
 
! mem
 
! OS
 
! disk
 
! remarks
 
|-
 
| bleek
 
| bleek
 
| PE1950
 
| Intel 5150 @2.66GHz
 
| 4
 
| 8GB
 
| CentOS4-64
 
| software raid1 2×500GB disks
 
| High Availability, dual power supply, to be renamed
 
|-
 
| toom
 
| toom
 
| PE1950
 
| Intel E5440 @2.83GHz
 
| 8
 
| 16GB
 
| CentOS5-64
 
| Hardware raid1 2×715GB disks
 
|-
 
| kudde
 
| kudde
 
| PE1950
 
| Intel E5440 @2.83GHz
 
| 8
 
| 16GB
 
| CentOS5-64
 
| Hardware raid1 2×715GB disks
 
|-
 
| span
 
| span
 
| PE2950
 
| Intel E5440 @2.83GHz
 
| 8
 
| 24GB
 
| CentOS5-64
 
| Hardware raid10 on 4×470GB disks (950GB net)
 
| DHCP,DNS,NFS,LDAP
 
|-
 
| melkbus
 
| melkbus
 
| M600
 
| Intel E5450 @3.00GHz
 
| 8
 
| 32GB
 
| CentOS5-64
 
| 2x 320GB SAS disks
 
| to be renamed
 
|-
 
| odin
 
| odin
 
| PE1950
 
| Intel 5150 @2.66GHz
 
| 4
 
| 8GB
 
| CentOS5-64
 
| software raid1 2×500GB disks
 
| High Availability, dual power supply
 
|-
 
| put
 
|
 
| PE2950
 
|
 
|
 
|
 
|
 
|
 
| former garitxako
 
|-
 
| autana
 
| blade-14
 
| M610
 
|
 
|
 
|
 
|
 
|
 
| to be renamed
 
|-
 
| arauca
 
| blade-13
 
| M610
 
|
 
|
 
|
 
|
 
|
 
| to be renamed
 
|-
 
| arrone
 
|
 
| PE1950
 
|
 
|
 
|
 
|
 
|
 
| to be renamed
 
|-
 
| aulnes
 
|
 
| PE1950
 
|
 
|
 
|
 
|
 
|
 
| to be renamed
 
|-
 
| ent
 
| (no ipmi)
 
| Mac Mini
 
| Intel Core Duo  @1.66GHz
 
| 2
 
| 2GB
 
| OS X 10.6
 
| SATA 80GB
 
| OS X box (no virtualisation)
 
|}
 
 
 
* *The ipmi name is used for IPMI access: connect to <code><name>.ipmi.nikhef.nl</code>.
* System details such as serial numbers can be retrieved from the command line with <code>dmidecode -t 1</code>.
* IPMI serial-over-LAN can be done with <code>ipmitool -I lanplus -H <name>.ipmi.nikhef.nl -U <user> sol activate</code>.
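
For example, opening and closing a serial console on kudde might look like this (the user name 'admin' is a placeholder, not the actual account):

 # open a serial-over-LAN console on kudde; prompts for the IPMI password
 ipmitool -I lanplus -H kudde.ipmi.nikhef.nl -U admin sol activate
 # terminate the session with the ~. escape sequence, or from another shell:
 ipmitool -I lanplus -H kudde.ipmi.nikhef.nl -U admin sol deactivate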
 
 
 
=== Network plan ===
 
 
 
All of the machines should be put in the P4CTB VLAN (VLAN 2), which is covered by ACLs to prevent public access; this is a first line of defence against intrusions. In some cases we may want to run virtual machines in the open/experimental network (VLAN 8); the trick there is to create a second bridge on a tagged ethernet device in VLAN 8. See /etc/sysconfig/network-scripts/ifcfg-eth0.8:

 VLAN=yes
 DEVICE=eth0.8
 BOOTPROTO=static
 ONBOOT=yes
 TYPE=Ethernet
 IPV6INIT=no
 IPV4INIT=no
 
 
 
Then run <code>ifup eth0.8</code> and:

 brctl addbr broe
 brctl addif broe eth0.8

Unfortunately, the <code>IPV6INIT=no</code> doesn't help: the interface gets an IPv6 address anyway. This bridge can then be used to add virtual network devices for machines that live in open/experimental.
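
To make the bridge survive a reboot, the same setup can presumably be expressed in the network-scripts instead of manual <code>brctl</code> calls; a sketch, assuming the stock RHEL/CentOS bridge support in initscripts:

 # /etc/sysconfig/network-scripts/ifcfg-broe -- hypothetical persistent variant
 DEVICE=broe
 TYPE=Bridge
 BOOTPROTO=static
 ONBOOT=yes
 IPV6INIT=no

Adding <code>BRIDGE=broe</code> to ifcfg-eth0.8 would then enslave the tagged device to the bridge at boot, replacing the manual <code>brctl addif</code> step.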
 
 
 
 
 
All systems have at least one 1Gb/s network interface, but put has two, which may be trunked. This could be useful for serving machine images. The blade systems have extra interfaces and '''may''' be capable of offloading iSCSI to the NIC.
 
 
 
TODO: draw a network lay-out.
 
 
 
 
 
IPv4 space is limited, and until the network upgrade (planned 2011Q1-Q3?) we are stuck with what we have. The current scheme of SNATting may help us out for a while.
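
For reference, the SNAT scheme amounts to a rule of this shape on the gateway box (the addresses are documentation placeholders, not our actual ranges):

 # rewrite outgoing testbed traffic to a single public address (example addresses)
 iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j SNAT --to-source 198.51.100.17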
 
 
 
=== LDAP migration ===
 
 
 
We're going to ditch our own directory service (it served us well, may it rest in peace) in favour of the central Nikhef service. This means changing user ids in some (all?) cases, which should preferably be done in a single swell foop.
 
 
 
We should request that a testbed 'service' be added to LDAP with ourselves as managers, so we can automatically populate /root/.ssh/authorized_keys.
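
Changing a user id is mostly mechanical; a sketch for a single account, with made-up uid/gid values, assuming the user's files live under /home and /var/spool/mail (any other locations need the same treatment):

 # hypothetical example: move account 'jdoe' from local uid/gid 1234 to central 56789
 usermod -u 56789 jdoe     # rewrites /etc/passwd and chowns files under /home/jdoe
 groupmod -g 56789 jdoe    # if the private group moves to a central gid as well
 # catch anything usermod missed (mail spool, files outside the home directory)
 find /home /var/spool/mail -xdev -uid 1234 -exec chown -h jdoe {} +
 find /home /var/spool/mail -xdev -gid 1234 -exec chgrp -h jdoe {} +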
 