Managing the security training sites
These are quick notes on how to set up and run a number of virtual grid sites for training purposes.
Overview
The virtual machines for these sites are managed with Xen Cloud Platform (XCP) on blade 0, partition b. Log in as root@bl0b.pool.inst.ipmi.nikhef.nl to manage these virtual machines, or use a client tool such as XenCenter.
The sites live on vlan 41, which is only available on bl0b. There is only one host with an interface to the outside, called melkstal.nikhef.nl (in the Open/Experimental network). This host serves as the gateway for all training participants and site administrators, and it also acts as a NAT box. Participants won't log in to melkstal directly; port forwarding has been set up so that an ssh connection to a specific port on melkstal lands the user on the root account of a machine in the virtual domain, authenticated with their ssh public key.
On the inside of vlan 41, the network addressing is divided up by virtual site:
IP range | domain name | login method | details
---|---|---|---
10.1.0.0/24 | darknet | ssh root@melkstal.nikhef.nl | management systems and example site
10.1.1.0/24 | frogstar | ssh -p 2201 root@melkstal.nikhef.nl |
10.1.2.0/24 | traal | ssh -p 2202 root@melkstal.nikhef.nl |
10.1.3.0/24 | krikkit | ssh -p 2203 root@melkstal.nikhef.nl |
10.1.4.0/24 | megadodo | ssh -p 2204 root@melkstal.nikhef.nl |
10.1.5.0/24 | magrathea | ssh -p 2205 root@melkstal.nikhef.nl |
10.1.6.0/24 | vogsphere | ssh -p 2206 root@melkstal.nikhef.nl |
Each site runs a number of machines to represent what is typical for a Grid site:
machine name | machine type | metapackage
---|---|---
ui | User interface | emi-ui
wms | Workload management system and site BDII | emi-wms, emi-lb
ce | CREAM Compute Element | emi-cream-ce
headnode | batch system head node (HTCondor) | condor
wn | Worker node | emi-wn
There is one management host to help install and configure all other machines: cobbler.darknet. This system runs cobbler to provide DHCP, DNS and kickstart files for system installation. It also runs saltstack to manage the state of each system.
Installing and re-installing machines
Installation of new machines is done on the XCP master node. Log in as
root@pool-bl0b.inst.ipmi.nikhef.nl
In the home directory you will find a script which creates basic machine definitions from a template. It gives each machine a new interface with a generated MAC address.
The next step is to take the list of machines and their MAC addresses (obtained with another script) to cobbler.darknet and define the systems in cobbler. There is a script for that as well. For example:
echo 22:05:e5:52:19:cc wms.darknet | ./cobbler-add-machine.sh
Right now, the script only adds machines consecutively to the darknet site.
It is also possible (but more tedious) to add machines via cobbler's web interface.
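If several machines need to be added at once, the same script can be driven from a list. A minimal sketch, assuming a plain text file (here called machines.txt, not part of the original setup) with one "MAC hostname" pair per line:
# machines.txt is a hypothetical file with one "MAC hostname" pair per line
while read mac host; do
  echo "$mac $host" | ./cobbler-add-machine.sh
done < machines.txt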
Once the machines are defined in cobbler it is time to start them. This is again done on the XCP node with the command
xe vm-start vm=wms.darknet
This will install a basic system, using cobbler for DHCP and for downloading the kickstart file.
As part of the basic installation, the package salt-minion will be installed with cobbler.darknet as the master. Once the installation is done, accept the key on cobbler with
salt-key -a <hostname>
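To see which minion keys are waiting to be accepted, salt-key can list them first; for example, for the wms machine used in the examples above:
salt-key -L                 # list accepted, rejected and pending keys
salt-key -a wms.darknet     # accept the key for this minion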
re-installation
It may be necessary to start from scratch with a machine. This is easier than a new installation, as the definitions already exist in XCP and cobbler; the only thing that needs to be reset is the bootloader. The vm-reinstall.sh script does just that. After that, run the command
xe vm-reboot vm=wms.darknet
to initiate the reinstallation. Cobbler has some tricks to preserve the ssh host keys and minion keys from a previous installation, but this may or may not work; in one experiment it appeared not to. There was a bug in the restore_keys function, which has now been remedied in the keep_more_files snippet in cobbler.
If the keys are not preserved, make sure to remove the old minion key on cobbler
salt-key -d <machine>
before starting the machine; otherwise the minion will fail to connect, and ssh login will be more cumbersome because the correct ssh key has not yet been installed by salt.
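Putting the steps together, a complete re-installation might look like the following sketch; the argument to vm-reinstall.sh is an assumption here, so check the script for its actual usage.
# on cobbler.darknet: drop the old minion key
salt-key -d wms.darknet
# on the XCP node: reset the bootloader (VM name argument assumed) and reboot into the installer
./vm-reinstall.sh wms.darknet
xe vm-reboot vm=wms.darknet
# back on cobbler.darknet, once the installation has finished
salt-key -a wms.darknet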
Darknet CA
The test sites will need certificates, so a local simple CA is set up on cobbler.darknet in /srv/ca.
New certificates can be generated with the gen-host-cert.sh script. This automatically places the cert and key in /srv/salt/host_keys.
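For example, to create a certificate for the wms machine of the frogstar site (assuming the script lives in /srv/ca and takes the host name as its argument; check the script for its exact usage):
cd /srv/ca
./gen-host-cert.sh wms.frogstar      # argument assumed; cert and key end up in /srv/salt/host_keys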
Issuing CRLs can be done by calling
./make-crl /var/www/html/7140638d.r0
Saltstack setup
After systems are installed with Cobbler, saltstack takes over. The machine cobbler.darknet is the salt master. Test the connection to the minions by running a ping test.
salt '*' test.ping
All machines should report in. If not, log in to the machine and check the salt minion log
/var/log/salt/minion
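The test can also be narrowed down to a single site or machine, using host names from the tables above:
salt '*.frogstar' test.ping      # all machines of one site
salt 'wn.frogstar' test.ping     # a single minion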
The saltstack description of the system is kept in a collection of YAML files under
/srv
/srv/pillar
/srv/salt
The pillar is just static data. The salt tree contains references to various machine types and modules, as well as files that are copied over to the minions, such as ssh keys.
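To see what applying the salt tree would change on a machine without actually changing anything, salt can do a dry run; a small sketch, using one of the site names from above:
salt 'ui.frogstar' state.highstate test=True    # report pending changes without applying them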
git repository
The salt tree is maintained with git. Set up a remote:
git clone ssh://root@cobbler.darknet/root/salt
and check out the 'useyaim' branch. Whenever this branch gets pushed, the repository under /srv will be automatically updated (via a post-receive hook).
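A typical editing cycle then looks something like this (remote URL and branch name as above; the commit message is just an example):
git clone ssh://root@cobbler.darknet/root/salt
cd salt
git checkout useyaim
# edit state or pillar files ...
git add .
git commit -m "add new worker node state"
git push origin useyaim     # the post-receive hook updates /srv on cobbler.darknet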
Adding users
Adding administrators for one of the sites mentioned above is done by adding their ssh public keys to the corresponding file /srv/salt/ssh_keys/<domain>.pub and running the salt state command:
salt '*.<domain>' state.highstate
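As a concrete example for the frogstar site (the file name of the new administrator's public key is hypothetical):
cat newadmin_id_rsa.pub >> /srv/salt/ssh_keys/frogstar.pub
salt '*.frogstar' state.highstate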
Adding ordinary users for a site is done by adding their details in the pillar, much like the examples already given in /srv/pillar/users/frogstar.sls:
users:
  dent:
    fullname: Arthur Dent
    shell: /bin/bash
    home: /home/dent
    uid: 604
    groups:
      - users
Take care to keep the uids unique. After adding the user, put the ssh public key in /srv/salt/ssh_keys/dent.pub. Again running
salt '*.frogstar' state.highstate
will update all frogstar machines to include the new user.
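A minimal recap of the steps for the example user above (the key file name on the administrator's side is hypothetical):
vi /srv/pillar/users/frogstar.sls                # add the user entry with a unique uid
cp dent_id_rsa.pub /srv/salt/ssh_keys/dent.pub   # hypothetical source file name
salt '*.frogstar' state.highstate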