Hooikanon server set up (ppc64le)

Set up for ppc64le systems

NIKHEF-ELPROD has 3 IBM Power9 machines for the grid dCache storage systems. The systems are called hooikanon-01, 02 and 03. (hooikanon-04 is a dCache storage system for stoomboot.) These servers have a different architecture from x86_64, which means they require different tricks to get them configured properly. Note that ppc64le is different from ppc64: ppc64le is purely for the little-endian format, whereas ppc64 is for big-endian systems.

IPMI set up

Turn on the machine and, once in the Petitboot menu, exit into the shell to start configuring the IPMI set up. IBM instructions for setting up the IPMI interfaces: https://www.ibm.com/support/knowledgecenter/en/linuxonibm/liabw/rhel_guide_Power9_network.html The basic steps to follow are:

  1. Set the mode to static by running this command: ipmitool lan set 1 ipsrc static
  2. Set your IP address by running this command: ipmitool lan set 1 ipaddr ip_address, where ip_address is the static IP address that you are assigning to this system.
  3. Set your netmask by running this command: ipmitool lan set 1 netmask netmask_address, where netmask_address is the netmask for the system.
  4. Set your gateway server by running this command: ipmitool lan set 1 defgw ipaddr gateway_server, where gateway_server is the gateway for this system.
  5. Confirm the IP address by running the command ipmitool lan print 1 again (see the example session below).
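
As a concrete example, a full session from the Petitboot shell could look like the following; the addresses are documentation placeholders, not the real hooikanon values:

ipmitool lan set 1 ipsrc static
ipmitool lan set 1 ipaddr 192.0.2.45
ipmitool lan set 1 netmask 255.255.255.0
ipmitool lan set 1 defgw ipaddr 192.0.2.1
ipmitool lan print 1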


Server RAID set up

ARCCONF can be used to configure the logical drives and set the RAID level for the servers' internal disks. To use it, put the arcconf binary on a USB stick; the binary is available from https://www.nikhef.nl/pdp/ndpf/files/packages/arcconf/. The general syntax for arcconf is:

ARCCONF CREATE <Controller#> <LOGICALDRIVE|MAXCACHE> [Options] <Size> <RAID#> <CHANNEL# DRIVE#> [CHANNEL# DRIVE#] ... [noprompt]

Each hooikanon has 1 controller with 2 disks, which should be configured as RAID 1. I used this command to configure the setup:

arcconf create 1 logicaldrive max 1 0 0 0 1 noprompt

This command creates a logical drive on Controller 1, with the maximum size possible, at RAID 1, using channel 0 disk 0 and channel 0 disk 1. You can check the configuration of the drives with:

arcconf getconfig 1 AL
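
Here AL asks arcconf for all report sections (adapter, logical-device, and physical-device information). If only the logical-drive layout is of interest, getconfig also accepts LD, for example:

arcconf getconfig 1 LD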

A useful site for more information: http://fibrevillage.com/9-storage/3-arcconf-command-examples-adaptec-array-controller

Installing an OS

Make sure to get the ppc64le (or alt architecture) builds for the OS distribution. It is possible to install the OS via a USB stick or a virtual ISO from the BMC interface. To mount a virtual ISO from the BMC (only available from the Java interface), select Virtual Media -> Virtual Storage, choose the logical drive type, open your image, and 'plug in' the virtual ISO. More instructions for how to set this up can be found at: https://www.ibm.com/support/knowledgecenter/linuxonibm/liabw/rhelqs_guide_Power_p9_usb.pdf?view=kc

A kickstart file[1] can generally be the same as the x86_64 kickstart file; however, the partitioning scheme should follow something like this:

clearpart --drives=sda --all
part "PPC PReP Boot" --size=8 --asprimary --fstype="PPC PReP Boot" --ondisk=sda
part /boot --size=1000 --asprimary --fstype=ext4 --ondisk=sda
part pv.01 --size=1 --grow --ondisk=sda
volgroup system pv.01
logvol /    --fstype ext4 --size=65536    --name=root --vgname=system
logvol swap --fstype swap --size=32768    --name=swap --vgname=system
logvol /var --fstype ext4 --size=65536    --name=var  --vgname=system
logvol /tmp --fstype ext4 --size=65536    --name=tmp  --vgname=system

Note the "PPC PReP Boot" partition at the start is important for the system to boot properly. More information about the specifics for ppc64le with kickstart files can be found here: https://docs.centos.org/en-US/centos/install-guide/Kickstart2/ Also check that none of the packages are architecture dependent. I.e. biosdevname is for x86_64 based systems, so the udev package was substituted and works. It is useful to add debugging during the installation process. This can be done by "manually" (because I haven't found another way of doing it) adding inst.logging=debug in the Petit boot menu under the boot arguments. (Scroll over the linux "pxe" boot device and press 'e' for edit.)

Rebooting

The Power9 can take the PXE boot argument from ipmitool, so to PXE boot the system you can use:

[root@stal ~]# ipmitool -H hooikanon-02.ipmi.nikhef.nl -U root -P $IPMIPASS -I lanplus chassis bootdev pxe
Set Boot Device to pxe
[root@stal ~]# ipmitool -H hooikanon-02.ipmi.nikhef.nl -U root -P $IPMIPASS power cycle
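
To confirm the power state after the cycle, ipmitool's standard chassis status subcommand can be used (not part of the original procedure, but a stock ipmitool command):

[root@stal ~]# ipmitool -H hooikanon-02.ipmi.nikhef.nl -U root -P $IPMIPASS -I lanplus chassis status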

RPMs for ppc64le

Most of the RPMs used for the ppc64le come from the EPEL CentOS 7 repo: https://dl.fedoraproject.org/pub/epel/7/ppc64le/ A mirror is set up on hoen to take care of updating and managing these packages. However, Prometheus requires an architecture-dependent RPM (node_exporter) which will not install without a ppc64le RPM, and one is currently not available upstream. The RPM was therefore built following the instructions from https://github.com/lest/prometheus-rpm:

  1. Choose a ppc64le machine to build the RPM on (e.g., hooikanon-02)[2].
  2. Install rpmbuild and any dependent packages (https://wiki.centos.org/HowTos/SetupRpmBuildEnvironment).
  3. Check out or clone the source files for the RPMs, in this case Prometheus (https://github.com/lest/prometheus-rpm).
  4. Create a separate directory called rpmbuild with these subdirectories: BUILD/ RPMS/ SOURCES/ SPECS/ SRPMS/ tmp/
  5. Use the makefile to autogenerate the .spec/.unit/.init files as needed.
  6. In the sources directory, download the correct tarball release as needed, e.g. wget https://github.com/prometheus/node_exporter/releases/download/v0.18.1/node_exporter-0.18.1.linux-ppc64le.tar.gz
  7. Copy the relevant files into rpmbuild/SOURCES.
  8. Use the rpmbuild tool to create the RPM from the .spec file: rpmbuild -ba autogen_node_exporter.spec
  9. Move the SRPM and RPM to the server for storing and mirroring. (The whole sequence is sketched below.)
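
Put together, the build could look like the sketch below. The package list, the makefile target, and the file locations are assumptions made for illustration; adjust them to what the repo actually provides:

# On a ppc64le build host, e.g. hooikanon-02
yum install -y rpm-build make git wget
git clone https://github.com/lest/prometheus-rpm.git
cd prometheus-rpm
mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS,tmp}
# Autogenerate the .spec/.unit/.init files (target name is an assumption)
make node_exporter
# Fetch the ppc64le tarball into the sources directory
cd sources
wget https://github.com/prometheus/node_exporter/releases/download/v0.18.1/node_exporter-0.18.1.linux-ppc64le.tar.gz
# Copy the relevant files and build the binary and source RPMs
cp node_exporter-0.18.1.linux-ppc64le.tar.gz ~/rpmbuild/SOURCES/
cp ../autogen_node_exporter.spec ~/rpmbuild/SPECS/   # generated spec; location is an assumption
cd ~/rpmbuild/SPECS
rpmbuild -ba autogen_node_exporter.spec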

The Prometheus node_exporter RPM was then placed under the nikhef external repo on hoen (/srv/repos/mirrors/nikhef/external/7/ppc64le/).



Notes

[1] For the kickstart method to work, the ppc64le distribution must be imported into Cobbler, the network install server. This requires a lot of manual tuning for the kickstart metadata.


[2] It may be possible to cross-build this on a different platform, but that was not tested. Docker containers did not work for building the RPM. The source RPM for the Prometheus node exporter is stored under my user directory on stal.