Generic active/passive clusters
Configuring a cluster using corosync and heartbeat means writing a start/stop and monitoring script for the service you are building the cluster for.
This script is very much like an "init.d" script, but you can't use an init.d script directly, because heartbeat scripts use tri-state logic instead of two-state logic. That is, heartbeat-controlled services are "running", "stopped" or "failed", whereas services controlled by init that fail are simply stopped and must be restarted. Heartbeat uses the third state, "failed", as the trigger to migrate the service to another node in your services pool.
For a simple service consisting of one process, monitoring is easy and adapting an existing init.d script is straightforward. Hint: use the sample
/usr/lib/ocf/resource.d/heartbeat/Dummy
as a starting-point.
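For illustration, a minimal monitor action in the style of the Dummy agent could look like the sketch below (the state file parameter and exit-code variables come from the OCF shell functions an agent sources; this is a sketch, not the agent's actual code):
dummy_monitor() {
    # Three possible answers instead of init's two:
    #   $OCF_SUCCESS (0)     - running
    #   $OCF_NOT_RUNNING (7) - cleanly stopped
    #   $OCF_ERR_GENERIC (1) - failed; this is what triggers recovery/migration
    if [ -e "${OCF_RESKEY_state}" ]; then
        return $OCF_SUCCESS
    fi
    return $OCF_NOT_RUNNING
}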
For services comprising two or more processes, you'll have to loop over all processes and their pid and lock files to see whether the processes are running and correspond to the lock and pid files, assuming that all processes are well behaved and store their pid and lock files in the standard locations. A sketch of such a loop follows below.
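A rough sketch of such a monitor loop (the pid file names here are made up for illustration; use the ones your service actually writes):
multi_monitor() {
    # every daemon that makes up the service must be alive
    for pidfile in /var/run/daemon-one.pid /var/run/daemon-two.pid; do
        # no pid file at all: treat the service as (cleanly) stopped
        [ -f "$pidfile" ] || return $OCF_NOT_RUNNING
        # pid file present but the process is gone: report failure
        kill -0 "$(cat "$pidfile")" 2>/dev/null || return $OCF_ERR_GENERIC
    done
    return $OCF_SUCCESS
}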
BDII setup on active/passive failover cluster
The bdii services are not entirely well behaved. The pid files for the slapd daemon and bdii-update daemon are not in the same place (/var/run and /var/run/bdii/db, respectively), and the init.d script for the slapd daemon contains lots of cruft that shouldn't be part of an init script to begin with (such as initialization of the database).
Therefore the heartbeat script for the bdii service is a bit of a kludge, staying as close as possible to the init.d script it is derived from (so that keeping it up to date remains doable as the init.d script evolves).
Click for the BDII heartbeat script.
Install the cluster engine & resource manager
You need to perform the installation on each cluster node.
Add the EPEL repo to /etc/yum.repos.d:
# rpm -Uhv http://download.fedora.redhat.com/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm
Add the Clusterlabs repo to /etc/yum.repos.d:
# wget -O /etc/yum.repos.d/pacemaker.repo http://clusterlabs.org/rpm/epel-5/clusterlabs.repo
Now have yum install the cluster engine and resource managers. This will install loads of dependencies:
# yum -y install pacemaker
Configure the cluster engine
You need to do this on each cluster node.
First, copy the sample configuration for corosync to the default configuration:
# cp /etc/corosync/corosync.conf{.example,}
Then, change the "bindnetaddr" to the network that your cluster nodes are in:
# perl -p -i -e 's|(bindnetaddr:).*|\1 194.171.X.0|' /etc/corosync/corosync.conf
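A quick, purely illustrative way to check that the substitution took effect:
# grep bindnetaddr /etc/corosync/corosync.conf
        bindnetaddr: 194.171.X.0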
Also, append the following:
# cat >>/etc/corosync/corosync.conf <<UFO
aisexec {
        user: root
        group: root
}
service {
        name: pacemaker
        ver: 0
}
UFO
This tells corosync to run as root and to use the pacemaker resource manager ("ver: 0" means corosync starts the pacemaker daemons itself).
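To double-check that the block actually ended up at the end of the file:
# tail /etc/corosync/corosync.conf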
Now start the cluster:
# /etc/init.d/corosync start
(For RHC[TE]s that is:
# service corosync start
)
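Once corosync is running on all nodes, you can verify that they have found each other with pacemaker's one-shot monitor:
# crm_mon -1
Both nodes should be reported as online after a few seconds.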