RAID-1 configuration and management
Software RAID-1 configuration via Kickstart
Manually change the Kickstart file to define software RAID-1, which is also known as mirroring.
The following example uses two serial ATA disks (/dev/sda and /dev/sdb) with four partitions (/boot, /, swap and /tmp), each in software RAID-1 configuration:
part raid.01 --size=128  --ondisk=sda
part raid.02 --size=8192 --ondisk=sda
part raid.03 --size=3072 --ondisk=sda
part raid.04 --size=512  --ondisk=sda
part raid.05 --size=128  --ondisk=sdb
part raid.06 --size=8192 --ondisk=sdb
part raid.07 --size=3072 --ondisk=sdb
part raid.08 --size=512  --ondisk=sdb
raid /boot --level=RAID1 --device=md0 --fstype=ext2 raid.01 raid.05
raid /     --level=RAID1 --device=md1 --fstype=ext3 raid.02 raid.06
raid swap  --level=RAID1 --device=md2 --fstype=swap raid.03 raid.07
raid /tmp  --level=RAID1 --device=md3 --fstype=ext3 raid.04 raid.08
In addition, the clearpart entry in the kickstart file should clear all partitions (at least on CentOS 3):
clearpart --all
Clearing the partition tables on the individual drives (--drives=sda,sdb) leads to errors in Anaconda (the Red Hat system installer).
Please note that the grub boot loader is only installed on the first disk. Manually install it on the second disk, as described in section Installing the boot loader.
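If the kickstart itself should take care of the second disk, a %post snippet along the following lines might do. This is an untested sketch that simply replays the grub commands from the section Installing the boot loader; the device names are assumptions based on the two-disk example above:
%post
# Hedged sketch: install grub on the second disk (/dev/sdb) as well,
# mirroring the manual procedure in the section Installing the boot loader.
/sbin/grub --batch <<EOF
device (hd0) /dev/sdb
root (hd0,0)
setup (hd0)
quit
EOF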
Restoring data on a new disk
OK, so there are two disks in a RAID-1 (mirror) configuration. What to do if one of them dies?
Well, all data are still available on the other disk, so they can be restored once a new disk is installed. The following steps show how to restore the data:
0. Do not reboot the machine! A reboot may change the drive names (e.g., /dev/sdb becoming /dev/sda) and hang the machine if it cannot find the boot loader grub. If this happens, you may be saved by the section Installing the boot loader.
1. Remove the partitions of the defective disk from the RAID configuration:
mdadm /dev/mdX -r /dev/sdYZ (X=0,1,2,... Y=a,b,c,... Z=1,2,3,...)
For example,
mdadm /dev/md0 -r /dev/sdb1
to remove partition 1 on disk sdb from raid device md0.
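Before removing a partition, it can help to check which components the kernel has marked faulty; they are flagged with (F) in /proc/mdstat:
# Faulty components appear as, e.g., sdb1[1](F)
grep '(F)' /proc/mdstat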
2. Replace the bad disk with a fresh one. Do not reboot the machine!
3. Rescan the SCSI bus for new devices. Use the script
/usr/local/bin/rescan-scsi-bus.sh
(if installed from rpm), or download it from:
http://stal.nikhef.nl/scripts/rescan-scsi-bus.sh
4. Now the partition table should be created on the new disk. Use the following command to clone the partition table of existing disk sdX to the new disk sdY:
sfdisk -d /dev/sdX | \
sed -e s/sdX/sdY/ | \
sfdisk /dev/sdY
(with X=a,b,c,... and Y=a,b,c,..., where X is different from Y).
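For example, with the data surviving on sda and the new disk as sdb:
sfdisk -d /dev/sda | \
sed -e s/sda/sdb/ | \
sfdisk /dev/sdb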
5. Add all partitions of the new disk to the corresponding raid devices:
mdadm /dev/mdX -a /dev/sdYZ (X=0,1,2,... Y=a,b,c,... Z=1,2,3,...)
For example,
mdadm /dev/md0 -a /dev/sdb1
to add partition 1 on disk sdb to raid device md0. This automatically triggers synchronization of the data on that partition to the new disk. The above command may be repeated immediately for all partitions; the actual synchronization takes place sequentially.
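A short sketch of that repetition, assuming the md-to-partition mapping visible in the /proc/mdstat output below (md0=sdb1, md1=sdb2, md2=sdb3, md3=sdb5); adjust the pairs to your own layout:
# Hedged sketch: re-add all partitions of the new disk sdb in one go.
for pair in md0:sdb1 md1:sdb2 md2:sdb3 md3:sdb5; do
    mdadm /dev/${pair%%:*} -a /dev/${pair##*:}
done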
6. The progress of the synchronization can be monitored via the following command:
cat /proc/mdstat
which produces output like:
Personalities : [raid1]
read_ahead 1024 sectors
Event: 23
md0 : active raid1 sda1[0] sdb1[1]
      128384 blocks [2/2] [UU]
md1 : active raid1 sda2[0] sdb2[1]
      8385856 blocks [2/2] [UU]
md2 : active raid1 sda3[0] sdb3[1]
      3148672 blocks [2/2] [UU]
md3 : active raid1 sda5[2] sdb5[1]
      521984 blocks [2/1] [_U]
      [==============>......]  recovery = 73.5% (384524/521984) finish=0.0min speed=54932K/sec
unused devices: <none>
Note: By default, the transfer rate for synchronizing data is ~10 MB/s. If the system is not running other processes or services, the rate may be increased. The following example sets the maximum speed to 100 MB/s:
echo 100000 > /proc/sys/dev/raid/speed_limit_max
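The current limits can be inspected in the same way:
cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max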
The current status of a RAID device can be obtained with the following command:
mdadm -Q -D /dev/mdX (X=0,1,2,...)
Finally, the boot loader should be installed on the new disk. This is described in the section Installing the boot loader.
Installing the boot loader
The steps described in the section Restoring data on a new disk do not install a boot loader (i.e., grub) on the new disk. If a system with a new disk is rebooted without installing the boot loader, the system may be unable to boot. Therefore, always install the boot loader on the new disk as well.
To install the boot loader on a disk /dev/sdX, use the procedure described below.
1. Start the utility /sbin/grub
2. At the grub prompt, issue the following commands:
device (hd0) /dev/sdX
root (hd0,0)
setup (hd0)
quit
A script that takes care of these steps is available via the following link:
http://stal.nikhef.nl/scripts/install_grub.sh
The script installs the boot loader on every disk that holds a partition of RAID device /dev/md0 (which is assumed to be the boot device). Running the script more than once is harmless, as long as grub is not configured by other means.
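The script itself is not reproduced here; the following is merely a hypothetical sketch of the same idea, so treat the parsing and the exact grub invocation as assumptions:
#!/bin/sh
# Hypothetical sketch (not the actual install_grub.sh): run grub's
# device/root/setup sequence for every disk holding a component of /dev/md0.
for part in $(mdadm -Q -D /dev/md0 | grep -o '/dev/sd[a-z][0-9]*'); do
    disk=${part%%[0-9]*}
    /sbin/grub --batch <<EOF
device (hd0) $disk
root (hd0,0)
setup (hd0)
quit
EOF
done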
Shuffling data and partitions around
If, for some reason, you want to rearrange the partitions on your machine (because your once so carefully chosen partitioning scheme no longer fits the current requirements), you can do this without reinstalling everything, provided you're careful.
The RAID-1 configuration means having two identical copies of your data. What is there to stop you from:
1. breaking up your RAID set into individual disks
2. using one disk to set up a new partitioning scheme (possibly destroying some data)
3. recovering data from the second disk to the first disk
4. matching the second disk's partitioning scheme to the first disk
5. recreating a new RAID-1 set starting from the first disk.
This is not for the faint of heart. Here are some tips:
- Start by making a backup of all that is precious. If nothing is precious, don't even bother with this procedure, just reinstall the machine from scratch.
- The key files to keep an eye on are:
/boot/grub/grub.conf
See the line with root=/dev/mdX? You need to boot into a non-raid setup at some point. You could also do this from the Grub boot prompt by editing the kernel parameters before booting.
/etc/fstab
also mentions the root partition. Remove all /dev/mdX entries until you're (nearly) done.
/etc/mdadm.conf
Throw that one away, you don't need it anymore.
/proc/mdstat
reports the status of your raid sets.
- Make sure you know the root password, in case you are thrown into maintenance mode on startup.
- CAVEAT: the size of a raid device is some 88 kB less than the partition it is based on. This is because of metadata written to the end of the partition, the so-called raid superblock (not to be confused with the filesystem superblock).
OK, here's a little more detail on the steps involved.
1. Start fdisk and change all 'Linux raid autodetect' partitions on both disks to either 'Linux' or 'Linux swap', according to the nature of the partition.
2. Remove /etc/mdadm.conf.
3. Change /boot/grub/grub.conf to use a real partition as root, e.g.
... root=/dev/sda1
(depending on what your root partition is)
4. Change /etc/fstab so you just have the basic system on the next reboot; also use /dev/sdaX instead of /dev/mdX.
5. Reboot into single user mode (append the word single to the kernel command line after interrupting grub).
6. Assuming you booted from /dev/sda, reorganize /dev/sdb according to plan, minding the CAVEAT above.
7. Any data you had to destroy in the process can now be recovered from /dev/sda.
8. Create degraded raid1 arrays for each new partition, by marking the second member as missing:
mdadm --create /dev/md1 --level=raid1 --raid-devices=2 /dev/sdb1 missing
I advise you to keep the device numbers in sync, so md1 == sdb1, md2 == sdb2, etc.
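A short sketch of the same step for several partitions, assuming the device numbers are indeed kept in sync:
# Hedged sketch: create a degraded mirror for each new sdb partition.
for i in 1 2 3; do
    mdadm --create /dev/md$i --level=raid1 --raid-devices=2 /dev/sdb$i missing
done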
9. Prepare the new root partition (not the current one!) with proper values in /etc/fstab. Also modify grub.conf (on both disks, just to be sure) to select
... root=/dev/md1
10. With fdisk, change all partitions back to 'Linux raid autodetect' (0xfd).
11. Reboot! If all is well, you should have your (degraded) raid sets active and mounted.
12. Copy the partition table from /dev/sdb to /dev/sda as described in Restoring data on a new disk.
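In this scenario that amounts to (sdb now holds the good data, sda is being rebuilt):
sfdisk -d /dev/sdb | \
sed -e s/sdb/sda/ | \
sfdisk /dev/sda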
13. Hot-add your sda partitions to the raid sets.
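Assuming the numbering from step 8 (md1 == sdb1, etc.):
mdadm /dev/md1 -a /dev/sda1
mdadm /dev/md2 -a /dev/sda2
mdadm /dev/md3 -a /dev/sda3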
14. Recreate your /etc/mdadm.conf (required for mdadm monitoring); mine looks like:
DEVICE partitions
MAILADDR root
ARRAY /dev/md4 super-minor=4
ARRAY /dev/md2 super-minor=2
ARRAY /dev/md3 super-minor=3
ARRAY /dev/md1 super-minor=1
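The ARRAY lines can also be generated instead of typed; check the result before relying on it:
# Append auto-detected ARRAY lines (with UUIDs); DEVICE and MAILADDR
# still have to be added by hand.
mdadm --examine --scan >> /etc/mdadm.conf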