[DRBD-user] How does this look? Vserver on DRBD on RAID1 on a Gentoo system

Evert evert at poboxes.info
Fri Jan 27 11:10:09 CET 2006


Hi all!

How does this plan look?

using:	Gentoo linux:	http://www.gentoo.org/
	Linux-VServer:	http://linux-vserver.org/
	DRBD:		http://www.drbd.org/

current situation:
server A:
2x HD, partitioned as follows:
/boot	100M
[root]	[remainder]
/var	50G
[swap]	5G

all partitions RAID1 (with Linux softraid /dev/md[X] devices)

server B:
2x HD, empty

Both servers:
	eth0: 1000Mbit cross-over link to other server
	eth1: 100Mbit link to switch

The goal: move the Vserver virtual servers, which currently live on [root], to a partition of their own. That partition should be DRBD on top of RAID1. All of this with a minimum of downtime.
The other partitions should be RAID1.

The plan:

* Edit /etc/fstab on server A so that the /dev/sda[X] partitions get mounted instead of the /dev/md[X] devices.
* Power down
* Take out /dev/sdb (and put it aside, just in case of a major FUBAR) and replace it with an empty disk
* Boot up from /dev/sda
* Partition /dev/sdb as follows:
		/boot		100M
		[root]		25G
		/var		20G
		[swap]		5G
		/vservers	[remainder]
* Make all partitions RAID1 with 1 drive 'missing':
		mdadm --create /dev/md[X] --level 1 --raid-devices=2 /dev/sdb[X] missing
* Edit /etc/drbd.conf so that it knows about the /vservers partition on both servers.
* Load the correct module: 'modprobe drbd'
* Make the /vservers partition a DRBD-partition by doing 'drbdadm up all'
* Format the RAID1 partitions & the DRBD partition & the swap.
* Copy /boot from /dev/sda to /dev/sdb and use grub to make /dev/sdb bootable.
* Boot from liveCD and:

	/dev/sda		/dev/sdb

copy	/var ->			/var
move	[root]/vservers		/vservers
copy	[root] (remainder)	[root]
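For reference, the DRBD side of the steps above might look roughly like this. Everything in the sketch is an assumption to be adapted: the resource name 'vservers', the backing array /dev/md4, the cross-over IP addresses, and the hostnames (which must match 'uname -n' on each box). The syntax is DRBD 0.7-era; check it against the docs for your installed version.

```
# /etc/drbd.conf (sketch -- adapt names, devices, and addresses)
resource vservers {
  protocol C;
  on serverA {
    device    /dev/drbd0;
    disk      /dev/md4;          # the RAID1 array backing /vservers
    address   10.0.0.1:7788;     # eth0, the 1000Mbit cross-over link
    meta-disk internal;
  }
  on serverB {
    device    /dev/drbd0;
    disk      /dev/md4;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```

With that in place, 'modprobe drbd' followed by 'drbdadm up all' (as in the plan) should bring /dev/drbd0 up in unconnected mode until server B exists.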

* Edit /etc/fstab on /dev/sdb as follows:
	/dev/md0 /boot
	/dev/md1 [swap]
	/dev/md2 /var
	/dev/md3 [root]
	/dev/drbd0 /vservers
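Spelled out as actual fstab lines (filesystem types and mount options are assumptions; use whatever you formatted the partitions with):

```
/dev/md0    /boot      ext2    noauto,noatime   1 2
/dev/md3    /          ext3    noatime          0 1
/dev/md2    /var       ext3    noatime          0 2
/dev/md1    none       swap    sw               0 0
/dev/drbd0  /vservers  ext3    noatime,noauto   0 0
```

Note the noauto on /dev/drbd0: a DRBD device can only be mounted while it is primary, so once heartbeat is in place it will usually be mounted by the failover scripts rather than at boot.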

* Reboot. Boot from /dev/sdb
* If all ok -> Repartition & format /dev/sda to match /dev/sdb
* Use mdadm to add the /dev/sda partitions to the /dev/md[X] arrays to get a working RAID1 system again.
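The last two steps could be sketched like so (the device numbering is an assumption; double-check which partition belongs to which array before adding):

```
# clone sdb's partition table onto the fresh sda
sfdisk -d /dev/sdb | sfdisk /dev/sda

# add each sda partition to its array; the kernel rebuilds in the background
mdadm --add /dev/md0 /dev/sda1
# ...repeat for the remaining md/partition pairs...

cat /proc/mdstat    # watch the rebuild progress
```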

Now there should be a fully functional server again, with all the virtual servers on their own partition. That partition is now a DRBD device with its peer node still missing.

On to server B:
* Power down server A and move 1 of its drives to server B. Put an empty drive in server A. Reboot server A & resync the RAID partitions on-the-fly.

(for people who lost track of all the disk juggling, we should now have:
   * 1 HD in server A with the final setup
   * 1 HD in server A - empty
   * 1 HD in server B with the final setup
   * 1 HD put aside for the moment, containing the original installation, destined for server B )

* Boot server B from liveCD and edit IP addresses & /etc/drbd.conf
* Reboot server B, now from HD.
* Check that DRBD on server A & server B see each other. Then make server A the primary (since this server has been up and running while we worked on server B), and let DRBD sync everything over to server B.
* Once this is finished, and all is still working the way it should, we can add the 2nd drive to server B:
* Power down server B
* Add the drive we had put aside, with the original setup.
* Boot server B from the drive with the NEW setup. Repartition the drive we just inserted to match the boot drive. Use mdadm to make server B also have a fully functional RAID1.
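The DRBD handshake in the server B steps above might look like this. This is 0.7-era syntax and the resource name 'vservers' is an assumption; newer DRBD versions spell the forced-primary step 'drbdadm primary --force' instead.

```
# on both nodes: verify the peers see each other
cat /proc/drbd      # should show cs:Connected

# on server A, which holds the good data: become primary and
# push a full sync over to server B
drbdadm primary vservers
# if DRBD refuses because server B has never been synced:
drbdadm -- --do-what-I-say primary vservers

cat /proc/drbd      # watch the SyncSource progress
```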

If I'm not mistaken we should now have 2 fully functional servers which have RAID1 on all partitions, one of those partitions being DRBD-synched in addition to this.

The next step will be configuring heartbeat to do the actual failover, but I'll cross that hurdle when I get to it... ;-)


Does this epistle make any sense? Did I make a boo-boo anywhere, or forget something?

