[DRBD-user] LVM on many DRBD-devices + heartbeat

Michael Schwartzkopff misch at multinet.de
Sat Apr 10 15:21:41 CEST 2010

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


On Saturday, 10 April 2010 at 14:50:30, Oliver Hoffmann wrote:
> Hi list,
>
> probably more a heartbeat question, but maybe someone can push me in
> the right direction.
>
> I want two storage servers with a bunch of drbd-devices and LVM on top,
> in order to make snapshots and resize disk space. The idea is to add an
> HD on each node, make it a drbd-device, and extend a logical volume
> accordingly whenever I run out of space. heartbeat (or pacemaker?)
> should take care of failover to node2 if node1 dies. Finally, clients
> should connect via iSCSI or smb or nfs.
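>
> The growth step I have in mind would be roughly this (just a sketch;
> the device name drbd3 and the size are examples, the VG/LV names are
> the ones I use below):
>
> pvcreate /dev/drbd3                  # initialise the new DRBD device as a PV
> vgextend drbd-vg /dev/drbd3          # add it to the existing volume group
> lvextend -L +100G /dev/drbd-vg/lv1   # grow the logical volume
> resize2fs /dev/drbd-vg/lv1           # grow the ext4 filesystem to match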
>
> I successfully made drbd0 and drbd1 and created a volume group with
> logical volumes on top. After adding drbd2 I resized my LV as well.
> Furthermore, a single drbd (without LVM) plus heartbeat works without
> problems.
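>
> For reference, the LVM setup was roughly this (size values are just
> examples):
>
> pvcreate /dev/drbd0 /dev/drbd1           # on the current primary node
> vgcreate drbd-vg /dev/drbd0 /dev/drbd1
> lvcreate -n lv1 -L 50G drbd-vg
> mkfs.ext4 /dev/drbd-vg/lv1
> # later: pvcreate /dev/drbd2; vgextend drbd-vg /dev/drbd2; lvextend as above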
>
> So far so good, but how does heartbeat set all three drbd devices to
> primary, do a vgchange -ay, and finally mount my logical volumes after
> a reboot or when node1 dies? It was not even possible to have
> heartbeat handle just the three drbd-devices.
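>
> By hand, the failover sequence would be something like this (a sketch
> using the resource, VG and mount names from my config below):
>
> drbdadm primary raid raid2 raid3   # promote all three DRBD resources
> vgchange -ay drbd-vg               # activate the volume group
> mount -t ext4 /dev/drbd-vg/lv1 /mnt/lv1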
>
> I tried different haresources entries:
>
> node1 IPaddr2::external-IP/24/eth0 drbddisk::raid
> Filesystem::/dev/drbd0
>
> I know that Filesystem needs a mount point and ext4 here, but I don't
> want the drbd device itself to be mounted, because an LV should be
> mounted instead. I only want the device to become primary.
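>
> So presumably the entry should just be this (no Filesystem resource,
> only promotion to primary):
>
> node1 IPaddr2::external-IP/24/eth0 drbddisk::raid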
>
> Now the setup with three devices.
>
> node1 IPaddr2::external-IP/24/eth0
> node1 drbddisk::raid Filesystem::/dev/drbd0
> node1 drbddisk::raid2 Filesystem::/dev/drbd1
> node1 drbddisk::raid3 Filesystem::/dev/drbd2
>
> This doesn't work at all. After a reboot all devices are in state
> secondary/unknown. It is possible to get them back to normal
> (primary/secondary, up-to-date) by hand, though: I have to connect,
> run "drbdsetup /dev/drbdX primary -o" and restart drbd.
>
> drbd and LVM together:
>
> node1 drbddisk::raid LVM::drbd-vg
> Filesystem::/dev/mapper/drbd-vg-lv1::/mnt/lv1::ext4
>
> This does not work at all.
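>
> What I would expect to need is a single resource group holding all
> three drbd devices, the VG and the filesystem, all on one line
> (untested sketch):
>
> node1 IPaddr2::external-IP/24/eth0 drbddisk::raid drbddisk::raid2 drbddisk::raid3 LVM::drbd-vg Filesystem::/dev/mapper/drbd-vg-lv1::/mnt/lv1::ext4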
>
> Thanks a lot for any guidance!
>
>
> Oliver
>
>
> ################### The other configs ###########################
>
> #/etc/drbd.conf
>   resource raid {
>
> protocol C;
>
> handlers {
>   pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
>   pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
>   local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
>   outdate-peer "/usr/lib/heartbeat/drbd-peer-outdater";
> }
>
> startup { wfc-timeout 0; degr-wfc-timeout 120; }
> disk { fencing resource-only; }
> syncer { rate 50M; al-extents 257; }
>
> net {
>   after-sb-0pri discard-younger-primary;
>   after-sb-1pri consensus;
>   after-sb-2pri disconnect;
>   rr-conflict call-pri-lost;
>   ping-timeout 20;
> }
>
> on node1 {
>   device     /dev/drbd0;
>   disk       /dev/sda6;
>   address    192.168.1.1:7788;
>   meta-disk  internal;
> }
>
> on node2 {
>   device     /dev/drbd0;
>   disk       /dev/sda6;
>   address    192.168.1.2:7788;
>   meta-disk  internal;
> }
>
> }
>
> resource raid2 {
>
> protocol C;
>
> handlers {
>   pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
>   pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
>   local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
>   outdate-peer "/usr/lib/heartbeat/drbd-peer-outdater";
> }
>
> startup { wfc-timeout 0; degr-wfc-timeout 120; }
> disk { fencing resource-only; }
> syncer { rate 50M; al-extents 257; }
>
> net {
>   after-sb-0pri discard-younger-primary;
>   after-sb-1pri consensus;
>   after-sb-2pri disconnect;
>   rr-conflict call-pri-lost;
>   ping-timeout 20;
> }
>
> on node1 {
>   device     /dev/drbd1;
>   disk       /dev/sda7;
>   address    192.168.2.1:7788;
>   meta-disk  internal;
> }
>
> on node2 {
>   device     /dev/drbd1;
>   disk       /dev/sda7;
>   address    192.168.2.2:7788;
>   meta-disk  internal;
> }
>
> }
>
> resource raid3 {
>
> protocol C;
>
> handlers {
>   pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
>   pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
>   local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
>   outdate-peer "/usr/lib/heartbeat/drbd-peer-outdater";
> }
>
> startup { wfc-timeout 0; degr-wfc-timeout 120; }
> disk { fencing resource-only; }
> syncer { rate 50M; al-extents 257; }
>
> net {
>   after-sb-0pri discard-younger-primary;
>   after-sb-1pri consensus;
>   after-sb-2pri disconnect;
>   rr-conflict call-pri-lost;
>   ping-timeout 20;
> }
>
> on node1 {
>   device     /dev/drbd2;
>   disk       /dev/sda8;
>   address    192.168.3.1:7788;
>   meta-disk  internal;
> }
>
> on node2 {
>   device     /dev/drbd2;
>   disk       /dev/sda8;
>   address    192.168.3.2:7788;
>   meta-disk  internal;
> }
>
> }
>
> #/etc/network/interfaces on node2
> auto eth1
> iface eth1 inet static
> 	address 192.168.1.2
> 	netmask 255.255.255.0
> 	network 192.168.1.0
> 	broadcast 192.168.1.255
>
> auto eth1:0
> iface eth1:0 inet static
> 	name Ethernet alias LAN card
> 	address 192.168.2.2
> 	netmask 255.255.255.0
> 	broadcast 192.168.2.255
> 	network 192.168.2.0
>
> auto eth1:1
> iface eth1:1 inet static
> 	name Ethernet alias LAN card
> 	address 192.168.3.2
> 	netmask 255.255.255.0
> 	broadcast 192.168.3.255
> 	network 192.168.3.0
>
>
>
> _______________________________________________
> drbd-user mailing list
> drbd-user at lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user


Hi,

First, some advice:

Do not use heartbeat version 1 style configurations (haresources) any
more. You will not get support from anybody for them any longer.

Please use a proper cluster manager like Pacemaker. See www.clusterlabs.org.
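
For a stack like the one above, a Pacemaker configuration would look
roughly like this (crm shell syntax; an untested sketch using the
resource, VG, LV and mount names from your mail, shown for the first
DRBD resource only):

primitive res_drbd_raid ocf:linbit:drbd \
        params drbd_resource="raid" \
        op monitor interval="29s" role="Master" \
        op monitor interval="31s" role="Slave"
ms ms_drbd_raid res_drbd_raid \
        meta master-max="1" clone-max="2" notify="true"
# repeat the primitive + ms pair for resources raid2 and raid3 ...
primitive res_lvm ocf:heartbeat:LVM params volgrpname="drbd-vg"
primitive res_fs ocf:heartbeat:Filesystem \
        params device="/dev/drbd-vg/lv1" directory="/mnt/lv1" fstype="ext4"
# "external-IP" below is the placeholder from your haresources line
primitive res_ip ocf:heartbeat:IPaddr2 \
        params ip="external-IP" cidr_netmask="24" nic="eth0"
group grp_storage res_lvm res_fs res_ip
colocation col_storage_with_drbd inf: grp_storage ms_drbd_raid:Master
order ord_drbd_before_storage inf: ms_drbd_raid:promote grp_storage:start

The ms (master/slave) resource lets Pacemaker promote the DRBD devices
itself; the colocation and order constraints make sure the VG, the
filesystem and the IP only start on the node where DRBD is Master.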

Greetings,

Michael Schwartzkopff.



