[DRBD-user] HA DrBD with Heartbeat and LVM

Donovan Francesco donovan at ecntelecoms.com
Tue Jan 12 10:13:03 CET 2010

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hi All,

We are attempting the following:

We have two machines; each machine has two LVM logical volumes of 50GB each.

We have set up DRBD on top of these LVM volumes.
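
For reference, the volumes and DRBD metadata were prepared roughly along these lines (from memory; the VG, LV, and resource names match the config below):

# Create the two 50GB backing LVs in vg0 (run on both nodes)
lvcreate --name drbd0 --size 50G vg0
lvcreate --name drbd1 --size 50G vg0

# Write DRBD metadata and bring the resources up (run on both nodes)
drbdadm create-md r0
drbdadm create-md r1
drbdadm up r0
drbdadm up r1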

Config file:

global {
    usage-count no;
}

common {
  protocol C;
  syncer {
        rate 100M;
        al-extents 1801;
        verify-alg md5;
  }
  handlers {
    pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
    pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
    local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
    outdate-peer "/usr/lib64/heartbeat/drbd-peer-outdater -t 5";

    split-brain "echo split-brain. drbdadm -- --discard-my-data connect 
$DRBD_RESOURCE ? | mail -s 'DRBD Alert' root";
    out-of-sync "echo out-of-sync. drbdadm down $DRBD_RESOURCE. 
drbdadm ::::0 set-gi $DRBD_RESOURCE. drbdadm up $DRBD_RESOURCE. | mail -s 
'DRBD Alert' root";
  }
  startup {
    # wfc-timeout  0;
    degr-wfc-timeout 120;    # 2 minutes.

    # In case you are using DRBD for GFS/OCFS2 you want that the
    # startup script promotes it to primary. Nodenames are also
    # possible instead of "both".
    become-primary-on none;
  }

  disk {
    fencing resource-only;
    on-io-error  call-local-io-error;
  }

  net {
    cram-hmac-alg "sha1";
    shared-secret "ECNNFS$";
    sndbuf-size 256k;
    max-buffers 16000;
    max-epoch-size 16000;
    unplug-watermark 16000;
  }
}

resource r0 {
  device     /dev/drbd0;
  disk       /dev/vg0/drbd0;
  flexible-meta-disk  internal;

  on vzbb-prodtest-01 {
    address    10.202.4.110:7788;
   }
   on vzbb-prodtest-02 {
    address    10.202.4.111:7788;
   }

}

resource r1 {
  device     /dev/drbd1;
  disk       /dev/vg0/drbd1;
  flexible-meta-disk  internal;

  on vzbb-prodtest-01 {
    address    10.202.4.110:7789;
   }
   on vzbb-prodtest-02 {
    address    10.202.4.111:7789;
   }

}

My idea is to run the first DRBD resource (r0) on Node 1 as primary, with 
OpenVZ containers on it, and the second resource (r1) on Node 2 as 
primary, also with OpenVZ containers on it.
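
In other words, done by hand the split would look something like this (plain drbdadm role commands; resource names from the config above):

# On vzbb-prodtest-01: promote r0, leave r1 secondary
drbdadm primary r0

# On vzbb-prodtest-02: promote r1, leave r0 secondary
drbdadm primary r1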

Firstly, with Heartbeat controlling the DRBD primary roles, would I 
rather use become-primary-on both to achieve the DRBD split?

Example (what /proc/drbd should look like from Node 1):

version: 8.3.1 (api:88/proto:86-89)
GIT-hash: fd40f4a8f9104941537d1afc8521e584a6d3003c build by pavel at xemulnb.sw.ru, 2009-05-22 16:02:49
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r---
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
 1: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r---
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
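
If Heartbeat is the right tool here, I picture a v1-style haresources along these lines. This is only a sketch: drbddisk and Filesystem are the resource scripts shipped with DRBD/Heartbeat, but the mount points and the "openvz" resource script at the end are placeholders I made up:

# /etc/ha.d/haresources -- one group per preferred node:
# promote the DRBD resource, mount it, then start OpenVZ
vzbb-prodtest-01 drbddisk::r0 Filesystem::/dev/drbd0::/vz0::ext3 openvz
vzbb-prodtest-02 drbddisk::r1 Filesystem::/dev/drbd1::/vz1::ext3 openvz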

Or would it be recommended to use a cluster-aware filesystem like GFS 
to achieve HA for the OpenVZ containers?
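
(As far as I know, GFS would need both nodes primary at the same time, which I believe means something like this in drbd.conf -- please correct me if I have this wrong:)

  net {
    allow-two-primaries;      # required for a cluster filesystem such as GFS
  }
  startup {
    become-primary-on both;   # promote both nodes at startup
  }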


I hope this makes sense.




