[DRBD-user] Detection of Logical Volume on nested LVM replication

Olivier LAMBERT lambert.olivier at gmail.com
Fri Mar 5 12:25:09 CET 2010

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


To complete this, here are my conf files on the DRBD servers.
My resource:

resource xen {
  syncer {
    rate 50M;
  }

  disk {
    no-md-flushes;
  }

  on SL01A {
    device    /dev/drbd0;
    disk      /dev/vg0/xen;
    address   192.168.40.21:7789;
    meta-disk internal;
  }
  on SL01B {
    device    /dev/drbd0;
    disk      /dev/vg0/xen;
    address   192.168.40.22:7789;
    meta-disk internal;
  }
}

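A quick way to sanity-check the resource file after editing it (this
assumes the stock drbd 8.3 userland tools):

  drbdadm dump xen     # parse the config and print the resource back
  drbdadm adjust xen   # apply any changed settings to the running resource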

My global conf :

global {
        usage-count yes;
}

common {
        protocol C;

        handlers {
                pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
        }
        startup {
                become-primary-on both;
                # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb;
        }
        disk {
                # on-io-error fencing use-bmbv no-disk-barrier no-disk-flushes
                # no-disk-drain no-md-flushes max-bio-bvecs
                no-disk-barrier;
                no-disk-flushes;
                no-disk-drain;
        }
        net {
                allow-two-primaries;
                after-sb-0pri discard-zero-changes;
        }

        syncer {
                # rate after al-extents use-rle cpu-mask verify-alg csums-alg
        }
}

And my LVM conf filter (on the DRBD servers):
    filter = ["a|sd.*|", "r|drbd.*|", "r|.*|"]

My exported LV:
  ACTIVE            '/dev/vg0/xen' [500,00 GB] inherit



That's it. If you want my "CLIENT" conf (LVM?), I can post it here too!


Regards,


Olivier
XO Project
http://xen-orchestra.com


On Thu, Mar 4, 2010 at 5:48 PM, Olivier LAMBERT
<lambert.olivier at gmail.com> wrote:
> Hello,
>
> I'll try to be clear, but it's really hard to explain.
> In a few words: when I create a Logical Volume on a host attached
> (via iSCSI) to one DRBD server, it doesn't appear on the second host,
> attached to the other DRBD server. But on existing LVs it works like
> a charm (all data is replicated). What's more, if I create an LV on
> "one side" and fill it with data, I see heavy traffic on the
> replication link, but still no LV on the other side.
> Now, to help you understand, here is how this infrastructure was set up.
>
> 1) Both DRBD hosts run Debian GNU/Linux in a dual-primary setup, DRBD
> version 8.3.7 (api:88/proto:86-91). I chose to configure a resource
> on a 500 GB Logical Volume (named /dev/vg0/xen). To be clear, the two
> hosts are "DRBD1" & "DRBD2". A cat /proc/drbd gives this:
>  0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r----
>    ns:153716 nr:408752 dw:562408 dr:355156 al:179 bm:109 lo:0 pe:0
> ua:0 ap:0 ep:1 wo:n oos:0
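>
> (For reference, a minimal sketch of how each node was promoted in
> this dual-primary setup; the resource name "xen" matches the conf I
> posted above:)
>
>   drbdadm up xen        # attach the backing device and connect
>   drbdadm primary xen   # promote; allowed on both nodes because of
>                         # "allow-two-primaries" in the net section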
>
>
> 2) This resource is exported over iSCSI. For the example, let's say
> that CLIENT1 is connected to DRBD1, and CLIENT2 is connected to
> DRBD2. This is simplified: in reality the clients are multiple Xen
> Dom0s using multipath (but that doesn't matter here).
>
> 3) CLIENT1 sees the device as a block device (so far so good, iSCSI
> works). I chose to use LVM on this block device. For example, I
> create /dev/vg_xen/mydisk on CLIENT1, mount it, and put some data in
> it: I see the replication apparently working (traffic on the link),
> and both DRBD1 and DRBD2 say everything is OK. If I run lvscan on
> CLIENT1, I can see my brand new volume.
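>
> (A minimal sketch of that sequence on CLIENT1; /dev/sdb and the names
> vg_xen / mydisk are illustrative, assuming the iSCSI LUN shows up as
> /dev/sdb:)
>
>   pvcreate /dev/sdb                  # the iSCSI-attached block device
>   vgcreate vg_xen /dev/sdb
>   lvcreate -L 10G -n mydisk vg_xen   # writes new LVM metadata to the PV
>   mkfs.ext3 /dev/vg_xen/mydisk
>   mount /dev/vg_xen/mydisk /mnt
>   lvscan                             # the new LV is visible here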
>
> 4) CLIENT2 (so, connected over iSCSI to DRBD2) sees the block device
> and the volume group, but NOT the Logical Volume. If I disconnect
> DRBD2's resource, reconnect it, and then reconnect CLIENT2's iSCSI
> session, the LV suddenly appears!
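>
> (Roughly the disconnect/reconnect dance; the target IQN and portal in
> the iscsiadm lines are placeholders:)
>
>   # on DRBD2
>   drbdadm disconnect xen
>   drbdadm connect xen
>   # on CLIENT2, re-establish the iSCSI session (open-iscsi)
>   iscsiadm -m node -T <target-iqn> -p <portal> --logout
>   iscsiadm -m node -T <target-iqn> -p <portal> --login
>   lvscan   # now the new LV shows up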
>
> What's more: once the LV exists on both sides (after disconnecting
> and reconnecting the resource), data is correctly replicated
> (obviously, I do NOT mount the LV on both sides at once, I'm aware of
> that!). If I fill it on one side (e.g. CLIENT1), unmount it, then
> mount it on CLIENT2, the data is there, without any problem.
>
> So, my "theory", is that LVM operations (lvcreate or lvremove) on a
> volume group, which is on top of iSCSI and LVM replicated device by
> DRBD, are NOT replicated, UNTIL disconnect/reconnect the ressource. I
> don't know why, and that's why I ask here to understand what I miss.
>
> Additional information: if both clients are connected to the SAME
> DRBD host (let's say DRBD1) and CLIENT1 creates an LV, CLIENT2 is
> immediately aware of it (the LV is just inactive, which is not a
> problem: a vgchange and it works). So the "problem" lies in the
> replication.
>
> Thanks for your help.
>
>
> Regards,
>
>
> Olivier
> XO Project
> http://xen-orchestra.com
>


