[DRBD-user] DRBD 9 auto-promote not changing role to Primary, but is writable
Doug Cahill
handruin at gmail.com
Mon Nov 18 22:40:28 CET 2019
Thank you Phil. Yes, I replied earlier this morning to this thread
confirming that it is indeed zfs using the wrong flag (FMODE_EXCL) in newer
kernels (post 2.6.32) when opening the drbd resource (or any block device
for that matter). We didn't know this until late Friday evening, so I
wasn't aware it was a zfs issue before starting this thread. We are going
to pull in the fix from zfs 0.8.x to make this work with auto-promote.
Thanks for your time and replies.
Thanks,
-Doug
On Mon, Nov 18, 2019 at 4:27 PM Philipp Reisner <philipp.reisner at linbit.com>
wrote:
> Hi,
>
> Off the top of my head, ZFS is special in the sense that it opens the backing
> device only for a short amount of time.
>
> I mean it does the equivalent of an open(2) call on the backend block device
> and then, very soon after that, does the equivalent of a close(2).
>
> All other Linux file systems open(2) it during mount and keep it
> open until unmount, which seems pretty logical.
>
> DRBD's auto-promote relies on open/close calls. In other words, if you
> put ZFS on top of DRBD, do not use auto-promote. Use manual promote.
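>
> As a rough sketch, assuming the resource name r0 from this thread and a
> hypothetical pool name "tank", manual promotion would look something like:
>
>   drbdadm primary r0       # promote before importing the pool
>   zpool import tank
>   ...
>   zpool export tank        # export, then demote again
>   drbdadm secondary r0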
>
> It is nothing we can fix on DRBD's side. This should be fixed in ZFS.
>
> best regards,
> Phil
>
>
> On Wednesday, 13 November 2019, 14:08:37 CST, Doug Cahill wrote:
> > I'm configuring a two-node setup with drbd 9.0.20-1 on CentOS 7
> > (3.10.0-957.1.3.el7.x86_64) with a single resource backed by an SSD. I've
> > explicitly enabled auto-promote in my resource configuration to use this
> > feature.
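> >
> > For reference, the auto-promote setting amounts to something like the
> > following in an options section (this block is not part of the r0.res
> > shown below):
> >
> >   options {
> >       auto-promote yes;   # the DRBD 9 default, set explicitly here
> >   }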
> >
> > The drbd device is being used in a single-primary configuration as a zpool
> > SLOG device. The zpool is only ever imported on one node at a time, and the
> > import is successful during cluster failover events between nodes. I
> > confirmed through zdb that the zpool includes the configured drbd device
> > path.
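> >
> > (That check was something along the lines of the following, with "tank"
> > standing in for the actual pool name:
> >
> >   zdb -C tank | grep path
> >
> > which lists the vdev paths, including the log device.)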
> >
> > My concern is that the drbdadm status output shows the role of the drbd
> > resource as "Secondary" on both sides. The documentation says that the
> > drbd resource will be auto-promoted to Primary when it is opened for
> > writing.
> >
> > drbdadm status
> > r0 role:Secondary
> >   disk:UpToDate
> >   dccdx0 role:Secondary
> >     peer-disk:UpToDate
> >
> > My device should be opened for writing when the zpool is imported. I've
> > even tested writing to the pool with "dd oflag=sync" to force sync writes
> > to the SLOG, which is the drbd resource. The drbd device never changes the
> > reported state, but it appears to be writable.
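> >
> > For what it's worth, holding the device open directly does seem like it
> > should be enough to see auto-promote kick in (path assumed to be
> > /dev/drbd0, per "device minor 0" in the config below), e.g.:
> >
> >   exec 3>/dev/drbd0    # open the drbd device for writing and hold it open
> >   drbdadm status r0    # role should now report Primary
> >   exec 3>&-            # close it; the role should drop back to Secondary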
> >
> > Have I misconfigured my drbd resource for an auto-promote configuration,
> > and/or is my use case too obscure for auto-promote to detect that the
> > device is being written to when used in a zpool?
> >
> > =========== /etc/drbd.d/r0.res
> > resource r0 {
> >     disk "/dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0-part1";
> >     meta-disk internal;
> >     device minor 0;
> >
> >     on dccdx0 {
> >         address 192.0.2.10:7000;
> >     }
> >
> >     on dccdx1 {
> >         address 192.0.2.20:7000;
> >     }
> >
> >     disk {
> >         read-balancing when-congested-remote;
> >         no-disk-flushes;
> >         no-md-flushes;
> >         # al-updates no; # turn off al-tracking; requires full sync on crash
> >         c-fill-target 4M;
> >         resync-rate 500M;
> >         c-max-rate 1000M;
> >         c-min-rate 700M;
> >     }
> >
> >     net {
> >         sndbuf-size 10M;
> >         rcvbuf-size 10M;
> >         max-buffers 20000;
> >         fencing resource-and-stonith;
> >     }
> > }
> >
> > -Doug