[DRBD-user] New 3-way drbd setup does not seem to take i/o
Roland Kammerer
roland.kammerer at linbit.com
Wed May 2 08:30:54 CEST 2018
On Tue, May 01, 2018 at 04:14:52PM +0000, Remolina, Diego J wrote:
> Hi, was wondering if you could guide me as to what could be the issue here. I configured 3 servers with drbdmanage-0.99.16-1 and drbd-9.3.1-1 and related packages.
>
>
> I created a zfs pool, then used the zfs2.Zfs2 plugin and created a
> resource. All seems fine up to the point where I want to test the
> resource and create a file system on it. At that point, if I try to
> create, say, an XFS filesystem, things freeze. If I create a ZFS pool on
> the drbd device, the creation succeeds, but then I cannot write to or
> read from it.
>
>
> # zfs list
> NAME                 USED  AVAIL  REFER  MOUNTPOINT
> mainpool            11.6T  1.02T    24K  none
> mainpool/export_00  11.6T  12.6T  7.25G  -
>
> The plugin configuration:
> [GLOBAL]
>
> [Node:ae-fs01]
> storage-plugin = drbdmanage.storage.zvol2.Zvol2
>
> [Plugin:Zvol2]
> volume-group = mainpool
>
>
> # drbdmanage list-nodes
> +------------------------------------------------------------------------------------------------------------+
> | Name | Pool Size | Pool Free | | State |
> |------------------------------------------------------------------------------------------------------------|
> | ae-fs01 | 13237248 | 1065678 | | ok |
> | ae-fs02 | 13237248 | 1065683 | | ok |
> | ae-fs03 | 13237248 | 1065672 | | ok |
> +------------------------------------------------------------------------------------------------------------+
>
>
> # drbdmanage list-volumes
> +------------------------------------------------------------------------------------------------------------+
> | Name | Vol ID | Size | Minor | | State |
> |------------------------------------------------------------------------------------------------------------|
> | export | 0 | 10.91 TiB | 106 | | ok |
> +------------------------------------------------------------------------------------------------------------+
>
> But making one node primary and then creating a file system on the
> device, whether a new zfs pool for data or an XFS file system, fails.
>
>
> # drbdadm primary export
> # drbdadm status
> .drbdctrl role:Secondary
> volume:0 disk:UpToDate
> volume:1 disk:UpToDate
> ae-fs02 role:Primary
> volume:0 peer-disk:UpToDate
> volume:1 peer-disk:UpToDate
> ae-fs03 role:Secondary
> volume:0 peer-disk:UpToDate
> volume:1 peer-disk:UpToDate
>
> export role:Primary
> disk:UpToDate
> ae-fs02 role:Secondary
> peer-disk:UpToDate
> ae-fs03 role:Secondary
> peer-disk:UpToDate
>
> # zpool create export /dev/drbd106
> # zfs set compression=lz4 export
> # ls /export
> ls: reading directory /export: Not a directory
>
> If I destroy the pool and try to format /dev/drbd106 as XFS, it just
> hangs forever. Any ideas as to what is happening?
Carving out zvols which are then used by DRBD should work. Putting
another zfs/zpool on top might have its quirks, especially with
auto-promote. And maybe the failed XFS was then a follow-up problem.
So start with something easier:
create a small (like 10M) resource with DM and then try to create the
XFS on it (without the additional zfs steps).
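A minimal version of that test could look like the following. The
resource name "test" is an assumption, and the minor number has to be
read from the list-volumes output rather than guessed:

```shell
# Create a small test resource with a 10M volume, deployed to all 3 nodes
drbdmanage new-volume test 10MB --deploy 3

# Note the minor number assigned to the new volume
drbdmanage list-volumes

# Promote this node and put XFS directly on the DRBD device
# (replace XXX with the minor number shown above)
drbdadm primary test
mkfs.xfs /dev/drbdXXX
```

If mkfs.xfs succeeds on the small plain resource, the problem is most
likely in the zfs-on-top-of-DRBD layering rather than in DRBD itself.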
Regards, rck