[DRBD-user] New 3-way drbd setup does not seem to take i/o
Remolina, Diego J
dijuremo at aerospace.gatech.edu
Tue May 1 18:14:52 CEST 2018
Hi, I was wondering if you could guide me as to what the issue might be here. I configured three servers with drbdmanage-0.99.16-1, drbd-9.3.1-1, and the related packages.
I created a ZFS pool, then used the zvol2.Zvol2 storage plugin and created a resource. All seems fine up to the point where I test the resource by creating a file system on it. If I try to create, say, an XFS file system, things freeze. If I create a ZFS pool on the DRBD device instead, the creation succeeds, but then I cannot read from or write to it.
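For reference, the setup went roughly like this (a sketch from memory, not the literal commands; the IP addresses, vdev layout, and exact size argument are placeholders):

# zpool create mainpool <vdevs>                  (backing pool, on each node)
# drbdmanage init 10.0.0.1                       (on ae-fs01)
# drbdmanage add-node ae-fs02 10.0.0.2
# drbdmanage add-node ae-fs03 10.0.0.3
# drbdmanage add-volume export 11TB --deploy 3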
# zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
mainpool            11.6T  1.02T    24K  none
mainpool/export_00  11.6T  12.6T  7.25G  -
The plugin configuration:
[GLOBAL]
[Node:ae-fs01]
storage-plugin = drbdmanage.storage.zvol2.Zvol2
[Plugin:Zvol2]
volume-group = mainpool
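That section lives in drbdmanage's cluster configuration; if I recall correctly, it was added through the built-in editor:

# drbdmanage modify-config

which opens the cluster config in $EDITOR, where I added the [Node:ae-fs01] and [Plugin:Zvol2] sections shown above.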
# drbdmanage list-nodes
+---------------------------------------------------------+
| Name    | Pool Size (MiB) | Pool Free (MiB) |   | State |
|---------------------------------------------------------|
| ae-fs01 |        13237248 |         1065678 |   | ok    |
| ae-fs02 |        13237248 |         1065683 |   | ok    |
| ae-fs03 |        13237248 |         1065672 |   | ok    |
+---------------------------------------------------------+
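(Going by the numbers, Pool Size and Pool Free appear to be in MiB: 13237248 MiB is about 12.6 TiB and 1065678 MiB is about 1.02 TiB, which matches the zfs list output above.)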
# drbdmanage list-volumes
+-------------------------------------------------+
| Name   | Vol ID | Size      | Minor |   | State |
|-------------------------------------------------|
| export |      0 | 10.91 TiB |   106 |   | ok    |
+-------------------------------------------------+
But after making one node Primary, creating a file system on the resource fails, whether it is a new ZFS pool for data or an XFS file system.
# drbdadm primary export
# drbdadm status
.drbdctrl role:Secondary
  volume:0 disk:UpToDate
  volume:1 disk:UpToDate
  ae-fs02 role:Primary
    volume:0 peer-disk:UpToDate
    volume:1 peer-disk:UpToDate
  ae-fs03 role:Secondary
    volume:0 peer-disk:UpToDate
    volume:1 peer-disk:UpToDate

export role:Primary
  disk:UpToDate
  ae-fs02 role:Secondary
    peer-disk:UpToDate
  ae-fs03 role:Secondary
    peer-disk:UpToDate
# zpool create export /dev/drbd106
# zfs set compression=lz4 export
# ls /export
ls: reading directory /export: Not a directory
If I destroy the pool and try to format /dev/drbd106 as XFS, it just hangs forever. Any ideas as to what is happening?
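In case it helps, I can gather more state while the mkfs hangs, e.g. with standard commands like:

# dmesg | tail -n 50
# drbdsetup status export --verbose --statistics

Happy to post that output if it would be useful.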
Diego