[DRBD-user] One resource per disk?

Robert Altnoeder robert.altnoeder at linbit.com
Fri May 11 11:50:50 CEST 2018

On 05/10/2018 09:44 PM, Gandalf Corvotempesta wrote:
> Il giorno mer 2 mag 2018 alle ore 08:33 Paul O'Rorke
> <paul at tracker-software.com <mailto:paul at tracker-software.com>> ha scritto:
>     I create a large data drive out of a bunch of small SSDs using
>     RAID and make that RAID drive an LVM PV.
>     I can then create LVM volume groups and volumes for each use (in
>     my case virtual drives for KVM) to back specific DRBD resources
>     for each VM.  It allows me to have a DRBD resource for each VM,
>     each backed by an LVM volume which is in turn on a large LVM PV.
>     VM has DRBD resource as its block device --> DRBD resources
>     backed by LVM volume --> LVM volume on a large RAID based Physical
>     Volume.
> Too many drbd resources to manage.
> I prefer a single resource, if possible.

The only sane active-active setup is one with separate DRBD resources
per VM, because write access is granted per resource: if one VM is
running on hypervisor A and another VM is running on hypervisor B, each
resource must be writable on a different node. That is easy with
separate resources. With a single resource, it would require a
dual-primary setup plus cluster-aware volume management or filesystems
on top (e.g., Cluster LVM), and the slightest interruption of the
replication link is then guaranteed to cause a split-brain situation,
which means that half of the VMs will lose some data as soon as the
split-brain is resolved by discarding one of the two diverged datasets.

So the standard setup is indeed to have each VM backed by a DRBD
resource, the DRBD resource backed by LVM or ZFS volumes, and that
backed by an LVM volume group or ZFS zpool, which in turn is backed by
RAID or single harddisks, SSDs, etc.
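
As a rough sketch of what one such per-VM resource could look like (all
names, device paths, and addresses below are illustrative assumptions,
not taken from anyone's actual setup):

```
# /etc/drbd.d/vm-example.res -- illustrative sketch only
resource vm-example {
    device      /dev/drbd10;
    disk        /dev/vg_storage/lv_vm_example;  # LVM LV backing this VM
    meta-disk   internal;

    on node-a {
        address 192.168.10.1:7710;
    }
    on node-b {
        address 192.168.10.2:7710;
    }
}
```

Each VM gets its own resource file of this shape, with a unique minor
device and TCP port, so any single resource can be promoted to primary
on whichever node its VM runs on.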

> What I would like is to create an HA NFS storage, if possible with ZFS
> but without putting ZFS on top of DRBD (I prefer the opposite: DRBD
> on top of ZFS)

If it is supposed to become a storage system (e.g., one that the
hypervisors use via NFS), then the whole thing is a different story, and
we may be talking about an active/passive NFS storage cluster that the
hypervisors connect to. The setup mentioned above is probably still the
way to go with regard to the storage stack layout; however, there could
obviously be a single large NFS volume, which would only be active
(accessible) on one of the storage nodes at a time.
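
DRBD on top of ZFS usually means backing the resource with a ZFS zvol.
A minimal sketch of that arrangement (pool name, sizes, hostnames, and
addresses are all assumptions for illustration):

```
# On both nodes, create a zvol to back the DRBD resource:
#   zfs create -V 500G tank/nfs_export
#
# /etc/drbd.d/nfs.res -- illustrative sketch only
resource nfs {
    device      /dev/drbd0;
    disk        /dev/zvol/tank/nfs_export;  # DRBD on top of a ZFS zvol
    meta-disk   internal;

    on storage-a {
        address 192.168.10.1:7700;
    }
    on storage-b {
        address 192.168.10.2:7700;
    }
}
```

The filesystem for the NFS export then lives on /dev/drbd0 and is
mounted and exported only on the current primary node; a cluster
manager such as Pacemaker would handle promotion, mount, export, and
the service IP on failover.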

Best regards,
Robert Altnoeder
+43 1 817 82 92 0
robert.altnoeder at linbit.com

LINBIT | Keeping The Digital World Running
DRBD - Corosync - Pacemaker

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
