[DRBD-user] Dual primary and LVM

Igor Cicimov igorc at encompasscorporation.com
Thu Jul 27 03:52:55 CEST 2017

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hi Gionatan,

On Thu, Jul 27, 2017 at 12:14 AM, Gionatan Danti <g.danti at assyoma.it> wrote:

> Hi all,
> I have a possibly naive question about a dual primary setup involving LVM
> devices on top of DRBD.
>
> The main question is: using cLVM or native LVM locking, can I safely use an
> LV block device on the first node, *close it*, and then reopen it on the
> second one? No filesystem is involved, and no host is expected to use the
> same LV concurrently.
>
> Scenario: two CentOS 7 + DRBD 8.4 nodes with LVs on top of DRBD on top of
> a physical RAID array. Basically, DRBD replicates anything written to the
> hardware array.
>
> Goal: having a redundant virtual machine setup, where VMs can be
> live-migrated between the two hosts.
>
> Current setup: I currently run a single-primary, dual-node setup, where
> the second host has no access at all to any LV. This setup has worked very
> well over the past years, but it forbids live migration (the secondary host
> has no access to the LV-based vdisks attached to the VMs, so it is
> impossible to live-migrate the running VMs).
>

> I thought of using a dual-primary setup to have the LVs available on *both*
> nodes, using a lock manager to arbitrate access to them.
>
> How do you see such a solution? Is it workable? Or would you recommend
> using a clustered filesystem on top of the dual-primary DRBD device?
>
>
I would recommend going through this lengthy post
http://lists.linbit.com/pipermail/drbd-user/2011-January/015236.html
which covers the pros and cons of several possible scenarios.

The easiest scenario for dual-primary DRBD is one DRBD device per VM, i.e.
something like RAID -> PV -> VG -> LV -> DRBD -> VM. There you don't even
need LVM locking (since that layer is not exposed to the user), and it works
great for dual-primary KVM clusters. You get live migration and you also keep
the resizing functionality, since you can grow the underlying LV and then the
DRBD device itself to increase the VM disk size. The VM needs to be started
on one node only, of course, so you (or your software) need to make sure this
is always the case. One huge drawback of this approach, though, is the large
number of DRBD devices to maintain in case of hundreds of VMs. And since you
have already committed to a different approach, it might not be an option at
this point.
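
Just to illustrate, a per-VM resource in that layout could look roughly like
the sketch below (DRBD 8.4 syntax; the resource, LV, host names and addresses
are made up for the example):

    # /etc/drbd.d/vm1.res -- one DRBD resource per VM, backed by a local LV
    resource vm1 {
        device     /dev/drbd10;
        disk       /dev/vg_raid/lv_vm1;   # LV on the RAID-backed VG
        meta-disk  internal;
        net { protocol C; }
        on node1 { address 10.0.0.1:7710; }
        on node2 { address 10.0.0.2:7710; }
    }

and growing a VM disk later would then be the usual two steps, using those
assumed names:

    lvextend -L +10G /dev/vg_raid/lv_vm1   # grow the backing LV on both nodes
    drbdadm resize vm1                     # let DRBD pick up the new size

after which you grow the disk from inside the guest as usual.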

Note: in this case, though, you don't even need dual-primary, since the DRBD
resource of each VM can be independently promoted to primary on any node at
any time. With a single DRBD device it is all or nothing, so there is no
possibility of migrating individual VMs.
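
For example, moving a single VM to the other node (once it has been stopped)
boils down to something like this, again assuming the resource is called vm1:

    # on node1, after stopping the VM
    drbdadm secondary vm1
    # on node2, before starting the VM there
    drbdadm primary vm1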

Now, putting the LVs on top of DRBD is a bit more complicated. I guess your
current setup is something like this:

                            LV1 -> VM1
RAID -> DRBD -> PV -> VG -> LV2 -> VM2
                            LV3 -> VM3
                             ...

In this case, when DRBD is in dual-primary mode, the DLM/cLVM setup is
imperative so that LVM knows which node has write access. But then on VM
migration "something" needs to shut down the VM on Node1 to release the LVM
lock and start the VM on Node2. Same as above: as long as each VM is running
on *only one* node you should be fine; the moment you start it on both you
will probably corrupt the VM. Software like Proxmox should be able to help
you with both points.
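
On CentOS 7 the cluster-locking part would look roughly like the sketch below
(package and VG/LV names are just for illustration; in a real cluster dlm and
clvmd are normally started by Pacemaker rather than by hand):

    yum install lvm2-cluster dlm
    lvmconf --enable-cluster            # sets locking_type = 3 in /etc/lvm/lvm.conf
    systemctl start dlm clvmd           # usually managed as cluster resources instead
    vgchange -cy vg_drbd                # mark the VG sitting on DRBD as clustered
    lvchange -aey /dev/vg_drbd/lv_vm1   # activate an LV exclusively on the node running the VM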

Which brings me to an important question I should have asked at the very
beginning: what do you use to manage the cluster, if anything?

Regards,
Igor