[DRBD-user] Feature suggestion for primary/primary configurations

. Honmans wania_mark at hotmail.co.uk
Fri Nov 27 18:37:56 CET 2009

For the past few years we've been running VMware server on top of DRBD in normal single-primary mode. 
This has worked well for us, except for the pain involved in enlarging virtual disks.

So we have been looking at Proxmox VE as an alternative virtualisation solution - this is nice in that it uses LVM on top of DRBD, so it is quick and easy to expand a virtual disk.

In order to facilitate migration of VMs from one peer to the other, Proxmox VE makes use of DRBD in a dual-primary configuration. This works very well, *except* in split-brain situations where both peers have running VMs and therefore both have updated their DRBD volumes.

As I understand it (and I would be delighted if there is a better solution), it is necessary to discard the data on one of the peers before they can be resynchronised - which is a bit of a deal-breaker, as it appears that data loss would result.

If it is not possible now, would it be possible in future for a primary/primary split-brain to be resolved by keeping track of "recently updated" data on the two nodes and merging the changes, rather than rolling them back on one of the nodes as in
http://www.drbd.org/users-guide-emb/s-resolve-split-brain.html ?
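For context, the automatic recovery policies DRBD offers today are set in the net section of the resource configuration, and none of them can merge divergent writes - at best they pick a victim whose changes are thrown away. A dual-primary setup might look something like this (the resource name and policy choices here are just an illustrative sketch, not a recommendation):

```
resource r0 {
  net {
    allow-two-primaries;
    after-sb-0pri discard-zero-changes;  # neither was primary: keep the side that has changes
    after-sb-1pri discard-secondary;     # one was primary: discard the secondary's changes
    after-sb-2pri disconnect;            # both were primary: no automatic recovery, wait for manual intervention
  }
  ...
}
```

Note that in the dual-primary case (after-sb-2pri) the only options discard one node's data wholesale, which is exactly the problem described above.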

Obviously a primary/primary configuration will be anarchy unless there is coordination between peers at a higher software level - such as the Proxmox VE software, which "knows" which of the peers each VM is running on.

Can DRBD make use of that information to synchronise regions of a resource in different directions - or would it be possible to add that as a feature at some future date?

For example, if we have LVM on top of DRBD (say /dev/drbd0), and each VM has a distinct Logical Volume for its virtual disk...

VM vm1 on LV vm1data - currently running on peer A
VM vm2 on LV vm2data - currently running on peer B

When resolving split-brain, the region of /dev/drbd0 occupied by LV vm1data is synced from peer A to B, and the region occupied by vm2data is synced from B to A.

In effect, there would be synchronisation of regions of resources rather than whole DRBD resources.
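To make the idea concrete, here is a hypothetical sketch (this is not an existing DRBD feature) of how a resolver with higher-level knowledge could turn the VM-to-peer mapping into a per-region sync plan. The LV names, byte offsets, and peer names are all invented for illustration:

```python
# Hypothetical per-region split-brain resolution sketch.
# Each logical volume occupies a byte range of the shared DRBD
# device (/dev/drbd0); a higher-level manager such as Proxmox VE
# knows which peer was running each VM, so that peer's copy of
# the corresponding region is authoritative.

from dataclasses import dataclass

@dataclass
class Region:
    name: str    # logical volume backing a VM's virtual disk
    start: int   # byte offset within /dev/drbd0
    length: int  # size of the LV in bytes
    owner: str   # peer whose copy is authoritative

def sync_plan(regions):
    """Return (name, source, target, start, length) tuples giving
    the resync direction for each region."""
    peers = {"A", "B"}
    plan = []
    for r in regions:
        (target,) = peers - {r.owner}  # the non-owner receives the data
        plan.append((r.name, r.owner, target, r.start, r.length))
    return plan

regions = [
    Region("vm1data", 0,        10 << 30, "A"),  # vm1 was running on peer A
    Region("vm2data", 10 << 30, 20 << 30, "B"),  # vm2 was running on peer B
]

for name, src, dst, start, length in sync_plan(regions):
    print(f"{name}: sync {length} bytes at offset {start} from {src} -> {dst}")
```

The point is only that the direction of resynchronisation is decided per region rather than once for the whole resource.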

In the meantime, any more information on resolving primary/primary split-brain conditions would be very welcome.

Mark