[DRBD-user] Best Practice wanted for DRBD8 / KVM Cluster

Juergen Sauer juergen.sauer at automatix.de
Tue Feb 5 10:27:06 CET 2019

Am 05.02.19 um 07:49 schrieb Chris Hartman:
> Greetings!
> I actually messaged about this several months(?) ago, though you
> articulated it better than I.
> I run a 2-node HA VM Cluster with KVM/pacemaker on top of DRBD 8.4 very
> comparable to your hardware, and have experienced similar symptoms during
> backup procedures. When it's really bad, one node will fence the other
> because the remote disk becomes unresponsive past the DRBD timeout
> threshold (auto calculated around 42 seconds).
> My only workaround has been to keep all VMs on a single node at a time and
> manually migrate them all to the other node periodically; this setup tolerates the I/O spike
> much better. However, we don't get the performance benefit of having both
> nodes active, not to mention the added administrative overhead.

Hi Chris,
I found that throttling, i.e. limiting the I/O for virtual machine drives, helps:

# virsh blkdeviotune Virt-Name vda --total-iops-sec 1000 --total-bytes-sec 52428800
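To apply such a limit to several guests at once, a small loop over domain names can help. A minimal sketch, assuming hypothetical domain names and the `vda` disk target; the `echo` keeps it a dry run so the commands are only printed, not executed:

```shell
# Cap every listed domain's vda at 1000 IOPS and 50 MiB/s.
# Domain names below are examples, not from the original post.
LIMIT_IOPS=1000
LIMIT_BYTES=$((50 * 1024 * 1024))   # 50 MiB/s = 52428800 bytes/s

for dom in vm1 vm2; do
    # Remove 'echo' to actually apply the limits
    # (requires libvirt and a running domain).
    echo virsh blkdeviotune "$dom" vda \
        --total-iops-sec "$LIMIT_IOPS" \
        --total-bytes-sec "$LIMIT_BYTES"
done
```

Note that `blkdeviotune` limits are per disk and, by default, last only until the domain is restarted; `virsh blkdeviotune ... --config` would persist them.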

See also:


But this is only a workaround, not a solution: drbd8 still fails under
heavy I/O, due to self-generated load escalation.
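For reference, the roughly 42-second fencing threshold the quoted message mentions matches DRBD 8.4's default `net` options, where a peer that makes no progress for `ko-count` × `timeout` is expelled. A hedged sketch of where those knobs live in `drbd.conf` (resource name illustrative, values are the 8.4 defaults as I understand them; verify against your DRBD manual before changing anything):

```
resource r0 {
  net {
    timeout   60;   # tenths of a second: 6 s per try
    ko-count  7;    # peer expelled after ko-count * timeout
                    # without progress: 7 * 6 s = 42 s
  }
}
```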

Kind regards
Jürgen Sauer
Jürgen Sauer - automatiX GmbH,
+49-4209-4699, juergen.sauer at automatix.de
Managing director: Jürgen Sauer,
Jurisdiction: Amtsgericht Walsrode • HRB 120986
VAT ID: DE191468481 • Tax no.: 36/211/08000
GPG public key for signature verification:
