[DRBD-user] Best Practice wanted for DRBD8 / KVM Cluster
qrstuv at gmail.com
Thu Feb 7 16:22:51 CET 2019
Thanks for the tip! I'll do some experimentation the next chance I get.
I agree that the root cause seems like a performance bug, from my perspective.
On Tue, Feb 5, 2019 at 4:27 AM Juergen Sauer <juergen.sauer at automatix.de> wrote:
> Am 05.02.19 um 07:49 schrieb Chris Hartman:
> > Greetings!
> > I actually messaged about this several months(?) ago, though you
> > articulated it better than I.
> > I run a 2-node HA VM cluster with KVM/pacemaker on top of DRBD 8.4, very
> > comparable to your hardware, and have experienced similar symptoms during
> > backup procedures. When it's really bad, one node will fence the other
> > because the remote disk becomes unresponsive past the DRBD timeout
> > threshold (auto calculated around 42 seconds).
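(For reference: the ~42 s figure appears to match the DRBD 8.4 defaults of
timeout=60 tenths of a second and ko-count=7, i.e. 6 s x 7 = 42 s. Below is a
minimal sketch for inspecting and, if needed, raising those limits - the
resource name r0 and the values shown are examples, not recommendations:)

  # Show the effective net options for resource r0
  drbdadm dump r0

  # Illustrative net section in the resource file (r0.res):
  #   net {
  #       timeout  60;   # tenths of a second, i.e. 6 s per missed reply
  #       ko-count 7;    # peer dropped after roughly timeout * ko-count = 42 s
  #       ping-int 10;   # seconds between keep-alive packets
  #   }

  # Apply an edited configuration without restarting the resource
  drbdadm adjust r0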
> > My only workaround has been to keep all VMs on a single node at a time and
> > manually move them all periodically - this setup tolerates the I/O spike
> > much better. However, we don't get the performance benefit of having both
> > nodes active, not to mention the added administrative overhead.
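A hedged sketch of what "keep everything on one node" can look like with plain
libvirt live migration (the guest and host names are placeholders; with
Pacemaker managing the VMs you would normally move the cluster resources
instead of calling virsh directly):

  # Live-migrate a single guest to the peer node
  virsh migrate --live --persistent vm01 qemu+ssh://node2/system

  # Or drain this node by migrating every running guest
  for vm in $(virsh list --name); do
      virsh migrate --live --persistent "$vm" qemu+ssh://node2/system
  done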
> Hi Chris,
> I found that throttling, i.e. limiting the I/O for the virtual machine drives, helps:
> # virsh blkdeviotune Virt-Name vda --total-iops-sec 1000 --total-bytes-sec 52428800
> See also:
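A small addition to the command above (52428800 bytes/s is 50 MiB/s): by
default the limits only apply to the running guest, so a sketch for making
them persistent and verifying them could look like this (same example domain
and disk names):

  # Apply the limits to the live guest and to its persistent definition
  virsh blkdeviotune Virt-Name vda --total-iops-sec 1000 \
      --total-bytes-sec 52428800 --live --config

  # Show the limits currently in effect for that disk
  virsh blkdeviotune Virt-Name vda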
> But this is only a workaround, not a solution to the underlying problem that
> drbd8 fails under heavy I/O due to self-generated load escalation.
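To confirm that it really is replication load escalating during the backup
window, watching DRBD and the backing disks while a backup runs is a simple
first step (a generic sketch, not specific to this setup):

  # Connection state, ns/nr traffic counters and out-of-sync blocks
  watch -n1 cat /proc/drbd

  # Per-device utilisation and latency for the backing disks and drbd devices
  iostat -xm 2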
> With kind regards,
> Jürgen Sauer
> Jürgen Sauer - automatiX GmbH,
> +49-4209-4699, juergen.sauer at automatix.de
> Managing Director: Jürgen Sauer,
> Court of registration: Amtsgericht Walsrode • HRB 120986
> VAT ID: DE191468481 • Tax no.: 36/211/08000
> GPG public key for signature verification: