<div dir="ltr"><div dir="ltr"><br></div><div dir="ltr"><div dir="ltr">Greetings!<div><br></div><div>I actually messaged about this several months(?) ago, though you articulated it better than I.</div><div><br></div><div>I
run a 2-node HA VM Cluster with KVM/pacemaker on top of DRBD 8.4 very
comparable to your hardware.and have experienced similar symptoms during
backup procedures. When it's really bad, one node will fence the other
because the remote disk becomes unresponsive past the DRBD timeout
threshold (auto calculated around 42 seconds).<br></div><div><div><div dir="ltr" class="gmail-m_-1944104392493102607m_-2833151843353402484gmail-m_3351722127921154012gmail_signature"><div><br></div><div>My
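(In case it's useful: that 42 seconds looks like DRBD's timeout * ko-count product; the 8.4 defaults of 6 seconds per request and 7 missed timeouts multiply out to exactly 42 s before the peer is declared dead. A sketch of the relevant net options, assuming stretching that window is acceptable for your failover expectations; verify against your own config:

    # /etc/drbd.d/global_common.conf (sketch)
    common {
      net {
        # timeout is in tenths of a second; the peer is dropped
        # after ko-count consecutive missed timeouts, i.e. after
        # timeout * ko-count = 60 * 0.1 s * 7 = 42 s with defaults.
        timeout  60;   # 6 s per request (default)
        ko-count 7;    # missed timeouts before peer is dead (default)
      }
    }

Raising either value buys headroom for the backup I/O spike, at the cost of slower detection of a genuinely dead peer.)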
My only workaround has been to keep all VMs on a single node at a time and migrate everything to the other node periodically; this setup tolerates the I/O spike much better. However, we don't get the performance benefit of having both nodes active, not to mention the added administrative overhead.
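If you want to automate some of that shuffling, Pacemaker can do most of it with stickiness plus a colocation rule. A rough sketch with pcs, using hypothetical resource names vm1 and vm2; adapt to whatever your VirtualDomain resources are actually called:

    # Keep the VMs herded onto the same node:
    pcs constraint colocation add vm2 with vm1 INFINITY
    # Discourage Pacemaker from rebalancing them on its own:
    pcs resource defaults resource-stickiness=200
    # Periodic flip to the other node (undo later with 'pcs resource clear vm1'):
    pcs resource move vm1 node2

It doesn't remove the administrative overhead, but it cuts each move down to one command.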
<div><div dir="ltr" class="gmail-m_-1944104392493102607m_-2833151843353402484gmail_signature"><div><br></div>-Chris</div></div></div></div>
</div>