[DRBD-user] Best Practice wanted for DRBD8 / KVM Cluster
juergen.sauer at automatix.de
Mon Feb 4 09:31:54 CET 2019
For a small-to-medium high-availability cluster based on simple standard server hardware (SATA drives), we built a DRBD 8 cluster with two nodes:
2 x hardware nodes (2 x mdadm RAID 5)
2 x DRBD, /dev/drbd0, running nearly fine
File system on the DRBD resource: currently OCFS2.
Kernel on host: 4.20.6, 8 CPUs
Scheduler on host: mq-deadline
Network for DRBD: dedicated, point-to-point, 10 GbE.
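For reference, a minimal DRBD 8.4 resource sketch matching such a two-node setup (hostnames, devices and addresses are hypothetical; the tuning values are common starting points, not measured recommendations):

```
resource r0 {
    net {
        protocol C;                # synchronous replication, both nodes ack
        max-buffers    8000;       # more in-flight buffers for the 10 GbE link
        max-epoch-size 8000;
    }
    disk {
        al-extents  3389;          # larger activity log for write-heavy loads
        resync-rate 300M;          # cap resync so it cannot starve live I/O
    }
    on nodeA {
        device    /dev/drbd0;
        disk      /dev/md0;        # the mdadm RAID 5
        address   10.0.0.1:7788;   # dedicated point-to-point 10 GbE link
        meta-disk internal;
    }
    on nodeB {
        device    /dev/drbd0;
        disk      /dev/md0;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}
```

(On older DRBD 8.3 installations, the resync cap lives in a `syncer { rate ...; }` section instead of `disk { resync-rate ...; }`.)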
virt hosts are managed by libvirt/kvm/qemu:
Guests: Debian Jessie, Stretch, Buster (qcow2, scheduler noop/none,
virtio-net, virtio storage, file system btrfs with noatime,nodiratime).
On demand additionally: Win7, not permanently online.
This solution works nearly fine, but:
The available storage bandwidth in the guests is at most ~30 MByte/s,
which looks a little too slow.
During heavy I/O such as btrfs maintenance in one guest (scrub, balance,
defragment), the host develops a huge load problem: the host load climbs
to 40, 50, 60, 80 ..., even though iothreads is set to 4.
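One thing worth checking: iothreads only help if each virtio disk is explicitly pinned to one; otherwise the device emulation stays in QEMU's main event loop. A hedged libvirt domain XML sketch (image path hypothetical):

```xml
<domain type='kvm'>
  <iothreads>4</iothreads>
  <devices>
    <disk type='file' device='disk'>
      <!-- cache='none' avoids double caching via the host page cache,
           io='native' uses Linux AIO, and iothread='1' pins this disk's
           emulation to a dedicated iothread instead of the main loop -->
      <driver name='qemu' type='qcow2' cache='none' io='native' iothread='1'/>
      <source file='/var/lib/libvirt/images/guest.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
```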
Guest latency rises considerably, and NFS clients of the guest spam
"nfs: server not responding" messages.
After a few minutes the load spike resolves itself...
Sometimes the guest even loses network connectivity during high-load
situations on DRBD, such as reading from the DRBD 8/OCFS2 resource while
writing the backup to
On OCFS2, read throughput drops to 10-20 MByte/s during rsync backups of
the DRBD 8/OCFS2 resource.
About 99% of production time everything is really fine; we are worried
about the 1% when backups run and about the side effects of the load.
Has anybody seen these effects in practice?
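One way to contain the backup's side effects is to run the rsync in the idle I/O class and with low CPU priority, so guest I/O wins arbitration. A sketch as a systemd unit (unit name and paths are hypothetical; note that the idle I/O class is only honored by schedulers that implement I/O priorities, e.g. bfq, not by plain mq-deadline):

```
# /etc/systemd/system/ocfs2-backup.service (sketch)
[Unit]
Description=Throttled rsync backup of the DRBD/OCFS2 resource

[Service]
Type=oneshot
# Only use disk bandwidth nobody else wants (scheduler permitting)
IOSchedulingClass=idle
# Deprioritize the backup relative to guest vCPU threads
CPUSchedulingPolicy=batch
Nice=19
ExecStart=/usr/bin/rsync -aHAX /mnt/ocfs2/ /mnt/backup/
```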
Kind regards
Jürgen Sauer - automatiX GmbH,
+49-4209-4699, juergen.sauer at automatix.de
Managing director: Jürgen Sauer,
Court of jurisdiction: Amtsgericht Walsrode • HRB 120986
VAT ID: DE191468481 • Tax no.: 36/211/08000
GPG public key for signature verification: