Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
I will coalesce a couple of replies, thanks everyone :)

On 21/05/12 07:14, Florian Haas wrote:
> Matthew,
>
> On Wed, May 16, 2012 at 10:11 PM, Matthew Bloch <matthew at bytemark.co.uk> wrote:
>> I'm trying to understand a symptom for a client who uses drbd to run
>> sets of virtual machines between three pairs of servers (v1a/v1b,
>> v2a/v2b, v3a/v3b), and I wanted to understand a bit better how DRBD I/O
>> is buffered depending on what mode is chosen, and buffer settings.
>
> When you say virtual machines, how exactly are they being virtualized?
> VMware? Libvirt/KVM? Xen?

These are KVM-based, but the DRBD runs outside the VMs, on the host, and
the /dev/drbdX devices are presented as the VMs' /dev/vda.  So I'm not sure
how the VMs' networking could have anything to do with it, particularly as
the DRBD replication goes over a separate interface.

I'm replicating the test on the host, just writing directly to a new
/dev/drbd device, to check whether I see the same performance drops while
_not_ going via KVM (rough sketch of the test in the P.S. below).  I can't
see how anything guest-side could affect it though, when the problem
manifests itself in the hosts' drbd.

Interesting about bandwidth - so DRBD doesn't have any special buffers of
its own, it just sits on the usual TCP buffers.  That makes sense.  As I
said, though, the interface stats do not show these servers sending any
more DRBD traffic than the pairs of servers that are working fine.

Thanks for the accounts Pascal and Felix - though Felix, I'm pretty certain
Debian/lenny's kernel had a virtio bug that does cause its networking to
break and require an "rmmod virtio_net; modprobe virtio_net" to fix.
That's nothing to do with drbd, and your problem may be entirely separate
from that as well :)

--
Matthew Bloch
Bytemark Hosting
http://www.bytemark.co.uk/
tel: +44 (0) 1904 890890
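P.S. For reference, the host-side test is roughly along these lines - a
minimal sketch only, with the resource number (drbd9), the write size and
the replication interface name (eth1) as placeholders for whatever the
real setup uses:

  # Write straight to the DRBD device on the host, bypassing KVM entirely;
  # oflag=direct skips the page cache so the figures reflect DRBD and the
  # replication link rather than host caching.
  dd if=/dev/zero of=/dev/drbd9 bs=1M count=1024 oflag=direct

  # In another terminal, watch the resource state and the byte counters on
  # the dedicated replication interface while the write runs:
  watch -n1 cat /proc/drbd
  watch -n1 'grep eth1 /proc/net/dev'

If the same stalls show up here with the interface counters staying flat,
that points at the host/drbd side rather than anything the guests are
doing.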