Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Hi Philipp,

Sorry for my late reply. I made the adjustments you mentioned, but I was
still getting the same results. Taking the VMs out of the picture, I did a
few transfer tests between the machines using rcp. I no longer have the
results, but I recall getting roughly the same speeds on both, and they
were what I expected from the network.

However, I've since been testing on another cluster made up of identical
hardware, and there I'm getting the expected results. I appreciate the
replies. If I can make the time, I'll see if I can find the cause of
whatever it was.

Regards,
Richard.

On Mon, Apr 27, 2015 at 6:30 PM, Philipp Marek <philipp.marek at linbit.com> wrote:
>
>> I'm doing some testing with Ganeti, a cluster-based VM manager, which
>> uses DRBD as the default syncing system between nodes. My test cluster
>> has two nodes, Alpha & Beta. Ganeti provides a way to migrate a VM from
>> one node to the other, handling DRBD automatically so that it syncs to
>> the node that isn't running the VM.
>>
>> I'm running into an issue that I'm having difficulty understanding.
>> When benchmarking from within the VM, I'm getting drastically
>> different results depending on the node the VM is running on.
>>
>> The DRBD benchmarks are below:
> ...
>> For comparison, I've done benchmarks with DRBD turned off. These are below:
>> Alpha:
>> Sequential Write: 206.250 MB/s
> ...
>> Beta:
>> Sequential Write: 207.551 MB/s
>
> As the direct write speeds look similar, I'd guess it's network related.
>
> Try turning the various optimizations off (TCP checksum offloading etc.)
> - if one of the machines generates bad CRCs every so many packets, it will
> mostly limit the data stream in the Primary => Secondary direction,
> as that direction carries far more packets.
>
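[Editor's note: for readers hitting something similar, a rough sketch of the
checks discussed in this thread. The interface name eth0, the hostname beta
and the port number are assumptions; adjust them to the actual DRBD
replication link on your nodes.]

    # Raw TCP throughput between the nodes, independent of DRBD and the VMs
    # (an alternative to the rcp copy test mentioned above).
    #   On Beta (OpenBSD-style nc; traditional nc needs "-l -p 5001"):
    nc -l 5001 > /dev/null
    #   On Alpha:
    dd if=/dev/zero bs=1M count=1000 | nc beta 5001

    # Per-NIC error/drop counters; a steadily rising CRC error count points
    # at a link or NIC generating bad checksums:
    ip -s link show eth0
    ethtool -S eth0 | grep -iE 'err|drop|crc'

    # Show the current offload settings, then turn the common ones off
    # (not persistent across reboots) and re-run the Primary => Secondary
    # benchmark to see whether the asymmetry disappears:
    ethtool -k eth0
    ethtool -K eth0 tx off rx off tso off gso off gro off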