[DRBD-user] SOLVED - kind of (Fwd: slow drbd over triple gigabit bonding balance-rr)

Zoltan Patay zoltanpatay at gmail.com
Sat Aug 1 03:23:28 CEST 2009

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


It turns out that once I did the testing with a 1M block size instead of 10M,
it showed the performance I expected.

sync; echo 3 > /proc/sys/vm/drop_caches # free pagecache, dentries and inodes

sync;dd if=/dev/zero of=/vz/blob bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 4.04935 seconds, 259 MB/s
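
For anyone repeating the comparison, the same write test can be run across a
range of block sizes in one go, keeping the total at roughly 1 GB each time
(just a sketch reusing the /vz/blob path from above; needs root because of
drop_caches):

for bs in 1M 2M 5M 10M; do
  sync; echo 3 > /proc/sys/vm/drop_caches   # free pagecache, dentries and inodes
  sync; dd if=/dev/zero of=/vz/blob bs=$bs count=$((1024 / ${bs%M})) 2>&1 | tail -n1
done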

I wonder if "max-buffers" and "max-epoch-size" set to 2048 have anything to
do with this.
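
If anyone wants to test that hunch, the obvious experiment would be to raise
both in the net section and re-run the dd test; the 8000 below is purely an
illustrative value, not something from this thread:

  net {
    # (other net options unchanged)
    max-buffers    8000;   # illustrative; the resource currently uses 2048
    max-epoch-size 8000;
  }

and then "drbdadm adjust OpenVZ_C1C2_B_LVM5" on both nodes.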

z

PS: I want to thank Mark for replying to the post.

---------- Forwarded message ----------
From: Zoltan Patay <zoltanpatay at gmail.com>
Date: Thu, Jul 30, 2009 at 3:57 AM
Subject: slow drbd over triple gigabit bonding balance-rr
To: drbd-user at lists.linbit.com


using "dd if=/dev/zero of=/dev/drbd26 bs=10M count=100" I get:

drbd connected
1048576000 bytes (1.0 GB) copied, 13.6526 seconds, 76.8 MB/s
1048576000 bytes (1.0 GB) copied, 13.4238 seconds, 78.1 MB/s
1048576000 bytes (1.0 GB) copied, 13.2448 seconds, 79.2 MB/s

drbd disconnected
1048576000 bytes (1.0 GB) copied, 4.04754 seconds, 259 MB/s
1048576000 bytes (1.0 GB) copied, 4.06758 seconds, 258 MB/s
1048576000 bytes (1.0 GB) copied, 4.06758 seconds, 258 MB/s
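
While the connected-mode dd runs, it can also help to watch the replication
link live to see where it tops out (standard /proc interfaces; the eth1-eth3
slave names below are my assumption, the post does not name them):

watch -n1 cat /proc/drbd                            # DRBD state plus pending/unacked counters
watch -n1 'grep -E "bond0|eth[1-3]" /proc/net/dev'  # per-NIC byte counters on the bond and its slaves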

The three (Intel) gigabit PCIe cards are bonded with balance-rr, and iperf
gives me:

iperf 0.0-10.0 sec  2.52 GBytes  2.16 Gbits/sec (276.48MB/s)

So clearly there is enough speed, both on the network and in the backend, to
support higher throughput. The boxes are connected back-to-back with
crossover cables, no switch in between.
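
For reference, a raw TCP figure like the one above can be reproduced with a
plain client/server run across the bond (a sketch; the exact flags behind the
number above are not in the post, the addresses come from the resource config
below):

iperf -s                     # on c2 (10.0.10.20)
iperf -c 10.0.10.20 -t 10    # on c1, 10-second test over the bond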

version: 8.3.0 (api:88/proto:86-89)
GIT-hash: 9ba8b93e24d842f0dd3fb1f9b90e8348ddb95829 build by phil at fat-tyre,
2008-12-18 15:26:13

global { usage-count yes; }
common { syncer { rate 650M; } }

resource OpenVZ_C1C2_B_LVM5 {
  protocol C;
  startup {degr-wfc-timeout 120;}
  disk {
    on-io-error detach;
    no-disk-flushes;
    no-md-flushes;
    no-disk-drain;
    no-disk-barrier;
  }
  net {
    cram-hmac-alg sha1;
    shared-secret "OpenVZ_C1C2_B";
    allow-two-primaries;
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
    rr-conflict disconnect;
    timeout 300;
    connect-int 10;
    ping-int 10;
    max-buffers 2048;
    max-epoch-size 2048;
  }
  syncer {rate 650M;al-extents 257;verify-alg crc32c;}
  on c1 {
    device     /dev/drbd26;
    disk       /dev/mapper/xenvg-OpenVZ_C1C2_B_LVM5;
    address    10.0.10.10:7826;
    meta-disk  /dev/mapper/xenvg-DRBD_MetaDisk[26];
  }
  on c2 {
    device     /dev/drbd26;
    disk       /dev/mapper/xenvg-OpenVZ_C1C2_B_LVM5;
    address    10.0.10.20:7826;
    meta-disk  /dev/mapper/xenvg-DRBD_MetaDisk[26];
  }
}


Some of the settings above are unsafe (no-disk-flushes; no-md-flushes); they
were turned on to see if they made any difference (they did not).
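
For anyone reusing this config, the safer form of the disk section simply
drops those overrides so DRBD keeps its default flush/barrier behaviour (a
sketch, not something from the thread):

  disk {
    on-io-error detach;
    # no-disk-flushes / no-md-flushes / no-disk-barrier / no-disk-drain removed,
    # so DRBD falls back to its default write-ordering safeguards
  }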

The two boxes are quad-core 3GHz Nehalems with 12GB of triple-channel
DDR3-1600 and six Western Digital Caviar Black 750GB HDDs in RAID10 with LVM
on top of it; the DRBD backends are carved out of LVM. Three separate Intel
gigabit PCIe cards are bonded with balance-rr and connect the boxes
back-to-back, with a fourth (onboard) gigabit card in each box toward the
outside.

The OS is Debian Etch + Backports with some custom deb packages rolled by me.
The machine is a Xen Dom0: kernel 2.6.26, Xen 3.2.1, DRBD 8.3.0.

Thanks for any help / hints in advance,

z