using "dd if=/dev/zero of=/dev/drbd26 bs=10M count=100" I get:<br><br>drbd connected<br>1048576000 bytes (1.0 GB) copied, 13.6526 seconds, 76.8 MB/s<br>1048576000 bytes (1.0 GB) copied, 13.4238 seconds, 78.1 MB/s<br>
1048576000 bytes (1.0 GB) copied, 13.2448 seconds, 79.2 MB/s<br><br>drbd disconnected<br>1048576000 bytes (1.0 GB) copied, 4.04754 seconds, 259 MB/s<br>1048576000 bytes (1.0 GB) copied, 4.06758 seconds, 258 MB/s<br>1048576000 bytes (1.0 GB) copied, 4.06758 seconds, 258 MB/s<br>
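
For reference, the "disconnected" numbers are simply the same dd re-run after dropping the replication link with drbdadm; roughly like this, on the node holding the device (resource name as in the config below):

# baseline with the peer connected
dd if=/dev/zero of=/dev/drbd26 bs=10M count=100

# drop the replication link for this resource, re-run dd, then reconnect
drbdadm disconnect OpenVZ_C1C2_B_LVM5
dd if=/dev/zero of=/dev/drbd26 bs=10M count=100
drbdadm connect OpenVZ_C1C2_B_LVM5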

The three (Intel) gigabit PCIe cards are bonded with balance-rr, and iperf gives me:

iperf 0.0-10.0 sec  2.52 GBytes  2.16 Gbits/sec  (276.48 MB/s)

So both the network and the backend clearly have enough headroom to support higher speeds. The boxes are connected back-to-back with crossover cables, no switch involved.
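
That iperf figure is a plain single-stream TCP test across the bond, roughly as below (which box ran the server does not really matter; here I assume c2 did):

# on c2 (10.0.10.20): start the server
iperf -s

# on c1: 10-second test over the bonded back-to-back link
iperf -c 10.0.10.20 -t 10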

version: 8.3.0 (api:88/proto:86-89)
GIT-hash: 9ba8b93e24d842f0dd3fb1f9b90e8348ddb95829 build by phil@fat-tyre, 2008-12-18 15:26:13

global { usage-count yes; }
common { syncer { rate 650M; } }

resource OpenVZ_C1C2_B_LVM5 {
  protocol C;
  startup { degr-wfc-timeout 120; }
  disk { on-io-error detach; no-disk-flushes; no-md-flushes; no-disk-drain; no-disk-barrier; }
  net {
    cram-hmac-alg sha1;
    shared-secret "OpenVZ_C1C2_B";
    allow-two-primaries;
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
    rr-conflict disconnect;
    timeout 300;
    connect-int 10;
    ping-int 10;
    max-buffers 2048;
    max-epoch-size 2048;
  }
  syncer { rate 650M; al-extents 257; verify-alg crc32c; }
  on c1 {
    device /dev/drbd26;
    disk /dev/mapper/xenvg-OpenVZ_C1C2_B_LVM5;
    address 10.0.10.10:7826;
    meta-disk /dev/mapper/xenvg-DRBD_MetaDisk[26];
  }
  on c2 {
    device /dev/drbd26;
    disk /dev/mapper/xenvg-OpenVZ_C1C2_B_LVM5;
    address 10.0.10.20:7826;
    meta-disk /dev/mapper/xenvg-DRBD_MetaDisk[26];
  }
}

Some of the settings above are unsafe (no-disk-flushes; no-md-flushes); they were only turned on to see whether they make any difference (they did not).

The two boxes are quad-core 3 GHz Nehalems with 12 GB of triple-channel DDR3-1600 and six Western Digital Caviar Black 750 GB HDDs in RAID10, with LVM on top; the DRBD backends are carved out of LVM. Three separate Intel gigabit PCIe cards are bonded with balance-rr and connect the boxes back-to-back, with a fourth gigabit card in each box (onboard) facing the outside.
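
For what it's worth, the bond itself looks healthy; this is how I double-check the mode and slave status (assuming the bond device is bond0 on both boxes):

# confirm round-robin mode and that all three slaves are up
cat /proc/net/bonding/bond0 | egrep 'Bonding Mode|Slave Interface|MII Status'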

The OS is Debian Etch + Backports with some custom deb packages rolled by me. Both machines are Xen Dom0s: kernel 2.6.26, Xen 3.2.1, DRBD 8.3.0.

Thanks for any help / hints in advance,

z