Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Maybe you can increase the syncer rate value: instead of 50M, try 700M.
Also, what is the output of the command "netstat -i"? Are your devices
showing TX or RX errors?

--
Jeronimo Zucco
LPIC-1 Linux Professional Institute Certified
Núcleo de Processamento de Dados
Universidade de Caxias do Sul
http://jczucco.blogspot.com
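For reference, the suggested change would look like this in the syncer
section of drbd.conf (700M is the figure suggested above, not a verified
value; in DRBD 8.0 the syncer rate only caps resynchronization traffic,
not ongoing replication under protocol C):

    syncer {
        rate 700M;      # was 50M
        al-extents 257;
    }

For the interface check, run "netstat -i" on both nodes and look at the
RX-ERR/TX-ERR and RX-DRP/TX-DRP columns; non-zero counters there point at
NIC, driver, duplex, or cabling problems.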
Quoting Ralf Schenk <rs at databay.de>:

> Hello!
>
> I have a performance problem running DRBD 8.0.1 on a XEN Domain-0 (XEN
> 3.0.4-1, kernel 2.6.16.33, all built from source; the distribution is
> Ubuntu Dapper Server).
>
> DRBD is working, but it's dead slow.
>
> The network is a Gigabit crosslink-cabled network (Intel onboard PCIe
> E1000 card). The servers are two dual-Xeon 5130 machines (Supermicro).
> The storage system is a 3ware 9550SX controller with three 500 GB
> Seagate SATA HDDs as RAID 5. I optimized controller performance by
> setting some /sys/block/sda/queue/XXX parameters as advised by 3ware,
> and I found a comment from one of the DRBD authors recommending
> "echo 1024 > /sys/block/sda/queue/nr_requests" because of the queue
> depth of the 3ware driver/controller. I also tested with the standard
> settings.
>
> The network interface shows a throughput of about 985 Mbit/s measured
> with iperf. For low CPU usage I set the interfaces to MTU 9000 (jumbo
> frames), but I also tested with an MTU of 1500.
>
> Speed on the LVM volume:
>   dm -a 0 -o /dev/mb01a/shared -b 1M -s 3g -m -y -p
>   64.03 MB/sec (3221225472 B / 00:47.977005)
>
> Speed on the DRBD device built on the above volume in _disconnected_
> mode:
>   dm -a 0 -o /dev/drbd2 -b 1M -s 3g -m -y -p
>   34.95 MB/sec (3221225472 B / 01:27.886889)
>
> That's already disappointing (I hoped to get 75% of the native
> performance out of the DRBD device, and my network connection could
> take that easily).
>
> This is what I get in connected mode on the primary:
>   dm -a 0 -o /dev/drbd2 -b 1M -s 3g -m -y -p
>   2.66 MB/sec (3221225472 B / 19:13.836492)
>
> That's hard. Are there any experiences like this out there? I think I
> have a problem in my I/O subsystem, perhaps related to the XEN/DRBD
> combination.
>
> All tests were done in a minimal Domain-0 limited to 768 MB of memory,
> with no other domains or processes running except an SSH server.
>
> I already tried a bunch of max-buffers/sndbuf-size/unplug-watermark
> etc. settings, but I couldn't increase the data transfer rate above
> 5-6 MB/sec. I also tried the use-bmbv setting in the disk section,
> which didn't help.
>
> This is the latest config:
>
> resource "shared" {
>   protocol C;
>   startup {
>     wfc-timeout 0;          ## Infinite!
>     degr-wfc-timeout 120;   ## 2 minutes.
>   }
>   disk {
>     on-io-error detach;
>     use-bmbv;
>   }
>   net {
>     # allow-two-primaries;
>     cram-hmac-alg sha1;
>     shared-secret "EV8CbHPLwzIW";
>     # timeout 60;
>     # connect-int 10;
>     # ping-int 10;
>     # sndbuf-size 1M;
>     max-buffers 8192;
>     max-epoch-size 1024;
>     # unplug-watermark 32768;
>   }
>   syncer {
>     rate 50M;
>     al-extents 257;
>   }
>
>   on megabad01a {
>     device /dev/drbd2;
>     disk /dev/mb01a/shared;
>     address 10.10.10.1:7791;
>     meta-disk /dev/mb01a/drbd[2];
>   }
>
>   on megabad01b {
>     device /dev/drbd2;
>     disk /dev/mb01b/shared;
>     address 10.10.10.2:7791;
>     meta-disk /dev/mb01b/drbd[2];
>   }
> }
>
> Bye
> --
> __________________________________________________
>
> Ralf Schenk
> fon (02 41) 9 91 21-0
> fax (02 41) 9 91 21-59
> rs at databay.de
> __________________________________________________
>
> _______________________________________________
> drbd-user mailing list
> drbd-user at lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user

----------------------------------------------------------------
This message was sent using IMP, the Internet Messaging Program.
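A minimal shell sketch of the block-queue tuning mentioned in the quoted
message, assuming the 3ware array shows up as /dev/sda as in the post
(nr_requests=1024 is the value from the thread; the second command is just
verification):

    # Raise the block-layer queue depth to match the deep hardware
    # queue of the 3ware 9550SX driver/controller:
    echo 1024 > /sys/block/sda/queue/nr_requests

    # Confirm the setting took effect:
    cat /sys/block/sda/queue/nr_requests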