OK, so we tested the following settings:

  # blockdev --setra 8 /dev/sda    (default)
  # echo 512 > /sys/block/sda/queue/nr_requests
  # echo deadline > /sys/block/sda/queue/scheduler
  # echo 20 > /proc/sys/vm/dirty_background_ratio
  # echo 60 > /proc/sys/vm/dirty_ratio

Copying a 2.7 GB ISO file from /tmp to an LV on top of DRBD
(/dev/vgmirror/test) took 7:43 min.

Then we raised the read-ahead (the only change):

  # blockdev --setra 16384 /dev/sda
  # echo 512 > /sys/block/sda/queue/nr_requests
  # echo deadline > /sys/block/sda/queue/scheduler
  # echo 20 > /proc/sys/vm/dirty_background_ratio
  # echo 60 > /proc/sys/vm/dirty_ratio

The same copy now took 0:35 min.

Copying the same file directly onto the DRBD device took 3:42 min.

Then we attached the same physical LV /dev/vgmirror/test to a DomU:

  # xm block-attach 10 phy:/dev/vgmirror/test /dev/xvdb w

and copied the ISO file from the host's /tmp to DomU:/mnt, which is
mounted on /dev/vgmirror/test:

  # scp /tmp/*.iso domU:/mnt/

After copying approximately 1 GB, the Ethernet transfer rate drops
back to 10 Mb/s; after approximately 2 GB the copy stalls (total
time 5:09 min). I don't understand this behavior.

Hardware / Software:

  Two Supermicro servers, 8 GB RAM, 2x quad-core Xeon 5x,
  3ware 9550 RAID 5 with 6x 320 GB Seagate drives.
  The network for DRBD between the two systems is a direct 10 GbE
  link (Intel 82598EB 10GbE AF network adapter), used only for
  DRBD sync.
  SLES 10 SP2, kernel 2.6.16.60-0.39.3-xen #1 SMP x86_64
  DRBD 8.2.6

Stack: RAID 5 > phys. disk > LVM > DRBD > LVM > Xen > VBD,
with one LV on top of DRBD used as OCFS2 for the DomUs.

/etc/drbd.conf
-----------------
global {
    usage-count yes;
    # minor_count 5;
    # dialog-refresh 5;
    disable-ip-verification;
}
common {
    syncer {
        rate 600M;
        al-extents 257;
    }
}
resource drbd0 {
    protocol C;
    handlers {
        split-brain "/usr/lib/drbd/notify-split-brain.sh root";
    }
    startup {
        wfc-timeout 0;
        degr-wfc-timeout 120;
        become-primary-on both;
    }
    disk {
        on-io-error detach;
        max-bio-bvecs 1;
    }
    net {
        allow-two-primaries;
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
    }
    on enterprise5 {
        device /dev/drbd0;
        disk /dev/vglocal/lvlocal;
        address 192.168.1.15:7788;
        flexible-meta-disk internal;
    }
    on enterprise6 {
        device /dev/drbd0;
        disk /dev/vglocal/lvlocal;
        address 192.168.1.16:7788;
        flexible-meta-disk internal;
    }
}

Why is write performance so low? What else can I do to get better
write performance?

Help me, please.

Regards,
Andreas
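For comparison, the copy times reported above can be turned into rough
effective throughput figures. A quick sketch (the function name is just
illustrative, and it assumes the 2.7 GB ISO is about 2700 MB; adjust if
the file is really 2.7 GiB):

```sh
#!/bin/sh
# Rough effective write throughput for each of the tests above.
size_mb=2700    # assumed size of the 2.7 GB ISO in MB

# throughput LABEL MINUTES SECONDS -> prints "LABEL: N MB/s"
throughput() {
    total_s=$(( $2 * 60 + $3 ))
    echo "$1: $(( size_mb / total_s )) MB/s"
}

throughput "setra 8, /tmp -> LV on DRBD"     7 43   # ~5 MB/s
throughput "setra 16384, /tmp -> LV on DRBD" 0 35   # ~77 MB/s
throughput "directly onto the DRBD device"   3 42   # ~12 MB/s
```

This makes the gap concrete: the read-ahead change takes the buffered
copy from about 5 MB/s to about 77 MB/s, while writing straight to the
DRBD device still sits at roughly 12 MB/s.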