Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Hello,

I'm trying to use DRBD as storage for virtualised guests. The DRBD device is created on top of an LVM partition, and all LVM partitions reside on a single software RAID1 array (2 disks).

In this setup DRBD is supposed to operate in standalone mode most of the time; network connections only come into play when migrating a guest to another host. (That's why I can't stack RAID - DRBD - LVM: there can be more than 2 hosts, and guests need to be able to migrate anywhere.)

So I ran a very simple benchmark: I created an LVM partition, wrote into it, and then read the data back:

  # dd if=/dev/zero of=/dev/mapper/obrazy2-pokus \
        bs=$((1024**2)) count=16384
  # dd if=/dev/mapper/obrazy2-pokus of=/dev/null \
        bs=$((1024**2)) count=16384

Both tests yielded about 80 MB/s throughput. Then I created a DRBD device on top of that LVM partition and repeated the test:

  # /sbin/drbdmeta 248 v08 /dev/mapper/obrazy2-pokus internal create-md
  # drbdsetup 0 disk \
        /dev/mapper/obrazy2-pokus /dev/mapper/obrazy2-pokus \
        internal --set-defaults --create-device
  # drbdsetup 0 primary -o

Read performance was the same, but write throughput dropped to about 35 MB/s, or about 45 MB/s with flushes disabled (drbdsetup ... -a -i -D -m).

I'd understand that if the device were connected over the network, but in standalone mode I was expecting DRBD to perform roughly the same as the underlying storage.

My question: is this drop in write throughput normal, or could there be some error in my setup that is causing it?

System setup: Debian, kernel 2.6.34 from kernel.org, drbd-utils 8.3.7. Also tested with kernel 2.6.35.4 from kernel.org.

Regards,
J.B.
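P.S. For reference, here is what I believe the equivalent drbd.conf resource would look like (a minimal sketch; the hostnames and addresses below are made up, and only the disk section is relevant to the flush question -- to my understanding the -a, -i, -D and -m flags of drbdsetup map to the four disk options shown):

  resource pokus {
      protocol C;

      disk {
          # disable all write-ordering methods and meta-data flushes,
          # equivalent to: drbdsetup ... -a -i -D -m
          no-disk-barrier;
          no-disk-flushes;
          no-disk-drain;
          no-md-flushes;
      }

      on hostA {                         # hypothetical hostname
          device    /dev/drbd0;
          disk      /dev/mapper/obrazy2-pokus;
          address   10.0.0.1:7789;       # hypothetical address
          meta-disk internal;
      }

      on hostB {                         # hypothetical hostname
          device    /dev/drbd0;
          disk      /dev/mapper/obrazy2-pokus;
          address   10.0.0.2:7789;       # hypothetical address
          meta-disk internal;
      }
  }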