Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
On Thu, Apr 25, 2013 at 05:28:50AM -0700, Louis Voo wrote:
> Hi,
>
> I'm doing some DRBD 8.4.3 tests and found some strange results.
> When I run a dd test on a 5G partition, it is slower than when I run it on a 100G partition. Why?
And without DRBD?
And on the other node?
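One way to take DRBD out of the picture and compare the backing
devices themselves (a sketch only, using the LV names from your
config below; reading is harmless, but a write test against the raw
LVs would destroy the filesystems on them):

# dd if=/dev/vmVg/ddtest of=/dev/null bs=1M count=1024 iflag=direct
# dd if=/dev/vmVg/drbd   of=/dev/null bs=1M count=1024 iflag=direct

If those two already differ, the difference comes from below DRBD.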
Did you know that single disk throughput can vary greatly
depending on LBA?
I have seen large single disks deliver less than half the throughput
at high LBAs compared with what they can do at low LBAs.
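You can see that effect directly by reading from the start and from
near the end of the raw disk (a rough sketch; it assumes roughly 1 TB
disks as your mdstat output suggests, so adjust skip to just below
your actual disk size):

# dd if=/dev/sda of=/dev/null bs=1M count=1024 iflag=direct
# dd if=/dev/sda of=/dev/null bs=1M count=1024 skip=900000 iflag=direct

The first reads at low LBAs (outer tracks), the second at high LBAs
(inner tracks).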
So what is the on-disk layout of your LVs?
# lvs --segment -o +seg_pe_ranges
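You can also look at it from the PV side (assuming the /dev/md3 PV
from your pvs output below):

# pvdisplay --maps /dev/md3

Roughly speaking, lower physical extent numbers sit at lower LBAs on
the underlying disks, so an LV allocated later may well end up in the
slower region.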
Lars
> Here is my setup
>
> root at server9:/mnt/drbd# cat /proc/mdstat
> md3 : active raid1 sdb5[1] sda5[0]
> 972524352 blocks super 1.2 [2/2] [UU]
>
> root at server9:/mnt/drbd# pvs
>   PV        VG   Fmt  Attr PSize   PFree
>   /dev/md3  vmVg lvm2 a-   927.47g 632.47g
>
> root at server9:/mnt/drbd# lvs
>   LV     VG   Attr   LSize   Origin Snap% Move Log Copy% Convert
>   ddtest vmVg -wi-ao   5.00g
>   drbd   vmVg -wi-ao 100.00g
>
>
>
> root at server9:/mnt/drbd# cat /etc/drbd.conf
> global {
>     usage-count no;
> }
> common {
>     protocol C;
>     startup {
>         degr-wfc-timeout 60;
>         wfc-timeout 30;
>     }
>
>     net {
>         allow-two-primaries;   ### For Primary/Primary ###
>         after-sb-0pri discard-zero-changes;
>         after-sb-1pri violently-as0p;
>         after-sb-2pri violently-as0p;
>         sndbuf-size 0;
>     }
>
>     syncer {
>         rate 200M;
>         verify-alg sha1;
>     }
> }
>
>
> resource ddtest {
>     disk      /dev/vmVg/ddtest;
>     device    /dev/drbd4;
>     meta-disk internal;
>
>     on server9 {
>         address 10.0.0.150:7792;
>     }
>
>     on server10 {
>         address 10.0.0.151:7792;
>     }
> }
>
> resource drbd {
>     disk      /dev/vmVg/drbd;
>     device    /dev/drbd5;
>     meta-disk internal;
>
>     on server9 {
>         address 10.0.0.150:7793;
>     }
>
>     on server10 {
>         address 10.0.0.151:7793;
>     }
> }
>
>
>
> Both partitions are using the ext4 file system.
>
> The dd command I used to run the test:
> dd bs=1M count=1024 if=/dev/zero of=test conv=fdatasync && dd bs=1M count=1024 if=/dev/zero of=test conv=fdatasync && dd bs=1M count=1024 if=/dev/zero of=test conv=fdatasync
>
>
> The results I get for the 5G partition are:
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB) copied, 11.0991 s, 96.7 MB/s
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB) copied, 10.6238 s, 101 MB/s
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB) copied, 10.5732 s, 102 MB/s
>
>
>
> Results for the 100G partition are:
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB) copied, 8.34405 s, 129 MB/s
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB) copied, 8.26927 s, 130 MB/s
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB) copied, 8.28674 s, 130 MB/s
>
>
>
> Results on the root partition, no DRBD and no LVM, only RAID1:
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB) copied, 8.34761 s, 129 MB/s
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB) copied, 8.60441 s, 125 MB/s
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB) copied, 9.08852 s, 118 MB/s
>
>
>
> Regards
> Louis
--
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com
DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
__
please don't Cc me, but send to list -- I'm subscribed