[DRBD-user] no-disk-flushes ineffective?

Andrew (Anything) anything at starstrike.net
Wed Mar 18 12:57:38 CET 2009

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hi Gordan.

I'm pretty sure I tried no-disk-drain as well, in desperation. I'll redo it
soon and post some benchmark results for you; a sketch of what that disk
section would look like follows the config below.
The config is an almost unmodified default copy; I've cut out all the
comments here.


global {
  usage-count yes;
}

common {
  syncer {
    rate 100M;
  }
}

resource r0 {
  protocol C;

  handlers {
    pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
    pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
    local-io-error    "echo o > /proc/sysrq-trigger ; halt -f";
  }

  startup {
  }

  disk {
    on-io-error detach;

    no-disk-barrier;
    no-disk-flushes;
    no-md-flushes;
  }

  net {
    sndbuf-size 0;
    allow-two-primaries;
    cram-hmac-alg "sha1";
    shared-secret "xxxxxxxxxxx";
    after-sb-0pri discard-younger-primary;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
    rr-conflict disconnect;
  }

  syncer {
    rate 100M;
    al-extents 1201;
  }

  on ocfstest1 {
    device    /dev/drbd0;
    disk      /dev/hdb;
    address   192.168.10.79:7788;
    meta-disk internal;
  }

  on ocfstest2 {
    device    /dev/drbd0;
    disk      /dev/hdb;
    address   192.168.10.40:7788;
    meta-disk internal;
  }
}
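
For the no-disk-drain retest, the disk section would presumably end up like
this (a sketch only; no-disk-drain is the fourth write-ordering option in
the 8.2/8.3 series, and the docs advise disabling all four only on hardware
with a battery-backed write cache):

  disk {
    on-io-error detach;

    # untested sketch: all four write-ordering workarounds disabled
    no-disk-barrier;
    no-disk-flushes;
    no-disk-drain;
    no-md-flushes;
  }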

-----Original Message-----
From: drbd-user-bounces at lists.linbit.com
[mailto:drbd-user-bounces at lists.linbit.com] On Behalf Of Gordan Bobic
Sent: Wednesday, 18 March 2009 7:47 PM
To: drbd-user at lists.linbit.com
Subject: Re: [DRBD-user] no-disk-flushes ineffective?

Can you post the contents of your /proc/drbd?
You might also want to add "no-disk-drain" and see if that helps.

Gordan
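
Both suggestions are one-liners on a DRBD 8.x node; a minimal sketch,
assuming the resource is named r0 as in the config above:

  # connection state, replication protocol and I/O counters
  cat /proc/drbd

  # push a drbd.conf change (e.g. an added no-disk-drain) into the
  # running resource without restarting it
  drbdadm adjust r0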

Andrew (Anything) wrote:
> Hi All
> 
> I've recently started trying to use DRBD + ocfs2 in a dual node setup and
> have had issues with very slow write performance.
> 
> Adding no-disk-barrier, no-disk-flushes and no-md-flushes seems to have
> worked for hundreds of different Google results, and missing flush
> support sounds like the reason my DRBD disk is so slow.
> 
> So I thought I'd fire up a test on two virtual machines (on physically
> different hosts) to see the sort of change this might make.
> These two virtual machines are far from speedy, and are only on 100Mbit
> (direct interconnect), but I expected to see at least some sort of
> improvement.
> 
> All I've done is add the no-flush lines to the disk section of r0 on both
> servers.
>  no-disk-barrier;
>  no-disk-flushes;
>  no-md-flushes;
> 
> They're currently running v8.2.7, but I checked v8.3.0 as well as
> v8.0.14 (on an older kernel) just in case, all with pretty much the
> same results.
> 
> I'm hoping someone can see clear as day what I've missed; I've included
> some of my benchmark results.
> 
> Thanks in advance.
> Andy..
> 
> 
> ##### raw disk baseline:
> # dd if=/dev/zero of=/dev/sdb bs=512k count=1000 oflag=direct
> 1000+0 records in
> 1000+0 records out
> 524288000 bytes (524 MB) copied, 15.0304 seconds, 34.9 MB/s
> 
> # dd if=/dev/zero of=/dev/sdb bs=512 count=1000 oflag=direct
> 1000+0 records in
> 1000+0 records out
> 512000 bytes (512 kB) copied, 0.351227 seconds, 1.5 MB/s
> 
> 
> ##### without flushes:
> #single node
> # dd if=/dev/zero of=/dev/drbd0 bs=512k count=1000 oflag=direct
> 1000+0 records in
> 1000+0 records out
> 524288000 bytes (524 MB) copied, 15.0428 seconds, 34.9 MB/s
> 
> # dd if=/dev/zero of=/dev/drbd0 bs=512 count=1000 oflag=direct
> 1000+0 records in
> 1000+0 records out
> 512000 bytes (512 kB) copied, 0.367788 seconds, 1.4 MB/s
> 
> #dual node
> # dd if=/dev/zero of=/dev/drbd0 bs=512k count=1000 oflag=direct
> 1000+0 records in
> 1000+0 records out
> 524288000 bytes (524 MB) copied, 51.0372 seconds, 10.3 MB/s
> 
> # dd if=/dev/zero of=/dev/drbd0 bs=512 count=1000 oflag=direct
> 1000+0 records in
> 1000+0 records out
> 512000 bytes (512 kB) copied, 2.03025 seconds, 252 kB/s
> 
> 
> ##### with flushes:
> #single node
> # dd if=/dev/zero of=/dev/drbd0 bs=512k count=1000 oflag=direct
> 1000+0 records in
> 1000+0 records out
> 524288000 bytes (524 MB) copied, 17.5014 seconds, 30.0 MB/s
> 
> # dd if=/dev/zero of=/dev/drbd0 bs=512 count=1000 oflag=direct
> 1000+0 records in
> 1000+0 records out
> 512000 bytes (512 kB) copied, 0.420332 seconds, 1.2 MB/s
> 
> #dual node
> # dd if=/dev/zero of=/dev/drbd0 bs=512k count=1000 oflag=direct
> 1000+0 records in
> 1000+0 records out
> 524288000 bytes (524 MB) copied, 49.6752 seconds, 10.6 MB/s
> 
> # dd if=/dev/zero of=/dev/drbd0 bs=512 count=1000 oflag=direct
> 1000+0 records in
> 1000+0 records out
> 512000 bytes (512 kB) copied, 1.99413 seconds, 257 kB/s
> 
> 
> # drbdsetup /dev/drbd0 show
> disk {
>         size                    0s _is_default; # bytes
>         on-io-error             detach;
>         fencing                 dont-care _is_default;
>         no-disk-barrier ;
>         no-disk-flushes ;
>         no-md-flushes   ;
>         max-bio-bvecs           0 _is_default;
> }
> ..
> al-extents              1201;
> ..
> sndbuf-size             0; # bytes (larger buffers had slower results
>                            # in the small-files test)
> 
> 
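
The matrix above is straightforward to re-run when trying option changes.
A minimal sketch of the test as described (the device path is an example;
dd overwrites the target, so never point it at a device holding data):

  #!/bin/sh
  # large blocks: streaming throughput, dominated by raw bandwidth
  dd if=/dev/zero of=/dev/drbd0 bs=512k count=1000 oflag=direct
  # small blocks: per-request latency, where flushes and network round
  # trips dominate -- the case that collapses to ~250 kB/s dual-node
  dd if=/dev/zero of=/dev/drbd0 bs=512 count=1000 oflag=direct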

_______________________________________________
drbd-user mailing list
drbd-user at lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user



