Re: [DRBD-user] Re: drbd is performing at 12 MB/sec on recent

Bart Coninckx bart.coninckx at telenet.be
Sun Oct 19 17:58:52 CEST 2008



On Saturday 18 October 2008 21:18, Petersen, Joerg wrote:
> Well, my experience is that 8.0.7 is the fastest DRBD...
> Later versions just got slower!
> Try: no-disk-flushes
> and: no-md-flushes
> if you are using DRBD >= 8.0.13
> Not quite the same as 8.0.7, but close to it...
>
> Best regards,
> Jörg
>
> -----Original Message-----
> From: drbd-user-bounces at lists.linbit.com
> [mailto:drbd-user-bounces at lists.linbit.com] On behalf of Bart Coninckx
> Sent: Friday, 17 October 2008 16:40
> To: drbd-user at lists.linbit.com
> Subject: [DRBD-user] Re: drbd is performing at 12 MB/sec on recent
>
> > On Thu, Oct 16, 2008 at 10:19:20AM +0200, Bart Coninckx wrote:
> > > > fix your local io subsystem.
> > > > fix your local io subsystem driver.
> > > >
> > > > check
> > > >  local io subsystem,
> > > >  (battery backed) write cache enabled?
> > >
> > > Yes, I did that yesterday; I went from 13 MB/sec to 16 MB/sec.
> > > Better, but still not good.
> > >
> > > >  raid resync/rebuild/re-whatever going on at the same time?
> > >
> > > nope, RAID has been built and is steady.
> > >
> > > >  does the driver honor BIO_RW_SYNC?
> > >
> > > errr ... how can I check that? /proc/driver/cciss/cciss0 does not
> > > reveal anything about that.
> > >
> > > >  does the driver introduce additional latency  because of
> > > > ill-advised "optimizations"?
> > >
> > > I use a stock SLES 10 SP1 cciss driver. I did understand that the
> > > I/O scheduler is not the best one. Perhaps you are referring to that?
> > >
> > > > if local io subsystem is ok,
> > > > but DRBD costs more than a few percent in throughput, check your
> > > > local io subsystem on the other node!
> > >
> > > That node is identical to the other one.
> > >
> > > > if that is ok as well, check network for throughput, latency and
> > > > packet loss/retransmits.
> > >
> > > scp gives me about 70 MB/sec, so my guess is that things are more
> > > likely related to drbd and local io.
> >
> > scp does not know about fsync.
> >
> > use the same benchmark against drbd and non-drbd partition on that
> > cciss...
>
> Hi Lars,
>
> I did these:
>
> node1:/opt # hdparm -t /dev/drbd0
>
> /dev/drbd0:
>  Timing buffered disk reads:  1016 MB in  3.00 seconds = 338.34 MB/sec
>
> node1:/opt # hdparm -t /dev/mapper/vg1-opt
>
> /dev/mapper/vg1-opt:
>  Timing buffered disk reads:  730 MB in  3.01 seconds = 242.79 MB/sec
>
> node1:/opt # hdparm -t /dev/cciss/c0d0p3
>
> /dev/cciss/c0d0p3:
>  Timing buffered disk reads:  1666 MB in  3.01 seconds = 554.10 MB/sec
>
>
> The first one is obviously drbd (on top of LVM). The second one is /opt in
> the same LVM volume group. The third is the root partition, outside of LVM.
>
>
> This would suggest LVM slows things down somewhat. However, I did set up
> drbd outside of LVM and this gave no different sync results (I did no
> performance tests once synced, since the syncing took so long).
>
> Do these tests make any sense?
>
>
> thx!
>
>
> Bart
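[An aside on the benchmarks quoted above: Lars's point that "scp does not know about fsync" applies to hdparm as well. hdparm -t times buffered sequential reads, which never touch DRBD's synchronous write path, so it cannot show where the 12-16 MB/sec is lost. A hedged sketch of a write-path test instead, using GNU dd's conv=fdatasync so the reported rate includes the flush to disk; TEST_DIR is a placeholder, to be pointed first at a filesystem on the DRBD device and then at one on the plain cciss device for comparison:]

```shell
# Sketch only: dd with conv=fdatasync measures the synchronous write
# path that scp and "hdparm -t" (buffered reads) never exercise.
# TEST_DIR is a placeholder mount point, not a path from the thread.
TEST_DIR=${TEST_DIR:-/tmp}

# 64 MiB sequential write; dd reports the rate only after fdatasync(),
# so the number includes the flush-to-disk (and, on DRBD, replication) cost.
dd if=/dev/zero of="$TEST_DIR/ddtest.bin" bs=1M count=64 conv=fdatasync

rm -f "$TEST_DIR/ddtest.bin"
```

[Run once per device and compare the two rates; the gap between them, not the absolute read numbers, is what DRBD actually costs.]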

Hi Joerg,

I'm using drbd-0.7.22, since I've installed SLES 10 SP1. As you probably 
know, one loses support for software versions other than those supplied with 
the DVDs. 


Rgds,

Bart
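
[For reference, Joerg's flush-related tuning above would look roughly like this in a DRBD 8.0.13+ drbd.conf. A sketch only: the resource name is a placeholder, the rest of the resource section is elided, and these options do not exist in the 0.7 series Bart is running:]

```
resource r0 {          # "r0" is a placeholder resource name
  disk {
    no-disk-flushes;   # do not send flushes to the backing device
    no-md-flushes;     # do not flush after meta-data updates
  }
  # ... remaining resource configuration ...
}
```

[Disabling flushes is generally only safe with a battery-backed write cache, as discussed earlier in the thread.]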
