[DRBD-user] Re: drbd is performing at 12 MB/sec on recent

Bart Coninckx bart.coninckx at telenet.be
Fri Oct 17 16:39:47 CEST 2008



> On Thu, Oct 16, 2008 at 10:19:20AM +0200, Bart Coninckx wrote:
> > > fix your local io subsystem.
> > > fix your local io subsystem driver.
> > >
> > > check
> > >  local io subsystem,
> > >  (battery backed) write cache enabled?
> >
> > Yes, I did that yesterday; throughput went from 13 MB/sec to 16 MB/sec.
> > Better, but still not good.
> >
> > >  raid resync/rebuild/re-whatever going on at the same time?
> >
> > nope, RAID has been built and is steady.
> >
> > >  does the driver honor BIO_RW_SYNC?
> >
> > errr ... how can I check that? /proc/driver/cciss/cciss0 does not reveal
> > anything about that.
> >
> > >  does the driver introduce additional latency
> > >  because of ill-advised "optimizations"?
> >
> > I use the stock SLES 10 SP1 cciss driver. I understand that the I/O
> > scheduler is not the best one. Perhaps you are referring to that?
> >
> > > if local io subsystem is ok,
> > > but DRBD costs more than a few percent in throughput,
> > > check your local io subsystem on the other node!
> >
> > That node is identical to the other one.
> >
> > > if that is ok as well, check network for throughput,
> > > latency and packet loss/retransmits.
> >
> > scp gives me about 70 MB/sec, so my guess is that things are more likely
> > related to drbd and local io.
>
> scp does not know about fsync.
>
> use the same benchmark against drbd and non-drbd partition on that cciss...

Hi Lars,

I did these:

node1:/opt # hdparm -t /dev/drbd0

/dev/drbd0:
 Timing buffered disk reads:  1016 MB in  3.00 seconds = 338.34 MB/sec

node1:/opt # hdparm -t /dev/mapper/vg1-opt

/dev/mapper/vg1-opt:
 Timing buffered disk reads:  730 MB in  3.01 seconds = 242.79 MB/sec

node1:/opt # hdparm -t /dev/cciss/c0d0p3

/dev/cciss/c0d0p3:
 Timing buffered disk reads:  1666 MB in  3.01 seconds = 554.10 MB/sec


The first one is obviously drbd (on top of LVM).
The second one is /opt in the same LVM volume group.
The third is the root partition, outside of LVM.
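
One caveat with the numbers above: hdparm -t only measures buffered reads,
while the fsync-bound path is writes. A write test along these lines (a
sketch; the target path is an assumption, defaulting to /tmp for a dry run)
would exercise the path Lars mentioned:

```shell
#!/bin/sh
# Sketch of an fsync-aware write benchmark. TARGET is an assumed scratch
# path: point it at a file on the filesystem under test (the drbd-backed
# mount vs. one on the plain cciss partition). Never point dd at a raw
# device holding live data.
TARGET=${TARGET:-$(mktemp /tmp/ddtest.XXXXXX)}

# conv=fsync makes dd call fsync() before reporting its MB/s figure, so the
# number reflects data actually committed to disk (and, through drbd, to
# the peer), unlike a buffered read test. Raise count for a steadier figure.
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fsync

rm -f "$TARGET"
```

Running it once against each filesystem gives comparable write numbers.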


This would suggest LVM slows things down somewhat. However, I did set up drbd 
outside of LVM as well, and the sync results were no different (I ran no 
performance tests once synced, since the syncing took so long). 
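
As for the slow sync itself: in drbd 8 the resync speed is capped by the
syncer rate in drbd.conf, and the shipped default is conservative, so that
cap is worth checking before blaming the I/O stack (the value below is only
an example):

```
common {
  syncer {
    rate 33M;   # example cap; raise toward what the disks and link sustain
  }
}
```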

Do these tests make any sense?


thx!


Bart







