[DRBD-user] Large block IO bottleneck

Ross S. W. Walker rwalker at medallion.com
Wed Jan 3 16:20:43 CET 2007

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


> -----Original Message-----
> From: drbd-user-bounces at lists.linbit.com 
> [mailto:drbd-user-bounces at lists.linbit.com] On Behalf Of 
> Philipp Reisner
> Sent: Wednesday, January 03, 2007 5:58 AM
> To: drbd-user at lists.linbit.com
> Subject: Re: [DRBD-user] Large block IO bottleneck
> 
> Am Dienstag, 2. Januar 2007 22:05 schrieb Ross S. W. Walker:
> > Hi there, I am using DRBD 0.7.21 with iSCSI Enterprise Target
> > 0.4.14 on CentOS 4.4.
> >
> > When I run iSCSI direct to the LVM LV on top of hardware RAID I can
> > get 225 MB/s over two sessions in MPIO with a 256K block size, but
> > when I put DRBD in between iSCSI and LVM the throughput tops out at
> > 80 MB/s and I can't seem to go over that.
> >
> > DRBD seems to report its max number of sectors as 8 (4K); does that
> > mean each IO operation is limited to 4K? My hardware RAID reports
> > its max sectors as 128, could this explain the reduction to 1/3
> > throughput?
> >
> 
> Hi,
> 
> The cause of the limitation to 4k is the Linux-2.4 compatibility of
> DRBD-0.7.
> 
> Repeat your test with drbd-8.0 (rc1).
> 
> drbd-8.0 will do BIOs up to 32k, but much more important are other
> changes (e.g. the non-blocking make_request() function) that make
> drbd-8.0 scale much better with high-end hardware.
> 
> PS: What kind of network link are you using?
> 

We're using dual 1 Gbps adapters, one for each path in the MPIO
connection (actually four adapters in two separate bonded pairs using
ALB, since we have multiple initiators, four to be exact).

Is there an issue with taking the max_sectors from the underlying
hardware? That way DRBD would scale up or down depending on the backing
device that is used.
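
For reference, here is a quick way to compare what the kernel exposes
for DRBD and the backing device (a minimal sketch; the sysfs attribute
names are standard, but the device names are placeholders for your
own):

    # compare_max_sectors.py -- print the per-request size limits the
    # kernel exposes for a set of block devices via sysfs (values in KB)
    DEVICES = ["drbd0", "sda"]  # placeholders, substitute your devices

    for dev in DEVICES:
        for attr in ("max_sectors_kb", "max_hw_sectors_kb"):
            path = "/sys/block/%s/queue/%s" % (dev, attr)
            try:
                with open(path) as f:
                    print("%s %s = %s KB" % (dev, attr, f.read().strip()))
            except IOError:
                print("%s: %s not available" % (dev, attr))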

Of course, DRBD might then have to automatically re-configure the
minimum buffers it needs based on the size of the BIOs it accepts, so
that replication at that speed doesn't overflow them.
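
For concreteness, the knobs that exist for this today live in the net
section of drbd.conf (option names as in the drbd.conf man page; the
values below are illustrative placeholders, not recommendations):

    resource r0 {
      protocol A;
      net {
        max-buffers    2048;   # DRBD's receive-side buffer pages
        max-epoch-size 2048;   # max write requests between barriers
        sndbuf-size    512k;   # TCP send buffer for the replication link
      }
    }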

The secondary peer isn't in place here yet, and when it does come
online it will be geographically separated, and therefore reachable
only over a high-latency, low-bandwidth connection. I am planning on
replicating to this peer asynchronously using protocol A. Is there a
formula for calculating the optimum snd_buffer based on dataset size,
bandwidth, and latency?
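
For what it's worth, the usual starting point for sizing a send buffer
over a long link is the bandwidth-delay product: the buffer has to hold
at least bandwidth x RTT of data to keep the pipe full. A minimal
sketch (the link numbers are made-up placeholders, not measurements
from our setup):

    # sndbuf_estimate.py -- rough bandwidth-delay-product sizing for a
    # replication link: the send buffer should hold at least one RTT
    # worth of data, or the sender stalls waiting for ACKs
    link_mbps = 10.0   # placeholder: usable link bandwidth in Mbit/s
    rtt_ms    = 80.0   # placeholder: round-trip time in milliseconds

    bytes_per_sec = link_mbps * 1e6 / 8
    bdp_bytes = bytes_per_sec * (rtt_ms / 1000.0)

    print("bandwidth-delay product: %.0f KB" % (bdp_bytes / 1024))
    # 10 Mbit/s * 80 ms -> ~98 KB; round up generously, since with
    # protocol A the send buffer also has to absorb write bursts that
    # exceed the link speed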


-Ross
