[DRBD-user] Speeding up sync rate on fast links and storage

Lars Ellenberg lars.ellenberg at linbit.com
Thu Dec 18 13:24:25 CET 2008

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


On Wed, Dec 17, 2008 at 04:17:00PM -0500, Parak wrote:
> Hi all,
> 
> I'm currently playing with DRBD (8.2.7) on 20Gb/s InfiniBand, and it seems that
> the sync rate is the limiting speed factor. The local storage on both nodes is
> identical (SAS array); it has been benchmarked at about 650MB/s (or higher,
> depending on the benchmark) on the raw disk, and at about 550MB/s when writing
> to it through a disconnected DRBD device. The network link for DRBD is
> InfiniBand as well (IPoIB), which has been benchmarked with netperf at ~800MB/s.
> 
> The fastest speed that I'm able to get from the DRBD sync with this
> configuration is ~340MB/s, which limits the speed from my initiator to that as
> well. Interestingly, I also benchmarked the DRBD sync speed over 10GbE, which,
> despite my repeated attempts to tweak drbd.conf, the MTU, and the TCP kernel
> parameters, produced the same ~340MB/s as the aforementioned IPoIB link.
> 
> Here's the drbd.conf:
> 
> global {
>     usage-count yes;
> }
> 
> common {
>   syncer {
>      rate 900M;

check whether setting a syncer cpu-mask, e.g.
	cpu-mask 3;
or	cpu-mask 7;
or	cpu-mask f;
or something like that,
has any effect.
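
for reference, cpu-mask goes into the syncer section; a sketch
(the mask value is only an example, use one that matches the cpus
serving your IB/NIC interrupts):

  syncer {
     rate 900M;
     cpu-mask 7;    # hex mask: bind drbd's kernel threads to cpus 0-2
  }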

>          }
> }
> 
> resource drbd0 {
> 
>   protocol C;
> 
>   handlers {
>   }
> 
>   startup {
>     degr-wfc-timeout 30;
>   }
> 
>   disk {
>     on-io-error   detach;
>     fencing dont-care;
>     no-disk-flushes;
>     no-md-flushes;
>     no-disk-drain;
>     no-disk-barrier;
>   }
> 
>   net {
>     ko-count 2;
>     after-sb-1pri discard-secondary;
>     sndbuf-size 1M;

you can try sndbuf-size 0; (auto-tuning)
and check whether tweaking
/proc/sys/net/ipv4/tcp_rmem
/proc/sys/net/ipv4/tcp_wmem
/proc/sys/net/core/optmem_max
/proc/sys/net/core/rmem_max
/proc/sys/net/core/wmem_max
and the like has any effect.
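
for example (these values are only a starting point, not tuned for your link):

  sysctl -w net.core.rmem_max=16777216
  sysctl -w net.core.wmem_max=16777216
  sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
  sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"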

check whether the drbd option
	 no-tcp-cork;
has any positive/negative effect.
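
i.e. the net section could then look something like this
(just a sketch combining the suggestions above):

  net {
    ko-count 2;
    after-sb-1pri discard-secondary;
    sndbuf-size 0;    # 0 = let the kernel auto-tune the send buffer
    no-tcp-cork;      # compare resync speed with and without this
  }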

>   }
> 
>   on srpt1 {
>     device     /dev/drbd0;
>     disk       /dev/sdb;
>     address    10.0.0.2:7789;
>     flexible-meta-disk  internal;
>   }
> 
>   on srpt2 {
>     device     /dev/drbd0;
>     disk       /dev/sdb;
>     address    10.0.0.3:7789;
>     flexible-meta-disk  internal;
>   }
> }
> 
> Any advice/thoughts would be highly appreciated; thanks!

cpu utilization during benchmarks?
"wait state"?
memory bandwidth?
interrupt rate?
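
e.g. during a resync (mpstat comes with the sysstat package):

  vmstat 1          # cpu, iowait, context switches, memory
  mpstat -P ALL 1   # per-cpu utilization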

maybe bind or unbind NIC interrupts to cpus?
 /proc/interrupts
 /proc/irq/*/smp_affinity
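
for example (the irq number 24 is made up, and "mlx" is only a guess
at the HCA driver name; look both up in /proc/interrupts first):

  grep -i mlx /proc/interrupts         # find the irq(s) of the IB HCA / NIC
  echo 2 > /proc/irq/24/smp_affinity   # pin that irq to cpu 1 (hex bitmask)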

-- 
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
__
please don't Cc me, but send to list   --   I'm subscribed


