[DRBD-user] performance tuning 8.0.12

alex at crackpot.org alex at crackpot.org
Sat May 24 00:54:30 CEST 2008

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.

Quoting alex at crackpot.org:

> I am having trouble getting drbd to perform well.  I've measured the
> speed of both the raw disk and the network connection, and drbd is
> only getting to about 30% of these limits.  At best, I can't get over
> about 50% of the MB/second of a netcat transfer between the 2 nodes.


> The 2 nodes
> are in separate data centers (maybe 40km distance between them), but
> there are dual redundant fiber links between the 2 centers.  The drbd
> link is on a dedicated VLAN which was created only for those 2 boxes.
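
For context, the raw-speed measurements mentioned above were along
these lines (a sketch only; host names, ports, and device paths are
stand-ins, and netcat option syntax varies between flavors):

# raw write speed of the backing disk (scratch device path assumed):
dd if=/dev/zero of=/dev/VolGroup00/scratch bs=1M count=1024 oflag=direct

# network throughput via netcat:
nodeB$ nc -l 5001 > /dev/null
nodeA$ dd if=/dev/zero bs=1M count=1024 | nc nodeB 5001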

I had the machines re-racked in the same data center and connected
the drbd interfaces with a crossover cable.  I wanted to see what
difference taking the long-haul network out of the picture made.

drbd performance has roughly doubled, from ~38 MB/second to ~79
MB/second (measured with this command: 'dd if=/dev/zero
of=/db/tmp/testfile bs=1G count=1 oflag=dsync').
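
For comparison, the same dd against the backing storage directly (on a
scratch volume, not the live drbd backing device; the path here is
made up) shows what the disk alone can do:

dd if=/dev/zero of=/dev/VolGroup00/scratch bs=1G count=1 oflag=dsync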

While the crossover cable has helped drbd enormously, the bandwidth  
between the 2 boxes is only slightly improved.  Using iperf I was  
seeing ~103 MB/second when the boxes were remote, and now I'm seeing  
~118 MB/second on the crossover.
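
In case anyone wants to reproduce the numbers, the iperf runs were
essentially this (port and duration are arbitrary):

nodeB$ iperf -s
nodeA$ iperf -c nodeB -t 30 -f M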

Seems my limiting factor is not bandwidth.  Latency, perhaps?
When the 2 nodes were remote, this is what I was seeing:

--- ping statistics ---
9 packets transmitted, 9 received, 0% packet loss, time 7999ms
rtt min/avg/max/mdev = 1.098/1.105/1.119/0.027 ms
--- ping statistics ---
8 packets transmitted, 8 received, 0% packet loss, time 7000ms
rtt min/avg/max/mdev = 1.096/1.101/1.111/0.044 ms

On the crossover cable, I'm seeing this:
--- ping statistics ---
8 packets transmitted, 8 received, 0% packet loss, time 7000ms
rtt min/avg/max/mdev = 0.070/0.134/0.218/0.050 ms
--- ping statistics ---
8 packets transmitted, 8 received, 0% packet loss, time 7006ms
rtt min/avg/max/mdev = 0.061/0.158/0.257/0.068 ms
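
A back-of-the-envelope check makes latency plausible as the culprit.
If each drbd write has to wait for the peer's ack before the next one
goes out (roughly what protocol C implies when nothing is in flight),
and assuming a request size of 32 KiB on the wire (a guess on my part,
not verified), the ceiling works out to:

  remote:     32 KiB / 1.1 ms   ~  28 MB/second
  crossover:  32 KiB / 0.15 ms  ~ 208 MB/second

That's the right order of magnitude for the ~38 MB/second I saw over
the remote link, and would explain why the crossover setup no longer
appears latency-bound.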

Can anyone suggest tuning strategies that could help in this situation?
I've tried various settings for the kernel TCP buffers in
/proc/sys/net/core and /proc/sys/net/ipv4 (roughly the sort of commands
sketched below), but I admit I'm stabbing in the dark as this is pretty
unfamiliar ground.
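
To be concrete, what I was trying looked like this (values are guesses,
not recommendations):

sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_rmem='4096 87380 16777216'
sysctl -w net.ipv4.tcp_wmem='4096 65536 16777216'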

Someone wrote me off-list and suggested I start with the TCP
parameters recommended for Oracle on Linux and tune from there.  Those
values didn't provide any measurable difference (in 'dd over drbd'
tests) from the stock values set in RHEL5, which are below.

I've run these tests with both drbd 8.0.12 and 8.2.5, and seen  
basically no difference between them.


#RHEL5 defaults
net.core.rmem_default = 126976
net.core.rmem_max = 131071
net.core.wmem_default = 126976
net.core.wmem_max = 131071
net.ipv4.tcp_mem = 196608 262144 393216
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
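
drbd also has buffering knobs of its own, separate from the kernel's.
A sketch of where they live in drbd.conf (the values here are
illustrative starting points, not tested recommendations):

resource r0 {
  net {
    sndbuf-size      512k;  # TCP send buffer for the replication socket
    max-buffers      8000;  # receive-side data buffers on the peer
    unplug-watermark 8000;  # how much may queue before the lower-level
                            # device is kicked
  }
  syncer {
    al-extents 1021;        # larger activity log, fewer metadata updates
  }
}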
