[DRBD-user] Poor network performance on 0.7.22

Oliver Hookins oliver.hookins at anchor.com.au
Thu Jun 12 08:04:10 CEST 2008

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hi again,

I've been doing a lot of testing and I'm fairly certain I've narrowed down
my performance issues to the network connection. Previously I was getting
fairly abysmal performance even in DRBD-disconnected mode, but I realise now
that this was mainly due to my test file size far exceeding the area covered
by the al-extents setting.
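
(For anyone following along: al-extents lives in the syncer section of
drbd.conf, and each extent covers 4MB of the device, so the default active
set only spans a few hundred MB and an 8GB test file blows straight past it.
A fragment along these lines, with a purely illustrative value, is the sort
of thing I mean:

    syncer {
        al-extents 1801;   # illustrative value; 1801 extents x 4MB ~= 7GB
    }
)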

I am performing dd tests (bs=1G, count=8) with syncs on the connected DRBD
resources and getting only about 10MB/s. The disks are 10krpm 300GB SCSI and
can easily sustain 60-70MB/s when DRBD is disconnected or not used. There is
a direct cable between the machines giving them full gigabit connectivity
via their Intel 80003ES2LAN adaptors (running the e1000 driver version
7.3.20-k2-NAPI that is standard with RHEL4 x86_64). I have tested this
connection with NetPIPE and get up to 940Mbps.
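
For reference, the test looks roughly like this (the mount point is a
placeholder, and the sync may equally be a trailing sync rather than
conv=fsync):

    dd if=/dev/zero of=/mnt/drbd0/testfile bs=1G count=8 conv=fsync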

However, DRBD still crawls along at 10MB/s. I have tried increasing the
/proc/sys/net/core/{r,w}mem_{default,max} settings, which were previously
all at 132KB, to 1MB for the defaults and 2MB for the maximums, without any
increase in performance. The MTU on the link is set to 9000 bytes.
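
Roughly what was applied (eth1 is a placeholder for the replication
interface):

    sysctl -w net.core.rmem_default=1048576
    sysctl -w net.core.wmem_default=1048576
    sysctl -w net.core.rmem_max=2097152
    sysctl -w net.core.wmem_max=2097152
    ifconfig eth1 mtu 9000    # jumbo frames on the crossover link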

In drbd.conf I have sndbuf-size 2M; max-buffers 8192; max-epoch-size 8192.
I've also played a little with the unplug watermark, setting it to very low
and very high values, without any apparent change.
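
The relevant fragment of drbd.conf looks roughly like this (resource name
and the rest of the config omitted; the commented unplug-watermark lines
show the sort of extremes I tried):

    resource r0 {
        net {
            sndbuf-size      2M;
            max-buffers      8192;
            max-epoch-size   8192;
            # unplug-watermark 16;     # "very low"
            # unplug-watermark 8192;   # "very high"
        }
    }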

Taking a look at a tcpdump of the traffic, the only weird things I could see
were a lot of TCP window size change notifications and some strange packet
"clumping", but it's not really offering me any insight I can act on
immediately.
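
(The capture was just something along these lines; the interface and port
are placeholders for whatever the resource actually uses:

    tcpdump -i eth1 -s 0 -w drbd.pcap port 7788
)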

Is there anything else I could tune to solve this problem?

-- 
Regards,
Oliver Hookins


