[DRBD-user] Low drbd performance, ~50% of raw disk

mwoolfso info at woolfcomputing.com
Wed Feb 6 15:04:35 CET 2013


I agree with the previous poster's response.  Assuming you are using protocol
C, DRBD will not complete a write until the RAID5 array on both nodes reports
the block committed to disk.  Since RAID5 writes are already expensive (each
one is a read-modify-write for parity), DRBD waiting on every write to reach
stable storage on both nodes compounds that cost.
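To make the compounding concrete, here is a toy latency model (not DRBD code;
the millisecond figures are hypothetical, purely to show how the ack path adds
up under protocol C):

```python
# Toy model (not DRBD code): protocol C completes a write only after
# BOTH nodes report the block on stable storage.
local_raid5_write_ms = 8.0    # hypothetical RAID5 read-modify-write cost
network_rtt_ms = 0.5          # hypothetical replication link round trip
peer_raid5_write_ms = 8.0     # same RAID5 cost on the peer node

# The local and peer writes proceed in parallel, but the application
# only sees completion after the slower path (peer write plus the
# network round trip) has acknowledged.
protocol_c_latency_ms = max(local_raid5_write_ms,
                            network_rtt_ms + peer_raid5_write_ms)
print(protocol_c_latency_ms)  # 8.5
```

With a fast link the peer's RAID5 write, not the network, dominates, which is
why tuning the backing storage matters so much here.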

In addition, I second the barrier and flush optimizations Nils suggested.  I
saw a significant performance improvement with my 8.3.11 setup this weekend.

sndbuf-size 0;   let DRBD auto-tune buffer sizes based on what it sees...
rcvbuf-size 0;

al-extents 1237;

no-disk-barrier;  I disabled DRBD's reliance on command queuing (barriers) on
my SATA drives, so now they "flush" instead.  During large write operation
tests I saw bursts of 200-500MB of write activity on the physical drive every
45-50 secs.  I don't use local (physical) RAID on my NAS.

rate 45M;  significantly increases verification performance.  Instead of 10
days to verify a large volume, it is done in 9 hours (9 hours at 45 MB/s
works out to roughly 1.4 TiB).  I did the necessary calculations for my
hardware to come up with this number, so I know I am safe.
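Pulling the options above together, a resource stanza could look roughly like
this.  This is a sketch only, using DRBD 8.3-style syntax; the resource name,
hostnames, devices, and addresses are placeholders, not from my actual setup:

```
resource r0 {
  protocol C;

  syncer {
    rate 45M;          # verify/resync throughput cap
    al-extents 1237;   # activity-log extents
  }

  net {
    sndbuf-size 0;     # 0 = let DRBD auto-tune buffer sizes
    rcvbuf-size 0;
  }

  disk {
    no-disk-barrier;   # rely on flushes instead of barriers
  }

  on node-a {
    device    /dev/drbd0;
    disk      /dev/sda7;
    address   10.0.0.1:7789;
    meta-disk internal;
  }
  on node-b {
    device    /dev/drbd0;
    disk      /dev/sda7;
    address   10.0.0.2:7789;
    meta-disk internal;
  }
}
```

As always, verify each option against the drbd.conf man page for your exact
version before deploying.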

