[DRBD-user] very poor write performance with flush/barriers turned on

James Harper james.harper at bendigoit.com.au
Mon Jan 23 14:08:57 CET 2012



I'm seeing something I didn't expect...  I was getting very poor results from the following command:

time dd if=/dev/zero of=/dev/vg-drbd/test oflag=direct count=512 bs=512

on the order of 5 seconds. Then I added the following to my configuration:

disk {
  no-disk-barrier;
  no-disk-flushes;
  no-md-flushes;
}

And the time dropped to under half a second, which is on par with write performance when the secondary is offline.
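For scale, those rough timings work out to close to one full rotation of a 7200 RPM disk per write when flushing is on. A quick back-of-envelope sketch (the ~5 s and ~0.5 s figures are the rough times quoted above, not precise measurements):

```python
# Per-write latency implied by the two dd runs above:
# ~5 s with flushes/barriers on, ~0.5 s with them off,
# for 512 writes of 512 bytes each.
writes = 512
slow_ms = 5.0 / writes * 1000     # flushes/barriers enabled
fast_ms = 0.5 / writes * 1000     # flushes/barriers disabled
rotation_ms = 60.0 / 7200 * 1000  # one rotation of a 7200 RPM disk

print(f"enabled:  {slow_ms:.1f} ms/write")   # ~9.8 ms
print(f"disabled: {fast_ms:.1f} ms/write")   # ~1.0 ms
print(f"one rotation: {rotation_ms:.1f} ms") # ~8.3 ms
```

So with flushing on, each tiny write costs roughly a rotation's worth of latency somewhere, which is what made me suspect a flush per write.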
 
Where is the flush/barrier having an effect? I'm using protocol B, so I assumed there would be no flush at all on the secondary in either case: the packet is only supposed to have reached the secondary's queue, not necessarily its disk. I still want the data definitely on disk on the primary, though, so the flush should be happening there (I'm prepared to lose data only in the event of a power failure combined with simultaneous destruction of the primary's disks).

To summarise:

Primary & secondary online with all flushing/barriers enabled = bad performance
As above but secondary offline = good performance
Primary & secondary online with all flushing/barriers disabled = good performance

My details are:
2 x HP DL180s, each with 2 x 2TB 7200RPM SATA disks
Backing store is md (RAID0) on a partition
Protocol B
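For reference, a minimal sketch of where these settings live in my 8.3-style config (resource and host names here are placeholders, not my actual setup; the per-host device/address sections are omitted):

```
resource r0 {
  protocol B;          # write completes when data reaches the peer's buffer
  disk {
    no-disk-barrier;   # the three options I added above
    no-disk-flushes;
    no-md-flushes;
  }
  on node-a { ... }    # device, disk, address, meta-disk
  on node-b { ... }
}
```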

I've tried all the other configuration tweaks I can think of; the only explanation I can come up with is that flushing is somehow having an effect on the secondary node.

Can anyone clarify the situation for me?

Thanks

James



