Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
On Mon, Jan 23, 2012 at 01:08:57PM +0000, James Harper wrote:
> I'm seeing something I didn't expect... I was getting very poor
> results from the following command:
>
> time dd if=/dev/zero of=/dev/vg-drbd/test oflag=direct count=512 bs=512
>
> in the order of 5 seconds. Then I added the following to my configuration:
>
> disk {
>     no-disk-barrier;
>     no-disk-flushes;
>     no-md-flushes;
> }
>
> And the time dropped to under half a second, which is on par with
> write performance when the secondary is offline.
>
> Where is the flush/barrier having an effect? I'm using protocol B so I
> assumed that there would be no flush at all on the secondary in either
> case as the packet is only supposed to have hit the secondary queue,
> not necessarily the disk. I still want the data definitely on the disk
> on the primary though so the flush should be happening there (I'm
> prepared to lose data in the event of a power failure + simultaneous
> destruction of the primary's disks).
>
> To summarise:
>
> Primary & secondary online with all flushing/barriers enabled = bad performance
> As above but secondary offline = good performance
> Primary & secondary online with all flushing/barriers disabled = good performance
>
> My details are:
> 2 x HP DL180s, each with 2 x 2TB 7200RPM SATA disks
> Backing store is md (RAID0) on a partition
> Protocol B
>
> I've tried all the other configuration tweaks I can think of; the only
> thing I can come up with is that flushing is having an effect on the
> secondary node???
>
> Can anyone clarify the situation for me?
See if this helps you understand what we are doing, and why:
From: Lars Ellenberg
Subject: Re: massive latency increases from the slave with barrier or flush enabled
Date: 2011-07-03 08:50:53 GMT
http://article.gmane.org/gmane.linux.network.drbd/22056
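
The short version: the protocol only changes when the Primary reports a
write as completed; it does not change whether either node sends
flushes/barriers to its backing device. That is what the disk options
you quoted control. A rough sketch of where both live in an 8.3-style
resource definition (the resource name and the omitted on-host sections
are placeholders, not taken from your setup):

resource r0 {
        protocol B;     # A: completed once on local disk and in the local TCP send buffer
                        # B: completed once on local disk and received by the peer
                        # C: completed once on local disk and on the peer's disk

        disk {
                no-disk-barrier;        # dropping these is only advisable if the
                no-disk-flushes;        # backing devices sit behind a non-volatile
                no-md-flushes;          # (e.g. battery-backed) write cache
        }

        # on <host> { ... } sections omitted
}

Whether you actually want them disabled is then only a question of what
your caches guarantee across a power loss, which you have already
answered for your setup.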
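
If you want to re-check the numbers after changing the settings,
something like the following should do; the device path is a
placeholder, and drbdadm adjust is meant to bring the running resource
back in line with the edited config (run it on both nodes):

        # baseline, flushes/barriers at their defaults
        time dd if=/dev/zero of=/dev/drbd0 oflag=direct count=512 bs=512

        # after adding the disk {} options above on both nodes
        drbdadm adjust r0
        time dd if=/dev/zero of=/dev/drbd0 oflag=direct count=512 bs=512
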
--
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com