[DRBD-user] DRBD versus bcache and caching in general.

Christian Balzer chibi at gol.com
Wed Sep 6 03:37:42 CEST 2017

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


And once again, the deafening silence shall be broken by replying to
myself.

All of the below is on Debian Stretch with a 4.11 kernel.

I initially tested bcache without DRBD, and both the performance and the
default behavior (not caching large IOs) were quite good and a perfect fit
for my use case (mailbox servers).

This was true in combination with DRBD as well.

However, it turns out that bcache will not work out of the box with DRBD,
thanks to the slightly inane requirement of its udev helper to identify
devices with lsblk. After a reboot, lsblk identifies the backing device as
DRBD rather than bcache, so the bcache device is not auto-assembled.
Hacking that udev rule or simply registering the backing device in
rc.local (see the sketch below) will do the trick, but it feels crude.
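
For reference, the rc.local variant boils down to manually registering the
backing device via the bcache sysfs interface. A minimal sketch, assuming
/dev/drbd0 is the backing device (adjust to your resource):

  #!/bin/sh -e
  # register the DRBD backing device with bcache by hand, since the udev
  # helper skips it after lsblk reports it as a DRBD device.
  # register_quiet ignores devices that are already registered.
  echo /dev/drbd0 > /sys/fs/bcache/register_quiet
  exit 0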

So I tried dm-cache, which doesn't have that particular issue.
But the complexity of it (and of LVM in general), the vast changes between
versions, the documentation gotchas, and the fact that a package required
to assemble things at boot time (thin-provisioning-tools) is not marked as
"required" made this a rather painful and involved experience compared to
bcache.
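
For context, the dm-cache setup meant here is the LVM-managed variant,
assembled roughly like this. This is only a sketch; the VG, LV and device
names are illustrative and the sizes are placeholders:

  # one VG spanning the slow (HDD/RAID) and fast (SSD) devices
  pvcreate /dev/sda4 /dev/nvme0n1p2
  vgcreate vg0 /dev/sda4 /dev/nvme0n1p2
  # data (origin) LV on the HDDs, cache pool on the SSD
  lvcreate -n origin -L 1T vg0 /dev/sda4
  lvcreate --type cache-pool -n cpool -L 100G vg0 /dev/nvme0n1p2
  # glue them together; activating this at boot needs cache_check from
  # the thin-provisioning-tools package mentioned above
  lvconvert --type cache --cachepool vg0/cpool vg0/origin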

Its performance is significantly lower and very spiky, with the fio
standard deviation an order of magnitude higher than with bcache.
For example, I could run two fio processes doing 4k random writes capped
at 5k IOPS each (so 10k total) on top of the bcache/DRBD stack
indefinitely, with the backing device never getting busier than 10% once
flushing commenced.
The same test on the same hardware with dm-cache yielded 8k IOPS at most,
with high fluctuations, and both the cache and backing devices were pegged
at 100% busy at times.
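
The fio jobs were roughly of this shape (a sketch only; the target device,
iodepth and runtime are illustrative, not the exact values used):

  # two jobs, 4k random writes, each rate-limited to 5k IOPS (10k total)
  fio --name=randwrite-capped --filename=/dev/bcache0 \
      --rw=randwrite --bs=4k --ioengine=libaio --direct=1 --iodepth=32 \
      --rate_iops=5000 --numjobs=2 --group_reporting \
      --time_based --runtime=3600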

The straw that broke the camel's back was that with dm-cache, formatting
the DRBD device with ext4 hung things to the point of requiring a forced
reboot. This was caused by mkfs.ext4 trying to discard blocks (it does the
same on bcache), which is odd, but then again it should just work (and it
does with bcache). Formatting with nodiscard works, but the dm-cache DRBD
device then doesn't support fstrim when mounted, unlike bcache.
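
The nodiscard workaround is simply this (device path and mount point are
illustrative):

  # skip the initial discard pass, which is what hangs on the dm-cache stack
  mkfs.ext4 -E nodiscard /dev/drbd0
  # on the bcache stack discards keep working after mounting, e.g.:
  fstrim -v /srv/mail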
 
So I've settled on bcache at this time; the smoother performance is worth
the rc.local hack in my book.
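
For completeness, assembling such a bcache-on-top-of-DRBD device and
picking the cache mode amounts to something like the following; again a
sketch only, with illustrative device names:

  # format the SSD as a cache set and the DRBD device as the backing
  # device, attaching the two in one go
  make-bcache -C /dev/nvme0n1p1 -B /dev/drbd0
  # choose the cache mode (writeback caches writes, writethrough does not)
  echo writeback > /sys/block/bcache0/bcache/cache_mode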

Christian

On Wed, 16 Aug 2017 12:37:21 +0900 Christian Balzer wrote:

> Hello,
> 
> firstly let me state that I of course read the old thread from 2014 and
> all the other bits I could find.
> 
> If anybody in the last 3 years actually deployed bcache or any of the
> other SSD caching approaches with DRBD, I'd love to hear about it.
> 
> I'm looking to use bcache with DRBD in the near future and was pondering
> the following scenarios, not all bcache specific.
> 
> The failure case I'm most interested in is a node going down due to HW or
> kernel issues, as that's the only case I encountered in 10 years. ^.^
> 
> 
> 1. DRBD -> RAID HW cache -> HDD
> 
> This is what I've been using for a long time (in some cases without a
> RAID controller and thus without HW cache). 
> If node A spontaneously reboots due to a HW failure or kernel crash,
> things will fail over to node B, which is in the best possible and most
> up-to-date state at this point.
> Data in the HW cache (and the HDD local cache) is potentially lost.
> From the DRBD perspective, block X has been successfully written to nodes A
> and B, even though it has only reached the HW cache of the RAID controller.
> So in the worst case scenario (HW cache lost/invalidated, HDD caches also
> lost), we've just lost up to 4-5GB worth of in-flight data.
> And unless something changed those blocks on node B before node A comes
> back up, they will not be replicated back.
> 
> Is the above a correct, possible scenario?
> 
> As far as read caches are concerned, I'm pretty sure the HW caches get
> invalidated with regard to reads when a crash/reboot happens.
> 
> 
> 2. Bcache -> DRBD -> HW cache -> HDD
> 
> With bcache in writeback mode things become interesting in the Chinese
> sense. 
> If node A crashes, not only do we lose all the dirty kernel buffers (as
> always), but also everything that was in flight within bcache before being
> flushed to DRBD. 
> While the bcache documentation states that "Barriers/cache flushes are
> handled correctly." and thus hopefully at least the FS would be in a
> consistent state, the fact that one needs to detach the bcache device or
> switch to writethrough mode before the backing device is clean and
> consistent confirms the potential for data loss.
> 
> I could live with bcache in writethrough mode, leaving the write caching
> to the HW cache, provided that losing and re-attaching a backing device
> (DRBD) invalidates the bcache cache and prevents it from delivering stale
> data. 
> Alas, the bcache documentation is pretty quiet here; from the looks of it,
> only detaching and re-attaching would achieve this.
> 
> 
> 3. DRBD -> bcache -> HW cache -> HDD
> 
> The sane and simple approach: writes get replicated, and there are no
> additional dangers in the write path compared to 1) above. 
> 
> If node A goes down and node B takes over, only previous (recent) writes
> will be in the bcache on node B; otherwise the cache will be "cold". 
> Once node A comes back, the re-sync should hopefully take care of any
> stale cache information in node A's bcache.
> 
> 
> Obviously, having bcache as an associated resource as per Florian's old
> video would be the "safest" approach, but AFAICT there is no resource
> agent for this, and it would also add the write latency of replication
> (twice?).
> 
> Regards,
> 
> Christian


-- 
Christian Balzer        Network/Systems Engineer                
chibi at gol.com   	Rakuten Communications


