[DRBD-user] external metadata on ssd vs bcache

Arnold Krille arnold at arnoldarts.de
Thu Feb 28 12:22:57 CET 2013


On Thu, 28 Feb 2013 09:49:16 +0100 Lionel Sausin <ls at numerigraphe.com> wrote:
> It's interesting because, normally, writes do not directly translate
> to head seeks (thanks to dirty pages, caches, NCQ, firmware-level 
> optimization...), and ideally barriers should be disabled (and caches 
> reliable).

If you want the data safely on disk, every write translates into
disk-head activity, and with random writes, every write results in a
head seek. Otherwise the data wouldn't actually be on disk. In fact,
every write results in two seeks: one to the position of the data, and
one to the position of the filesystem's metadata. With DRBD and
internal metadata you add a third seek to the end of the disk...
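As a sketch, moving the metadata off the data disk just means pointing the resource's meta-disk at the SSD; hostnames, device names and addresses below are made up for illustration:

```
resource r0 {
  protocol A;
  on alpha {
    device    /dev/drbd0;
    disk      /dev/sdb1;       # data on the spinning disk
    meta-disk /dev/sdc1[0];    # metadata on a partition of the SSD
    address   192.168.10.1:7788;
  }
  on beta {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    meta-disk /dev/sdc1[0];
    address   192.168.10.2:7788;
  }
}
```

With internal metadata the meta-disk line would instead read `meta-disk internal;`, which is what puts the extra seek at the end of the data disk.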

> Florian Haas once suggested[1] that "if using external metadata
> actually improves your performance versus internal metadata, you have
> underlying performance problems to fix."
> Have you been investigating this possibility?
> Or is the improvement specific to the type of write load your server 
> endures?

There isn't much you can optimize. We are not upscaling the
solution to support big clouds and data centers, we are downscaling
the solution to give small businesses the advantages of HA and
redundancy. We don't do big data and web apps on these
setups, but file, terminal and internal mail servers.
We hate RAID controllers and hw-RAID because too much bad has happened
to us and our customers: missing spares, data loss after
firmware upgrades and similar problems. Give us a bunch of disks so we
can run sw-RAID, LVM and DRBD on them! If the motherboard has
AHCI hotplug, I can switch out the whole lot of disks without the
server rebooting once; pvmove for the data partitions and mdadm for
the root partition is enough.
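For illustration, such a live disk swap might look like the following; the device names, array name and volume group are assumptions, not taken from the actual setup:

```
# Hypothetical devices: /dev/sdb is the old disk, /dev/sdc its
# hotplugged replacement. Root is on the RAID1 md0, data is an LVM PV.
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1   # drop old disk from root RAID1
mdadm /dev/md0 --add /dev/sdc1                       # resync root onto the new disk

pvcreate /dev/sdc2                                   # prepare new data partition
vgextend vg_data /dev/sdc2
pvmove /dev/sdb2 /dev/sdc2                           # migrate extents online
vgreduce vg_data /dev/sdb2
pvremove /dev/sdb2                                   # old disk can now be pulled
```

Both steps run with the filesystems mounted, which is why no reboot is needed.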

And with one disk for data, one SSD for metadata, DRBD for
network RAID1, a dedicated 1G link for DRBD and two of these setups,
dbench tells us that throughput maxes out the link (with
protocol C; most of the time I use protocol A) and latency is as low
as a local disk's. dbench is my benchmark of choice here because it
plays back office usage patterns, not some artificial big reads and
writes. Exactly what we and our customers do all day. Thus the results
of dbench have proven to be a very good indicator of whether users
will complain or not.
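As a sketch, such a run only needs the client count and a directory on the DRBD-backed filesystem; the mount point, runtime and client count below are assumptions:

```
# 10 simulated office clients for 10 minutes against the DRBD mount;
# dbench reports throughput (MB/sec) and max latency at the end.
dbench -D /mnt/drbd -t 600 10
```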

Have fun,
