[DRBD-user] peer-max-bio-size 1M

Lutz Vieweg lvml at 5t9.de
Fri Apr 11 19:10:42 CEST 2014


On 04/11/2014 09:02 AM, Lars Ellenberg wrote:
 >> peer-max-bio-size out of range (0...128k)
 >> Did I miss something ?
 > That's apparently an oversight (Bug),
 > there is still a value of 128k hardcoded in drbdmeta.

What a timing - I was just about to submit a report
that a block device stack "dm-crypt on drbd" delivers
only half the throughput of a "drbd on dm-crypt" setup
on the same underlying physical storage when I read this.

But now it seems the relevant reason was that
I had benchmarked the performance before connecting a
second node.

It's kind of counter-intuitive that running a benchmark
on only one local disk does not avoid, but actually introduces,
a subtle additional cause of performance loss...

 >> Will both nodes negotiate a value of up to 1M, even if I don't
 >> use "--peer-max-bio-size 1M" in drbdmeta ?
 > If you have two nodes,
 > don't use that option to drbdmeta.

But that doesn't answer the question - which IMHO is a really
important one, as lower values of queue/max_hw_sectors_kb can
dramatically reduce performance, especially when using fast SSDs.
And 128k _is_ too low a value for use with contemporary SSDs.
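To illustrate the point (the device names below are examples, not taken from this thread; the sysfs attributes are the standard block-layer ones), you can compare the request-size limit of the DRBD device against its backing disk, and do the arithmetic on how a 128k cap fragments large bios:

```shell
# Inspect the effective request-size limits (substitute your own
# DRBD minor and backing device for drbd0 / sda):
#   cat /sys/block/drbd0/queue/max_hw_sectors_kb
#   cat /sys/block/sda/queue/max_hw_sectors_kb

# A 128k cap splits every large bio into multiple requests;
# e.g. a 1M (1024k) write is submitted as:
requests_128k=$((1024 / 128))   # 8 requests under a 128k limit
requests_1m=$((1024 / 1024))    # 1 request under a 1M limit
echo "$requests_128k vs $requests_1m requests per 1M write"
```

Eight times the per-request overhead is exactly where fast SSDs, which thrive on large sequential requests, lose throughput.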

 > For any normal use case, using that option to drbdmeta is simply wrong.
 > So don't.

I would recommend putting a big fat caveat emptor about the
"peer-max-bio-size" / max_hw_sectors_kb issue in the documentation
where the initial setup of a DRBD device is explained, rather than
hiding it in the drbdmeta man-page, where people are unlikely
to ever spot it and realize its consequences.


Lutz Vieweg
