[DRBD-user] BUG: High I/O Wait and reduced throughput due to wrong I/O size on the secondary

Felix Zachlod fz.lists at sis-gmbh.info
Sat Mar 16 19:44:45 CET 2013



I have to answer myself again.

I tried reverting to drbd 8.3.11 on my test setup.
This again led to some interesting observations.

When running 8.3.11 on both my virtual machines the average request size
drops, but nowhere near as low as 4k.

This seems to be correct: according to
http://www.drbd.org/users-guide/re-drbdmeta.html
the maximum block I/O size of drbd is 128k in 8.3.11 and 1M in 8.4.3, which
is exactly what I am able to observe on my test setup!
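For anyone who wants to reproduce the observation: the average request size can be derived from two snapshots of the per-device counters in /proc/diskstats (writes completed and sectors written, in 512-byte sectors). A minimal sketch of the arithmetic, with made-up counter deltas for illustration:

```python
# Sketch: estimate the average write request size a block device is seeing,
# from two snapshots of /proc/diskstats-style counters. The counter values
# below are made up for illustration; on a live system take two samples of
# the "writes completed" and "sectors written" fields for the device.

SECTOR_SIZE = 512  # bytes; the unit the kernel uses in /proc/diskstats

def avg_write_size_kib(writes_before, sectors_before,
                       writes_after, sectors_after):
    """Average size, in KiB, of write requests completed between samples."""
    writes = writes_after - writes_before
    sectors = sectors_after - sectors_before
    if writes == 0:
        return 0.0
    return sectors * SECTOR_SIZE / writes / 1024

# 1000 writes moving 256000 sectors -> 128 KiB per request (the 8.3 maximum)
print(avg_write_size_kib(0, 0, 1000, 256000))  # 128.0
# 1000 writes moving only 8000 sectors -> 4 KiB per request (the bad case)
print(avg_write_size_kib(0, 0, 1000, 8000))    # 4.0
```

Tools like iostat -x report the same quantity as avgrq-sz (in sectors); this is just the conversion spelled out.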

This is just NOT working on my production cluster.

I am now relatively sure this is a bug, possibly related to some
incompatibility with the raid card driver. But why would drbd correctly
issue larger BIOs on the primary and wrongly issue smaller BIOs on the
secondary, when both nodes run the same raid cards?

I tried a different thing too. I updated my production secondary to 8.4.3 to
see if it behaves the same with 8.4.3 over there, and what I can say now is
that neither 8.3.11 nor 8.4.3 issues BIOs larger than 4k on my production
secondary, which leads to the assumption that this problem still persists
in the current version, 8.4.3.

What I tried additionally is adding another block layer between drbd and the
raw disks on my test setup, which I accomplished by using lvm2. This led to
another interesting observation:

When running drbd above lvm on my test setup I can see that the lvm device
mapper is being hit with 4k I/O, just as I observe on my production cluster.
BUT lvm2 itself aggregates the writes into larger BIOs toward the lower-level
raid disk, which leads to BIO sizes in between those of the two setups.

I thank everyone for their interest and hope this is of interest to the
DRBD crew too.

Feel free to add information from your setup or to get in touch for
additional information.


best regards, Felix




