[DRBD-user] frequent wrong magic value with kernel >4.9 caused by big mtu
Lars Ellenberg
lars.ellenberg at linbit.com
Mon Feb 12 18:13:14 CET 2018
On Mon, Feb 12, 2018 at 05:17:24PM +0100, Andreas Pflug wrote:
> > After the tcpdump analysis showed that the problem must be located below
> > DRBD, I played around with eth settings. Cutting down the former MTU of
> > 9710 to default 1500 did fix the problem, as well as disabling
> > scatter-gather. So apparently big MTU and scatter-gather don't play
> > nicely on later kernels (or the updated nic driver)
My findings with the provided tcpdump were:
| tcpdump sees corrupted frames as well.
|
| There is a DRBD data packet,
| expecting a full 4096 byte write to sector 958552.
| Then expects the next header.
| But that frame looks corrupted
| ...
|
| supposedly 4096 byte, all looking roughly like this,
| probably some file system meta data, or raw rrd file,
| or other data block containing an array of related 64bit values:
|
| ... boring binary data
|
| but at byte offset 3856, that pattern suddenly changes to something completely unrelated,
| so likely a wrong page got mapped/linked somewhere:
|
| ... more boring data, but suddenly pattern changed to plain text / xml
|
| * HERE * is the supposed end of the 4096 byte data block,
| and the supposed start of the next DRBD header (with the "magic").
|
| Obviously there is no DRBD header magic here, though.
|
| ... more boring data with that same plain text / xml
> > I posted a kernel bug on this,
> > https://bugzilla.kernel.org/show_bug.cgi?id=198723
>
> Unfortunately, scatter-gather seems NOT to be the culprit. Changing all
> other receive and generic offload settings didn't help either, so only
> big mtu remains.
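For reference, the knobs being toggled in those tests map onto ethtool and ip commands roughly like the following sketch (the interface name eth0 is a placeholder; run as root on the affected host):

```shell
# Placeholder interface name; substitute the real NIC.
IF="${IF:-eth0}"

# Drop the MTU back to the Ethernet default:
ip link set dev "$IF" mtu 1500

# Disable scatter-gather and the generic offloads, one feature at a time:
ethtool -K "$IF" sg off
ethtool -K "$IF" gro off gso off tso off

# Verify what is actually in effect now:
ip -d link show dev "$IF" | grep -o 'mtu [0-9]*'
ethtool -k "$IF" | grep -E 'scatter-gather|segmentation-offload|receive-offload'
```

Toggling one feature per test run, and retesting after each change, is what narrows the fault down to a single setting.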
You will likely get more satisfying responses to that bugzilla,
if you find a reproducer for the data corruption that does not involve
"unusual" things like DRBD, but can be reproduced by "anyone",
using just "dd and netcat" maybe.
I already suggested in that private mail:
For fault isolation tests,
you could try with different offloading settings,
you could try with different mtu settings,
you could try to use "dd | netcat"
(not scp; it has encryption and thus strong checksums,
I'd expect it to transparently catch this kind of crap and resend)
for some data transfers,
and double check if you get corruption somewhere as well.
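The "dd and netcat" reproducer could look like the sketch below. The hostname "peer" and port 9999 are placeholders, not taken from the thread; in a real test the sender and receiver run on the two DRBD nodes, so the two-host commands are shown as comments, and only the checksum-comparison step is demonstrated locally:

```shell
# Two-host transfer (placeholders; run each line on the named host):
#   receiver$ nc -l 9999 > received.img
#   sender$   dd if=/dev/urandom of=test.img bs=1M count=512
#   sender$   nc peer 9999 < test.img
#
# Locally, demonstrate the comparison step with a stand-in "transfer":
dd if=/dev/urandom of=test.img bs=64k count=4 2>/dev/null
cp test.img received.img          # stands in for the network copy
md5sum test.img received.img      # identical sums => no corruption
cmp -s test.img received.img && echo "transfer clean"
```

Unlike scp, netcat adds no encryption or application-level checksum, so any corruption introduced below it survives to the receiving side where md5sum can catch it.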
You could enable the "data integrity" option in DRBD
and see if that catches the corruption even earlier
(when receiving the corrupt data block,
not only when trying to parse the next header).
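That option is data-integrity-alg in the net section of the resource configuration; the resource name r0 and the choice of crc32c are placeholders (any digest the kernel crypto API offers, e.g. md5 or sha1, works):

```
resource r0 {
  net {
    data-integrity-alg crc32c;   # checksum every data block over the wire
  }
}
```

With this set, DRBD verifies a checksum over each data block on receive, so a corrupted payload is reported immediately rather than only when the following header fails its magic check.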