[DRBD-user] DRBD 8.4.2: "drbdadm verify" just do not work

Markus Müller drbd2012 at priv.de
Mon Sep 24 16:46:45 CEST 2012

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hi Florian,
>> Does anybody see real benefits of drbd against a solution with nbd
>> and raid1?
>
> From Florian's blog from back in 2009: [...]
thanks for those hints. I think I have some arguments on this; see them 
at the end of the mail.
> I personally would like to see any possible issues fixed rather than 
> cobble together something like that. As such, we've been avoiding using 
> verify at all, because in the 8.3.x trees, using it often caused DRBD 
> to hang and require a full system reboot because the kernel module 
> would become completely unresponsive. I haven't had any problems with 
> it in 8.4 yet, but I'm still a little skittish around the command in 
> general.
>
> I've been invalidating hosts I know are bad. I, too, would like to see 
> verify become reliable and useful long term.
>
If you have a bug and fix it, that is understandable and okay. 
Production-ready software for data replication should not frequently 
have problems or bugs causing inconsistent data, but that is still 
acceptable for free software. What I simply cannot use is data 
replication software whose verification of data integrity is itself 
buggy! This is not a question of whether the software is paid or not; 
it is a question of whether it is usable at all. If an operator gets 
wrong information about the integrity of his data when he explicitly 
tests for it, that disqualifies the software in nearly every use case! 
You cannot rely on software that may tell you everything is okay when 
that is not certain!
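For reference, this is how a verify run is normally driven on 8.4 
(assuming a hypothetical resource name r0); the point being that all of 
this is useless if the result cannot be trusted:

```shell
# Start an online verify of resource r0 (runs in the background):
drbdadm verify r0

# Progress and the out-of-sync (oos:) counter show up in /proc/drbd:
cat /proc/drbd

# Blocks that differ are reported in the kernel log:
dmesg | grep 'Out of sync'

# Verify only marks out-of-sync blocks; a disconnect/connect cycle
# is needed to actually resynchronize them:
drbdadm disconnect r0 && drbdadm connect r0
```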

I think I will really switch to nbd and raid, for these reasons:

Comparable features:
- The file system suffers the same damage with DRBD (protocol A or B) as 
with nbd+raid: if the primary crashes completely or is powered off 
immediately, you have missing data! -> No benefit for DRBD
- Reading only from the local disk while writing to both sides is also 
available with md-raid -> No benefit for DRBD
- Reading from the secondary node when the local node becomes faulty is 
also available with md-raid -> No benefit for DRBD
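For reference, the replication mode mentioned above is chosen per 
resource in the DRBD 8.4 configuration; this is a minimal sketch with 
hypothetical resource, host, and device names:

```
# /etc/drbd.d/r0.res -- sketch only, all names are hypothetical
resource r0 {
  net {
    protocol A;   # A = async, B = semi-synchronous, C = fully synchronous
  }
  on alpha {
    device    /dev/drbd0;
    disk      /dev/sda1;
    address   10.0.0.1:7789;
    meta-disk internal;
  }
  on bravo {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.2:7789;
    meta-disk internal;
  }
}
```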

Features only in DRBD:
- Auto resync / management features / split-brain handling: this is a 
real benefit, but it comes with a bad feeling when you cannot verify it. 
If you want to be sure about your data, you always have to do a full 
resync whenever there might be a problem. You don't have a "high 
availability cluster" if you have to shut everything down and use a 
custom tool to verify that it is in sync, because the built-in tool is 
completely broken. (Protocol C might give you this HA resync benefit 
today, but it is unusable in most use cases because of its bad 
performance (2-3 MByte/sec).)

Features only in nbd+raid:
- Nbd+raid can verify whether it is in sync (echo check > 
/sys/devices/virtual/block/mdX/md/sync_action, then read mismatch_cnt).
- Nbd+raid has much better performance than DRBD.
- Nbd+raid is much easier to configure; you often set up md-raid anyway, 
so you just add a parameter so the remote device is only written to, and 
add it to your raid.
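The setup described above can be sketched roughly like this (host names, 
ports, and device names are hypothetical; exact nbd-server/nbd-client 
invocations vary between versions):

```shell
# On the secondary: export the backing partition over the network.
nbd-server 10809 /dev/sdb1

# On the primary: attach the remote disk as /dev/nbd0 ...
nbd-client secondary-host 10809 /dev/nbd0

# ... and mirror over it; --write-mostly keeps reads on the local disk:
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      /dev/sda1 --write-mostly /dev/nbd0

# Periodic integrity check, as described above:
echo check > /sys/block/md0/md/sync_action
cat /sys/block/md0/md/mismatch_cnt   # 0 means the mirror is consistent
```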

Regards,
Markus Mueller



