[DRBD-user] repeatable, infrequent, loss of data with DRBD

Matthew Vernon mcv21 at cam.ac.uk
Fri Aug 21 01:33:32 CEST 2015


On 21/08/15 00:05, Igor Cicimov wrote:
> On 20/08/2015 6:58 PM, "Matthew Vernon" <mcv21 at cam.ac.uk> wrote:

>  > > Are you sure LVM only uses the DRBD device to write data to, and not the
>  > > backing disk? We've had this issue in the past, caused by LVM, which
>  > > scans all devices for PVs, VGs, and LVs and sometimes picks the wrong
>  > > device. You can fix this by changing the filter in the lvm.conf file.
>  > > If you change it, don't forget to remove the LVM cache file first and
>  > > then rescan everything.
>  >
>  > I'm reasonably sure, yes - I have LVM configured to use drbd devices
> thus:
>  >
>  >     preferred_names = [ "^/dev/drbd" ]
> And since you use dual-primary mode, you also use cLVM and a
> cluster-aware file system like gfs or ocfs2, right?
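For reference, the lvm.conf approach mentioned above typically looks something like this (the backing-disk name is illustrative; adjust the filter to match your actual device):

```
# /etc/lvm/lvm.conf (excerpt) -- device names are hypothetical
devices {
    # When the same PV signature is visible on both the DRBD device and
    # its backing disk, prefer the DRBD name.
    preferred_names = [ "^/dev/drbd" ]

    # Accept only DRBD devices as PVs; reject everything else, including
    # the raw backing disk, so LVM never writes through it directly.
    filter = [ "a|^/dev/drbd|", "r|.*|" ]
}
```

After changing this, remove the LVM cache file (commonly /etc/lvm/cache/.cache, depending on distribution) and rerun pvscan/vgscan so the old device paths are forgotten.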

In my test-script, I don't build a filesystem at all.

In production, the environment is a pacemaker/corosync cluster, and the 
Xen OCF resource agent handles the promotion and demotion of the 
underlying DRBD resource. But again, this bug is biting before all of 
that gets going, when I've only written data from one end.
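A minimal sketch of that kind of write-then-verify test, for anyone wanting to reproduce it: the names here are illustrative, and a scratch file stands in for the real /dev/drbdX device so the sketch is runnable anywhere; in the actual test the read-back would happen against the DRBD device (or on the peer node) after the write.

```shell
#!/bin/sh
# Write-then-verify sketch. DEV would be /dev/drbdX in a real test;
# a temporary file stands in here so the script runs without DRBD.
set -e
DEV=$(mktemp)

# Write 4 MiB of random data straight to the device -- no filesystem,
# matching the test-script described above.
dd if=/dev/urandom of="$DEV" bs=1M count=4 conv=fsync 2>/dev/null

# Checksum what was written, then read it back and checksum again.
before=$(sha256sum "$DEV" | cut -d' ' -f1)
after=$(dd if="$DEV" bs=1M 2>/dev/null | sha256sum | cut -d' ' -f1)

if [ "$before" = "$after" ]; then
    echo "OK: data read back matches"
else
    echo "MISMATCH: $before != $after"
fi
rm -f "$DEV"
```

On a plain file the checksums trivially agree; the point of the real test is that, intermittently, they do not when the write goes through DRBD.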


