[DRBD-user] drbd lost lvm configuration

EDV at bauerfunken.de
Tue Jan 19 15:21:16 CET 2016

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


>On Tue, Jan 19, 2016 at 11:02:35AM +0000, EDV at bauerfunken.de wrote:
>> >> >> I've set up a DRBD cluster of two nodes. Both nodes use a physical
>> >> >> partition of type 8e (Linux LVM), which DRBD syncs successfully.
>> >> >> The output of drbd-overview looks fine, since it says
>> >> >> UpToDate/UpToDate on both nodes.
>> >> >>
>> >> >> Now, when I create a logical volume with pvcreate, vgcreate and
>> >> >> lvcreate on this DRBD device, the volume is created successfully
>> >> >> too, but then I check the configuration the following way:
>> >> >>
>> >> >> Primary node:
>> >> >>
>> >> >> vgchange -a n <vg>
>> >> >> drbdadm secondary <drbd>
>> >> >>
>> >> >> Secondary node:
>> >>
>> >> >You need to 'drbdadm primary res' before anything can read the 
>> >> >/dev/drbdX device.
>> >>
>> >> Sorry, but I did that before changing the state of this volume group.
>> >>
>> >> So I did:
>> >>
>> >> drbdadm primary <drbd>
>> >> vgchange -a y <vg>
>> >>
>> >> When I do a pvscan after switching the primary node, no PVs are
>> >> found, and the same happens after switching the primary back again.
>> >
>> >stale lvm meta data cache (daemon)?
>> >try a pvscan --cache,
>> >see the man page for details.
>> 
>> Thanks for your response.
>> 
>> When I create a new pv and test it with
>> 
>> pvscan --cache
>> 
>> the response on the primary node is:
>> 
>>   Found duplicate PV Trs11CMat90PaBJ2lm4YpzEKVXnDtx0P: using /dev/drbd0 not /dev/cciss/c0d0p4
>>   Using duplicate PV /dev/drbd0 from subsystem DRBD, ignoring /dev/cciss/c0d0p4
>
>You should have filtered out the cciss path.
>Also check the "global_filter" in lvm.conf, not just "filter", and double-check that your initramfs knows about the lvm device filters as well.
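
As far as I understand the initramfs remark, the filter also has to end up
in the initrd, so after editing /etc/lvm/lvm.conf it should be rebuilt.
On openSUSE that would be something like (please correct me if I'm wrong):

  mkinitrd      # regenerate the initrd with the current lvm.conf
  # or call dracut directly, overwriting the existing image:
  dracut -f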

That's a big step forward. I've now set global_filter to reject the cciss path and accept the DRBD devices.
The complete LVM configuration appears on the second node after

pvscan --cache

Without this command, vgchange didn't find anything.
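
For reference, the filter I've set is roughly like this (the regexes
obviously have to match your own device names):

  # first match wins: use the DRBD device, never scan the cciss partition underneath it
  global_filter = [ "a|^/dev/drbd.*|", "r|^/dev/cciss/.*|", "a|.*|" ]

and the manual takeover on the other node currently looks like this:

  drbdadm primary <drbd>    # promote the DRBD resource on this node
  pvscan --cache            # let lvmetad pick up the PV on /dev/drbd0
  vgchange -a y <vg>        # activate the volume group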

But is this the right way?
I want to manage this LVM setup with Pacemaker, so that it switches the primary node when the current primary fails.
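
What I have in mind is the usual DRBD + LVM stack in Pacemaker, roughly
like the following crm shell sketch (the resource name r0 and the volume
group vg0 are only placeholders, I haven't tested this yet):

  primitive p_drbd_r0 ocf:linbit:drbd \
          params drbd_resource="r0" \
          op monitor interval="29s" role="Master" \
          op monitor interval="31s" role="Slave"
  ms ms_drbd_r0 p_drbd_r0 \
          meta master-max="1" master-node-max="1" \
               clone-max="2" clone-node-max="1" notify="true"
  primitive p_lvm_vg0 ocf:heartbeat:LVM \
          params volgrpname="vg0" \
          op monitor interval="30s"
  colocation col_lvm_on_drbd_master inf: p_lvm_vg0 ms_drbd_r0:Master
  order o_drbd_promote_before_lvm inf: ms_drbd_r0:promote p_lvm_vg0:start

The idea is that the volume group may only be activated on the node where
DRBD is currently Master, hence the colocation with the Master role and
the ordering after the promote.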

>> Erasing this new pv returns nothing on the same node.
>> 
>> In lvm.conf I've set
>> 
>> write_cache_state = 0
>> 
>> so I'm wondering why LVM uses a cache.

>see lvmetad(8) and use_lvmetad in lvm.conf(5)
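
If I read that right, write_cache_state only concerns the old on-disk
.cache file; the cache that matters here is the lvmetad daemon, which is
what pvscan --cache refreshes. To take it out of the picture entirely,
something like this should work (unit names as on openSUSE, please correct
me if I'm wrong):

  # in /etc/lvm/lvm.conf, section global:
  #   use_lvmetad = 0
  systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
  systemctl disable lvm2-lvmetad.service lvm2-lvmetad.socket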

>> This configuration is used on openSUSE 42.1 on both nodes.

_______________________________________________
drbd-user mailing list
drbd-user at lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


