Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Have you altered the following option in your lvm.conf so that it lists
the VGs that may be activated? Make sure you add any local VG required
for your system to boot; otherwise your machine will fail to restart,
and you will only notice after a new kernel creates a new initrd.

    volume_list = []

(Sketches of this lvm.conf setting and of the matching Pacemaker
resources follow below the quoted thread.)

Eric Robinson wrote on 25/02/2016 14:49:
> Yes indeed, I am using Pacemaker.
>
> --
> Eric Robinson
> Chief Information Officer
> Physician Select Management, LLC
> 775.885.2211 x 111
>
> From: drbd-user-bounces at lists.linbit.com
>   [mailto:drbd-user-bounces at lists.linbit.com] On Behalf Of Ricardo Branco
> Sent: Thursday, February 25, 2016 1:06 AM
> To: drbd-user at lists.linbit.com
> Subject: Re: [DRBD-user] Having Trouble with LVM on DRBD
>
> Are you using pacemaker?
>
> From: Eric Robinson
> Sent: Thursday, 25 February 2016 08:47
> To: drbd-user at lists.linbit.com
> Subject: [DRBD-user] Having Trouble with LVM on DRBD
>
> I have a 2-node cluster, where each node is primary for one DRBD volume
> and secondary for the other node's DRBD volume. Replication is A->B for
> drbd0 and A<-B for drbd1. I have a logical volume and filesystem on each
> DRBD device. When I try to fail over resources, the filesystem fails to
> mount because lvdisplay shows the logical volume as "not available" on
> the target node. Is there some trick to getting LVM on DRBD to fail over
> properly?
>
> --
> Eric Robinson
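For reference, a minimal sketch of what that lvm.conf setting might look
like in this scenario. The VG names (vg_local for the node's own system
VG, vg_drbd0/vg_drbd1 for the VGs on the DRBD devices) are placeholders,
not names taken from the thread:

    # /etc/lvm/lvm.conf (sketch; VG names are placeholders)
    activation {
        # Only the VGs listed here are activated automatically.
        # vg_local carries the root filesystem, so it must stay
        # in this list or the node will not boot once a new
        # initrd is generated without it.
        volume_list = [ "vg_local" ]
    }

The DRBD-backed VGs (vg_drbd0, vg_drbd1) are deliberately left out so
that nothing activates them outside the cluster manager; the cluster's
LVM resource agent then activates the right VG on whichever node is
primary. After changing lvm.conf, regenerate the initrd so that early
boot sees the same list (for example, dracut -f on RHEL/CentOS or
update-initramfs -u on Debian/Ubuntu).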
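On the Pacemaker side, a hedged sketch of one way to wire this up with
pcs. All names here (ms_drbd0 for the DRBD master/slave resource,
vg_drbd0, lv_data, grp_drbd0, the mount point) are assumptions that do
not appear in the thread; the idea is that the LVM agent activates the
VG only after DRBD has been promoted on that node:

    # pcs sketch; all resource, VG, and LV names are assumptions
    pcs resource create lvm_drbd0 ocf:heartbeat:LVM \
        volgrpname=vg_drbd0 exclusive=true --group grp_drbd0
    pcs resource create fs_drbd0 ocf:heartbeat:Filesystem \
        device=/dev/vg_drbd0/lv_data directory=/data fstype=ext4 \
        --group grp_drbd0
    # Activate the VG (and mount) only where drbd0 is primary.
    pcs constraint order promote ms_drbd0 then start grp_drbd0
    pcs constraint colocation add grp_drbd0 with master ms_drbd0 INFINITY

With this ordering, a failover first promotes DRBD on the target node,
then the LVM agent activates vg_drbd0 there, which is exactly the step
that was missing when lvdisplay reported the LV as "not available".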