Note: "permalinks" may not be as permanent as we would like;
direct links to old sources may well be a few messages off.
I should also mention that node1 and node2 are both virtual machines run
under KVM and managed with libvirt - on the hosts:

qemu-kvm 0.12.3+noroms-0ubuntu9.17
libvirt-bin 0.7.5-5ubuntu27.16

There are two ethernet ports on node1 and node2 - eth0 is for the shared
network and eth1 is the replication link for DRBD (a crossover cable
between the VM hosts). The NICs on the VMs are each connected to a bridge
interface that connects to the VM host's physical NICs.

Thanks,

Andrew

----- Original Message -----
From: "Felix Frank" <ff at mpexnet.de>
To: "Andrew Martin" <amartin at xes-inc.com>
Cc: "drbd-user" <drbd-user at lists.linbit.com>
Sent: Tuesday, January 31, 2012 2:44:47 AM
Subject: Re: [DRBD-user] Removing DRBD Kernel Module Blocks

Hi,

On 01/30/2012 11:27 PM, Andrew Martin wrote:
> this behavior is still present when failing over with the DRBD

what's the scenario here? node2 is master for both drbd0 and drbd1 and
you're trying to fail over using crm resource migrate?

> device. What does the digit after the resource indicate, e.g. the :1 or
> :1_stop_0 below:

Pacemaker internals, I believe.

Can you post the versions of all involved software? Is it possible your
pacemaker and/or cluster-agents are old/broken?

Regards,
Felix
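For context on the host-side networking Andrew describes, a minimal sketch of
the bridge configuration on an Ubuntu VM host of that era could look like the
following; the bridge names (br0, br1) and the addresses are illustrative
assumptions, not details taken from the thread:

    # /etc/network/interfaces on each VM host (bridge-utils syntax;
    # names and addresses below are assumptions)
    auto br0
    iface br0 inet dhcp
        bridge_ports eth0        # shared network

    auto br1
    iface br1 inet static
        address 10.0.0.1         # DRBD replication link (crossover cable)
        netmask 255.255.255.252
        bridge_ports eth1

The VMs' virtual NICs would then be attached to br0 and br1 respectively in
their libvirt domain definitions.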
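To answer Felix's questions, the version information and the failover scenario
could be gathered with commands along these lines (the master/slave resource
name ms_drbd0 and the target node are placeholders, not names given in the
thread):

    # Versions of the involved software (Debian/Ubuntu packaging assumed)
    dpkg -l | grep -E 'drbd|pacemaker|corosync|heartbeat|cluster-agents'
    cat /proc/drbd | head -2               # DRBD module and userland versions

    # Current master placement and the migrate-based failover Felix mentions
    crm_mon -1                             # one-shot cluster status
    crm resource migrate ms_drbd0 node1    # ms_drbd0 is a hypothetical resource name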