Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
* Jean-Francois Malouin <Jean-Francois.Malouin at bic.mni.mcgill.ca> [20101001 02:22]:
> Hi,
>
> A repost from the xen-users list where I got some hints but nothing
> really conclusive.
Well, I'm very surprised that no one has any opinion/idea on this, as I've
spent most of this morning googling for 'Xen drbd vbd'. I've seen
numerous posts/blogs (among the noise) related to this, and I can't
fathom what I'm doing wrong, since I've seen more or less clones of this
setup reported as 'working'...
So, does anyone have an example of how to use a drbd virtual block device
in a Xen domU config so that live migration can be integrated into a
pacemaker cluster? (A rough sketch of what I'm after is included below.)
I refer specifically to:
http://www.drbd.org/users-guide/s-xen-configure-domu.html
Any ideas?
thanks,
jf
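For context, here is roughly the kind of pacemaker setup I'm aiming for, in
crm shell syntax. This is only a sketch pieced together from the drbd users
guide, not a tested config: the resource IDs are made up, the OCFS2/o2cb/DLM
clones and most constraints are left out, and the timeouts would need tuning.

primitive p_drbd_r1 ocf:linbit:drbd \
        params drbd_resource="r1" \
        op monitor interval="20" role="Master" \
        op monitor interval="30" role="Slave"
ms ms_drbd_r1 p_drbd_r1 \
        meta master-max="2" clone-max="2" notify="true" interleave="true"
primitive p_xennode1 ocf:heartbeat:Xen \
        params xmfile="/xen_cluster/r1/xennode-1.cfg" \
        meta allow-migrate="true" \
        op monitor interval="10"
order o_drbd_before_xen inf: ms_drbd_r1:promote p_xennode1:start

The allow-migrate="true" meta attribute is what should make pacemaker move
the domU with a live migration instead of a stop/start.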
>
> I can provide more info upon request but for now I'll try to be brief.
>
> Debian/Squeeze running 2.6.32-5-xen-amd64 (2.6.32-21)
> Xen hypervisor 4.0.1~rc6-1, drbd-8.3.8 and ocfs2-tools 1.4.4-3
> Both Dom0 and DomU are running 2.6.32-5-xen-amd64.
>
> 2 nodes are configured with the following layout:
>
> raid1 --> lv1 -> drbd (r1) -> ocfs2 (mount point /xen_cluster/r1)
>       \-> lv2 -> drbd (r2) -> ocfs2 (mount point /xen_cluster/r2)
>
> The r1 drbd resource config (with the obvious changes for r2):
>
> resource r1 {
>   device     /dev/drbd1;
>   disk       /dev/xen_vg/xen_lv1;
>   meta-disk  internal;
>   startup {
>     degr-wfc-timeout  30;
>     wfc-timeout       30;
>     become-primary-on both;
>   }
>   net {
>     allow-two-primaries;
>     cram-hmac-alg sha1;
>     shared-secret "lucid";
>     after-sb-0pri discard-zero-changes;
>     after-sb-1pri discard-secondary;
>     after-sb-2pri disconnect;
>     rr-conflict   disconnect;
>   }
>   disk {
>     fencing     resource-only;
>     on-io-error detach;
>   }
>   handlers {
>     fence-peer          "/usr/lib/drbd/crm-fence-peer.sh";
>     after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
>     outdate-peer        "/usr/lib/drbd/outdate-peer.sh";
>     split-brain         "/usr/lib/drbd/notify-split-brain.sh root";
>     pri-on-incon-degr   "/usr/lib/drbd/notify-pri-on-incon-degr.sh root";
>     pri-lost-after-sb   "/usr/lib/drbd/notify-pri-lost-after-sb.sh root";
>     local-io-error      "/usr/lib/drbd/notify-io-error.sh malin";
>   }
>   syncer {
>     rate       24M;
>     csums-alg  sha1;
>     al-extents 727;
>   }
>   on node1 {
>     address 10.0.0.1:7789;
>   }
>   on node2 {
>     address 10.0.0.2:7789;
>   }
> }
>
> One domU is configured, with a file-backed disk and swap image:
>
> root = '/dev/xvda2 ro'
> disk = [ 'file:/xen_cluster/r1/disk.img,xvda2,w',
>          'file:/xen_cluster/r2/swap.img,xvda1,w',
>        ]
>
> /xen_cluster/r{1,2} are OCFS2 filesystems on top of the 2 drbd resources,
> r1 and r2, primary on both nodes. The drbd backing devices are LVs on top
> of an md raid1 mirror. With this I can create a DomU and do live
> migration between the 2 nodes. Good.
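(For what it's worth, the live migration above is just the stock xm
invocation. 'xennode-1' is the domU name I'm assuming from the config file
name, and xend-relocation-server has to be enabled in
/etc/xen/xend-config.sxp on both dom0s:)

# on node1, move the running domU to node2 without shutting it down
xm migrate --live xennode-1 node2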
>
> I ultimately want this running under pacemaker/corosync/openais.
> So, following the drbd users guide (OCFS2 and Xen sections), I modified
> this to:
>
> root = '/dev/xvda2 ro'
> disk = [ 'drbd:r1,xvda2,w',
>          'drbd:r2,xvda1,w',
>        ]
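(As I understand it, the 'drbd:' disk type hands the resource name to DRBD's
block-drbd Xen helper script, which uses drbdadm to resolve it to its
/dev/drbdN device and to promote it if necessary. A quick sanity check that
the names resolve on the node where the domU is created; the expected device
names are my assumption:)

# should print the backing device of each resource, e.g. /dev/drbd1 for r1
drbdadm sh-dev r1
drbdadm sh-dev r2
# both resources should already report Primary/Primary here
drbdadm role r1
drbdadm role r2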
>
> Trying to recreate the DomU leads to:
>
> xm create -c xennode-1.cfg
> Using config file "/xen_cluster/r1/xennode-1.cfg".
> Error: Device 51714 (vbd) could not be connected. Hotplug scripts not working.
>
> Not quite sure what to do now.
>
> Thanks for your input.
> jf
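As far as I can tell, that 'Hotplug scripts not working' error for a drbd:
vbd usually means the block-drbd helper is not where xend expects it, or it
fails before it can report the device back through xenstore. Rough checks,
assuming the stock Xen paths on Squeeze (I have not verified where the
Debian drbd packages put the helper):

# the DRBD Xen block helper has to be present and executable here
ls -l /etc/xen/scripts/block-drbd
# if it is present, its failures should show up in the hotplug/xend logs
tail -n 50 /var/log/xen/xen-hotplug.log
grep -i drbd /var/log/xen/xend.log | tail -n 20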