[DRBD-user] confused about using DRBD VBDs with Xen

Sauro Saltini saltini at shc.it
Tue Oct 5 04:15:51 CEST 2010

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


As far as I understand it, you're trying to replace a "file:" type vbd 
with a "drbd:" one... this simply can't work.

"drbd:" block devices are intended to replace "phy:" vbds with the 
addition of automatical management of promotion / demotion of the drbd 
resource when the corresponding Xen resources are migrated by the cluster.

with this xen configuration:

disk  = [ 'drbd:r1,xvda2,w',
           'drbd:r2,xvda1,w',
]

you're telling Xen that its devices (xvda1 & 2) are the whole drbd 
resource... which is clearly false, as you have a filesystem on top of 
drbd, containing the Xen "file:" vbds.

Leaving aside the automatic promotion/demotion part, this is the same 
as writing:

disk  = [ 'phy:/dev/drbd1,xvda2,w',
           'phy:/dev/drbd2,xvda1,w',
]
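
(As a side note, the "drbd:" prefix is handled by the block-drbd helper 
script that DRBD ships for Xen; very roughly, and this is only a 
simplified sketch rather than the real script, it does something like:

  RES=r1                         # resource name taken from 'drbd:r1'
  drbdadm primary "$RES"         # promote the resource on this node
  DEV=$(drbdadm sh-dev "$RES")   # print the device name, e.g. /dev/drbd1
  # $DEV is then handed to Xen just like a 'phy:' vbd

so the drbd resource itself has to be the guest's block device, not a 
filesystem holding image files. In your case /dev/drbd1 is already 
mounted as OCFS2, which is presumably why the hotplug script fails.)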


You're mixing two different setups... you can go for one of these:

1) drbd resource configured as dual primary + OCFS (or another cluster 
filesystem) mounted as file storage on both nodes + file vbds on the 
OCFS filesystem + Xen using "file:" type resources.
In this configuration you don't need distinct drbd resources: simply go 
for a single drbd resource with one big OCFS filesystem on it, 
containing all the file images for your different guests. Both nodes 
keep the resource primary all the time, so migrating a guest doesn't 
involve any drbd promotion/demotion at all (example disk lines for both 
setups follow after option 2).

2) drbd resources configured as primary/secondary (allowing dual 
primary only if you need live migration of the Xen guests, and only for 
the duration of the migration) + physical vbds + Xen using "drbd:" type 
resources (or "phy:" if you want to do the promotion/demotion step 
within your crm, configuring distinct Xen and drbd resources at cluster 
level).
In this setup you need distinct drbd resource/s for each of your Xen 
guests, as you can have some guests running on node1 and the others on 
node2: migrating a guest involves the promotion/demotion of its related 
drbd resources, and the guest uses the drbd devices directly, with no 
filesystem on top of them.
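
To make the difference concrete, this is roughly what the disk lines 
could look like in the two setups (image paths and resource names here 
are made-up examples):

# setup 1: file images on a single dual-primary drbd resource + OCFS
disk  = [ 'file:/xen_cluster/shared/guest1-disk.img,xvda2,w',
          'file:/xen_cluster/shared/guest1-swap.img,xvda1,w',
]

# setup 2: one dedicated drbd resource per guest disk, used directly
disk  = [ 'drbd:guest1-disk,xvda2,w',
          'drbd:guest1-swap,xvda1,w',
]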

My setup of choice would be n.2 (for performance reasons) but, as 
you've already done the whole thing as described in n.1, you only need 
to have your Xen guests managed by your crm, adding them as ocf:Xen 
resources (a rough example follows below). No need to (in fact you 
can't) use "drbd:", as your vbds are file based.
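
For the crm part, a rough sketch could be something like this (the 
resource name and timeouts are only examples, and you'll still want the 
usual ordering/colocation constraints against your cloned drbd and 
OCFS2 resources):

crm configure primitive vm_xennode1 ocf:heartbeat:Xen \
        params xmfile="/xen_cluster/r1/xennode-1.cfg" \
        meta allow-migrate="true" \
        op monitor interval="30s" timeout="30s"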

Sauro.


On 01/10/2010 17:55, Jean-Francois Malouin wrote:
> * Jean-Francois Malouin<Jean-Francois.Malouin at bic.mni.mcgill.ca>  [20101001 02:22]:
>    
>> Hi,
>>
>> A repost from the xen-users list where I got some hints but nothing
>> really conclusive.
>>      
> Well, I'm very surprised no one has any opinion/idea on this, as I've
> spent most of this morning googling for 'Xen drbd vbd' and I've seen
> numerous (among the noise) posts/blogs related to this, and I can't
> fathom what I'm doing wrong, as I've seen more or less clones of this
> setup reported as 'working'...
>
> So, does anyone have an example of how to use a drbd virtual block
> device in a Xen domU config so that live migration can be integrated
> in a pacemaker cluster? I refer specifically to:
> http://www.drbd.org/users-guide/s-xen-configure-domu.html
>
> Any ideas?
> thanks,
> jf
>
>    
>> I can provide more info upon request but for now I'll try to be brief.
>>
>> Debian/Squeeze running 2.6.32-5-xen-amd64 (2.6.32-21)
>> Xen hypervisor 4.0.1~rc6-1, drbd-8.3.8 and ocfs2-tools 1.4.4-3
>> Both Dom0 and DomU are running 2.6.32-5-xen-amd64.
>>
>> 2 nodes are configured with the following layout:
>>
>> raid1 -->  lv1 ->  drbd (r1) ->  ocfs2 (mount point /xen_cluster/r1)
>>        \->  lv2 ->  drbd (r2) ->  ocfs2 (mount point /xen_cluster/r2)
>>
>> The r1 drbd resource config (with the obvious changes for r2):
>>
>> resource r1 {
>>     device /dev/drbd1;
>>     disk /dev/xen_vg/xen_lv1;
>>     meta-disk internal;
>>     startup {
>>         degr-wfc-timeout 30;
>>         wfc-timeout 30;
>>         become-primary-on both;
>>     }
>>     net {
>>         allow-two-primaries;
>>         cram-hmac-alg sha1;
>>         shared-secret "lucid";
>>         after-sb-0pri discard-zero-changes;
>>         after-sb-1pri discard-secondary;
>>         after-sb-2pri disconnect;
>>         rr-conflict disconnect;
>>     }
>>     disk {
>>         fencing resource-only;
>>         on-io-error detach;
>>     }
>>     handlers {
>>         fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
>>         after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
>>         outdate-peer "/usr/lib/drbd/outdate-peer.sh";
>>         split-brain "/usr/lib/drbd/notify-split-brain.sh root";
>>         pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh root";
>>         pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh root";
>>         local-io-error "/usr/lib/drbd/notify-io-error.sh malin";
>>     }
>>     syncer {
>>         rate 24M;
>>         csums-alg sha1;
>>         al-extents 727;
>>     }
>>     on node1 {
>>         address 10.0.0.1:7789;
>>     }
>>     on node2 {
>>         address 10.0.0.2:7789;
>>     }
>> }
>>
>> One domU configured, with file disk and swap image:
>>
>> root  = '/dev/xvda2 ro'
>> disk  = [ 'file:/xen_cluster/r1/disk.img,xvda2,w',
>>            'file:/xen_cluster/r2/swap.img,xvda1,w',
>> ]
>>
>> /xen_cluster/r{1,2} are OCFS2 filesystems on top of 2 drbd resources,
>> r1 and r2, primary on both nodes. drbd backing devices are LVs on top
>> of a md raid1 mirror. With this I can create a DomU and do live
>> migration between the 2 nodes. good.
>>
>> I want this to be ultimately running under pacemaker/corosync/openais.
>> So following the drbd users guide (OCFS2 and Xen sections) I modified
>> this to:
>>
>> root  = '/dev/xvda2 ro'
>> disk  = [ 'drbd:r1,xvda2,w',
>>            'drbd:r2,xvda1,w',
>> ]
>>
>> Trying to recreate the DomU leads to:
>>
>> xm create -c xennode-1.cfg
>> Using config file "/xen_cluster/r1/xennode-1.cfg".
>> Error: Device 51714 (vbd) could not be connected. Hotplug scripts not working.
>>
>> Not quite sure what to do now.
>>
>> Thanks for the inputs.
>> jf


