[DRBD-user] trying to use the block-drbd script for xen backend

Tom Georgoulias tomg at mcclatchyinteractive.com
Tue Nov 6 15:56:29 CET 2007

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hello,

I am new to the list and to DRBD, so I ask for your patience if my
questions have been answered elsewhere or seem obvious.  I do not know
whether my configuration, actions, or expectations are wrong, and I
would like some guidance from those with more experience.

I am trying to set up a DRBD backend for a Xen virtual machine (VM)
using the "block-drbd" script included in the 8.0.6 release, but I find
that my DRBD device is moved to a Secondary state on both servers after
I finish the initial OS install of the VM on the DRBD device.  As a
result, I cannot start the VM after the OS install and instead see this
error:

[root@radm012d xen]# xm create rusr007d -c
Using config file "./rusr007d".
No handlers could be found for logger "xend"
Error: Disk isn't accessible

This is the disk line I have in /etc/xen/rusr007d:

disk = [ 'drbd:r0,xvda,w', ]
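
For context, the rest of the VM config file is nothing unusual; trimmed
down, it looks roughly like this (the memory size and bootloader line
here are just illustrative, not my exact values):

name       = "rusr007d"
memory     = 1024                      # illustrative value
vif        = [ '' ]
disk       = [ 'drbd:r0,xvda,w', ]
bootloader = "/usr/bin/pygrub"         # illustrative; standard RHEL5 PV boot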

This is what I see in /var/log/messages immediately after the VM is
installed (using kickstart) and is ready for the post-install reboot:

==> /var/log/messages <==
Nov  6 09:27:28 radm012d kernel: drbd0: role( Primary -> Secondary )
Nov  6 09:27:28 radm012d kernel: drbd0: Writing meta data super block now.

When I promote the server back to the Primary role and try to start the
VM, I get the same "Disk isn't accessible" error, but the device stays
in the Primary state:

[root@radm012d ~]# drbdadm state r0
Secondary/Secondary
[root@radm012d ~]# drbdadm primary r0
[root@radm012d ~]# drbdadm state r0
Primary/Secondary
[root@radm012d ~]# xm create rusr007d -c
Using config file "/etc/xen/rusr007d".
No handlers could be found for logger "xend"
Error: Disk isn't accessible
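
For what it's worth, this is roughly what I check on both nodes before
retrying (the expected values are what I believe they should be, quoted
from memory):

drbdadm state r0    # Primary/Secondary here, Secondary/Primary on the peer
drbdadm cstate r0   # expect Connected
drbdadm dstate r0   # expect UpToDate/UpToDate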

I'm using two IBM blade servers (model 8853) running RHEL5 with the
latest errata kernel, 2.6.18-8.1.15.el5xen.  The blades are in the same
BladeCenter and have gigabit Ethernet connections, so there is plenty
of bandwidth between them for the DRBD network pings and sync.  I used
the drbd 8.0.4 and kmod-drbd 8.0.4 spec files from the CentOS project
SRPMs together with the 8.0.6 tarball from drbd.org to build new 8.0.6
RPMs for my servers, and everything compiled without any problems, so I
think I have everything installed properly.  I do not have Heartbeat
installed or configured at this time, although I plan to add it once I
am comfortable with DRBD and with using it for my VMs.
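
In case the packaging matters, the build went roughly like this (the
paths and spec file names are from memory and may not match the CentOS
SRPMs exactly):

rpm -ivh drbd-8.0.4-*.src.rpm kmod-drbd-8.0.4-*.src.rpm
cp drbd-8.0.6.tar.gz /usr/src/redhat/SOURCES/
# bump the Version: tags in the spec files to 8.0.6, then:
rpmbuild -bb /usr/src/redhat/SPECS/drbd.spec
rpmbuild -bb /usr/src/redhat/SPECS/kmod-drbd.spec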

The blades have two disks that are combined into a RAID1 device using
software RAID, and the DRBD device sits directly on top of that.  The
stack looks like this:

  drbd0
   md2
sda3 sdb3
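
If it matters, the stack was put together in the usual way, roughly as
follows (from memory; everything runs on both blades except the last
step):

mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
drbdadm create-md r0
drbdadm up r0
# on one node only, to kick off the initial full sync:
drbdadm -- --overwrite-data-of-peer primary r0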

I created a drbd.conf from the example included with the software and
didn't make many changes except where it was necessary for my
environment.  I only have one resource (r0) defined.
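
The resource section is essentially the stock example with my devices
and hosts substituted; it looks roughly like this (the peer hostname
and the IP addresses below are placeholders, and I am using internal
metadata):

resource r0 {
  protocol C;
  on radm012d {
    device    /dev/drbd0;
    disk      /dev/md2;
    address   10.0.0.1:7788;     # placeholder address
    meta-disk internal;
  }
  on radm013d {                  # placeholder peer hostname
    device    /dev/drbd0;
    disk      /dev/md2;
    address   10.0.0.2:7788;     # placeholder address
    meta-disk internal;
  }
}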

Also, I monitored /proc/drbd on both servers while the VM's OS was
being installed (using "watch cat /proc/drbd") and saw network and disk
activity on both ends, so I think my DRBD device is replicating as it
should.
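
Specifically, I had this running on each node and watched the transfer
and write counters climb on both sides while the installer wrote to the
disk:

watch cat /proc/drbd
# node running the install: ns (network send) and dw (disk write) increase
# peer node: nr (network receive) and dw (disk write) increase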

Can anyone provide any tips or suggestions on why this isn't working for 
me?  I can provide more details from syslog or drbd.conf if needed.

Thanks in advance,

Tom


