[DRBD-user] Notes on DRBD w/ HA on Xen guest (FC5) for NFS

pv vishnubhatt at gmail.com
Wed Jul 5 01:51:51 CEST 2006

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


After a few trials and hiccups, I was able to install an HA NFS system between two 
guest domains. Effectively, it involved the following: 

1) Used FC5 as the OS for both the host (dom0) and the guest domains (domUs).

2) Downloaded and built the Xen hypervisor and Xen kernels from xen-unstable 
(base kernel 2.6.16.13), then installed them.
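
   For reference, the xen-unstable build of that era went roughly as follows - a minimal 
   sketch; the exact kernel version string below is hypothetical and depends on the tree 
   you check out:

    # in the checked-out xen-unstable tree, on the FC5 box that becomes dom0
    make world              # builds the hypervisor, the tools and the xen0/xenU kernels
    make install            # installs under /boot, /lib/modules and /etc/xen
    depmod 2.6.16.13-xen0   # use whatever version string 'make world' actually produced
    mkinitrd /boot/initrd-2.6.16.13-xen0.img 2.6.16.13-xen0
    # then add a xen.gz + vmlinuz-...-xen0 entry to grub.conf by hand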

3) I tried to build the DRBD (v0.7.19) module for the guest domains on the host (again 
using FC5 and the same kernel). Although I saw some posts saying that building with 
ARCH=xen would make it work, after a few trials I gave up, copied the sources into the 
guest domain, and built and installed DRBD there instead. However, as part of the 
installation I expected the device nodes drbd0 through drbd8 (or at least two of them) 
to be created; they were not, so I had to create them manually with mknod, using 147 as 
the major number. For the data and metadata disks/devices, I created a volume group in 
which I created 4 logical volumes - two of 3 GB each and two of 150 MB each. The two 
small ones hold the metadata for each guest domain, and the two larger ones are used 
for the NFS exports (data).

4) Installed Heartbeat v2.0. Again I tried to build this on the host and copy it onto 
the guest domains; it did not quite work (it looks like ARCH=xen was a flag that was 
once used but no longer exists), so I built, installed and configured it within the two 
guest domains, fedora1 and fedora2 - the two servers that are also wired together with 
DRBD.
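
   The Heartbeat side boils down to two small files; a minimal sketch, where the 
   interface name, the timings and the shared secret are assumptions:

    # /etc/ha.d/ha.cf  -- identical on fedora1 and fedora2
    logfacility local0
    keepalive 2
    deadtime 10
    bcast eth1                  # dedicated cluster/control interface (assumed)
    auto_failback off
    node fedora1
    node fedora2

    # /etc/ha.d/authkeys  -- must be mode 0600
    auth 1
    1 sha1 SomeSharedSecret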

5) Downloaded nfs-utils (not the one from RH) and installed it in both guest domains. 
Configured the exports to export the volume that resides on the DRBD block device 
(which internally uses the logical volumes created in the host/dom0).
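
   The export itself is a one-liner; the mount point and the client network below are 
   assumptions (the filesystem on /dev/drbd0 is mounted there by Heartbeat - see the 
   haresources sketch under step 7):

    # /etc/exports on both guests (only the active node actually serves it)
    /export   192.168.10.0/24(rw,sync,no_root_squash)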

6) In order to wire the two guest domains up as a cluster, I created vifs in Xen on two 
bridges and created routes between the two guests to provide the control/cluster 
management path.
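
   In the domU config files this amounts to something like the following sketch; the 
   bridge names, device names and addresses are assumptions, and the second bridge 
   carries the Heartbeat/DRBD traffic:

    # /etc/xen/fedora1 (fedora2 analogous, with data2/meta2 and its own addresses)
    vif  = [ 'bridge=xenbr0', 'bridge=xenbr1' ]      # eth0 = public, eth1 = cluster path
    disk = [ 'phy:/dev/vg_drbd/data1,hdb1,w',
             'phy:/dev/vg_drbd/meta1,hdc1,w' ]

    # in dom0: the second bridge has to exist before the guests start
    brctl addbr xenbr1
    ifconfig xenbr1 up

    # inside each guest: bring up the cluster interface (static addressing assumed)
    ifconfig eth1 192.168.10.1 netmask 255.255.255.0   # .2 on fedora2
    # (if the control path spans different subnets, add matching 'route add' entries)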

7) Got HA working with a cluster resource/IP address.
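
   With Heartbeat 2.0 still running in v1 (haresources) mode, the whole resource group 
   fits on one line; a sketch, where the service IP, the mount point and the name of the 
   NFS init script are assumptions:

    # /etc/ha.d/haresources  -- identical on both nodes; fedora1 is the preferred node
    fedora1 drbddisk::r0 Filesystem::/dev/drbd0::/export::ext3 IPaddr::192.168.1.100/24/eth0 nfs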

8) Before doing the above, I made sure DRBD was up and running, and relied on Heartbeat 
to start NFS.
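
   Concretely that means letting the drbd init script run at boot and taking nfs out of 
   the normal runlevels so only Heartbeat starts it. A sketch - the nfs init-script name 
   is an assumption, and --do-what-I-say is the DRBD 0.7 way of forcing the first node 
   primary for the initial sync:

    # on both guests
    chkconfig drbd on
    chkconfig nfs off            # Heartbeat, not init, starts the NFS service
    chkconfig heartbeat on

    # on the node that should hold the good data, once both sides show cs:Connected
    cat /proc/drbd
    drbdadm -- --do-what-I-say primary r0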

9) Simple writes and creates worked fine: whichever guest domain (fedora1 or fedora2) 
held the cluster interface also had the NFS export visible after each failover/failback. 
I did the failover simulation from dom0 using xm (un)pause <fedora1/fedora2> for either 
of the two guest domains.
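
   The simulation from dom0 is simply the following; Heartbeat on the surviving guest 
   declares its peer dead after deadtime and takes over the IP, /dev/drbd0 and the export:

    xm pause fedora1      # freeze the currently active guest to simulate a crash
    # ... verify from an NFS client that the export is still reachable ...
    xm unpause fedora1    # bring it back; failback behaviour follows auto_failback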

10) Next to try migration...

--
ps: I figured someone might go through these steps (although there are quite a few 
references to similar efforts on Debian and on straight hardware servers, I figured I'd 
preserve the steps I went through to bring up this system in Xen, for posterity and as 
evidence that it did work!).




