[DRBD-user] DRBD Xen LVM2 Heartbeat Ubuntu Gutsy HowTo

José E. Colón jose.colon at gae.cayey.upr.edu
Thu May 15 05:07:23 CEST 2008



Hi all.

 I'm planning on posting this at HowToForge, but first I would greatly
 appreciate feedback, comments, suggestions, and testing from the
 members of the DRBD-User list. Thanks in advance.

 ******** Begin HowTo **********

 This HowTo explains the setup of a redundant and highly available Xen
 cluster using DRBD and Heartbeat atop Ubuntu Server 7.10 (Gutsy).
 After following these steps, you should have a configuration where, if
 one node has to go down, the other node will take over the Xen virtual
 machines (VMs) with no downtime thanks to live migration. No SAN, no
 hefty license fees, no clustering filesystem required at all!

 0. Pre-Requisites

 You will need two servers (physical hardware boxes, not virtual
 servers) with at least two NICs. One NIC connects to the network and
 the other has a crossover cable directly linking to the peer server.
 Preferably, these boxes should have multi-core processors from AMD or
 Intel with virtualization hardware built-in.

 You'll also need the Ubuntu Server 7.10 Gutsy Gibbon ISO image burned
 on a CD for the OS install to bare metal.

 1. The Operating System - Ubuntu Server 7.10 "Gutsy Gibbon"

 Follow the steps in the HowTo "The Perfect Server - Ubuntu Gutsy
 Gibbon (Ubuntu 7.10)" up to step 9 on page 3 to have a minimal base
 Gutsy server running on your two hardware nodes.

 I'll call one node aragorn and the other legolas. aragorn has eth0
 with IP 10.0.0.1 and eth1 with IP 192.168.0.1 . legolas has eth0 with
 10.0.0.2 and eth1 with 192.168.0.2 . On both nodes, eth0 is the
 network NIC and eth1 is the crossover cable NIC.
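
 For reference, a minimal /etc/network/interfaces on aragorn could look
 like the sketch below (the gateway address is just a placeholder for
 your real LAN gateway; legolas mirrors this with the .2 addresses):

 auto lo
 iface lo inet loopback

 # eth0: LAN-facing NIC
 auto eth0
 iface eth0 inet static
         address 10.0.0.1
         netmask 255.255.255.0
         gateway 10.0.0.254

 # eth1: crossover link to the peer node
 auto eth1
 iface eth1 inet static
         address 192.168.0.1
         netmask 255.255.255.0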

 2. Prepare the Base System and Build Environment

 sudo apt-get update
 sudo aptitude update
 sudo apt-get install flex build-essential bridge-utils iproute udev \
 libssl-dev libx11-dev zlib1g-dev gettext lvm2 heartbeat-2 openssl \
 libncurses-dev libc6-xen mercurial gawk
 cd ~
 mkdir src
 cd src
 hg clone http://xenbits.xensource.com/xen-3.2-testing.hg
 hg clone http://xenbits.xensource.com/linux-2.6.18-xen.hg
 wget http://oss.linbit.com/drbd/8.2/drbd-8.2.5.tar.gz
 tar -xzf drbd-8.2.5.tar.gz

 3. Build and Install Xen

 cd xen-3.2-testing.hg
 make xen
 make kernels
 make tools

 You can modify the Xen kernel configs to suit special hardware needs by running

 make linux-2.6-xen-config CONFIGMODE=menuconfig
 make linux-2.6-xen-build

 and for Dom0 and DomU

 make linux-2.6-xen0-config CONFIGMODE=menuconfig
 make linux-2.6-xen0-build
 make linux-2.6-xenU-config CONFIGMODE=menuconfig
 make linux-2.6-xenU-build

 then run the first set of make commands shown above. Finally, install Xen.

 sudo make install-xen
 sudo make install-kernels
 sudo make install-tools

 4. Fine Tuning for the New Kernel

 cd /lib/modules
 sudo depmod
 sudo mkinitramfs -o /boot/initrd.img-2.6-xen 2.6.18.8-xen

 You may get some strange messages after the mkinitramfs but everything
 should still work fine. Now edit your GRUB boot loader.

 sudo vi /boot/grub/menu.lst

 and add the Xen kernel stanza (sample shown below; modify it to reflect your setup)

 title           Xen 3.2.1 / XenLinux 2.6
 kernel          /xen.gz console=vga
 module          /vmlinuz-2.6.18.8-xen root=/dev/sda2 ro console=tty0
 module          /initrd.img-2.6-xen

 5. Configure the Xen Hypervisor

 You need to modify the main xend configuration script to attach Xen's
 bridge to the right Ethernet device, configure relocation (live
 migration) settings, and control Dom0 cpu settings.

 sudo vi /etc/xen/xend-config.sxp

 Here are my relevant lines from the file

 (dom0-cpus 2)
 (network-script 'network-bridge netdev=eth0')
 (xend-relocation-server yes)
 (xend-relocation-port 8002)
 (xend-relocation-address '192.168.0.1')
 (xend-relocation-hosts-allow '^192.168.0.2$ ^localhost$
 ^localhost\\.localdomain$')

Note that the last entry (xend-relocation-hosts-allow) goes all on one line.
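
 The xend-config.sxp on legolas mirrors these relocation settings; with
 the addresses used in this HowTo, the two lines that change on that
 node would be:

 (xend-relocation-address '192.168.0.2')
 (xend-relocation-hosts-allow '^192.168.0.1$ ^localhost$ ^localhost\\.localdomain$')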

 6. Reboot and Test Xen

 sudo reboot

 [...cross fingers, breathe, wait...]

 sudo /etc/init.d/xend start
 sudo xm list -v

 You should see that Dom0 is only using the number of CPUs you specified
 in the config file. To pin Dom0 to specific CPUs (cores), you can run

 sudo xm vcpu-pin 0 0 0
 sudo xm vcpu-pin 0 1 0

 You can add these commands to your /etc/rc.local to pin the CPUs at each boot.
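
 For example, a minimal /etc/rc.local carrying the pinning commands
 shown above would be:

 #!/bin/sh -e
 # Pin Dom0's two VCPUs to physical CPU 0 at every boot.
 xm vcpu-pin 0 0 0
 xm vcpu-pin 0 1 0
 exit 0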

 7. Build and Install DRBD

 We build DRBD as a kernel module with

 cd ~/src/drbd-8.2.5/drbd
 sudo make clean all
 cd ..
 sudo make install

 Maybe this is optional, but just in case:

 cd /lib/modules
 sudo depmod
 sudo rm /boot/initrd.img-2.6-xen
 sudo mkinitramfs -o /boot/initrd.img-2.6-xen 2.6.18.8-xen

 8. Configure Low-level Storage (LVM2)

 First you need a big disk or partition to serve as base for your LVM2
 playground.

 sudo fdisk /dev/sda

 Create a partition and set its type to 8e (Linux LVM). You may need to
 reboot after this step.

 Create a Physical Volume, Volume Group, and Logical Volumes for the
 test virtual machine.

 sudo pvcreate /dev/sda4
 sudo vgcreate vgxen /dev/sda4
 sudo lvcreate -n vm1hd -L 4G vgxen
 sudo lvcreate -n vm1sw -L 2G vgxen

 You can view LVM details with the pvdisplay, vgdisplay, and lvdisplay commands.
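
 For example, to double-check what you just created:

 sudo pvdisplay /dev/sda4
 sudo vgdisplay vgxen
 sudo lvdisplay /dev/vgxen/vm1hd /dev/vgxen/vm1sw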

 9. Configure DRBD

 The file /etc/drbd.conf is the main configuration file for DRBD and
 should be identical on both nodes. Here's a sample:

 global {
  usage-count yes;
 }
 common {
  protocol C;
  syncer { rate 100M; }
  net {
    allow-two-primaries;
    after-sb-0pri discard-younger-primary;
    after-sb-1pri consensus;
    after-sb-2pri call-pri-lost-after-sb;
    cram-hmac-alg sha1;
    shared-secret "DRBDRocks";
  }
  handlers {
    pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
    pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
  }
 }
 resource vm1hd {
  device    /dev/drbd0;
  disk      /dev/vgxen/vm1hd;
  meta-disk internal;
  on aragorn {
    address   192.168.0.1:7788;
  }
  on legolas {
    address   192.168.0.2:7788;
  }
 }
 resource vm1sw {
  device    /dev/drbd1;
  disk      /dev/vgxen/vm1sw;
  meta-disk internal;
  on aragorn {
    address   192.168.0.1:7789;
  }
  on legolas {
    address   192.168.0.2:7789;
  }
 }

 There; that wasn't so bad, was it? ;) Remember to have this same
 file on both nodes.
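
 One simple way to keep the file identical (assuming your user account
 also exists on legolas and can sudo there) is to push it across:

 scp /etc/drbd.conf legolas:/tmp/drbd.conf
 ssh -t legolas sudo mv /tmp/drbd.conf /etc/drbd.conf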

 10. Starting up DRBD and Initializing the Resources

 As with all the steps before, but now especially important to keep both
 nodes in sync, run the following on both nodes (you may need to answer
 yes if prompted):

 sudo /etc/init.d/drbd start
 sudo drbdadm create-md vm1hd
 sudo drbdadm create-md vm1sw
 sudo drbdadm up vm1hd
 sudo drbdadm up vm1sw

 NOW ON THE PRIMARY NODE ONLY:

 sudo drbdadm -- --overwrite-data-of-peer primary vm1hd
 sudo drbdadm -- --overwrite-data-of-peer primary vm1sw

 To check the initial synchronization you can run:

 sudo cat /proc/drbd

 Even though synchronization may still be incomplete, you can use the
 resources right away.
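
 To follow the sync progress as it happens, something like this works:

 watch -n1 cat /proc/drbd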

 11. Setup and Start a Xen Virtual Machine (a.k.a. guest or DomU)

 sudo mkfs.ext3 /dev/drbd0
 sudo mkswap /dev/drbd1
 sudo mount /dev/drbd0 /mnt

 Now you can install your OS of choice to /mnt. I used debootstrap to
 install a Gutsy VM (a rough sketch follows below). After which you can:

 sudo umount /mnt
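
 For reference, the debootstrap install I glossed over went roughly like
 the sketch below; run it before the umount, and treat the mirror URL
 and the fix-ups (hostname, fstab, root password) as illustrative only.
 You'll still want to sort out networking inside the guest yourself.

 # Base Gutsy install onto the mounted DRBD device
 sudo debootstrap gutsy /mnt http://archive.ubuntu.com/ubuntu
 # Minimal fix-ups inside the new root
 echo vm1 | sudo tee /mnt/etc/hostname
 printf '/dev/sda1 / ext3 defaults 0 1\n/dev/sda2 none swap sw 0 0\n' \
   | sudo tee /mnt/etc/fstab
 sudo chroot /mnt passwd root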

 Create a config file for the new VM. Here's a sample in /etc/xen/vm1.sxp

 name = "vm1"
 kernel = "/boot/vmlinuz-2.6.18.8-xen"
 ramdisk = "/boot/initrd.img-2.6-xen"
 root = "/dev/sda1 ro"
 memory = 1024
 disk = ['drbd:vm1hd,sda1,w','drbd:vm1sw,sda2,w']
 vcpus = 1
 cpus = "2,3"

 # Network.
 hostname = "vm1"
 vif = ['mac=YOUR_MAC_HERE']
 dhcp = "off"
 ip = "10.0.0.50"
 netmask = "255.255.255.0"
 gateway = "10.0.0.1"
 extra = 'xencons=tty'

 Note the usage of DRBD-backed Virtual Block Devices (VBDs) on the line
 beginning with "disk=". This is the key to the live migration Black
 Magic! The VM is also restricted to run on CPUs 2 and 3 only, and Xen
 decides where to run it among those depending on load.
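
 The 'drbd:' VBD type relies on the block-drbd helper script that ships
 with DRBD and gets installed alongside the other Xen block scripts;
 assuming the default install paths, you can confirm it's there with:

 ls -l /etc/xen/scripts/block-drbd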

 And to start the VM automatically at each boot:

 cd /etc/xen/auto
 sudo ln -s ../vm1.sxp

 Boot it up! ON PRIMARY NODE:

 sudo xm create -c /etc/xen/auto/vm1.sxp

 Use Ctrl+] to exit the guest console. To view the status of Xen
 with the new VM running:

 sudo xm list -v

 also

 sudo xm top

 12. Setup Heartbeat for High Availability

 As the Heartbeat docs state, you need three files to make it all work.
 The ha.cf file is different on each node since it contains reciprocal
 settings that are specific to each machine's configuration. The
 remaining two files (authkeys and haresources) are identical on both
 nodes. Here's a sample /etc/ha.d/ha.cf for aragorn (on legolas, the
 ucast lines point at aragorn's addresses instead):

 autojoin none
 auto_failback off
 ucast eth0 10.0.0.2
 ucast eth1 192.168.0.2
 warntime 5
 deadtime 15
 initdead 60
 keepalive 2
 node aragorn
 node legolas

 And the /etc/ha.d/authkeys file which has only two lines:

 auth 1
 1 sha1 SOME_VERY_LONG_SHA_HASH

 You can generate a suitable random secret with (the sha1 keyword above
 names the HMAC algorithm Heartbeat uses; the key itself is just a
 shared secret):

 dd if=/dev/urandom bs=512 count=1 | openssl md5

 The authkeys file should be chmod 0600 and chown root:root for
 Heartbeat to be happy. It should list like this in ls -l :

 -rw------- 1 root root  [...snip...] /etc/ha.d/authkeys
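
 If you created the file by hand, set the ownership and permissions with:

 sudo chown root:root /etc/ha.d/authkeys
 sudo chmod 0600 /etc/ha.d/authkeys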

 And the haresources file has only one short line:

 aragorn xenpeer

 Here, we have aragorn as the preferred primary for the xenpeer service
 (more on that below), but this isn't a crucial issue since we are not
 using auto failback. The xenpeer service mentioned here is a custom
 script I had to develop since the xendomains init script supplied with
 the Xen 3.2.1 distribution wasn't handling live migration correctly.
 (It was leaving both nodes primary with the VMs running on both at the
 same time!) I also tried dopd, but it was rebooting both nodes when I
 stopped Heartbeat on one of them. That may well have been my mistake
 somewhere, but, as any old sysadmin will tell you, I don't have much
 time to investigate further and knew a custom script would do the trick.

 Here's the /etc/ha.d/resource.d/xenpeer script:

 #!/bin/bash

 # Heartbeat resource script "xenpeer":
 #  stop  - live-migrate every running DomU to the peer node
 #  start - create any DomU from /etc/xen/auto that isn't answering pings

 DOMAIN="YOUR_DOMAIN.ORG"
 NODE1="aragorn"
 NODE2="legolas"
 PEER=$NODE2
 if [ `uname -n` = $NODE2 ] ; then PEER=$NODE1 ; fi

 case $1 in
  stop)
    echo -n "Migrating to $PEER : "
    # Skip the header and Domain-0 lines of xm list, migrate all DomUs.
    for i in `xm list | awk '(FNR > 2) {print $1}'` ;
      do
        echo -n "($i) . "
        xm migrate --live $i $PEER
      done
    echo ""
    ;;
  start)
    for i in /etc/xen/auto/* ;
      do
        DEAD=1
        # If the VM's hostname doesn't answer one ping, assume it isn't
        # running anywhere and start it on this node.
        ping -c 1 -nq `dig +short $(basename $i .sxp).$DOMAIN` >/dev/null 2>&1 \
          && DEAD=0
        if [ $DEAD -eq 1 ]; then xm create $i ; fi
      done
    ;;
  *)
    xm list
    ;;
 esac

 exit 0

 I doubt this is the best init script to do this, but it works for the
 simple scenarios and recommendations are always welcome. ;)
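
 Remember the script must be executable and present on both nodes. You
 can also give it a quick manual test; with any unknown argument it just
 lists the running domains:

 sudo chmod 755 /etc/ha.d/resource.d/xenpeer
 sudo /etc/ha.d/resource.d/xenpeer status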

 13. Start Heartbeat on Both Nodes

 Try to do this almost simultaneously on both nodes:

 sudo /etc/init.d/heartbeat start

 And to make sure Heartbeat starts at boot:

 sudo update-rc.d heartbeat defaults

 14. The Moment of Truth

 You can test the live migration failover with some simple steps. Log
 in via SSH to your Xen VM (vm1 in this case) and run an interactive
 application like top. Then, on the node that's currently primary and
 is running the VM, execute:

 sudo /etc/init.d/heartbeat stop

 The VM will migrate over to the other node, and you should still be
 connected to your session with top running as before. It's magic,
 and it's FREE LIBRE OPEN SOURCE magic! "Look Ma', No SAN, No ESX, No
 Clustered Filesystem!"
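
 To double-check, on the node that took over, vm1 should now appear in
 the domain list and /proc/drbd should show both of its resources in the
 Primary role there:

 sudo xm list
 cat /proc/drbd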

 Enjoy. It's all good. =)

****** End HowTo ******
-- 
------
José E. Colón Rodríguez
Academic Computing Coordinator
University of Puerto Rico at Cayey
E:  jose.colon at gae.cayey.upr.edu
W:  http://www.cayey.upr.edu
V:  (787)-738-2161 x. 2415 , 2532


