[DRBD-user] DRBD 8.4 LVM setup

Reyes, David (GE Energy Management) David.Reyes at ge.com
Tue Jul 9 19:58:25 CEST 2013

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Objective: to set up an inexpensive shared-storage environment for an Oracle Database active/passive failover configuration using DRBD.
DRBD version: 8.4.3
Operating system: Red Hat Enterprise Linux Server release 5.9
DRBD RPMs were compiled for kernel 2.6.18_348.6.1.el5-8.4.3-2.x86_64

Two servers: nfs01 and nfs02.
Each server has an LVM volume group laid out as follows:

  PV                 VG         Fmt  Attr PSize   PFree
  /dev/cciss/c0d1p1  drbd-main  lvm2 a--  136.70G     0
  /dev/cciss/c0d2p1  drbd-main  lvm2 a--  136.70G     0

  VG         #PV #LV #SN Attr   VSize   VFree
  drbd-main    2   1   0 wz--n- 273.39G     0

  LV      VG         Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  testlv  drbd-main  -wi-a- 273.39G
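
For reference, a layout like the above can be produced with something along these lines (a sketch; the device, VG, and LV names come from the listing above, the lvcreate extent argument is an assumption):

  # Initialize both Smart Array partitions as LVM physical volumes
  pvcreate /dev/cciss/c0d1p1 /dev/cciss/c0d2p1

  # Build one volume group spanning both PVs
  vgcreate drbd-main /dev/cciss/c0d1p1 /dev/cciss/c0d2p1

  # Allocate all free extents to a single LV; this LV is the DRBD backing device
  lvcreate -n testlv -l 100%FREE drbd-main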


Dedicated network interface for DRBD traffic (options bond1 mode=active-backup arp_interval=1000 arp_ip_target=10.1.1.2)
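
For completeness, on RHEL 5 that bonding setup would typically consist of a module option and interface files along these lines (a sketch for nfs01; the slave NIC name eth2 and the netmask are assumptions, only the bond1 options line comes from our config):

  /etc/modprobe.conf (bonding entries):
      alias bond1 bonding
      options bond1 mode=active-backup arp_interval=1000 arp_ip_target=10.1.1.2

  /etc/sysconfig/network-scripts/ifcfg-bond1:
      DEVICE=bond1
      IPADDR=10.1.1.1          # nfs02 would use 10.1.1.2 and arp_ip_target=10.1.1.1
      NETMASK=255.255.255.0    # assumed
      ONBOOT=yes
      BOOTPROTO=none

  /etc/sysconfig/network-scripts/ifcfg-eth2 (assumed slave NIC; repeat for the second slave):
      DEVICE=eth2
      MASTER=bond1
      SLAVE=yes
      ONBOOT=yes
      BOOTPROTO=none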


The following was configured on both nodes:

/etc/drbd.d/global_common.conf

global { usage-count no; }

common {
    syncer { rate 10M; }
}
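
(Side note: in DRBD 8.4 the syncer section is deprecated; drbdadm still accepts it, but the native 8.4 way to express the same limit would look roughly like this, same 10M value as above:)

  common {
      disk { resync-rate 10M; }
  }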



/etc/drbd.d/main.res

resource main {
    protocol C;

    net     { allow-two-primaries yes; }
    startup { wfc-timeout 0; degr-wfc-timeout 120; become-primary-on both; }
    disk    { on-io-error detach; }

    on nfs01 {
        device    /dev/drbd0;
        disk      /dev/drbd-main/testlv;
        meta-disk internal;
        address   10.1.1.1:7788;
    }

    on nfs02 {
        device    /dev/drbd0;
        disk      /dev/drbd-main/testlv;
        meta-disk internal;
        address   10.1.1.2:7788;
    }
}
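
As a sanity check, drbdadm can parse and re-emit the resource definition before metadata is created, for example (output omitted here):

  # Re-emit the parsed configuration for the resource
  drbdadm dump main

  # Dry-run: print the low-level commands drbdadm would execute
  drbdadm -d up main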


drbdadm create-md main
Writing meta data...
initializing activity log
NOT initializing bitmap
New drbd meta data block successfully created.
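
While the resource is still down, the freshly written metadata can also be inspected (a sketch; output not shown):

  # Show the on-disk DRBD metadata; only works while the device is not attached
  drbdadm dump-md main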


service drbd start
Starting DRBD resources: [
     create res: main
   prepare disk: main
    adjust disk: main
     adjust net: main
]
outdated-wfc-timeout has to be shorter than degr-wfc-timeout
outdated-wfc-timeout implicitly set to degr-wfc-timeout (120s)
..


service drbd status
drbd driver loaded OK; device status:
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root at nfs01, 2013-06-24 12:46:34
m:res   cs         ro                   ds                         p  mounted  fstype
0:main  Connected  Secondary/Secondary  Inconsistent/Inconsistent  C



Only on nfs01:
drbdadm primary --force main

cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root at nfs01, 2013-06-24 12:46:34
0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
    ns:9216 nr:0 dw:0 dr:9216 al:0 bm:0 lo:7 pe:0 ua:7 ap:0 ep:1 wo:f oos:286652844
        [>....................] sync'ed:  0.1% (279932/279940)M
        finish: 33:10:38 speed: 2,304 (2,304) K/sec


After a few seconds, /proc/drbd shows:

version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root at nfs01, 2013-06-24 12:46:34
0: cs:Connected ro:Primary/Secondary ds:Diskless/Inconsistent C r-----
    ns:27648 nr:0 dw:0 dr:28160 al:0 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0


/var/log/messages shows:

Jul  9 13:36:23 nfs01 kernel: block drbd0: read: error=-5 s=53248s
Jul  9 13:36:23 nfs01 kernel: block drbd0: Resync aborted.
Jul  9 13:36:23 nfs01 kernel: block drbd0: conn( SyncSource -> Connected ) disk( UpToDate -> Failed )
Jul  9 13:36:23 nfs01 kernel: block drbd0: Local IO failed in drbd_endio_read_sec_final. Detaching...
Jul  9 13:36:23 nfs01 kernel: block drbd0: helper command: /sbin/drbdadm pri-on-incon-degr minor-0
Jul  9 13:36:23 nfs01 kernel: block drbd0: Can not satisfy peer's read request, no local data.
Jul  9 13:36:23 nfs01 kernel: block drbd0: Can not satisfy peer's read request, no local data.
Jul  9 13:36:23 nfs01 kernel: block drbd0: helper command: /sbin/drbdadm pri-on-incon-degr minor-0 exit code 0 (0x0)
Jul  9 13:36:23 nfs01 kernel: block drbd0: drbd_rs_complete_io() called, but extent not found
Jul  9 13:36:23 nfs01 kernel: block drbd0: Sending NegRSDReply. sector 53248s.
Jul  9 13:36:28 nfs01 kernel: block drbd0: drbd_rs_complete_io() called, but extent not found
Jul  9 13:36:28 nfs01 last message repeated 3 times
Jul  9 13:36:28 nfs01 kernel: block drbd0: bitmap WRITE of 1 pages took 0 jiffies
Jul  9 13:36:28 nfs01 kernel: block drbd0: 273 GB (71659115 bits) marked out-of-sync by on disk bit-map.
Jul  9 13:36:28 nfs01 kernel: block drbd0: disk( Failed -> Diskless )
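
For what it's worth, the messages point at a local read error (error=-5 around sector 53248) on nfs01's backing device rather than a DRBD protocol problem, so a direct read of the backing LV should show whether the error comes from the LV or the underlying cciss array (a sketch; drbd0 has already detached from the LV at this point):

  # Read the entire backing LV and discard the data; any media/controller error will surface here
  dd if=/dev/drbd-main/testlv of=/dev/null bs=1M

  # Look for cciss / Smart Array errors in the kernel log
  dmesg | grep -i cciss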


We are trying to set up the environment above as a proof of concept, and hopefully to have customers start using DRBD as a shared-storage solution. If a different configuration or approach would be better, I am open to ideas. Any help would be greatly appreciated.

Regards,
DR