Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Because only a couple of our colleagues really know the ins and outs of DRBD, we are setting up a test environment to learn how DRBD works and how to configure it. We are using the latest version of the DRBD User's Guide (http://www.drbd.org/users-guide/) as a guideline for the configuration.

The configuration file for r0 looks like this:

    resource r0 {
      on venvmdrbd001 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.5.1:7789;
        meta-disk internal;
      }
      on venvmdrbd002 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.5.2:7789;
        meta-disk internal;
      }
    }

We are using sdb1 for this resource, and it is working fine:

    0: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----
       ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0

Because we use LVM in our production environment, we also tried to set up DRBD on top of LVM, which means the following configuration file should be okay:

    resource r1 {
      on venvmdrbd001 {
        device    /dev/drbd1;
        disk      /dev/r1vg/r1lv;
        address   192.168.5.1:7790;
        meta-disk internal;
      }
      on venvmdrbd002 {
        device    /dev/drbd1;
        disk      /dev/r1vg/r1lv;
        address   192.168.5.2:7790;
        meta-disk internal;
      }
    }

We compared this configuration file to the (working) configuration file in production, and we also compared the LVM configuration; both look okay. The problem we are having is:

    1: cs:Connected ro:Secondary/Secondary ds:Diskless/Diskless C r-----
       ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0

Both nodes are in the Diskless state. We are also seeing these messages in /var/log/messages for drbd1:

    Sep 20 09:02:38 venvmdrbd001 kernel: block drbd1: Barriers not supported on meta data device - disabling
    Sep 20 09:02:38 venvmdrbd001 kernel: block drbd1: drbd_md_sync_page_io(,16449528s,WRITE) failed!
    Sep 20 09:02:38 venvmdrbd001 kernel: block drbd1: meta data update failed!
    Sep 20 09:02:38 venvmdrbd001 kernel: block drbd1: disk( Inconsistent -> Failed )
    Sep 20 09:02:38 venvmdrbd001 kernel: block drbd1: Local IO failed in drbd_md_sync. Detaching...
    Sep 20 09:02:38 venvmdrbd001 kernel: block drbd1: disk( Failed -> Diskless )

The LVM layout:

    [root@venvmdrbd001 drbd.d]# lvs
      LV       VG         Attr   LSize  Origin Snap%  Move Log Copy%  Convert
      LogVol00 VolGroup00 -wi-ao 12.91G
      LogVol01 VolGroup00 -wi-ao  1.97G
      r1lv     r1vg       -wi-a-  7.84G
    [root@venvmdrbd001 drbd.d]# pvs
      PV         VG         Fmt  Attr PSize  PFree
      /dev/sda2  VolGroup00 lvm2 a-   14.88G    0
      /dev/sdb2  r1vg       lvm2 a-    7.84G    0

Differences between the systems you used in the course and our new test systems:

Course systems:
• Kernel: 2.6.18-194.8.1.el5
• drbd-udev-8.3.8.1-1
• drbd-xen-8.3.8.1-1
• drbd-km-2.6.18_194.8.1.el5-8.3.8.1-1
• drbd-pacemaker-8.3.8.1-1
• drbd-heartbeat-8.3.8.1-1
• drbd-utils-8.3.8.1-1
• drbd-8.3.8.1-1
• drbd-bash-completion-8.3.8.1-1

New test systems:
• Kernel: 2.6.18-238.19.1.el5
• drbd-bash-completion-8.4.0-1
• drbd-udev-8.4.0-1
• drbd-xen-8.4.0-1
• drbd-heartbeat-8.4.0-1
• drbd-pacemaker-8.4.0-1
• drbd-8.4.0-1
• drbd-utils-8.4.0-1
• drbd-km-2.6.18_238.19.1.el5-8.4.0-1

We have checked the configuration of the course systems and compared it to the test systems, and we couldn't find a difference (apart from the expected version differences). Our DRBD specialist couldn't work out what the problem with the test setup is. Logically, the problem should lie in the combination of LVM and DRBD, since the resource without LVM works fine. Are there known problems in this area with the versions we are using?

Could you help us with this issue? If you need more information, don't hesitate to ask.
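For completeness, here is roughly how the backing volume for r1 was created. This is a sketch of the usual LVM sequence (the device and volume names are taken from the pvs/lvs output above), not a transcript of our exact session:

    pvcreate /dev/sdb2                              # initialize the partition as a physical volume
    vgcreate r1vg /dev/sdb2                         # create the volume group on it
    lvcreate --name r1lv --extents 100%FREE r1vg    # one LV using all free space (7.84G)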
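The resources themselves were brought up with what we understand to be the standard 8.4 sequence (again a sketch, assuming the default drbdadm tooling):

    # on both nodes:
    drbdadm create-md r1    # write the internal meta data to /dev/r1vg/r1lv
    drbdadm up r1           # attach the disk and connect to the peer
    # on one node only, to start the initial sync
    # (we never get this far on r1, since both nodes go Diskless):
    drbdadm primary --force r1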
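Based on the "Barriers not supported on meta data device" message, one thing we are considering trying is disabling meta-data flushes for r1. We are not sure this is the right option name for 8.4.0, or that it is the right fix at all, so please treat this as a sketch:

    resource r1 {
      disk {
        md-flushes no;   # our assumption: the 8.4 spelling of 8.3's "no-md-flushes"
      }
      # the "on venvmdrbd001 { ... }" and "on venvmdrbd002 { ... }"
      # sections stay exactly as shown above
    }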
Thank you in advance.

Kind regards,
Mario Verhaeg