Fellow DRBD users,

I have just installed two servers (configured identically, the only differences being the hostnames and IP addresses) running RHEL 4, and I want to test Heartbeat with DRBD on them. The Heartbeat installation went well; the only problem is the DRBD installation.

I have already created a VG and LVs on both servers and have no unallocated disk space left; the only space I can still get is from VolGroup00. I created an LV called disk1 on both servers, which I want DRBD to sync between them. Is this possible?

P.S. I have not yet created a filesystem on the LV that DRBD is to handle, nor mounted it.

Filesystems (identical on both servers):

  /dev/mapper/VolGroup00-LogVol00  7.7G  2.9G  4.5G  39%  /
  /dev/sda1                         99M   13M   81M  14%  /boot
  none                             501M     0  501M   0%  /dev/shm
  /dev/mapper/VolGroup00-LogVol01  5.8G   45M  5.5G   1%  /home
  /dev/mapper/VolGroup00-LogVol02  3.9G   54M  3.6G   2%  /tmp
  /dev/mapper/VolGroup00-LogVol05  3.9G   40M  3.7G   2%  /u01
  /dev/mapper/VolGroup00-LogVol06  7.7G   50M  7.3G   1%  /u02
  /dev/mapper/VolGroup00-LogVol03  3.9G  118M  3.6G   4%  /var

vgdisplay:

  --- Volume group ---
  VG Name               VolGroup00
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  12
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                9
  Open LV               7
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               37.16 GB
  PE Size               32.00 MB
  Total PE              1189
  Alloc PE / Size       1166 / 36.44 GB
  Free PE / Size        23 / 736.00 MB
  VG UUID               UoAEEi-v9LE-aBEE-9JnC-hoRr-Rd39-EGBTtC

Logical volume I created:

  LV Name                /dev/VolGroup00/disk1
  VG Name                VolGroup00
  LV UUID                u5fWpd-ZZY1-yTO4-l4Lt-0xCb-ZWgt-WSOyYT
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                1.00 GB
  Current LE             32
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:8

drbd.conf:

resource azv {
  protocol C;
  on azv-test01 {
    device    /dev/drbd0;
    disk      /dev/VolGroup00/disk1;
    address   192.168.50.195:7788;
    meta-disk internal;
  }
  on azv-test02 {
    device    /dev/drbd0;
    disk      /dev/VolGroup00/disk1;
    address   192.168.50.196:7788;
    meta-disk internal;
  }
  disk {
    on-io-error detach;  # What to do when the lower-level device errors.
  }
  net {
    max-buffers 2048;
    ko-count    4;
    #on-disconnect reconnect;  # Peer disconnected, try to reconnect.
  }
  syncer {
    rate 10M;
    #group 1;
    al-extents 257;
  }
  startup {
    wfc-timeout      0;
    degr-wfc-timeout 120;
  }
}

*Tha Jamaican Nebie*
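For what it's worth, using an LVM logical volume as DRBD's backing device is a common setup, and with "meta-disk internal" DRBD stores its metadata at the end of the LV, so the usable space will be slightly less than the 1.00 GB LV size. A minimal sketch of the usual bring-up sequence is below; this is not from the post itself, the resource name "azv" and the mkfs/mount step are taken or assumed from the configuration above, and the exact primary-forcing syntax differs between DRBD 0.7 and 8.x (the 8.x form is shown):

```shell
# On BOTH nodes: write DRBD's internal metadata onto the end of the LV.
# This assumes /dev/VolGroup00/disk1 holds no data you need to keep.
drbdadm create-md azv

# On BOTH nodes: attach the backing device and connect to the peer.
drbdadm up azv

# On ONE node only: force it to primary so the initial full sync starts.
drbdadm -- --overwrite-data-of-peer primary azv

# Only now create the filesystem -- on the DRBD device, never on the LV
# directly -- and mount it on the primary node.
mkfs.ext3 /dev/drbd0
mount /dev/drbd0 /mnt/data   # /mnt/data is a hypothetical mount point
```

The key point is that once the resource is up, all reads and writes must go through /dev/drbd0 rather than /dev/VolGroup00/disk1, otherwise the two nodes will silently diverge.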