Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
I've just added a second hard disk to my Gentoo cluster nodes, which run a DRBD 0.7.25 backing store. I extended my LVM (2.x) volume group and then extended my logical volume by 230GB:

# pvcreate /dev/hdc3
# vgextend bigvg /dev/hdc3
# lvextend -L+230g /dev/bigvg/store

Then I mounted up my newly extended LV and saw that its space was still showing as 220GB, as expected. I performed an online resize (as required for JFS), the space increased to 450GB as expected, and all my content was present. I then unmounted it ready for DRBD:

# mount -t jfs /dev/bigvg/store /store
# df -h /store
# mount -o remount,resize /store
# df -h /store
# umount /store

DRBD started up fine; however, when I tried to mount the drbd device I got:

# /etc/init.d/drbd start
# cat /proc/drbd
[snip]
# drbdadm primary samba
# mount -t jfs /dev/drbd1 /store
mount: wrong fs type, bad option, bad superblock on /dev/drbd1,
       missing codepage or other error
       In some cases useful info is found in syslog - try
       dmesg | tail or so

dmesg shows the following message:

attempt to access beyond end of device
drbd1: rw=16, want=943652880, limit=461373440

which might suggest that the drbd device is still seeing the old size. So I tried resizing (which, incidentally, I didn't need to do in my identical VMware test environment):

# drbdadm resize samba

But I got the same result. I then tried fscking the filesystem, and it suggested it was totally shafted:

# jfs_fsck -n /dev/drbd1
jfs_fsck version 1.1.8, 03-May-2005
processing started: 9/23/2007 8.29.9
The current device is:  /dev/drbd1
Superblock is corrupt and cannot be repaired
since both primary and secondary copies are corrupt.
CANNOT CONTINUE.

But the underlying LV is definitely still fine: if I make the drbd resource secondary again, I can fsck and mount /dev/bigvg/store OK, and all the content is still there.

So have I done something wrong in my process, and/or how can I fix the drbd resource so that it sees the new, correct LV size? I should still be able to recover by reformatting the filesystem and copying the data over from the other node, but I'd rather not have to, and I'm likely to have to do this again in the future (and for the other LVs in the cluster too), so I would like to know how to get it right next time.

Tom
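
P.S. A quick back-of-the-envelope check on those dmesg numbers (my own arithmetic, assuming want/limit are counted in 512-byte sectors, which I believe is what the kernel reports there):

# echo $(( 461373440 * 512 / 1024**3 ))   # limit, in GiB (assuming 512-byte sectors)
220
# echo $(( 943652880 * 512 / 1024**3 ))   # want, in GiB (assuming 512-byte sectors)
449

i.e. the drbd device still appears to be capped at the old 220GB while JFS is trying to read out to roughly the new 450GB, which seems to back up the "still seeing the old size" theory.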