I've got a drbd partition that recently started logging errors after
trouble-free operation for about a week:
Sep 10 22:57:27 ha2 kernel: attempt to access beyond end of device
Sep 10 22:57:27 ha2 kernel: drbd0: rw=0, want=6365608736, limit=1381325576
Sep 10 22:57:27 ha2 kernel: attempt to access beyond end of device
Sep 10 22:57:27 ha2 kernel: drbd0: rw=0, want=14950835016, limit=1381325576
Sep 10 22:57:27 ha2 kernel: attempt to access beyond end of device
Sep 10 22:57:27 ha2 kernel: drbd0: rw=0, want=13983521568, limit=1381325576
Sep 10 22:57:27 ha2 kernel: attempt to access beyond end of device
Sep 10 22:57:27 ha2 kernel: drbd0: rw=0, want=6365080488, limit=1381325576
Sep 10 22:57:27 ha2 kernel: attempt to access beyond end of device
The setup was pretty straightforward and I did not specify any size
when creating the drbd device on LVM or the ext3 filesystem. The
physical disk is 750GB:
# fdisk -l /dev/sda
Disk /dev/sda: 749.9 GB, 749988741120 bytes
255 heads, 63 sectors/track, 91180 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          25      200781   83  Linux
/dev/sda2              26       91180   732202537+  8e  Linux LVM
# lvscan
ACTIVE '/dev/VolGroup00/LogVol00' [39.06 GB] inherit
ACTIVE '/dev/VolGroup00/LogVol02' [658.72 GB] inherit
ACTIVE '/dev/VolGroup00/LogVol01' [512.00 MB] inherit
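Since lvscan and fdisk report in different units, I guess the
unambiguous comparison would be in bytes or sectors, something along
these lines (untested on these boxes; --getsize64 should print bytes
and --getsz 512-byte sectors, if I remember the blockdev flags right):
# blockdev --getsize64 /dev/mapper/VolGroup00-LogVol02
# blockdev --getsize64 /dev/drbd0
# blockdev --getsz /dev/drbd0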
In a recent thread, there was a discussion about configuring size in
drbd.conf. From the setup instructions, I don't recall this being a
requirement, but nonetheless I attempted to configure the size using
the information in that thread:
# df /home
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/drbd0           679824572 187169116 458122320  30% /home
# fdisk -s /dev/mapper/VolGroup00-LogVol02
690716672
# fdisk -s /dev/drbd0
690662788
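If my arithmetic is right, lvscan's 658.72 GB and fdisk's 690716672
are the same size in different units, and the kernel's limit value
(which I believe is in 512-byte sectors) matches the drbd0 size
exactly:
# echo "scale=5; 690716672 / 1024 / 1024" | bc
658.71875
# echo $((1381325576 / 2))
690662788
That would also make drbd0 roughly 50 MB smaller than the LV, which I
assume is the internal meta-data at the end of the device. What I
can't explain are the want values, which, if they are sectors too,
work out to several terabytes, far beyond even the LV.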
I added the following to drbd.conf:
disk {
  size 690G;
}
When I attempted to restart drbd, I received the following error:
# service drbd restart
Restarting all DRBD resources/dev/drbd0: Failure: (111) Low.dev. smaller than requested DRBD-dev. size.
Command '/sbin/drbdsetup /dev/drbd0 disk /dev/mapper/VolGroup00-LogVol02 /dev/mapper/VolGroup00-LogVol02 internal --set-defaults --create-device --size=690G' terminated with exit code 10
.
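The rejection at least seems self-consistent: assuming drbdsetup
treats the G suffix as GiB (my assumption, I haven't checked the
docs), 690G is larger than the whole LV, never mind the LV minus
internal meta-data:
# echo $((690 * 1024 * 1024))
723517440
i.e. 723517440 1K blocks requested versus the 690716672 the LV
actually has.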
The major alarm bell for me is that lvscan says the size is 658.72
GB, but fdisk reports the logical volume as 690716672 and the drbd
device holding the ext3 filesystem as 690662788 1K blocks. Did ext3
really create a filesystem beyond the limits of the drbd device, or
is this just a GB/GiB thing? If the filesystem was not written
correctly, how did this happen when no size was ever specified?
Going from memory/history, I roughly did the following:
drbd.conf snippet:
resource data {
  on ha1 {
    device    /dev/drbd0;
    disk      /dev/mapper/VolGroup00-LogVol02;
    address   10.0.0.1:7789;
    meta-disk internal;
  }
  on ha2 {
    device    /dev/drbd0;
    disk      /dev/mapper/VolGroup00-LogVol02;
    address   10.0.0.2:7789;
    meta-disk internal;
  }
}
Commands:
# drbdadm create-md data
# mkfs.ext3 /dev/drbd0
# mount /dev/drbd0 /home
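To check the first question, I suppose I could compare what ext3
thinks it has against what drbd0 provides, something like this (only
a sketch, I haven't run it yet):
# dumpe2fs -h /dev/drbd0 | grep -Ei 'block (count|size)'
# blockdev --getsize64 /dev/drbd0
If "Block count" times "Block size" comes out larger than the device
size in bytes, then the filesystem really does extend past the end of
drbd0.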
My second question is: if it's broken, how do I fix it? It seems like
I would need to reconfigure one server's partition, resync the data,
and then repeat on the other server. But how can I ensure the
partition is sized correctly?
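For what it's worth, the rough sequence I have in mind is something
like the following (just a sketch of my own plan, not something I've
tested): on ha1, recreate the meta-data and bring the resource up so
it resyncs from ha2, then promote ha1 and repeat the same steps on
ha2:
# drbdadm create-md data
# drbdadm up data
# cat /proc/drbd
(and wait for the resync to finish before touching the other node)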
Side note, ha1 is currently offline and ha2 is acting as the primary.
Chris