[DRBD-user] online resizing

Jason Joines support at bus.okstate.edu
Mon Oct 9 00:19:02 CEST 2006

    I'm using SuSE 9.2 with kernel 2.6.10 and DRBD "version: 0.7.10
(api:77/proto:74)" on a two-node setup serving Samba, and I'm about to
run out of disk space on my second DRBD device, drbd1.  Samba and DRBD
are controlled via Heartbeat 1.2.3.  XFS is the filesystem.

    The short question is: how do I use the resize feature of
drbdadm/drbdsetup?  What I've done so far follows.

    The disks in each node are exactly the same, each node had one
remaining drive slot, and I had two extra drives identical to those
already in use.

    I took nodeB down, added the drive, and configured it and the
existing drive as md0 with raid0.  I verified that md0 was twice the
size of the individual disks, then modified drbd.conf on each node to
use /dev/md0 as the disk behind the DRBD device instead of /dev/sdc.
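
    For reference, the md creation and config change looked roughly
like this (the second drive's device name and the drbd.conf fragment
are illustrative, from memory):

# Stripe the existing data disk and the new disk together (raid0;
# this destroys the old contents, hence the full resync below)
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdc /dev/sdd
cat /proc/mdstat    # verify md0 is twice the size of one disk

# drbd.conf on both nodes, in resource drbd1:
#   on nodeB {
#     device   /dev/drbd1;
#     disk     /dev/md0;    # was /dev/sdc
#     ...
#   }
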
    Then I brought DRBD up on nodeB in StandAlone mode, invalidated the
DRBD device on nodeB, connected, and allowed the sync from nodeA to
nodeB to run to completion.  Next I failed Samba over to nodeB via
Heartbeat's hb_standby and verified everything was working: Samba and
shares accessible, data on the DRBD device intact, etc.
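
    The DRBD steps on nodeB were roughly these (0.7 syntax, from
memory):

drbdadm attach drbd1       # bring drbd1 up StandAlone, no network yet
drbdadm invalidate drbd1   # mark the local (now empty) md0 out of date
drbdadm connect drbd1      # connect to nodeA; full sync nodeA -> nodeB
cat /proc/drbd             # watch until the sync completes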
    At this point, df output still looked like it did with just one
disk behind the DRBD device, as I expected:
/dev/drbd1  xfs  137G  134G  3.3G  98%  /local/groups

    I then repeated the process on nodeA: took it down, added the
drive, configured md0 out of sdc and sda, modified drbd.conf to use
md0, invalidated locally on nodeA, connected the DRBD device, and
allowed the sync to run from nodeB back to nodeA.  I then failed
everything back over to nodeA and again made sure all was working
correctly.  It was, and the disk usage/size was still as above.
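
    The failovers themselves were just Heartbeat's standby script
(path may vary by distribution):

/usr/lib/heartbeat/hb_standby    # hand all resources over to the peer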

    Then I issued "drbdadm resize drbd1" on nodeA and on nodeB.  I
didn't see any difference in disk size/usage and thought it might
require an unmount of the filesystem, so I failed everything back over
to nodeB.  There was still no difference in disk size/usage, so I
repeated the resize, still with no luck.
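
    For completeness, here is exactly what I ran, plus a guess at what
might still be missing.  Does the filesystem have to be grown
separately?  xfs_growfs is supposed to work on a mounted filesystem,
so no unmount should be needed if that's the answer:

drbdadm resize drbd1       # ran on both nodes; df output unchanged
# guess: maybe this is also needed on the Primary once the device grows
xfs_growfs /local/groups   # grow XFS to fill the device, while mounted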

    Issuing the "drbdadm resize drbd1" command results in these log
messages on the local node when Secondary:
Oct  8 17:03:19 nodeA kernel: drbd1: I am(S):
1:00000003:00000002:000000aa:00000009:01
Oct  8 17:03:19 nodeA kernel: drbd1: Peer(P):
1:00000003:00000002:000000aa:00000009:11
Oct  8 17:03:19 nodeA kernel: drbd1: drbd1_receiver [7873]: cstate
Connected --> WFBitMapT
Oct  8 17:03:20 nodeA kernel: drbd1: drbd1_receiver [7873]: cstate
WFBitMapT --> SyncTarget
Oct  8 17:03:20 nodeA kernel: drbd1: Resync started as SyncTarget (need
to sync 0 KB [0 bits set]).
Oct  8 17:03:20 nodeA kernel: drbd1: Resync done (total 1 sec; paused 0
sec; 0 K/sec)
Oct  8 17:03:20 nodeA kernel: drbd1: drbd1_receiver [7873]: cstate
SyncTarget --> Connected

    and these on the local node when Primary:
Oct  8 17:06:44 nodeB kernel: drbd1: I am(P):
1:00000003:00000002:000000aa:00000009:11
Oct  8 17:06:44 nodeB kernel: drbd1: Peer(S):
1:00000003:00000002:000000aa:00000009:01
Oct  8 17:06:44 nodeB kernel: drbd1: drbd1_receiver [16338]: cstate
Connected --> WFBitMapS
Oct  8 17:06:44 nodeB kernel: drbd1: drbd1_receiver [16338]: cstate
WFBitMapS --> SyncSource
Oct  8 17:06:44 nodeB kernel: drbd1: Resync started as SyncSource (need
to sync 0 KB [0 bits set]).
Oct  8 17:06:44 nodeB kernel: drbd1: Resync done (total 1 sec; paused 0
sec; 0 K/sec)
Oct  8 17:06:44 nodeB kernel: drbd1: drbd1_receiver [16338]: cstate
SyncSource --> Connected

    No corresponding messages are generated on the other node.

    Now I have a 268 GB md device behind drbd1 on each node but still
only 134 GB available for use.  Any ideas?


Thanks,

Jason Joines
================================


