Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Hi!

Do you mean the offline growing or the online growing? About the online
growing I don't know anything :-) I haven't tested it at all.

Stefan

cosmih wrote:
> Hi Stefan,
>
> Would you be so kind as to describe the growing process in a little more
> detail (the page at the URL provided is vague on this)? Are the steps
> something like the ones below?
> 1) put the DRBD resources into secondary mode (on each server)
> 2) stop the DRBD service on each server
> 3) resize the disk partition used for the DRBD devices on each server
>    (the disk partition used as meta-disk is not modified)
> 4) start the DRBD service on each server (at this step the two DRBD
>    devices will be in secondary mode)
> 5) put one DRBD device into primary mode
> 6) resize the filesystem on top of the primary DRBD device
>
> Thank you,
>
> --
> cosmih
>
> On Tue, Dec 1, 2009 at 2:56 PM, Stefan Priebe - allied internet ag
> <s.priebe at allied-internet.ag> wrote:
>> Hi!
>>
>> I've done an offline resize, because we don't use LVM and without LVM an
>> online resize is not possible.
>>
>> So I can only tell you that the offline one
>> (http://www.drbd.org/users-guide/s-resizing.html) is working fine :-)
>>
>> Stefan
>>
>> cosmih wrote:
>>> Hi Stefan,
>>>
>>> Did you succeed with your growth attempt on XFS on top of DRBD?
>>> I am curious about this because I also need to grow my DRBD device.
>>> Here is my setup:
>>> 1) Heartbeat standby/active over a DRBD secondary/primary setup
>>> 2) DRBD 8.0.16
>>> 3) HW RAID 10 --> sda6, 350GB disk partition; sda7, 1GB disk partition
>>>    --> drbd0, DRBD device on sda6; meta-disk on sda7 --> LVM PV --> one
>>>    LVM VG --> two LVM LVs --> ext3 (meaning that the DRBD device uses a
>>>    disk partition as its storage, there is LVM on top of DRBD, and
>>>    there is ext3 on top of LVM)
>>> What do I need?
>>> Basically, I need more space on the DRBD device, growing it from 350GB
>>> to 450GB, and I have this free space on the HW RAID 10 volume.
>>> Because the DRBD setup is used by applications that need very high
>>> uptime, I am interested in a solution for this growth that uses the
>>> secondary/primary feature of DRBD and the standby/active feature of
>>> Heartbeat, or at least needs very little downtime.
>>>
>>> I would appreciate any advice.
>>>
>>> Thank you,
>>> --
>>> cosmih
>>>
>>>> Hi!
>>>>
>>>> Yes, of course, HW RAID.
>>>>
>>>> I'll do a test today - I already prepared test equipment yesterday.
>>>>
>>>> Stefan
>>>>
>>>> Stefan Seifert wrote:
>>>>> On Tuesday 24 November 2009 11:03:11 you wrote:
>>>>>>> No, you haven't. Like newer versions of fdisk, you can use
>>>>>>> partprobe to tell the kernel to re-read partition tables.
>>>>>>>
>>>>>>> Regards,
>>>>>>> Stefan
>>>>>> Thanks Stefan for your answer. I know partprobe - but does it work
>>>>>> on a mounted partition?
>>>>> For all I know it should work with a mounted partition as well. If
>>>>> the partition were not mounted or otherwise in use, you wouldn't
>>>>> need partprobe.
>>>>>
>>>>> And I assume "mounted" in this context means "used as a DRBD storage
>>>>> device". I also assume that by RAID you mean some hardware RAID,
>>>>> because partitioning an MD RAID wouldn't make much sense.
>>>>>
>>>>> Like with all these things, it's a very good idea to first test it
>>>>> on a test system. Ideally identical machines, but if those are not
>>>>> available, at least some VM (we use qemu for that, and though it's
>>>>> pretty slow, it's enough for such tests).
>>>>> Life gets so much more relaxed if you don't have to experiment
>>>>> around with your production machines :)
>>>>>
>>>>> Regards,
>>>>> Stefan
>>> _______________________________________________
>>> drbd-user mailing list
>>> drbd-user at lists.linbit.com
>>> http://lists.linbit.com/mailman/listinfo/drbd-user
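The numbered steps quoted at the top of the thread correspond to the offline
growing procedure in the DRBD user's guide linked above. A rough sketch of
that sequence is below, assuming a resource named r0 backed by /dev/sda6 with
external metadata on /dev/sda7 and a filesystem sitting directly on the DRBD
device; none of these names is confirmed by the thread, and the user's guide
remains the authoritative reference (internal metadata needs extra care):

    # on BOTH nodes: demote, stop DRBD and grow the backing partition
    drbdadm secondary r0
    /etc/init.d/drbd stop
    # grow /dev/sda6 with fdisk or parted (re-create it with the same start
    # and a larger end); the external meta-disk /dev/sda7 stays untouched
    partprobe /dev/sda        # ask the kernel to re-read the partition table
    /etc/init.d/drbd start

    # on ONE node only: promote and grow the filesystem
    drbdadm primary r0
    resize2fs /dev/drbd0      # ext3/ext4; for XFS, mount it and run xfs_growfs

With external metadata, DRBD should pick up the new backing-device size on the
next connect, which is why no explicit "drbdadm resize" appears in this
offline sketch; with internal metadata the guide describes how the metadata
has to be moved first.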
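cosmih's stack has LVM between the DRBD device and the ext3 filesystems, so
once /dev/drbd0 itself has grown (by whichever method), the new space still
has to be pushed up through LVM. A minimal sketch of that last stage, using
hypothetical volume group and logical volume names (vg0/data) and the 100GB
figure from the thread:

    pvresize /dev/drbd0               # let the PV see the grown DRBD device
    lvextend -L +100G /dev/vg0/data   # grow one of the logical volumes
    resize2fs /dev/vg0/data           # grow ext3; can be done online, mounted

For the low-downtime variant cosmih asks about, the online growing section of
the same user's guide applies: with both backing devices already grown and the
resource connected, "drbdadm resize r0" on one node re-examines the backing
device size during operation; whether the underlying partition can be safely
grown while in use is exactly the partprobe question discussed in the quoted
messages.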