It was a bit hard for me to follow what you are trying to do here,
but here is my shot at answering:
"I created an LV called disk1 on both servers which I wanted DRBD to
sync between them. Is this possible?"
-- Yes, it is possible; I do it myself. I don't use VolGroup00 (I
create a separate VG), but it should still be OK.
I don't personally use internal metadata for the disks, so I can't
speak to that.
I create another LV alongside the data LV. In your case (the way I
do it) I would create:

  LV Name    /dev/VolGroup00/disk1            (yours is 1 GB)
  LV Name    /dev/VolGroup00/disk1_metadata   (usually around 3 MB
             per 50 GB of data, but you can just make it 3 MB for
             yours too)
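The metadata LV could be created with something like this (just a sketch; note that your vgdisplay shows a 32 MB PE size, so the smallest LV you can actually allocate is one 32 MB extent, which is plenty):

```shell
# data LV (you already have this one, 1 GB)
lvcreate -L 1G -n disk1 VolGroup00

# metadata LV for DRBD; 32 MB is one extent in this VG and more
# than enough for a 1 GB data device
lvcreate -L 32M -n disk1_metadata VolGroup00
```

Run the same on both servers so the layout stays identical.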
Change the configuration from:

  meta-disk internal;

to:

  flexible-meta-disk /dev/VolGroup00/disk1_metadata;
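In your azv resource file the per-host stanzas would then look something like this (a sketch based on your posted config, with the metadata LV name assumed from the naming above):

```
on azv-test01 {
    device   /dev/drbd0;
    disk     /dev/VolGroup00/disk1;
    address  192.168.50.195:7788;
    flexible-meta-disk /dev/VolGroup00/disk1_metadata;
}
on azv-test02 {
    device   /dev/drbd0;
    disk     /dev/VolGroup00/disk1;
    address  192.168.50.196:7788;
    flexible-meta-disk /dev/VolGroup00/disk1_metadata;
}
```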
Then run (on both servers):

  drbdadm create-md <resource_name>    # your resource name looks to be "azv"
Then stop and start DRBD.
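On RHEL 4 that would be something like this (on both nodes):

```shell
# restart DRBD so it re-reads the configuration and attaches
# the new external metadata device
service drbd stop
service drbd start
```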
At this point you are in the "dual Secondary" state:

  drbdadm state all    # both nodes will show as "Secondary"
Then, on whichever server you want to be the source of the sync:

  drbdsetup /dev/drbd0 primary --overwrite-data-of-peer    # this does a
  full sync from here to your other server
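If you prefer drbdadm over drbdsetup, the equivalent (assuming a DRBD 8.x install) is:

```shell
# promote this node to Primary, overwriting the peer's data
drbdadm -- --overwrite-data-of-peer primary azv
```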
You can check the sync status with:

  cat /proc/drbd
Then you need to create the FS. Do this on one server, on the
/dev/drbd0 device (not the backing LV); it will propagate to the other.
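For example, on the node you just made Primary (ext3 here is only an assumption, use whatever filesystem you like):

```shell
# create the filesystem on the DRBD device, NOT on the backing LV,
# so the writes are replicated to the peer
mkfs.ext3 /dev/drbd0

# mount it on the Primary only
mkdir -p /mnt/drbd0
mount /dev/drbd0 /mnt/drbd0
```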
Hope this helps answer the questions you have.
-JPH
Earnie Panneflek wrote:
> why won't anybody even answer my questions?
> is this a 'friends-only' kind of mailing list?
>
> ---------- Forwarded message ----------
> From: *Earnie Panneflek* <epanneflek at gmail.com>
> Date: Fri, Feb 29, 2008 at 1:49 PM
> Subject: DRBD and LVM matters..
> To: drbd-user at lists.linbit.com
>
>
> Fellow drbd's,
>
> I have just installed two servers (both configured identically, the
> only difference being the hostnames and IP addresses) running RHEL 4
> and wanted to test Heartbeat with DRBD on them.
>
> The Heartbeat installation went well; the only problem is the DRBD
> installation. I have already created a VG and LVs on both servers and
> don't have space left on disk; the only space I can get is from my
> VolGroup00. I created an LV called disk1 on both servers which I
> wanted DRBD to sync between them. Is this possible??
>
> p.s. I didn't create a filesystem on the LV I want DRBD to handle yet,
> and didn't mount it yet.
>
> filesystems (both servers the same):
>
> /dev/mapper/VolGroup00-LogVol00
> 7.7G 2.9G 4.5G 39% /
> /dev/sda1 99M 13M 81M 14% /boot
> none 501M 0 501M 0% /dev/shm
> /dev/mapper/VolGroup00-LogVol01
> 5.8G 45M 5.5G 1% /home
> /dev/mapper/VolGroup00-LogVol02
> 3.9G 54M 3.6G 2% /tmp
> /dev/mapper/VolGroup00-LogVol05
> 3.9G 40M 3.7G 2% /u01
> /dev/mapper/VolGroup00-LogVol06
> 7.7G 50M 7.3G 1% /u02
> /dev/mapper/VolGroup00-LogVol03
> 3.9G 118M 3.6G 4% /var
> vgdisplay:
> --- Volume group ---
> VG Name VolGroup00
> System ID
> Format lvm2
> Metadata Areas 1
> Metadata Sequence No 12
> VG Access read/write
> VG Status resizable
> MAX LV 0
> Cur LV 9
> Open LV 7
> Max PV 0
> Cur PV 1
> Act PV 1
> VG Size 37.16 GB
> PE Size 32.00 MB
> Total PE 1189
> Alloc PE / Size 1166 / 36.44 GB
> Free PE / Size 23 / 736.00 MB
> VG UUID UoAEEi-v9LE-aBEE-9JnC-hoRr-Rd39-EGBTtC
>
> --- Logical volume I created:
>
> LV Name /dev/VolGroup00/disk1
> VG Name VolGroup00
> LV UUID u5fWpd-ZZY1-yTO4-l4Lt-0xCb-ZWgt-WSOyYT
> LV Write Access read/write
> LV Status available
> # open 0
> LV Size 1.00 GB
> Current LE 32
> Segments 1
> Allocation inherit
> Read ahead sectors 0
> Block device 253:8
>
> drbd conf:
>
> resource azv {
> protocol C;
> on azv-test01 {
> device /dev/drbd0;
> disk /dev/VolGroup00/disk1;
> address 192.168.50.195:7788;
> meta-disk internal;
> }
> on azv-test02 {
> device /dev/drbd0;
> disk /dev/VolGroup00/disk1;
> address 192.168.50.196:7788;
> meta-disk internal;
> }
> disk {
> on-io-error detach; # what to do when the lower-level device errors
> }
> net {
> max-buffers 2048;
> ko-count 4;
> #on-disconnect reconnect; # Peer disconnected, try to reconnect.
> }
> syncer {
> rate 10M;
> #group 1;
> al-extents 257;
> }
> startup {
> wfc-timeout 0;
> degr-wfc-timeout 120;
> }
> }
>
> It won't work, could you please help me out?
>
> *Tha Jamaican Nebie*
>
> ------------------------------------------------------------------------
>
> _______________________________________________
> drbd-user mailing list
> drbd-user at lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user
>