I read some threads about the possibility of putting an already
created file system on drbd, using internal or external metadata.
I have another question that is a more general one:
I have an existing PV and a VG composed of it. This PV is part of
a standalone server.
In this VG there are only some LVs that are configured as the disks of
some Qemu/KVM guests (the guests' disks are the LVs themselves; no file
system is created on the host side, only at the VM level).
The host is Fedora 11 x86_64, where I compiled drbd 8.3.2, and I have
the clvmd software, but the VG is not clustered at the moment.
Now I would like to create a Qemu/KVM cluster based on this VG, using
drbd (with dual primary) and without destroying what is already set up.
Is this possible? Are the steps below correct?
For simplicity, imagine the PV is now 50 GB + 8 MB and I want to use the
last 8 MB for creating external metadata (that should suffice for a drbd
device of at least 200 GB, so the size is OK).
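For what it's worth, here is my rough sizing check, assuming I read the
external metadata size formula in the DRBD user's guide correctly
(sizes in 512-byte sectors, ms = ceil(cs / 2^18) * 8 + 72):

   # 200 GB device expressed in sectors: 200 * 1024 * 2048
   echo $(( (200 * 1024 * 2048 / 262144) * 8 + 72 ))   # -> 12872 sectors, about 6.3 MiB

so 8 MB of external metadata should indeed be enough.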
The second host at the moment does not exist.
- start from single user mode so that all the services accessing the
  PV/VG are stopped
- shrink the PV by 8 MB so that I can use this space for creating
  external drbd metadata
1) pvresize --setphysicalvolumesize 50G /dev/cciss/c0d0p5
   QUESTION: no action at all is needed for the VG, correct? Will it
   automatically detect 2 fewer physical extents globally available
   (4 MB each)?
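   I suppose I could verify the result with something like (VG being my
   volume group name, just as an example):

      pvdisplay /dev/cciss/c0d0p5   # PV Size and PE count after the resize
      vgdisplay VG                  # Total PE should show 2 fewer 4 MB extents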
2) fdisk /dev/cciss/c0d0
   delete and recreate the partition, but 8 MB smaller (in my
   case it is the last one, so it is simple to manage)
   QUESTION: how do I know at which cylinder to set the end of the
   partition? Do I need to round up cylinders based on 4 MB x number of
   extents? Any smarter way? (see the sketch after this step)
   create the new 8 MB partition
   save and exit
   partprobe /dev/cciss/c0d0 so that the new partition is created
   and available for use right away
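   Maybe parted could avoid the cylinder math altogether, assuming it is
   installed; a quick sketch:

      parted /dev/cciss/c0d0 unit MiB print   # prints start/end of each partition in MiB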
3) set up drbd.conf with these parts in the file
      resource r0 {
        startup { become-primary-on both; }
        net     { allow-two-primaries; }
        device    /dev/drbd0;
        disk      /dev/cciss/c0d0p5;
        meta-disk /dev/cciss/c0d0p6;
        etc. (as described in the docs)
      }
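   If it helps, I think the resource definition can be sanity-checked
   before use with something like:

      drbdadm dump r0   # re-prints the parsed configuration for r0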
4) set up lvm.conf so that only drbd devices are accepted by the filter,
set locking_type=3 for clvmd usage, and then do a vgscan to regenerate the cache
filter = [ "a|drbd.*|", "r|.*|" ]
locking_type = 3
5) set up the VG as clustered
vgchange -cy /dev/VG
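   As a check (just a guess on my part), the clustered flag should then
   be visible in the VG attributes:

      vgs -o vg_name,vg_attr VG   # 6th attribute character should be 'c' (clustered)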
6) drbdadm create-md r0
   QUESTION: do I need a "vgchange -an" on the VG before this command?
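   If a deactivation is indeed needed first, I imagine it would look
   something like this (with the guests shut down):

      vgchange -an VG        # deactivate all LVs of the VG
      drbdadm create-md r0   # then create the external metadata on c0d0p6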
7) drbdadm up r0
8) set up the second node in a similar way
9) set up RHCS on both nodes
   possibly without services/resources yet, but cman is required to
   be active before I can start the clvmd service
10) on the first node issue
   drbdadm -- --overwrite-data-of-peer primary r0
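   After that I would watch the initial sync before going on, e.g. with:

      cat /proc/drbd   # should show the resource Primary and the sync progress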
At the end of these steps one question arises:
actually I don't have any GFS at all; I have a drbd-mirrored VG that is
clustered.....
So, can I assume that clustered LVM on top of drbd is OK for clustering
hosts if my virtual machines use whole LVs of this VG as their disks?
One doubt: the VM sees the host's plain LV as its disk, named for example
/dev/hda, and then at installation time it creates a file system on top
of it, I imagine, which for Linux is typically ext3 and not GFS.....
Perhaps I could mount that FS from within the host itself if the VM is
powered off?
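Something like this is what I have in mind (hypothetical LV name vm1_disk,
assuming kpartx is available, and only while the guest is powered off):

   kpartx -av /dev/VG/vm1_disk     # map the guest's partition table on the host
   ls /dev/mapper/                 # find the new mapping, e.g. VG-vm1_disk1
   mount -o ro /dev/mapper/VG-vm1_disk1 /mnt   # mount the guest's ext3 read-only
   umount /mnt && kpartx -dv /dev/VG/vm1_disk  # clean up the mappings afterwards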
Or is the only way to create a GFS file system and use files on that FS
as my virtual guests' disks (a la VMware ESX with its cluster filesystem
VMFS)?
I think the former approach could improve performance, if it is
consistent as a whole....
If feasible, can I also assume this setup is suitable for live migration
of a VM?
Thanks for attention and comments
Gianluca