[DRBD-user] drbd+lvm no bueno
igorc at encompasscorporation.com
Fri Jul 27 02:28:02 CEST 2018
On Fri, Jul 27, 2018 at 3:51 AM, Eric Robinson <eric.robinson at psmnv.com>
> > On Thu, Jul 26, 2018 at 04:32:17PM +0200, Veit Wahlich wrote:
> > > Hi Eric,
> > >
> > > Am Donnerstag, den 26.07.2018, 13:56 +0000 schrieb Eric Robinson:
> > > > Would there really be a PV signature on the backing device? I didn't
> > > > turn md4 into a PV (did not run pvcreate /dev/md4), but I did turn
> > > > the drbd disk into one (pvcreate /dev/drbd1).
> > Yes (please view in a fixed-width font):
> >
> > | PV signature | VG extent pool ................................. |
> > | .... drbd1 .... .... .... .... .... .... .... | drbd metadata |
> > | .... md4 .... .... .... .... .... .... .... .... .... | md metadata |
> > | component | drives | ..... | ..... | ...of... | md4 | ..... | ..... |
> > > both DRBD and mdraid put their metadata at the end of the block
> > > device, thus depending on LVM configuration, both mdraid backing
> > > devices as well as DRBD minors backing VM disks with direct-on-disk PVs
> > > might be detected as PVs.
> > >
> > > It is very advisable to set lvm.conf's global_filter to allow only the
> > > desired devices as PVs by matching a strict regexp, and to ignore all
> > > other devices, e.g.:
> > >
> > > global_filter = [ "a|^/dev/md.*$|", "r/.*/" ]
> > >
> > > or even more strict:
> > >
> > > global_filter = [ "a|^/dev/md4$|", "r/.*/" ]
> > Uhm, no.
> > Not if he wants DRBD to be his PV...
> > then he needs to exclude (reject) the backend, and only include (accept)
> > DRBD.
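> > For example, a filter that accepts only the DRBD device and rejects
> > everything else might look like this (a sketch; adjust the minor
> > number to your setup):
> >
> >     global_filter = [ "a|^/dev/drbd1$|", "r|.*|" ]
> >
> > That way LVM never scans the md backing device, so the duplicate PV
> > signature visible through it is ignored.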
> > But yes, I very much recommend putting an explicit whitelist of the
> > used PVs into the global filter, and rejecting anything else.
> > Note that these are (by default unanchored) regexes, NOT glob patterns.
> > (Above examples get that one right, though r/./ would be enough...
> > but I have seen people get it wrong too many times, so I thought I'd
> > mention it here again)
> > > After editing the configuration, you might want to regenerate your
> > > distro's initrd/initramfs to reflect the changes directly at startup.
> > Yes, don't forget that step ^^^ that one is important as well.
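> > For example (distro-specific; run as root, and these particular
> > invocations are only a sketch):
> >
> >     dracut -f               # RHEL / CentOS / Fedora
> >     update-initramfs -u     # Debian / Ubuntu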
> > But really, most of the time, you really want LVM *below* DRBD, and NOT
> > above it. Even though it may "appear" to be convenient, it is usually
> > not what you want, for various reasons, one of them being performance.
> I put MySQL databases on the drbd volume. To back them up, I pause them
> and do LVM snapshots (then rsync the snapshots to an archive server). How
> could I do that with LVM below drbd, since what I want is a snapshot of the
> filesystem where MySQL lives?
> How severely does putting LVM on top of drbd affect performance?
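> For reference, that backup workflow might look roughly like this (a
> sketch only; the VG/LV names, sizes and paths are assumptions, and
> note the read lock is only held while the client session that took it
> stays open, so the snapshot must be created from within that session):
>
>     # within ONE mysql session, so the lock stays held:
>     mysql> FLUSH TABLES WITH READ LOCK;
>     mysql> system lvcreate -s -L 10G -n db_snap vg0/db
>     mysql> UNLOCK TABLES;
>
>     # then from the shell:
>     mount -o ro /dev/vg0/db_snap /mnt/db_snap
>     rsync -a /mnt/db_snap/ archive:/backups/db/
>     umount /mnt/db_snap && lvremove -f vg0/db_snap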
> > Cheers,
> > --
> > : Lars Ellenberg
It depends. I would say it is not unusual to end up with a setup where drbd
is sandwiched between a top and a bottom LVM due to requirements or
convenience. For example, in the case of master-master with GFS2:
iscsi,raid -> lvm -> drbd -> clvm -> gfs2
Apart from the clustered LVM on top of drbd (which is what Red Hat
recommends), you also get the benefit of easily extending the drbd
device(s) due to the
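The online-grow workflow that makes this convenient looks roughly like
the following (a sketch; the resource, VG and LV names, the size, and
the mount point are all assumptions):

    lvextend -L +10G vg0/drbd_backing   # bottom LVM, on both nodes
    drbdadm resize r0                   # DRBD adopts the new backing size
    pvresize /dev/drbd1                 # top (c)LVM sees the extra space
    lvextend -L +10G cvg0/data          # grow the clustered LV
    gfs2_grow /mnt/gfs2                 # grow the filesystem online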