<div dir="ltr"><div class="gmail_extra">On Fri, Jul 27, 2018 at 3:51 AM, Eric Robinson <span dir="ltr"><<a href="mailto:eric.robinson@psmnv.com" target="_blank">eric.robinson@psmnv.com</a>></span> wrote:<br><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">> On Thu, Jul 26, 2018 at 04:32:17PM +0200, Veit Wahlich wrote:<br>
> > > Hi Eric,
> > >
> > > On Thursday, 26.07.2018 at 13:56 +0000, Eric Robinson wrote:
> > > > Would there really be a PV signature on the backing device? I didn't
> > > > turn md4 into a PV (did not run pvcreate /dev/md4), but I did turn
> > > > the drbd disk into one (pvcreate /dev/drbd1).
> >
> > Yes (please view in a fixed-width font):
> >
> > |PV signature|      VG extent pool  ...      |
> > |.... drbd1 .... .... .... .... |drbd metadata|
> > |.... md4 .... .... .... .... .... |md metadata|
> > |component|drives|.....|.....|...of...|md4......|.....|.....|
> >
> > > both DRBD and mdraid put their metadata at the end of the block
> > > device, thus depending on LVM configuration, both mdraid backing
> > > devices as well as DRBD minors backing VM disks with direct-on-disk PVs
> > > might be detected as PVs.
> > >
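(As an aside, a quick way to see which block devices currently expose an LVM
PV signature is something like the following; the device names are just the
ones from this thread:

  blkid -t TYPE=LVM2_member
  pvs -o pv_name,vg_name

If /dev/md4 or even a raw component drive shows up next to /dev/drbd1, that
is the duplicate-signature situation described above.)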
> > > It is very advisable to set lvm.conf's global_filter to allow only the
> > > desired devices as PVs by matching a strict regexp, and to ignore all
> > > other devices, e.g.:
> > >
> > > global_filter = [ "a|^/dev/md.*$|", "r/.*/" ]
> > >
> > > or even more strict:
> > >
> > > global_filter = [ "a|^/dev/md4$|", "r/.*/" ]
> >
> > Uhm, no.
> > Not if he wants DRBD to be his PV...
> > then he needs to exclude (reject) the backend, and only include (accept)
> > the DRBD.
> >
> > But yes, I very much recommend putting an explicit whitelist of the
> > to-be-used PVs into the global filter, and rejecting anything else.
> >
> > Note that these are (by default unanchored) regexes, NOT glob patterns.
> > (Above examples get that one right, though r/./ would be enough...
> > but I have seen people get it wrong too many times, so I thought I'd
> > mention it here again)
> >
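For Eric's layout specifically, a sketch of that whitelist approach (untested;
it assumes /dev/drbd1 is the only device that should ever be scanned as a PV,
and that /dev/md4 and its component drives must be ignored) would be along the
lines of:

  # /etc/lvm/lvm.conf, in the devices { } section
  global_filter = [ "a|^/dev/drbd1$|", "r|.*|" ]

i.e. accept exactly the DRBD minor and reject everything else, which is the
opposite of the md-based examples quoted above.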
> > > After editing the configuration, you might want to regenerate your
> > > distro's initrd/initramfs to reflect the changes directly at startup.
> >
> > Yes, don't forget that step ^^^ that one is important as well.
> >
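(For reference, the exact command for that step is distro-specific; the usual
candidates are along the lines of:

  update-initramfs -u      # Debian/Ubuntu
  dracut -f                # RHEL/CentOS/Fedora

so that the new global_filter is already honoured when LVM scans devices from
the initramfs at boot.)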
> > But really, most of the time, you really want LVM *below* DRBD, and NOT
> > above it. Even though it may "appear" to be convenient, it is usually not
> > what you want, for various reasons, one of them being performance.
>
> Lars,
>
> I put MySQL databases on the drbd volume. To back them up, I pause them
> and do LVM snapshots (then rsync the snapshots to an archive server).
> How could I do that with LVM below drbd, since what I want is a snapshot
> of the filesystem where MySQL lives?
>
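(For what it's worth, the workflow Eric describes maps to roughly the
following sketch; the VG/LV names, snapshot size, mount point and archive
host are made up for illustration:

  # quiesce MySQL briefly (one way to "pause" it), snapshot the LV holding
  # the datadir, then resume
  systemctl stop mysql
  lvcreate -s -L 5G -n mysql_snap /dev/vg_drbd1/lv_mysql
  systemctl start mysql

  # back up from the snapshot at leisure, then drop it
  mount -o ro /dev/vg_drbd1/mysql_snap /mnt/mysql_snap   # add nouuid for XFS
  rsync -a /mnt/mysql_snap/ archive:/backups/mysql/
  umount /mnt/mysql_snap
  lvremove -f /dev/vg_drbd1/mysql_snap

With LVM only below DRBD there is indeed no layer above the filesystem left
to snapshot, which is the crux of the question.)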
> How severely does putting LVM on top of drbd affect performance?
>
> >
> > Cheers,
> >
> > --
> > : Lars Ellenberg

It depends. I would say it is not unusual to end up with a setup where DRBD
is sandwiched between a top and a bottom LVM layer, due to requirements or
convenience. For example, in the case of master-master with GFS2:

iscsi,raid -> lvm -> drbd -> clvm -> gfs2

Apart from the clustered LVM on top of DRBD (which is what Red Hat
recommends), you also get the benefit of easily extending the DRBD device(s)
thanks to the underlying LVM.
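To make that last point concrete, growing a DRBD device online in such a
stack comes down to roughly the following; the resource and LV names are
invented, so treat this as a sketch rather than a tested procedure:

  # on BOTH nodes: grow the backing LV underneath DRBD
  lvextend -L +50G /dev/vg_lower/lv_drbd_r0

  # on one node: let DRBD adopt the new size of its backing devices
  drbdadm resize r0

  # then grow whatever sits on top, e.g. the clustered PV and GFS2
  pvresize /dev/drbd1
  lvextend -L +50G /dev/vg_cluster/lv_gfs2
  gfs2_grow /mnt/gfs2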