Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
On 27-07-2017 03:52, Igor Cicimov wrote:
> I would recommend going through this lengthy post
> http://lists.linbit.com/pipermail/drbd-user/2011-January/015236.html
> covering all pros and cons of several possible scenarios.

Hi Igor,
thanks for the link, very interesting thread!

> The easiest scenario for dual-primary DRBD would be a DRBD device per
> VM, so something like this: RAID -> PV -> VG -> LV -> DRBD -> VM,
> where you don't even need LVM locking (since that layer is not even
> exposed to the user); this is great for dual-primary KVM clusters.
> You get live migration and also keep the resizing functionality,
> since you can grow the underlying LV and then the DRBD device itself
> to increase the VM disk size, let's say. The VM needs to be started
> on one node only, of course, so you (or your software) need to make
> sure this is always the case. One huge drawback of this approach,
> though, is the large number of DRBD devices to maintain in case of
> hundreds of VMs! Although, since you have already committed to a
> different approach, this might not be possible at this point.
>
> Note: in this case you don't even need dual-primary, since the DRBD
> resource for each VM can be independently promoted to primary on any
> node at any time. With a single DRBD device it is all-or-nothing, so
> there is no possibility of migrating individual VMs.

Yeah, I considered such a solution in the past. Its main appeal is
being able to live-migrate without going for a dual-primary setup.
However, I would like to avoid creating/deleting DRBD devices for each
added/removed virtual machine. One possible solution is to use Ganeti,
which automates resource and VM creation, right?

> Now, adding LVs on top of DRBD is a bit more complicated. I guess
> your current setup is something like this?
>
>                             LV1 -> VM1
> RAID -> DRBD -> PV -> VG -> LV2 -> VM2
>                             LV3 -> VM3
>                             ...
>
> In this case, when DRBD is in dual-primary, the DLM/cLVM setup is
> imperative so that LVM knows who has write access. But then, on VM
> migration, "something" needs to shut down the VM on Node1 to release
> the LVM lock and start the VM on Node2. Same as above: as long as
> each VM is running on *only one* node you should be fine; the moment
> you start it on both you will probably corrupt your VM. Software like
> Proxmox should be able to help you on both points.
>
> Which brings me to an important question I should have asked at the
> very beginning, actually: what do you use to manage the cluster? (if
> anything)

Pacemaker surely is a possibility, but for manual, non-automated live
migration I should be fine with libvirt's integrated locking:
https://libvirt.org/locking.html. It prevents (at the application
level) the concurrent starting/running of any virtual machine.

Thank you very much, Igor.

-- 
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti at assyoma.it - info at assyoma.it
GPG public key ID: FF5F32A8
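
For reference, a minimal sketch of what one per-VM resource file could
look like in the RAID -> PV -> VG -> LV -> DRBD -> VM layout discussed
above (DRBD 8.4 syntax; the resource name, minor number, LV path,
hostnames and addresses are invented for illustration):

    # /etc/drbd.d/vm1.res -- one resource per VM (hypothetical names)
    resource vm1 {
        device    /dev/drbd10;     # block device handed to the guest
        disk      /dev/vg0/vm1;    # backing LV on the local RAID/VG
        meta-disk internal;

        on node1 {
            address 192.168.100.1:7790;
        }
        on node2 {
            address 192.168.100.2:7790;
        }
    }

Each VM gets its own resource like this, which is what makes per-VM
promotion (and hence per-VM migration) possible, at the cost of one
such file and one minor/port pair per guest.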
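The resize path Igor mentions would then be roughly the following
(a sketch only; LV/resource names and sizes are examples, and the
backing LV has to be grown on both nodes before DRBD is resized):

    # on both nodes: grow the backing LV under the DRBD resource
    lvextend -L +10G /dev/vg0/vm1

    # on the current primary: let DRBD adopt the new backing size
    drbdadm resize vm1

    # then grow the partition/filesystem inside the guest as usual

This is the standard "online grow" procedure from the DRBD user guide
for resources with internal metadata.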
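As for libvirt's integrated locking mentioned at the end: the lockd
driver is enabled per host in qemu.conf and enforced by the virtlockd
daemon, so a second start of a guest whose disks are already leased
elsewhere fails instead of corrupting the image. A minimal sketch,
assuming the usual Linux paths (the lockspace directory below is a
made-up example and, for protection *across* hosts, must live on
storage both nodes can see):

    # /etc/libvirt/qemu.conf (on both nodes)
    lock_manager = "lockd"

    # /etc/libvirt/qemu-lockd.conf -- with block devices/LVs an
    # indirect lockspace on shared storage is typically needed:
    #file_lockspace_dir = "/shared/libvirt/lockd/files"

With only the local default lockspace, the protection is effectively
per-host, which is still enough to stop an accidental double start on
the same node but not on the peer.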