<p>I do need live migration, but not as a failover; the migration will be manual, not automatic. In that case I would at some point have to run in primary/primary. Is the cluster FS only for shared storage configurations? Also, what ensures that a VM's disk is only being accessed by one hypervisor node at a time? Would libvirt or DRBD control/ensure that, or is it up to me? That's my main concern once I know this is feasible and the best route.</p>
<p>Thanks<br>
- Trey</p>
<div class="gmail_quote">On Oct 30, 2011 2:41 PM, "Bart Coninckx" <<a href="mailto:bart.coninckx@telenet.be">bart.coninckx@telenet.be</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
On 10/30/11 20:34, Trey Dockendorf wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hi,<br>
<br>
Cluster software like Pacemaker could serve your purpose very<br>
well. Do incorporate STONITH though, as you will need dual-primary DRBD.<br>
Mind you, Pacemaker is NOT easy; it requires a lot of study and reading.<br>
<br>
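(A rough sketch of what that could look like in Pacemaker's crm shell, assuming a DRBD resource called "vmstore" and IPMI-based fencing; all names, addresses, and credentials below are hypothetical:)<br>
<pre>
# Hypothetical sketch: DRBD "vmstore" promoted on both nodes (dual primary),
# plus an IPMI STONITH device; adjust names and parameters to your hardware.
primitive p_drbd_vmstore ocf:linbit:drbd \
    params drbd_resource="vmstore" \
    op monitor interval="29s" role="Master" \
    op monitor interval="31s" role="Slave"
ms ms_drbd_vmstore p_drbd_vmstore \
    meta master-max="2" clone-max="2" notify="true" interleave="true"
primitive p_stonith_node1 stonith:external/ipmi \
    params hostname="node1" ipaddr="192.168.1.101" userid="admin" passwd="secret"
location l_stonith_node1 p_stonith_node1 -inf: node1
property stonith-enabled="true"
</pre>
<br>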
Sharing image files could be done with OCFS2, for example (or any<br>
clustering file system). You then need to create a Pacemaker<br>
resource to handle this; see the sketch below. Personally I have never<br>
used OCFS2, but there are lots of examples around.<br>
<br>
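(The resource itself could be a cloned Filesystem agent, sketched here with a made-up device and mount point; note that OCFS2 also needs its own cluster stack (o2cb/dlm) running:)<br>
<pre>
# Hypothetical sketch: mount the OCFS2 volume on both nodes at once.
primitive p_fs_vmstore ocf:heartbeat:Filesystem \
    params device="/dev/drbd0" directory="/vmstore" fstype="ocfs2" \
    op monitor interval="20s"
clone cl_fs_vmstore p_fs_vmstore meta interleave="true"
# Only promote-then-mount ordering shown; o2cb/dlm clones omitted for brevity.
order o_drbd_before_fs inf: ms_drbd_vmstore:promote cl_fs_vmstore:start
</pre>
<br>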
HTH,<br>
<br>
B.<br>
</blockquote>
<br>
Looking at the man page for STONITH, I'm not sure I understand how to<br>
incorporate that into this particular situation. When I put DRBD in<br>
dual primary, that will be only for the purpose of live migration, but I<br>
don't really plan to use live migration at first for failover. My<br>
initial purpose is maintenance: to allow me to reboot one node<br>
(after a kernel update) and move all the VMs to the other node. Once I've<br>
got all that working smoothly, I'd move to having it serve a<br>
failover function.<br>
</blockquote>
<br>
If you don't need live migration, you can safely forget about dual primary. That is actually even better, since dual primary is (or rather can be) a can of worms.<br>
<br>
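(For reference, should you end up needing dual primary for live migration after all, the DRBD side is roughly the following; host names, IP addresses, and the backing LV are placeholders:)<br>
<pre>
# Hypothetical /etc/drbd.d/vmstore.res (DRBD 8.3-style syntax)
resource vmstore {
  protocol C;
  net {
    allow-two-primaries;             # the dual-primary switch; omit for primary/secondary
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
  }
  on node1 {
    device    /dev/drbd0;
    disk      /dev/vg0/lv_vmstore;   # LVM volume used as backing device
    address   192.168.1.101:7789;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/vg0/lv_vmstore;
    address   192.168.1.102:7789;
    meta-disk internal;
  }
}
</pre>
<br>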
No reason to go for OCFS2 either, unless you want shared storage for your config files (though NFS might serve that purpose too).<br>
LVM with a regular file system is fine, and very good for backups.<br>
<br>
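(To illustrate the backup point with a throwaway LVM snapshot; volume group and names are hypothetical:)<br>
<pre>
# Hypothetical example: snapshot the VM store, copy it off, drop the snapshot.
lvcreate --size 10G --snapshot --name lv_vmstore_snap /dev/vg0/lv_vmstore
mount -o ro /dev/vg0/lv_vmstore_snap /mnt/snap
rsync -a /mnt/snap/ /backup/vmstore/
umount /mnt/snap
lvremove -f /dev/vg0/lv_vmstore_snap
</pre>
<br>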
But do keep in mind that a master/slave setup (primary/secondary DRBD) will in no way allow you to live migrate. You can, however, always add that later, though converting from ext4 to OCFS2 will probably require a backup and restore.<br>
<br>
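(Once storage that both nodes can write to is in place, the migration itself is a single libvirt call; guest and host names here are made up:)<br>
<pre>
# Hypothetical example: push guest "vm01" live to node2 over SSH.
virsh migrate --live vm01 qemu+ssh://node2/system
</pre>
<br>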
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Would OCFS2 be used in conjunction with DRBD? From a few articles I've<br>
found, I thought what could be done is to create an LVM logical volume<br>
named lv_vmstore that stores all the qcow2 images (already part of my<br>
current deployment). Since I'm running CentOS 6, I've been formatting it<br>
as ext4. Would I not then use DRBD to replicate lv_vmstore across both nodes?<br>
<br>
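(To illustrate the layering being asked about, with placeholder names: the LV becomes DRBD's backing device, and the filesystem then goes on the DRBD device rather than directly on the LV:)<br>
<pre>
# Hypothetical layering: LVM volume -> DRBD -> filesystem.
lvcreate --size 1T --name lv_vmstore vg0   # backing device on each node
drbdadm create-md vmstore                  # initialize DRBD metadata
drbdadm up vmstore
mkfs.ext4 /dev/drbd0                       # filesystem on the DRBD device, not the LV
</pre>
<br>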
One catch to all this, which I think I forgot to include in my initial<br>
email, is that I have no shared storage. I only have 2 physical hosts,<br>
each with approximately 1 TB set aside for lv_vmstore. Someday, once<br>
budgets allow, I may have a SAN, but for now I am trying to facilitate<br>
live migration without one.<br>
</blockquote>
<br>
DRBD is a technology that would allow you to be without a SAN.<br>
But do you or don't you need live migration? (You now mention that you do.)<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Thanks<br>
- Trey<br>
</blockquote>
<br>
B.<br>
<br>
<br>
</blockquote></div>