<div dir="ltr"><div class="gmail_default" style="font-size:small"><br></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Jul 27, 2017 at 9:04 PM, Igor Cicimov <span dir="ltr"><<a href="mailto:igorc@encompasscorporation.com" target="_blank">igorc@encompasscorporation.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div style="font-size:small">Hey Gionatan,<br></div><div class="gmail_extra"><br><div class="gmail_quote"><span class="">On Thu, Jul 27, 2017 at 7:04 PM, Gionatan Danti <span dir="ltr"><<a href="mailto:g.danti@assyoma.it" target="_blank">g.danti@assyoma.it</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><span class="m_-422817498603911508gmail-">Il 27-07-2017 10:23 Igor Cicimov ha scritto:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
When in cluster mode, LVM will not use a local cache; that's part of the<br>
configuration you need to do during setup.<br>
<br>
</blockquote>
<br></span>
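<div><div>(Presumably that refers to lvm.conf settings along these lines — a sketch only, assuming clvmd-based clustered locking; exact settings depend on your stack:<br><br>
global {<br>
    locking_type = 3    # built-in clustered locking via clvmd<br>
    use_lvmetad = 0     # the lvmetad metadata cache daemon must be off with clustered locking<br>
}<br>
)<br><br></div></div>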
Hi Igor, I am not referring to LVM's metadata cache. I am speaking about the kernel I/O buffers (i.e. the ones you can see from "free -m" under the buffers column) which, in some cases, work similarly to a "real" pagecache.<div class="m_-422817498603911508gmail-HOEnZb"><div class="m_-422817498603911508gmail-h5"><br></div></div></blockquote></span><div><div>Well, I don't see how this is directly related to the dual-primary setup, since even with a single primary whatever is not yet committed to disk is not replicated to the secondary either. So in case you lose the primary, whatever was in its buffers at the time is gone as well.<br></div> </div><div><div>But the rule of thumb, let's say, would be to have as few cache layers as possible without impacting performance while retaining data consistency at the same time. With VMs you have an additional cache layer in the guest as well as the one in the host. There are many documents discussing the cache modes, for example <a href="https://www.suse.com/documentation/sles11/book_kvm/data/sect1_1_chapter_book_kvm.html" target="_blank">https://www.suse.com/documentation/sles11/book_kvm/data/sect1_1_chapter_book_kvm.html</a>, <a href="https://www.ibm.com/support/knowledgecenter/en/linuxonibm/liaat/liaatbpkvmguestcache.htm" target="_blank">https://www.ibm.com/support/knowledgecenter/en/linuxonibm/liaat/liaatbpkvmguestcache.htm</a> and <a href="https://pve.proxmox.com/wiki/Performance_Tweaks" target="_blank">https://pve.proxmox.com/wiki/Performance_Tweaks</a>.<br><br></div><div>So which write cache mode you will use really depends on the specific hardware, the amount of system RAM, the OS sysctl settings (i.e. how often you flush to disk, params like vm.dirty_ratio, vm.dirty_background_ratio etc.), the disk types/speed, and the HW RAID controller (for example with battery-backed cache or not). For instance, DRBD has some tuning parameters like:<br><br> disk-flushes no;<br> md-flushes no;<br> disk-barrier no;<br><br></div><div>which make it possible to use write-back caching on the <b>BBU-backed</b> RAID controller instead of flushing directly to disk. So many factors are in play, but the main idea is to reduce the number of caches (or their caching time) between the data and the disk as much as possible without losing data or performance.<br></div></div></div></div></div>
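<div><div>For illustration, a minimal sketch of where those options would sit in a DRBD 8.4-style resource file, plus example values for the dirty-page sysctls mentioned above (the resource name, hostnames, devices and numbers are placeholders to adapt to your own nodes, controller and RAM):<br><br>
resource r0 {<br>
  disk {<br>
    disk-flushes no;    # lean on the BBU-backed controller cache instead of flushing<br>
    md-flushes no;      # same for DRBD metadata writes<br>
    disk-barrier no;    # barriers are normally unnecessary with a BBU<br>
  }<br>
  on node1 { device /dev/drbd0; disk /dev/sdb1; address 10.0.0.1:7789; meta-disk internal; }<br>
  on node2 { device /dev/drbd0; disk /dev/sdb1; address 10.0.0.2:7789; meta-disk internal; }<br>
}<br><br>
# /etc/sysctl.d/90-writeback.conf -- example values only<br>
vm.dirty_background_ratio = 5   # kick off background writeback at ~5% dirty memory<br>
vm.dirty_ratio = 10             # throttle writers once dirty pages reach ~10% of memory<br><br>
Lower dirty ratios simply shorten the time data sits only in the host's page cache, which is the same trade-off described above.<br></div></div>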
</blockquote></div><br><div style="font-size:small" class="gmail_default">And in the case of live migration, I'm sure the tool you decide to use will freeze the guest and make a sync() call to flush the OS cache *before* stopping and starting the guest on the other node.</div>
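<br><div style="font-size:small" class="gmail_default">For what it's worth, a minimal sketch with libvirt/KVM on top of the dual-primary DRBD device ("vm1" and "node2" are placeholders, and this assumes the guest disk is defined with cache='none' so there is nothing sitting in the host page cache to flush in the first place):<br><br>
virsh migrate --live vm1 qemu+ssh://node2/system<br><br>
QEMU/libvirt pauses the guest only for the final memory-copy phase and resumes it on the destination, so the guest's own caches travel with it; the less the host itself caches underneath, the less there is to get wrong during the switch.</div><br></div></div>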