Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
On 20/10/15 12:25 PM, Joeri Casteels wrote:
>
>> On 20 Oct 2015, at 17:47, Lionel Sausin <ls at numerigraphe.com> wrote:
>>
>> On 20/10/2015 17:30, Joeri Casteels wrote:
>>>> Protocol A is async, C is sync. So you're probably hitting the
>>>> network limit on C.
>>> I don't think so, since there is a direct 20G trunk in between; with
>>> perf I get 19.6 Gbit/s on a single thread. (With A, when I monitor
>>> the network, it also hits 2x the speed that C does.)
>> "sync" means the round-trip time (down to the remote disks) is what
>> counts, not the bandwidth.
> So what causes the 1/2 difference then? If I read the forums, most
> people don't even see a speed difference between protocols A and C...
> By the way, both the primary and the slave node are identical
> hardware-wise, so it's not that the disks are the limiting factor.

It could easily be the hardware, or the configuration, or any number of
things. To diagnose, you'll want to test each node's storage on its
own, test the network links in isolation, and so on. You need to find
the location of the performance issue (or confirm there is no issue)
before you burn time debugging higher-level applications like DRBD. A
rough sketch of such tests follows at the end of this message.

Protocol A says to call the write complete when it's on the local
node's network send buffers, so the slow-down could be on the peer's
network receive buffers. Try testing Protocol B: that calls the write
complete when it's received by the peer, but not yet committed to
persistent storage. An example of the change is also below.

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?
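For what it's worth, below is the sort of isolation testing I have in
mind. The file path and peer address are placeholders, and the fio
parameters are only illustrative, so treat this as a sketch to adapt,
not a recipe:

  # Raw write speed of each node's storage, tested against a scratch
  # file (not the in-use DRBD backing device):
  fio --name=seqwrite --filename=/tmp/fio-test --rw=write --bs=1M \
      --size=4G --direct=1 --end_fsync=1

  # Bandwidth of the replication link (run 'iperf3 -s' on the peer
  # first):
  iperf3 -c <peer-ip> -t 30

  # Round-trip latency, which is what protocol C actually waits on:
  ping -c 100 <peer-ip>

If both nodes post the same storage numbers and the link shows full
bandwidth with low latency, you've ruled out the lower layers and can
look at DRBD itself.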
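Switching protocols is just a resource configuration change. Assuming
a resource named "r0" (the name is made up; use your own), it would
look something like:

  # In /etc/drbd.d/r0.res, change "protocol C;" to "protocol B;" in
  # the resource definition (or its net section, depending on your
  # DRBD version), then apply it on both nodes:
  drbdadm adjust r0

If B performs like A, the time is being lost committing writes on the
peer's disks; if B is as slow as C, the bottleneck is in getting the
data to the peer in the first place.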