Note: "permalinks" may not be as permanent as we would like;
direct links to old sources may well be a few messages off.
Hmm, I just reread that article... it sounds like it's just paranoid about
split brain. Wouldn't protocol C cause the I/O (or at least write I/O) to
fail if the link went down, or at least offer that as a configuration
option for DRBD?

From the manual for protocol C:

    Synchronous replication protocol. Local write operations on the
    primary node are considered completed only after both the local
    and the remote disk write have been confirmed.

So split brain shouldn't be possible on network loss between the two
active nodes, or else protocol C is being violated. You would have to
manually tell DRBD that the other node is down. Of course, that means
there can be no automatic fail-over beyond the link quality of the two
nodes, as your volume would become read-only until the network was
restored.

Andreas, what happens if you block your two nodes from talking directly
to each other, but allow the client to talk to both?

----- Original Message -----
From: "Andreas Hofmeister" <andi at collax.com>
To: drbd-user at lists.linbit.com
Sent: Tuesday, November 29, 2011 5:44:08 PM
Subject: Re: [DRBD-user] Cluster filesystem question

On 29.11.2011 21:17, Florian Haas wrote:
> On Mon, Nov 28, 2011 at 9:26 PM, Lars Ellenberg
> As this sort of issue currently pops up on IRC every other day, I've
> just posted this rant:
>
> http://fghaas.wordpress.com/2011/11/29/dual-primary-drbd-iscsi-and-multipath-dont-do-that/

Yes Florian, I did get that from Lars' response. I still want to
understand what the actual problems are. And no, Google does not
actually help in this regard. If I knew what questions to ask and where
to look, I could beg or bribe the right people, or even find the right
code to patch.

Ciao
  Andi

_______________________________________________
drbd-user mailing list
drbd-user at lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user
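[Editor's note on the protocol C discussion above: the behavior the poster asks about is not implied by protocol C alone, but it can be approximated in drbd.conf via fencing and split-brain recovery policies. The sketch below assumes DRBD 8.x syntax; the resource name r0 and the helper-script paths are illustrative, not taken from the thread.]

    resource r0 {
      protocol C;   # writes complete only after local AND remote ack

      disk {
        # On replication-link loss, freeze I/O until the peer has been
        # fenced, rather than continuing to write independently.
        fencing resource-and-stonith;
      }

      handlers {
        # Standard LINBIT helper scripts for Pacemaker-based fencing.
        fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
      }

      net {
        # Automatic split-brain recovery policies:
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        # With two primaries, refuse to auto-resolve; disconnect and
        # require manual intervention.
        after-sb-2pri disconnect;
      }
    }

With `fencing resource-and-stonith`, a node that loses its peer suspends I/O until the fence-peer handler confirms the other side is down, which matches the "you should have to manually tell DRBD the other node is down" behavior discussed above, at the cost of availability during network outages.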