[DRBD-user] Ask some questions about the splitting of the dual master device
Robert Altnoeder
robert.altnoeder at linbit.com
Tue Dec 11 10:38:46 CET 2018
On 12/10/18 9:55 AM, Su Hua wrote:
> The arbitration service I am talking about means that you can decide
> to have only one device to provide services before the brain split occurs.
That does not make sense.
The cause of a split brain is that cluster nodes are unable to detect
whether other cluster nodes are inoperative (e.g., the hardware is
unpowered) or just unreachable (e.g., the network connections are
interrupted). Now, if you have a 2 node cluster, nodes A and B, and you
pre-select node A to run services whenever the cluster cannot see the
status of the other node, what happens if node A goes down? Node B
cannot figure out whether node A is inoperative or merely unreachable,
so everything stops, because node B is not allowed to run services.
That defeats the purpose of a cluster.
> Therefore, in order to prevent data loss or IO tear, a stricter
> arbitration strategy is needed to ensure that all IOs can only select
> one controller to provide services after brain splitting.
Yes, that is called fencing, and it happens in order to prevent a split
brain, not after a split brain has already occurred. If it is a
multi-node (>= 3 nodes) cluster, then there is also quorum, so that only
the partition that still has quorum will fence the partition that does
not (to avoid deathmatch situations).
Recent versions of DRBD have a quorum feature as well, so I/O will stop
if the quorum is lost.
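
As a rough sketch, enabling that in a DRBD 9 resource configuration
looks roughly like the following (the resource name "r0" is just a
placeholder; check the drbd.conf man page of your version for the exact
options and values):

    resource r0 {
        options {
            # allow I/O only while a majority of the configured nodes
            # is reachable
            quorum majority;
            # freeze I/O instead of returning errors when quorum is lost
            on-no-quorum suspend-io;
        }
        # ... devices, addresses and other settings omitted
    }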
> I found in the test that there will be a 10 second heartbeat wait
> after the drbd synchronization network port is down. During this time,
> the new data IO will not be synchronized to the other end.
> [...]
> But another controller does not have this data, which leads to data loss.
DRBD performs synchronous replication, which means that an application
I/O request is completed only after the data has been written to all nodes.
Data that is in flight is also covered by DRBD's activity log, so that
the data will be consistent whenever the resource is resynced.
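Synchronous replication corresponds to DRBD's protocol C, which is what
is normally used in such setups. A minimal sketch of the relevant part
of the configuration, again with "r0" as a placeholder name:

    resource r0 {
        net {
            # protocol C: a write is reported as completed to the
            # application only after it has been written on all
            # connected peers
            protocol C;
        }
        # ... other settings omitted
    }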
Configuring fencing handlers in DRBD itself will trigger fencing by the
cluster resource manager (Pacemaker in this case) if the replication
link fails. As far as I remember, that happens synchronously as well, so
that I/O freezes until the handler returns.
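For a Pacemaker-managed setup, that configuration looks roughly like the
sketch below (DRBD 8.4-style sections; the section names and handler
scripts differ between versions, so treat this as an assumption and
check the documentation for your version):

    resource r0 {
        disk {
            # suspend I/O and call the fence-peer handler when the
            # peer is lost
            fencing resource-and-stonith;
        }
        handlers {
            # place a constraint in Pacemaker so that only this node
            # may be promoted
            fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
            # remove the constraint once the peer is in sync again
            after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
        }
        # ... other settings omitted
    }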
br,
Robert