[DRBD-user] Replication problems constants with DRBD 8.3.10

Luca Fornasari luca.fornasari at gmail.com
Sun Jun 16 10:47:21 CEST 2013

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


On Sun, Jun 16, 2013 at 6:44 AM, cesar <brain at click.com.py> wrote:

> Right Digimer, Red Hat did make the fence software.
>
> Just as a comment, please see this Red Hat link, and you will see that
> "fence_ack_manual" is supported:
> https://access.redhat.com/site/articles/27136
>

In my opinion "fence_ack_manual" is supported as an escape hatch for
situations where the fencing mechanism is not working as expected, e.g.
(from the article you just cited) "site-to-site link failure would prevent
fencing from working between the sites".
So, as Digimer already told you: *you need fencing*!! I hope this is the
last time someone has to tell you: *you need fencing*!!
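
For reference, here is a minimal sketch of what fencing integration looks
like in drbd.conf on DRBD 8.3 when you run Pacemaker (the handler scripts
ship with DRBD; the resource name and paths are examples, adjust them to
your setup):

    # drbd.conf fragment -- "r0" and the script paths are examples.
    resource r0 {
      disk {
        # Freeze I/O and call the fence-peer handler when the
        # replication link goes away.
        fencing resource-and-stonith;
      }
      handlers {
        # Stock DRBD handlers that place/remove a Pacemaker constraint
        # so the outdated peer cannot be promoted.
        fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
      }
    }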


> And about DRBD over a direct connection in round-robin mode, can you give
> me links or comments about this case? (This is very important for me,
> because I will lose connection speed if I change from balance-rr to
> active-backup.)
>

https://www.kernel.org/doc/Documentation/networking/bonding.txt

balance-rr: This mode is the only mode that will permit a single
	TCP/IP connection to stripe traffic across multiple
	interfaces. It is therefore the only mode that will allow a
	single TCP/IP stream to utilize more than one interface's
	worth of throughput.  This comes at a cost, however: the
	striping generally results in peer systems receiving packets out
	of order, causing TCP/IP's congestion control system to kick
	in, often by retransmitting segments.

The problem is that you get out-of-order packets, and it doesn't help to
play around with the net.ipv4.tcp_reordering sysctl parameter, because
there will always be some chance of out-of-order delivery. In-order
packets are fundamental to DRBD.
In the DRBD world the bonding driver is used to achieve HA, with
active-backup or 802.3ad. Neither of them will boost your performance
(802.3ad can improve throughput if and only if you have a large number of
TCP connections, and that's not the case with your DRBD scenario).
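
As a sketch, assuming RHEL-style network scripts (the device names and the
monitoring interval are examples), switching the bond to active-backup
looks something like this:

    # /etc/sysconfig/network-scripts/ifcfg-bond0 -- example values
    DEVICE=bond0
    ONBOOT=yes
    BOOTPROTO=none
    # active-backup: one slave carries traffic, the other is standby;
    # miimon=100 checks link state every 100 ms.
    BONDING_OPTS="mode=active-backup miimon=100"

    # /etc/sysconfig/network-scripts/ifcfg-eth1 -- repeat for each slave
    DEVICE=eth1
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none

You lose the (theoretical) aggregated bandwidth of balance-rr, but you keep
link redundancy and DRBD gets its packets in order.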

> In either case, I am grateful to you for your kind attention, your time,
> and your information.
>
> Best regards
> Cesar


I *think* the balance-rr bonding mode (and its out-of-order packets) could
be the source of your specific problem:

Jun 14 08:50:12 kvm5 kernel: block drbd0: Digest mismatch, buffer modified by upper layers during write: 21158352s +4096
Jun 14 08:50:12 kvm6 kernel: block drbd0: Digest integrity check FAILED: 21158352s +4096
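
For context, those messages only show up when DRBD's optional per-packet
data-integrity checking is enabled, so your setup presumably has something
like the following in its net section (a minimal sketch; "r0" is an example
resource name):

    # drbd.conf fragment -- "r0" is an example resource name.
    resource r0 {
      net {
        # Checksum every data packet; a mismatch on the receiving side
        # produces the "Digest integrity check FAILED" messages above.
        data-integrity-alg sha1;
      }
    }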
So just try the active-backup bonding mode and let's see what happens, but
please do remember: *you need fencing*!

Cheers,
Luca

