[DRBD-user] Timed out waiting for missing ack packets; disconnecting

Lars Ellenberg lars.ellenberg at linbit.com
Fri Jun 6 22:33:35 CEST 2014

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


On Thu, Jun 05, 2014 at 11:01:39AM -0700, Adam Randall wrote:
> I'd like to preface this by saying that I'm not overly experienced with
> DRBD, and I've been piecing a lot of information together from various
> documentation sources, including those on the DRBD site.
> 
> I have two servers, both the same hardware configuration, connected to two
> identical MD1200s. I sync these two servers in primary/primary mode using
> DRBD with OCFS2. Though they are in primary/primary mode, one of the
> servers is the primary, and the other is the secondary. Some tasks are run
> on the secondary that modify the drbd/ocfs2 volume, but the work is kept
> pretty light. The servers each have two ethernet ports, one for their LAN
> connection, the other for a direct connection from one to the other that is
> used solely for DRBD and OCFS2 communication.
> 
> The primary server, named bellerophon, is a web, database and file storage
> server, with the database (postgresql) and documents stored on the
> drbd/ocfs2 volume. The secondary server, named bia, is mainly used as a hot
> backup.
> 
> Up until May 29th, we've had very good success with the setup, and have
> even deployed another server pair with the same configuration, though with
> a different purpose. On the 29th of May, however, we started seeing
> timeouts:
> 
> Feb 28 15:13:01 bellerophon kernel: block drbd1: Timed out waiting for
> missing ack packets; disconnecting

The timeout used is the config parameter "ping-timeout";
its unit is tenths of a second, and the default value is 5,
that is, 0.5 seconds.

DRBD uses two TCP connections per replication link,
one for the bulk data, one for ACKs and some other stuff.
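
If you want to see them on the wire, and assuming the replication link
uses the common example port 7789 (substitute your own), something like

    ss -tn '( sport = :7789 or dport = :7789 )'

should show two established TCP connections per resource:
the data socket and the meta socket.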

For data integrity reasons, especially in multi-primary mode,
because of how we handle potentially concurrent writes
to the same or overlapping blocks, we must still "serialize"
across both sockets at times, and process data requests and ACKs
in the order they were sent, not in the order they were received.

This message means that we received some data request, noticed
(by sequence number) that there should have been more packets
on the "meta" socket, and waited for those for the arbitrarily chosen
"ping-timeout" (so we would not need yet another config parameter).
But we did not receive them in time, for whatever reason.

See that your physical replication link is healthy and not saturated,
maybe increase the ping-timeout slightly, and observe.
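
For example, a minimal sketch of where that knob lives; the resource
name "r0" and the chosen value are illustrative only:

    resource r0 {
      net {
        # unit is tenths of a second; the default of 5 means 0.5s
        ping-timeout 10;  # 1 second, a bit more slack
      }
      ...
    }

Then "drbdadm adjust r0" should apply the change.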

That said, I strongly recommend against multi-primary setups
if you do not really *need* them.
Don't do them because they "feel cool".
Only do them with good technical reason.

The additional latency and the distributed locks of the cluster
filesystems, as well as the added complexity in dealing with
and recovering from failure scenarios, are usually not worth it.

Especially if you are trying to get away without fencing.
That may sometimes be an option in single-primary setups,
within certain constraints.

But multi-primary without fencing is an absolute no-go.
You appear to not have fencing configured.
That *will* cause data loss.
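
For reference, a rough sketch of what fencing looks like in a DRBD 8.4
config with Pacemaker; the resource name is illustrative, and the
handler scripts are the ones shipped with DRBD, so verify the paths on
your distribution:

    resource r0 {
      disk {
        # on loss of the replication link while Primary:
        # freeze I/O and call the fence-peer handler
        fencing resource-and-stonith;
      }
      handlers {
        fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
      }
      ...
    }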

As a side note, DRBD with fencing, as required for multi-primary,
will trigger fencing on a flaky replication link,
and thus node reboots (possibly perceived as "spurious").


-- 
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com


