[DRBD-user] "PingAck not received" messages

Lars Ellenberg lars.ellenberg at linbit.com
Fri May 18 16:04:33 CEST 2012

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


On Wed, May 16, 2012 at 09:11:05PM +0100, Matthew Bloch wrote:
> I'm trying to understand a symptom for a client who uses drbd to run
> sets of virtual machines between three pairs of servers (v1a/v1b,
> v2a/v2b, v3a/v3b), and I wanted to understand a bit better how DRBD I/O
> is buffered, depending on the chosen replication mode and buffer settings.
> 
> Firstly, it surprised me that even in replication mode "A", the system
> still seemed limited by the bandwidth between nodes.  I found this
> out when the customer's bonded interface had flipped over to its 100Mb
> backup connection, and suddenly they had I/O problems.  While I was
> investigating this and running tests, I noticed that switching to mode A
> didn't help, even when measuring short transfers that I'd expect would
> fit into reasonable-sized buffers.  What kind of buffer size can I
> expect from an "auto-tuned" DRBD?  It seems important to be able to
> cover bursts without leaning on the network, so I'd like to know whether
> that's possible with some special tuning.

Uhm, well,
we invented the DRBD Proxy specifically for that purpose.
Protocol A is asynchronous only up to what fits into the TCP send
buffer: once that buffer fills, application writes are again throttled
to the speed of the replication link. DRBD Proxy inserts a much larger
buffer in between, which is what you need to ride out longer bursts.
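
To absorb somewhat larger bursts without the proxy, you could try
pinning the send buffer to a larger fixed size. A minimal sketch,
assuming a resource named r0 and DRBD 8.3 syntax (on 8.4 the protocol
keyword moves into the net section); the 10M value is purely
illustrative, tune it for your workload:

    resource r0 {
        protocol A;
        net {
            # Default is 0, which lets the kernel auto-tune the TCP
            # send buffer. A fixed value caps how much unreplicated
            # data protocol A may queue before writes stall.
            sndbuf-size 10M;
        }
    }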

> The other problem is the "PingAck not received" messages that have been
> littering the logs of the v3a/v3b servers for the last couple of weeks,
> e.g. this has been happening every few hours for one DRBD or another:
> 
> May 14 08:21:45 v3b kernel: [661127.869500] block drbd10: PingAck did
> not arrive in time.

Increase the ping timeout? See the sketch below.
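
Something along these lines in drbd.conf, assuming a resource named
r0; the numbers are illustrative, not a recommendation:

    resource r0 {
        net {
            # Seconds between DRBD keep-alive pings (default 10).
            ping-int 10;
            # How long to wait for the PingAck, in tenths of a
            # second (default 5, i.e. 500ms).
            ping-timeout 30;
        }
    }

Note that a larger timeout only papers over whatever is delaying the
PingAck (network congestion, scheduling latency on the peer); it does
not remove the cause.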

-- 
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com


