[DRBD-user] "PingAck not received" messages

Pascal BERTON pascal.berton3 at free.fr
Mon May 21 08:25:36 CEST 2012

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hi Matthew!

I've recently experienced the very same behavior, with two bonded 10GbE
direct links between the nodes for replication. The nodes host 4 resources
under DRBD 8.3.11 using protocol B, and just like you, the disconnections are
intermittent and can hit any of the resources; there is no obvious rule as to
which one gets picked as the victim. One of the resources hosts a CIFS share
that I use for VMware DataRecovery backups. Although my SMB config looked
fine, I found various errors on the DataRecovery client and the SMB server
that led me to dig further.

After spending a lot of time on SMB tuning and the like, I finally observed
that there is a link between those errors and the PingAck errors: since
DataRecovery was complaining, I ran various experiments forcing it to do a
complete recatalog of the restore points, which issues a lot of CIFS I/O.
Each time, the first 30 minutes or so are OK, then the errors begin - only
from time to time at first, then more and more often as time goes on. When it
enters this "error phase" I see high I/O wait, typically 70% or more, which
means I/Os are taking a long time to complete... and it is during these
phases that the PingAck errors occur.

I'm still unsure whether it's network related or disk related. My feeling is
that there is a buffer somewhere that fills up progressively, which would
explain the "correct" first 30 minutes: once that buffer is full, I/O delays
start to rise, and then come the CIFS errors, the PingAck errors (which don't
help much), and so on... So it looks like the PingAck errors are a
consequence of the rising wait times.
The next step will be to (try to) identify whether it's a network buffer or a
disk buffer that fills up, and whether disk activity or network activity is
the real problem; I don't clearly know yet how I will do that... If it turns
out to be disk related, maybe a caching solution (dm-cache or whatever) would
help, I don't know... Otherwise, I'm afraid only DRBD Proxy would really do
the trick.
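
For what it's worth, my rough plan is to watch the disk side and the network
side at the same time while the recatalog runs; something along these lines
(port 7789 is only an example, use whatever port your resource's address
lines specify):

  # disk side: per-device utilisation and wait times, plus overall %iowait
  iostat -x 1

  # write-back pressure: how much dirty data is waiting to reach the disks
  grep -E 'Dirty|Writeback' /proc/meminfo

  # network side: does the Send-Q of the replication TCP connection grow?
  ss -tn '( sport = :7789 or dport = :7789 )'

If the Send-Q keeps growing, the network looks guilty; if Dirty/Writeback and
the await figures explode first, it points at the disks.
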
If you find other clues, I'm interested!

Best regards,

Pascal.

-----Original Message-----
From: drbd-user-bounces at lists.linbit.com
[mailto:drbd-user-bounces at lists.linbit.com] On Behalf Of Matthew Bloch
Sent: Friday, 18 May 2012 18:50
To: drbd-user at lists.linbit.com
Subject: Re: [DRBD-user] "PingAck not received" messages

On 18/05/12 15:04, Lars Ellenberg wrote:
> On Wed, May 16, 2012 at 09:11:05PM +0100, Matthew Bloch wrote:
>> I'm trying to understand a symptom for a client who uses drbd to run 
>> sets of virtual machines between three pairs of servers (v1a/v1b, 
>> v2a/v2b, v3a/v3b), and I wanted to understand a bit better how DRBD 
>> I/O is buffered depending on what mode is chosen, and buffer settings.
>>
>> Firstly, it surprised me that even in replication mode "A", the 
>> system still seemed limited by the bandwidth between nodes.  I 
>> found this out when the customer's bonded interface had flipped over 
>> to its 100Mb backup connection, and suddenly they had I/O problems.  
>> While I was investigating this and running tests, I noticed that 
>> switching to mode A didn't help, even when measuring short transfers 
>> that I'd expect would fit into reasonable-sized buffers.  What kind 
>> of buffer size can I expect from an "auto-tuned" DRBD?  It seems 
>> important to be able to cover bursts without leaning on the network, 
>> so I'd like to know whether that's possible with some special tuning.
>
> Uhm, well,
> we have invented the DRBD Proxy specifically for that purpose.

That's useful to know - so the kernel buffering, however it's configured,
isn't really set up for handling longer delays?  I don't think that's my
problem, as the ICMP ping time between the servers is <1ms and doesn't drop
out even while DRBD reports it hasn't seen its own pings.  It's gigabit
Ethernet all the way, on a private LAN.
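
For reference, the knob in question seems to be sndbuf-size in the resource's
net section - as far as I understand it, 0 means "let the kernel auto-tune
it", and a fixed value can be forced instead, e.g.:

  net {
    sndbuf-size 512k;   # fixed TCP send buffer; 0 means auto-tune
  }

(That's only my reading of the docs, so corrections welcome.)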

>> The other problem is the "PingAck not received" messages that have 
>> been littering the logs of the v3a/v3b servers for the last couple of 
>> weeks, e.g. this has been happening every few hours for one DRBD or
>> another:
>>
>> May 14 08:21:45 v3b kernel: [661127.869500] block drbd10: PingAck did 
>> not arrive in time.
>
> Increase ping timeout?

I did that (now at 3s, up from 0.5s), but I still get reconnections.
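
In case it helps anyone else: ping-timeout lives in the net section and is
given in tenths of a second, so the 3s setting looks roughly like this, with
"r0" standing in for the real resource name:

  resource r0 {
    net {
      ping-timeout 30;   # 3.0 seconds; the default is 5, i.e. 0.5s
    }
    # rest of the resource definition unchanged
  }

followed by "drbdadm adjust r0" on both nodes to apply it without a restart.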

I set up two pairs of VMs to write 1MB to the DRBD device every second and
time it.  On the problematic machines I saw lots of writes that took more
than 10s, and a couple of those corresponded with DRBD reconnections.  On the
normal machines, only two of the writes took more than 0.1s!
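
Something like the loop below is enough to reproduce that kind of measurement
(/mnt/drbd-test is just a placeholder path on the DRBD-backed filesystem):

  # oflag=dsync forces each write through to stable storage, i.e. through
  # DRBD; dd's final output line reports how long the copy took
  while true; do
      dd if=/dev/zero of=/mnt/drbd-test/probe bs=1M count=1 oflag=dsync 2>&1 | tail -n1
      sleep 1
  done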

So I'm still hunting for what might be going wrong: the software versions are
the same, the DRBD links aren't hitting the ceiling, and these pairs are
doing no more I/O than the "good" pairs.  I think the next step will be to
take some packet dumps and see whether anything odd is going on at the TCP
layer.
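
Probably something like the capture below on both nodes, then comparing the
two sides in wireshark (bond0 and port 7789 are only examples; substitute
whatever interface and port your resource's address lines use):

  # full-size packets, written to a file for offline analysis
  tcpdump -i bond0 -s 0 -w /var/tmp/drbd-repl.pcap port 7789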

If nobody else on the list has seen this sort of behaviour, and Linbit have a
day rate :-), please get in touch privately; I'd rather get you guys to fix
this for our customer.

Best wishes,

-- 
Matthew Bloch                             Bytemark Hosting
                                 http://www.bytemark.co.uk/
                                   tel: +44 (0) 1904 890890
_______________________________________________
drbd-user mailing list
drbd-user at lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user



