[DRBD-user] r0 ok, r1 PingAck did not arrive in time

Gerald Brandt gbr at majentis.com
Thu Jun 27 13:42:59 CEST 2013

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hi Cesar, 

I'm running DRBD from Ubuntu 12.10 on two standalone servers. I'm using two software RAID arrays (one RAID 6 and one RAID 1) with DRBD on top, and exposing the DRBD volumes over iSCSI to Citrix XenServer.

The sync network is two e1000 NICs connected directly with straight cables (no switch in between).

include "drbd.d/global_common.conf"; 
# include "drbd.d/*.res"; 

resource iscsi.target.0 {
  protocol C;

  handlers {
    pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
    pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
    local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
    outdate-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5";
    before-resync-target /usr/local/bin/resync-start-RAID6.sh;
    after-resync-target /usr/local/bin/resync-end-RAID6.sh;
    split-brain "/usr/lib/drbd/notify-split-brain.sh root";
  }

  startup {
    degr-wfc-timeout 120;
  }

  disk {
    on-io-error detach;
  }

  net {
    cram-hmac-alg sha1;
    shared-secret "password";
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
    rr-conflict disconnect;
    sndbuf-size 0;
    max-buffers 8000;
    max-epoch-size 8000;
  }

  syncer {
    rate 30M;
    verify-alg sha1;
    # al-extents 257;
    al-extents 3389;
  }

  on iscsi-filer-1 {
    device /dev/drbd0;
    disk /dev/md0;
    address 192.168.10.1:7789;
    flexible-meta-disk internal;
  }

  on iscsi-filer-2 {
    device /dev/drbd0;
    disk /dev/md0;
    address 192.168.10.2:7789;
    flexible-meta-disk internal;
  }
}

resource iscsi.target.1 {
  protocol C;

  handlers {
    pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
    pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
    local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
    outdate-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5";
    before-resync-target /usr/local/bin/resync-start-RAID1.sh;
    after-resync-target /usr/local/bin/resync-end-RAID1.sh;
    split-brain "/usr/lib/drbd/notify-split-brain.sh root";
  }

  startup {
    degr-wfc-timeout 120;
  }

  disk {
    on-io-error detach;
  }

  net {
    cram-hmac-alg sha1;
    shared-secret "password";
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
    rr-conflict disconnect;
    sndbuf-size 0;
    max-buffers 8000;
    max-epoch-size 8000;
  }

  syncer {
    rate 30M;
    verify-alg sha1;
    # al-extents 257;
    al-extents 3389;
  }

  on iscsi-filer-1 {
    device /dev/drbd1;
    disk /dev/md1;
    address 192.168.10.1:7790;
    flexible-meta-disk internal;
  }

  on iscsi-filer-2 {
    device /dev/drbd1;
    disk /dev/md1;
    address 192.168.10.2:7790;
    flexible-meta-disk internal;
  }
}
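
Not shown above are the net-section timers that control the "PingAck did not arrive in time" error itself. A minimal sketch of the relevant knobs (values are illustrative, not taken from this thread; in DRBD 8.3 ping-timeout is given in tenths of a second):

net {
  ping-int 10;      # seconds between DRBD keep-alive pings (default 10)
  ping-timeout 10;  # tenths of a second to wait for the PingAck (default 5 = 500 ms)
  ko-count 4;       # expel a peer after this many timed-out write requests (default 0 = never)
}

After editing the resource file, " drbdadm adjust iscsi.target.0 " should apply the changes to the running resource, and " cat /proc/drbd " shows the connection state.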

----- Original Message -----

> From: "Cesar Peschiera" <brain at click.com.py>
> To: "Gerald Brandt" <gbr at majentis.com>
> Sent: Wednesday, June 26, 2013 10:24:52 PM
> Subject: Re: [DRBD-user] r0 ok, r1 PingAck did not arrive in time

> 
> Please Gerald, answer this question:
> What are the model and brand of the NICs you use for DRBD on each
> PVE node?
> If you don't know, you can remove the NIC from the mainboard and read
> the markings on its chipset.
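> A software-side check can also work, assuming lspci and ethtool are
> installed (eth1 here is only an example name for the DRBD link):
> 
> lspci | grep -i ethernet   # chipset model of every NIC in the box
> ethtool -i eth1            # driver, driver version and firmware of one interface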

> And some suggestions:
> Please follow these steps, in this order:
> 1- If you have configured "HA" for the VMs with virtual disks on the
> DRBD resource, disable "HA" for these VMs.
> 2- Power off the VMs that have their virtual disks on the DRBD
> resource.

> 3- Make a backup of all VMs that have their virtual disks on the
> DRBD resource (just as a precaution).
> 4- Run on each PVE node: " service drbd stop && chkconfig drbd off "
> 5- Run on each PVE node: " aptitude update && aptitude full-upgrade "
> 6- Reboot the PVE nodes that have the DRBD resources.
> 7- Run on each PVE node: aptitude install pve-headers-`uname -r` #
> do not forget the backticks.
> 8- With this you will have the latest PVE version on your system; its
> new kernel needs DRBD 8.3.13, so you must compile and install it
> (I think you know how to do that).
> 9- Configure the new DRBD directives if necessary.

> 10- Run on all nodes that have the DRBD service:
> cp /var/log/kern.log /var/log/kern.log.bak && cat /dev/null
> >/var/log/kern.log

> 11- Run on each PVE node: " service drbd start && chkconfig drbd on "
> (with this, the DRBD resources will resync with the other PVE
> node); wait until the DRBD resource is up to date (you can watch
> the progress with: watch cat /proc/drbd ).
> 12- When the DRBD resources are up to date, you can re-enable "HA" if
> it was enabled before.

> After following these steps, show the output of " cat
> /var/log/kern.log " and " cat /proc/drbd ".

> Best regards
> Cesar

> ----- Original Message -----
> From: "Gerald Brandt" < gbr at majentis.com >
> To: "cesar" < brain at click.com.py >
> Cc: < drbd-user at lists.linbit.com >
> Sent: Wednesday, June 26, 2013 10:12 PM
> Subject: Re: [DRBD-user] r0 ok, r1 PingAck did not arrive in time

> > Hi,
> >
> > I'm running Intel e1000 NICs with a crossover cable, and I just
> > noticed that I'm getting this in my log files:
> >
> > [3535162.766591] e1000e: eth1 NIC Link is Down
> > [3535168.243278] e1000e: eth1 NIC Link is Up 10 Mbps Full Duplex,
> > Flow Control: Rx/Tx
> > [3535168.243282] e1000e 0000:02:00.0: eth1: 10/100 speed: disabling
> > TSO
> > [3535176.574432] e1000e: eth1 NIC Link is Down
> > [3535178.495165] e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex,
> > Flow Control: Rx/Tx
> > [3535214.602022] e1000e: eth1 NIC Link is Down
> > [3535214.602465] e1000e 0000:02:00.0: eth1: Reset adapter
> > [3535239.602540] e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex,
> > Flow Control: Rx/Tx
> >
> > Interesting.
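> >
> > A quick way to keep an eye on that link, assuming ethtool is
> > available and eth1 is the replication interface:
> >
> > ethtool eth1 | egrep 'Speed|Duplex|Link detected'
> > ethtool -S eth1 | egrep -i 'err|drop|crc'   # NIC error counters
> > ip -s link show eth1                        # kernel-side packet/error statistics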
> >
> > Gerald
> >
> >
> > ----- Original Message -----
> >> From: "cesar" < brain at click.com.py >
> >> To: drbd-user at lists.linbit.com
> >> Sent: Wednesday, June 26, 2013 5:56:17 PM
> >> Subject: Re: [DRBD-user] r0 ok, r1 PingAck did not arrive in time
> >>
> >> Hi Lukas Gradl-4
> >>
> >> I have PVE 2.3 with the same problem
> >>
> >> But my NICs are Realtek, and soon I will change to Intel. I have
> >> them connected in crossover-cable mode, because a switch in the
> >> middle can alter the packets.
> >>
> >> Please answer these questions:
> >>
> >> What is the model of the NICs you use for DRBD on each PVE
> >> node?
> >>
> >> Do you have the link in crossover-cable mode (i.e. NIC to NIC)?
> >> If not, just do it! (DRBD highly recommends a direct NIC-to-NIC
> >> link.)
> >>
> >> The latest "PVE 2.3" and "PVE 3.x" version have the kernel for
> >> support DRBD
> >> 8.3.13, update your PVE and your DRBD and after you must do test.
> >> After,
> >> please tell me about of this test.
> >>
> >> When I have my new Intel NICs, I will be able to tell you about my
> >> experience.
> >>
> >> Best regards
> >> Cesar
> >>
> >>
> >>
> >> --
> >> View this message in context:
> >> http://drbd.10923.n7.nabble.com/r0-ok-r1-PingAck-did-not-arrive-in-time-tp17953p17960.html
> >> Sent from the DRBD - User mailing list archive at Nabble.com.
> >