Hi,

I'm getting a bit embarrassed here. I already mentioned it's an old setup, and yes, the whole plan is to get it updated to the latest kernel and upgrade the hardware... I'm just curious about this specific issue, to know whether it's a flaw in the design.
So, the GNU/Linux distribution is Debian 4.0, running kernel 2.6.18-6-686-bigmem. The 'fileserver' doesn't have Xen installed, but I'm pretty sure my raw devices exported over AoE are equivalent, from an I/O point of view, to your disks-as-files on NFS. My exports go over a single 1 Gbps Ethernet link, with no bonding installed (yet).
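In case that single link turns out to matter, here is roughly how its utilisation can be watched during a backup run (a dependency-free sketch; eth0 is an assumption, substitute the real interface):

    # print cumulative RX/TX byte counters for eth0 once per second;
    # the per-second delta shows how close the 1 Gbps link is to saturation
    # (Ctrl-C to stop)
    while sleep 1; do
        echo -n "$(date +%T) "
        grep 'eth0:' /proc/net/dev
    done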
P.

On Tue, Aug 30, 2011 at 6:30 AM, Martin Rusko <martin.rusko@gmail.com> wrote:
Pascal,

what is the kernel and distribution you're running there, please? I'm
just curious, as I see somewhat similar behavior with two nodes
running drbd, ocfs2, corosync+pacemaker and xen to host a couple of
virtual guests. As a proof of concept, I have some guests whose disks
are files on an NFS-mounted directory from an external NFS server. If
there is heavy IO in these virtual machines, I can observe very short
drbd disconnections, and corosync also complains about being paused
for too long (up to 16 seconds! - normally it sends some traffic over
the network 3 times per second). When corosync is paused for as long
as those 16 seconds, that node gets "stonithed" by the remaining
cluster members.
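For what it's worth, the knob I understand governs this on the corosync
side is the totem token timeout; a minimal corosync.conf excerpt (the
values are purely illustrative, check the defaults of your version):

    # /etc/corosync/corosync.conf (excerpt)
    totem {
        version: 2
        # how long (ms) corosync waits for the token before declaring a
        # membership failure; the usual default of 1000 ms is far below
        # the ~16 s pauses described above
        token: 10000
        # how many times the token is retransmitted before it is
        # considered lost
        token_retransmits_before_loss_const: 10
    }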
My setup is Debian/Squeeze with packages from the official repositories,
with kernel 2.6.32-5-xen-amd64. I'm still running around like a headless
chicken, trying different things - right now a kernel with
CONFIG_PREEMPT=y, or maybe a different kernel version. Having some
experience with Linux kernel tracing, maybe it would be possible to find
out what blocks execution of the drbd or corosync processes and makes
them start failing, along the lines of the sketch below.
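Roughly what I have in mind, using the scheduler tracepoints via ftrace
(a sketch only; exact file names may differ between kernel versions):

    # watch when the drbd/corosync threads get scheduled out and back in
    mount -t debugfs none /sys/kernel/debug 2>/dev/null
    cd /sys/kernel/debug/tracing
    echo 1 > events/sched/sched_switch/enable
    echo 1 > events/sched/sched_wakeup/enable
    echo 1 > tracing_on
    # ... reproduce the heavy IO in the guests ...
    echo 0 > tracing_on
    grep -E 'corosync|drbd' trace > /tmp/sched-trace.txt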
Best Regards,
Martin
On Sun, Aug 28, 2011 at 3:59 PM, Pascal Charest <pascal.charest@labsphoenix.com> wrote:
> Hi,
> It always `worked` - it doesn't crash. Only the communication seems to get
> interrupted for a few seconds while backups are being taken. The backups are
> valid and the setup can survive a few seconds where redundancy is not
> available.
> I should have asked that question when I built the setup 4 years ago, but...
> yeah... and now I'm trying to fix everything up for that client.
> The broken communication seems to happen only when I'm mounting the backup
> snapshot and creating a RAR archive from it. It might be a problem on the
> AoE side of things, combined with an LVM snapshot.
>
> P.
>
> On Sun, Aug 28, 2011 at 9:18 AM, Pascal BERTON <pascal.berton3@free.fr> wrote:
>>
>> Pascal,
>>
>> One thing is unclear: did it use to work in the past (and if so, what has
>> changed lately that could explain this behavior), or is it a new feature
>> you've just added to your customer's config?
>>
>> Furthermore, I suspect you have scripted this whole process, haven't you?
>> If so, have you identified which step induces this communication
>> disruption? Have you tried to execute the sequence manually, and if so, at
>> what step does it happen?
>>
>> Best regards,
>>
>> Pascal.
>>
>> From: drbd-user-bounces@lists.linbit.com
>> [mailto:drbd-user-bounces@lists.linbit.com] On behalf of Pascal Charest
>> Sent: Saturday, August 27, 2011 22:52
>> To: drbd-user@lists.linbit.com
>> Subject: [DRBD-user] Frequent disconnect when doing backup.
>>
>> Hi,
>>
>> I have a small issue with one of my DRBD setups. When my backup is running
>> (see below for setup and backup details), I'm getting these errors:
>>
>> Aug 27 10:24:18 pig-two -- MARK --
>> Aug 27 10:27:26 pig-two kernel: drbd0: peer( Secondary -> Unknown ) conn( Connected -> NetworkFailure ) pdsk( UpToDate -> DUnknown )
>> Aug 27 10:27:26 pig-two kernel: drbd0: asender terminated
>> Aug 27 10:27:26 pig-two kernel: drbd0: Terminating asender thread
>> Aug 27 10:27:26 pig-two kernel: drbd0: sock was reset by peer
>> Aug 27 10:27:26 pig-two kernel: drbd0: _drbd_send_page: size=4096 len=3064 sent=-32
>> Aug 27 10:27:26 pig-two kernel: drbd0: Creating new current UUID
>> Aug 27 10:27:26 pig-two kernel: drbd0: Writing meta data super block now.
>> Aug 27 10:27:26 pig-two kernel: drbd0: tl_clear()
>> Aug 27 10:27:26 pig-two kernel: drbd0: Connection closed
>> Aug 27 10:27:26 pig-two kernel: drbd0: conn( NetworkFailure -> Unconnected )
>> Aug 27 10:27:26 pig-two kernel: drbd0: receiver terminated
>> Aug 27 10:27:26 pig-two kernel: drbd0: receiver (re)started
>> Aug 27 10:27:26 pig-two kernel: drbd0: conn( Unconnected -> WFConnection )
>> Aug 27 10:27:27 pig-two kernel: drbd0: Handshake successful: Agreed network protocol version 88
>> Aug 27 10:27:27 pig-two kernel: drbd0: Peer authenticated using 20 bytes of 'sha1' HMAC
>> Aug 27 10:27:27 pig-two kernel: drbd0: conn( WFConnection -> WFReportParams )
>> Aug 27 10:27:27 pig-two kernel: drbd0: Starting asender thread (from drbd0_receiver [3066])
>> Aug 27 10:27:27 pig-two kernel: drbd0: data-integrity-alg: md5
>> Aug 27 10:27:27 pig-two kernel: drbd0: peer( Unknown -> Secondary ) conn( WFReportParams -> WFBitMapS ) pdsk( DUnknown -> UpToDate )
>> Aug 27 10:27:27 pig-two kernel: drbd0: Writing meta data super block now.
>> Aug 27 10:27:27 pig-two kernel: drbd0: conn( WFBitMapS -> SyncSource ) pdsk( UpToDate -> Inconsistent )
>> Aug 27 10:27:27 pig-two kernel: drbd0: Began resync as SyncSource (will sync 2160 KB [540 bits set]).
>> Aug 27 10:27:27 pig-two kernel: drbd0: Writing meta data super block now.
>> Aug 27 10:27:27 pig-two kernel: drbd0: Resync done (total 1 sec; paused 0 sec; 2160 K/sec)
>> Aug 27 10:27:27 pig-two kernel: drbd0: conn( SyncSource -> Connected ) pdsk( Inconsistent -> UpToDate )
>> Aug 27 10:27:27 pig-two kernel: drbd0: Writing meta data super block now.
>> Aug 27 10:44:19 pig-two -- MARK --
>>
>> and
>>
>> Aug 27 11:04:19 pig-two -- MARK --
>> Aug 27 11:20:36 pig-two kernel: drbd0: _drbd_send_page: size=4096 len=4096 sent=-104
>> Aug 27 11:20:37 pig-two kernel: drbd0: peer( Secondary -> Unknown ) conn( Connected -> NetworkFailure ) pdsk( UpToDate -> DUnknown )
>> Aug 27 11:20:37 pig-two kernel: drbd0: Creating new current UUID
>> Aug 27 11:20:37 pig-two kernel: drbd0: Writing meta data super block now.
>> Aug 27 11:20:37 pig-two kernel: drbd0: asender terminated
>> Aug 27 11:20:37 pig-two kernel: drbd0: Terminating asender thread
>> Aug 27 11:20:37 pig-two kernel: drbd0: sock was shut down by peer
>> Aug 27 11:20:37 pig-two kernel: drbd0: tl_clear()
>> Aug 27 11:20:37 pig-two kernel: drbd0: Connection closed
>> Aug 27 11:20:37 pig-two kernel: drbd0: conn( NetworkFailure -> Unconnected )
>> Aug 27 11:20:37 pig-two kernel: drbd0: receiver terminated
>> Aug 27 11:20:37 pig-two kernel: drbd0: receiver (re)started
>> Aug 27 11:20:37 pig-two kernel: drbd0: conn( Unconnected -> WFConnection )
>> Aug 27 11:20:37 pig-two kernel: drbd0: Handshake successful: Agreed network protocol version 88
>> Aug 27 11:20:37 pig-two kernel: drbd0: Peer authenticated using 20 bytes of 'sha1' HMAC
>> Aug 27 11:20:37 pig-two kernel: drbd0: conn( WFConnection -> WFReportParams )
>> Aug 27 11:20:37 pig-two kernel: drbd0: Starting asender thread (from drbd0_receiver [3066])
>> Aug 27 11:20:37 pig-two kernel: drbd0: data-integrity-alg: md5
>> Aug 27 11:20:37 pig-two kernel: drbd0: peer( Unknown -> Secondary ) conn( WFReportParams -> WFBitMapS ) pdsk( DUnknown -> UpToDate )
>> Aug 27 11:20:37 pig-two kernel: drbd0: Writing meta data super block now.
>> Aug 27 11:20:37 pig-two kernel: drbd0: conn( WFBitMapS -> SyncSource ) pdsk( UpToDate -> Inconsistent )
>> Aug 27 11:20:37 pig-two kernel: drbd0: Began resync as SyncSource (will sync 5788 KB [1447 bits set]).
>> Aug 27 11:20:37 pig-two kernel: drbd0: Writing meta data super block now.
>> Aug 27 11:20:37 pig-two kernel: drbd0: Resync done (total 1 sec; paused 0 sec; 5788 K/sec)
>> Aug 27 11:20:37 pig-two kernel: drbd0: conn( SyncSource -> Connected ) pdsk( Inconsistent -> UpToDate )
>> Aug 27 11:20:37 pig-two kernel: drbd0: Writing meta data super block now.
>> Aug 27 11:44:19 pig-two -- MARK --
>>
>> Analysis: it looks like the network is failing, then everything - in under
>> a second - reconnects, resyncs and works again. There is no impact on
>> 'production'. Does anyone have an idea why? Is it an error in my
>> setup/design (see below)?
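>>
>> For reference, these are the DRBD net timeouts I understand govern when a
>> connection is declared dead (an illustrative drbd.conf excerpt, not my
>> actual config - check the defaults for your DRBD version):
>>
>>   net {
>>     # declare the peer dead if a request is not acknowledged within
>>     # this many tenths of a second (60 = 6 s)
>>     timeout      60;
>>     # send a keep-alive ping when the link has been idle this many seconds
>>     ping-int     10;
>>     # wait this many tenths of a second for the answer to a ping
>>     ping-timeout  5;
>>     # number of times a write may time out before the peer is given up on
>>     ko-count      4;
>>   }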
>>
>> Some background on the setup:
>>
>> It's an old version. Very old, in fact - a roadmap for the upgrade has been
>> drafted and submitted to the client - I'm just wondering about the specific
>> issue here... I want to be sure it's not an infrastructure design problem.
>>
>> pig-two:~# cat /proc/drbd
>> version: 8.2.6 (api:88/proto:86-88)
>> GIT-hash: 3e69822d3bb4920a8c1bfdf7d647169eba7d2eb4 build by root@pig-two, 2008-08-19 15:02:28
>>  0: cs:Connected st:Primary/Secondary ds:UpToDate/UpToDate C r---
>>     ns:650469968 nr:0 dw:648856776 dr:16725553 al:5463958 bm:22571 lo:0 pe:0 ua:0 ap:0 oos:0
>>
>> We are talking about the following stack (a rough command-level sketch
>> follows the list):
>>
>> - 4x 15k SAS drives in a hardware RAID-5 array (Dell PERC 5)... presented
>>   to the OS as /dev/sda.
>> - /dev/sda is the backing device for DRBD... presented to the OS as
>>   /dev/drbd0.
>> - /dev/drbd0 is the lone "physical volume" in a volume group (called SAN)
>>   from which Logical Volumes are created. Those are NOT locally mounted.
>> - those logical volumes are exported with vblade (AoE protocol, layer 2) to
>>   another physical system (a Xen dom0), where they are used as the backend
>>   device (/dev/etherd/e0.1) for the root volume of a virtual system.
>>
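>> Reduced to commands, that stack is built roughly like this (the LV name,
>> its size and the AoE shelf/slot are illustrative, not the real values):
>>
>>   # LVM sits on top of the replicated device, not on /dev/sda directly
>>   pvcreate /dev/drbd0
>>   vgcreate SAN /dev/drbd0
>>   lvcreate -L 20G -n vm01-root SAN
>>   # export the LV over AoE on eth0 as shelf 0, slot 1;
>>   # it shows up on the Xen dom0 as /dev/etherd/e0.1
>>   vbladed 0 1 eth0 /dev/SAN/vm01-root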
>>
>> Everything works fine, but when I do a backup I follow this process
>> (roughly scripted as sketched after the list):
>>
>> - mount a CIFS share exported over the network
>> - take an LV snapshot, mount it, and copy everything to the CIFS share
>> - unmount the snapshot, delete it... repeat for all LVs
>> - unmount the network share
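>>
>> A minimal sketch of that script (the share, mount points, VG and LV names
>> are all illustrative; the real script also does error handling and logging):
>>
>>   #!/bin/sh
>>   mkdir -p /mnt/backup /mnt/snap
>>   # mount the backup target exported over CIFS
>>   mount -t cifs //backupsrv/backups /mnt/backup -o credentials=/root/.backup-creds
>>   for lv in vm01-root vm02-root; do
>>       # snapshot the LV so a frozen image is copied while the VM keeps running
>>       lvcreate -s -L 5G -n ${lv}-snap /dev/SAN/${lv}
>>       mount -o ro /dev/SAN/${lv}-snap /mnt/snap
>>       # archive the snapshot contents onto the CIFS share
>>       rar a -r /mnt/backup/${lv}-$(date +%F).rar /mnt/snap/
>>       umount /mnt/snap
>>       lvremove -f /dev/SAN/${lv}-snap
>>   done
>>   umount /mnt/backup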
>>
>> The backups are consistent and valid (tested)... What have I missed? Should
>> I move away from AoE to Linux-based iSCSI?
>>
>> P.
>>
>> --
>> Pascal Charest - Cutting-edge technology consultant
>> https://www.labsphoenix.com
>
> --
> Pascal Charest - Cutting-edge technology consultant
> Les Laboratoires Phoenix
>
> _______________________________________________
> drbd-user mailing list
> drbd-user@lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user
>
--
Pascal Charest - Cutting-edge technology consultant
Les Laboratoires Phoenix (https://labsphoenix.com)