Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
I just tested the same thing with the libvirt connection and libvirtd shut down (that is the xen+ssh monitoring connection mentioned below; a rough sketch of it is appended after the quoted thread). The traces look exactly the same, only now there is one additional line (marked with <<<<<).

Call Trace:
 <IRQ>  [<ffffffff8023d16d>] skb_checksum+0x123/0x271
 [<ffffffff8040a3d9>] skb_checksum_help+0x71/0xd0
 [<ffffffff8831b33e>] :iptable_nat:ip_nat_fn+0x56/0x1c3
 [<ffffffff882f750d>] :ip_conntrack:ip_conntrack_in+0x374/0x46a
 [<ffffffff8831b6cf>] :iptable_nat:ip_nat_local_fn+0x32/0xb7
 [<ffffffff802351ae>] nf_iterate+0x41/0x7d
 [<ffffffff80428040>] dst_output+0x0/0xe
 [<ffffffff802588e4>] nf_hook_slow+0x58/0xbc
 [<ffffffff80428040>] dst_output+0x0/0xe
 [<ffffffff80230fd2>] dev_queue_xmit+0x2f2/0x313    <<<<<
 [<ffffffff80235662>] ip_queue_xmit+0x431/0x4a1
 [<ffffffff80222990>] tcp_transmit_skb+0x64a/0x682
 [<ffffffff804320f4>] tcp_retransmit_skb+0x53d/0x638
 [<ffffffff8043362a>] tcp_write_timer+0x0/0x699
 [<ffffffff80433aa2>] tcp_write_timer+0x478/0x699
 [<ffffffff80292b1e>] run_timer_softirq+0x13f/0x1c6
 [<ffffffff802127c7>] __do_softirq+0x62/0xde
 [<ffffffff80260da0>] call_softirq+0x1c/0x27c
 [<ffffffff8026dcd2>] do_softirq+0x31/0x98
 [<ffffffff8026db4d>] do_IRQ+0xec/0xf5
 [<ffffffff803a0a98>] evtchn_do_upcall+0x86/0xe0
 [<ffffffff802608d2>] do_hypervisor_callback+0x1e/0x2c
 <EOI>  [<ffffffff802063aa>] hypercall_page+0x3aa/0x1000
 [<ffffffff802063aa>] hypercall_page+0x3aa/0x1000
 [<ffffffff8026f139>] raw_safe_halt+0x84/0xa8
 [<ffffffff8026c683>] xen_idle+0x38/0x4a
 [<ffffffff8024aa45>] cpu_idle+0x97/0xba

> Hi Florian,
>
> I don't think we use anything special (at least you could see we have tried
> various versions of DRBD). What just came to my mind (as you mentioned
> libvirtd) is that we keep an open libvirt connection from a DomU into Dom0
> (to monitor all the other VMs) using the "xen+ssh" protocol. I think I could
> try shutting it down and testing again.
> You can see from the attached files that the libvirt and qemu settings are
> just the defaults.
>
> Thanks.
>
>> Maroš,
>>
>> just so we can see whether you are using any "unusual" DRBD config
>> options, can you post your drbd.conf please? Also, one of your Xen domU
>> config files (or libvirt domain config files, if using libvirtd) would
>> be helpful.
>>
>> Thanks.
>>
>> Cheers,
>> Florian
>>
>> On 12/22/2008 01:33 PM, Maros TIMKO wrote:
>> > Hi all!
>> >
>> > We are testing a Xen virtualisation platform using the CentOS
>> > distribution and DRBD 8.2.6. We are getting kernel panics and reboots
>> > of the primary node just seconds after we unplug the dedicated DRBD
>> > (crossover) connection. The failure occurs every time we pull the cable
>> > while the DRBD devices are primary and Xen VMs are running. I thought an
>> > upgrade/downgrade might solve it, but 8.0.13, 8.0.14, 8.2.7 and 8.3 all
>> > behave exactly the same way. So it seems the failure is not DRBD-related
>> > but rather lies in Xen / the xenified kernel.
>> > However, I would like to ask the audience whether anyone has had the same
>> > experience, or whether there are any hints on how to solve such an issue.
>> >
>> > Our setup uses the PV -> LVM -> DRBD -> Xen hierarchy.
>> > Do you think we could solve it by changing it to PV -> DRBD -> LVM -> Xen?
>> >
>> > Dell PowerEdge 1950 with 2 Broadcom bnx2 NICs
>> > CentOS 5.2: Linux 2.6.18-92.1.18.el5xen #1 SMP Wed Nov 12 09:48:10 EST 2008 x86_64 x86_64 x86_64 GNU/Linux
>> >
>> > The console output using DRBD 8.3:
>> > [...]
>>
>>
>> --
>> : Florian G. Haas
>> : LINBIT Information Technologies GmbH
>> : Vivenotgasse 48, A-1120 Vienna, Austria
>>
>> When replying, there is no need to CC my personal address.
>> I monitor the list on a daily basis. Thank you.
>>
>> LINBIT® and DRBD® are registered trademarks of LINBIT.
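P.S. For reference, the monitoring connection mentioned above is opened roughly like this. This is only a minimal sketch: the hostname, the domain name and the use of virsh (rather than the libvirt API directly) are placeholders and assumptions, not our exact setup.

  # From inside the monitoring DomU, talk to libvirtd on Dom0 over ssh
  # using the xen+ssh transport (dom0.example.com is a placeholder hostname):
  virsh -c xen+ssh://root@dom0.example.com/ list --all

  # In practice the state of the other VMs is polled periodically, e.g.:
  while true; do
      # "some-domU" is a placeholder domain name
      virsh -c xen+ssh://root@dom0.example.com/ dominfo some-domU
      sleep 30
  done

The xen+ssh transport tunnels each libvirt call through ssh into Dom0, which is why I wanted to rule it out by shutting it down before re-running the cable-pull test.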