OK. Thanks for the debug instructions. Will wait for the next event and report back.

Regards

----- Original Message -----
From: "Lars Ellenberg" <lars.ellenberg at linbit.com>
To: drbd-user at lists.linbit.com
Sent: Thursday, 7 August, 2008 11:05:51 AM GMT +00:00 GMT Britain, Ireland, Portugal
Subject: Re: [DRBD-user] DRBD hangs Xen VMs and won't disconnect without pulling plug

On Thu, Aug 07, 2008 at 10:21:01AM +0100, simon at onepointltd.com wrote:
> Would appreciate some help debugging this problem, and hopefully solving it.
>
> I am running paravirtualized 64-bit CentOS 5.x VMs on a 64-bit CentOS 5.x Dom0 on
> DRBD partitions shared between two Dell 2590s. The DRBD connections run over
> two dedicated gigabit network ports using crossover cables. The DRBD
> partitions are logical volumes, used both as virtual disks for the VMs themselves
> and as mounted pre-formatted ext3 partitions for their data partitions.
>
> Occasionally, the VMs lock up, usually (I think) unable to access their
> data partition. In this fault condition "drbdadm disconnect <resourcename>"
> times out on both nodes. I can only resolve the situation by breaking the
> network connection with an "ifdown ethn" command. The VM is then able to carry
> on working, and I can reconnect DRBD and carry on.
>
> Under the fault condition I have had one VM where I could still log in via SSH
> but could not access the data partition, and another case this morning where SSH
> was not working. So I am not yet 100% sure whether it is solely the data
> partitions of the VMs that are the problem.
>
> I can't see anything strange in /var/log/messages other than the expected
> time-outs that occur when I disconnect the network.
>
> The running kernel on both Dom0 machines is 2.6.18-92.1.6.el5xen.
> The DRBD rpms are:
> kmod-drbd82-xen-8.2.6-18.104.22.168_92.1.6.el5
> drbd82-8.2.6-1.el5.centos
>
> Here is a sample VM and its drbd.conf entries. Although I am allowing dual
> primary, this mode is not normally used. This is for live migrating VMs as a
> (currently) manual operation from one machine to the other.

get the cluster into that situation again.
log in on both Dom0 where DRBD is running.
try to figure out what is going on, using
  top, netstat, vmstat, free, dmesg
  watch -n1 cat /proc/drbd
  cat /proc/meminfo
  ps -eo pid,state,wchan:30,cmd | grep -e drbd -e D
...

--
: Lars Ellenberg                http://www.linbit.com :
: DRBD/HA support and consulting  sales at linbit.com :
: LINBIT Information Technologies GmbH  Tel +43-1-8178292-0 :
: Vivenotgasse 48, A-1120 Vienna/Europe  Fax +43-1-8178292-82 :
__
please don't Cc me, but send to list -- I'm subscribed

_______________________________________________
drbd-user mailing list
drbd-user at lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.linbit.com/pipermail/drbd-user/attachments/20080807/ed4537f3/attachment.htm>
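For what it's worth, Lars's ad-hoc checklist can be wrapped in a small collection script so the same snapshot is grabbed on both Dom0s each time the hang recurs. This is only a sketch, not something from the thread: the script and log-file names are my own, and the D-state filter uses awk rather than Lars's bare `grep -e D` to avoid matching unrelated lines.

```shell
#!/bin/sh
# Sketch: snapshot the diagnostics Lars lists into one timestamped log.
# Run on each Dom0 while the hang is in progress. Names are assumptions.
OUT="drbd-debug-$(uname -n)-$(date +%Y%m%d-%H%M%S).log"
{
    echo "== /proc/drbd =="
    cat /proc/drbd 2>/dev/null || echo "(drbd module not loaded)"
    echo "== /proc/meminfo =="
    cat /proc/meminfo
    echo "== free =="
    free
    echo "== vmstat (3 samples, 1s apart) =="
    vmstat 1 3
    echo "== drbd and uninterruptible (D-state) processes =="
    # column 2 of this ps format is the process state letter
    ps -eo pid,state,wchan:30,cmd | awk '!/awk/ && ($2 == "D" || /drbd/)'
    echo "== last 50 kernel log lines =="
    dmesg 2>/dev/null | tail -n 50
} > "$OUT" 2>&1
echo "wrote $OUT"
```

Running it via cron or a `watch` loop during a fault would leave a log per node to compare against the "expected time-outs" already seen in /var/log/messages.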