Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Digimer, thanks for the help. I went back and partitioned the drive
for lvm, and then it didn't show up at all. I removed the partitions
and suddenly everything went according to plan. I'm guessing this
just got everything back to a known state; not sure. The changes on
the first machine always did show up on the second machine, so who
knows...
Thanks Again.
Ken
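
(For anyone landing here with the same symptom, a minimal sketch of putting a
DRBD backing device back into a known state before handing it to clustered
LVM. The device and VG names are taken from later in this thread; wipefs may
not exist on older distributions, in which case zeroing the label area with dd
does the same job. Only do this on a device whose contents you are willing to
lose.)

    # on the current Primary node, with the DRBD resource connected and UpToDate
    wipefs -a /dev/drbd0                          # clear stale partition/LVM signatures, if wipefs exists
    dd if=/dev/zero of=/dev/drbd0 bs=1M count=1   # otherwise wipe the label area (destroys any data!)
    pvcreate /dev/drbd0                           # use the whole device, no partition table
    vgcreate -c y thing0 /dev/drbd0               # -c y marks the VG as clustered for clvmd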
Quoting listslut at outofoptions.net:
> Quoting Digimer <linux at alteeve.com>:
>
>> On 07/19/2011 11:46 AM, listslut at outofoptions.net wrote:
>>> Quoting Digimer <linux at alteeve.com>:
>>>
>>>> On 07/19/2011 11:13 AM, listslut at outofoptions.net wrote:
>>>>> Quoting Digimer <linux at alteeve.com>:
>>>>>
>>>>>> On 07/19/2011 10:55 AM, listslut at outofoptions.net wrote:
>>>>>>> I can create a Volume Group and the data gets replicated from one
>>>>>>> machine to the other. This command fails though:
>>>>>>>
>>>>>>> [root at thing1 lvm]# /usr/sbin/lvcreate -n vmdata -l 69972 thing0
>>>>>>> Logging initialised at Tue Jul 19 10:53:21 2011
>>>>>>> Set umask to 0077
>>>>>>> Setting logging type to disk
>>>>>>> Finding volume group "thing0"
>>>>>>> Archiving volume group "thing0" metadata (seqno 15).
>>>>>>> Creating logical volume vmdata
>>>>>>> Creating volume group backup "/etc/lvm/backup/thing0" (seqno 16).
>>>>>>> Error locking on node thing1.eyemg.com: device-mapper: create ioctl
>>>>>>> failed: Device or resource busy
>>>>>>> Error locking on node thing2.eyemg.com: device-mapper: create ioctl
>>>>>>> failed: Device or resource busy
>>>>>>> Failed to activate new LV.
>>>>>>> Creating volume group backup "/etc/lvm/backup/thing0" (seqno 17).
>>>>>>> Wiping internal VG cache
>>>>>>>
>>>>>>> I don't know if this is a drbd issue or an lvm one. I can't seem to
>>>>>>> find anything on it.
>>>>>>>
>>>>>>> Thanks
>>>>>>> Ken Lowther
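
(A side note on that error: "device-mapper: create ioctl failed: Device or
resource busy" often means a device-mapper node with the target name already
exists on one of the nodes. A quick, hedged way to check, using the VG/LV
names from this thread; run it on both nodes:)

    dmsetup ls                    # list existing device-mapper devices
    dmsetup info thing0-vmdata    # inspect a leftover mapping, if one is listed
    dmsetup remove thing0-vmdata  # remove a stale mapping, but only if nothing is using it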
>>>>>>
>>>>>> Obvious question first: is the DRBD in Primary?
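
(Checking and, if needed, changing the DRBD role is quick; the resource name
drbd0 comes from the config pasted further down:)

    cat /proc/drbd         # want ro:Primary/Primary on both nodes for dual-primary clvmd
    drbdadm role drbd0     # show this node's role for the resource
    drbdadm primary drbd0  # promote this node if it is still Secondary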
>>>>>
>>>>> It was primary/primary, but this gave me an idea. I changed the 'other'
>>>>> node to secondary, and that did change the error message: the second
>>>>> node now gives a 'uuid not found' error instead of the lock error.
>>>>>
>>>>> [root at thing1 lvm]# /usr/sbin/lvcreate -n vmdata -l 69972 thing0
>>>>> Logging initialised at Tue Jul 19 11:09:02 2011
>>>>> Set umask to 0077
>>>>> Setting logging type to disk
>>>>> Finding volume group "thing0"
>>>>> Archiving volume group "thing0" metadata (seqno 19).
>>>>> Creating logical volume vmdata
>>>>> Creating volume group backup "/etc/lvm/backup/thing0" (seqno 20).
>>>>> Error locking on node thing1.eyemg.com: device-mapper: create ioctl
>>>>> failed: Device or resource busy
>>>>> Error locking on node thing2.eyemg.com: Volume group for uuid not
>>>>> found: hqcys8c9fDoBtX4UGLV0lmAbTZ7FMW8516YBHLfh64TzKNxRBqDH1wYg7IQHMRul
>>>>> Failed to activate new LV.
>>>>> Creating volume group backup "/etc/lvm/backup/thing0" (seqno 21).
>>>>> Wiping internal VG cache
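
(The "Volume group for uuid not found" on the demoted node is what you would
expect once that node drops to Secondary: its /dev/drbd0 is no longer
readable, so clvmd there cannot see the VG at all. For clvmd over dual-primary
DRBD, both nodes have to stay Primary, which in DRBD 8.3 requires
allow-two-primaries. It is not visible in the resource snippet quoted below,
so the sketch here is only an assumption about where it would go; it may
already be set in a common section:)

    resource drbd0 {
        net {
            allow-two-primaries;
        }
        # ...existing per-host "on" sections...
    }
    # after editing the config, on both nodes:
    drbdadm adjust drbd0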
>>>>>
>>>>>
>>>>>>
>>>>>> Please share more details about your config so that folks can better
>>>>>> help you, rather than making wild stabs in the dark. :)
>>>>>>
>>>>> Using locking type 3 in lvm with clvmd running.
>>>>>
>>>>> resource drbd0 {
>>>>>     on thing1.eyemg.com {
>>>>>         disk      /dev/cciss/c0d1;
>>>>>         device    /dev/drbd0;
>>>>>         meta-disk internal;
>>>>>         address   192.168.244.1:7788;
>>>>>     }
>>>>>     on thing2.eyemg.com {
>>>>>         disk      /dev/cciss/c0d1;
>>>>>         device    /dev/drbd0;
>>>>>         meta-disk internal;
>>>>>         address   192.168.244.2:7788;
>>>>>     }
>>>>> }
>>>>> Ken
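
(Since locking type 3 is mentioned above, this is roughly what the matching
lvm.conf settings look like for clvmd; the values shown are the usual ones,
not taken from this thread, and clvmd has to be running on every node:)

    # /etc/lvm/lvm.conf, on both nodes
    global {
        locking_type = 3               # built-in clustered locking via clvmd
        fallback_to_local_locking = 0  # fail rather than silently fall back to local locking
    }

    service clvmd status    # confirm the daemon is up on each node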
>>>>
>>>> Can I assume then that the cluster itself is running and quorate?
>>>
>>> [root at thing1 ~]# cman_tool nodes
>>> Node Sts Inc Joined Name
>>> 1 M 392 2011-07-18 15:35:22 thing1.eyemg.com
>>> 2 M 592 2011-07-18 15:37:02 thing2.eyemg.com
>>> [root at thing1 ~]#
>>>
>>>> Also,
>>>> is anything else (trying to) use the backing devices?
>>>
>>> Not that I know of. I'm setting up a new cluster. What types of things
>>> might I look for that I don't know about?
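
(A few generic things worth checking for anything holding the backing device
or the DRBD device open; these are general suggestions rather than anything
specific to this setup:)

    grep -e drbd0 -e c0d1 /proc/mounts     # is anything mounted from either device?
    cat /proc/mdstat                       # is md raid assembled on top of them?
    fuser -v /dev/drbd0 /dev/cciss/c0d1    # which processes have them open?
    dmsetup table                          # any device-mapper targets already layered on them?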
>>>
>>>> Does this error
>>>> occur on both nodes?
>>>
>>> Yes.
>>>
>>> Thanks
>>> Ken
>>
>> Can you paste the following:
>>
>> /etc/cluster/cluster.conf (only sanitize the PWs please)
>
> <?xml version="1.0"?>
> <cluster config_version="8" name="thing0">
>   <fence_daemon post_fail_delay="0" post_join_delay="3"/>
>   <clusternodes>
>     <clusternode name="thing1.eyemg.com" nodeid="1" votes="1">
>       <fence/>
>     </clusternode>
>     <clusternode name="thing2.eyemg.com" nodeid="2" votes="1">
>       <fence/>
>     </clusternode>
>   </clusternodes>
>   <cman expected_votes="1" two_node="1"/>
>   <fencedevices>
>     <fencedevice agent="fence_manual" name="thing1-ilo.eyemg.com"/>
>     <fencedevice agent="fence_manual" name="thing2-ilo.eyemg.com"/>
>   </fencedevices>
>   <rm>
>     <failoverdomains/>
>     <resources>
>       <script file="/etc/init.d/httpd" name="Apache"/>
>       <ip address="10.0.8.177" monitor_link="1"/>
>       <script file="/etc/init.d/mysqld" name="mysqld"/>
>       <ip address="10.0.8.178" monitor_link="1"/>
>     </resources>
>     <service autostart="0" name="httpd" recovery="disable">
>       <ip ref="10.0.8.177"/>
>       <script ref="Apache"/>
>     </service>
>     <service autostart="0" name="mysqld" recovery="disable">
>       <ip ref="10.0.8.178"/>
>       <script ref="mysqld"/>
>     </service>
>   </rm>
> </cluster>
>
>
>> cman_tool status
>
> [root at thing1 software]# cman_tool status
> Version: 6.2.0
> Config Version: 8
> Cluster Name: thing0
> Cluster Id: 6910
> Cluster Member: Yes
> Cluster Generation: 720
> Membership state: Cluster-Member
> Nodes: 3
> Expected votes: 1
> Total votes: 2
> Quorum: 1
> Active subsystems: 9
> Flags: 2node Dirty
> Ports Bound: 0 11 177
> Node name: thing1.eyemg.com
> Node ID: 1
> Multicast addresses: 239.192.26.25
> Node addresses: 10.0.8.201
>
>
>> cat /proc/drbd
> [root at thing1 software]# cat /proc/drbd
> version: 8.3.11 (api:88/proto:86-96)
> GIT-hash: 0de839cee13a4160eed6037c4bddd066645e23c5 build by
> root at thing1.eyemg.com, 2011-07-15 09:53:15
> 0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
> ns:0 nr:0 dw:0 dr:344 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
>
>
>>
>> Also, is there anything of note in the log files, starting with the
>> cluster coming online?
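
(One hedged way to pull the relevant entries out of the logs on each node;
the log path is the usual RHEL/CentOS default:)

    grep -iE 'clvmd|dlm|fenced|drbd|lvm' /var/log/messages | less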
>
> I do notice that I can't seem to use lvm tools to do much. I
> rebooted to a clean state and started over. Since then:
>
> [root at thing2 backup]# pvdisplay
> Skipping clustered volume group thing0
> Skipping volume group thing0
> [root at thing2 backup]# lvm pvs
> Skipping clustered volume group thing0
> Skipping volume group thing0
> [root at thing2 backup]# vgdisplay
> Skipping clustered volume group thing0
> [root at thing2 backup]# lvm vgs
> Skipping clustered volume group thing0
> [root at thing2 backup]# lvdisplay
> Skipping clustered volume group thing0
> Skipping volume group thing0
> [root at thing2 backup]# lvm lvs
> Skipping clustered volume group thing0
> Skipping volume group thing0
>
> I appreciate your help. I can't even wipe and start over using the
> tools, because they skip the clustered volume...
>
> Ken
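
("Skipping clustered volume group" means the tools see the clustered flag on
the VG but cannot talk to clvmd. A hedged sketch of the usual ways out, with
the VG name from this thread; the locking override bypasses cluster locking
entirely, so use it only for cleanup while nothing else touches the storage:)

    service clvmd status                                        # is clvmd actually running here?
    vgchange -c n thing0 --config 'global {locking_type = 1}'   # drop the clustered flag locally
    vgremove thing0 --config 'global {locking_type = 1}'        # or remove the VG and start over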
>