<HTML><HEAD>
<META content="text/html; charset=utf-8" http-equiv=Content-Type>
<META name=GENERATOR content="MSHTML 9.00.8112.16448"></HEAD>
<BODY style="MARGIN: 4px 4px 1px; FONT: 10pt Segoe UI">
<DIV>Hi,</DIV>
<DIV> </DIV>
<DIV>We are setting up a brand new cluster with dual-primary DRBD + Pacemaker + Xen. Here's the current configuration:</DIV>
<DIV> </DIV>
<DIV>- global_common.conf</DIV>
<DIV>global {<BR> dialog-refresh 1;<BR> minor-count 5;<BR>}<BR>common {<BR>}</DIV>
<DIV> </DIV>
<DIV>- drbd0.res</DIV>
<DIV>resource drbd0 {<BR> protocol C;<BR> disk {<BR> on-io-error detach;<BR> fencing resource-and-stonith;</DIV>
<DIV> </DIV>
<DIV> }<BR> syncer {<BR> rate 33M;<BR> al-extents 3389;<BR> }</DIV>
<DIV> </DIV>
<DIV> handlers {<BR> fence-peer "/usr/lib/drbd/stonith_admin-fence-peer.sh";<BR> }</DIV>
<DIV> </DIV>
<DIV><BR> net {<BR> allow-two-primaries yes; # Enable this *after* initial testing<BR> cram-hmac-alg sha1;<BR> shared-secret "a6a0680c40bca2439dbe48343ddddcf4";<BR> after-sb-0pri discard-zero-changes;<BR> after-sb-1pri discard-secondary;<BR> after-sb-2pri disconnect;<BR> }<BR> startup {<BR># become-primary-on both;<BR> }<BR> on xs02 {<BR> disk /dev/sdb;<BR> device /dev/drbd0;<BR> meta-disk internal;<BR> address 10.1.1.136:7780;<BR> }<BR> on xs01 {<BR> disk /dev/sdb;<BR> device /dev/drbd0;<BR> meta-disk internal;<BR> address 10.1.1.135:7780;<BR> }<BR>}<BR></DIV>
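<DIV> </DIV>
<DIV>For reference, the Pacemaker-integrated handler pair that the DRBD user's guide documents for "fencing resource-and-stonith" looks like this (a sketch only; the paths assume the stock scripts shipped with drbd-utils, and we have kept stonith_admin-fence-peer.sh above):</DIV>
<DIV> </DIV>
<DIV> handlers {<BR> fence-peer "/usr/lib/drbd/crm-fence-peer.sh";<BR> after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";<BR> }</DIV>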
<DIV> </DIV>
<DIV>- crm configuration</DIV>
<DIV>node xs01<BR>node xs02<BR>primitive dlm ocf:pacemaker:controld \<BR> operations $id="dlm-operations" \<BR> op monitor interval="10" timeout="20" start-delay="0"<BR>primitive drbd0 ocf:linbit:drbd \<BR> operations $id="drbd0-operations" \<BR> op monitor interval="20" role="Slave" timeout="20" \<BR> op monitor interval="10" role="Master" timeout="20" \<BR> params drbd_resource="drbd0"<BR>primitive o2cb ocf:ocfs2:o2cb \<BR> operations $id="o2cb-operations" \<BR> op monitor interval="10" timeout="20" \<BR> meta target-role="Started"<BR>primitive stonith-ipmi-xs01 stonith:external/ipmi \<BR> meta target-role="Started" is-managed="true" \<BR> operations $id="stonith-ipmi-xs01-operations" \<BR> op monitor interval="3600" timeout="20" \<BR> params hostname="xs01" ipaddr="125.1.254.135" userid="radmin" passwd="xxxxxx" interface="lan"<BR>primitive stonith-ipmi-xs02 stonith:external/ipmi \<BR> meta target-role="Started" is-managed="true" \<BR> operations $id="stonith-ipmi-xs02-operations" \<BR> op monitor interval="3600" timeout="20" \<BR> params hostname="xs02" ipaddr="125.1.254.136" userid="radmin" passwd="xxxxx" interface="lan"<BR>primitive vmdisk-pri ocf:heartbeat:Filesystem \<BR> operations $id="vmdisk-pri-operations" \<BR> op monitor interval="20" timeout="40" \<BR> params device="/dev/drbd/by-disk/sdb" directory="/vmdisk" fstype="ocfs2" options="rw,noatime"<BR>group init dlm o2cb \<BR> meta is-managed="true"<BR>ms ms_drbd0 drbd0 \<BR> meta master-max="2" clone-max="2" notify="true" target-role="Started"<BR>clone init-clone init \<BR> meta interleave="true" target-role="Started" is-managed="true"<BR>clone vmdisk-clone vmdisk-pri \<BR> meta target-role="Started"<BR>location fence-xs01 stonith-ipmi-xs01 -inf: xs01<BR>location fence-xs02 stonith-ipmi-xs02 -inf: xs02<BR>colocation colocacion inf: init-clone vmdisk-clone ms_drbd0:Master<BR>order ordenamiento inf: ms_drbd0:promote init-clone:start vmdisk-clone:start<BR>property $id="cib-bootstrap-options" \<BR> 
dc-version="1.1.7-77eeb099a504ceda05d648ed161ef8b1582c7daf" \<BR> cluster-infrastructure="openais" \<BR> expected-quorum-votes="2" \<BR> batch-limit="1" \<BR> no-quorum-policy="ignore" \<BR> last-lrm-refresh="1352468954" \<BR> default-resource-stickiness="1000"<BR>op_defaults $id="op_defaults-options" \<BR> record-pending="false"<BR></DIV>
<DIV> </DIV>
<DIV>The problem is that when we try to start the drbd0 master resource in Pacemaker, it fails and xs02 powers off. Soon after that, the resource is promoted to Master on xs01. When I then boot xs02 back up, the resource is promoted to Master there as well, so both nodes end up in the Master state.</DIV>
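<DIV> </DIV>
<DIV>For reference, this is how we inspect the state on each node while this happens (standard DRBD and Pacemaker command-line tools; the resource name matches the configuration above):</DIV>
<DIV> </DIV>
<DIV># DRBD connection state and roles<BR>cat /proc/drbd<BR>drbdadm role drbd0<BR><BR># One-shot cluster status, including the ms_drbd0 master/slave set<BR>crm_mon -1<BR><BR># Validate the live CIB<BR>crm_verify -L -V</DIV>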
<DIV> </DIV>
<DIV>We have already tuned the eth1 card, setting the MTU to 9000 as documented on the site.</DIV>
<DIV> </DIV>
<DIV>Is this the normal behaviour?</DIV>
<DIV><BR>Regards,</DIV>
<DIV>Daniel</DIV></BODY></HTML>