<div dir="ltr"><span style="font-family:arial,sans-serif;font-size:13px">&gt; You should *not* start DRBD from the init script.</span><br style="font-family:arial,sans-serif;font-size:13px"><span style="font-family:arial,sans-serif;font-size:13px">&gt;  # chkconfig drbd off</span><br style="font-family:arial,sans-serif;font-size:13px">
<br><div>*** OK remove start on the boot<div><br></div><div><br><span style="font-family:arial,sans-serif;font-size:13px">&gt; You should *NOT* configure &quot;no-disk-drain&quot;.</span><br style="font-family:arial,sans-serif;font-size:13px">
<span style="font-family:arial,sans-serif;font-size:13px">&gt; It is likely to corrupt your data.</span><br style="font-family:arial,sans-serif;font-size:13px">** OK removed the disk drain from postgresql.res</div><div><br>
# cat postgresql.res
resource postgresql {
  startup {
    wfc-timeout 15;
    degr-wfc-timeout 60;
  }

  syncer {
    rate 150M;
    verify-alg md5;
  }

  on ha-master {
     device /dev/drbd0;
     disk /dev/sdb1;
     address 172.70.65.210:7788;
     meta-disk internal;
  }

  on ha-slave {
     device /dev/drbd0;
     disk /dev/sdb1;
     address 172.70.65.220:7788;
     meta-disk internal;
  }

}
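
Note: I dropped the whole disk { } section here. I suppose only the no-disk-drain
line actually had to go, so something like this could probably stay:

 disk {
   on-io-error detach;
   fencing resource-only;
 }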
<span style="font-family:arial,sans-serif;font-size:13px">&gt; One each for Master and Slave role, with different intervals.</span><br style="font-family:arial,sans-serif;font-size:13px"><br></div><div>** how i can do that ??</div>

from: http://www.drbd.org/users-guide-9.0/s-pacemaker-crm-drbd-backed-service.html
crm(live)configure# primitive drbd_mysql ocf:linbit:drbd \
                    params drbd_resource="mysql" \
                    op monitor interval="29s" role="Master" \
                    op monitor interval="31s" role="Slave"

> You probably need to "crm resource cleanup ..." a bit.
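
I suppose that means something like this, with my resource names:

# crm resource cleanup ms_drbd_postgresql
# crm resource cleanup postgresql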

** Redid the crm configuration:

crm(live)# configure
crm(live)configure# show
node ha-master
node ha-slave
primitive drbd_postgresql ocf:heartbeat:drbd \
        params drbd_resource="postgresql" \
        op monitor interval="29s" role="Master" \
        op monitor interval="31s" role="Slave"
primitive fs_postgresql ocf:heartbeat:Filesystem \
        params device="/dev/drbd0" directory="/mnt" fstype="ext4"
primitive postgresqld lsb:postgresql
primitive vip_cluster ocf:heartbeat:IPaddr2 \
        params ip="172.70.65.200" nic="eth0:1"
group postgresql fs_postgresql vip_cluster postgresqld
ms ms_drbd_postgresql drbd_postgresql \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
colocation postgresql_on_drbd inf: postgresql ms_drbd_postgresql:Master
order postgresql_after_drbd inf: ms_drbd_postgresql:promote postgresql:start
property $id="cib-bootstrap-options" \
        dc-version="1.1.6-9971ebba4494012a93c03b40a2c58ec0eb60f50c" \
        cluster-infrastructure="openais" \
        expected-quorum-votes="2" \
        no-quorum-policy="ignore" \
        stonith-enabled="false"
rsc_defaults $id="rsc-options" \
        resource-stickiness="100"


> You may need to manually remove fencing constraints (if DRBD finished
<span style="font-family:arial,sans-serif;font-size:13px">&gt; the resync when no pacemaker was running yet, it would not have been</span><br style="font-family:arial,sans-serif;font-size:13px"><span style="font-family:arial,sans-serif;font-size:13px">&gt; able to remove it from its handler).</span></div>
<div><span style="font-family:arial,sans-serif;font-size:13px"><br></span></div><div>** how i do that ???</div><div>** only in the ha-master ??</div><div># drbdadmin create-md postgresql</div><div># drbdadmin up postgresql </div>
# drbdadm -- --overwrite-data-of-peer primary postgresql

> You may need to *read the logs*.
> The answer will be in there.

> You have:
<div class="im" style="font-size:13px;font-family:arial,sans-serif">&gt; Failed actions:<br>&gt;     drbd_postgresql:0_start_0 (node=ha-slave, call=14, rc=1,<br>&gt; status=complete): unknown error<br><br></div><span style="font-size:13px;font-family:arial,sans-serif">&gt; So start looking for that, and see what it complains about.</span></div>
<div><br></div><div><font face="arial, sans-serif"># cat /var/log/syslog | grep drbd_postgresql </font><br style="font-family:arial,sans-serif;font-size:13px">*** Syslog</div><div><br></div><div><div>Oct 14 11:10:08 ha-master pengine: [786]: debug: unpack_rsc_op: drbd_postgresql:1_last_failure_0 on ha-slave returned 1 (unknown error) instead of the expected value: 0 (ok)</div>
Oct 14 11:10:08 ha-master pengine: [786]: WARN: unpack_rsc_op: Processing failed op drbd_postgresql:1_last_failure_0 on ha-slave: unknown error (1)
Oct 14 11:10:08 ha-master pengine: [786]: debug: unpack_rsc_op: drbd_postgresql:0_last_failure_0 on ha-master returned 1 (unknown error) instead of the expected value: 0 (ok)
Oct 14 11:10:08 ha-master pengine: [786]: WARN: unpack_rsc_op: Processing failed op drbd_postgresql:0_last_failure_0 on ha-master: unknown error (1)
Oct 14 11:10:08 ha-master pengine: [786]: info: clone_print:  Master/Slave Set: ms_drbd_postgresql [drbd_postgresql]
Oct 14 11:10:08 ha-master pengine: [786]: info: short_print:      Stopped: [ drbd_postgresql:0 drbd_postgresql:1 ]
Oct 14 11:10:08 ha-master pengine: [786]: info: get_failcount: ms_drbd_postgresql has failed INFINITY times on ha-slave
Oct 14 11:10:08 ha-master pengine: [786]: WARN: common_apply_stickiness: Forcing ms_drbd_postgresql away from ha-slave after 1000000 failures (max=1000000)
Oct 14 11:10:08 ha-master pengine: [786]: info: get_failcount: ms_drbd_postgresql has failed INFINITY times on ha-slave
Oct 14 11:10:08 ha-master pengine: [786]: WARN: common_apply_stickiness: Forcing ms_drbd_postgresql away from ha-slave after 1000000 failures (max=1000000)
Oct 14 11:10:08 ha-master pengine: [786]: info: get_failcount: ms_drbd_postgresql has failed INFINITY times on ha-master
Oct 14 11:10:08 ha-master pengine: [786]: WARN: common_apply_stickiness: Forcing ms_drbd_postgresql away from ha-master after 1000000 failures (max=1000000)
Oct 14 11:10:08 ha-master pengine: [786]: info: get_failcount: ms_drbd_postgresql has failed INFINITY times on ha-master
Oct 14 11:10:08 ha-master pengine: [786]: WARN: common_apply_stickiness: Forcing ms_drbd_postgresql away from ha-master after 1000000 failures (max=1000000)
Oct 14 11:10:08 ha-master pengine: [786]: info: rsc_merge_weights: ms_drbd_postgresql: Rolling back scores from fs_postgresql
Oct 14 11:10:08 ha-master pengine: [786]: debug: native_assign_node: All nodes for resource drbd_postgresql:0 are unavailable, unclean or shutting down (ha-master: 1, -1000000)
Oct 14 11:10:08 ha-master pengine: [786]: debug: native_assign_node: Could not allocate a node for drbd_postgresql:0
Oct 14 11:10:08 ha-master pengine: [786]: info: native_color: Resource drbd_postgresql:0 cannot run anywhere
Oct 14 11:10:08 ha-master pengine: [786]: debug: native_assign_node: All nodes for resource drbd_postgresql:1 are unavailable, unclean or shutting down (ha-master: 1, -1000000)
Oct 14 11:10:08 ha-master pengine: [786]: debug: native_assign_node: Could not allocate a node for drbd_postgresql:1
Oct 14 11:10:08 ha-master pengine: [786]: info: native_color: Resource drbd_postgresql:1 cannot run anywhere
Oct 14 11:10:08 ha-master pengine: [786]: debug: clone_color: Allocated 0 ms_drbd_postgresql instances of a possible 2
Oct 14 11:10:08 ha-master pengine: [786]: info: rsc_merge_weights: ms_drbd_postgresql: Rolling back scores from fs_postgresql
Oct 14 11:10:08 ha-master pengine: [786]: debug: master_color: drbd_postgresql:0 master score: 0
Oct 14 11:10:08 ha-master pengine: [786]: debug: master_color: drbd_postgresql:1 master score: 0
Oct 14 11:10:08 ha-master pengine: [786]: info: master_color: ms_drbd_postgresql: Promoted 0 instances of a possible 1 to master
Oct 14 11:10:08 ha-master pengine: [786]: debug: master_create_actions: Creating actions for ms_drbd_postgresql
Oct 14 11:10:08 ha-master pengine: [786]: notice: LogActions: Leave   drbd_postgresql:0 (Stopped)
Oct 14 11:10:08 ha-master pengine: [786]: notice: LogActions: Leave   drbd_postgresql:1 (Stopped)
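
From those lines my guess is that nothing will start until the old failures are
cleared: the failcount is at INFINITY on both nodes, so pacemaker is forcing
ms_drbd_postgresql away everywhere. The pengine entries above only replay the
stored result; the reason the start returned 1 should be in the lrmd / resource
agent lines of the same log. So my plan would be something like this (untested;
I am assuming the crm-fence-peer.sh constraint id contains "drbd-fence"):

# grep lrmd /var/log/syslog | grep drbd_postgresql     (find the agent's own error output)
# crm configure show | grep drbd-fence                 (any leftover fencing constraint?)
# crm configure delete <id-found-above>                (if one is listed)
# crm resource cleanup ms_drbd_postgresql              (clear the recorded failures)
# crm_mon -1 -f                                        (re-check status and failcounts)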
<br><br><div class="gmail_quote">On Fri, Oct 11, 2013 at 7:20 PM, Lars Ellenberg <span dir="ltr">&lt;<a href="mailto:lars.ellenberg@linbit.com" target="_blank">lars.ellenberg@linbit.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="im">On Fri, Oct 11, 2013 at 05:08:04PM -0300, Thomaz Luiz Santos wrote:<br>
&gt; I&#39;m trying to make a sample cluster, in virtual machine, and after migrate<br>
&gt; to a physical machine, however i have problems to configure the pacemaker (<br>
&gt; crm ),  to startup the resources and failover.<br>
&gt;<br>
&gt; I cant mount the device /dev/drbd0 in the primary node and start postgresql<br>
&gt; manually, but use in crm resource,  dont can mount the device, and start de<br>
&gt; postgresql.<br>
<br>
</div>You should *not* start DRBD from the init script.<br>
 # chkconfig drbd off

You should *NOT* configure "no-disk-drain".
It is likely to corrupt your data.

You should configure monitoring ops for DRBD.
One each for Master and Slave role, with different intervals.

You probably need to "crm resource cleanup ..." a bit.

You may need to manually remove fencing constraints (if DRBD finished
the resync when no pacemaker was running yet, it would not have been
able to remove it from its handler).

You may need to *read the logs*.
The answer will be in there.

You have:
> Failed actions:
>     drbd_postgresql:0_start_0 (node=ha-slave, call=14, rc=1,
> status=complete): unknown error

So start looking for that, and see what it complains about.

Cheers,
        Lars
<div><div class="h5"><br>
<br>
&gt; I reboot the virtual machines, and not have successful.<br>
&gt; the DRBD not start the primary, and not mount the /dev/drbd0 and stard the<br>
&gt; postgresql  :-(<br>
&gt;<br>
&gt;<br>
&gt; DRBD Version: 8.3.11 (api:88)<br>
&gt; Corosync Cluster Engine, version &#39;1.4.2&#39;<br>
&gt; Pacemaker 1.1.6<br>
&gt;<br>
&gt;<br>
&gt;<br>
&gt; **** after reboot the virtual machine. *****<br>
&gt;<br>
&gt; ha-slave:<br>
&gt;<br>
&gt; version: 8.3.13 (api:88/proto:86-96)<br>
&gt; srcversion: 697DE8B1973B1D8914F04DB<br>
&gt;  0: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----<br>
&gt;     ns:0 nr:28672 dw:28672 dr:0 al:0 bm:5 lo:0 pe:0 ua:0 ap:0 ep:1 wo:n<br>
&gt; oos:0<br>
&gt;<br>
&gt;<br>
&gt; ha-master:<br>
&gt; version: 8.3.13 (api:88/proto:86-96)<br>
&gt; srcversion: 697DE8B1973B1D8914F04DB<br>
&gt;  0: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----<br>
&gt;     ns:28672 nr:0 dw:0 dr:28672 al:0 bm:5 lo:0 pe:0 ua:0 ap:0 ep:1 wo:n<br>
&gt; oos:0<br>
&gt;<br>
&gt;<br>
&gt;<br>
&gt;<br>
&gt;<br>
> crm(live)# configure
> crm(live)configure# show
> node ha-master
> node ha-slave
> primitive drbd_postgresql ocf:heartbeat:drbd \
>         params drbd_resource="postgresql"
> primitive fs_postgresql ocf:heartbeat:Filesystem \
>         params device="/dev/drbd/by-res/postgresql" directory="/mnt"
> fstype="ext4"
> primitive postgresqld lsb:postgresql
> primitive vip_cluster ocf:heartbeat:IPaddr2 \
>         params ip="172.70.65.200" nic="eth0:1"
> group postgresql fs_postgresql vip_cluster postgresqld \
>         meta target-role="Started"
> ms ms_drbd_postgresql drbd_postgresql \
>         meta master-max="1" master-node-max="1" clone-max="2"
> clone-node-max="1" notify="true"
> colocation postgresql_on_drbd inf: postgresql ms_drbd_postgresql:Master
> order postgresql_after_drbd inf: ms_drbd_postgresql:promote postgresql:start
> property $id="cib-bootstrap-options" \
>         dc-version="1.1.6-9971ebba4494012a93c03b40a2c58ec0eb60f50c" \
>         cluster-infrastructure="openais" \
>         expected-quorum-votes="2" \
>         stonith-enabled="false" \
>         no-quorum-policy="ignore"
> rsc_defaults $id="rsc-options" \
>         resource-stickiness="100"
>
>
>
> crm(live)# resource
> crm(live)resource# list
>  Master/Slave Set: ms_drbd_postgresql [drbd_postgresql]
>      Stopped: [ drbd_postgresql:0 drbd_postgresql:1 ]
>  Resource Group: postgresql
>      fs_postgresql      (ocf::heartbeat:Filesystem) Stopped
>      vip_cluster        (ocf::heartbeat:IPaddr2) Stopped
>      postgresqld        (lsb:postgresql) Stopped
>
>
>
>
> ============
> Last updated: Fri Oct 11 14:22:50 2013
> Last change: Fri Oct 11 14:11:06 2013 via cibadmin on ha-slave
> Stack: openais
> Current DC: ha-slave - partition with quorum
> Version: 1.1.6-9971ebba4494012a93c03b40a2c58ec0eb60f50c
> 2 Nodes configured, 2 expected votes
> 5 Resources configured.
> ============
>
> Online: [ ha-slave ha-master ]
>
>
> Failed actions:
>     drbd_postgresql:0_start_0 (node=ha-slave, call=14, rc=1,
> status=complete): unknown error
>     drbd_postgresql:0_start_0 (node=ha-master, call=18, rc=1,
> status=complete): unknown error
>
>
>
> **** that is my global_common on drbd  ****
>
> global {
>         usage-count yes;
>         # minor-count dialog-refresh disable-ip-verification
> }
>
> common {
>         protocol C;
>
>         handlers {
>                 pri-on-incon-degr
> "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/not
>
>            ify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot
> -f";
>                 pri-lost-after-sb
> "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/not
>
>            ify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot
> -f";
>                 local-io-error "/usr/lib/drbd/notify-io-error.sh;
> /usr/lib/drbd/notify-emergenc
>                                                        y-shutdown.sh; echo
> o > /proc/sysrq-trigger ; halt -f";
>                 fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
>                 after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
>                 # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
>                 # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
>                 # before-resync-target
> "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c
>
>         16k";
>                 # after-resync-target
> /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
>         }
>
>         startup {
>                  # wfc-timeout 15;
>                  # degr-wfc-timeout 60;
>                  # outdated-wfc-timeout wait-after-sb
>         }
>
>         disk {
>                 # on-io-error fencing use-bmbv no-disk-barrier
> no-disk-flushes
>                 # no-disk-drain no-md-flushes max-bio-bvecs
>         }
>
>         net {
>                 # cram-hmac-alg sha1;
>                 # shared-secret "secret";
>                 # sndbuf-size rcvbuf-size timeout connect-int ping-int
> ping-timeout max-buffers
>                 # max-epoch-size ko-count allow-two-primaries cram-hmac-alg
> shared-secret
>                 # after-sb-0pri after-sb-1pri after-sb-2pri
> data-integrity-alg no-tcp-cork
>         }
>
>         syncer {
>                 # rate 150M;
>                 # rate after al-extents use-rle cpu-mask verify-alg
> csums-alg
>         }
> }
>
>
> **** that is my postgresql.res ****
>
> resource postgresql {
>   startup {
>     wfc-timeout 15;
>     degr-wfc-timeout 60;
>   }
>
>   syncer {
>     rate 150M;
>     verify-alg md5;
>  }
>
>  disk {
>    on-io-error detach;
>    no-disk-barrier;
>    no-disk-flushes;
>    no-disk-drain;
>    fencing resource-only;
>  }
>
>  on ha-master {
>     device /dev/drbd0;
>     disk /dev/sdb1;
>     address 172.70.65.210:7788;
>     meta-disk internal;
>  }
>
>  on ha-slave {
>     device /dev/drbd0;
>     disk /dev/sdb1;
>     address 172.70.65.220:7788;
>     meta-disk internal;
>  }
>
>
> }
>
>
> **** that is my corosync.conf ****
>
>
> compatibility: whitetank
>
> totem {
>         version: 2
>         secauth: off
>         threads: 0
>         interface {
>                 ringnumber: 0
>                 bindnetaddr: 172.70.65.200
>                 mcastaddr: 226.94.1.1
>                 mcastport: 5405
>                 ttl: 1
>         }
> }
>
> logging {
>         fileline: off
>         to_stderr: yes
>         to_logfile: yes
>         to_syslog: yes
>         logfile: /var/log/cluster/corosync.log
>         debug: on
>         timestamp: on
>         logger_subsys {
>                 subsys: AMF
>                 debug: off
>         }
> }
>
> amf {
>         mode: disabled
> }
>
> aisexec{
>   user : root
>   group : root
> }
>
> service{
>   # Load the Pacemaker Cluster Resource Manager
>   name : pacemaker
>   ver : 0
> }
>
>
>
> DRBD, postgresql, manually start :
>
>
> version: 8.3.13 (api:88/proto:86-96)
> srcversion: 697DE8B1973B1D8914F04DB
>  0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
>     ns:0 nr:0 dw:0 dr:664 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:n oos:0
>
>
> version: 8.3.13 (api:88/proto:86-96)
> srcversion: 697DE8B1973B1D8914F04DB
>  0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
>     ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:n oos:0
>
>
>
> root@ha-master:/mnt# df -hT
> Sist. Arq.     Tipo      Tam. Usado Disp. Uso% Montado em
> /dev/sda1      ext4      4,0G  1,8G  2,1G  47% /
> udev           devtmpfs  473M  4,0K  473M   1% /dev
> tmpfs          tmpfs     193M  264K  193M   1% /run
> none           tmpfs     5,0M  4,0K  5,0M   1% /run/lock
> none           tmpfs     482M   17M  466M   4% /run/shm
> /dev/drbd0     ext4      2,0G   69M  1,9G   4% /mnt
>
>
> root@ha-master:/mnt# service postgresql status
> Running clusters: 9.1/main


--
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
__
please don't Cc me, but send to list   --   I'm subscribed
_______________________________________________
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


--
------------------------------
Thomaz Luiz Santos
Linux User: #359356
http://thomaz.santos.googlepages.com/