[DRBD-user] DRBD compatibility issue with pacemaker in UBUNTU 12.04

kamal kishi kamal.kishi at gmail.com
Fri May 16 06:52:48 CEST 2014

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hi all,

I'm trying to configure DRBD + Pacemaker + Xen + OCFS2 for active/active VM
failover.

I'm stuck at the very first step: installing and configuring Pacemaker to
handle DRBD.

There are two issues I'm facing:
1. After installing DRBD with "apt-get install drbd8-utils", the
/dev/drbd/by-res/r0 and /dev/drbd/by-disk/disk_name links are not created
automatically (a few months back I followed the same procedure to install
DRBD and those links were created). Since /dev/drbd/by-res/r0 is used when
configuring Pacemaker, I'm raising this query (a quick check for both issues
is sketched after issue 2 below).

2. Although the links mentioned above are not created, DRBD works fine when
started manually.
When Pacemaker is configured to handle DRBD, DRBD doesn't start at all.
The DRBD and Pacemaker configurations used, and the logs from both servers,
are attached below.
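
For reference, a quick check along these lines on both nodes should narrow
things down (only a sketch; the resource name r0 and the paths are taken from
the attached drbd.conf and logs, nothing else is assumed):

# Issue 1: are the by-res / by-disk links there at all?
ls -l /dev/drbd/by-res/ /dev/drbd/by-disk/
# The udev rule that creates them normally ships with the DRBD userland package:
ls /lib/udev/rules.d/ /etc/udev/rules.d/ | grep -i drbd

# Issue 2: do the kernel module and the installed userland tools agree?
cat /proc/drbd                  # kernel module version (8.3.11 in the attached log)
modinfo drbd | grep -i version  # same information straight from the module
dpkg -l 'drbd*'                 # drbd8-utils (userland) version installed via apt-get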

Request:
Can anyone please suggest Ubuntu (or any other OS), DRBD, and Pacemaker
versions that are known to work reliably together in production?

I'm looking for a solution soon, as the same setup is to be deployed on
production servers :(

-- 
Regards,
Kamal Kishore B V
-------------- next part (DRBD configuration: drbd.conf) --------------
global { usage-count no; }
common { syncer { rate 15M; } }
resource r0 {
    protocol C;
    startup {
        become-primary-on both;
    }
    disk {
        fencing resource-only;
    }
    handlers {
        fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
    }
    net {
        allow-two-primaries;
        #cram-hmac-alg sha1;
        shared-secret "kalki";
    }
    on server1 {
        device /dev/drbd0;
        disk /dev/sda3;
        address 192.168.0.92:7788;
        meta-disk internal;
    }
    on server2 {
        device /dev/drbd0;
        disk /dev/sda3;
        address 192.168.0.93:7788;
        meta-disk internal;
    }
}
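
A sanity check of the configuration above, before handing the resource to
Pacemaker, could look like this (a sketch only; it assumes the file is
installed as /etc/drbd.conf, as referenced in the logs below):

drbdadm dump r0      # parse the configuration and print it back (syntax check)
drbdadm -d up r0     # dry-run: show the drbdsetup commands without executing them
drbdadm up r0        # manual bring-up ("running DRBD manually works fine")
cat /proc/drbd       # confirm the resource state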
-------------- next part (Pacemaker configuration, crm shell) --------------
crm configure
property no-quorum-policy=ignore
property stonith-enabled=false
property default-resource-stickiness=1000
commit
bye

crm configure
primitive Cluster-FS-DRBD ocf:linbit:drbd \
params drbd_resource="r0" \
op monitor interval="29s" role="Master" \
op monitor interval="31s" role="Slave" \
meta target-role="started"

ms Cluster-FS-DRBD-Master Cluster-FS-DRBD \
meta resource-stickiness="100" master-max="2" notify="true" interleave="true"

primitive Cluster-FS-Mount ocf:heartbeat:Filesystem \
params device="/dev/drbd0" directory="/cluster" fstype="ocfs2" \
meta target-role="started"

clone Cluster-FS-Mount-Clone Cluster-FS-Mount \
meta interleave="true" ordered="true"

order Cluster-FS-After-DRBD inf: \
Cluster-FS-DRBD-Master:promote \
Cluster-FS-Mount-Clone:start
commit
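
After committing the configuration above, the result can be checked with the
standard Pacemaker tools (a minimal sketch, using only the resource names
defined above):

crm configure show   # show the committed configuration
crm_verify -L -V     # check the live CIB for errors
crm_mon -1           # one-shot view of node and resource status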
-------------- next part (syslog from server1) --------------
May 15 12:25:42 server1 corosync[1145]:   [MAIN  ] Corosync Cluster Engine ('1.4.2'): started and ready to provide service.
May 15 12:25:42 server1 corosync[1145]:   [MAIN  ] Corosync built-in features: nss
May 15 12:25:42 server1 corosync[1145]:   [MAIN  ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
May 15 12:25:42 server1 corosync[1145]:   [TOTEM ] Initializing transport (UDP/IP Multicast).
May 15 12:25:42 server1 corosync[1145]:   [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
May 15 12:25:42 server1 logd: [1166]: info: logd started with /etc/logd.cf.
May 15 12:25:42 server1 openhpid: ERROR: (init.c, 76, OpenHPI is not configured.  See openhpi.conf file.)
May 15 12:25:42 server1 openhpid: ERROR: (openhpid.cpp, 270, There was an error initializing OpenHPI)
May 15 12:25:42 server1 kernel: [   16.022552] Event-channel device installed.
May 15 12:25:42 server1 xenstored: Checking store ...
May 15 12:25:42 server1 xenstored: Checking store complete.
May 15 12:25:42 server1 kernel: [   16.054731] XENBUS: Unable to read cpu state
May 15 12:25:42 server1 kernel: [   16.054834] XENBUS: Unable to read cpu state
May 15 12:25:42 server1 kernel: [   16.054931] XENBUS: Unable to read cpu state
May 15 12:25:42 server1 kernel: [   16.055013] XENBUS: Unable to read cpu state
May 15 12:25:42 server1 corosync[1145]:   [TOTEM ] The network interface [192.168.0.92] is now up.
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: process_ais_conf: Reading configure
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: config_find_init: Local handle: 4730966301143465987 for logging
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: config_find_next: Processing additional logging options...
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: get_config_opt: Found 'off' for option: debug
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: get_config_opt: Found 'no' for option: to_logfile
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: get_config_opt: Found 'yes' for option: to_syslog
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: get_config_opt: Found 'daemon' for option: syslog_facility
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: config_find_init: Local handle: 7739444317642555396 for quorum
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: config_find_next: No additional configuration supplied for: quorum
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: get_config_opt: No default for option: provider
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: config_find_init: Local handle: 5650605097994944517 for service
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: config_find_next: Processing additional service options...
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: get_config_opt: Found '0' for option: ver
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: get_config_opt: Defaulting to 'pcmk' for option: clustername
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: get_config_opt: Defaulting to 'no' for option: use_logd
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: get_config_opt: Defaulting to 'no' for option: use_mgmtd
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: pcmk_startup: CRM: Initialized
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] Logging: Initialized pcmk_startup
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: pcmk_startup: Maximum core file size is: 18446744073709551615
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: pcmk_startup: Service: 9
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: pcmk_startup: Local hostname: server1
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: pcmk_update_nodeid: Local node id: 1543547072
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: update_member: Creating entry for node 1543547072 born on 0
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: update_member: 0xf7d6b0 Node 1543547072 now known as server1 (was: (null))
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: update_member: Node server1 now has 1 quorum votes (was 0)
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: update_member: Node 1543547072/server1 is now: member
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: spawn_child: Forked child 1209 for process stonith-ng
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: spawn_child: Forked child 1210 for process cib
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: spawn_child: Forked child 1211 for process lrmd
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: spawn_child: Forked child 1212 for process attrd
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: spawn_child: Forked child 1213 for process pengine
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: spawn_child: Forked child 1214 for process crmd
May 15 12:25:42 server1 corosync[1145]:   [SERV  ] Service engine loaded: Pacemaker Cluster Manager 1.1.6
May 15 12:25:42 server1 corosync[1145]:   [SERV  ] Service engine loaded: corosync extended virtual synchrony service
May 15 12:25:42 server1 corosync[1145]:   [SERV  ] Service engine loaded: corosync configuration service
May 15 12:25:42 server1 corosync[1145]:   [SERV  ] Service engine loaded: corosync cluster closed process group service v1.01
May 15 12:25:42 server1 corosync[1145]:   [SERV  ] Service engine loaded: corosync cluster config database access v1.01
May 15 12:25:42 server1 corosync[1145]:   [SERV  ] Service engine loaded: corosync profile loading service
May 15 12:25:42 server1 corosync[1145]:   [SERV  ] Service engine loaded: corosync cluster quorum service v0.1
May 15 12:25:42 server1 corosync[1145]:   [MAIN  ] Compatibility mode set to whitetank.  Using V1 and V2 of the synchronization engine.
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] notice: pcmk_peer_update: Transitional membership event on ring 56: memb=0, new=0, lost=0
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] notice: pcmk_peer_update: Stable membership event on ring 56: memb=1, new=1, lost=0
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: pcmk_peer_update: NEW:  server1 1543547072
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: pcmk_peer_update: MEMB: server1 1543547072
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: update_member: Node server1 now has process list: 00000000000000000000000000111312 (1118994)
May 15 12:25:42 server1 stonith-ng: [1209]: info: Invoked: /usr/lib/heartbeat/stonithd 
May 15 12:25:42 server1 stonith-ng: [1209]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/root
May 15 12:25:42 server1 stonith-ng: [1209]: info: get_cluster_type: Cluster type is: 'openais'
May 15 12:25:42 server1 stonith-ng: [1209]: notice: crm_cluster_connect: Connecting to cluster infrastructure: classic openais (with plugin)
May 15 12:25:42 server1 stonith-ng: [1209]: info: init_ais_connection_classic: Creating connection to our Corosync plugin
May 15 12:25:42 server1 cib: [1210]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/hacluster
May 15 12:25:42 server1 cib: [1210]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
May 15 12:25:42 server1 cib: [1210]: info: validate_with_relaxng: Creating RNG parser context
May 15 12:25:42 server1 cib: [1210]: info: startCib: CIB Initialization completed successfully
May 15 12:25:42 server1 cib: [1210]: info: get_cluster_type: Cluster type is: 'openais'
May 15 12:25:42 server1 cib: [1210]: notice: crm_cluster_connect: Connecting to cluster infrastructure: classic openais (with plugin)
May 15 12:25:42 server1 cib: [1210]: info: init_ais_connection_classic: Creating connection to our Corosync plugin
May 15 12:25:42 server1 lrmd: [1211]: info: enabling coredumps
May 15 12:25:42 server1 lrmd: [1211]: info: Started.
May 15 12:25:42 server1 attrd: [1212]: info: Invoked: /usr/lib/heartbeat/attrd 
May 15 12:25:42 server1 attrd: [1212]: notice: crm_cluster_connect: Connecting to cluster infrastructure: classic openais (with plugin)
May 15 12:25:42 server1 pengine: [1213]: info: Invoked: /usr/lib/heartbeat/pengine 
May 15 12:25:42 server1 crmd: [1214]: info: Invoked: /usr/lib/heartbeat/crmd 
May 15 12:25:42 server1 crmd: [1214]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/hacluster
May 15 12:25:42 server1 crmd: [1214]: info: main: CRM Hg Version: 9971ebba4494012a93c03b40a2c58ec0eb60f50c
May 15 12:25:42 server1 crmd: [1214]: info: crmd_init: Starting crmd
May 15 12:25:42 server1 corosync[1145]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.
May 15 12:25:42 server1 corosync[1145]:   [CPG   ] chosen downlist: sender r(0) ip(192.168.0.92) ; members(old:0 left:0)
May 15 12:25:42 server1 corosync[1145]:   [MAIN  ] Completed service synchronization, ready to provide service.
May 15 12:25:42 server1 stonith-ng: [1209]: info: init_ais_connection_classic: AIS connection established
May 15 12:25:42 server1 cib: [1210]: info: init_ais_connection_classic: AIS connection established
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: pcmk_ipc: Recorded connection 0xf86e80 for stonith-ng/1209
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: pcmk_ipc: Recorded connection 0xf8b1e0 for cib/1210
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: pcmk_ipc: Sending membership update 56 to cib
May 15 12:25:42 server1 corosync[1145]:   [pcmk  ] info: pcmk_ipc: Recorded connection 0xf8f9e0 for attrd/1212
May 15 12:25:42 server1 stonith-ng: [1209]: info: get_ais_nodeid: Server details: id=1543547072 uname=server1 cname=pcmk
May 15 12:25:42 server1 stonith-ng: [1209]: info: init_ais_connection_once: Connection to 'classic openais (with plugin)': established
May 15 12:25:42 server1 stonith-ng: [1209]: info: crm_new_peer: Node server1 now has id: 1543547072
May 15 12:25:42 server1 stonith-ng: [1209]: info: crm_new_peer: Node 1543547072 is now known as server1
May 15 12:25:42 server1 stonith-ng: [1209]: info: main: Starting stonith-ng mainloop
May 15 12:25:42 server1 cib: [1210]: info: get_ais_nodeid: Server details: id=1543547072 uname=server1 cname=pcmk
May 15 12:25:42 server1 cib: [1210]: info: init_ais_connection_once: Connection to 'classic openais (with plugin)': established
May 15 12:25:42 server1 cib: [1210]: info: crm_new_peer: Node server1 now has id: 1543547072
May 15 12:25:42 server1 cib: [1210]: info: crm_new_peer: Node 1543547072 is now known as server1
May 15 12:25:42 server1 cib: [1210]: info: cib_init: Starting cib mainloop
May 15 12:25:42 server1 cib: [1210]: info: ais_dispatch_message: Membership 56: quorum still lost
May 15 12:25:42 server1 cib: [1210]: info: crm_update_peer: Node server1: id=1543547072 state=member (new) addr=r(0) ip(192.168.0.92)  (new) votes=1 (new) born=0 seen=56 proc=00000000000000000000000000111312 (new)
May 15 12:25:42 server1 attrd: [1212]: notice: main: Starting mainloop...
May 15 12:25:42 server1 dbus[689]: [system] Activating service name='org.freedesktop.Accounts' (using servicehelper)
May 15 12:25:42 server1 dbus[689]: [system] Successfully activated service 'org.freedesktop.Accounts'
May 15 12:25:42 server1 accounts-daemon[1236]: started daemon version 0.6.15
May 15 12:25:42 server1 dbus[689]: [system] Activating service name='org.freedesktop.ConsoleKit' (using servicehelper)
May 15 12:25:42 server1 dbus[689]: [system] Successfully activated service 'org.freedesktop.ConsoleKit'
May 15 12:25:42 server1 dbus[689]: [system] Activating service name='org.freedesktop.UPower' (using servicehelper)
May 15 12:25:42 server1 dbus[689]: [system] Successfully activated service 'org.freedesktop.UPower'
May 15 12:25:43 server1 anacron[1465]: Anacron 2.3 started on 2014-05-15
May 15 12:25:43 server1 anacron[1465]: Normal exit (0 jobs run)
May 15 12:25:43 server1 dbus[689]: [system] Activating service name='org.freedesktop.ColorManager' (using servicehelper)
May 15 12:25:43 server1 dbus[689]: [system] Successfully activated service 'org.freedesktop.ColorManager'
May 15 12:25:43 server1 crmd: [1214]: info: do_cib_control: CIB connection established
May 15 12:25:43 server1 crmd: [1214]: info: get_cluster_type: Cluster type is: 'openais'
May 15 12:25:43 server1 crmd: [1214]: notice: crm_cluster_connect: Connecting to cluster infrastructure: classic openais (with plugin)
May 15 12:25:43 server1 crmd: [1214]: info: init_ais_connection_classic: Creating connection to our Corosync plugin
May 15 12:25:43 server1 crmd: [1214]: info: init_ais_connection_classic: AIS connection established
May 15 12:25:43 server1 corosync[1145]:   [pcmk  ] info: pcmk_ipc: Recorded connection 0xf94210 for crmd/1214
May 15 12:25:43 server1 corosync[1145]:   [pcmk  ] info: pcmk_ipc: Sending membership update 56 to crmd
May 15 12:25:43 server1 crmd: [1214]: info: get_ais_nodeid: Server details: id=1543547072 uname=server1 cname=pcmk
May 15 12:25:43 server1 crmd: [1214]: info: init_ais_connection_once: Connection to 'classic openais (with plugin)': established
May 15 12:25:43 server1 crmd: [1214]: info: crm_new_peer: Node server1 now has id: 1543547072
May 15 12:25:43 server1 crmd: [1214]: info: crm_new_peer: Node 1543547072 is now known as server1
May 15 12:25:43 server1 crmd: [1214]: info: ais_status_callback: status: server1 is now unknown
May 15 12:25:43 server1 crmd: [1214]: info: do_ha_control: Connected to the cluster
May 15 12:25:43 server1 crmd: [1214]: info: do_started: Delaying start, no membership data (0000000000100000)
May 15 12:25:43 server1 crmd: [1214]: info: crmd_init: Starting crmd's mainloop
May 15 12:25:43 server1 crmd: [1214]: info: ais_dispatch_message: Membership 56: quorum still lost
May 15 12:25:43 server1 crmd: [1214]: notice: crmd_peer_update: Status update: Client server1/crmd now has status [online] (DC=<null>)
May 15 12:25:43 server1 crmd: [1214]: info: ais_status_callback: status: server1 is now member (was unknown)
May 15 12:25:43 server1 crmd: [1214]: info: crm_update_peer: Node server1: id=1543547072 state=member (new) addr=r(0) ip(192.168.0.92)  (new) votes=1 (new) born=0 seen=56 proc=00000000000000000000000000111312 (new)
May 15 12:25:43 server1 crmd: [1214]: info: do_started: Delaying start, Config not read (0000000000000040)
May 15 12:25:43 server1 crmd: [1214]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms
May 15 12:25:43 server1 crmd: [1214]: info: config_query_callback: Checking for expired actions every 900000ms
May 15 12:25:43 server1 crmd: [1214]: info: config_query_callback: Sending expected-votes=2 to corosync
May 15 12:25:43 server1 crmd: [1214]: info: do_started: The local CRM is operational
May 15 12:25:43 server1 crmd: [1214]: info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
May 15 12:25:43 server1 dbus[689]: [system] Activating service name='org.freedesktop.RealtimeKit1' (using servicehelper)
May 15 12:25:43 server1 dbus[689]: [system] Successfully activated service 'org.freedesktop.RealtimeKit1'
May 15 12:25:43 server1 rtkit-daemon[1562]: Successfully called chroot.
May 15 12:25:43 server1 rtkit-daemon[1562]: Successfully dropped privileges.
May 15 12:25:43 server1 rtkit-daemon[1562]: Successfully limited resources.
May 15 12:25:43 server1 rtkit-daemon[1562]: Running.
May 15 12:25:43 server1 rtkit-daemon[1562]: Watchdog thread running.
May 15 12:25:43 server1 rtkit-daemon[1562]: Canary thread running.
May 15 12:25:43 server1 rtkit-daemon[1562]: Successfully made thread 1558 of process 1558 (n/a) owned by '104' high priority at nice level -11.
May 15 12:25:43 server1 rtkit-daemon[1562]: Supervising 1 threads of 1 processes of 1 users.
May 15 12:25:43 server1 NetworkManager[788]: <info> Unmanaged Device found; state CONNECTED forced. (see http://bugs.launchpad.net/bugs/191889)
May 15 12:25:43 server1 rtkit-daemon[1562]: Successfully made thread 1569 of process 1558 (n/a) owned by '104' RT at priority 5.
May 15 12:25:43 server1 rtkit-daemon[1562]: Supervising 2 threads of 1 processes of 1 users.
May 15 12:25:43 server1 rtkit-daemon[1562]: Successfully made thread 1571 of process 1558 (n/a) owned by '104' RT at priority 5.
May 15 12:25:43 server1 rtkit-daemon[1562]: Supervising 3 threads of 1 processes of 1 users.
May 15 12:25:43 server1 rtkit-daemon[1562]: Successfully made thread 1576 of process 1576 (n/a) owned by '104' high priority at nice level -11.
May 15 12:25:43 server1 rtkit-daemon[1562]: Supervising 4 threads of 2 processes of 1 users.
May 15 12:25:43 server1 pulseaudio[1576]: [pulseaudio] pid.c: Daemon already running.
May 15 12:25:43 server1 kernel: [   17.574026] init: plymouth-stop pre-start process (1638) terminated with status 1
May 15 12:25:44 server1 crmd: [1214]: info: ais_dispatch_message: Membership 56: quorum still lost
May 15 12:25:44 server1 crmd: [1214]: info: te_connect_stonith: Attempting connection to fencing daemon...
May 15 12:25:45 server1 crmd: [1214]: info: te_connect_stonith: Connected
May 15 12:25:46 server1 corosync[1145]:   [pcmk  ] notice: pcmk_peer_update: Transitional membership event on ring 60: memb=1, new=0, lost=0
May 15 12:25:46 server1 corosync[1145]:   [pcmk  ] info: pcmk_peer_update: memb: server1 1543547072
May 15 12:25:46 server1 corosync[1145]:   [pcmk  ] notice: pcmk_peer_update: Stable membership event on ring 60: memb=2, new=1, lost=0
May 15 12:25:46 server1 corosync[1145]:   [pcmk  ] info: update_member: Creating entry for node 1560324288 born on 60
May 15 12:25:46 server1 corosync[1145]:   [pcmk  ] info: update_member: Node 1560324288/unknown is now: member
May 15 12:25:46 server1 corosync[1145]:   [pcmk  ] info: pcmk_peer_update: NEW:  .pending. 1560324288
May 15 12:25:46 server1 corosync[1145]:   [pcmk  ] info: pcmk_peer_update: MEMB: server1 1543547072
May 15 12:25:46 server1 corosync[1145]:   [pcmk  ] info: pcmk_peer_update: MEMB: .pending. 1560324288
May 15 12:25:46 server1 corosync[1145]:   [pcmk  ] info: send_member_notification: Sending membership update 60 to 2 children
May 15 12:25:46 server1 corosync[1145]:   [pcmk  ] info: update_member: 0xf7d6b0 Node 1543547072 ((null)) born on: 60
May 15 12:25:46 server1 corosync[1145]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.
May 15 12:25:46 server1 cib: [1210]: info: ais_dispatch_message: Membership 60: quorum still lost
May 15 12:25:46 server1 cib: [1210]: info: crm_new_peer: Node <null> now has id: 1560324288
May 15 12:25:46 server1 cib: [1210]: info: crm_update_peer: Node (null): id=1560324288 state=member (new) addr=r(0) ip(192.168.0.93)  votes=0 born=0 seen=60 proc=00000000000000000000000000000000
May 15 12:25:46 server1 crmd: [1214]: info: ais_dispatch_message: Membership 60: quorum still lost
May 15 12:25:46 server1 crmd: [1214]: info: crm_new_peer: Node <null> now has id: 1560324288
May 15 12:25:46 server1 crmd: [1214]: info: crm_update_peer: Node (null): id=1560324288 state=member (new) addr=r(0) ip(192.168.0.93)  votes=0 born=0 seen=60 proc=00000000000000000000000000000000
May 15 12:25:46 server1 corosync[1145]:   [pcmk  ] info: update_member: 0xf93f40 Node 1560324288 (server2) born on: 60
May 15 12:25:46 server1 corosync[1145]:   [pcmk  ] info: update_member: 0xf93f40 Node 1560324288 now known as server2 (was: (null))
May 15 12:25:46 server1 corosync[1145]:   [pcmk  ] info: update_member: Node server2 now has process list: 00000000000000000000000000111312 (1118994)
May 15 12:25:46 server1 corosync[1145]:   [pcmk  ] info: update_member: Node server2 now has 1 quorum votes (was 0)
May 15 12:25:46 server1 corosync[1145]:   [pcmk  ] info: send_member_notification: Sending membership update 60 to 2 children
May 15 12:25:46 server1 cib: [1210]: notice: ais_dispatch_message: Membership 60: quorum acquired
May 15 12:25:46 server1 crmd: [1214]: notice: ais_dispatch_message: Membership 60: quorum acquired
May 15 12:25:46 server1 cib: [1210]: info: crm_get_peer: Node 1560324288 is now known as server2
May 15 12:25:46 server1 crmd: [1214]: info: crm_get_peer: Node 1560324288 is now known as server2
May 15 12:25:46 server1 cib: [1210]: info: crm_update_peer: Node server2: id=1560324288 state=member addr=r(0) ip(192.168.0.93)  votes=1 (new) born=60 seen=60 proc=00000000000000000000000000111312 (new)
May 15 12:25:46 server1 crmd: [1214]: info: ais_status_callback: status: server2 is now member
May 15 12:25:46 server1 crmd: [1214]: notice: crmd_peer_update: Status update: Client server2/crmd now has status [online] (DC=<null>)
May 15 12:25:46 server1 crmd: [1214]: info: crm_update_peer: Node server2: id=1560324288 state=member addr=r(0) ip(192.168.0.93)  votes=1 (new) born=60 seen=60 proc=00000000000000000000000000111312 (new)
May 15 12:25:46 server1 corosync[1145]:   [CPG   ] chosen downlist: sender r(0) ip(192.168.0.92) ; members(old:1 left:0)
May 15 12:25:46 server1 corosync[1145]:   [MAIN  ] Completed service synchronization, ready to provide service.
May 15 12:25:50 server1 ntpdate[948]: step time server 91.189.94.4 offset -0.450620 sec
May 15 12:25:50 server1 kernel: [   25.352825] xenbr0: no IPv6 routers present
May 15 12:25:51 server1 kernel: [   25.528827] eth0: no IPv6 routers present
May 15 12:25:58 server1 kernel: [   32.804978] usb 2-1.3: new low-speed USB device number 4 using ehci_hcd
May 15 12:25:58 server1 mtp-probe: checking bus 2, device 4: "/sys/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.3"
May 15 12:25:58 server1 kernel: [   32.908665] input: Dell Dell USB Entry Keyboard as /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.3/2-1.3:1.0/input/input10
May 15 12:25:58 server1 kernel: [   32.908791] generic-usb 0003:413C:2107.0002: input,hidraw1: USB HID v1.10 Keyboard [Dell Dell USB Entry Keyboard] on usb-0000:00:1d.0-1.3/input0
May 15 12:25:58 server1 mtp-probe: bus: 2, device: 4 was not an MTP device
May 15 12:26:03 server1 crmd: [1214]: info: crm_timer_popped: Election Trigger (I_DC_TIMEOUT) just popped (20000ms)
May 15 12:26:03 server1 crmd: [1214]: WARN: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
May 15 12:26:03 server1 crmd: [1214]: info: do_state_transition: State transition S_PENDING -> S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped ]
May 15 12:26:03 server1 crmd: [1214]: info: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
May 15 12:26:03 server1 crmd: [1214]: info: do_te_control: Registering TE UUID: bab7803e-0cb3-44da-bd05-d09fe2f8f3af
May 15 12:26:03 server1 crmd: [1214]: info: set_graph_functions: Setting custom graph functions
May 15 12:26:03 server1 crmd: [1214]: info: unpack_graph: Unpacked transition -1: 0 actions in 0 synapses
May 15 12:26:03 server1 crmd: [1214]: info: do_dc_takeover: Taking over DC status for this partition
May 15 12:26:03 server1 cib: [1210]: info: cib_process_readwrite: We are now in R/W mode
May 15 12:26:03 server1 cib: [1210]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/5, version=0.10.1): ok (rc=0)
May 15 12:26:03 server1 cib: [1210]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/6, version=0.10.2): ok (rc=0)
May 15 12:26:03 server1 cib: [1210]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/8, version=0.10.3): ok (rc=0)
May 15 12:26:03 server1 crmd: [1214]: info: join_make_offer: Making join offers based on membership 60
May 15 12:26:03 server1 cib: [1210]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/10, version=0.10.4): ok (rc=0)
May 15 12:26:03 server1 crmd: [1214]: info: do_dc_join_offer_all: join-1: Waiting on 2 outstanding join acks
May 15 12:26:03 server1 crmd: [1214]: info: ais_dispatch_message: Membership 60: quorum retained
May 15 12:26:03 server1 crmd: [1214]: info: crmd_ais_dispatch: Setting expected votes to 2
May 15 12:26:03 server1 crmd: [1214]: info: update_dc: Set DC to server1 (3.0.5)
May 15 12:26:03 server1 crmd: [1214]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms
May 15 12:26:03 server1 crmd: [1214]: info: config_query_callback: Checking for expired actions every 900000ms
May 15 12:26:03 server1 crmd: [1214]: info: config_query_callback: Sending expected-votes=2 to corosync
May 15 12:26:03 server1 cib: [1210]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/13, version=0.10.5): ok (rc=0)
May 15 12:26:03 server1 crmd: [1214]: info: ais_dispatch_message: Membership 60: quorum retained
May 15 12:26:03 server1 crmd: [1214]: info: crmd_ais_dispatch: Setting expected votes to 2
May 15 12:26:03 server1 cib: [1210]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/16, version=0.10.6): ok (rc=0)
May 15 12:26:03 server1 crmd: [1214]: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
May 15 12:26:03 server1 crmd: [1214]: info: do_state_transition: All 2 cluster nodes responded to the join offer.
May 15 12:26:03 server1 crmd: [1214]: info: do_dc_join_finalize: join-1: Syncing the CIB from server1 to the rest of the cluster
May 15 12:26:03 server1 cib: [1210]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/17, version=0.10.6): ok (rc=0)
May 15 12:26:03 server1 cib: [1210]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/18, version=0.10.7): ok (rc=0)
May 15 12:26:03 server1 cib: [1210]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/19, version=0.10.8): ok (rc=0)
May 15 12:26:03 server1 crmd: [1214]: info: update_attrd: Connecting to attrd...
May 15 12:26:03 server1 cib: [1210]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='server1']/transient_attributes (origin=local/crmd/20, version=0.10.9): ok (rc=0)
May 15 12:26:03 server1 crmd: [1214]: info: erase_xpath_callback: Deletion of "//node_state[@uname='server1']/transient_attributes": ok (rc=0)
May 15 12:26:03 server1 crmd: [1214]: info: do_dc_join_ack: join-1: Updating node state to member for server1
May 15 12:26:03 server1 cib: [1210]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='server1']/lrm (origin=local/crmd/21, version=0.10.10): ok (rc=0)
May 15 12:26:03 server1 crmd: [1214]: info: erase_xpath_callback: Deletion of "//node_state[@uname='server1']/lrm": ok (rc=0)
May 15 12:26:03 server1 crmd: [1214]: info: do_dc_join_ack: join-1: Updating node state to member for server2
May 15 12:26:03 server1 cib: [1210]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='server2']/transient_attributes (origin=server2/crmd/7, version=0.10.12): ok (rc=0)
May 15 12:26:03 server1 cib: [1210]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='server2']/lrm (origin=local/crmd/23, version=0.10.13): ok (rc=0)
May 15 12:26:03 server1 crmd: [1214]: info: erase_xpath_callback: Deletion of "//node_state[@uname='server2']/lrm": ok (rc=0)
May 15 12:26:03 server1 crmd: [1214]: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
May 15 12:26:03 server1 crmd: [1214]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
May 15 12:26:03 server1 crmd: [1214]: info: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date
May 15 12:26:03 server1 crmd: [1214]: info: crm_update_quorum: Updating quorum status to true (call=27)
May 15 12:26:03 server1 crmd: [1214]: info: abort_transition_graph: do_te_invoke:167 - Triggered transition abort (complete=1) : Peer Cancelled
May 15 12:26:03 server1 crmd: [1214]: info: do_pe_invoke: Query 28: Requesting the current CIB: S_POLICY_ENGINE
May 15 12:26:03 server1 attrd: [1212]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
May 15 12:26:03 server1 cib: [1210]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/25, version=0.10.15): ok (rc=0)
May 15 12:26:03 server1 cib: [1210]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/27, version=0.10.17): ok (rc=0)
May 15 12:26:03 server1 crmd: [1214]: info: do_pe_invoke_callback: Invoking the PE: query=28, ref=pe_calc-dc-1400136963-11, seq=60, quorate=1
May 15 12:26:03 server1 pengine: [1213]: notice: unpack_config: On loss of CCM Quorum: Ignore
May 15 12:26:03 server1 pengine: [1213]: notice: RecurringOp:  Start recurring monitor (130s) for Cluster-FS-DRBD:0 on server1
May 15 12:26:03 server1 pengine: [1213]: notice: RecurringOp:  Start recurring monitor (130s) for Cluster-FS-DRBD:1 on server2
May 15 12:26:03 server1 pengine: [1213]: notice: RecurringOp:  Start recurring monitor (130s) for Cluster-FS-DRBD:0 on server1
May 15 12:26:03 server1 pengine: [1213]: notice: RecurringOp:  Start recurring monitor (130s) for Cluster-FS-DRBD:1 on server2
May 15 12:26:03 server1 pengine: [1213]: notice: LogActions: Start   Cluster-FS-DRBD:0#011(server1)
May 15 12:26:03 server1 pengine: [1213]: notice: LogActions: Start   Cluster-FS-DRBD:1#011(server2)
May 15 12:26:03 server1 pengine: [1213]: notice: LogActions: Start   Cluster-FS-Mount:0#011(server1)
May 15 12:26:03 server1 pengine: [1213]: notice: LogActions: Start   Cluster-FS-Mount:1#011(server2)
May 15 12:26:03 server1 crmd: [1214]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
May 15 12:26:03 server1 crmd: [1214]: info: unpack_graph: Unpacked transition 0: 19 actions in 19 synapses
May 15 12:26:03 server1 crmd: [1214]: info: do_te_invoke: Processing graph 0 (ref=pe_calc-dc-1400136963-11) derived from /var/lib/pengine/pe-input-18.bz2
May 15 12:26:03 server1 crmd: [1214]: info: te_rsc_command: Initiating action 7: monitor Cluster-FS-DRBD:0_monitor_0 on server1 (local)
May 15 12:26:03 server1 crmd: [1214]: info: do_lrm_rsc_op: Performing key=7:0:7:bab7803e-0cb3-44da-bd05-d09fe2f8f3af op=Cluster-FS-DRBD:0_monitor_0 )
May 15 12:26:03 server1 lrmd: [1211]: info: rsc:Cluster-FS-DRBD:0 probe[2] (pid 1689)
May 15 12:26:03 server1 crmd: [1214]: info: te_rsc_command: Initiating action 4: monitor Cluster-FS-DRBD:1_monitor_0 on server2
May 15 12:26:03 server1 crmd: [1214]: info: te_pseudo_action: Pseudo action 15 fired and confirmed
May 15 12:26:03 server1 crmd: [1214]: info: te_rsc_command: Initiating action 8: monitor Cluster-FS-Mount:0_monitor_0 on server1 (local)
May 15 12:26:03 server1 crmd: [1214]: info: do_lrm_rsc_op: Performing key=8:0:7:bab7803e-0cb3-44da-bd05-d09fe2f8f3af op=Cluster-FS-Mount:0_monitor_0 )
May 15 12:26:03 server1 lrmd: [1211]: info: rsc:Cluster-FS-Mount:0 probe[3] (pid 1690)
May 15 12:26:03 server1 crmd: [1214]: info: te_rsc_command: Initiating action 5: monitor Cluster-FS-Mount:1_monitor_0 on server2
May 15 12:26:03 server1 crmd: [1214]: info: te_pseudo_action: Pseudo action 16 fired and confirmed
May 15 12:26:03 server1 crmd: [1214]: info: te_pseudo_action: Pseudo action 13 fired and confirmed
May 15 12:26:03 server1 pengine: [1213]: notice: process_pe_message: Transition 0: PEngine Input stored in: /var/lib/pengine/pe-input-18.bz2
May 15 12:26:03 server1 Filesystem[1690]: WARNING: Couldn't find device [/dev/drbd/by-res/r0]. Expected /dev/??? to exist
May 15 12:26:03 server1 lrmd: [1211]: info: operation monitor[3] on Cluster-FS-Mount:0 for client 1214: pid 1690 exited with return code 7
May 15 12:26:03 server1 crmd: [1214]: info: process_lrm_event: LRM operation Cluster-FS-Mount:0_monitor_0 (call=3, rc=7, cib-update=29, confirmed=true) not running
May 15 12:26:03 server1 crmd: [1214]: info: match_graph_event: Action Cluster-FS-Mount:0_monitor_0 (8) confirmed on server1 (rc=0)
May 15 12:26:04 server1 crm_attribute: [1775]: info: Invoked: crm_attribute -N server1 -n master-Cluster-FS-DRBD:0 -l reboot -D 
May 15 12:26:04 server1 lrmd: [1211]: info: operation monitor[2] on Cluster-FS-DRBD:0 for client 1214: pid 1689 exited with return code 7
May 15 12:26:04 server1 crmd: [1214]: info: process_lrm_event: LRM operation Cluster-FS-DRBD:0_monitor_0 (call=2, rc=7, cib-update=30, confirmed=true) not running
May 15 12:26:04 server1 crmd: [1214]: info: match_graph_event: Action Cluster-FS-DRBD:0_monitor_0 (7) confirmed on server1 (rc=0)
May 15 12:26:04 server1 crmd: [1214]: info: te_rsc_command: Initiating action 6: probe_complete probe_complete on server1 (local) - no waiting
May 15 12:26:04 server1 attrd: [1212]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
May 15 12:26:04 server1 attrd: [1212]: notice: attrd_perform_update: Sent update 4: probe_complete=true
May 15 12:26:04 server1 crmd: [1214]: info: match_graph_event: Action Cluster-FS-Mount:1_monitor_0 (5) confirmed on server2 (rc=0)
May 15 12:26:04 server1 crmd: [1214]: info: match_graph_event: Action Cluster-FS-DRBD:1_monitor_0 (4) confirmed on server2 (rc=0)
May 15 12:26:04 server1 crmd: [1214]: info: te_rsc_command: Initiating action 3: probe_complete probe_complete on server2 - no waiting
May 15 12:26:04 server1 crmd: [1214]: info: te_pseudo_action: Pseudo action 2 fired and confirmed
May 15 12:26:04 server1 crmd: [1214]: info: te_rsc_command: Initiating action 9: start Cluster-FS-DRBD:0_start_0 on server1 (local)
May 15 12:26:04 server1 crmd: [1214]: info: do_lrm_rsc_op: Performing key=9:0:0:bab7803e-0cb3-44da-bd05-d09fe2f8f3af op=Cluster-FS-DRBD:0_start_0 )
May 15 12:26:04 server1 lrmd: [1211]: info: rsc:Cluster-FS-DRBD:0 start[4] (pid 1808)
May 15 12:26:04 server1 crmd: [1214]: info: te_rsc_command: Initiating action 11: start Cluster-FS-DRBD:1_start_0 on server2
May 15 12:26:04 server1 lrmd: [1211]: info: RA output: (Cluster-FS-DRBD:0:start:stdout) 
May 15 12:26:04 server1 kernel: [   38.544843] drbd: initialized. Version: 8.3.11 (api:88/proto:86-96)
May 15 12:26:04 server1 kernel: [   38.544846] drbd: srcversion: 71955441799F513ACA6DA60 
May 15 12:26:04 server1 kernel: [   38.544848] drbd: registered as block device major 147
May 15 12:26:04 server1 kernel: [   38.544850] drbd: minor_table @ 0xffff8800a15f8600
May 15 12:26:04 server1 lrmd: [1211]: info: RA output: (Cluster-FS-DRBD:0:start:stdout) 
May 15 12:26:04 server1 lrmd: [1211]: info: RA output: (Cluster-FS-DRBD:0:start:stderr) Could not connect to 'drbd' generic netlink family
May 15 12:26:04 server1 lrmd: [1211]: info: RA output: (Cluster-FS-DRBD:0:start:stderr) Command 'drbdsetup new-resource r0' terminated with exit code 20
May 15 12:26:04 server1 drbd[1808]: ERROR: r0: Called drbdadm -c /etc/drbd.conf new-resource r0
May 15 12:26:04 server1 drbd[1808]: ERROR: r0: Exit code 20
May 15 12:26:04 server1 drbd[1808]: ERROR: r0: Command output: 
May 15 12:26:04 server1 lrmd: [1211]: info: RA output: (Cluster-FS-DRBD:0:start:stdout) 
May 15 12:26:04 server1 lrmd: [1211]: info: operation start[4] on Cluster-FS-DRBD:0 for client 1214: pid 1808 exited with return code 1
May 15 12:26:04 server1 crmd: [1214]: info: process_lrm_event: LRM operation Cluster-FS-DRBD:0_start_0 (call=4, rc=1, cib-update=31, confirmed=true) unknown error
May 15 12:26:04 server1 crmd: [1214]: WARN: status_from_rc: Action 11 (Cluster-FS-DRBD:1_start_0) on server2 failed (target: 0 vs. rc: 1): Error
May 15 12:26:04 server1 crmd: [1214]: WARN: update_failcount: Updating failcount for Cluster-FS-DRBD:1 on server2 after failed start: rc=1 (update=INFINITY, time=1400136964)
May 15 12:26:04 server1 crmd: [1214]: info: abort_transition_graph: match_graph_event:277 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=Cluster-FS-DRBD:1_last_failure_0, magic=0:1;11:0:0:bab7803e-0cb3-44da-bd05-d09fe2f8f3af, cib=0.10.25) : Event failed
May 15 12:26:04 server1 crmd: [1214]: info: update_abort_priority: Abort priority upgraded from 0 to 1
May 15 12:26:04 server1 crmd: [1214]: info: update_abort_priority: Abort action done superceeded by restart
May 15 12:26:04 server1 crmd: [1214]: info: match_graph_event: Action Cluster-FS-DRBD:1_start_0 (11) confirmed on server2 (rc=4)
May 15 12:26:04 server1 crmd: [1214]: WARN: status_from_rc: Action 9 (Cluster-FS-DRBD:0_start_0) on server1 failed (target: 0 vs. rc: 1): Error
May 15 12:26:04 server1 crmd: [1214]: WARN: update_failcount: Updating failcount for Cluster-FS-DRBD:0 on server1 after failed start: rc=1 (update=INFINITY, time=1400136964)
May 15 12:26:04 server1 crmd: [1214]: info: abort_transition_graph: match_graph_event:277 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=Cluster-FS-DRBD:0_last_failure_0, magic=0:1;9:0:0:bab7803e-0cb3-44da-bd05-d09fe2f8f3af, cib=0.10.26) : Event failed
May 15 12:26:04 server1 crmd: [1214]: info: match_graph_event: Action Cluster-FS-DRBD:0_start_0 (9) confirmed on server1 (rc=4)
May 15 12:26:04 server1 crmd: [1214]: info: te_pseudo_action: Pseudo action 14 fired and confirmed
May 15 12:26:04 server1 crmd: [1214]: info: te_pseudo_action: Pseudo action 17 fired and confirmed
May 15 12:26:04 server1 crmd: [1214]: info: te_rsc_command: Initiating action 49: notify Cluster-FS-DRBD:0_post_notify_start_0 on server1 (local)
May 15 12:26:04 server1 crmd: [1214]: info: do_lrm_rsc_op: Performing key=49:0:0:bab7803e-0cb3-44da-bd05-d09fe2f8f3af op=Cluster-FS-DRBD:0_notify_0 )
May 15 12:26:04 server1 lrmd: [1211]: info: rsc:Cluster-FS-DRBD:0 notify[5] (pid 1877)
May 15 12:26:04 server1 attrd: [1212]: notice: attrd_trigger_update: Sending flush op to all hosts for: fail-count-Cluster-FS-DRBD:0 (INFINITY)
May 15 12:26:04 server1 crmd: [1214]: info: te_rsc_command: Initiating action 50: notify Cluster-FS-DRBD:1_post_notify_start_0 on server2
May 15 12:26:04 server1 attrd: [1212]: notice: attrd_perform_update: Sent update 9: fail-count-Cluster-FS-DRBD:0=INFINITY
May 15 12:26:04 server1 crmd: [1214]: info: abort_transition_graph: te_update_diff:164 - Triggered transition abort (complete=0, tag=nvpair, id=status-server1-fail-count-Cluster-FS-DRBD.0, name=fail-count-Cluster-FS-DRBD:0, value=INFINITY, magic=NA, cib=0.10.27) : Transient attribute: update
May 15 12:26:04 server1 crmd: [1214]: info: update_abort_priority: Abort priority upgraded from 1 to 1000000
May 15 12:26:04 server1 crmd: [1214]: info: update_abort_priority: 'Event failed' abort superceeded
May 15 12:26:04 server1 attrd: [1212]: notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-Cluster-FS-DRBD:0 (1400136964)
May 15 12:26:04 server1 crmd: [1214]: info: abort_transition_graph: te_update_diff:164 - Triggered transition abort (complete=0, tag=nvpair, id=status-server2-fail-count-Cluster-FS-DRBD.1, name=fail-count-Cluster-FS-DRBD:1, value=INFINITY, magic=NA, cib=0.10.28) : Transient attribute: update
May 15 12:26:04 server1 attrd: [1212]: notice: attrd_perform_update: Sent update 12: last-failure-Cluster-FS-DRBD:0=1400136964
May 15 12:26:04 server1 crmd: [1214]: info: abort_transition_graph: te_update_diff:164 - Triggered transition abort (complete=0, tag=nvpair, id=status-server2-last-failure-Cluster-FS-DRBD.1, name=last-failure-Cluster-FS-DRBD:1, value=1400136964, magic=NA, cib=0.10.29) : Transient attribute: update
May 15 12:26:04 server1 crmd: [1214]: info: abort_transition_graph: te_update_diff:164 - Triggered transition abort (complete=0, tag=nvpair, id=status-server1-last-failure-Cluster-FS-DRBD.0, name=last-failure-Cluster-FS-DRBD:0, value=1400136964, magic=NA, cib=0.10.30) : Transient attribute: update
May 15 12:26:04 server1 lrmd: [1211]: info: RA output: (Cluster-FS-DRBD:0:notify:stderr) Could not connect to 'drbd' generic netlink family
May 15 12:26:04 server1 kernel: [   38.726376] block drbd0: Starting worker thread (from drbdsetup-83 [1919])
May 15 12:26:04 server1 kernel: [   38.727088] block drbd0: disk( Diskless -> Attaching ) 
May 15 12:26:04 server1 kernel: [   38.730352] block drbd0: Found 4 transactions (4 active extents) in activity log.
May 15 12:26:04 server1 kernel: [   38.730355] block drbd0: Method to ensure write ordering: flush
May 15 12:26:04 server1 kernel: [   38.730358] block drbd0: max BIO size = 131072
May 15 12:26:04 server1 kernel: [   38.730361] block drbd0: drbd_bm_resize called with capacity == 78122592
May 15 12:26:04 server1 kernel: [   38.730630] block drbd0: resync bitmap: bits=9765324 words=152584 pages=299
May 15 12:26:04 server1 kernel: [   38.730633] block drbd0: size = 37 GB (39061296 KB)
May 15 12:26:04 server1 kernel: [   38.766509] block drbd0: bitmap READ of 299 pages took 9 jiffies
May 15 12:26:04 server1 kernel: [   38.766705] block drbd0: recounting of set bits took additional 0 jiffies
May 15 12:26:04 server1 kernel: [   38.766708] block drbd0: 0 KB (0 bits) marked out-of-sync by on disk bit-map.
May 15 12:26:04 server1 kernel: [   38.766716] block drbd0: disk( Attaching -> Consistent ) 
May 15 12:26:04 server1 kernel: [   38.766720] block drbd0: attached to UUIDs 23A83AE9AB70041C:0000000000000000:0001000000000000:0001000000000004
May 15 12:26:04 server1 crmd: [1214]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1400136964-5 from server2
May 15 12:26:04 server1 crmd: [1214]: info: match_graph_event: Action Cluster-FS-DRBD:1_notify_0 (50) confirmed on server2 (rc=0)
May 15 12:26:04 server1 lrmd: [1211]: info: RA output: (Cluster-FS-DRBD:0:notify:stdout) drbdsetup-83 0 disk /dev/sda3 /dev/sda3 internal --set-defaults --create-device --fencing=resource-only #012drbdsetup-83 0 syncer --set-defaults --create-device --rate=15M #012drbdsetup-83 0 net ipv4:192.168.0.92:7788 ipv4:192.168.0.93:7788 C --set-defaults --create-device --allow-two-primaries --shared-secret=kalki 
May 15 12:26:04 server1 kernel: [   38.783502] block drbd0: conn( StandAlone -> Unconnected ) 
May 15 12:26:04 server1 kernel: [   38.783518] block drbd0: Starting receiver thread (from drbd0_worker [1920])
May 15 12:26:04 server1 kernel: [   38.783582] block drbd0: receiver (re)started
May 15 12:26:04 server1 kernel: [   38.783588] block drbd0: conn( Unconnected -> WFConnection ) 
May 15 12:26:04 server1 lrmd: [1211]: info: operation notify[5] on Cluster-FS-DRBD:0 for client 1214: pid 1877 exited with return code 0
May 15 12:26:04 server1 crmd: [1214]: info: send_direct_ack: ACK'ing resource op Cluster-FS-DRBD:0_notify_0 from 49:0:0:bab7803e-0cb3-44da-bd05-d09fe2f8f3af: lrm_invoke-lrmd-1400136964-22
May 15 12:26:04 server1 crmd: [1214]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1400136964-22 from server1
May 15 12:26:04 server1 crmd: [1214]: info: match_graph_event: Action Cluster-FS-DRBD:0_notify_0 (49) confirmed on server1 (rc=0)
May 15 12:26:04 server1 crmd: [1214]: info: process_lrm_event: LRM operation Cluster-FS-DRBD:0_notify_0 (call=5, rc=0, cib-update=0, confirmed=true) ok
May 15 12:26:04 server1 crmd: [1214]: info: te_pseudo_action: Pseudo action 18 fired and confirmed
May 15 12:26:04 server1 crmd: [1214]: info: run_graph: ====================================================
May 15 12:26:04 server1 crmd: [1214]: notice: run_graph: Transition 0 (Complete=17, Pending=0, Fired=0, Skipped=2, Incomplete=0, Source=/var/lib/pengine/pe-input-18.bz2): Stopped
May 15 12:26:04 server1 crmd: [1214]: info: te_graph_trigger: Transition 0 is now complete
May 15 12:26:04 server1 crmd: [1214]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
May 15 12:26:04 server1 crmd: [1214]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
May 15 12:26:04 server1 crmd: [1214]: info: do_pe_invoke: Query 32: Requesting the current CIB: S_POLICY_ENGINE
May 15 12:26:04 server1 crmd: [1214]: info: do_pe_invoke_callback: Invoking the PE: query=32, ref=pe_calc-dc-1400136964-23, seq=60, quorate=1
May 15 12:26:04 server1 pengine: [1213]: notice: unpack_config: On loss of CCM Quorum: Ignore
May 15 12:26:04 server1 pengine: [1213]: WARN: unpack_rsc_op: Processing failed op Cluster-FS-DRBD:0_last_failure_0 on server1: unknown error (1)
May 15 12:26:04 server1 pengine: [1213]: WARN: unpack_rsc_op: Processing failed op Cluster-FS-DRBD:1_last_failure_0 on server2: unknown error (1)
May 15 12:26:04 server1 pengine: [1213]: WARN: common_apply_stickiness: Forcing Cluster-FS-DRBD-Master away from server2 after 1000000 failures (max=1000000)
May 15 12:26:04 server1 pengine: [1213]: WARN: common_apply_stickiness: Forcing Cluster-FS-DRBD-Master away from server2 after 1000000 failures (max=1000000)
May 15 12:26:04 server1 pengine: [1213]: WARN: common_apply_stickiness: Forcing Cluster-FS-DRBD-Master away from server1 after 1000000 failures (max=1000000)
May 15 12:26:04 server1 pengine: [1213]: WARN: common_apply_stickiness: Forcing Cluster-FS-DRBD-Master away from server1 after 1000000 failures (max=1000000)
May 15 12:26:04 server1 pengine: [1213]: notice: LogActions: Stop    Cluster-FS-DRBD:0#011(server1)
May 15 12:26:04 server1 pengine: [1213]: notice: LogActions: Stop    Cluster-FS-DRBD:1#011(server2)
May 15 12:26:04 server1 pengine: [1213]: notice: LogActions: Leave   Cluster-FS-Mount:0#011(Stopped)
May 15 12:26:04 server1 pengine: [1213]: notice: LogActions: Leave   Cluster-FS-Mount:1#011(Stopped)
May 15 12:26:04 server1 crmd: [1214]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
May 15 12:26:04 server1 crmd: [1214]: info: unpack_graph: Unpacked transition 1: 11 actions in 11 synapses
May 15 12:26:04 server1 crmd: [1214]: info: do_te_invoke: Processing graph 1 (ref=pe_calc-dc-1400136964-23) derived from /var/lib/pengine/pe-input-19.bz2
May 15 12:26:04 server1 crmd: [1214]: info: te_pseudo_action: Pseudo action 15 fired and confirmed
May 15 12:26:04 server1 crmd: [1214]: info: te_rsc_command: Initiating action 45: notify Cluster-FS-DRBD:0_pre_notify_stop_0 on server1 (local)
May 15 12:26:04 server1 crmd: [1214]: info: do_lrm_rsc_op: Performing key=45:1:0:bab7803e-0cb3-44da-bd05-d09fe2f8f3af op=Cluster-FS-DRBD:0_notify_0 )
May 15 12:26:04 server1 lrmd: [1211]: info: rsc:Cluster-FS-DRBD:0 notify[6] (pid 1943)
May 15 12:26:04 server1 crmd: [1214]: info: te_rsc_command: Initiating action 46: notify Cluster-FS-DRBD:1_pre_notify_stop_0 on server2
May 15 12:26:04 server1 lrmd: [1211]: info: operation notify[6] on Cluster-FS-DRBD:0 for client 1214: pid 1943 exited with return code 0
May 15 12:26:04 server1 crmd: [1214]: info: send_direct_ack: ACK'ing resource op Cluster-FS-DRBD:0_notify_0 from 45:1:0:bab7803e-0cb3-44da-bd05-d09fe2f8f3af: lrm_invoke-lrmd-1400136964-26
May 15 12:26:04 server1 crmd: [1214]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1400136964-26 from server1
May 15 12:26:04 server1 crmd: [1214]: info: match_graph_event: Action Cluster-FS-DRBD:0_notify_0 (45) confirmed on server1 (rc=0)
May 15 12:26:04 server1 crmd: [1214]: info: process_lrm_event: LRM operation Cluster-FS-DRBD:0_notify_0 (call=6, rc=0, cib-update=0, confirmed=true) ok
May 15 12:26:04 server1 crmd: [1214]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1400136964-6 from server2
May 15 12:26:04 server1 crmd: [1214]: info: match_graph_event: Action Cluster-FS-DRBD:1_notify_0 (46) confirmed on server2 (rc=0)
May 15 12:26:04 server1 crmd: [1214]: info: te_pseudo_action: Pseudo action 16 fired and confirmed
May 15 12:26:04 server1 crmd: [1214]: info: te_pseudo_action: Pseudo action 13 fired and confirmed
May 15 12:26:04 server1 crmd: [1214]: info: te_rsc_command: Initiating action 1: stop Cluster-FS-DRBD:0_stop_0 on server1 (local)
May 15 12:26:04 server1 crmd: [1214]: info: do_lrm_rsc_op: Performing key=1:1:0:bab7803e-0cb3-44da-bd05-d09fe2f8f3af op=Cluster-FS-DRBD:0_stop_0 )
May 15 12:26:04 server1 lrmd: [1211]: info: rsc:Cluster-FS-DRBD:0 stop[7] (pid 1967)
May 15 12:26:04 server1 crmd: [1214]: info: te_rsc_command: Initiating action 2: stop Cluster-FS-DRBD:1_stop_0 on server2
May 15 12:26:04 server1 pengine: [1213]: notice: process_pe_message: Transition 1: PEngine Input stored in: /var/lib/pengine/pe-input-19.bz2
May 15 12:26:04 server1 lrmd: [1211]: info: RA output: (Cluster-FS-DRBD:0:stop:stderr) Could not connect to 'drbd' generic netlink family
May 15 12:26:04 server1 kernel: [   38.896663] block drbd0: conn( WFConnection -> Disconnecting ) 
May 15 12:26:04 server1 kernel: [   38.896683] block drbd0: Discarding network configuration.
May 15 12:26:04 server1 kernel: [   38.896704] block drbd0: Connection closed
May 15 12:26:04 server1 kernel: [   38.896708] block drbd0: conn( Disconnecting -> StandAlone ) 
May 15 12:26:04 server1 kernel: [   38.896728] block drbd0: receiver terminated
May 15 12:26:04 server1 kernel: [   38.896731] block drbd0: Terminating drbd0_receiver
May 15 12:26:04 server1 kernel: [   38.896752] block drbd0: disk( Consistent -> Failed ) 
May 15 12:26:04 server1 kernel: [   38.896763] block drbd0: Sending state for detaching disk failed
May 15 12:26:04 server1 kernel: [   38.896771] block drbd0: disk( Failed -> Diskless ) 
May 15 12:26:04 server1 kernel: [   38.896815] block drbd0: drbd_bm_resize called with capacity == 0
May 15 12:26:04 server1 kernel: [   38.896888] block drbd0: worker terminated
May 15 12:26:04 server1 kernel: [   38.896890] block drbd0: Terminating drbd0_worker
May 15 12:26:04 server1 lrmd: [1211]: info: RA output: (Cluster-FS-DRBD:0:stop:stdout) 
May 15 12:26:04 server1 crm_attribute: [1997]: info: Invoked: crm_attribute -N server1 -n master-Cluster-FS-DRBD:0 -l reboot -D 
May 15 12:26:04 server1 lrmd: [1211]: info: RA output: (Cluster-FS-DRBD:0:stop:stdout) 
May 15 12:26:04 server1 lrmd: [1211]: info: operation stop[7] on Cluster-FS-DRBD:0 for client 1214: pid 1967 exited with return code 0
May 15 12:26:04 server1 crmd: [1214]: info: process_lrm_event: LRM operation Cluster-FS-DRBD:0_stop_0 (call=7, rc=0, cib-update=33, confirmed=true) ok
May 15 12:26:04 server1 crmd: [1214]: info: match_graph_event: Action Cluster-FS-DRBD:0_stop_0 (1) confirmed on server1 (rc=0)
May 15 12:26:04 server1 crmd: [1214]: info: match_graph_event: Action Cluster-FS-DRBD:1_stop_0 (2) confirmed on server2 (rc=0)
May 15 12:26:04 server1 crmd: [1214]: info: te_pseudo_action: Pseudo action 14 fired and confirmed
May 15 12:26:04 server1 crmd: [1214]: info: te_pseudo_action: Pseudo action 17 fired and confirmed
May 15 12:26:04 server1 crmd: [1214]: info: te_pseudo_action: Pseudo action 18 fired and confirmed
May 15 12:26:04 server1 crmd: [1214]: info: te_pseudo_action: Pseudo action 3 fired and confirmed
May 15 12:26:04 server1 crmd: [1214]: info: run_graph: ====================================================
May 15 12:26:04 server1 crmd: [1214]: notice: run_graph: Transition 1 (Complete=11, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-19.bz2): Complete
May 15 12:26:04 server1 crmd: [1214]: info: te_graph_trigger: Transition 1 is now complete
May 15 12:26:04 server1 crmd: [1214]: info: notify_crmd: Transition 1 status: done - <null>
May 15 12:26:04 server1 crmd: [1214]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
May 15 12:26:04 server1 crmd: [1214]: info: do_state_transition: Starting PEngine Recheck Timer
-------------- next part (syslog from server2) --------------
May 15 12:25:45 server2 corosync[1117]:   [MAIN  ] Corosync Cluster Engine ('1.4.2'): started and ready to provide service.
May 15 12:25:45 server2 corosync[1117]:   [MAIN  ] Corosync built-in features: nss
May 15 12:25:45 server2 corosync[1117]:   [MAIN  ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
May 15 12:25:45 server2 corosync[1117]:   [TOTEM ] Initializing transport (UDP/IP Multicast).
May 15 12:25:45 server2 corosync[1117]:   [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
May 15 12:25:45 server2 logd: [1137]: info: logd started with /etc/logd.cf.
May 15 12:25:45 server2 openhpid: ERROR: (init.c, 76, OpenHPI is not configured.  See openhpi.conf file.)
May 15 12:25:45 server2 openhpid: ERROR: (openhpid.cpp, 270, There was an error initializing OpenHPI)
May 15 12:25:45 server2 kernel: [   16.368612] Event-channel device installed.
May 15 12:25:45 server2 xenstored: Checking store ...
May 15 12:25:45 server2 xenstored: Checking store complete.
May 15 12:25:45 server2 kernel: [   16.401409] XENBUS: Unable to read cpu state
May 15 12:25:45 server2 kernel: [   16.401543] XENBUS: Unable to read cpu state
May 15 12:25:45 server2 kernel: [   16.401677] XENBUS: Unable to read cpu state
May 15 12:25:45 server2 kernel: [   16.401795] XENBUS: Unable to read cpu state
May 15 12:25:45 server2 corosync[1117]:   [TOTEM ] The network interface [192.168.0.93] is now up.
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: process_ais_conf: Reading configure
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: config_find_init: Local handle: 4730966301143465987 for logging
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: config_find_next: Processing additional logging options...
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: get_config_opt: Found 'off' for option: debug
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: get_config_opt: Found 'no' for option: to_logfile
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: get_config_opt: Found 'yes' for option: to_syslog
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: get_config_opt: Found 'daemon' for option: syslog_facility
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: config_find_init: Local handle: 7739444317642555396 for quorum
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: config_find_next: No additional configuration supplied for: quorum
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: get_config_opt: No default for option: provider
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: config_find_init: Local handle: 5650605097994944517 for service
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: config_find_next: Processing additional service options...
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: get_config_opt: Found '0' for option: ver
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: get_config_opt: Defaulting to 'pcmk' for option: clustername
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: get_config_opt: Defaulting to 'no' for option: use_logd
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: get_config_opt: Defaulting to 'no' for option: use_mgmtd
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: pcmk_startup: CRM: Initialized
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] Logging: Initialized pcmk_startup
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: pcmk_startup: Maximum core file size is: 18446744073709551615
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: pcmk_startup: Service: 9
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: pcmk_startup: Local hostname: server2
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: pcmk_update_nodeid: Local node id: 1560324288
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: update_member: Creating entry for node 1560324288 born on 0
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: update_member: 0xb936b0 Node 1560324288 now known as server2 (was: (null))
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: update_member: Node server2 now has 1 quorum votes (was 0)
May 15 12:25:45 server2 stonith-ng: [1180]: info: Invoked: /usr/lib/heartbeat/stonithd 
May 15 12:25:45 server2 stonith-ng: [1180]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/root
May 15 12:25:45 server2 stonith-ng: [1180]: info: get_cluster_type: Cluster type is: 'openais'
May 15 12:25:45 server2 stonith-ng: [1180]: notice: crm_cluster_connect: Connecting to cluster infrastructure: classic openais (with plugin)
May 15 12:25:45 server2 stonith-ng: [1180]: info: init_ais_connection_classic: Creating connection to our Corosync plugin
May 15 12:25:45 server2 avahi-daemon[832]: Joining mDNS multicast group on interface xenbr0.IPv6 with address fe80::d6be:d9ff:fec0:61d2.
May 15 12:25:45 server2 avahi-daemon[832]: New relevant interface xenbr0.IPv6 for mDNS.
May 15 12:25:45 server2 avahi-daemon[832]: Registering new address record for fe80::d6be:d9ff:fec0:61d2 on xenbr0.*.
May 15 12:25:45 server2 cib: [1181]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/hacluster
May 15 12:25:45 server2 cib: [1181]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
May 15 12:25:45 server2 cib: [1181]: info: validate_with_relaxng: Creating RNG parser context
May 15 12:25:45 server2 cib: [1181]: info: startCib: CIB Initialization completed successfully
May 15 12:25:45 server2 cib: [1181]: info: get_cluster_type: Cluster type is: 'openais'
May 15 12:25:45 server2 cib: [1181]: notice: crm_cluster_connect: Connecting to cluster infrastructure: classic openais (with plugin)
May 15 12:25:45 server2 cib: [1181]: info: init_ais_connection_classic: Creating connection to our Corosync plugin
May 15 12:25:45 server2 lrmd: [1182]: info: enabling coredumps
May 15 12:25:45 server2 lrmd: [1182]: info: Started.
May 15 12:25:45 server2 attrd: [1183]: info: Invoked: /usr/lib/heartbeat/attrd 
May 15 12:25:45 server2 attrd: [1183]: notice: crm_cluster_connect: Connecting to cluster infrastructure: classic openais (with plugin)
May 15 12:25:45 server2 pengine: [1184]: info: Invoked: /usr/lib/heartbeat/pengine 
May 15 12:25:45 server2 crmd: [1185]: info: Invoked: /usr/lib/heartbeat/crmd 
May 15 12:25:45 server2 crmd: [1185]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/hacluster
May 15 12:25:45 server2 crmd: [1185]: info: main: CRM Hg Version: 9971ebba4494012a93c03b40a2c58ec0eb60f50c
May 15 12:25:45 server2 crmd: [1185]: info: crmd_init: Starting crmd
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: update_member: Node 1560324288/server2 is now: member
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: spawn_child: Forked child 1180 for process stonith-ng
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: spawn_child: Forked child 1181 for process cib
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: spawn_child: Forked child 1182 for process lrmd
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: spawn_child: Forked child 1183 for process attrd
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: spawn_child: Forked child 1184 for process pengine
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: spawn_child: Forked child 1185 for process crmd
May 15 12:25:45 server2 corosync[1117]:   [SERV  ] Service engine loaded: Pacemaker Cluster Manager 1.1.6
May 15 12:25:45 server2 corosync[1117]:   [SERV  ] Service engine loaded: corosync extended virtual synchrony service
May 15 12:25:45 server2 corosync[1117]:   [SERV  ] Service engine loaded: corosync configuration service
May 15 12:25:45 server2 corosync[1117]:   [SERV  ] Service engine loaded: corosync cluster closed process group service v1.01
May 15 12:25:45 server2 corosync[1117]:   [SERV  ] Service engine loaded: corosync cluster config database access v1.01
May 15 12:25:45 server2 corosync[1117]:   [SERV  ] Service engine loaded: corosync profile loading service
May 15 12:25:45 server2 corosync[1117]:   [SERV  ] Service engine loaded: corosync cluster quorum service v0.1
May 15 12:25:45 server2 corosync[1117]:   [MAIN  ] Compatibility mode set to whitetank.  Using V1 and V2 of the synchronization engine.
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] notice: pcmk_peer_update: Transitional membership event on ring 52: memb=0, new=0, lost=0
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] notice: pcmk_peer_update: Stable membership event on ring 52: memb=1, new=1, lost=0
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: pcmk_peer_update: NEW:  server2 1560324288
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: pcmk_peer_update: MEMB: server2 1560324288
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: update_member: Node server2 now has process list: 00000000000000000000000000111312 (1118994)
May 15 12:25:45 server2 corosync[1117]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.
May 15 12:25:45 server2 corosync[1117]:   [CPG   ] chosen downlist: sender r(0) ip(192.168.0.93) ; members(old:0 left:0)
May 15 12:25:45 server2 corosync[1117]:   [MAIN  ] Completed service synchronization, ready to provide service.
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: pcmk_ipc: Recorded connection 0xba59e0 for attrd/1183
May 15 12:25:45 server2 stonith-ng: [1180]: info: init_ais_connection_classic: AIS connection established
May 15 12:25:45 server2 cib: [1181]: info: init_ais_connection_classic: AIS connection established
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: pcmk_ipc: Recorded connection 0xb9ce80 for stonith-ng/1180
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: pcmk_ipc: Recorded connection 0xba11e0 for cib/1181
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: pcmk_ipc: Sending membership update 52 to cib
May 15 12:25:45 server2 attrd: [1183]: notice: main: Starting mainloop...
May 15 12:25:45 server2 stonith-ng: [1180]: info: get_ais_nodeid: Server details: id=1560324288 uname=server2 cname=pcmk
May 15 12:25:45 server2 stonith-ng: [1180]: info: init_ais_connection_once: Connection to 'classic openais (with plugin)': established
May 15 12:25:45 server2 stonith-ng: [1180]: info: crm_new_peer: Node server2 now has id: 1560324288
May 15 12:25:45 server2 stonith-ng: [1180]: info: crm_new_peer: Node 1560324288 is now known as server2
May 15 12:25:45 server2 stonith-ng: [1180]: info: main: Starting stonith-ng mainloop
May 15 12:25:45 server2 cib: [1181]: info: get_ais_nodeid: Server details: id=1560324288 uname=server2 cname=pcmk
May 15 12:25:45 server2 cib: [1181]: info: init_ais_connection_once: Connection to 'classic openais (with plugin)': established
May 15 12:25:45 server2 cib: [1181]: info: crm_new_peer: Node server2 now has id: 1560324288
May 15 12:25:45 server2 cib: [1181]: info: crm_new_peer: Node 1560324288 is now known as server2
May 15 12:25:45 server2 cib: [1181]: info: cib_init: Starting cib mainloop
May 15 12:25:45 server2 cib: [1181]: info: ais_dispatch_message: Membership 52: quorum still lost
May 15 12:25:45 server2 cib: [1181]: info: crm_update_peer: Node server2: id=1560324288 state=member (new) addr=r(0) ip(192.168.0.93)  (new) votes=1 (new) born=0 seen=52 proc=00000000000000000000000000111312 (new)
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] notice: pcmk_peer_update: Transitional membership event on ring 60: memb=1, new=0, lost=0
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: pcmk_peer_update: memb: server2 1560324288
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] notice: pcmk_peer_update: Stable membership event on ring 60: memb=2, new=1, lost=0
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: update_member: Creating entry for node 1543547072 born on 60
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: update_member: Node 1543547072/unknown is now: member
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: pcmk_peer_update: NEW:  .pending. 1543547072
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: pcmk_peer_update: MEMB: .pending. 1543547072
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: pcmk_peer_update: MEMB: server2 1560324288
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: send_member_notification: Sending membership update 60 to 1 children
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: update_member: 0xb936b0 Node 1560324288 ((null)) born on: 60
May 15 12:25:45 server2 corosync[1117]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.
May 15 12:25:45 server2 cib: [1181]: info: ais_dispatch_message: Membership 60: quorum still lost
May 15 12:25:45 server2 cib: [1181]: info: crm_new_peer: Node <null> now has id: 1543547072
May 15 12:25:45 server2 cib: [1181]: info: crm_update_peer: Node (null): id=1543547072 state=member (new) addr=r(0) ip(192.168.0.92)  votes=0 born=0 seen=60 proc=00000000000000000000000000000000
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: update_member: 0xb9acf0 Node 1543547072 (server1) born on: 60
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: update_member: 0xb9acf0 Node 1543547072 now known as server1 (was: (null))
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: update_member: Node server1 now has process list: 00000000000000000000000000111312 (1118994)
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: update_member: Node server1 now has 1 quorum votes (was 0)
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] info: send_member_notification: Sending membership update 60 to 1 children
May 15 12:25:45 server2 cib: [1181]: notice: ais_dispatch_message: Membership 60: quorum acquired
May 15 12:25:45 server2 cib: [1181]: info: crm_get_peer: Node 1543547072 is now known as server1
May 15 12:25:45 server2 cib: [1181]: info: crm_update_peer: Node server1: id=1543547072 state=member addr=r(0) ip(192.168.0.92)  votes=1 (new) born=60 seen=60 proc=00000000000000000000000000111312 (new)
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] WARN: route_ais_message: Sending message to local.crmd failed: ipc delivery failed (rc=-2)
May 15 12:25:45 server2 corosync[1117]:   [pcmk  ] WARN: route_ais_message: Sending message to local.crmd failed: ipc delivery failed (rc=-2)
May 15 12:25:45 server2 corosync[1117]:   [CPG   ] chosen downlist: sender r(0) ip(192.168.0.92) ; members(old:1 left:0)
May 15 12:25:45 server2 corosync[1117]:   [MAIN  ] Completed service synchronization, ready to provide service.
May 15 12:25:45 server2 dbus[806]: [system] Activating service name='org.freedesktop.Accounts' (using servicehelper)
May 15 12:25:45 server2 dbus[806]: [system] Successfully activated service 'org.freedesktop.Accounts'
May 15 12:25:46 server2 accounts-daemon[1196]: started daemon version 0.6.15
May 15 12:25:46 server2 dbus[806]: [system] Activating service name='org.freedesktop.ConsoleKit' (using servicehelper)
May 15 12:25:46 server2 dbus[806]: [system] Successfully activated service 'org.freedesktop.ConsoleKit'
May 15 12:25:46 server2 avahi-daemon[832]: Joining mDNS multicast group on interface eth1.IPv6 with address fe80::d6be:d9ff:fec0:61d2.
May 15 12:25:46 server2 avahi-daemon[832]: New relevant interface eth1.IPv6 for mDNS.
May 15 12:25:46 server2 avahi-daemon[832]: Registering new address record for fe80::d6be:d9ff:fec0:61d2 on eth1.*.
May 15 12:25:46 server2 dbus[806]: [system] Activating service name='org.freedesktop.UPower' (using servicehelper)
May 15 12:25:46 server2 dbus[806]: [system] Successfully activated service 'org.freedesktop.UPower'
May 15 12:25:46 server2 anacron[1435]: Anacron 2.3 started on 2014-05-15
May 15 12:25:46 server2 anacron[1435]: Normal exit (0 jobs run)
May 15 12:25:46 server2 crmd: [1185]: info: do_cib_control: CIB connection established
May 15 12:25:46 server2 crmd: [1185]: info: get_cluster_type: Cluster type is: 'openais'
May 15 12:25:46 server2 crmd: [1185]: notice: crm_cluster_connect: Connecting to cluster infrastructure: classic openais (with plugin)
May 15 12:25:46 server2 crmd: [1185]: info: init_ais_connection_classic: Creating connection to our Corosync plugin
May 15 12:25:46 server2 crmd: [1185]: info: init_ais_connection_classic: AIS connection established
May 15 12:25:46 server2 corosync[1117]:   [pcmk  ] info: pcmk_ipc: Recorded connection 0xbaa230 for crmd/1185
May 15 12:25:46 server2 corosync[1117]:   [pcmk  ] info: pcmk_ipc: Sending membership update 60 to crmd
May 15 12:25:46 server2 crmd: [1185]: info: get_ais_nodeid: Server details: id=1560324288 uname=server2 cname=pcmk
May 15 12:25:46 server2 crmd: [1185]: info: init_ais_connection_once: Connection to 'classic openais (with plugin)': established
May 15 12:25:46 server2 crmd: [1185]: info: crm_new_peer: Node server2 now has id: 1560324288
May 15 12:25:46 server2 crmd: [1185]: info: crm_new_peer: Node 1560324288 is now known as server2
May 15 12:25:46 server2 crmd: [1185]: info: ais_status_callback: status: server2 is now unknown
May 15 12:25:46 server2 crmd: [1185]: info: do_ha_control: Connected to the cluster
May 15 12:25:46 server2 dbus[806]: [system] Activating service name='org.freedesktop.ColorManager' (using servicehelper)
May 15 12:25:46 server2 crmd: [1185]: info: do_started: Delaying start, no membership data (0000000000100000)
May 15 12:25:46 server2 crmd: [1185]: info: crmd_init: Starting crmd's mainloop
May 15 12:25:46 server2 crmd: [1185]: notice: ais_dispatch_message: Membership 60: quorum acquired
May 15 12:25:46 server2 crmd: [1185]: info: crm_new_peer: Node server1 now has id: 1543547072
May 15 12:25:46 server2 crmd: [1185]: info: crm_new_peer: Node 1543547072 is now known as server1
May 15 12:25:46 server2 crmd: [1185]: info: ais_status_callback: status: server1 is now unknown
May 15 12:25:46 server2 crmd: [1185]: info: ais_status_callback: status: server1 is now member (was unknown)
May 15 12:25:46 server2 crmd: [1185]: info: crm_update_peer: Node server1: id=1543547072 state=member (new) addr=r(0) ip(192.168.0.92)  votes=1 born=60 seen=60 proc=00000000000000000000000000111312
May 15 12:25:46 server2 crmd: [1185]: notice: crmd_peer_update: Status update: Client server2/crmd now has status [online] (DC=<null>)
May 15 12:25:46 server2 crmd: [1185]: info: ais_status_callback: status: server2 is now member (was unknown)
May 15 12:25:46 server2 crmd: [1185]: info: crm_update_peer: Node server2: id=1560324288 state=member (new) addr=r(0) ip(192.168.0.93)  (new) votes=1 (new) born=60 seen=60 proc=00000000000000000000000000111312 (new)
May 15 12:25:46 server2 crmd: [1185]: info: do_started: Delaying start, Config not read (0000000000000040)
May 15 12:25:46 server2 crmd: [1185]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms
May 15 12:25:46 server2 crmd: [1185]: info: config_query_callback: Checking for expired actions every 900000ms
May 15 12:25:46 server2 crmd: [1185]: info: config_query_callback: Sending expected-votes=2 to corosync
May 15 12:25:46 server2 crmd: [1185]: info: do_started: The local CRM is operational
May 15 12:25:46 server2 crmd: [1185]: info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
May 15 12:25:46 server2 dbus[806]: [system] Successfully activated service 'org.freedesktop.ColorManager'
May 15 12:25:46 server2 dbus[806]: [system] Activating service name='org.freedesktop.RealtimeKit1' (using servicehelper)
May 15 12:25:46 server2 dbus[806]: [system] Successfully activated service 'org.freedesktop.RealtimeKit1'
May 15 12:25:46 server2 rtkit-daemon[1533]: Successfully called chroot.
May 15 12:25:46 server2 rtkit-daemon[1533]: Successfully dropped privileges.
May 15 12:25:46 server2 rtkit-daemon[1533]: Successfully limited resources.
May 15 12:25:46 server2 rtkit-daemon[1533]: Running.
May 15 12:25:46 server2 rtkit-daemon[1533]: Canary thread running.
May 15 12:25:46 server2 rtkit-daemon[1533]: Watchdog thread running.
May 15 12:25:46 server2 rtkit-daemon[1533]: Successfully made thread 1531 of process 1531 (n/a) owned by '104' high priority at nice level -11.
May 15 12:25:46 server2 rtkit-daemon[1533]: Supervising 1 threads of 1 processes of 1 users.
May 15 12:25:47 server2 NetworkManager[835]: <info> Unmanaged Device found; state CONNECTED forced. (see http://bugs.launchpad.net/bugs/191889)
May 15 12:25:47 server2 rtkit-daemon[1533]: Successfully made thread 1540 of process 1531 (n/a) owned by '104' RT at priority 5.
May 15 12:25:47 server2 rtkit-daemon[1533]: Supervising 2 threads of 1 processes of 1 users.
May 15 12:25:47 server2 rtkit-daemon[1533]: Successfully made thread 1541 of process 1531 (n/a) owned by '104' RT at priority 5.
May 15 12:25:47 server2 rtkit-daemon[1533]: Supervising 3 threads of 1 processes of 1 users.
May 15 12:25:47 server2 rtkit-daemon[1533]: Successfully made thread 1544 of process 1544 (n/a) owned by '104' high priority at nice level -11.
May 15 12:25:47 server2 rtkit-daemon[1533]: Supervising 4 threads of 2 processes of 1 users.
May 15 12:25:47 server2 pulseaudio[1544]: [pulseaudio] pid.c: Daemon already running.
May 15 12:25:47 server2 kernel: [   18.176303] init: plymouth-stop pre-start process (1609) terminated with status 1
May 15 12:25:47 server2 crmd: [1185]: info: ais_dispatch_message: Membership 60: quorum retained
May 15 12:25:47 server2 crmd: [1185]: info: te_connect_stonith: Attempting connection to fencing daemon...
May 15 12:25:48 server2 crmd: [1185]: info: te_connect_stonith: Connected
May 15 12:25:54 server2 ntpdate[919]: step time server 91.189.94.4 offset 0.221714 sec
May 15 12:25:55 server2 kernel: [   25.450546] xenbr0: no IPv6 routers present
May 15 12:25:55 server2 kernel: [   25.994546] eth1: no IPv6 routers present
May 15 12:26:03 server2 crmd: [1185]: info: update_dc: Set DC to server1 (3.0.5)
May 15 12:26:03 server2 crmd: [1185]: info: update_attrd: Connecting to attrd...
May 15 12:26:03 server2 crmd: [1185]: info: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
May 15 12:26:03 server2 attrd: [1183]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
May 15 12:26:03 server2 crmd: [1185]: info: erase_xpath_callback: Deletion of "//node_state[@uname='server2']/transient_attributes": ok (rc=0)
May 15 12:26:03 server2 crmd: [1185]: info: do_lrm_rsc_op: Performing key=4:0:7:bab7803e-0cb3-44da-bd05-d09fe2f8f3af op=Cluster-FS-DRBD:1_monitor_0 )
May 15 12:26:03 server2 lrmd: [1182]: info: rsc:Cluster-FS-DRBD:1 probe[2] (pid 1617)
May 15 12:26:03 server2 crmd: [1185]: info: do_lrm_rsc_op: Performing key=5:0:7:bab7803e-0cb3-44da-bd05-d09fe2f8f3af op=Cluster-FS-Mount:1_monitor_0 )
May 15 12:26:03 server2 lrmd: [1182]: info: rsc:Cluster-FS-Mount:1 probe[3] (pid 1618)
May 15 12:26:04 server2 Filesystem[1618]: WARNING: Couldn't find device [/dev/drbd/by-res/r0]. Expected /dev/??? to exist
May 15 12:26:04 server2 lrmd: [1182]: info: operation monitor[3] on Cluster-FS-Mount:1 for client 1185: pid 1618 exited with return code 7
May 15 12:26:04 server2 crmd: [1185]: info: process_lrm_event: LRM operation Cluster-FS-Mount:1_monitor_0 (call=3, rc=7, cib-update=8, confirmed=true) not running
May 15 12:26:04 server2 crm_attribute: [1678]: info: Invoked: crm_attribute -N server2 -n master-Cluster-FS-DRBD:1 -l reboot -D 
May 15 12:26:04 server2 lrmd: [1182]: info: operation monitor[2] on Cluster-FS-DRBD:1 for client 1185: pid 1617 exited with return code 7
May 15 12:26:04 server2 crmd: [1185]: info: process_lrm_event: LRM operation Cluster-FS-DRBD:1_monitor_0 (call=2, rc=7, cib-update=9, confirmed=true) not running
May 15 12:26:04 server2 attrd: [1183]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
May 15 12:26:04 server2 attrd: [1183]: notice: attrd_perform_update: Sent update 5: probe_complete=true
May 15 12:26:04 server2 crmd: [1185]: info: do_lrm_rsc_op: Performing key=11:0:0:bab7803e-0cb3-44da-bd05-d09fe2f8f3af op=Cluster-FS-DRBD:1_start_0 )
May 15 12:26:04 server2 lrmd: [1182]: info: rsc:Cluster-FS-DRBD:1 start[4] (pid 1679)
May 15 12:26:04 server2 lrmd: [1182]: info: RA output: (Cluster-FS-DRBD:1:start:stdout) 
May 15 12:26:04 server2 kernel: [   34.539678] drbd: initialized. Version: 8.3.11 (api:88/proto:86-96)
May 15 12:26:04 server2 kernel: [   34.539682] drbd: srcversion: 71955441799F513ACA6DA60 
May 15 12:26:04 server2 kernel: [   34.539683] drbd: registered as block device major 147
May 15 12:26:04 server2 kernel: [   34.539685] drbd: minor_table @ 0xffff8800a3eef000
May 15 12:26:04 server2 lrmd: [1182]: info: RA output: (Cluster-FS-DRBD:1:start:stdout) 
May 15 12:26:04 server2 lrmd: [1182]: info: RA output: (Cluster-FS-DRBD:1:start:stderr) Could not connect to 'drbd' generic netlink family
May 15 12:26:04 server2 lrmd: [1182]: info: RA output: (Cluster-FS-DRBD:1:start:stderr) Command 'drbdsetup new-resource r0
May 15 12:26:04 server2 lrmd: [1182]: info: RA output: (Cluster-FS-DRBD:1:start:stderr) ' terminated with exit code 20
May 15 12:26:04 server2 drbd[1679]: ERROR: r0: Called drbdadm -c /etc/drbd.conf new-resource r0
May 15 12:26:04 server2 drbd[1679]: ERROR: r0: Exit code 20
May 15 12:26:04 server2 drbd[1679]: ERROR: r0: Command output: 
May 15 12:26:04 server2 lrmd: [1182]: info: RA output: (Cluster-FS-DRBD:1:start:stdout) 
May 15 12:26:04 server2 lrmd: [1182]: info: operation start[4] on Cluster-FS-DRBD:1 for client 1185: pid 1679 exited with return code 1
May 15 12:26:04 server2 crmd: [1185]: info: process_lrm_event: LRM operation Cluster-FS-DRBD:1_start_0 (call=4, rc=1, cib-update=10, confirmed=true) unknown error
May 15 12:26:04 server2 attrd: [1183]: notice: attrd_ais_dispatch: Update relayed from server1
May 15 12:26:04 server2 attrd: [1183]: notice: attrd_trigger_update: Sending flush op to all hosts for: fail-count-Cluster-FS-DRBD:1 (INFINITY)
May 15 12:26:04 server2 attrd: [1183]: notice: attrd_perform_update: Sent update 8: fail-count-Cluster-FS-DRBD:1=INFINITY
May 15 12:26:04 server2 attrd: [1183]: notice: attrd_ais_dispatch: Update relayed from server1
May 15 12:26:04 server2 attrd: [1183]: notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-Cluster-FS-DRBD:1 (1400136964)
May 15 12:26:04 server2 attrd: [1183]: notice: attrd_perform_update: Sent update 11: last-failure-Cluster-FS-DRBD:1=1400136964
May 15 12:26:04 server2 crmd: [1185]: info: do_lrm_rsc_op: Performing key=50:0:0:bab7803e-0cb3-44da-bd05-d09fe2f8f3af op=Cluster-FS-DRBD:1_notify_0 )
May 15 12:26:04 server2 lrmd: [1182]: info: rsc:Cluster-FS-DRBD:1 notify[5] (pid 1736)
May 15 12:26:04 server2 lrmd: [1182]: info: RA output: (Cluster-FS-DRBD:1:notify:stderr) Could not connect to 'drbd' generic netlink family
May 15 12:26:04 server2 kernel: [   34.729061] block drbd0: Starting worker thread (from drbdsetup-83 [1768])
May 15 12:26:04 server2 kernel: [   34.729430] block drbd0: disk( Diskless -> Attaching ) 
May 15 12:26:04 server2 kernel: [   34.730053] block drbd0: ASSERT( from_tnr - cnr + i - from == mx+1 ) in /build/buildd/linux-3.2.0/drivers/block/drbd/drbd_actlog.c:468
May 15 12:26:04 server2 kernel: [   34.731493] block drbd0: Found 4 transactions (142 active extents) in activity log.
May 15 12:26:04 server2 kernel: [   34.731496] block drbd0: Method to ensure write ordering: flush
May 15 12:26:04 server2 kernel: [   34.731499] block drbd0: max BIO size = 131072
May 15 12:26:04 server2 kernel: [   34.731503] block drbd0: drbd_bm_resize called with capacity == 78122592
May 15 12:26:04 server2 kernel: [   34.731864] block drbd0: resync bitmap: bits=9765324 words=152584 pages=299
May 15 12:26:04 server2 kernel: [   34.731868] block drbd0: size = 37 GB (39061296 KB)
May 15 12:26:04 server2 kernel: [   34.754380] block drbd0: bitmap READ of 299 pages took 5 jiffies
May 15 12:26:04 server2 kernel: [   34.754667] block drbd0: recounting of set bits took additional 1 jiffies
May 15 12:26:04 server2 kernel: [   34.754670] block drbd0: 0 KB (0 bits) marked out-of-sync by on disk bit-map.
May 15 12:26:04 server2 kernel: [   34.754680] block drbd0: disk( Attaching -> Consistent ) 
May 15 12:26:04 server2 kernel: [   34.754684] block drbd0: attached to UUIDs 23A83AE9AB70041C:0000000000000000:0001000000000000:0001000000000004
May 15 12:26:04 server2 lrmd: [1182]: info: RA output: (Cluster-FS-DRBD:1:notify:stdout) drbdsetup-83 0 disk /dev/sda3 /dev/sda3 internal --set-defaults --create-device --fencing=resource-only 
May 15 12:26:04 server2 lrmd: [1182]: info: RA output: (Cluster-FS-DRBD:1:notify:stdout) drbdsetup-83 0 syncer --set-defaults --create-device --rate=15M 
May 15 12:26:04 server2 lrmd: [1182]: info: RA output: (Cluster-FS-DRBD:1:notify:stdout) drbdsetup-83 0 net ipv4:192.168.0.93:7788 ipv4:192.168.0.92:7788 C --set-defaults --create-device --allow-two-primaries --shared-secret=kalki 
May 15 12:26:04 server2 kernel: [   34.773093] block drbd0: conn( StandAlone -> Unconnected ) 
May 15 12:26:04 server2 kernel: [   34.773112] block drbd0: Starting receiver thread (from drbd0_worker [1769])
May 15 12:26:04 server2 kernel: [   34.773163] block drbd0: receiver (re)started
May 15 12:26:04 server2 kernel: [   34.773169] block drbd0: conn( Unconnected -> WFConnection ) 
May 15 12:26:04 server2 lrmd: [1182]: info: operation notify[5] on Cluster-FS-DRBD:1 for client 1185: pid 1736 exited with return code 0
May 15 12:26:04 server2 crmd: [1185]: info: send_direct_ack: ACK'ing resource op Cluster-FS-DRBD:1_notify_0 from 50:0:0:bab7803e-0cb3-44da-bd05-d09fe2f8f3af: lrm_invoke-lrmd-1400136964-5
May 15 12:26:04 server2 crmd: [1185]: info: process_lrm_event: LRM operation Cluster-FS-DRBD:1_notify_0 (call=5, rc=0, cib-update=0, confirmed=true) ok
May 15 12:26:04 server2 crmd: [1185]: info: do_lrm_rsc_op: Performing key=46:1:0:bab7803e-0cb3-44da-bd05-d09fe2f8f3af op=Cluster-FS-DRBD:1_notify_0 )
May 15 12:26:04 server2 lrmd: [1182]: info: rsc:Cluster-FS-DRBD:1 notify[6] (pid 1792)
May 15 12:26:04 server2 lrmd: [1182]: info: operation notify[6] on Cluster-FS-DRBD:1 for client 1185: pid 1792 exited with return code 0
May 15 12:26:04 server2 crmd: [1185]: info: send_direct_ack: ACK'ing resource op Cluster-FS-DRBD:1_notify_0 from 46:1:0:bab7803e-0cb3-44da-bd05-d09fe2f8f3af: lrm_invoke-lrmd-1400136964-6
May 15 12:26:04 server2 crmd: [1185]: info: process_lrm_event: LRM operation Cluster-FS-DRBD:1_notify_0 (call=6, rc=0, cib-update=0, confirmed=true) ok
May 15 12:26:04 server2 crmd: [1185]: info: do_lrm_rsc_op: Performing key=2:1:0:bab7803e-0cb3-44da-bd05-d09fe2f8f3af op=Cluster-FS-DRBD:1_stop_0 )
May 15 12:26:04 server2 lrmd: [1182]: info: rsc:Cluster-FS-DRBD:1 stop[7] (pid 1815)
May 15 12:26:04 server2 lrmd: [1182]: info: RA output: (Cluster-FS-DRBD:1:stop:stderr) Could not connect to 'drbd' generic netlink family
May 15 12:26:04 server2 kernel: [   34.905803] block drbd0: conn( WFConnection -> Disconnecting ) 
May 15 12:26:04 server2 kernel: [   34.905822] block drbd0: Discarding network configuration.
May 15 12:26:04 server2 kernel: [   34.905845] block drbd0: Connection closed
May 15 12:26:04 server2 kernel: [   34.905849] block drbd0: conn( Disconnecting -> StandAlone ) 
May 15 12:26:04 server2 kernel: [   34.905884] block drbd0: receiver terminated
May 15 12:26:04 server2 kernel: [   34.905887] block drbd0: Terminating drbd0_receiver
May 15 12:26:04 server2 kernel: [   34.905913] block drbd0: disk( Consistent -> Failed ) 
May 15 12:26:04 server2 kernel: [   34.905926] block drbd0: Sending state for detaching disk failed
May 15 12:26:04 server2 kernel: [   34.905934] block drbd0: disk( Failed -> Diskless ) 
May 15 12:26:04 server2 kernel: [   34.905986] block drbd0: drbd_bm_resize called with capacity == 0
May 15 12:26:04 server2 kernel: [   34.906069] block drbd0: worker terminated
May 15 12:26:04 server2 kernel: [   34.906072] block drbd0: Terminating drbd0_worker
May 15 12:26:04 server2 lrmd: [1182]: info: RA output: (Cluster-FS-DRBD:1:stop:stdout) 
May 15 12:26:04 server2 crm_attribute: [1845]: info: Invoked: crm_attribute -N server2 -n master-Cluster-FS-DRBD:1 -l reboot -D 
May 15 12:26:04 server2 lrmd: [1182]: info: RA output: (Cluster-FS-DRBD:1:stop:stdout) 
May 15 12:26:04 server2 lrmd: [1182]: info: operation stop[7] on Cluster-FS-DRBD:1 for client 1185: pid 1815 exited with return code 0
May 15 12:26:04 server2 crmd: [1185]: info: process_lrm_event: LRM operation Cluster-FS-DRBD:1_stop_0 (call=7, rc=0, cib-update=11, confirmed=true) ok
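[Editor's note, not part of the original log] The start failure above ("Could not connect to 'drbd' generic netlink family", drbdsetup new-resource exiting with code 20, followed by the resource agent falling back to drbdsetup-83 calls) is the pattern typically seen when DRBD 8.4-era userland tools run against an older 8.3 kernel module; the kernel line in this log reports module version 8.3.11. A minimal sketch for comparing the two versions on either node, assuming the stock drbd8-utils packaging on Ubuntu 12.04 (package and field names may differ on other releases):

  # Kernel module version: first line of /proc/drbd shows the loaded module,
  # e.g. "version: 8.3.11 (api:88/proto:86-96)"
  head -1 /proc/drbd
  # alternatively, query the module metadata directly
  modinfo drbd | grep '^version'

  # Userland tools version: drbdadm --version prints DRBDADM_VERSION among
  # other fields; it should match the major/minor of the kernel module
  drbdadm --version
  # and the installed package version for reference
  dpkg -l drbd8-utils

If the userland reports 8.4.x while the module is 8.3.x (or vice versa), aligning the two (matching drbd8-utils with the module shipped by the running kernel, or vice versa) is the usual first step before retesting the Pacemaker-managed start.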

