[DRBD-user] Data rollback when reconnecting diskless device

kvaps kvapss at gmail.com
Tue Dec 4 16:45:48 CET 2018


Hi, we're using Kubernetes with the linstor driver, and we have a quite
unpleasant problem with drbd.
We have two nodes with the data, m7c7 and m8c9, and one diskless node,
m8c23, where the pod is running.
After updating the pod we lost the last three days of data.
Here are the full logs and the actions that were executed during this incident.

* The pod was running on the m8c23 node and working fine.
* We replaced its image and Kubernetes initiated the pod recreation
procedure.
* The data volume was unmounted, and the pod was removed successfully:

# diskless node: m8c23
[Mon Dec  3 15:09:46 2018] drbd hosting-vol-data-web-hc1-wd24-0: role(
Primary -> Secondary )
[Mon Dec  3 15:09:46 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m7c7: pdsk( UpToDate -> Outdated )
[Mon Dec  3 15:09:46 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m7c7: pdsk( Outdated -> Inconsistent ) resync-susp( no ->
peer )

# data node: m7c7
[Mon Dec  3 15:09:43 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c23:
peer( Primary -> Secondary )
[Mon Dec  3 15:09:43 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017: disk( UpToDate -> Outdated )
[Mon Dec  3 15:09:43 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017: bitmap WRITE of 111 pages took 8 ms
[Mon Dec  3 15:09:43 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m8c9: drbd_sync_handshake:
[Mon Dec  3 15:09:43 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m8c9: self
EAF04B095DA0C8D6:0000000000000000:94DB996564E2DEAE:0000000000000000
bits:42912536 flags:20
[Mon Dec  3 15:09:43 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m8c9: peer
C7D5AA5CA312C404:FFFFFFFFFFFFFFFF:EAF04B095DA0C8D6:D5FF87E9F97B5EC6
bits:0 flags:A0
[Mon Dec  3 15:09:43 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m8c9: uuid_compare()=-3 by rule 60
[Mon Dec  3 15:09:43 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m8c9: Writing the whole bitmap, full sync required after
drbd_sync_handshake.
[Mon Dec  3 15:09:43 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017: bitmap WRITE of 30547 pages took 136 ms
[Mon Dec  3 15:09:43 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m8c9: Becoming WFBitMapT after unstable
[Mon Dec  3 15:09:43 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m8c9: repl( Established -> WFBitMapT )
[Mon Dec  3 15:09:43 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m8c9: receive bitmap stats [Bytes(packets)]: plain 0(0), RLE
23(1), total 23; compression: 100.0%
[Mon Dec  3 15:09:44 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m8c9: send bitmap stats [Bytes(packets)]: plain 0(0), RLE
23(1), total 23; compression: 100.0%
[Mon Dec  3 15:09:44 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m8c9: helper command: /sbin/drbdadm before-resync-target
[Mon Dec  3 15:09:44 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m8c9: helper command: /sbin/drbdadm before-resync-target exit
code 0 (0x0)
[Mon Dec  3 15:09:44 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017: disk( Outdated -> Inconsistent )
[Mon Dec  3 15:09:44 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m8c23: resync-susp( no -> connection dependency )
[Mon Dec  3 15:09:44 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m8c9: repl( WFBitMapT -> SyncTarget )
[Mon Dec  3 15:09:44 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m8c9: Began resync as SyncTarget (will sync 734006072 KB
[183501518 bits set]).


# data node: m8c9
[Mon Dec  3 15:09:45 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c23:
peer( Primary -> Secondary )
[Mon Dec  3 15:09:45 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m7c7: drbd_sync_handshake:
[Mon Dec  3 15:09:45 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m7c7: self
C7D5AA5CA312C404:0000000000000000:EAF04B095DA0C8D6:D5FF87E9F97B5EC6
bits:0 flags:20
[Mon Dec  3 15:09:45 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m7c7: peer
EAF04B095DA0C8D6:FFFFFFFFFFFFFFFF:94DB996564E2DEAE:0000000000000000
bits:42912536 flags:60
[Mon Dec  3 15:09:45 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m7c7: uuid_compare()=3 by rule 80
[Mon Dec  3 15:09:45 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m7c7: Writing the whole bitmap, full sync required after
drbd_sync_handshake.
[Mon Dec  3 15:09:45 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017: drbd_w_hosting-[26131] going to 'demote diskless peer' but
bitmap already locked for 'set_n_write from sync_handshake' by
drbd_r_hosting-[26165]
[Mon Dec  3 15:09:45 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017: bitmap WRITE of 39201 pages took 276 ms
[Mon Dec  3 15:09:45 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m7c7: Becoming WFBitMapS after unstable
[Mon Dec  3 15:09:45 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m7c7: pdsk( UpToDate -> Consistent ) repl( Established ->
WFBitMapS )
[Mon Dec  3 15:09:45 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m7c7: pdsk( Consistent -> Outdated )
[Mon Dec  3 15:09:45 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m7c7: send bitmap stats [Bytes(packets)]: plain 0(0), RLE
23(1), total 23; compression: 100.0%
[Mon Dec  3 15:09:45 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m7c7: receive bitmap stats [Bytes(packets)]: plain 0(0), RLE
23(1), total 23; compression: 100.0%
[Mon Dec  3 15:09:45 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m7c7: helper command: /sbin/drbdadm before-resync-source
[Mon Dec  3 15:09:45 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m7c7: helper command: /sbin/drbdadm before-resync-source exit
code 0 (0x0)
[Mon Dec  3 15:09:45 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m7c7: pdsk( Outdated -> Inconsistent ) repl( WFBitMapS ->
SyncSource )
[Mon Dec  3 15:09:45 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m7c7: Began resync as SyncSource (will sync 734006072 KB
[183501518 bits set]).


* Then a new pod was created on the same machine (m8c23), but it got
stuck on FailedMount (see the flexvolume driver logs at the bottom of
this email):

Warning  FailedMount  29s (x7 over 1m)  kubelet, m8c23
MountVolume.MountDevice failed for volume
"hosting-vol-data-web-hc1-wd24-0" : mountdevice command failed,
status: Failure, reason: Linstor Flexvoume API: mountdevice: unable to
mount device: couldn't create ext4 filesystem exit status 1: "mke2fs
1.44.1 (24-Mar-2018)\nCould not open /dev/drbd1017: Wrong medium
type\n"


* Linstor was showing the resource as Inconsistent on one data node:

# linstor r l | grep wd24
| hosting-vol-data-web-hc1-wd24-0        | m7c7  | 7017 | Unused |
Inconsistent |
| hosting-vol-data-web-hc1-wd24-0        | m8c23 | 7017 | Unused |
Diskless |
| hosting-vol-data-web-hc1-wd24-0        | m8c9  | 7017 | Unused |
UpToDate |

* I went to m8c23 (the diskless node) and ran:

drbdadm disconnect hosting-vol-data-web-hc1-wd24-0
drbdadm connect hosting-vol-data-web-hc1-wd24-0

* After that the drbd device was mounted, and resynchronization started:

# diskless node: m8c23
[Mon Dec  3 15:14:20 2018] drbd hosting-vol-data-web-hc1-wd24-0:
Preparing cluster-wide state change 250224276 (1->0 496/16)
[Mon Dec  3 15:14:20 2018] drbd hosting-vol-data-web-hc1-wd24-0: State
change 250224276: primary_nodes=0, weak_nodes=0
[Mon Dec  3 15:14:20 2018] drbd hosting-vol-data-web-hc1-wd24-0:
Committing cluster-wide state change 250224276 (0ms)
[Mon Dec  3 15:14:20 2018] drbd hosting-vol-data-web-hc1-wd24-0 m7c7:
conn( Connected -> Disconnecting ) peer( Secondary -> Unknown )
[Mon Dec  3 15:14:20 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m7c7: repl( Established -> Off ) resync-susp( peer -> no )
[Mon Dec  3 15:14:20 2018] drbd hosting-vol-data-web-hc1-wd24-0 m7c7:
ack_receiver terminated
[Mon Dec  3 15:14:20 2018] drbd hosting-vol-data-web-hc1-wd24-0 m7c7:
Terminating ack_recv thread
[Mon Dec  3 15:14:20 2018] drbd hosting-vol-data-web-hc1-wd24-0 m7c7:
Connection closed
[Mon Dec  3 15:14:20 2018] drbd hosting-vol-data-web-hc1-wd24-0 m7c7:
conn( Disconnecting -> StandAlone )
[Mon Dec  3 15:14:20 2018] drbd hosting-vol-data-web-hc1-wd24-0 m7c7:
Terminating receiver thread
[Mon Dec  3 15:14:20 2018] drbd hosting-vol-data-web-hc1-wd24-0:
Preparing cluster-wide state change 1796454573 (1->2 496/16)
[Mon Dec  3 15:14:20 2018] drbd hosting-vol-data-web-hc1-wd24-0: State
change 1796454573: primary_nodes=0, weak_nodes=0
[Mon Dec  3 15:14:20 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c9:
Cluster is now split
[Mon Dec  3 15:14:20 2018] drbd hosting-vol-data-web-hc1-wd24-0:
Committing cluster-wide state change 1796454573 (0ms)
[Mon Dec  3 15:14:20 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c9:
conn( Connected -> Disconnecting ) peer( Secondary -> Unknown )
[Mon Dec  3 15:14:20 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m8c9: pdsk( Consistent -> DUnknown ) repl( Established -> Off
)
[Mon Dec  3 15:14:20 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c9:
ack_receiver terminated
[Mon Dec  3 15:14:20 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c9:
Terminating ack_recv thread
[Mon Dec  3 15:14:20 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c9:
Connection closed
[Mon Dec  3 15:14:20 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c9:
conn( Disconnecting -> StandAlone )
[Mon Dec  3 15:14:20 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c9:
Terminating receiver thread
[Mon Dec  3 15:14:25 2018] drbd hosting-vol-data-web-hc1-wd24-0 m7c7:
conn( StandAlone -> Unconnected )
[Mon Dec  3 15:14:25 2018] drbd hosting-vol-data-web-hc1-wd24-0 m7c7:
Starting receiver thread (from drbd_w_hosting- [451])
[Mon Dec  3 15:14:25 2018] drbd hosting-vol-data-web-hc1-wd24-0 m7c7:
conn( Unconnected -> Connecting )
[Mon Dec  3 15:14:25 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c9:
conn( StandAlone -> Unconnected )
[Mon Dec  3 15:14:25 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c9:
Starting receiver thread (from drbd_w_hosting- [451])
[Mon Dec  3 15:14:25 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c9:
conn( Unconnected -> Connecting )
[Mon Dec  3 15:14:25 2018] drbd hosting-vol-data-web-hc1-wd24-0 m7c7:
Handshake to peer 0 successful: Agreed network protocol version 113
[Mon Dec  3 15:14:25 2018] drbd hosting-vol-data-web-hc1-wd24-0 m7c7:
Feature flags enabled on protocol level: 0xf TRIM THIN_RESYNC
WRITE_SAME WRITE_ZEROES.
[Mon Dec  3 15:14:25 2018] drbd hosting-vol-data-web-hc1-wd24-0 m7c7:
Peer authenticated using 20 bytes HMAC
[Mon Dec  3 15:14:25 2018] drbd hosting-vol-data-web-hc1-wd24-0 m7c7:
Starting ack_recv thread (from drbd_r_hosting- [15745])
[Mon Dec  3 15:14:25 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c9:
Handshake to peer 2 successful: Agreed network protocol version 113
[Mon Dec  3 15:14:25 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c9:
Feature flags enabled on protocol level: 0xf TRIM THIN_RESYNC
WRITE_SAME WRITE_ZEROES.
[Mon Dec  3 15:14:25 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c9:
Peer authenticated using 20 bytes HMAC
[Mon Dec  3 15:14:25 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c9:
Starting ack_recv thread (from drbd_r_hosting- [15747])
[Mon Dec  3 15:14:26 2018] drbd hosting-vol-data-web-hc1-wd24-0:
Preparing cluster-wide state change 49232022 (1->2 499/146)
[Mon Dec  3 15:14:26 2018] drbd hosting-vol-data-web-hc1-wd24-0: State
change 49232022: primary_nodes=0, weak_nodes=0
[Mon Dec  3 15:14:26 2018] drbd hosting-vol-data-web-hc1-wd24-0:
Committing cluster-wide state change 49232022 (0ms)
[Mon Dec  3 15:14:26 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c9:
conn( Connecting -> Connected ) peer( Unknown -> Secondary )
[Mon Dec  3 15:14:26 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m8c9: pdsk( DUnknown -> UpToDate ) repl( Off -> Established )
[Mon Dec  3 15:14:26 2018] drbd hosting-vol-data-web-hc1-wd24-0 m7c7:
Preparing remote state change 1131460937
[Mon Dec  3 15:14:26 2018] drbd hosting-vol-data-web-hc1-wd24-0 m7c7:
Committing remote state change 1131460937 (primary_nodes=0)
[Mon Dec  3 15:14:26 2018] drbd hosting-vol-data-web-hc1-wd24-0 m7c7:
conn( Connecting -> Connected ) peer( Unknown -> Secondary )
[Mon Dec  3 15:14:26 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m7c7: repl( Off -> Established ) resync-susp( no -> peer )
[Mon Dec  3 15:14:36 2018] drbd hosting-vol-data-web-hc1-wd24-0:
Preparing cluster-wide state change 631149953 (1->-1 3/1)
[Mon Dec  3 15:14:36 2018] drbd hosting-vol-data-web-hc1-wd24-0: State
change 631149953: primary_nodes=2, weak_nodes=FFFFFFFFFFFFFFF8
[Mon Dec  3 15:14:36 2018] drbd hosting-vol-data-web-hc1-wd24-0:
Committing cluster-wide state change 631149953 (0ms)
[Mon Dec  3 15:14:36 2018] drbd hosting-vol-data-web-hc1-wd24-0: role(
Secondary -> Primary )
[Mon Dec  3 15:14:36 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017: sending new current UUID: B2E51E4C8A7ED6BC
[Mon Dec  3 17:08:00 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m7c7: pdsk( Inconsistent -> UpToDate ) resync-susp( peer ->
no )



# data node: m7c7
[Mon Dec  3 15:14:17 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c23:
Preparing remote state change 250224276
[Mon Dec  3 15:14:17 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c23:
Committing remote state change 250224276 (primary_nodes=0)
[Mon Dec  3 15:14:17 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c23:
conn( Connected -> TearDown ) peer( Secondary -> Unknown )
[Mon Dec  3 15:14:17 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m8c23: pdsk( Diskless -> DUnknown ) repl( Established -> Off
)
[Mon Dec  3 15:14:17 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c23:
ack_receiver terminated
[Mon Dec  3 15:14:17 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c23:
Terminating ack_recv thread
[Mon Dec  3 15:14:17 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c9:
Preparing remote state change 1796454573
[Mon Dec  3 15:14:17 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c9:
Committing remote state change 1796454573 (primary_nodes=0)
[Mon Dec  3 15:14:18 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c23:
Connection closed
[Mon Dec  3 15:14:18 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c23:
conn( TearDown -> Unconnected )
[Mon Dec  3 15:14:18 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c23:
Restarting receiver thread
[Mon Dec  3 15:14:18 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c23:
conn( Unconnected -> Connecting )
[Mon Dec  3 15:14:22 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c23:
Handshake to peer 1 successful: Agreed network protocol version 113
[Mon Dec  3 15:14:22 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c23:
Feature flags enabled on protocol level: 0xf TRIM THIN_RESYNC
WRITE_SAME WRITE_ZEROES.
[Mon Dec  3 15:14:22 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c23:
Peer authenticated using 20 bytes HMAC
[Mon Dec  3 15:14:22 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c23:
Starting ack_recv thread (from drbd_r_hosting- [3344])
[Mon Dec  3 15:14:23 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c9:
Preparing remote state change 49232022
[Mon Dec  3 15:14:23 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c9:
Committing remote state change 49232022 (primary_nodes=0)
[Mon Dec  3 15:14:23 2018] drbd hosting-vol-data-web-hc1-wd24-0:
Preparing cluster-wide state change 1131460937 (0->1 499/146)
[Mon Dec  3 15:14:23 2018] drbd hosting-vol-data-web-hc1-wd24-0: State
change 1131460937: primary_nodes=0, weak_nodes=0
[Mon Dec  3 15:14:23 2018] drbd hosting-vol-data-web-hc1-wd24-0:
Committing cluster-wide state change 1131460937 (0ms)
[Mon Dec  3 15:14:23 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c23:
conn( Connecting -> Connected ) peer( Unknown -> Secondary )
[Mon Dec  3 15:14:23 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m8c23: pdsk( DUnknown -> Diskless ) repl( Off -> Established
)
[Mon Dec  3 15:14:33 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c23:
Preparing remote state change 631149953
[Mon Dec  3 15:14:33 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c23:
Committing remote state change 631149953 (primary_nodes=2)
[Mon Dec  3 15:14:33 2018] drbd hosting-vol-data-web-hc1-wd24-0 m8c23:
peer( Secondary -> Primary )
[Mon Dec  3 17:07:57 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m8c9: Resync done (total 7093 sec; paused 0 sec; 103480
K/sec)
[Mon Dec  3 17:07:57 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m8c9: updated UUIDs
B2E51E4C8A7ED6BC:0000000000000000:EAF04B095DA0C8D6:94DB996564E2DEAE
[Mon Dec  3 17:07:57 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017: disk( Inconsistent -> UpToDate )
[Mon Dec  3 17:07:57 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m8c23: resync-susp( connection dependency -> no )
[Mon Dec  3 17:07:57 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m8c9: repl( SyncTarget -> Established )
[Mon Dec  3 17:07:57 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m8c9: helper command: /sbin/drbdadm after-resync-target
[Mon Dec  3 17:07:57 2018] drbd hosting-vol-data-web-hc1-wd24-0/0
drbd1017 m8c9: helper command: /sbin/drbdadm after-resync-target exit
code 0 (0x0)


But after that we had outdated data in our pod.
My question is: why did this happen, and what did we do wrong?
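
For reference, the DRBD state on each node can also be inspected
directly with plain drbd-utils commands (only a sketch using the
resource name from above, not output captured during the incident):

drbdadm status hosting-vol-data-web-hc1-wd24-0    # role and per-peer disk/replication states
drbdadm dstate hosting-vol-data-web-hc1-wd24-0    # local and peer disk states
drbdsetup status hosting-vol-data-web-hc1-wd24-0 --verbose --statistics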

# drbd version:
version: 9.0.14-1 (api:2/proto:86-113)
GIT-hash: 62f906cf44ef02a30ce0c148fec223b40c51c533 build by
@gitlab-runner-docker1-0, 2018-10-09 16:33:22
Transports (api:16): tcp (9.0.14-1)

# linstor version:
linstor 0.7.0; GIT-hash: 8d532f79e6a904fcd8733a086c5ec439528bcbac

# We have automatic split-brain resolution enabled, and the following
settings in linstor_common.conf:
common
{
    disk
    {
        c-fill-target 10240;
        c-max-rate 737280;
        c-min-rate 20480;
        c-plan-ahead 10;
    }
    net
    {
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
        max-buffers 36864;
        protocol C;
        rcvbuf-size 2097152;
        sndbuf-size 1048576;
    }
}
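
# To double-check which options DRBD actually applies to this resource,
the parsed configuration and the options in effect in the kernel can be
dumped (again only a sketch, using the same resource name as above):

drbdadm dump hosting-vol-data-web-hc1-wd24-0     # configuration as drbdadm parses it
drbdsetup show hosting-vol-data-web-hc1-wd24-0   # options currently in effect in the kernel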


# Pod removal:
Dec 03 15:09:02 m8c23 kubelet[4616]: W1203 15:09:02.921117    4616
plugin-defaults.go:32] flexVolume driver linbit/linstor-flexvolume:
using default GetVolumeName for volume hosting-vol-data-web-hc1-wd24-0
Dec 03 15:09:16 m8c23 kubelet[4616]: W1203 15:09:16.870188    4616
plugin-defaults.go:32] flexVolume driver linbit/linstor-flexvolume:
using default GetVolumeName for volume
hosting-vol-data-mariadb-hc1-md24-0
Dec 03 15:10:01 m8c23 kubelet[4616]: I1203 15:10:01.016435    4616
reconciler.go:181] operationExecutor.UnmountVolume started for volume
"vol-data-web" (UniqueName:
"flexvolume-linbit/linstor-flexvolume/hosting-vol-data-web-hc1-wd24-0")
pod "83b78114-f3f1-11e8-8f79-001999d764e2" (UID:
"83b78114-f3f1-11e8-8f79-001999d764e2")
Dec 03 15:10:01 m8c23 linstor-flexvol[12340]: Linstor
FlexVolume[12340]: 2018/12/03 15:10:01 called with unmount:
/var/lib/kubelet/pods/83b78114-f3f1-11e8-8f79-001999d764e2/volumes/linbit~linstor-flexvolume/hosting-vol-data-web-hc1-wd24-0
Dec 03 15:10:01 m8c23 linstor-flexvol[12340]: Linstor
FlexVolume[12340]: 2018/12/03 15:10:01 responded to unmount:
{"status":"Success","message":""}
Dec 03 15:10:01 m8c23 kubelet[4616]: I1203 15:10:01.100947    4616
operation_generator.go:698] UnmountVolume.TearDown succeeded for
volume "flexvolume-linbit/linstor-flexvolume/hosting-vol-data-web-hc1-wd24-0"
(OuterVolumeSpecName: "vol-data-web") pod
"83b78114-f3f1-11e8-8f79-001999d764e2" (UID:
"83b78114-f3f1-11e8-8f79-001999d764e2"). InnerVolumeSpecName
"hosting-vol-data-web-hc1-wd24-0". PluginName
"flexvolume-linbit/linstor-flexvolume", VolumeGidValue ""
Dec 03 15:10:01 m8c23 kubelet[4616]: I1203 15:10:01.117527    4616
reconciler.go:294] operationExecutor.UnmountDevice started for volume
"hosting-vol-data-web-hc1-wd24-0" (UniqueName:
"flexvolume-linbit/linstor-flexvolume/hosting-vol-data-web-hc1-wd24-0")
on node "m8c23"
Dec 03 15:10:01 m8c23 kubelet[4616]: W1203 15:10:01.117611    4616
plugin-defaults.go:32] flexVolume driver linbit/linstor-flexvolume:
using default GetVolumeName for volume hosting-vol-data-web-hc1-wd24-0
Dec 03 15:10:01 m8c23 linstor-flexvol[12360]: Linstor
FlexVolume[12360]: 2018/12/03 15:10:01 called with unmountdevice:
/var/lib/kubelet/plugins/kubernetes.io/flexvolume/linbit/linstor-flexvolume/mounts/hosting-vol-data-web-hc1-wd24-0
Dec 03 15:10:01 m8c23 kubelet[4616]: W1203 15:10:01.936718    4616
plugin-defaults.go:32] flexVolume driver linbit/linstor-flexvolume:
using default GetVolumeName for volume hosting-vol-data-web-hc1-wd24-0
Dec 03 15:10:22 m8c23 linstor-flexvol[12360]: Linstor
FlexVolume[12360]: 2018/12/03 15:10:22 responded to unmountdevice:
{"status":"Success","message":""}
Dec 03 15:10:22 m8c23 kubelet[4616]: I1203 15:10:22.398065    4616
operation_generator.go:783] UnmountDevice succeeded for volume
"hosting-vol-data-web-hc1-wd24-0" %!(EXTRA string=UnmountDevice
succeeded for volume "hosting-vol-data-web-hc1-wd24-0" (UniqueName:
"flexvolume-linbit/linstor-flexvolume/hosting-vol-data-web-hc1-wd24-0")
on node "m8c23" )
Dec 03 15:10:22 m8c23 kubelet[4616]: E1203 15:10:22.398069    4616
plugin_watcher.go:120] error could not find plugin for deleted file
/var/lib/kubelet/plugins/kubernetes.io/flexvolume/linbit/linstor-flexvolume/mounts/hosting-vol-data-web-hc1-wd24-0
when handling delete event:
"/var/lib/kubelet/plugins/kubernetes.io/flexvolume/linbit/linstor-flexvolume/mounts/hosting-vol-data-web-hc1-wd24-0":
REMOVE
Dec 03 15:10:22 m8c23 kubelet[4616]: E1203 15:10:22.398069    4616
plugin_watcher.go:120] error could not find plugin for deleted file
/var/lib/kubelet/plugins/kubernetes.io/flexvolume/linbit/linstor-flexvolume/mounts/hosting-vol-data-web-hc1-wd24-0
when handling delete event:
"/var/lib/kubelet/plugins/kubernetes.io/flexvolume/linbit/linstor-flexvolume/mounts/hosting-vol-data-web-hc1-wd24-0":
REMOVE

# Pod creation error (repeated messages):
Dec 03 15:10:22 m8c23 kubelet[4616]: I1203 15:10:22.448787    4616
operation_generator.go:498] MountVolume.WaitForAttach entering for
volume "hosting-vol-data-web-hc1-wd24-0" (UniqueName:
"flexvolume-linbit/linstor-flexvolume/hosting-vol-data-web-hc1-wd24-0")
pod "hc1-wd24-0" (UID: "1fd152d6-f705-11e8-8f79-001999d764e2")
DevicePath ""
Dec 03 15:10:22 m8c23 linstor-flexvol[12606]: Linstor
FlexVolume[12606]: 2018/12/03 15:10:22 called with waitforattach: ,
{"blockSize":"","controllers":"127.0.0.1:3376","disklessStoragePool":"DfltDisklessStorPool","force":"","fsOpts":"","kubernetes.io/fsType":"ext4","kubernetes.io/pvOrVolumeName":"hosting-vol-data-web-hc1-wd24-0","kubernetes.io/readwrite":"rw","mountOpts":"defaults,nodiratime,noatime","xfsDataSU":"","xfsDataSW":"","xfsDiscardBlocks":"","xfsLogDev":""}
Dec 03 15:10:22 m8c23 linstor-flexvol[12606]: Linstor
FlexVolume[12606]: 2018/12/03 15:10:22 responded to waitforattach:
{"status":"Success","message":""}
Dec 03 15:10:22 m8c23 kubelet[4616]: I1203 15:10:22.452330    4616
operation_generator.go:507] MountVolume.WaitForAttach succeeded for
volume "hosting-vol-data-web-hc1-wd24-0" (UniqueName:
"flexvolume-linbit/linstor-flexvolume/hosting-vol-data-web-hc1-wd24-0")
pod "hc1-wd24-0" (UID: "1fd152d6-f705-11e8-8f79-001999d764e2")
DevicePath ""
Dec 03 15:10:22 m8c23 kubelet[4616]: W1203 15:10:22.452385    4616
plugin-defaults.go:32] flexVolume driver linbit/linstor-flexvolume:
using default GetVolumeName for volume hosting-vol-data-web-hc1-wd24-0
Dec 03 15:10:22 m8c23 linstor-flexvol[12615]: Linstor
FlexVolume[12615]: 2018/12/03 15:10:22 called with mountdevice:
/var/lib/kubelet/plugins/kubernetes.io/flexvolume/linbit/linstor-flexvolume/mounts/hosting-vol-data-web-hc1-wd24-0,
, {"blockSize":"","controllers":"127.0.0.1:3376","disklessStoragePool":"DfltDisklessStorPool","force":"","fsOpts":"","kubernetes.io/fsType":"ext4","kubernetes.io/pvOrVolumeName":"hosting-vol-data-web-hc1-wd24-0","kubernetes.io/readwrite":"rw","mountOpts":"defaults,nodiratime,noatime","xfsDataSU":"","xfsDataSW":"","xfsDiscardBlocks":"","xfsLogDev":""}
Dec 03 15:10:22 m8c23 linstor-flexvol[12615]: Linstor
FlexVolume[12615]: golinstor: 2018/12/03 15:10:22 linstor.go:265:
("hosting-vol-data-web-hc1-wd24-0"): linstor -m resource list
Dec 03 15:10:24 m8c23 linstor-flexvol[12615]: Linstor
FlexVolume[12615]: golinstor: 2018/12/03 15:10:24 linstor.go:265:
("hosting-vol-data-web-hc1-wd24-0"): mkfs -t ext4 /dev/drbd1017
Dec 03 15:10:26 m8c23 linstor-flexvol[12615]: Linstor
FlexVolume[12615]: 2018/12/03 15:10:26 responded to mountdevice:
{"status":"Failure","message":"Linstor Flexvoume API: mountdevice:
unable to mount device: couldn't create ext4 filesystem exit status 1:
\"mke2fs 1.44.1 (24-Mar-2018)\\nCould not open /dev/drbd1017: Wrong
medium type\\n\""}
Dec 03 15:10:26 m8c23 kubelet[4616]: W1203 15:10:26.786171    4616
driver-call.go:144] FlexVolume: driver call failed: executable:
/usr/libexec/kubernetes/kubelet-plugins/volume/exec/linbit~linstor-flexvolume/linstor-flexvolume,
args: [mountdevice
/var/lib/kubelet/plugins/kubernetes.io/flexvolume/linbit/linstor-flexvolume/mounts/hosting-vol-data-web-hc1-wd24-0
 {"blockSize":"","controllers":"127.0.0.1:3376","disklessStoragePool":"DfltDisklessStorPool","force":"","fsOpts":"","kubernetes.io/fsType":"ext4","kubernetes.io/pvOrVolumeName":"hosting-vol-data-web-hc1-wd24-0","kubernetes.io/readwrite":"rw","mountOpts":"defaults,nodiratime,noatime","xfsDataSU":"","xfsDataSW":"","xfsDiscardBlocks":"","xfsLogDev":""}],
error: exit status 1, output:
"{\"status\":\"Failure\",\"message\":\"Linstor Flexvoume API:
mountdevice: unable to mount device: couldn't create ext4 filesystem
exit status 1: \\\"mke2fs 1.44.1 (24-Mar-2018)\\\\nCould not open
/dev/drbd1017: Wrong medium type\\\\n\\\"\"}"
Dec 03 15:10:26 m8c23 kubelet[4616]: E1203 15:10:26.786366    4616
nestedpendingoperations.go:267] Operation for
"\"flexvolume-linbit/linstor-flexvolume/hosting-vol-data-web-hc1-wd24-0\""
failed. No retries permitted until 2018-12-03 15:10:27.286295481 +0100
CET m=+2383670.220800431 (durationBeforeRetry 500ms). Error:
"MountVolume.MountDevice failed for volume
\"hosting-vol-data-web-hc1-wd24-0\" (UniqueName:
\"flexvolume-linbit/linstor-flexvolume/hosting-vol-data-web-hc1-wd24-0\")
pod \"hc1-wd24-0\" (UID: \"1fd152d6-f705-11e8-8f79-001999d764e2\") :
mountdevice command failed, status: Failure, reason: Linstor Flexvoume
API: mountdevice: unable to mount device: couldn't create ext4
filesystem exit status 1: \"mke2fs 1.44.1 (24-Mar-2018)\\nCould not
open /dev/drbd1017: Wrong medium type\\n\""

# Pod created after disconnect/connect
Dec 03 15:15:11 m8c23 kubelet[4616]: I1203 15:15:11.553426    4616
operation_generator.go:498] MountVolume.WaitForAttach entering for
volume "hosting-vol-data-web-hc1-wd24-0" (UniqueName:
"flexvolume-linbit/linstor-flexvolume/hosting-vol-data-web-hc1-wd24-0")
pod "hc1-wd24-0" (UID: "1fd152d6-f705-11e8-8f79-001999d764e2")
DevicePath ""
Dec 03 15:15:11 m8c23 linstor-flexvol[15857]: Linstor
FlexVolume[15857]: 2018/12/03 15:15:11 called with waitforattach: ,
{"blockSize":"","controllers":"127.0.0.1:3376","disklessStoragePool":"DfltDisklessStorPool","force":"","fsOpts":"","kubernetes.io/fsType":"ext4","kubernetes.io/pvOrVolumeName":"hosting-vol-data-web-hc1-wd24-0","kubernetes.io/readwrite":"rw","mountOpts":"defaults,nodiratime,noatime","xfsDataSU":"","xfsDataSW":"","xfsDiscardBlocks":"","xfsLogDev":""}
Dec 03 15:15:11 m8c23 linstor-flexvol[15857]: Linstor
FlexVolume[15857]: 2018/12/03 15:15:11 responded to waitforattach:
{"status":"Success","message":""}
Dec 03 15:15:11 m8c23 kubelet[4616]: I1203 15:15:11.557310    4616
operation_generator.go:507] MountVolume.WaitForAttach succeeded for
volume "hosting-vol-data-web-hc1-wd24-0" (UniqueName:
"flexvolume-linbit/linstor-flexvolume/hosting-vol-data-web-hc1-wd24-0")
pod "hc1-wd24-0" (UID: "1fd152d6-f705-11e8-8f79-001999d764e2")
DevicePath ""
Dec 03 15:15:11 m8c23 kubelet[4616]: W1203 15:15:11.557387    4616
plugin-defaults.go:32] flexVolume driver linbit/linstor-flexvolume:
using default GetVolumeName for volume hosting-vol-data-web-hc1-wd24-0
Dec 03 15:15:11 m8c23 linstor-flexvol[15863]: Linstor
FlexVolume[15863]: 2018/12/03 15:15:11 called with mountdevice:
/var/lib/kubelet/plugins/kubernetes.io/flexvolume/linbit/linstor-flexvolume/mounts/hosting-vol-data-web-hc1-wd24-0,
, {"blockSize":"","controllers":"127.0.0.1:3376","disklessStoragePool":"DfltDisklessStorPool","force":"","fsOpts":"","kubernetes.io/fsType":"ext4","kubernetes.io/pvOrVolumeName":"hosting-vol-data-web-hc1-wd24-0","kubernetes.io/readwrite":"rw","mountOpts":"defaults,nodiratime,noatime","xfsDataSU":"","xfsDataSW":"","xfsDiscardBlocks":"","xfsLogDev":""}
Dec 03 15:15:11 m8c23 linstor-flexvol[15863]: Linstor
FlexVolume[15863]: golinstor: 2018/12/03 15:15:11 linstor.go:265:
("hosting-vol-data-web-hc1-wd24-0"): linstor -m resource list
Dec 03 15:15:11 m8c23 linstor-flexvol[15863]: Linstor
FlexVolume[15863]: golinstor: 2018/12/03 15:15:11 linstor.go:265:
("hosting-vol-data-web-hc1-wd24-0"): mount -o
defaults,nodiratime,noatime /dev/drbd1017
/var/lib/kubelet/plugins/kubernetes.io/flexvolume/linbit/linstor-flexvolume/mounts/hosting-vol-data-web-hc1-wd24-0
Dec 03 15:15:13 m8c23 linstor-flexvol[15863]: Linstor
FlexVolume[15863]: 2018/12/03 15:15:13 responded to mountdevice:
{"status":"Success","message":""}
Dec 03 15:15:13 m8c23 kubelet[4616]: I1203 15:15:13.309852    4616
operation_generator.go:528] MountVolume.MountDevice succeeded for
volume "hosting-vol-data-web-hc1-wd24-0" (UniqueName:
"flexvolume-linbit/linstor-flexvolume/hosting-vol-data-web-hc1-wd24-0")
pod "hc1-wd24-0" (UID: "1fd152d6-f705-11e8-8f79-001999d764e2") device
mount path "/var/lib/kubelet/plugins/kubernetes.io/flexvolume/linbit/linstor-flexvolume/mounts/hosting-vol-data-web-hc1-wd24-0"
Dec 03 15:15:13 m8c23 kubelet[4616]: W1203 15:15:13.309968    4616
mounter-defaults.go:30] flexVolume driver linbit/linstor-flexvolume:
using default SetUpAt to
/var/lib/kubelet/pods/1fd152d6-f705-11e8-8f79-001999d764e2/volumes/linbit~linstor-flexvolume/hosting-vol-data-web-hc1-wd24-0
Dec 03 15:15:13 m8c23 kubelet[4616]: W1203 15:15:13.309995    4616
plugin-defaults.go:32] flexVolume driver linbit/linstor-flexvolume:
using default GetVolumeName for volume hosting-vol-data-web-hc1-wd24-0


-- 
- kvaps

