[DRBD-user] Best practice: drbd+lvm+gfs2+dm-crypt on dual primary

Patrick Prilisauer prilisauer at googlemail.com
Wed Feb 4 10:07:10 CET 2015


Hello to all,

I have now completely reinstalled my servers following
https://alteeve.ca/w/AN!Cluster_Tutorial_2 on CentOS 7, with a few
OS-specific modifications. Still, it seems I don't understand what I'm
doing wrong: as you can see below, res_drbd_1_stop returns 'not
installed'. I spent the whole of yesterday looking for my needle in the
haystack, without getting any further.

Any hints?

Last updated: Wed Feb  4 09:59:01 2015
Last change: Wed Feb  4 05:18:41 2015 via crmd on at01srv01
Stack: corosync
Current DC: at01srv02 (167772162) - partition with quorum
Version: 1.1.10-32.el7_0.1-368c726
2 Nodes configured
4 Resources configured

Node at01srv01 (167772161): standby
Online: [ at01srv02 ]

 Master/Slave Set: ms_drbd_1 [res_drbd_1]
     res_drbd_1 (ocf::linbit:drbd):     FAILED at01srv02 (unmanaged)
     Stopped: [ at01srv01 ]
stonith_fence_pcmk_1    (stonith:fence_pcmk):   Started at01srv02

Failed actions:
    res_drbd_1_stop_0 on at01srv02 'not installed' (5): call=120,
status=complete, last-rc-change='Wed Feb  4 09:35:43 2015',
queued=15118ms, exec=0ms

[root at at01srv02 linbit]# systemctl status drbd.service
drbd.service - DRBD -- please disable. Unless you are NOT using a cluster
   Loaded: loaded (/usr/lib/systemd/system/drbd.service; disabled)
   Active: failed (Result: exit-code) since Mit 2015-02-04 09:57:06 CET;
2min 51s ago
  Process: 8544 ExecStart=/sbin/drbdadm adjust-with-progress all
(code=exited, status=1/FAILURE)
  Process: 8540 ExecStartPre=/sbin/drbdadm sh-nop (code=exited,
 Main PID: 8544 (code=exited, status=1/FAILURE)

Feb 04 09:57:06 at01srv02 drbdadm[8544]: [
Feb 04 09:57:06 at01srv02 drbdadm[8544]: adjust net:
Feb 04 09:57:06 at01srv02 drbdadm[8544]: ]
Feb 04 09:57:06 at01srv02 systemd[1]: drbd.service: main process exited,
code=exited, status=1/FAILURE
Feb 04 09:57:06 at01srv02 systemd[1]: Failed to start DRBD -- please
disable. Unless you are NOT using a c...ger..
Feb 04 09:57:06 at01srv02 systemd[1]: Unit drbd.service entered failed
Hint: Some lines were ellipsized, use -l to show in full.
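For what it's worth, the unit description itself points at the usual fix:
when Pacemaker drives DRBD through the ocf:linbit:drbd agent, the drbd
systemd service should stay disabled so the two don't fight over the
device. A sketch of the commands I would try on the failing node (the
resource name res_drbd_1 is taken from the crm_mon output above;
everything else is a generic assumption, not a tested recipe):

```shell
# Keep systemd out of DRBD's way -- Pacemaker owns the resource.
systemctl disable drbd.service
systemctl stop drbd.service

# Sanity-check the DRBD config on BOTH nodes; a parse error or a
# resource file that differs between the nodes is a common cause of
# the 'not installed' (rc=5) failure the OCF agent reports.
drbdadm dump all

# Verify the kernel module actually loads on the failing node.
modprobe drbd && lsmod | grep '^drbd'

# Once the config is clean, clear the failed state and let the
# cluster retry.
pcs resource cleanup res_drbd_1
pcs cluster unstandby at01srv01
```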

2015-02-02 20:46 GMT+01:00 Digimer <lists at alteeve.ca>:

> On 02/02/15 02:44 PM, Ivan wrote:
>>>> I'm not sure that two (or more) LUKS partitions are identical given
>>>> exactly the same cleartext content and the same keys. There must be some
>>>> kind of sector randomization when writing data to make cryptoanalysis
>>>> harder, so it makes me think that it's not the case (that would require
>>>> testing though).
>>>> If I'm right, I don't see how DRBD could work in that setup. (or maybe I
>>>> just need more sleep).
>>> LUKS is working on the LV, which will be backed by the PV on DRBD. DRBD
>>> doesn't know data, so it will simply replicate the LUKS structure
>>> faithfully to both nodes.
>>> Remember, for all intent and purpose, there is only one device/luks
>>> partition. DRBD is really no different from LUKS on /dev/mdX devices in
>>> this regard.
>> ah that's right - indeed more sleep needed. I've skipped the "clustered
>> LVM" part and was thinking about two luks partitions.
>> sorry for the noise.
> No worries at all. When you ask a question like this, you have a chance to
> learn a system better, so it's good. :)
> --
> Digimer
> Papers and Projects: https://alteeve.ca/w/
> What if the cure for cancer is trapped in the mind of a person without
> access to education?
> _______________________________________________
> drbd-user mailing list
> drbd-user at lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user
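Digimer's point about the layering can be written out as the build order
on one node. This is only a sketch of the stack discussed in the quoted
thread (device, VG, LV, and cluster/fs names are example placeholders):

```shell
# The stack, bottom-up: disk -> DRBD -> clustered LVM -> LUKS -> GFS2.
pvcreate /dev/drbd0                            # DRBD device becomes the PV
vgcreate --clustered y vg_cluster /dev/drbd0   # clvmd must be running
lvcreate -L 100G -n lv_data vg_cluster

# LUKS sits on the LV, so DRBD replicates the ciphertext as-is:
# both nodes see one and the same LUKS container.
cryptsetup luksFormat /dev/vg_cluster/lv_data
cryptsetup luksOpen /dev/vg_cluster/lv_data data_crypt

# GFS2 on top, so both primaries can mount it simultaneously
# (lock_dlm, one journal per node).
mkfs.gfs2 -p lock_dlm -t mycluster:data -j 2 /dev/mapper/data_crypt
```

Note that in dual-primary operation the LUKS container has to be opened
(same passphrase, same mapping name) on each node independently before
GFS2 can be mounted there.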