[DRBD-user] Best practice: drbd+lvm+gfs2+dm-crypt on dual primary

Patrick Prilisauer prilisauer at googlemail.com
Mon Feb 9 17:02:26 CET 2015



Hey there,

I have found one issue: the problem was, or rather still is, a defective
mainboard.
I'm running an Adaptec 6504e on a Supermicro A1ASI-2750; the mainboard
produces some strange timeouts, which were not reported by the aacraid
driver because they were too short.
But after a few days the timeouts suddenly got longer, and some other
strange things happened, like the UEFI BIOS no longer detecting the card.

Anyway, I'm still fighting with the Pacemaker setup. I don't understand the
DRBD documentation; the configs shown in one chapter don't match the
resources in another chapter...
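For comparison, the "adjust net: failed(net-options)" error shown further down usually points at the net section of the resource file. A minimal dual-primary resource definition might look like the sketch below; the hostnames match this thread and r0 appears in the systemd output, but the devices, disks, and addresses are placeholders, not a verified configuration:

```
# /etc/drbd.d/r0.res -- hypothetical minimal dual-primary resource.
# Devices, backing disks, and IP addresses below are placeholders.
resource r0 {
    net {
        protocol C;
        allow-two-primaries yes;
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
    }
    on at01srv01 {
        device    /dev/drbd0;
        disk      /dev/sda3;
        address   10.0.0.1:7788;
        meta-disk internal;
    }
    on at01srv02 {
        device    /dev/drbd0;
        disk      /dev/sda3;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}
```

If the running configuration and the file disagree, `drbdadm dump r0` and `drbdadm adjust r0` will usually show which option is refused.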

Still, if somebody knows where I could find a complete configuration made
with the pcs program for Pacemaker, I would be very happy.
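For what it's worth, a minimal pcs sequence for such a master/slave DRBD resource might look like the sketch below. The resource names (res_drbd_1, ms_drbd_1, r0) match the output further down; the monitor intervals and meta attributes are typical dual-primary values for Pacemaker 1.1 on CentOS 7, not a verified configuration:

```shell
# Hypothetical pcs setup for a dual-primary DRBD master/slave resource
# (Pacemaker 1.1 / pcs 0.9 on CentOS 7). Edit a CIB copy, then push it
# atomically so the cluster never sees a half-built configuration.
pcs cluster cib drbd_cfg
pcs -f drbd_cfg resource create res_drbd_1 ocf:linbit:drbd \
    drbd_resource=r0 \
    op monitor interval=29s role=Master \
    op monitor interval=31s role=Slave
pcs -f drbd_cfg resource master ms_drbd_1 res_drbd_1 \
    master-max=2 master-node-max=1 clone-max=2 clone-node-max=1 \
    notify=true
pcs cluster cib-push drbd_cfg
```

master-max=2 is what makes the set dual-primary; with a single primary it would be master-max=1.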

BR

2015-02-04 10:07 GMT+01:00 Patrick Prilisauer <prilisauer at googlemail.com>:

> Hello to all,
>
> I have now completely reinstalled my servers according to
> https://alteeve.ca/w/AN!Cluster_Tutorial_2 on CentOS 7, adapted for some
> OS differences.
> Anyway, it seems that I don't understand what I'm doing wrong.
> As you can see, res_drbd_1_stop on.... returns 'not installed'. Spending
> the whole last day searching for my needle in the haystack didn't make it
> any better.
>
> Any hints?
> Thanks
>
>
>
> OUTPUT1:
> crm_mon
> Last updated: Wed Feb  4 09:59:01 2015
> Last change: Wed Feb  4 05:18:41 2015 via crmd on at01srv01
> Stack: corosync
> Current DC: at01srv02 (167772162) - partition with quorum
> Version: 1.1.10-32.el7_0.1-368c726
> 2 Nodes configured
> 4 Resources configured
>
>
> Node at01srv01 (167772161): standby
> Online: [ at01srv02 ]
>
>  Master/Slave Set: ms_drbd_1 [res_drbd_1]
>      res_drbd_1 (ocf::linbit:drbd):     FAILED at01srv02 (unmanaged)
>      Stopped: [ at01srv01 ]
> stonith_fence_pcmk_1    (stonith:fence_pcmk):   Started at01srv02
>
> Failed actions:
>     res_drbd_1_stop_0 on at01srv02 'not installed' (5): call=120,
> status=complete, last-rc-change='Wed Feb  4 09:3
> 5:43 2015', queued=15118ms, exec=0ms
>
>
>
>
> OUTPUT2:
> [root at at01srv02 linbit]# systemctl status drbd.service
> drbd.service - DRBD -- please disable. Unless you are NOT using a cluster
> manager.
>    Loaded: loaded (/usr/lib/systemd/system/drbd.service; disabled)
>    Active: failed (Result: exit-code) since Mit 2015-02-04 09:57:06 CET;
> 2min 51s ago
>   Process: 8544 ExecStart=/sbin/drbdadm adjust-with-progress all
> (code=exited, status=1/FAILURE)
>   Process: 8540 ExecStartPre=/sbin/drbdadm sh-nop (code=exited,
> status=0/SUCCESS)
>  Main PID: 8544 (code=exited, status=1/FAILURE)
>
> Feb 04 09:57:06 at01srv02 drbdadm[8544]: [
> Feb 04 09:57:06 at01srv02 drbdadm[8544]: adjust net:
> r0:failed(net-options:20)
> Feb 04 09:57:06 at01srv02 drbdadm[8544]: ]
> Feb 04 09:57:06 at01srv02 systemd[1]: drbd.service: main process exited,
> code=exited, status=1/FAILURE
> Feb 04 09:57:06 at01srv02 systemd[1]: Failed to start DRBD -- please
> disable. Unless you are NOT using a c...ger..
> Feb 04 09:57:06 at01srv02 systemd[1]: Unit drbd.service entered failed
> state.
> Hint: Some lines were ellipsized, use -l to show in full.
>
>
>
>
>
>
> 2015-02-02 20:46 GMT+01:00 Digimer <lists at alteeve.ca>:
>
>> On 02/02/15 02:44 PM, Ivan wrote:
>>
>>>
>>>  I'm not sure that two (or more) LUKS partitions are identical given
>>>>> exactly the same cleartext content and the same keys. There must be
>>>>> some
>>>>> kind of sector randomization when writing data to make cryptoanalysis
>>>>> harder, so it makes me think that it's not the case (that would require
>>>>> testing though).
>>>>> If I'm right, I don't see how DRBD could work in that setup. (or maybe
>>>>> I
>>>>> just need more sleep).
>>>>>
>>>>
>>>> LUKS is working on the LV, which will be backed by the PV on DRBD. DRBD
>>>> doesn't know data, so it will simply replicate the LUKS structure
>>>> faithfully to both nodes.
>>>>
>>>> Remember, for all intents and purposes, there is only one device/LUKS
>>>> partition. DRBD is really no different from LUKS on /dev/mdX devices in
>>>> this regard.
>>>>
>>>
>>> Ah, that's right - indeed more sleep needed. I'd skipped the "clustered
>>> LVM" part and was thinking about two LUKS partitions.
>>>
>>> sorry for the noise.
>>>
>>
>> No worries at all. When you ask a question like this, you have a chance
>> to learn a system better, so it's good. :)
>>
>> --
>> Digimer
>> Papers and Projects: https://alteeve.ca/w/
>> What if the cure for cancer is trapped in the mind of a person without
>> access to education?
>> _______________________________________________
>> drbd-user mailing list
>> drbd-user at lists.linbit.com
>> http://lists.linbit.com/mailman/listinfo/drbd-user
>>
>
>
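The stack Digimer describes above (LUKS on an LV, backed by a PV on DRBD) could be built roughly as sketched below. All device and volume names are placeholders; on a dual-primary setup the LVM layer additionally needs to be cluster-aware (clvmd/lvmlockd), which this sketch omits:

```shell
# Hypothetical layering, bottom to top: DRBD -> PV -> VG -> LV -> LUKS -> GFS2.
# Run once on a node where /dev/drbd0 is Primary; names are placeholders.
pvcreate /dev/drbd0
vgcreate vg_cluster /dev/drbd0
lvcreate -L 10G -n lv_data vg_cluster
cryptsetup luksFormat /dev/vg_cluster/lv_data   # writes one LUKS header,
                                                # which DRBD replicates
cryptsetup luksOpen /dev/vg_cluster/lv_data data_crypt
mkfs.gfs2 -p lock_dlm -t mycluster:data -j 2 /dev/mapper/data_crypt
```

Because there is only one LUKS header on the replicated device, both nodes unlock the same encrypted volume; nothing is encrypted twice.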
