Sorry, "EL" is a short way of saying "RHEL, CentOS or other RHEL-based
distros". CentOS 6 is fine.

Please don't let yourself get too frustrated. HA clustering is not hard,
but it is complex. It takes time to get it all working right.

What is your primary goal? To run an HA cluster for hosting VMs?

digimer

On 09/02/15 11:32 AM, Patrick Prilisauer wrote:
> Hey, I don't own EL, but CentOS 6 must do it also?
>
> I've been sitting on setting up the server for at least two weeks now.
> I'm getting really frustrated.
>
> I bought the combination of Supermicro and Adaptec so I wouldn't have
> to deal with mdraid + UEFI + workarounds...
>
> I will try to do the very last setup on EL6. Supermicro was my worst
> decision...
>
> I'll be back ;-)
>
> 2015-02-09 17:10 GMT+01:00 Digimer <lists at alteeve.ca>:
>
> Hello,
>
> Trying to adapt https://alteeve.ca/w/AN!Cluster_Tutorial_2 to EL7 +
> pacemaker will be no easy task at all. The concepts can port, but
> after that, you're pretty much on your own.
>
> Out of curiosity, can you try pacemaker 1.1.12 on cman+corosync on
> EL6? I've avoided EL7 so far because it's still very, very new, and a
> major departure from previous releases, so I am not convinced it's
> ideal for HA yet.
>
> If you find that the RAID stuff works better on EL6, then it might be
> a sign of driver issues or something. Truth be told, though, my early
> experiments with RAID and Supermicro didn't end well, so this might
> well not be OS-related, either. Are any firmware updates available?
>
> On 09/02/15 11:02 AM, Patrick Prilisauer wrote:
> > Hey there,
> >
> > I have found one issue; the problem was, or rather still is, a
> > defective mainboard.
> > I'm running an Adaptec 6504e on a Supermicro A1ASI-2750. The
> > mainboard produces some strange timeouts, which weren't reported by
> > the aacraid driver because they were too short.
> > But suddenly, after a few days, the timeouts started getting larger,
> > and some other strange things happened, like the UEFI BIOS no
> > longer detecting the card, etc.
> >
> > Anyway, I'm still fighting with the pacemaker setup. I don't
> > understand the DRBD documentation; the configs shown in one chapter
> > don't match the resources in another chapter...
> >
> > If somebody knows where I could find a complete configuration made
> > with the pcs program for pacemaker, I would be so happy.
> >
> > BR
> >
> > 2015-02-04 10:07 GMT+01:00 Patrick Prilisauer
> > <prilisauer at googlemail.com>:
> >
> > Hello to all,
> >
> > I have now completely reinstalled my servers according to
> > https://alteeve.ca/w/AN!Cluster_Tutorial_2 on CentOS 7, modified
> > for some OS differences.
> > Anyway, it seems that I don't understand what I'm doing wrong.
> > As you can see, res_drbd_1_stop on .... returns 'not installed'.
> > Spending the whole of yesterday finding my needle in the haystack
> > didn't make it better.
> > Any hints?
> > Thanks
> >
> >
> > OUTPUT1:
> > crm_mon
> > Last updated: Wed Feb  4 09:59:01 2015
> > Last change: Wed Feb  4 05:18:41 2015 via crmd on at01srv01
> > Stack: corosync
> > Current DC: at01srv02 (167772162) - partition with quorum
> > Version: 1.1.10-32.el7_0.1-368c726
> > 2 Nodes configured
> > 4 Resources configured
> >
> > Node at01srv01 (167772161): standby
> > Online: [ at01srv02 ]
> >
> > Master/Slave Set: ms_drbd_1 [res_drbd_1]
> >     res_drbd_1 (ocf::linbit:drbd): FAILED at01srv02 (unmanaged)
> >     Stopped: [ at01srv01 ]
> > stonith_fence_pcmk_1 (stonith:fence_pcmk): Started at01srv02
> >
> > Failed actions:
> >     res_drbd_1_stop_0 on at01srv02 'not installed' (5): call=120,
> >     status=complete, last-rc-change='Wed Feb  4 09:35:43 2015',
> >     queued=15118ms, exec=0ms
> >
> >
> > OUTPUT2:
> > [root at at01srv02 linbit]# systemctl status drbd.service
> > drbd.service - DRBD -- please disable. Unless you are NOT using a
> > cluster manager.
> >    Loaded: loaded (/usr/lib/systemd/system/drbd.service; disabled)
> >    Active: failed (Result: exit-code) since Mit 2015-02-04 09:57:06
> >    CET; 2min 51s ago
> >   Process: 8544 ExecStart=/sbin/drbdadm adjust-with-progress all
> >   (code=exited, status=1/FAILURE)
> >   Process: 8540 ExecStartPre=/sbin/drbdadm sh-nop (code=exited,
> >   status=0/SUCCESS)
> >  Main PID: 8544 (code=exited, status=1/FAILURE)
> >
> > Feb 04 09:57:06 at01srv02 drbdadm: [
> > Feb 04 09:57:06 at01srv02 drbdadm: adjust net:
> > r0:failed(net-options:20)
> > Feb 04 09:57:06 at01srv02 drbdadm: ]
> > Feb 04 09:57:06 at01srv02 systemd: drbd.service: main process
> > exited, code=exited, status=1/FAILURE
> > Feb 04 09:57:06 at01srv02 systemd: Failed to start DRBD -- please
> > disable. Unless you are NOT using a c...ger..
> > Feb 04 09:57:06 at01srv02 systemd: Unit drbd.service entered failed
> > state.
> > Hint: Some lines were ellipsized, use -l to show in full.
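[A note on the output above: the 'not installed' (5) code is OCF_ERR_INSTALLED, and the "adjust net: r0:failed(net-options:20)" line points at the DRBD configuration itself rather than at pacemaker. The sketch below is one possible way to approach it, assuming the names visible in the output (r0, res_drbd_1, ms_drbd_1) and the pcs 0.9.x / pacemaker 1.1.x syntax of the EL7 era; verify option names against your installed versions.]

```shell
# Syntax-check the DRBD config first; a net-options failure from
# "drbdadm adjust" usually means an option in the net {} section
# that this DRBD version rejects.
drbdadm dump r0

# Pacemaker should own DRBD, so the systemd unit must stay stopped
# and disabled (the unit description itself says "please disable").
systemctl stop drbd.service
systemctl disable drbd.service

# One way to define the DRBD master/slave resource with pcs:
pcs resource create res_drbd_1 ocf:linbit:drbd drbd_resource=r0 \
    op monitor interval=29s role=Master \
    op monitor interval=31s role=Slave
pcs resource master ms_drbd_1 res_drbd_1 \
    master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 \
    notify=true
```

Once the config error is fixed, `pcs resource cleanup res_drbd_1` clears the failed action so pacemaker will retry the resource.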
> > 2015-02-02 20:46 GMT+01:00 Digimer <lists at alteeve.ca>:
> >
> > On 02/02/15 02:44 PM, Ivan wrote:
> > > > > I'm not sure that two (or more) LUKS partitions are identical
> > > > > given exactly the same cleartext content and the same keys.
> > > > > There must be some kind of sector randomization when writing
> > > > > data to make cryptanalysis harder, so it makes me think that
> > > > > it's not the case (that would require testing, though).
> > > > > If I'm right, I don't see how DRBD could work in that setup.
> > > > > (Or maybe I just need more sleep.)
> > > >
> > > > LUKS is working on the LV, which will be backed by the PV on
> > > > DRBD. DRBD doesn't know data, so it will simply replicate the
> > > > LUKS structure faithfully to both nodes.
> > > >
> > > > Remember, for all intents and purposes, there is only one
> > > > device/LUKS partition. DRBD is really no different from LUKS on
> > > > /dev/mdX devices in this regard.
> > >
> > > Ah, that's right - indeed, more sleep needed. I'd skipped the
> > > "clustered LVM" part and was thinking about two LUKS partitions.
> > >
> > > Sorry for the noise.
> >
> > No worries at all. When you ask a question like this, you have a
> > chance to learn a system better, so it's good. :)
> >
> > --
> > Digimer
> > Papers and Projects: https://alteeve.ca/w/
> > What if the cure for cancer is trapped in the mind of a person
> > without access to education?
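[The layering discussed above - DRBD replicating raw blocks below LVM, with LUKS on top of an LV - can be sketched as follows. This is a minimal illustration only; the names (r0, /dev/drbd0, vg0, lv_crypt, secure) are placeholders, not taken from the thread, and the clustered-VG step assumes clvmd is already running.]

```shell
# Stack: disk -> DRBD -> PV -> clustered VG -> LV -> LUKS -> filesystem.
# DRBD replicates raw blocks, so both nodes carry an identical LUKS
# header and ciphertext: there is only ever one LUKS device, mirrored.
drbdadm up r0
drbdadm primary r0                        # on the node doing the setup

pvcreate /dev/drbd0                       # the DRBD device becomes the PV
vgcreate -c y vg0 /dev/drbd0              # -c y marks the VG clustered
lvcreate -L 20G -n lv_crypt vg0

cryptsetup luksFormat /dev/vg0/lv_crypt   # LUKS sits on top of the LV
cryptsetup luksOpen /dev/vg0/lv_crypt secure
mkfs.xfs /dev/mapper/secure
```

This is why the "two LUKS partitions" worry doesn't apply: encryption happens once, above the replication layer, and DRBD copies the resulting ciphertext verbatim.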
> > _______________________________________________
> > drbd-user mailing list
> > drbd-user at lists.linbit.com
> > http://lists.linbit.com/mailman/listinfo/drbd-user
>
> --
> Digimer
> Papers and Projects: https://alteeve.ca/w/
> What if the cure for cancer is trapped in the mind of a person without
> access to education?

--
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?