Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Paul,
I used classic Heartbeat instead of Pacemaker because I found it simpler to
configure; however, the one failover I have had wasn't very clean and
required some manual intervention. If your situation allows, you can
also use purely manual failover (switch resources to primary, configure
sub-interfaces, etc.), which I did for a while, as my situation does not
require 100% uptime; zero data loss is the focus.
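
For illustration, a manual failover could look roughly like the
following. This is a minimal sketch, not my exact procedure: the
resource name "restored" and device /dev/drbd2 are from earlier in this
thread, while the VIP 192.168.2.40/24, the interface eth0, and the
mount point /srv/vms are assumptions you would adjust:

    # On the outgoing primary, if it is still reachable:
    umount /srv/vms                          # stop services, release the device
    drbdadm secondary restored               # demote the DRBD resource
    ip addr del 192.168.2.40/24 dev eth0     # drop the service VIP

    # On the node taking over:
    drbdadm primary restored                 # promote the DRBD resource
    mount /dev/drbd2 /srv/vms                # bring the data back online
    ip addr add 192.168.2.40/24 dev eth0     # raise the service VIP
    arping -c 3 -U -I eth0 192.168.2.40      # gratuitous ARP so the LAN learns the move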
That said, I can see how a properly configured Pacemaker setup would
provide better failover handling and redundancy with DRBD. Eventually
my goal is to move to a Pacemaker configuration; I just haven't had
time to learn it yet.
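
For what it's worth, classic Heartbeat can also float a VIP without
Pacemaker, via /etc/ha.d/haresources. A minimal sketch, assuming the
"restored" resource from your config, a hypothetical VIP of
192.168.2.40, and a hypothetical mount point /srv/vms, with kvm-srv-01
as the preferred node:

    kvm-srv-01 drbddisk::restored \
        Filesystem::/dev/drbd2::/srv/vms::ext4 \
        IPaddr::192.168.2.40/24/eth0

Heartbeat promotes the DRBD resource, mounts the filesystem, and raises
the VIP on whichever node is active, left to right, and tears them down
in reverse on failover.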
Thanks,
Aaron
On 11/5/2013 3:47 PM, Paul O'Rorke wrote:
> Thanks for that, Aaron,
>
> I'm looking at this again after a hiatus.
>> you will want to use a VIP address instead of the IP address of just
>> 1 node.
> Can this be done without Pacemaker? The reading I've done so far is
> all in relation to using Pacemaker. I can alias an IP on either node,
> but I'm unclear on how to 'move' the virtual IP from node 1 to node 2
> and back without implementing Pacemaker, etc.
>
> Now, after adding the third remote site, I am planning on implementing
> Pacemaker/fencing - perhaps I should be looking at doing both at the
> same time? I must confess that when I see messages about split brain I
> get a little nervous about the reliability. It seems that allowing
> multiple primaries can actually make the setup less robust. Maybe I'm
> missing something, and a proper setup with fencing/STONITH is the way
> to go.
>
> I'm pretty new to all this and there is much reading to do, so I
> apologise in advance if this is a silly question.
>
> *Paul O'Rorke*
> Tracker Software Products
> paul at tracker-software.com
> http://www.tracker-software.com/downloads/
>
> On 9/27/2013 10:26 AM, Aaron Johnson wrote:
>> Paul,
>>
>> That config looks right; however, you will want to use a VIP address
>> instead of the IP address of just one node. This IP will move between
>> the two local nodes to whichever node is active; otherwise, when the
>> node whose IP is in the local resource section is down, you will not
>> get updates to the stacked offsite node.
>>
>> Also be aware of private vs. public IP space, how the IPs may appear
>> when NAT comes into play, and which IPs need to appear where in the
>> config. I avoid this by having my two locations connected by VPN, so
>> all addresses are direct, with no NAT.
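>>
>> As a rough sketch of the VIP point: in the stacked section you
>> reference the floating address rather than either node's own IP. The
>> address below is hypothetical; whatever VIP your cluster floats
>> between the two local nodes goes here:
>>
>>     stacked-on-top-of restored {
>>         device  /dev/drbd10;
>>         address 192.168.2.40:7788;   # floating VIP, follows the active node
>>     }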
>>
>> Aaron
>>
>>
>>
>> On 9/26/2013 4:06 PM, Paul O'Rorke wrote:
>>> Thanks for that, Aaron,
>>>
>>> I'm looking at this page
>>> (http://www.drbd.org/users-guide/s-three-nodes.html) and am not
>>> quite sure I understand how to merge it with my current config.
>>> Currently I have 5 resources using protocol C on my 2-node local
>>> cluster.
>>>
>>> For the sake of this exercise I will consider setting up one of
>>> these resources with a third node, using a stacked resource and
>>> protocol A; then, hopefully, once that is working, I can apply the
>>> same approach to the other resources.
>>>
>>> In the example provided it appears that I need to define both the
>>> lower-level and the stacked resource in the one .res file. I have
>>> the following two config files:
>>>
>>> */etc/drbd.d/global_common.conf*
>>> global {
>>>     usage-count yes;
>>> }
>>>
>>> common {
>>>     protocol C;
>>> }
>>>
>>> and
>>>
>>> */etc/drbd.d/restored.res*
>>> resource restored {
>>>     device    /dev/drbd2;
>>>     disk      /dev/VirtualMachines/restored;
>>>     meta-disk internal;
>>>
>>>     on kvm-srv-01 {
>>>         address 192.168.2.41:7789;
>>>     }
>>>     on kvm-srv-02 {
>>>         address 192.168.2.42:7789;
>>>     }
>>> }
>>>
>>>
>>> can I just tack something like this onto the end of
>>> */etc/drbd.d/restored.res*?
>>>
>>> resource restored-U {
>>>     net {
>>>         protocol A;
>>>     }
>>>
>>>     stacked-on-top-of restored {
>>>         device  /dev/drbd10;
>>>         address 192.168.3.41:7788;
>>>     }
>>>
>>>     on buckingham {
>>>         device    /dev/drbd10;
>>>         disk      /dev/hda6;
>>>         address   <fixed IP at backup node>:7788; # public IP of the backup node
>>>         meta-disk internal;
>>>     }
>>> }
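>>>
>>> From the guide I gather the stacked resource is then initialized and
>>> managed with drbdadm's --stacked option, something like the commands
>>> below (please correct me if I have this wrong):
>>>
>>>     drbdadm create-md --stacked restored-U
>>>     drbdadm up --stacked restored-U
>>>     drbdadm primary --stacked restored-U   # on the active local node only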
>>>
>>> I am also wondering: since I have a spare NIC on my local nodes,
>>> would I be better off using that to connect to my off-site resource,
>>> or using the LAN-connected NIC? In the example above I used a
>>> different subnet for the off-site link and called the off-site
>>> machine 'buckingham'.
>>>
>>> I hope my question makes sense; I'm still finding my feet here.
>>>
>>> Please and thanks
>>>
>>> *Paul O'Rorke*
>>> Tracker Software Products
>>> paul at tracker-software.com
>>>
>>> On 9/25/2013 2:21 PM, Aaron Johnson wrote:
>>>> Yes, you can add the stacked resource later; I have done this same thing several times now by making the device slightly larger first and using internal metadata.
>>>>
>>>> Also, I have a DR site using protocol C with pull-ahead enabled, without DRBD Proxy. The main site and DR site are connected via cable modem connections (10 Mbit up / 20 Mbit down on both sides). The only thing I have trouble with is needing to add a large amount of data (50+ GB), which in my case is fairly rare (the daily norm is ~2 GB); then it can take days or weeks to sync up fully again. Also, I used truck-based updates for the initial setup of ~1 TB to avoid having to pull all that over the internet link.
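>>>>
>>>> For reference, pull-ahead is turned on in the resource's net
>>>> section. A minimal sketch; the threshold values below are
>>>> placeholders to tune for your own link, not what I run:
>>>>
>>>>     net {
>>>>         on-congestion      pull-ahead;   # fall behind instead of blocking writes
>>>>         congestion-fill    400M;         # placeholder: trigger once this much data is buffered
>>>>         congestion-extents 1000;         # placeholder: or once this many AL extents are hot
>>>>     }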
>>>>
>>>> Thanks,
>>>> AJ
>>>>
>>>>> On Sep 25, 2013, at 7:54 AM, Lionel Sausin <ls at numerigraphe.com> wrote:
>>>>>
>>>>> On 25/09/2013 08:10, roberto.fastec at gmail.com wrote:
>>>>>> The purpose you are talking about sounds more like the purpose DRBD Proxy was developed for:
>>>>>>
>>>>>> www.linbit.com/en/products-and-services/drbd-proxy
>>>>> Yes and no; my understanding is that DRBD Proxy lets your production cluster run faster than the connection speed by acting like a write cache.
>>>>> But if I'm not mistaken, you still need a stacked configuration for 3-node setups until v9.0 is released.
>>>>> Someone please correct me if that's wrong, of course.
>>>>>
>>>>> Lionel Sausin