[DRBD-user] Adding a third node

Paul O'Rorke paul at tracker-software.com
Wed Oct 16 23:43:03 CEST 2013

Hi Aaron,

thanks for the advice. I have set up OpenVPN between the nodes and am 
looking to set things up as you suggested, using the tunnel to avoid the 
whole NAT issue.  This way I can also have a single IP for the tunnel and 
keep using my local subnet for DRBD.
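
For what it's worth, the tunnel itself is just a plain point-to-point 
OpenVPN link with a static key.  The kvm-srv-03 end looks roughly like the 
sketch below (simplified from memory; the port, key path and keepalive 
values are only illustrative, and the peer address is the masked public IP 
of the main site):

    # /etc/openvpn/drbd-tunnel.conf on kvm-srv-03 (rough sketch, not verbatim)
    dev tun0
    proto udp
    port 1194
    remote <public IP of the main site>   # placeholder, masked elsewhere in this mail
    ifconfig 172.16.0.41 172.16.0.42      # local tunnel IP, then peer tunnel IP
    secret /etc/openvpn/static.key        # pre-shared static key (path is illustrative)
    keepalive 10 60
    persist-tun
    persist-key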

That raises a new question: how should I configure */etc/network/interfaces*? 
Obviously I need the external IP on eth0, and since I'm using KVM I also 
need a bridged interface for the VMs.  I currently have 5 resources that I 
want to sync from my 2-node cluster through this stacked resource.  Can I 
perhaps get a peek at */etc/network/interfaces* from your node that is 
off-site and behind the VPN?  (My own rough first attempt is sketched 
further down, after the remote node's current config.)

With the tunnel up, the following interfaces are active:

    root at kvm-srv-03:~# ifconfig
    eth0      Link encap:Ethernet  HWaddr 84:2b:2b:40:1b:2f
               inet addr:xxx.xxx.xxx.xxx  Bcast:xxx.xxx.xxx.xxx  Mask:255.255.255.252
               inet6 addr: fe80::862b:2bff:fe40:1b2f/64 Scope:Link
               UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
               RX packets:650589 errors:0 dropped:0 overruns:0 frame:0
               TX packets:43 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:1000
               RX bytes:42452974 (40.4 MiB)  TX bytes:7327 (7.1 KiB)
               Interrupt:16 Memory:da000000-da012800

    eth1      Link encap:Ethernet  HWaddr 84:2b:2b:40:1b:30
               inet addr:yyy.yyy.yyy.yyy  Bcast:yyy.yyy.yyy.yyy  Mask:255.255.255.0
               inet6 addr: fe80::862b:2bff:fe40:1b30/64 Scope:Link
               UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
               RX packets:649179 errors:0 dropped:0 overruns:0 frame:0
               TX packets:2322 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:1000
               RX bytes:42322482 (40.3 MiB)  TX bytes:325348 (317.7 KiB)
               Interrupt:17 Memory:dc000000-dc012800

    lo        Link encap:Local Loopback
               inet addr:127.0.0.1  Mask:255.0.0.0
               inet6 addr: ::1/128 Scope:Host
               UP LOOPBACK RUNNING  MTU:16436  Metric:1
               RX packets:123 errors:0 dropped:0 overruns:0 frame:0
               TX packets:123 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:0
               RX bytes:8626 (8.4 KiB)  TX bytes:8626 (8.4 KiB)

    tun0      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
               inet addr:172.16.0.41  P-t-P:172.16.0.42 Mask:255.255.255.255
               UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500 Metric:1
               RX packets:11 errors:0 dropped:0 overruns:0 frame:0
               TX packets:11 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:100
               RX bytes:924 (924.0 B)  TX bytes:924 (924.0 B)


On my local nodes, */etc/network/interfaces* looks like this:

    root at kvm-srv-01:~# cat /etc/network/interfaces
    # The loopback network interface
    auto lo
    iface lo inet loopback

    # The primary network interface
    auto eth0
    iface eth0 inet manual

    # Network bridge
    auto br0
    iface br0 inet static
    address 192.168.0.30
    network 192.168.0.0
    netmask 255.255.255.0
    broadcast 192.168.0.255
    gateway 192.168.0.254
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0

    # The secondary network interface used for DRBD resource replication
    auto eth1
    allow-hotplug eth1
    iface eth1 inet static
    address 192.168.2.31
    network 192.168.2.0
    netmask 255.255.255.0
    broadcast 192.168.2.255

    auto eth1:0
    allow-hotplug eth1:0
    iface eth1:0 inet static
    address 192.168.2.41
    netmask 255.255.255.0

    auto eth1:1
    allow-hotplug eth1:1
    iface eth1:1 inet static
    address 192.168.2.51
    netmask 255.255.255.0

    auto eth1:2
    allow-hotplug eth1:2
    iface eth1:2 inet static
    address 192.168.2.61
    netmask 255.255.255.0

    auto eth1:3
    allow-hotplug eth1:3
    iface eth1:3 inet static
    address 192.168.2.71
    netmask 255.255.255.0

    # The tertiary network interface - DMZ
    auto eth2
    iface eth2 inet manual

    # Network bridge - DMZ
    auto br2
    iface br2 inet static
    address 192.168.4.30
    network 192.168.4.0
    netmask 255.255.255.0
    broadcast 192.168.4.255
    # static routing
    post-up route add -net 0.0.0.0 gw 192.168.4.254
    pre-down route del -net 0.0.0.0 gw 192.168.4.254
    dns-nameservers 64.59.160.13 64.59.160.15
    bridge_ports eth2
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0

On the remote node I still have only my 2 external IPs configured, from 
the basic Debian setup:

    root at kvm-srv-03:~# cat /etc/network/interfaces
    # This file describes the network interfaces available on your system
    # and how to activate them. For more information, see interfaces(5).

    # The loopback network interface
    auto lo
    iface lo inet loopback

    # The primary network interface
    allow-hotplug eth0
    iface eth0 inet static
             address xxx.xxx.xxx.xxx
             netmask 255.255.255.252
             gateway xxx.xxx.xxx.xxx
             dns-nameservers 64.59.160.15 64.59.161.69

    # The DRBD network interface
    allow-hotplug eth1
    iface eth1 inet static
             address yyy.yyy.yyy.yyy
             netmask 255.255.255.0
             gateway yyy.yyy.yyy.yyy
             dns-nameservers 64.59.160.15 64.59.161.69
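
For the VM side of things, my current thinking is simply to move eth1's 
address onto a bridge (the same pattern as br0 on kvm-srv-01) once DRBD 
replication is running over the tunnel.  Something like the following, 
though it is only a rough, untested sketch and the addressing is still the 
masked placeholder:

    # rough sketch for kvm-srv-03 -- untested, my current thinking only
    # eth0 keeps the public /30 exactly as above

    # enslave eth1 to a bridge and move its address onto br0,
    # same pattern as on kvm-srv-01
    auto eth1
    iface eth1 inet manual

    auto br0
    iface br0 inet static
             address yyy.yyy.yyy.yyy
             netmask 255.255.255.0
             bridge_ports eth1
             bridge_stp off
             bridge_fd 0
             bridge_maxwait 0

    # tun0 is created by OpenVPN when it starts, so it needs no stanza here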

What I'm really stuck on, though, is how to alias the DRBD IPs through the 
tunnel on kvm-srv-03 the way I did directly on eth1 on kvm-srv-01.
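
One thought: rather than trying to alias addresses on tun0 at all, maybe I 
can simply give each stacked resource its own port on the single pair of 
tunnel addresses.  For the 'restored' resource that would look roughly like 
the sketch below (untested; the backing device on kvm-srv-03 is a 
placeholder, and as Aaron pointed out the 172.16.0.42 end really ought to 
be a floating address that follows whichever local node is primary):

    # rough sketch only -- to be appended to /etc/drbd.d/restored.res
    resource restored-U {
       net {
         protocol A;
       }

       stacked-on-top-of restored {
         device     /dev/drbd10;
         address    172.16.0.42:7788;   # tunnel IP at the main site -- must live on the active node
       }

       on kvm-srv-03 {
         device     /dev/drbd10;
         disk       /dev/VirtualMachines/restored;   # placeholder backing device
         address    172.16.0.41:7788;                # kvm-srv-03's tunnel IP
         meta-disk  internal;
       }
    }

Each of the other four resources would then just use a different port on 
the same pair of tunnel addresses instead of its own alias IP.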

Am I making any sense here?  I think I'm confusing myself...

*Paul O’Rorke*
Tracker Software Products
paul at tracker-software.com


On 9/27/2013 10:26 AM, Aaron Johnson wrote:
> Paul,
>
> That config looks right; however, you will want to use a VIP address 
> instead of the IP address of just one node.  This IP will move between 
> the 2 local nodes to whichever node is active; otherwise, when the node 
> whose IP is in the local resource is down, you will not get updates to 
> the stacked offsite node.
>
> Also be aware of private vs. public IP space and how the IPs may 
> appear when NAT comes into play and which IPs need to appear where in 
> the config.  I avoid this by having my 2 locations connected by VPN so 
> all addresses are direct, no NAT.
>
> Aaron
>
>
>
> On 9/26/2013 4:06 PM, Paul O'Rorke wrote:
>> Thanks for that Aaron,
>>
>> I'm looking at this page 
>> http://www.drbd.org/users-guide/s-three-nodes.html and am not quite sure 
>> I understand how to merge it with my current config.  Currently I 
>> have 5 resources using Protocol C on my 2-node local cluster.
>>
>> For the sake of this setup I will consider setting up one of these 
>> resources with a third node, using a stacked resource and protocol A; 
>> then hopefully, once that is working, I can apply the same approach to 
>> the other resources.
>>
>> In the example provided it appears that I need to define all three 
>> resources in the one .res file.  I have the following 2 config files:
>>
>> */etc/drbd.d/global_common.conf*
>> global {
>>         usage-count yes;
>> }
>> common {
>>         protocol C;
>> }
>>
>> and
>>
>> */etc/drbd.d/restored.res*
>> resource restored {
>>         device    /dev/drbd2;
>>         disk        /dev/VirtualMachines/restored;
>>         meta-disk internal;
>>         on kvm-srv-01 {
>>             address 192.168.2.41:7789;
>>         }
>>         on kvm-srv-02 {
>>             address 192.168.2.42:7789;
>>         }
>> }
>>
>>
>> Can I just tack something like this onto the end of 
>> */etc/drbd.d/restored.res*?
>>
>> resource restored-U {
>>    net {
>>      protocol A;
>>    }
>>
>>    stacked-on-top-of restored {
>>      device     /dev/drbd10;
>>      address    192.168.3.41:7788;
>>    }
>>
>>    on buckingham {
>>      device     /dev/drbd10;
>>      disk       /dev/hda6;
>>      address    <fixed IP at backup node>:7788; # Public IP of the backup node
>>      meta-disk  internal;
>>    }
>> }
>>
>> I am also wondering, since I have a spare NIC on my local nodes, 
>> whether it would be better to use that to connect to my off-site 
>> resource, or to use the LAN-connected NIC.  In the example above I used 
>> a different subnet for the off-site link and called the off-site 
>> machine 'buckingham'.
>>
>> I hope my question makes sense, still finding my feet here.
>>
>> Please and thanks
>>
>> *Paul O’Rorke*
>> Tracker Software Products
>> paul at tracker-software.com
>>
>> On 9/25/2013 2:21 PM, Aaron Johnson wrote:
>>> Yes, you can add the stacked resource later; I have done this same thing several times now by making the device slightly larger first and using internal metadata.
>>>
>>> Also, I have a DR site using protocol C with pull-ahead enabled, without using DRBD Proxy.  The main site and DR site are connected via cable modem connections (10 Mbit up / 20 Mbit down on both sides).  The only thing I have trouble with is when I need to add a large amount of data (50+ GB), which in my case is fairly rare (the daily norm is ~2 GB); then it can take days or weeks to sync up fully again.  Also, I used truck-based updates for the initial setup of ~1 TB to avoid having to pull all that over the internet link.
>>>
>>> Thanks,
>>> AJ
>>>
>>>> On Sep 25, 2013, at 7:54 AM, Lionel Sausin <ls at numerigraphe.com> wrote:
>>>>
>>>> On 25/09/2013 08:10, roberto.fastec at gmail.com wrote:
>>>>> The purpose you are talking about sounds more like the purpose DRBD Proxy was developed for:
>>>>>
>>>>> www.linbit.com/en/products-and-services/drbd-proxy
>>>> Yes and no: my understanding is that DRBD Proxy lets your production cluster run faster than the connection speed by acting like a write cache.
>>>> But if I'm not mistaken you still need a stacked configuration for 3-node setups until v9.0 is released.
>>>> Someone please correct me if that's wrong of course.
>>>>
>>>> Lionel Sausin
>>
>
