[DRBD-user] how to divide shared storage network and drbd replication network

Ralf W. mrsun2001 at yahoo.de
Wed Jun 2 13:32:05 CEST 2010

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


We tested the bandwidth after splitting the shared storage traffic (iSCSI targets) off from the replication network, and we saw roughly a 35% increase in speed.
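
For what it's worth, the change itself is made in /etc/drbd.conf: the "address" lines in the two "on" sections have to point at the replication-network IPs instead of the shared-storage ones. A rough sketch, assuming you keep your bond1 addresses (10.13.0.118/119) and the same port:

resource vm3 {
    ...
    on sm-storage-1a {
        device    /dev/drbd0;
        disk      /dev/sde1;
        address   10.13.0.118:7788;   # bond1, replication only
        meta-disk internal;
    }
    on sm-storage-1b {
        device    /dev/drbd0;
        disk      /dev/sde1;
        address   10.13.0.119:7788;   # bond1, replication only
        meta-disk internal;
    }
}

The ip_drbd/IPaddr2 resource (10.255.255.205 on bond0) stays as it is, so the initiators keep talking to the iSCSI portal on the storage network while DRBD replicates over bond1. After changing the file on both nodes, "drbdadm adjust vm3" (or a disconnect/connect cycle) makes DRBD pick up the new peer addresses.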
Ralf




----- Original Message ----
From: Bart Coninckx <bart.coninckx at telenet.be>
To: drbd-user at lists.linbit.com
Cc: Ralf W. <mrsun2001 at yahoo.de>
Sent: Wed, June 2, 2010 1:28:38 PM
Subject: Re: [DRBD-user] how to divide shared storage network and drbd replication network

On Wednesday 02 June 2010 12:42:53 Ralf W. wrote:
> Hello - I have the following network configuration on both storage servers:
> bond0 --> 10.255.255.x/24 (shared storage network - here the KVM nodes
>   access the iSCSI targets exported by the HA cluster)
> bond1 --> 10.13.0.x/24 (this should be the replication link, where both
>   cluster nodes replicate/sync/update)
> eth0 --> 10.12.33.x/24 (admin network only)
> eth1 --> 10.14.33.x/24 (the HA heartbeat should go over this one)
> 
> -----> crm configure show
> node $id="e471b446-a7e2-4253-a257-bda343d7c13d" sm-storage-1b \
>     attributes standby="off"
> node $id="fe27f2e0-d551-4495-bfb9-819d31884a65" sm-storage-1a \
>     attributes standby="off"
> primitive ha_drbd ocf:linbit:drbd \
>     params drbd_resource="vm3" drbdconf="/etc/drbd.conf" \
>     op monitor interval="59s" role="Master" timeout="30s" \
>     op monitor interval="60s" role="Slave" timeout="30s" \
>     meta is-managed="true"
> primitive ip_drbd ocf:heartbeat:IPaddr2 \
>     params ip="10.255.255.205" nic="bond0" \
>     meta is-managed="true"
> primitive iscsi lsb:iscsi-target \
>     meta is-managed="true"
> primitive lvm_drbd ocf:heartbeat:LVM \
>     params volgrpname="vg_ralf1" exclusive="true" \
>     meta is-managed="true"
> group drbdd lvm_drbd iscsi ip_drbd \
>     meta target-role="Started"
> ms ms_drbd_fail ha_drbd \
>     meta master-max="1" master-node-max="1" clone-max="2" \
>     clone-node-max="1" notify="true" target-role="Started"
> colocation col_drbd inf: drbdd ms_drbd_fail:Master
> order drbd_after inf: ms_drbd_fail:promote drbdd:start
> property $id="cib-bootstrap-options" \
>     dc-deadtime="60" \
>     cluster-delay="60" \
>     stonith-enabled="false" \
>     default-action-timeout="20" \
>     stonith-timeout="60" \
>     dc-version="1.0.8-042548a451fce8400660f6031f4da6f0223dd5dd" \
>     cluster-infrastructure="Heartbeat" \
>     last-lrm-refresh="1275383370"
> <---- end
> 
> ---> ha.cf
> debugfile /var/log/ha-debug
> logfacility    local0
> mcast eth1 225.0.0.1 694 1 0
> ping 10.14.33.118
> ping 10.14.33.119
> respawn hacluster /usr/lib/heartbeat/dopd
> apiauth dopd gid=haclient uid=hacluster
> node sm-storage-1a
> node sm-storage-1b
> crm yes
> <---
> 
> 
> --> /etc/drbd.conf
> global { usage-count yes; }
>  common { syncer { rate 512M; } }
> resource vm3 {
>     protocol C;
>     startup {
>         wfc-timeout 0;
>         degr-wfc-timeout 120;
>         # become-primary-on both;
>     }
>     handlers {
>         fence-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5";
>     }
> 
>     disk {
>         on-io-error detach;
>         fencing resource-only;
>     }
>     net {
>         cram-hmac-alg sha1;
>         # allow-two-primaries;  # important for a primary/primary setup
>     }
> 
>     on sm-storage-1a {
>         device /dev/drbd0;
>         disk /dev/sde1;
>         address 10.255.255.203:7788;
>         meta-disk internal;
>     }
> 
>     on sm-storage-1b {
>         device /dev/drbd0;
>         disk /dev/sde1;
>         address 10.255.255.204:7788;
>         meta-disk internal;
>     }
> }
> <---
> 
> sm-storage-1a = 10.12.33.118 (eth0), 10.255.255.203 (bond0), 10.13.0.118 (bond1), 10.14.33.118 (eth1)
> sm-storage-1b = 10.12.33.119 (eth0), 10.255.255.204 (bond0), 10.13.0.119 (bond1), 10.14.33.119 (eth1)
> 
> 
> Question: how can I separate the shared storage network from the DRBD
> replication network? Is this a crm configuration or a ha.cf configuration?
> I'm confused. Thank you for your help.
> 
> Ralf
> 
> 
> 
> 
> _______________________________________________
> drbd-user mailing list
> drbd-user at lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user
> 


If you connect only to the NICs that are meant for iSCSI, do you then need any
further separation? IET listens on all interfaces, but as long as you connect
to certain IP addresses only, I guess only those NICs take the performance hit.
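
As a quick sanity check, something like this shows which link actually carries which traffic (the portal IP and port 7788 are the ones from the config quoted above):

# from a KVM node: which local interface is used to reach the iSCSI portal?
ip route get 10.255.255.205

# on a storage node: which TCP connections belong to DRBD (port 7788)?
netstat -tn | grep ':7788'

# rough per-interface byte counters while the load is running
grep -E 'bond0|bond1' /proc/net/dev

If the :7788 connections and the iSCSI sessions both end up on bond0 addresses, everything is still sharing one link.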


B.



      



