[DRBD-user] slow synchronisation

Christian Garling christian at cg-networks.de
Fri Sep 3 17:41:55 CEST 2004

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


> On Thursday 02 September 2004 21:36, Christian Garling wrote:
>> Hello people,
>>
>> I have a problem with synchronisation. I have two servers with Escalade
>> 7506-4P RAID controllers and three Maxtor Barracuda 160GB hard disks.
>> They are connected through two Intel EEPro 1000 gigabit ethernet cards
>> (bonding mode 0). When I comment out the rate setting in drbd.conf, the
>> initial sync runs at the default value of 250KB/s. But when I use the
>> rate setting, it only runs at about 12KB/s. I tested the connection with
>> the iptraf monitor and everything seems to be ok. Here is my current
>> configuration; it is very basic at the moment.
>>
>> resource r0 {
>>   protocol C;
>>   incon-degr-cmd "halt -f";
>>
>>   startup {
>>     wfc-timeout  0;
>>     degr-wfc-timeout 120;
>>  }
>>
>>   disk {
>>     on-io-error   panic;
>>   }
>>
>>   net {
>>     timeout       60;    #  6 seconds  (unit = 0.1 seconds)
>>     connect-int   10;    # 10 seconds  (unit = 1 second)
>>     ping-int      10;    # 10 seconds  (unit = 1 second)
>>     ko-count 5;
>>     on-disconnect reconnect;
>>   }
>>
>>   syncer {
>>     rate 10M;
>>   }
>>
>>   on node01 {
>>     device     /dev/drbd0;
>>     disk       /dev/sda1;
>>     address    10.0.0.10:7788;
>>     meta-disk  internal;
>>   }
>>
>>   on node02 {
>>     device    /dev/drbd0;
>>     disk      /dev/sda1;
>>     address   10.0.0.20:7788;
>>     meta-disk internal;
>>   }
>> }
>>
>
> I would really enjoy seeing such a cluster in real life (I mean one
> that does only 12KB/s).
>
> I installed a csync2/DRBD/heartbeat cluster yesterday. The two boxes
> had 3Ware Escalade 9xxxx controllers.
>
> We did some primary-crash simulation tests and got a 20MB/sec resync
> simultaneously on two DRBD resources resyncing in parallel.
> (The rate was set to 20M. Probably these machines would do even more.)
>
> Here are the usual questions:
> Have you tested the bandwidth of your network link? How? What numbers
> do you get?
> Have you tested the bandwidth of your disks? How? What numbers
> do you get?
> Which kernel? Which DRBD release? What hardware?
> Have you tested without bonding?
> Are you using jumbo frames? What MTU?
>
> -Philipp
> --
> : Dipl-Ing Philipp Reisner                      Tel +43-1-8178292-50 :
> : LINBIT Information Technologies GmbH          Fax +43-1-8178292-82 :
> : Schönbrunnerstr 244, 1120 Vienna, Austria    http://www.linbit.com :

Hello,

Hardware: Tyan Thunder i7501 Pro, 512MB RAM, Intel Xeon 2400, 3Ware
Escalade 7506-4P RAID controller, 3x Seagate Barracuda 160GB (RAID 5), 2x
Intel EtherExpress 1000MBit (no jumbo frames, MTU at its default value)

Software: Debian GNU/Linux Woody, Kernel 2.4.26, DRBD 0.7.3

I tested the network link with iptraf. I sent a 190MB package over the two
gigabit cards and measured a speed of 12MB/s, so I think the network cards
are alright. I tried bonding modes 0 and 1. I can't test without bonding,
because the RAID on the second node is rebuilding at the moment.
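
To measure the raw TCP throughput of the link itself, independent of the
file copy, one rough sketch would be the following (assuming the node
addresses from the config above, with dd and the Debian "traditional"
netcat on both boxes; port 5001 is just an arbitrary free port):

  # on node02 (10.0.0.20): listen on the test port and discard the data
  nc -l -p 5001 > /dev/null

  # on node01: push 1GB of zeros across the link; -q 0 makes netcat exit
  # once stdin is exhausted, so "time" covers the whole transfer, and
  # 1024MB divided by the elapsed seconds is the usable MB/s
  time dd if=/dev/zero bs=1M count=1024 | nc -q 0 10.0.0.20 5001

A dedicated tool such as ttcp or iperf, if installed, would report the
throughput directly.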

I haven't tested the hard disks yet.
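
Once the rebuild is finished, a rough sequential figure could be had with
the standard tools. This is only a sketch: hdparm reads from the raw
device, and the write test goes to a throwaway file (here /tmp is assumed
to live on the RAID array; adjust the path otherwise), not to the
/dev/sda1 partition that DRBD uses:

  # buffered sequential read speed of the array
  hdparm -t /dev/sda

  # rough sequential write speed; the number is optimistic because of the
  # page cache, so treat it as an upper bound
  time dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024
  sync
  rm /tmp/ddtest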

Greetings,

Christian Garling




