Christian,

Some more information... What is the filesystem type and block size of the
DRBD devices? If you have a RAID: the level, number of disks, chunk size...?

You can test:

  dd if=/dev/nbX of=/dev/null
    (check iostat)

  dd if=/dev/zero of=/mount_drbd_device/file_name count=xxxxx bs=block_size_of_drbd_device
    (check iostat; normally you see I/O on each node)

  rsync from the DRBD device of one node to a RAM filesystem and to a disk
  filesystem of the other node, once over the DRBD link and once over
  another link.

Best regards.

Francis

Francis SOUYRI wrote:
> Hello Christian,
>
> Christian Hammers wrote:
>
>> Hello
>>
>> On Wed, Jan 21, 2004 at 01:34:31PM +0100, Lars Ellenberg wrote:
>>
>>>> Hm, as I have to reboot the machines anyway due to a problem with a
>>>> NIC (which is not involved in DRBD), I will probably rather
>>>> downgrade to 0.6.4 and see what happens there.
>>>
>>> Yes, you are right, this would be nice to know, though I doubt
>>> that it makes a difference.
>>> Anyway, "never change more than one thing at a time" :)
>>
>> The third night spending my time with this !"§% system...
>> First, I switched back to 0.6.4, but the SyncAll speed stayed slow,
>> so I disconnected the server and rebooted into 0.6.10 again.
>>
>> Then I commented out the ts-size=5000 parameter to reset it to the
>> default of 256. That didn't gain me much; the speed even dropped to
>> about 1100 KB/s :-(
>>
>> So the only things that could have an influence and were not yet tried
>> are:
>> - kernel change from 2.4.23 to 2.4.24 due to the security issue
>
> Vanilla kernel? Distribution kernel?
>
>> - new CPUs+mainboards (Intel P4 & AMD Athlon, both about 2.6 GHz)
>> - new internal NICs, both Intel Gigabit with the e1000.o driver
>
> Could you create a RAM filesystem on each node and start an rsync?
> Do you have the possibility to test other NICs?
>> As somebody asked me for it, I have the iostat and vmstat lines of both
>> systems here:
>>
>> [the primary]
>>
>> Device:    tps    Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
>> dev8-0   58.00       6840.00         8.00       6840          8
>>
>> procs         memory               swap     io        system       cpu
>>  r b w  swpd   free   buff  cache  si so   bi  bo    in   cs  us sy id
>> ...
>>  0 4 7     0 592748  62792 177188   0  0 3780 188   762  935   0  2 98
>>
>> [the secondary]
>>
>> Device:     tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
>> dev8-0   336.00         0.00      6496.00          0       6496
>>
>> procs         memory               swap     io        system       cpu
>>  r b w  swpd   free   buff  cache  si so   bi   bo   in    cs  us sy id
>> ...
>>  0 0 0     0 983316   4876  18260   0  0    0 3668  929  1332   0  5 95
>>
>>>>>> 0: cs:SyncingAll st:Secondary/Primary ns:0 nr:1435652 dw:1435652
>>>>>>    dr:0 pe:0 ua:15
>>>>>>    [=====>..............] sync'ed: 26.8% (3668/5004)M
>>>>>>    finish: 0:47:57h speed: 1,319 (1,313) K/sec
>>>>>> 1: cs:SyncingAll st:Secondary/Primary ns:0 nr:1367248 dw:1367248
>>>>>>    dr:0 pe:0 ua:15
>>>>>>    [=====>..............] sync'ed: 26.7% (3672/5004)M
>>>>>>    finish: 0:47:40h speed: 1,336 (1,310) K/sec
>
> Could you give the config of each server (CPU, memory, disk adapter,
> NIC...) and the output on each node of:
>
>   cat /proc/version
>   cat /proc/drbd
>   cat /etc/drbd.conf
>   cat /var/lib/drbd/drbd.conf.parsed
>   /sbin/drbdsetup /dev/nb0 show
>   /sbin/drbdsetup /dev/nb1 show
>
> Start only one sync at a time:
>
>   /sbin/drbdsetup /dev/nbX replicate
>
> During the sync, check the output of:
>
>   while true ; do cat /proc/drbd; sleep 1; done
>   ps -ef
>   iostat -k 1
>   vmstat 1
>
> I have 3 clusters running Red Hat 9, heartbeat and DRBD (2x100 Mb/s NIC
> bonding) without problems; if you want, I can send you the configs.
>
>> bye & thanks for any help,
>>
>> -christian-
>>
>> P.S.: Changing the nice level below zero is not a good idea, as the
>> system becomes quite laggy without any visible speed gain.
>
> Best regards.
>
> Francis
>
> _______________________________________________
> drbd-user mailing list
> drbd-user at lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user
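[Editor's note] The write test Francis suggests can be sketched as a small
script. On a real cluster you would point it at the mounted DRBD filesystem
with its actual block size (e.g. /mnt/drbd0, which is a placeholder, not a
path from the thread) and watch `iostat -k 1` on both nodes, where writes
should appear on each peer; here a /tmp file stands in so the sketch is
safe to run anywhere.

```shell
#!/bin/sh
# Write-throughput sketch. On the real setup, replace TESTFILE with a
# file on the mounted DRBD device, e.g. /mnt/drbd0/testfile (placeholder),
# and run `iostat -k 1` on both nodes while it writes.
TESTFILE=/tmp/drbd_write_test.$$
BS=4096        # use the block size of the filesystem on the DRBD device
COUNT=2560     # 2560 * 4 KiB = 10 MiB of test data

dd if=/dev/zero of="$TESTFILE" bs=$BS count=$COUNT 2>/dev/null

# Verify the expected number of bytes actually reached the filesystem.
SIZE=$(wc -c < "$TESTFILE")
echo "wrote $SIZE bytes"
rm -f "$TESTFILE"
```

Timing the dd run (e.g. with `time`) against the iostat figures on the peer
node shows whether the bottleneck is the local disk or the replication link.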
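[Editor's note] To go with the `while true ; do cat /proc/drbd; sleep 1; done`
monitoring loop Francis suggests, here is a small sketch that pulls the
current resync speed out of a /proc/drbd status line. The sample line is
copied from Christian's output earlier in the thread; on a live node you
would feed it `grep 'speed:' /proc/drbd` instead.

```shell
#!/bin/sh
# Extract the current sync speed (K/sec) from a /proc/drbd status line.
# Sample line taken verbatim from the thread; replace with live output:
#   LINE=$(grep 'speed:' /proc/drbd | head -n 1)
LINE="   finish: 0:47:57h speed: 1,319 (1,313) K/sec"

# Keep only the first speed figure and drop the thousands separator.
SPEED=$(printf '%s\n' "$LINE" | sed -n 's/.*speed: \([0-9,]*\) .*/\1/p' | tr -d ,)
echo "current sync speed: $SPEED K/sec"
```

Logging this value once per second makes it easy to see whether the ~1300
K/sec plateau discussed in the thread is steady or fluctuating.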