Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
> > Hi,
> >
> > I have DRBD 0.7.22 running on two SLES 10 SP1 servers. The partitions for
> > DRBD are located on an LVM volume. The initial sync seems to be very
> > slow:
> >
> > 0: cs:SyncSource st:Primary/Secondary ld:Consistent
> > ns:598700 nr:0 dw:0 dr:601844 al:0 bm:36 lo:0 pe:198 ua:786 ap:0
> > [>...................] sync'ed: 0.1% (604824/605408)M
> > finish: 215:02:54 speed: 708 (684) K/sec
> >
> > I use dedicated gigabit NICs for this, and have tried a straight-through
> > cable, a crossover cable, and a switch. "ethtool" reports 1000 Mbit/s.
> > File copies with scp exceed 80 MB/sec. The "rate" parameter in drbd.conf
> > is set to 33M, a value I derived from the drbd.org site.
> >
> > On the secondary node, "top" shows this:
> >
> > Cpu0 : 0.0%us, 0.3%sy, 0.0%ni, 0.0%id, 99.3%wa, 0.0%hi, 0.3%si, 0.0%st
> > Cpu1 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
> > Cpu2 : 0.0%us, 0.0%sy, 0.0%ni, 0.0%id,100.0%wa, 0.0%hi, 0.0%si, 0.0%st
> > Cpu3 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
> >
> > Which seems to indicate that two CPUs are spending nearly 100% of their
> > time in I/O wait.
> >
> > Does anyone have any idea why?
> >
> > Thank you!
> >
> >
> > Bart
>
> All,
>
> inspired by another thread, I changed the network cards for the DRBD sync
> link, hoping this would fix things, but the result is the same.
>
>
> Does anyone have any ideas whatsoever?
>
> Any help is much appreciated!
>
> Thank you!
"Help yourself and you will be helped". :-) A thread about cciss based RAID
controllers and DRBD revealed that these cards require some special settings
in drbd.con under "net":
sndbuf-size 512k;
max-buffers 20480;
max-epoch-size 16384;
unplug-watermark 20480;
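For reference, here is roughly how these fit into a resource definition in
my drbd.conf; the resource name, hostnames, devices and addresses below are
just placeholders, and the 33M rate is the value from my first mail:

  resource r0 {
    protocol C;

    net {
      sndbuf-size      512k;   # larger TCP send buffer
      max-buffers      20480;  # more receive buffers on the peer
      max-epoch-size   16384;  # more write requests per barrier epoch
      unplug-watermark 20480;  # let more requests queue before kicking the disk
    }

    syncer {
      rate 33M;                # resync bandwidth cap
    }

    on node1 {
      device    /dev/drbd0;
      disk      /dev/vg0/lv_drbd0;
      address   192.168.1.1:7788;
      meta-disk internal;
    }
    on node2 {
      device    /dev/drbd0;
      disk      /dev/vg0/lv_drbd0;
      address   192.168.1.2:7788;
      meta-disk internal;
    }
  }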
These settings gave me a whopping 13000 KB/sec instead of 700 KB/sec. Oddly,
after going back to the gigabit NICs I still don't get what I think should be
maximum speed, but perhaps I need to tweak some more ...
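For anyone trying the same: after editing drbd.conf on both nodes, "drbdadm
adjust all" should pick up the new net settings without restarting DRBD, if
I remember correctly.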
Thank you,
Bart