Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Hello,
after an upgrade from drbd 0.6.11+cvs to 0.7.8 we encountered serious
performance problems. Both nodes are identical: 2x AMD Athlon 2200 MP,
2 GB RAM, 2x 3ware 7500 IDE RAID controllers with 8x 120 GB HDs,
connected directly via Gigabit Ethernet (Intel PRO/1000, MTU 9000),
running SuSE 8.2 with kernel 2.4.21-144-smp.
After the update (internal metadata), the initial forced sync ran at only
about 4-5 MB/s, compared to about 50 MB/s with 0.6.11+cvs! Read
performance also seemed to suffer, but that was only a subjective
impression. Even after the sync had finished (> 24 hours!), write
performance showed the same poor values.
The only solution was to downgrade to 0.6 again because employees
complained about the slow server :(
What shall I do to track down the problem?
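One thing I could try myself is to measure the raw disk and network
throughput outside of DRBD, to rule out the hardware and the link.
Something along these lines (only a rough sketch; sizes and the port
number are just examples):

  # raw sequential read from the lower-level device (read-only test)
  dd if=/dev/sda5 of=/dev/null bs=1M count=4096

  # raw throughput over the Gigabit link
  # on linux2:
  nc -l -p 7000 > /dev/null
  # on linux1:
  dd if=/dev/zero bs=1M count=4096 | nc 10.20.30.2 7000

If both of those still show the expected numbers, the bottleneck should
be somewhere in the DRBD 0.7 setup itself.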
Thanks a lot,
Felix
The configuration was:
resource drbd0 {
  protocol C;

  on linux1 {
    device    /dev/drbd0;
    disk      /dev/sda5;
    address   10.20.30.1:7788;
    meta-disk internal;
  }

  on linux2 {
    device    /dev/drbd0;
    disk      /dev/sda5;
    address   10.20.30.2:7788;
    meta-disk internal;
  }

  net {
    sndbuf-size    256k;
    timeout        60;
    connect-int    18;
    ping-int       18;
    max-buffers    128;
    on-disconnect  reconnect;
    ko-count       4;
    max-epoch-size 128;
  }

  disk {
    on-io-error panic;
  }

  syncer {
    rate  100M;
    group 1;
  }

  startup {
    wfc-timeout      0;
    degr-wfc-timeout 120;
  }
}
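P.S.: One thing I noticed while pasting this: the buffer values in the
net section were simply carried over from our 0.6 config. If I remember
the 0.7 defaults correctly (I think max-buffers and max-epoch-size
default to 2048), our values are far below them, so maybe that alone is
throttling the sync? Just a guess, but something like this is what I
would try next:

  net {
    sndbuf-size    256k;
    timeout        60;
    connect-int    18;
    ping-int       18;
    # guessed values, roughly the 0.7 defaults as far as I know
    max-buffers    2048;
    max-epoch-size 2048;
    on-disconnect  reconnect;
    ko-count       4;
  }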