Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
I have been working on a DRBD setup but I am getting really slow sync
performance. After digging around in the lists for a while I still
don't have any idea what is causing the problem, so I was hoping
someone could give me some help.
My setup consists of two identical Dell PowerEdge 2650 servers.
Each has a RAID1 of two 300GB 10K disks that I will be replicating
between the machines with DRBD. The RAID1 is provided by the built-in
PERC 3/Di (aacraid) controller. The replication link runs over the two
onboard GigE NICs between the servers, using the bonding driver in
active-backup (failover) mode.
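The bonding setup is basically the stock CentOS one, roughly like this
(typed from memory, so the exact values, the netmask in particular, are
approximate):

    # /etc/modprobe.conf
    alias bond0 bonding
    options bond0 mode=active-backup miimon=100

    # /etc/sysconfig/network-scripts/ifcfg-bond0 (on nas1)
    DEVICE=bond0
    IPADDR=10.180.0.10
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=none

    # ifcfg-eth0 and ifcfg-eth1 are both enslaved to the bond
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none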
I have run iperf on the link between the servers and can get 75 -
98 MBytes/sec without problems.
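For what it's worth, the iperf test was just the basic client/server
run, something like:

    # on nas2
    iperf -s

    # on nas1
    iperf -c 10.180.0.20 -t 30 -f M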
Running a straight dd on the drives I get about 34 MBytes/sec on
writes.
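That number came from a plain sequential write straight to the
partition (done before the partition was handed over to DRBD), roughly:

    dd if=/dev/zero of=/dev/sdb1 bs=1M count=2048 oflag=direct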
When I set up DRBD, the initial sync runs at an average of 3 MBytes/sec.
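That figure is just what I see while watching the resync progress,
e.g.:

    watch -n 1 cat /proc/drbd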
I have verified that DRBD is using the correct interfaces between the
machines, and I have tried changing al-extents, sndbuf-size,
max-buffers, max-epoch-size, unplug-watermark and so on as others have
mentioned on the list, but the sync always runs at about 3 MBytes/sec.
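To be concrete, the knobs I have been turning all live in the syncer
and net sections, along these lines (the values are just examples of
things I tried, not what is in the config below):

    syncer {
        rate 40M;
        al-extents 257;
    }

    net {
        sndbuf-size 512k;
        max-buffers 2048;
        max-epoch-size 2048;
        unplug-watermark 128;
    }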
The OS is CentOS 5.1 and I am starting to wonder if there might be
something odd in it. Below is a copy of the simple DRBD config that I
have been experimenting with while trying to get this all working.
Any help would be wonderful.
Thanks,
Morey
global {
    usage-count no;
}

common {
    protocol C;

    syncer {
        rate 40M;
    }

    net {
        sndbuf-size 512k;
    }
}

resource r0 {
    on nas1.nmt.edu {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.180.0.10:7789;
        meta-disk internal;
    }

    on nas2.nmt.edu {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.180.0.20:7789;
        meta-disk internal;
    }
}