Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Hi,
I've noticed that the performance of my disk drops significantly
during the initial synchronization stage when I bring a second DRBD
node online. Once the "Sync" period ends, performance is fine. I am
running over a 100 Mbps link, and I have limited the sync rate to 4M.
Here is my config:
common {
  protocol A;
}

resource r0 {
  device    /dev/drbd1;
  disk      /dev/VolGroup00/drbd0;
  meta-disk internal;

  on rcs6 {
    address 192.168.246.6:7789;
  }
  on rcs7 {
    address 192.168.246.7:7789;
  }

  net {
  }

  syncer {
    csums-alg md5;
    rate 4M;
  }
}
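In case it's useful, here is roughly what I've been doing to watch the
resync and poke at the rate on the fly. This assumes the 8.3-style
userland tools; the 40M value below is just an example for testing,
not something I normally run:

  # watch connection state and resync progress
  watch -n1 cat /proc/drbd

  # temporarily change the resync rate without editing drbd.conf
  # (a later "drbdadm adjust r0" reverts to the configured 4M)
  drbdsetup /dev/drbd1 syncer -r 40M

  # re-apply whatever is in the config file
  drbdadm adjust r0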
I'm not sure why it would matter, but I am using the device as the
disk for a Xen virtual machine. During the sync, small disk accesses
in the VM can take several seconds, and top reports most of the CPU
time as iowait. Performance is erratic: during parts of the sync it
is fine, and at other times it is very slow. (A rough picture of how
I've been measuring this is below.)
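For completeness, this is how I've been watching disk latency from
dom0 while the sync runs (iostat is from the sysstat package; I'm
mostly eyeballing the await and %util columns for the backing LV and
the drbd device):

  # extended per-device I/O stats, refreshed every second
  iostat -x 1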
Any idea whether this is a problem with Xen interacting with DRBD in
a funny way, or whether I just don't have DRBD set up properly?
thanks!
Tim