Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Hi.
On my DRBD system the write performance is far below what I expected.
I have two Opteron servers connected via Gbit Ethernet, and each one has
two 500 GB SATA disks. DRBD runs on a dedicated cross-connect Gbit link.
I'm pretty sure that neither the CPU, the network, the RAM nor the disks
are the bottleneck.
For a start I created two DRBD devices that are primary on one server,
each device on a separate disk. I exported the devices with Lustre to
another host. The second DRBD host just receives the mirrored data
over the dedicated link and doesn't export anything.
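(Concretely, on tnode1 the two resources are simply brought up and made
primary, roughly like this; the first resource is the one from the config
below, the second one I'll just call lustre2 here:

  drbdadm up lustre1
  drbdadm up lustre2
  drbdadm primary lustre1
  drbdadm primary lustre2
)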
I get about 20 MB/s for a single DRBD device and about 28 MB/s when I
stripe over both. I also tried multiple DRBD devices per disk. I can
reach 35 MB/s with 4 DRBD devices, 2 devices on each disk, but that's
it. I tried to modify buffers and the protocol, but that seems to be
the limit.
When I disable mirroring in DRBD ("drbdadm down all" on the second
server) I can reach rates of 40 MB/s, 85 MB/s and 88 MB/s in the same
three cases as above.
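(These are sequential write rates; the kind of test I mean is simply a
large dd followed by a sync, along these lines -- the mount point and
size are just examples:

  dd if=/dev/zero of=/mnt/lustre/testfile bs=1M count=2000
  sync
)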
Is this the performance impact I have to expect from DRBD? Where
could the bottleneck be?
Has anybody reached higher write performance with DRBD?
All resource sections in my drbd.conf look like this:
resource lustre1 {
  protocol C;

  disk {
    on-io-error detach;
  }

  net {
    max-buffers     8192;
    sndbuf-size     512k;
    max-epoch-size  8192;
  }

  syncer {
    rate 20M;
    al-extents 1024;
    group 1;
  }

  on tnode1 {
    device     /dev/drbd0;
    disk       /dev/sda9;
    address    10.0.0.5:7788;
    meta-disk  internal;
  }

  on tnode2 {
    device     /dev/drbd0;
    disk       /dev/sda9;
    address    10.0.0.6:7788;
    meta-disk  internal;
  }
}
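(The other resource sections differ only in the device, the backing disk
and the port; lustre2 looks roughly like this, not copied exactly:

resource lustre2 {
  protocol C;
  # disk, net and syncer sections as in lustre1 above

  on tnode1 {
    device     /dev/drbd1;
    disk       /dev/sdb9;        # second disk
    address    10.0.0.5:7789;    # next port
    meta-disk  internal;
  }

  on tnode2 {
    device     /dev/drbd1;
    disk       /dev/sdb9;
    address    10.0.0.6:7789;
    meta-disk  internal;
  }
}

So in the four-device test above the send buffers add up to
4 x 512k = 2M in total.)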
The documentation says sndbuf-size should not be set above 1M. Is it a
problem if the sndbuf-size values aggregated over multiple resource
sections add up to more than 1M?
Cheers,
Anselm Strauss