Hi Anselm,
There is one important thing in your setup that may be affecting your performance. You have defined the syncer rate as 20 MBytes/second, while the theoretical maximum a gigabit network can deliver is 125 MBytes/second.
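For reference, the arithmetic behind that 125 MBytes/second figure (raw line rate only, ignoring TCP/IP and DRBD protocol overhead, so real throughput will be somewhat lower):

```shell
# 1 gigabit/s = 10^9 bits/s; 8 bits per byte; 10^6 bytes per MByte.
echo $(( 1000000000 / 8 / 1000000 ))   # prints 125
```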
You should change this:

  syncer {
    rate 20M;
    al-extents 1024;
    group 1;
  }

to this:

  syncer {
    rate 125M;
    al-extents 1024;
    group 1;
  }
See whether that makes a difference to your speed.
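To pick up the new rate without a full restart (a sketch, assuming standard DRBD userland tools and the resource name "lustre1" from your config below), something like this should work:

```shell
# Re-read /etc/drbd.conf and apply the changed syncer settings
# for the resource named "lustre1".
drbdadm adjust lustre1

# Watch the resync progress and the effective sync speed.
cat /proc/drbd
```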
I would like to know more about what you are doing with Lustre. Are you documenting it? Can you tell me more in private? leonardo.mello at planejamento dot gov dot br
I have some experience with GFS and OCFS2, and I want to give Lustre a try.
Best Regards
Leonardo Rodrigues de Mello
-----Original Message-----
From: drbd-user-bounces at lists.linbit.com on behalf of Anselm Strauss
Sent: Wed 23/8/2006 06:20
To: drbd-user at lists.linbit.com
Cc:
Subject: [DRBD-user] write performance
resource lustre1 {
  protocol C;
  disk {
    on-io-error detach;
  }
  net {
    max-buffers 8192;
    sndbuf-size 512k;
    max-epoch-size 8192;
  }
  syncer {
    rate 20M;
    al-extents 1024;
    group 1;
  }
  on tnode1 {
    device /dev/drbd0;
    disk /dev/sda9;
    address 10.0.0.5:7788;
    meta-disk internal;
  }
  on tnode2 {
    device /dev/drbd0;
    disk /dev/sda9;
    address 10.0.0.6:7788;
    meta-disk internal;
  }
}
The documentation says sndbuf-size should not be set over 1M. Is it a problem if
the sndbuf-size values aggregated over multiple sections exceed 1M?
Cheers,
Anselm Strauss
_______________________________________________
drbd-user mailing list
drbd-user at lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user