Hello Group!

I have been trying to set up a DRBD configuration with two storage servers running rPath Linux (Openfiler), kernel version 2.6.26. Each system runs on a quad-core Xeon processor with 4 GB of memory and a 16-channel ICP RAID controller with BBWC, configured as RAID 60 underneath.

Writing to a sample partition /dev/sdbX on either machine reaches about 120 MByte/s, which seems poor for a RAID of 16 1 TB SATA drives, but that is not the point here. Writing to the drbd1 device gives ~3 MByte/s with both nodes online, and ~8 MByte/s when writing to the primary while the secondary is offline.

I watched the webinar on DRBD performance tuning and tried several options such as max-buffers, al-extents and no-disk-flushes, which all in all did not help: the gains from these tuning parameters were no more than 10-15 percent, which might just be measurement inaccuracy anyway.

Both machines are connected with two crossover cables and a balance-rr bonding device, which gives a throughput of ~210 MByte/s over TCP. But given that even writing to a local disk through the DRBD driver is very slow, I think the fault must lie somewhere in the I/O configuration.

I tested performance using:

    dd if=/dev/zero of=/dev/drbd1 bs=512M count=1

giving me ~8 MByte/s, and

    dd if=/dev/zero of=/dev/sdb3 bs=512M count=1

giving me ~110-120 MByte/s (where drbd1 is pointing to /dev/sdb1 on each node). Read performance is not as bad as write performance, 230 MByte/s vs. 125 MByte/s, but it is bad enough.

If anyone has a hint what to try next it would be greatly appreciated.

Thank you in advance.

With kind regards,
Felix
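For anyone following along, this is roughly where the options I tried live in drbd.conf under DRBD 8.x; the resource name and the concrete values are illustrative assumptions, not recommendations:

```
resource r0 {
  net {
    max-buffers     8000;   # assumed value; default is much lower
    max-epoch-size  8000;   # commonly raised together with max-buffers
  }
  disk {
    no-disk-flushes;        # only safe with battery-backed write cache
  }
  syncer {
    al-extents     3389;    # assumed value; larger = fewer metadata updates
  }
}
```

Since the controllers here have BBWC, no-disk-flushes should be safe; but as noted above, none of these moved the needle more than 10-15 percent for me.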
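One thing worth checking before tuning further: a single 512 MB dd write from /dev/zero mostly measures the page cache unless the cache is bypassed or flushed, so the numbers on both sides may not be comparable. A minimal benchmark sketch along those lines, assuming GNU dd; the DEV variable and the /tmp fallback path are my own placeholders, substitute /dev/drbd1 or /dev/sdb3 on the real boxes:

```shell
# Assumption: DEV points at the device under test; the default is a
# scratch file so the sketch is runnable anywhere.
DEV=${DEV:-/tmp/drbd-bench.img}

# Direct I/O bypasses the page cache, so dd's reported rate reflects
# the device (note: some filesystems reject O_DIRECT).
dd if=/dev/zero of="$DEV" bs=1M count=64 oflag=direct 2>&1 | tail -n1

# Alternative: let the cache absorb writes but fsync before dd reports
# its rate, so the flush time is included in the measurement.
dd if=/dev/zero of="$DEV" bs=1M count=64 conv=fsync 2>&1 | tail -n1
```

Running both variants on /dev/drbd1 and on the raw partition with identical flags would at least rule out the cache skewing the comparison.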