[DRBD-user] performance issues with DRBD :(

Beck Ingmar ingmar.beck at eccos-pro.com
Thu Dec 10 13:23:52 CET 2009




Hi Florian,

why does bonding the replication connection not improve performance?

We currently have a direct 1 Gb connection for DRBD replication, and I
wanted to enable active-active bonding using an additional LAN port. I
would expect nearly double the replication performance. Andreas Kurz
mentioned this at a workshop.

Or does it depend on whether the nodes are connected directly or via
one or more switches?

regards

"Florian Haas" <florian.haas-63ez5xqkn6DQT0dZR+AlfA at public.gmane.org>
schrieb im Newsbeitrag news:<4B20ACF4.5090707 at linbit.com>...
On 2009-12-10 00:01, Jakov Sosic wrote:
> Network is not the problem because I have 4 intel giga NIC's in
bonding
> mode 0 (round-robin), and I can see that nasnodes are utilizing all
the
> interfaces, but load on them is around 1-5%

You have round robin over 4 links that you replicate DRBD over? TCP
reordering is definitely going to kill you. This is expected to have
lower throughput than a single non-bonded link.
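A toy sketch of why balance-rr hurts a single TCP stream: consecutive
packets are striped across links whose delays differ slightly, so they
arrive out of sequence and TCP treats the gaps as possible loss. All
numbers below are made up for illustration, not measurements:

```python
# Hypothetical per-link one-way delays in ms; real links would jitter too.
LINK_LATENCY_MS = [0.50, 0.55, 0.60, 0.65]

def arrival_order(num_packets, latencies, gap_ms=0.02):
    """Stripe packets round-robin over the links (bonding mode 0) and
    return their sequence numbers in order of arrival."""
    arrivals = []
    for seq in range(num_packets):
        link = seq % len(latencies)       # round-robin link selection
        send_time = seq * gap_ms          # packets leave back-to-back
        arrivals.append((send_time + latencies[link], seq))
    arrivals.sort()                       # receiver sees them by arrival time
    return [seq for _, seq in arrivals]

order = arrival_order(12, LINK_LATENCY_MS)
# A packet is "reordered" if some higher-numbered packet beat it there.
reordered = sum(1 for i, s in enumerate(order)
                if any(p > s for p in order[:i]))
print(order)                   # → [0, 1, 4, 2, 5, 8, 3, 6, 9, 7, 10, 11]
print("reordered:", reordered) # → reordered: 4
```

Every one of those out-of-sequence segments can trigger duplicate ACKs
and spurious retransmits, which is why the aggregate can end up slower
than one clean link.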

Remove that bonding configuration. Then re-run your tests.

If you want higher throughput than about 170MB/s over the wire in any
single TCP connection, get 10G Ethernet. Or Infiniband. Or Dolphin
Express. Bonding isn't going to help you.

Cheers,
Florian



