[DRBD-user] Better DRBD sync performance with bonding or tcp multipath?

Christian Balzer chibi at gol.com
Wed Nov 13 09:13:39 CET 2013



On Wed, 13 Nov 2013 07:29:13 +0100 ml ml wrote:

> Hello List,
> 
> I have a simple 2-node setup. The nodes are directly connected. We are
> now planning to use SSD disks.
> 
> I now fear that the gigabit link between the two hosts will become a
> bottleneck.
>
It would be a bottleneck even with "normal" disks, and certainly so if you
have more than one (RAID).
 
> 
> Do you think bonding or TCP multipath would help here? What would you
> recommend?
>
A Google search turns up the relevant section of the DRBD user guide first,
and funnily enough a thread with some morsels of wisdom by yours truly as
well:

http://lists.linbit.com/pipermail/drbd-user/2010-October/014858.html
 
> Some mix between performance and redundancy is the goal.
> 
> Has someone got real life production experience here?
> 
Search and you will find many examples in the list archives and elsewhere. 

Up to about 200 MB/s replication speed and with a small budget, bonded
(directly connected) GbE links are fine.
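For what it's worth, a minimal sketch of such a bond (Debian-style
/etc/network/interfaces syntax; the interface names and addresses are just
placeholders, adjust to taste). Note that balance-rr is the only bonding
mode that stripes a single TCP stream, such as DRBD's replication
connection, across both slaves:

```
# Hypothetical example: eth1 and eth2 are the back-to-back crossover links.
auto bond0
iface bond0 inet static
    address 10.0.0.1          # use 10.0.0.2 on the peer node
    netmask 255.255.255.0
    bond-slaves eth1 eth2
    bond-mode balance-rr      # mode 0: round-robin, stripes one TCP stream
    bond-miimon 100           # link monitoring interval in ms
```

Be aware that balance-rr can reorder packets, so you may need to raise
net.ipv4.tcp_reordering on both nodes to get the full aggregate throughput.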

Once you need speeds above 250 MB/s and/or fast I/O (transactions), I would
recommend directly connected InfiniBand.

Regards,

Christian
-- 
Christian Balzer        Network/Systems Engineer                
chibi at gol.com   	Global OnLine Japan/Fusion Communications
http://www.gol.com/


