[DRBD-user] Drbd and network speed

Diego Remolina diego.remolina at physics.gatech.edu
Wed Sep 16 13:36:08 CEST 2009



With protocol C (which you should use if you care about your data), your 
write speed is limited by the speed of your drbd replication link. 
Network teaming, bonding, etc. will not help, because replication is 
pretty much a single-IP-to-single-IP stream, so there is no benefit in 
aggregating NICs. If you really want to use the full potential of your 
backend storage, you need to purchase 10-gigabit network cards for drbd.

A very long time ago I ran some benchmarks:

https://services.ibb.gatech.edu/wiki/index.php/Benchmarks:Storage#Benchmark_Results_3

If you look at the first result in the table, even though the backend is 
faster (I got over 200 MB/s writes on a non-drbd partition), the drbd 
partition maxes out for writes at around gigabit speed: 123,738 KB/s 
(~124 MB/s).
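That number lines up almost exactly with the theoretical ceiling of a gigabit link. A quick back-of-the-envelope check (the 94% goodput factor is a rough rule-of-thumb assumption for TCP over a standard 1500-byte MTU, not a measured value):

```python
# Why a DRBD write cap near ~124 MB/s points at the gigabit
# replication link rather than the disks.

GIG_E_BITS_PER_S = 1_000_000_000  # gigabit Ethernet line rate

# Raw payload ceiling, ignoring all protocol overhead
raw_mb_per_s = GIG_E_BITS_PER_S / 8 / 1_000_000  # 125.0 MB/s

# Ethernet/IP/TCP headers eat a few percent; ~94% goodput is a
# common rule of thumb for a 1500-byte MTU (assumption).
goodput_mb_per_s = raw_mb_per_s * 0.94

print(f"raw ceiling: {raw_mb_per_s:.1f} MB/s")   # 125.0 MB/s
print(f"TCP goodput: {goodput_mb_per_s:.1f} MB/s")
```

Either way, anything the backend can do above ~125 MB/s is simply unreachable over a single gigabit link.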

I currently have a new set of servers with ARECA SAS controllers and 
24 x 1 TB drives. The backend can write at up to ~500 MB/s, but with 
drbd in the path the bottleneck is still ~120 MB/s.

I guess the only other configuration that may help speed would be to 
have a separate NIC per drbd device, provided your backend is capable of 
reading and writing from different locations on disk fast enough to feed 
several gigabit replication links. With SAS drives it should be.

e.g.:

/dev/drbd0 uses eth1 for replication
/dev/drbd1 uses eth2 for replication
/dev/drbd2 uses eth3 for replication

... you get the idea...
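As a sketch, the per-device split above could be expressed in drbd.conf by giving each resource an address on a different subnet, with each subnet routed over its own NIC. The resource names, hostnames, devices, and addresses below are all hypothetical, and the syntax follows the classic drbd 8.x resource format:

```
resource r0 {
  protocol C;
  on node-a {
    device    /dev/drbd0;
    disk      /dev/sda7;          # hypothetical backing device
    address   10.0.1.1:7788;      # subnet on eth1
    meta-disk internal;
  }
  on node-b {
    device    /dev/drbd0;
    disk      /dev/sda7;
    address   10.0.1.2:7788;
    meta-disk internal;
  }
}

resource r1 {
  protocol C;
  on node-a {
    device    /dev/drbd1;
    disk      /dev/sdb7;
    address   10.0.2.1:7789;      # subnet on eth2
    meta-disk internal;
  }
  on node-b {
    device    /dev/drbd1;
    disk      /dev/sdb7;
    address   10.0.2.2:7789;
    meta-disk internal;
  }
}
```

DRBD itself has no "use this NIC" knob; the binding happens through normal IP routing, so each replication subnet has to live on its own interface.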

HTH,

Diego

Chris wrote:
> 
> Hello everyone,
> 
> I have a RAID 6 setup with LVM, and my writes are fast, around 500 MB/sec. 
> I want to put drbd on top of the LVM, and then another LVM on top of drbd. 
> I'm wondering:
> 
> 1. Will my write speed decrease because drbd needs to write the data to 
> the second server?
> 
> 2. My RAID 6 is 1 TB. Is it better to break it up into four 250 GB or 
> two 500 GB pieces, with a drbd device per piece, for better speed?
> 
> All the data that is going through drbd is carried over a 1-gig crossover 
> cable between the two servers. Just thinking that the gig link may slow 
> down drbd, given the drive size and speed (SAS 15k).
> 
> This setup is for two Xen servers that will be using the allocated LVM 
> chunks.
> 
> Sent from my iPhone
> _______________________________________________
> drbd-user mailing list
> drbd-user at lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user

-- 
Diego Julian Remolina
System Administrator - Systems Support Specialist IV
School of Physics
Georgia Institute of Technology
Phone: (404) 385-3499


