[DRBD-user] drbd performance with GbE in connected mode

Ralf Gross Ralf-Lists at ralfgross.de
Sun Jan 14 22:21:26 CET 2007

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hi,

I have two servers connected by a dedicated GbE link. Benchmarks with
netpipe/netio show good GbE performance, and even a drbd sync reaches
>100MB/s (total 19179 sec; paused 8925 sec; 106968 K/sec).
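
Roughly how the network benchmarks were invoked (hostnames are
placeholders and the exact flags are from memory, so they may differ
slightly from the real runs):

  # NetPIPE: receiver on one node, transmitter on the other
  NPtcp                   # on server2 (receiver, waits for connection)
  NPtcp -h server2        # on server1 (transmitter)

  # netio: server on one node, TCP client on the other
  netio -s -t             # on server2 (server mode, TCP)
  netio -t server2        # on server1 (client, TCP)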

NPtcp:
[...]
120: 6291459 bytes      3 times -->    892.94 Mbps in   53754.85 usec
121: 8388605 bytes      3 times -->    868.80 Mbps in   73664.51 usec
122: 8388608 bytes      3 times -->    881.13 Mbps in   72634.30 usec
123: 8388611 bytes      3 times -->    877.56 Mbps in   72929.82 usec

NETIO - Network Throughput Benchmark, Version 1.26
(C) 1997-2005 Kai Uwe Rommel

TCP connection established.
Packet size  1k bytes:  114306 KByte/s Tx,  114558 KByte/s Rx.
Packet size  2k bytes:  114575 KByte/s Tx,  114573 KByte/s Rx.
Packet size  4k bytes:  114608 KByte/s Tx,  114573 KByte/s Rx.
Packet size  8k bytes:  114612 KByte/s Tx,  114554 KByte/s Rx.
Packet size 16k bytes:  114608 KByte/s Tx,  114562 KByte/s Rx.
Packet size 32k bytes:  114585 KByte/s Tx,  114567 KByte/s Rx.
Done.


In disconnected mode I can reach >120MB/s write performance on both
sides.
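
The disk numbers below (and in the connected case further down) are
tiobench output; a run along these lines should reproduce the format
(the target directory is a guess, the sizes match the tables):

  tiobench.pl --dir /mnt/test --size 8000 --block 4096 --threads 1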

Sequential Reads
File  Blk   Num                   Avg    Maximum    Lat%    Lat%  CPU
Size  Size  Thr  Rate  (CPU%)  Latency   Latency    >2s     >10s   Eff
----- ----- --- ------ ------ --------- --------- -------  ------- ---
8000  4096   1  163.66 24.57%     0.118  1734.92  0.00000  0.00000 666

Random Reads
File  Blk   Num                   Avg    Maximum    Lat%     Lat% CPU
Size  Size  Thr  Rate  (CPU%)  Latency   Latency    >2s      >10s  Eff
----- ----- --- ------ ------ --------- --------- ------- -------- ---
8000  4096   1    1.65 0.648%    11.861   269.61  0.00000  0.00000 254

Sequential Writes
File  Blk   Num                   Avg    Maximum    Lat%     Lat%  CPU
Size  Size  Thr  Rate  (CPU%)  Latency   Latency    >2s      >10s   Eff
----- ----- --- ------ ------ --------- --------- -------- ------- ---
8000  4096   1  126.34 65.20%     0.143  2579.81  0.00000  0.00000 194

Random Writes
File  Blk   Num                   Avg    Maximum    Lat%     Lat%  CPU
Size  Size  Thr  Rate  (CPU%)  Latency   Latency    >2s      >10s   Eff
----- ----- --- ------ ------ --------- --------- -------  ------- ---
8000  4096   1    3.26 2.255%     5.352   635.71  0.00000  0.00000 145



Now, when both servers are connected, write performance drops to between
70 and 80MB/s.


Sequential Reads
File  Blk   Num                   Avg    Maximum    Lat%     Lat%  CPU
Size  Size  Thr  Rate  (CPU%)  Latency   Latency    >2s      >10s   Eff
----- ----- --- ------ ------ --------- --------- -------- -------- ---
8000  4096    1  161.03 25.72%     0.120 1910.72  0.00000  0.00000  626

Random Reads
File  Blk   Num                   Avg    Maximum    Lat%     Lat%  CPU
Size  Size  Thr  Rate  (CPU%)  Latency   Latency    >2s      >10s   Eff
----- ----- --- ------ ------ --------- --------- --------  ------- ---
8000  4096    1   1.72 0.571%    11.375   738.84   0.00000  0.00000 300

Sequential Writes
File  Blk   Num                   Avg    Maximum    Lat%     Lat%  CPU
Size  Size  Thr  Rate  (CPU%)  Latency   Latency    >2s      >10s   Eff
----- ----- --- ------ ------ --------- --------- -------  -------- ---
8000  4096    1  71.62 42.12%     0.257  17274.23  0.00117  0.00000 170

Random Writes
File  Blk   Num                   Avg    Maximum    Lat%     Lat%  CPU
Size  Size  Thr  Rate  (CPU%)  Latency   Latency    >2s      >10s   Eff
----- ----- --- ------ ------ --------- --------- -------- -------- ---
8000  4096    1   2.58 2.364%     5.107   299.99  0.00000  0.00000  109



I understand that write performance is limited by the GbE link (raw GbE
is ~125MB/s, and netio measures ~114MB/s on the wire). But shouldn't the
write performance then be about 90-100MB/s? That is roughly what I get
with the drbd benchmark tool.

./dm -x -a 0 -s 4g -b 20m -m -y -p -o /mnt/test
RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRr
97.63 MB/sec (4294967296 B / 00:41.952388

And a simple dd test.

dd if=/dev/zero of=/mnt/foo bs=1024 count=5000000
5000000+0 records in
5000000+0 records out
5120000000 bytes (5.1 GB) copied, 61.3171 seconds, 83.5 MB/s
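
For comparison, a direct-I/O variant of the same test should take the
page cache out of the picture (assuming this dd and the ext3/drbd stack
accept oflag=direct; the larger block size avoids per-call overhead):

  dd if=/dev/zero of=/mnt/foo bs=1M count=5000 oflag=direct

If O_DIRECT is refused here, a plain run with a bigger bs followed by a
sync is the fallback.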


I have already tried different settings in drbd.conf:

 sndbuf-size      1M;
 max-buffers      20480;
 max-epoch-size   16384;
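
In context, the relevant part of the configuration looks roughly like
this (resource name, hosts, devices, addresses and the syncer rate are
placeholders, not the literal values in use):

  resource r0 {
    protocol C;

    net {
      sndbuf-size      1M;       # values tried, as listed above
      max-buffers      20480;
      max-epoch-size   16384;
    }

    syncer {
      rate 100M;                 # placeholder
    }

    on server1 {
      device    /dev/drbd0;
      disk      /dev/sdb1;       # placeholder backing device
      address   192.168.1.1:7788;
      meta-disk internal;
    }

    on server2 {
      device    /dev/drbd0;
      disk      /dev/sdb1;       # placeholder backing device
      address   192.168.1.2:7788;
      meta-disk internal;
    }
  }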


The two systems:

Debian etch 
drbd 0.7.21 (debian package)

server 1: Dual Xeon 2.8 GHz
          HP cciss Raid 1 for OS
          easyRAID ext. SATA Raid array with 4 drbd devices on Raid6

server 2: Core 2 Duo 6600 2.4 GHz
          Areca ARC-1230 PCI-e int. SATA Raid controller

The ext3 filesystem used for the tests sits on a 300GB LVM volume group.

Are these the numbers I have to expect, or is there anything more I could
try to improve the write throughput?

Ralf


