Hi all,

I have a two-node DRBD cluster in a Primary/Primary configuration. The underlying drive backing the DRBD device is a home-made RAM disk that presents itself as a SCSI device (not the Linux /dev/ram# device). I tested read throughput with dd and get around 1.6 GB/s with a 512k block size, which is faster than an SSD. For writes, I first tested over 1G Ethernet; surprisingly, it gave me 110 MB/s, which really exceeded my expectations. Later I tried to boost write throughput by adding a dedicated 10G Ethernet connection. However, no matter how I fine-tune the parameters in drbd.conf, the best write throughput I can get stays around 320 MB/s. I then dumped the TCP traffic to see whether there was too much protocol-handshaking overhead, but in fact I don't see many small packets going back and forth. Has anybody encountered a similar issue? My configuration follows.

[root@NSS-SM-3 etc]# drbdadm dump
# /etc/drbd.conf
common {
    protocol C;
}

# resource mirror on NSS-SM-34: not ignored, not stacked
resource mirror {
    on NSS-SM-33 {
        device       /dev/drbd1 minor 1;
        disk         /dev/sdp;
        address      ipv4 192.168.3.33:7790;
        meta-disk    internal;
    }
    on NSS-SM-34 {
        device       /dev/drbd1 minor 1;
        disk         /dev/sdp;
        address      ipv4 192.168.3.34:7790;
        meta-disk    internal;
    }
    net {
        allow-two-primaries;
        after-sb-0pri    discard-zero-changes;
        after-sb-1pri    consensus;
        after-sb-2pri    call-pri-lost-after-sb;
        max-buffers      8192;
        sndbuf-size      0;
    }
    syncer {
        rate             300M;
        csums-alg        md5;
        al-extents       3800;
    }
    startup {
        become-primary-on both;
    }
}

[root@NSS-SM-3 etc]# drbdsetup /dev/drbd1 show
disk {
    size             0s _is_default; # bytes
    on-io-error      pass_on _is_default;
    fencing          dont-care _is_default;
    max-bio-bvecs    0 _is_default;
}
net {
    timeout          60 _is_default; # 1/10 seconds
    max-epoch-size   2048 _is_default;
    max-buffers      8192;
    unplug-watermark 128 _is_default;
    connect-int      10 _is_default; # seconds
    ping-int         10 _is_default; # seconds
    sndbuf-size      0; # bytes
    rcvbuf-size      131070 _is_default; # bytes
    ko-count         0 _is_default;
    allow-two-primaries;
    after-sb-0pri    discard-zero-changes;
    after-sb-1pri    consensus;
    after-sb-2pri    call-pri-lost-after-sb;
    rr-conflict      disconnect _is_default;
    ping-timeout     5 _is_default; # 1/10 seconds
}
syncer {
    rate             307200k; # bytes/second
    after            -1 _is_default;
    al-extents       3800;
    csums-alg        "md5";
}
protocol C;
_this_host {
    device           minor 1;
    disk             "/dev/sdp";
    meta-disk        internal;
    address          ipv4 192.168.3.34:7790;
}
_remote_host {
    address          ipv4 192.168.3.33:7790;
}

10G Ethernet test:

[root@NSS-SM-3 etc]# isttcp -t -l 65536 -n 20480 192.168.3.34
isttcp-t: buflen=65536, nbuf=20480, align=16384/0, port=5001 tcp -> 192.168.3.34
isttcp-t: socket
isttcp-t: nodelay
isttcp-t: connect
isttcp-t: 1342177280 bytes in 1.19 real seconds = 1105582.49 KB/sec +++
isttcp-t: 20480 I/O calls, msec/call = 0.06, calls/sec = 17274.73
isttcp-t: 0.0user 0.5sys 0:01real 43% 0i+0d 0maxrss 0+16pf 681+2csw

Sync-write throughput:

[root@NSS-SM-3 etc]# dd if=/dev/zero of=/dev/drbd1 bs=512k oflag=direct
dd: writing `/dev/drbd1': No space left on device
4000+0 records in
3999+0 records out
2097049600 bytes (2.1 GB) copied, 7.36803 seconds, 285 MB/s

With the Ethernet interface on the peer machine turned off, i.e. writing WITHOUT going through the network:

[root@NSS-SM-3 etc]# dd if=/dev/zero of=/dev/drbd1 bs=512k oflag=direct
dd: writing `/dev/drbd1': No space left on device
4000+0 records in
3999+0 records out
2097049600 bytes (2.1 GB) copied, 0.937389 seconds, 2.2 GB/s

For small sync writes, the performance is even worse:

[root@NSS-SM-33 tools]# dd if=/dev/zero of=/dev/drbd1 bs=4k oflag=direct count=20000
20000+0 records in
20000+0 records out
81920000 bytes (82 MB) copied, 5.08456 seconds, 16.1 MB/s

Thanks for any help.

Commit yourself to constant self-improvement
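As a rough sanity check, the dd figures above can be converted into per-I/O latency (a back-of-envelope sketch, assuming that with protocol C each oflag=direct write only completes after the peer's acknowledgement, so elapsed time divided by write count approximates one replicated-write round trip):

```python
# Rough per-I/O latency from the dd runs quoted above. Assumption: under
# DRBD protocol C each oflag=direct write returns only after the peer has
# acknowledged it, so (elapsed seconds / number of writes) approximates
# the latency of one replicated write.

def per_io_latency_ms(seconds: float, ops: int) -> float:
    """Average milliseconds spent per synchronous write."""
    return seconds / ops * 1000.0

# Figures taken from the dd output in this mail:
lat_4k = per_io_latency_ms(5.08456, 20000)    # bs=4k, count=20000 run
lat_512k = per_io_latency_ms(7.36803, 4000)   # bs=512k run over the network

print(f"4k sync write:   {lat_4k:.3f} ms per I/O")
print(f"512k sync write: {lat_512k:.3f} ms per I/O")
```

At roughly 0.25 ms per 4 KiB write, this workload is capped near 3900 synchronous IOPS, i.e. about 16 MB/s regardless of link bandwidth, which matches the last dd run. So for small sync writes the limit here looks like per-write round-trip latency rather than the 10G link's bandwidth.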