[DRBD-user] Slow bandwidth
Baji Zsolt
bajizs at cnt.rs
Thu Jan 7 10:10:14 CET 2021
Hello,
I don't know what the problem is and I can't solve it. I have local storage
with hardware RAID 5 and an NVMe SSD cache (on LVM). When I test the local
storage speed I always get 90-110 MB/s, but over DRBD it is only 35-45 MB/s
between the two nodes. The nodes are not in production yet, they are only
used for testing. The link between the servers is 10G SFP+ optics over a
Cisco Nexus 3K, and I use 9k jumbo frames.
For the setup I used the LINSTOR controller and client on both servers (the
system will use more servers after a successful setup). I tried changing
c-max-rate, sndbuf-size, c-fill-target, max-buffers, max-epoch-size...
Nothing changed, I always get the same results. I don't know why.
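For reference, this is roughly where those options sit in a DRBD resource
configuration; the resource name and values below are only examples, not my
actual settings (with LINSTOR they would normally be set as controller or
resource-definition drbd-options rather than edited by hand):

```
resource r0 {                    # example resource name
    net {
        sndbuf-size     2M;      # 0 means auto-tune
        max-buffers     8000;
        max-epoch-size  8000;
    }
    disk {
        c-max-rate      720M;    # throttles resync, not normal replication
        c-fill-target   1M;
    }
    # ... addresses, volumes, etc.
}
```

Note that c-max-rate and c-fill-target control the resync controller, so
they should not limit ordinary application writes.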
tcp_lat:
latency = 24.6 us
msg_rate = 40.6 K/sec
loc_send_bytes = 40.6 KB
loc_recv_bytes = 40.6 KB
loc_send_msgs = 40,603
loc_recv_msgs = 40,602
rem_send_bytes = 40.6 KB
rem_recv_bytes = 40.6 KB
rem_send_msgs = 40,602
rem_recv_msgs = 40,602
tcp_bw:
bw = 1.24 GB/sec
msg_rate = 18.9 K/sec
send_bytes = 2.48 GB
send_msgs = 37,776
recv_bytes = 2.47 GB
recv_msgs = 37,730
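A quick sanity check on the bandwidth figures above (the numbers are copied
from the output; the point is that the benchmark nearly saturates the 10G
link, so the raw network does not look like the bottleneck):

```python
# Numbers copied from the tcp_bw output above.
bw_bytes_per_s = 1.24e9   # reported bandwidth
send_bytes = 2.48e9       # total bytes sent
send_msgs = 37_776        # total messages sent

# Implied line rate: about 9.9 Gbit/s on a 10G link.
print(f"{bw_bytes_per_s * 8 / 1e9:.2f} Gbit/s")

# Average message size: about 65,650 bytes, i.e. roughly 64 KiB messages.
print(f"{send_bytes / send_msgs:.0f} bytes/msg")
```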
Zsolt