<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
Hi Gianni,<br>
<br>
We get around 20 Gbit/s over iperf with kernel 5, using the untuned
iperf command.<br>
Sorry that I didn't mention that.<br>
The network seems to be completely fine; we don't see any packet drops
or anything similar either.<br>
<br>
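For reference, the untuned run was really just the iperf defaults over
the replication link; a minimal sketch of what that looks like
(assuming iperf3 and the replication addresses from the r0 config, no
tuning flags):<br>
<br>
&nbsp;&nbsp;# on node2: start a plain server with default options<br>
&nbsp;&nbsp;iperf3 -s<br>
&nbsp;&nbsp;# on node1: run a default client against node2's replication address<br>
&nbsp;&nbsp;iperf3 -c 192.168.99.2<br>
<br>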
Thanks,<br>
Alex<br>
<p><br>
</p>
<div class="moz-cite-prefix">On 26.07.19 at 11:50, Gianni Milo wrote:<br>
</div>
<blockquote type="cite"
cite="mid:CACzVk9VJ4LHe1+XWjJMu4dB8Ctj+-DvoO47byq-9AvNa5fC7bg@mail.gmail.com">
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<div dir="ltr">
<div>Hi Alexander,</div>
<div><br>
</div>
<div>Do you get the same behaviour when using other tools, like iperf
for example? If so, then this might not be related to DRBD itself,
but to ethernet driver issues, for example.</div>
<div><br>
</div>
<div>Gianni</div>
<div><br>
</div>
<div><br>
</div>
</div>
<div dir="ltr">
<div dir="ltr"><br>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Thu, 25 Jul 2019 at
13:26, Alexander Karamanlidis <<a
href="mailto:alexander.karamanlidis@lindenbaum.eu"
target="_blank" moz-do-not-send="true">alexander.karamanlidis@lindenbaum.eu</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">Hi everyone,<br>
<br>
<br>
So we upgraded to PVE 6 (Proxmox VE 6) about a week ago to test it.<br>
<br>
Since then, our DRBD resource has become incredibly slow (around
105 Mbit/s).<br>
<br>
If we boot kernel 4.15, our speeds go back to normal (max.
15 Gbit/s).<br>
<br>
<br>
Here's some data:<br>
<br>
<br>
root@node1:~# cat /sys/kernel/debug/drbd/resources/r0/connections/node2/0/proc_drbd ; echo -e "\n\n" ; uname -a ; echo -e "\n\n" ; dpkg -l | grep 'pve-kernel\|drbd' ; echo -e "\n\n" ; drbdadm dump<br>
0: cs:SyncTarget ro:Secondary/Primary ds:Inconsistent/UpToDate C r-----<br>
ns:0 nr:12242948 dw:12242948 dr:110085144 al:0 bm:0 lo:0 pe:[0;93] ua:0 ap:[0;0] ep:1 wo:2 oos:7352525548<br>
[>....................] sync'ed: 1.5% (7180200/7287584)M<br>
finish: 4:03:52 speed: 502,476 (530,020 -- 484,408) want: 2,000,000 K/sec<br>
1% sector pos: 297469952/15002273264<br>
resync: used:2/61 hits:214057 misses:1684 starving:0 locked:0 changed:842<br>
act_log: used:0/1237 hits:0 misses:0 starving:0 locked:0 changed:0<br>
blocked on activity log: 0/0/0<br>
<br>
<br>
<br>
Linux node1 4.15.18-18-pve #1 SMP PVE 4.15.18-44 (Wed, 03 Jul 2019 11:19:13 +0200) x86_64 GNU/Linux<br>
<br>
<br>
<br>
ii drbd-dkms 9.0.19-1 all RAID 1 over TCP/IP for Linux module source<br>
ii drbd-utils 9.10.0-1 amd64 RAID 1 over TCP/IP for Linux (user utilities)<br>
ii drbdtop 0.2.1-1 amd64 like top, but for drbd<br>
ii pve-firmware 3.0-2 all Binary firmware code for the pve-kernel<br>
ii pve-kernel-4.15 5.4-6 all Latest Proxmox VE Kernel Image<br>
ii pve-kernel-4.15.18-12-pve 4.15.18-36 amd64 The Proxmox PVE Kernel Image<br>
ii pve-kernel-4.15.18-16-pve 4.15.18-41 amd64 The Proxmox PVE Kernel Image<br>
ii pve-kernel-4.15.18-18-pve 4.15.18-44 amd64 The Proxmox PVE Kernel Image<br>
ii pve-kernel-5.0 6.0-5 all Latest Proxmox VE Kernel Image<br>
ii pve-kernel-5.0.15-1-pve 5.0.15-1 amd64 The Proxmox PVE Kernel Image<br>
ii pve-kernel-helper 6.0-5 all Function for various kernel maintenance tasks.<br>
<br>
<br>
<br>
# /etc/drbd.conf<br>
# resource r0 on node1: not ignored, not stacked<br>
# defined at /etc/drbd.d/r0.res:1<br>
resource r0 {<br>
on node1 {<br>
node-id 1;<br>
volume 0 {<br>
device /dev/drbd0 minor 0;<br>
disk /dev/disk/by-uuid/8a879a82-3880-4998-b5cb-70a95ce4bf79;<br>
meta-disk internal;<br>
}<br>
address ipv4 192.168.99.1:7788;<br>
}<br>
on node2 {<br>
node-id 0;<br>
volume 0 {<br>
device /dev/drbd0 minor 0;<br>
disk /dev/disk/by-uuid/8a879a82-3880-4998-b5cb-70a95ce4bf79;<br>
meta-disk internal;<br>
}<br>
address ipv4 192.168.99.2:7788;<br>
}<br>
net {<br>
after-sb-0pri discard-zero-changes;<br>
after-sb-1pri discard-secondary;<br>
after-sb-2pri disconnect;<br>
csums-alg sha1;<br>
max-buffers 36864;<br>
max-epoch-size 20000;<br>
rcvbuf-size 2097152;<br>
sndbuf-size 1048576;<br>
verify-alg sha1;<br>
}<br>
disk {<br>
c-fill-target 10240;<br>
c-max-rate 2237280;<br>
c-min-rate 204800;<br>
c-plan-ahead 0;<br>
resync-rate 2000000;<br>
}<br>
}<br>
<br>
<br>
<br>
If we put I/O on the DRBD resource, we get a maximum of 14.19 Gbit/s
on our bond interface with the 4.15 kernel (we have a 25 Gbit/s
direct-attached network).<br>
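<br>
For anyone checking the resync settings above against these numbers:
assuming the usual KiB/s unit for these rates (which matches the
"want: 2,000,000 K/sec" shown in the proc output), c-max-rate 2237280
works out to roughly 2.1 GiB/s or about 18 Gbit/s, and resync-rate
2000000 to roughly 1.9 GiB/s or about 16 Gbit/s, so the configured
ceilings sit well above anything we actually measure.<br>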
<br>
<br>
<br>
<br>
<br>
<br>
root@node1:~# cat /sys/kernel/debug/drbd/resources/r0/connections/node2/0/proc_drbd ; echo -e "\n\n" ; uname -a ; echo -e "\n\n" ; dpkg -l | grep 'pve-kernel\|drbd' ; echo -e "\n\n" ; drbdadm dump<br>
0: cs:SyncTarget ro:Secondary/Primary ds:Inconsistent/UpToDate C r-----<br>
ns:0 nr:541700 dw:541700 dr:10270724 al:0 bm:0 lo:0 pe:[0;107] ua:0 ap:[0;0] ep:1 wo:2 oos:7334758124<br>
[>....................] sync'ed: 0.2% (7162848/7172768)M<br>
finish: 14:05:20 speed: 144,596 (154,112 -- 166,556) want: 2,000,000 K/sec<br>
2% sector pos: 332978176/15002273264<br>
resync: used:2/61 hits:19875 misses:162 starving:0 locked:0 changed:81<br>
act_log: used:0/1237 hits:0 misses:0 starving:0 locked:0 changed:0<br>
blocked on activity log: 0/0/0<br>
<br>
<br>
<br>
Linux node1 5.0.15-1-pve #1 SMP PVE 5.0.15-1 (Wed, 03 Jul 2019 10:51:57 +0200) x86_64 GNU/Linux<br>
<br>
<br>
<br>
ii drbd-dkms 9.0.19-1 all RAID 1 over TCP/IP for Linux module source<br>
ii drbd-utils 9.10.0-1 amd64 RAID 1 over TCP/IP for Linux (user utilities)<br>
ii drbdtop 0.2.1-1 amd64 like top, but for drbd<br>
ii pve-firmware 3.0-2 all Binary firmware code for the pve-kernel<br>
ii pve-kernel-4.15 5.4-6 all Latest Proxmox VE Kernel Image<br>
ii pve-kernel-4.15.18-12-pve 4.15.18-36 amd64 The Proxmox PVE Kernel Image<br>
ii pve-kernel-4.15.18-16-pve 4.15.18-41 amd64 The Proxmox PVE Kernel Image<br>
ii pve-kernel-4.15.18-18-pve 4.15.18-44 amd64 The Proxmox PVE Kernel Image<br>
ii pve-kernel-5.0 6.0-5 all Latest Proxmox VE Kernel Image<br>
ii pve-kernel-5.0.15-1-pve 5.0.15-1 amd64 The Proxmox PVE Kernel Image<br>
ii pve-kernel-helper 6.0-5 all Function for various kernel maintenance tasks.<br>
<br>
<br>
<br>
# /etc/drbd.conf<br>
# resource r0 on node1: not ignored, not stacked<br>
# defined at /etc/drbd.d/r0.res:1<br>
resource r0 {<br>
on node1 {<br>
node-id 1;<br>
volume 0 {<br>
device /dev/drbd0 minor 0;<br>
disk /dev/disk/by-uuid/8a879a82-3880-4998-b5cb-70a95ce4bf79;<br>
meta-disk internal;<br>
}<br>
address ipv4 192.168.99.1:7788;<br>
}<br>
on node2 {<br>
node-id 0;<br>
volume 0 {<br>
device /dev/drbd0 minor 0;<br>
disk /dev/disk/by-uuid/8a879a82-3880-4998-b5cb-70a95ce4bf79;<br>
meta-disk internal;<br>
}<br>
address ipv4 192.168.99.2:7788;<br>
}<br>
net {<br>
after-sb-0pri discard-zero-changes;<br>
after-sb-1pri discard-secondary;<br>
after-sb-2pri disconnect;<br>
csums-alg sha1;<br>
max-buffers 36864;<br>
max-epoch-size 20000;<br>
rcvbuf-size 2097152;<br>
sndbuf-size 1048576;<br>
verify-alg sha1;<br>
}<br>
disk {<br>
c-fill-target 10240;<br>
c-max-rate 2237280;<br>
c-min-rate 204800;<br>
c-plan-ahead 0;<br>
resync-rate 2000000;<br>
}<br>
}<br>
<br>
If we put I/O on the DRBD resource, we get a maximum of 107 Mbit/s
on our bond interface with the 5.0.15 kernel (again on the 25 Gbit/s
direct-attached network).<br>
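<br>
Put in the same units, that is roughly 1.8 GB/s on 4.15 versus about
13 MB/s on 5.0.15, a slowdown of more than a factor of 100, with the
same DRBD version (9.0.19-1) and the same configuration on both
kernels.<br>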
<br>
<br>
<br>
Maybe someone has a clue what changed in kernel 5 that is slowing us
down that much.<br>
<br>
Maybe someone even knows a solution for it.<br>
<br>
<br>
<br>
Kind Regards,<br>
<br>
Alexander Karamanlidis<br>
<br>
<br>
<br>
<br>
<br>
_______________________________________________<br>
Star us on GITHUB: <a href="https://github.com/LINBIT"
rel="noreferrer" target="_blank" moz-do-not-send="true">https://github.com/LINBIT</a><br>
drbd-user mailing list<br>
<a href="mailto:drbd-user@lists.linbit.com" target="_blank"
moz-do-not-send="true">drbd-user@lists.linbit.com</a><br>
<a href="http://lists.linbit.com/mailman/listinfo/drbd-user"
rel="noreferrer" target="_blank" moz-do-not-send="true">http://lists.linbit.com/mailman/listinfo/drbd-user</a><br>
</blockquote>
</div>
</div>
</blockquote>
<pre class="moz-signature" cols="72">--
Freundliche Grüße
Kind regards,
Alexander Karamanlidis
IT Systemadministrator
Phone: +49 721 480 848 – 609
Lindenbaum GmbH Conferencing - Virtual Dialogues
Head office: Ludwig-Erhard-Allee 34 im Park Office, 76131 Karlsruhe
Registration court: Amtsgericht Mannheim, HRB 706184
Managing director: Maarten Kronenburg
Tax number: 35007/02060, USt. ID: DE 263797265
Lindenbaum at CallCenterWorld and at the Mobile World Congress
</pre>
</body>
</html>