Hi!

On 10.02.2017 at 12:47, Robert Altnoeder wrote:

<blockquote type="cite">The first question would be what the
configuration for each of those resources is
(global_common.conf, resource configuration file).<br>
</blockquote>

I have not sent them because they are just the plain defaults:

Pair A:

resource r0 {
    on mail1 {
        device    /dev/drbd1;
        disk      /dev/sda1;
        address   172.27.250.8:7789;
        meta-disk internal;
    }
    on mail2 {
        device    /dev/drbd1;
        disk      /dev/sda1;
        address   172.27.250.9:7789;
        meta-disk internal;
    }
}

global {
    usage-count no;
    # minor-count dialog-refresh disable-ip-verification
}

common {
    handlers {
    }

    startup {
    }

    options {
    }

    disk {
    }

    net {
        protocol C;
    }
}
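
(To double-check that nothing non-default sneaks in from an include, the
configuration can also be dumped the way DRBD actually parses it:)

# Effective configuration as parsed by the DRBD tools (run on either node):
drbdadm dump r0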

There is no LVM below the DRBD device, but there is LVM on top of it.
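
(So the Pair A stack was built roughly like this; the VG/LV names here are
made up for illustration, only the layering order matters:)

pvcreate /dev/drbd1                    # the DRBD device becomes the PV
vgcreate vg_mail /dev/drbd1
lvcreate -L 100G -n mailstore vg_mail  # filesystems live on top of this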

Pair B:

root@proxmox-1:~# lvdisplay
  --- Logical volume ---
  LV Path                /dev/drbdpool/.drbdctrl_0
  LV Name                .drbdctrl_0
  VG Name                drbdpool
  LV UUID                Qq7v8Y-Mu4b-S2ZN-fPma-3oLo-12z1-M5t6A5
  LV Write Access        read/write
  LV Creation host, time proxmox-1, 2017-02-08 22:07:21 +0100
  LV Status              available
  # open                 2
  LV Size                4.00 MiB
  Current LE             1
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           251:5

  --- Logical volume ---
  LV Path                /dev/drbdpool/.drbdctrl_1
  LV Name                .drbdctrl_1
  VG Name                drbdpool
  LV UUID                31B7yV-iqfq-bK1G-KmHj-FoT4-2Ui4-z59NAs
  LV Write Access        read/write
  LV Creation host, time proxmox-1, 2017-02-08 22:07:21 +0100
  LV Status              available
  # open                 2
  LV Size                4.00 MiB
  Current LE             1
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           251:6

  --- Logical volume ---
  LV Path                /dev/drbdpool/vm-100-disk-1_00
  LV Name                vm-100-disk-1_00
  VG Name                drbdpool
  LV UUID                mPiP9w-db8R-KHfq-vb06-7Msx-P5B0-BSer22
  LV Write Access        read/write
  LV Creation host, time proxmox-1, 2017-02-09 03:13:46 +0100
  LV Status              available
  # open                 2
  LV Size                32.03 GiB
  Current LE             8200
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           251:7

root@proxmox-1:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sdb1
  VG Name               drbdpool
  PV Size               824.19 GiB / not usable 4.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              210991
  Free PE               202789
  Allocated PE          8202
  PV UUID               7cJFqj-cV2h-z6AT-4DJk-CTy9-TGuG-vnAF4L

  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               pve
  PV Size               37.75 GiB / not usable 2.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              9663
  Free PE               1175
  Allocated PE          8488
  PV UUID               QNtXfY-fk0o-TDb4-mugP-qnov-skaB-9mCWCY

resource .drbdctrl {
    net {
        cram-hmac-alg sha256;
        shared-secret "xxxxx";
        allow-two-primaries no;
    }
    volume 0 {
        device minor 0;
        disk /dev/drbdpool/.drbdctrl_0;
        meta-disk internal;
    }
    volume 1 {
        device minor 1;
        disk /dev/drbdpool/.drbdctrl_1;
        meta-disk internal;
    }
    on proxmox-2 {
        node-id 1;
        address ipv4 10.0.0.2:6999;
    }
    on proxmox-1 {
        node-id 0;
        address ipv4 10.0.0.1:6999;
    }
    connection-mesh {
        hosts proxmox-2 proxmox-1;
        net {
            protocol C;
        }
    }
}

resource vm-100-disk-1 {
    template-file "/var/lib/drbd.d/drbdmanage_global_common.conf";

    net {
        allow-two-primaries yes;
        shared-secret "xxxxxxx";
        cram-hmac-alg sha1;
    }
    on proxmox-2 {
        node-id 0;
        address ipv4 10.0.0.2:7000;
        volume 0 {
            device minor 101;
            disk /dev/null;
            disk {
                size 33554432k;
            }
            meta-disk internal;
        }
    }
    on proxmox-1 {
        node-id 1;
        address ipv4 10.0.0.1:7000;
        volume 0 {
            device minor 101;
            disk /dev/drbdpool/vm-100-disk-1_00;
            disk {
                size 33554432k;
            }
            meta-disk internal;
        }
    }
    connection-mesh {
        hosts proxmox-2 proxmox-1;
    }
}

common {
    handlers {
    }

    startup {
    }

    options {
    }

    disk {
    }

    net {
        protocol C;
    }
}
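
(These files are what drbdmanage generated; its view of the cluster can be
cross-checked against the live kernel state, e.g.:)

drbdmanage list-nodes          # registered nodes and their state
drbdmanage list-volumes        # deployed volumes and sizes
drbdadm status vm-100-disk-1   # live drbd9 replication state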

The hardware is checked:
* sysbench shows 1700 I/O ops per second (27 MB/sec) at random writes on
  both Pair B servers, on an ext2-mounted LVM test device in drbdpool. The
  hardware of Pair B is several years old and only used for testing, but
  still powerful.
<br>
root@proxmox-1:/mnt/test# sysbench --test=fileio
--file-total-size=10G --file-test-mode=rndwr --init-rng=on
--max-time=300 --max-requests=0 run<br>
sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1
Initializing random number generator from timer.

Extra file open flags: 0
128 files, 80Mb each
10Gb total file size
Block size 16Kb
Number of random requests for random IO: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random write test
Threads started!
Time limit exceeded, exiting...
Done.

Operations performed:  0 Read, 513000 Write, 656560 Other = 1169560 Total
Read 0b  Written 7.8278Gb  Total transferred 7.8278Gb  (26.717Mb/sec)
 1709.91 Requests/sec executed

Test execution summary:
    total time:                          300.0155s
    total number of events:              513000
    total time taken by event execution: 5.9942
    per-request statistics:
         min:                            0.01ms
         avg:                            0.01ms
         max:                            0.45ms
         approx. 95 percentile:          0.01ms

Threads fairness:
    events (avg/stddev):           513000.0000/0.00
    execution time (avg/stddev):   5.9942/0.00
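
(If you want to reproduce this: the fileio test needs its test files
created with a matching prepare step before the run shown above, and
removed afterwards:)

sysbench --test=fileio --file-total-size=10G prepare
sysbench --test=fileio --file-total-size=10G cleanup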

* iperf shows a 960 Mbit/s TCP transfer rate (the line is 10 GBit, but the
  Cisco switchport is 1 GBit). When sending UDP packets, the first packet
  losses occur at around 800 Mbit utilisation of the line. The latency on
  this line is 0.3-0.4 ms. So I think that the line is not the problem.
  Pair A, which uses the same Cisco switch and 10G backbone, also gets a
  good rate.
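
(Roughly the kind of invocation used, assuming iperf2 syntax and the
replication addresses from the configs above:)

iperf -s                        # TCP server on proxmox-2
iperf -c 10.0.0.2 -t 60         # TCP client on proxmox-1

iperf -s -u                     # UDP server on proxmox-2
iperf -c 10.0.0.2 -u -b 800M    # UDP client; losses start around here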

The next test I will do is to re-add drbd8 to the proxmox kernel and run
the drbd8/drbd9 comparison on the same hardware.
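
(Sketch of that comparison, assuming the same minor-101 device and mount
point as above; the sysbench invocation stays identical so the numbers
are directly comparable:)

modinfo drbd | grep '^version'   # confirm which module generation is loaded
mkfs.ext2 /dev/drbd101           # same ext2 setup as the baseline test
mount /dev/drbd101 /mnt/test && cd /mnt/test
sysbench --test=fileio --file-total-size=10G prepare
sysbench --test=fileio --file-total-size=10G --file-test-mode=rndwr \
    --init-rng=on --max-time=300 --max-requests=0 run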

Cheers

Volker

-- 
=========================================================
 inqbus Scientific Computing    Dr. Volker Jaenisch
 Richard-Strauss-Straße 1       +49(08861) 690 474 0
 86956 Schongau-West            http://www.inqbus.de
=========================================================