Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Hi!
On 10.02.2017 at 12:47, Robert Altnoeder wrote:
> The first question would be what the configuration for each of those
> resources is (global_common.conf, resource configuration file).
I have not sent them because they are plain defaults:
Pair A:
resource r0 {
    on mail1 {
        device    /dev/drbd1;
        disk      /dev/sda1;
        address   172.27.250.8:7789;
        meta-disk internal;
    }
    on mail2 {
        device    /dev/drbd1;
        disk      /dev/sda1;
        address   172.27.250.9:7789;
        meta-disk internal;
    }
}

global {
    usage-count no;
    # minor-count dialog-refresh disable-ip-verification
}

common {
    handlers {
    }
    startup {
    }
    options {
    }
    disk {
    }
    net {
        protocol C;
    }
}
There is no LVM below DRBD, only above it.
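(For completeness, the settings DRBD is actually running with on Pair A can be compared against these files using the standard tooling; only the resource name r0 comes from the config above, everything else is stock commands:

    # dump the configuration exactly as drbdadm parsed it
    drbdadm dump r0

    # show the in-kernel options of the running resource (drbd 8.4/9 syntax)
    drbdsetup show r0

    # on a DRBD 8 module, the classic status view with sync/IO counters
    cat /proc/drbd
)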
Pair B:
root@proxmox-1:~# lvdisplay
  --- Logical volume ---
  LV Path                /dev/drbdpool/.drbdctrl_0
  LV Name                .drbdctrl_0
  VG Name                drbdpool
  LV UUID                Qq7v8Y-Mu4b-S2ZN-fPma-3oLo-12z1-M5t6A5
  LV Write Access        read/write
  LV Creation host, time proxmox-1, 2017-02-08 22:07:21 +0100
  LV Status              available
  # open                 2
  LV Size                4.00 MiB
  Current LE             1
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           251:5

  --- Logical volume ---
  LV Path                /dev/drbdpool/.drbdctrl_1
  LV Name                .drbdctrl_1
  VG Name                drbdpool
  LV UUID                31B7yV-iqfq-bK1G-KmHj-FoT4-2Ui4-z59NAs
  LV Write Access        read/write
  LV Creation host, time proxmox-1, 2017-02-08 22:07:21 +0100
  LV Status              available
  # open                 2
  LV Size                4.00 MiB
  Current LE             1
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           251:6

  --- Logical volume ---
  LV Path                /dev/drbdpool/vm-100-disk-1_00
  LV Name                vm-100-disk-1_00
  VG Name                drbdpool
  LV UUID                mPiP9w-db8R-KHfq-vb06-7Msx-P5B0-BSer22
  LV Write Access        read/write
  LV Creation host, time proxmox-1, 2017-02-09 03:13:46 +0100
  LV Status              available
  # open                 2
  LV Size                32.03 GiB
  Current LE             8200
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           251:7
root@proxmox-1:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sdb1
  VG Name               drbdpool
  PV Size               824.19 GiB / not usable 4.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              210991
  Free PE               202789
  Allocated PE          8202
  PV UUID               7cJFqj-cV2h-z6AT-4DJk-CTy9-TGuG-vnAF4L

  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               pve
  PV Size               37.75 GiB / not usable 2.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              9663
  Free PE               1175
  Allocated PE          8488
  PV UUID               QNtXfY-fk0o-TDb4-mugP-qnov-skaB-9mCWCY
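(The same picture can be pulled in a more compact form with the LVM reporting commands, should anyone want to reproduce it; drbdpool is the VG name from the output above:

    lvs -o lv_name,vg_name,lv_size,devices drbdpool
    pvs
)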
resource .drbdctrl {
    net {
        cram-hmac-alg       sha256;
        shared-secret       "xxxxx";
        allow-two-primaries no;
    }
    volume 0 {
        device    minor 0;
        disk      /dev/drbdpool/.drbdctrl_0;
        meta-disk internal;
    }
    volume 1 {
        device    minor 1;
        disk      /dev/drbdpool/.drbdctrl_1;
        meta-disk internal;
    }
    on proxmox-2 {
        node-id 1;
        address ipv4 10.0.0.2:6999;
    }
    on proxmox-1 {
        node-id 0;
        address ipv4 10.0.0.1:6999;
    }
    connection-mesh {
        hosts proxmox-2 proxmox-1;
        net {
            protocol C;
        }
    }
}

resource vm-100-disk-1 {
    template-file "/var/lib/drbd.d/drbdmanage_global_common.conf";
    net {
        allow-two-primaries yes;
        shared-secret       "xxxxxxx";
        cram-hmac-alg       sha1;
    }
    on proxmox-2 {
        node-id 0;
        address ipv4 10.0.0.2:7000;
        volume 0 {
            device minor 101;
            disk /dev/null;
            disk {
                size 33554432k;
            }
            meta-disk internal;
        }
    }
    on proxmox-1 {
        node-id 1;
        address ipv4 10.0.0.1:7000;
        volume 0 {
            device minor 101;
            disk /dev/drbdpool/vm-100-disk-1_00;
            disk {
                size 33554432k;
            }
            meta-disk internal;
        }
    }
    connection-mesh {
        hosts proxmox-2 proxmox-1;
    }
}

common {
    handlers {
    }
    startup {
    }
    options {
    }
    disk {
    }
    net {
        protocol C;
    }
}
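As a cross-check that these drbdmanage-generated files match what is actually active in the kernel (protocol C, allow-two-primaries, and so on), something along these lines should work; the resource names are the ones from above, the commands are standard DRBD 9 tooling:

    # effective, in-kernel configuration of the VM resource
    drbdsetup show vm-100-disk-1

    # connection and disk state of all resources on this node
    drbdadm status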
The hardware has been checked:
* sysbench shows 1700 I/O operations per second (27 Mb/sec) for random
writes on both Pair B servers, measured on an ext2-mounted LVM test
device in drbdpool. The hardware of Pair B is several years old and only
used for testing, but still powerful.
root@proxmox-1:/mnt/test# sysbench --test=fileio --file-total-size=10G --file-test-mode=rndwr --init-rng=on --max-time=300 --max-requests=0 run
sysbench 0.4.12: multi-threaded system evaluation benchmark
Running the test with following options:
Number of threads: 1
Initializing random number generator from timer.
Extra file open flags: 0
128 files, 80Mb each
10Gb total file size
Block size 16Kb
Number of random requests for random IO: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random write test
Threads started!
Time limit exceeded, exiting...
Done.
Operations performed: 0 Read, 513000 Write, 656560 Other = 1169560 Total
Read 0b Written 7.8278Gb Total transferred 7.8278Gb *(26.717Mb/sec)*
*1709.91 Requests/sec executed*
Test execution summary:
total time: 300.0155s
total number of events: 513000
total time taken by event execution: 5.9942
per-request statistics:
min: 0.01ms
avg: 0.01ms
max: 0.45ms
approx. 95 percentile: 0.01ms
Threads fairness:
events (avg/stddev): 513000.0000/0.00
execution time (avg/stddev): 5.9942/0.00
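To separate the backing storage from DRBD itself, roughly the same workload (16k random writes) could also be repeated with fio, once against the /mnt/test mount used above and once against a filesystem backed by the DRBD device (minor 101, i.e. /dev/drbd101); this is only a sketch, assuming fio is available, and the file name fio.test, the 1G size and the 60s runtime are placeholders:

    # 16k random writes, direct I/O, comparable to the sysbench run above
    fio --name=randwrite-test --filename=/mnt/test/fio.test \
        --rw=randwrite --bs=16k --size=1G --direct=1 \
        --ioengine=libaio --iodepth=1 --runtime=60 --time_based

Comparing the IOPS of such a run on the plain LV and on the DRBD-backed path would show how much of the drop is DRBD overhead versus network or backing disk.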
* iperf shows a 960 Mbit/s TCP transfer rate (the line is 10 GBit, but
the Cisco switch port is 1 GBit). When sending UDP packets, the first
packet losses occur at around 800 Mbit utilisation of the line. The
latency on this line is 0.3-0.4 ms, so I think the line is not the
problem. Pair A, which uses the same Cisco switch and 10G backbone, also
gets a good rate.
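Such numbers can be reproduced with plain iperf between the two replication addresses (10.0.0.1/10.0.0.2 from the configs); the durations and the UDP bandwidth value below are only examples:

    # on proxmox-1: run the server
    iperf -s

    # on proxmox-2: TCP throughput for 30 seconds
    iperf -c 10.0.0.1 -t 30

    # on proxmox-2: UDP at a fixed offered rate to find where losses start
    iperf -c 10.0.0.1 -u -b 800M -t 30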
The next test I will do is to bring drbd8 back into the Proxmox kernel
and compare drbd8 and drbd9 on the same hardware.
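(For that comparison it is useful to record which DRBD module each run was made with, e.g. via:

    modinfo drbd | grep -i version
    drbdadm --version
)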
Cheers
Volker
--
=========================================================
inqbus Scientific Computing Dr. Volker Jaenisch
Richard-Strauss-Straße 1 +49(08861) 690 474 0
86956 Schongau-West http://www.inqbus.de
=========================================================