[DRBD-user] Lower device primary-primary, upper device StandAlone + LVM = "block drbd0: Got DiscardAck packet..."

3Flight admin at tradomed-invest.ru
Mon Apr 18 16:14:21 CEST 2011



My configuration is as follows:

PVE host1: drbd0-pri>>drbd10-pri-standalone
PVE host2: drbd0-pri>>drbd10-pri-standalone
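
For reference, the lower dual-primary resource is set up along these lines
(a rough sketch; hostnames, backing disks and addresses are placeholders,
not my exact values):

resource r0 {
        protocol C;
        net {
                # both PVE hosts hold drbd0 in Primary at the same time
                allow-two-primaries;
        }
        on host1 {
                device    /dev/drbd0;
                disk      /dev/sdb1;
                address   10.0.0.1:7788;
                meta-disk internal;
        }
        on host2 {
                device    /dev/drbd0;
                disk      /dev/sdb1;
                address   10.0.0.2:7788;
                meta-disk internal;
        }
}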

I create a PV on drbd10 on one host, and I can immediately see it with
pvscan on the other host.
Then I create a VG on one host and see it on the other.
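
Roughly the commands involved (the VG name "vmstore" is just an example):

# on host1: put LVM on top of the standalone upper DRBD device
pvcreate /dev/drbd10
vgcreate vmstore /dev/drbd10

# on host2: the new PV/VG is visible right away, since the lower
# dual-primary drbd0 replicates everything underneath drbd10
pvscan
vgscan
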
Then I create a VM via the Proxmox web UI on host1, which creates an LV
(call it lv1). I can see it on host2.
Then I create a VM via the Proxmox web UI on host2, which creates an LV
(call it lv2). I can see it on host1.
Then I start both VMs simultaneously and begin installing Debian on them.
Eventually one of the VMs hangs, and if I run "dmesg" I see this:

block drbd0: drbd10_worker[3489] Concurrent remote write detected! [DISCARD L] new: 5879393s +512; pending: 5879393s +512
block drbd0: drbd0_receiver[2896] Concurrent local write detected!      new: 5879393s +512; pending: 5879393s +512
block drbd0: Concurrent write! [W AFTERWARDS] sec=5879393s
block drbd0: Got DiscardAck packet 5879393s +512! DRBD is not a random data generator!
block drbd0: drbd0_receiver[2896] Concurrent local write detected!      new: 5879394s +512; pending: 5879394s +512
block drbd0: Concurrent write! [W AFTERWARDS] sec=5879394s
block drbd0: Got DiscardAck packet 5879394s +512! DRBD is not a random data generator!
block drbd0: drbd10_worker[3489] Concurrent remote write detected! [DISCARD L] new: 5879395s +512; pending: 5879395s +512
block drbd0: drbd10_worker[3489] Concurrent remote write detected! [DISCARD L] new: 5879394s +512; pending: 5879394s +512
block drbd0: drbd0_receiver[2896] Concurrent local write detected!      new: 5879395s +512; pending: 5879395s +512
block drbd0: Concurrent write! [W AFTERWARDS] sec=5879395s
block drbd0: Got DiscardAck packet 5879395s +512! DRBD is not a random data generator!
block drbd0: Got DiscardAck packet 5879396s +512! DRBD is not a random data generator!
INFO: task drbd10_worker:3489 blocked for more than 120 seconds.
drbd10_worker D ffff880001f156c0     0  3489      2 0x00000000
 [<ffffffffa03a2d8e>] ? _al_get+0x78/0x8f [drbd]
 [<ffffffffa03a3be4>] drbd_al_begin_io+0xe5/0x1b4 [drbd]
 [<ffffffffa03a0f41>] drbd_make_request_common+0x32b/0xd84 [drbd]
 [<ffffffffa03a1c19>] drbd_make_request_26+0x27f/0x3da [drbd]
 [<ffffffffa03a3d98>] _drbd_md_sync_page_io+0xe5/0x17a [drbd]
 [<ffffffffa03a48e0>] drbd_md_sync_page_io+0x337/0x3ff [drbd]
 [<ffffffffa03a4fcc>] w_al_write_transaction+0x202/0x2e2 [drbd]
 [<ffffffffa0391d80>] drbd_worker+0x55a/0x567 [drbd]
 [<ffffffffa03ab7f6>] drbd_thread_setup+0x33/0x119 [drbd]
 [<ffffffffa03ab7c3>] ? drbd_thread_setup+0x0/0x119 [drbd]
 [<ffffffffa03a2d8e>] ? _al_get+0x78/0x8f [drbd]
 [<ffffffffa03a3be4>] drbd_al_begin_io+0xe5/0x1b4 [drbd]
 [<ffffffffa03a0f41>] drbd_make_request_common+0x32b/0xd84 [drbd]
 [<ffffffffa03a1c19>] drbd_make_request_26+0x27f/0x3da [drbd]
 [<ffffffffa03a2d8e>] ? _al_get+0x78/0x8f [drbd]
 [<ffffffffa03a3be4>] drbd_al_begin_io+0xe5/0x1b4 [drbd]
 [<ffffffffa03a0f41>] drbd_make_request_common+0x32b/0xd84 [drbd]
 [<ffffffffa03a1c19>] drbd_make_request_26+0x27f/0x3da [drbd]
 [<ffffffffa03a2d8e>] ? _al_get+0x78/0x8f [drbd]
 [<ffffffffa03a3be4>] drbd_al_begin_io+0xe5/0x1b4 [drbd]
 [<ffffffffa03a0f41>] drbd_make_request_common+0x32b/0xd84 [drbd]
 [<ffffffffa03a1c19>] drbd_make_request_26+0x27f/0x3da [drbd]
 [<ffffffffa03a3c5a>] drbd_al_begin_io+0x15b/0x1b4 [drbd]
 [<ffffffffa03a4dca>] ? w_al_write_transaction+0x0/0x2e2 [drbd]
 [<ffffffffa03a0f41>] drbd_make_request_common+0x32b/0xd84 [drbd]
 [<ffffffffa03a1c19>] drbd_make_request_26+0x27f/0x3da [drbd]
 [<ffffffffa03a2d8e>] ? _al_get+0x78/0x8f [drbd]
 [<ffffffffa03a3be4>] drbd_al_begin_io+0xe5/0x1b4 [drbd]
 [<ffffffffa03a0f41>] drbd_make_request_common+0x32b/0xd84 [drbd]
 [<ffffffffa03a1c19>] drbd_make_request_26+0x27f/0x3da [drbd]
 [<ffffffffa03a2d8e>] ? _al_get+0x78/0x8f [drbd]
 [<ffffffffa03a3be4>] drbd_al_begin_io+0xe5/0x1b4 [drbd]
 [<ffffffffa03a0f41>] drbd_make_request_common+0x32b/0xd84 [drbd]
 [<ffffffffa03a1c19>] drbd_make_request_26+0x27f/0x3da [drbd]
 [<ffffffffa03a2d8e>] ? _al_get+0x78/0x8f [drbd]
 [<ffffffffa03a3be4>] drbd_al_begin_io+0xe5/0x1b4 [drbd]
 [<ffffffffa03a0f41>] drbd_make_request_common+0x32b/0xd84 [drbd]
 [<ffffffffa03a1c19>] drbd_make_request_26+0x27f/0x3da [drbd]
 [<ffffffffa039f94b>] ? drbd_merge_bvec+0x74/0xa2 [drbd]

Similar logs appear on the other host:
block drbd0: drbd10_worker[6323] Concurrent remote write detected! [DISCARD L] new: 5879395s +512; pending: 5879395s +512
block drbd0: drbd0_receiver[6264] Concurrent local write detected!      new: 5879393s +512; pending: 5879393s +512
block drbd0: Concurrent write! [DISCARD BY FLAG] sec=5879393s
block drbd0: drbd0_receiver[6264] Concurrent local write detected!      new: 5879394s +512; pending: 5879394s +512
block drbd0: Concurrent write! [DISCARD BY FLAG] sec=5879394s
block drbd0: drbd0_receiver[6264] Concurrent local write detected!      new: 5879395s +512; pending: 5879395s +512
block drbd0: Concurrent write! [DISCARD BY FLAG] sec=5879395s
block drbd0: drbd0_receiver[6264] Concurrent local write detected!      new: 5879396s +512; pending: 5879396s +512
block drbd0: Concurrent write! [DISCARD BY FLAG] sec=5879396s

This host's VM did not hang. However, I have tested this a couple of times,
and sometimes both VMs hang. What is wrong with this config? I want this
setup because later I want to sync the upper-layer DRBD device to a third
node.
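
In other words, the goal is the stacked three-node layout, where the upper
resource later gets a remote peer instead of staying StandAlone. A rough
sketch of what I have in mind, with placeholder names and addresses:

resource r10-U {
        # async replication to the remote third node
        protocol A;
        stacked-on-top-of r0 {
                device  /dev/drbd10;
                address 192.168.1.100:7789;   # address on the active site
        }
        on backup-node {
                device    /dev/drbd10;
                disk      /dev/sdc1;
                address   192.168.1.200:7789;
                meta-disk internal;
        }
}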





