[DRBD-user] blocking I/O with drbd

Volker mail at blafoo.org
Mon Dec 19 13:44:47 CET 2011



Hi,

>> Now, how do i "debug" sync? :-)
> 
> your writes are being delayed. What "bug" are you trying to find?

Well, without drbd in use, writes to the very same

- sdb device
- volume group
- LVM volume

are not delayed, and currently I'm looking at drbd as a possible
cause. It doesn't necessarily have to be a bug :-)

> What does top have to say about CPU utilization (iowait)?
top shows 1-2 of the eight cores at around 95-100% iowait while the
aforementioned dd is running; the other six are 90-95% idle.
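
The per-core iowait counters behind those top numbers can also be read
straight from /proc/stat (field 6 after the cpu label); a quick sketch
for spotting the waiting core:

```shell
# Print the cumulative iowait jiffies per core from /proc/stat.
# Field 6 after the "cpuN" label is iowait; sample it twice during the
# dd run and the core that is stuck waiting stands out.
awk '/^cpu[0-9]/ { print $1, "iowait:", $6 }' /proc/stat
```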

That got me thinking about the CPU affinity of the drbd threads, so I
looked at the worker, drbd_worker:

###
$ taskset -p 3953
pid 3953's current affinity mask: 40
###

For some reason only one core is being assigned, even though 8 are
available. That was the case on both nodes. I tried setting

cpu_mask ff;

in the config on both nodes, which worked fine:

###
$ taskset -c -p 3953
pid 4385's current affinity list: 0-7
###
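
For reference, in DRBD 8.3-era configs the option is usually written
cpu-mask inside the syncer section (spelling and section placement may
differ by version, so treat this as a sketch):

```
syncer {
  cpu-mask ff;   # hex mask: ff = cores 0-7
}
```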

But it only worked for the drbd_worker and drbd_asender, not for the
drbd_receiver. Is that the desired behaviour?
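
If the receiver really is left pinned, its affinity could presumably be
widened by hand with taskset; a sketch, demonstrated on the current
shell's PID since the receiver's PID varies (in practice it would come
from something like pgrep -f drbd0_receiver, that thread name being an
assumption):

```shell
# Sketch: widen a process's CPU affinity by hand with taskset.
# PID=$$ is only for demonstration; substitute the drbd_receiver PID.
PID=$$
taskset -p ff "$PID"      # set the mask (ff = cores 0-7)
taskset -c -p "$PID"      # read it back as a core list
```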

Sadly, the write-delay behaviour has not changed after setting this,
running 'adjust' and doing a dd+sync.
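
The exact dd invocation isn't quoted above; a typical sequential-write
probe would look roughly like the following (target path and sizes are
made up here, on the real setup it would hit the DRBD-backed device or
filesystem):

```shell
# Sequential write test; conv=fdatasync makes dd flush before it
# reports timing, approximating the dd+sync pair in one command.
OUT=$(mktemp)
dd if=/dev/zero of="$OUT" bs=1M count=64 conv=fdatasync 2>&1
rm -f "$OUT"
```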

Any hints here?

> Is the network link congested?
No, the network link is performing fine, syncing at 25 MB/s without any
problems.

> How is the RTT developing?
I guess we're talking about round-trip time? If so, there is only a
single switch between the two servers, and it shows no problems or
error messages of any kind (it's a managed switch, with syslog etc.).
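
To put a number on it, the RTT between the nodes could be sampled with
plain ping (127.0.0.1 stands in for the peer's address here):

```shell
# Three pings; the summary line shows rtt min/avg/max.
ping -c 3 127.0.0.1 | tail -2
```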



