Thank you!
Fixing the cpu mask works like a charm with your suggestions:
1.)
Primary# for i in $(pidof drbd0_worker drbd1_worker drbd0_receiver drbd1_receiver drbd0_asender drbd1_asender); do taskset -c -p $i; done
pid 4129's current affinity list: 0,2,4,6
pid 4143's current affinity list: 0,2,4,6
pid 4152's current affinity list: 0,2,4,6
pid 4154's current affinity list: 0,2,4,6
pid 12752's current affinity list: 0,2,4,6
pid 15831's current affinity list: 0,2,4,6
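(A side note, my own sketch rather than something from the thread: drbdsetup parses cpu-mask as hexadecimal, so the old "cpu-mask 255;" was read as 0x255 = binary 1001010101, i.e. CPUs 0, 2, 4, 6 and 9; with only 8 cores present, that leaves exactly the 0,2,4,6 shown above. A small bash helper, here called mask_to_cpus, hypothetical and for illustration only, to decode a mask:)

```shell
#!/bin/bash
# mask_to_cpus: hypothetical helper that decodes a DRBD-style cpu-mask
# (hex string) into the comma-separated CPU list it selects.
mask_to_cpus() {
    local mask=$((16#$1)) cpu=0 list=""
    while [ "$mask" -ne 0 ]; do
        if [ $((mask & 1)) -eq 1 ]; then
            list="${list:+$list,}$cpu"   # append this CPU number
        fi
        mask=$((mask >> 1))
        cpu=$((cpu + 1))
    done
    echo "$list"
}

mask_to_cpus 255   # "255" taken as hex 0x255 -> 0,2,4,6,9
mask_to_cpus ff    # 0xff -> 0,1,2,3,4,5,6,7
```

(The same decoding also gives 1,4 for mask 12, matching the user-guide example discussed further down.)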
2.)
--- drbd.conf.old
+++ drbd.conf
 al-extents 3389;
-cpu-mask 255;
+cpu-mask ff;
 }
3.)
Primary# drbdadm -v adjust all
drbdsetup 0 syncer --set-defaults --create-device --cpu-mask=ff --al-extents=3389 --verify-alg=sha1 --rate=100M
drbdsetup 1 syncer --set-defaults --create-device --cpu-mask=ff --al-extents=3389 --verify-alg=sha1 --rate=100M --after=0
Primary# for i in $(pidof drbd0_worker drbd1_worker drbd0_receiver drbd1_receiver drbd0_asender drbd1_asender); do taskset -c -p $i; done
pid 4129's current affinity list: 0-7
pid 4143's current affinity list: 0-7
pid 4152's current affinity list: 0,2,4,6
pid 4154's current affinity list: 0,2,4,6
pid 12752's current affinity list: 0-7
pid 15831's current affinity list: 0-7
4.)
Secondary# drbdadm disconnect all
Secondary# drbdadm connect all
Primary# for i in $(pidof drbd0_worker drbd1_worker drbd0_receiver drbd1_receiver drbd0_asender drbd1_asender); do taskset -c -p $i; done
pid 4129's current affinity list: 0-7
pid 4143's current affinity list: 0-7
pid 4152's current affinity list: 0-7
pid 4154's current affinity list: 0-7
pid 11350's current affinity list: 0-7
pid 11351's current affinity list: 0-7
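(For re-checking this after future config changes, here is a sketch of my own; check_affinity is a hypothetical helper, not a DRBD tool. It compares a taskset output line against the list you expect:)

```shell
#!/bin/bash
# check_affinity: hypothetical helper comparing the output line of
# `taskset -c -p <pid>` against an expected affinity list.
check_affinity() {
    local line="$1" expected="$2"
    local got="${line##* }"   # last word, e.g. "0-7" or "0,2,4,6"
    [ "$got" = "$expected" ]
}

# Example: flag any DRBD worker thread still pinned to the old mask.
for pid in $(pidof drbd0_worker drbd1_worker 2>/dev/null); do
    check_affinity "$(taskset -c -p "$pid")" "0-7" \
        || echo "pid $pid not yet on 0-7"
done
```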
(DRBD is wonderful;-)
Kind Regards,
Roland
On 31.01.2011 20:17, Lars Ellenberg wrote:
> On Mon, Jan 31, 2011 at 11:12:49AM +0100, Roland Friedwagner wrote:
>> Hello,
>>
>> after reading the available documentation in DRBD User-Guide
>> (http://www.drbd.org/users-guide/s-latency-tuning.html#s-latency-tuning-cpu-mask)
>>
>> ...
>> A mask of 12 (00001100) implies DRBD may use the third and fourth CPU.
>> ...
>>
>> and the man page drbd.conf:
>>
>> ...
>> The default value of cpu-mask is 0, which means that
>> DRBD's kernel threads should be spread over all CPUs of the machine.
>> This value must be given in hexadecimal notation.
>> ...
>>
>>
>> I set the config parameter cpu-mask in drbd.conf to 255
>> (to enable usage of all 8 available cores) but got this:
>>
>> # ps u 4387
>> USER  PID  %CPU %MEM VSZ RSS TTY STAT START TIME    COMMAND
>> root  4387 3.5  0.0  0   0   ?   S    2010  1539:29 [drbd0_worker]
>> # taskset -c -p 4387
>> pid 4387's current affinity list: 0,2,4,6
>>
>> But expected this list: 0,1,2,3,4,5,6,7
>>
>> => Conclusion:
>>
>> 1. The example in DRBD User-Guide is simply wrong
>> (drbd.conf: "cpu-mask 12;" => affinity list: 1,4)
>>
>> 2. The cpu-mask parameter has to be specified, as stated in the man
>> page,
>> as Hexstring ("cpu-mask ff;" to get the first 8 cpus) in drbd.conf
>>
>> 3. But if the parameter cpu-mask is explicitly set to zero in drbd.conf
>> (to get it to run on _all_ cpus) I get only the second cpu (affinity
>> list: 1).
>> So in this aspect the man page is wrong about the default.
>
> It's not exactly wrong, but possibly lacks an important detail:
> if the cpu_mask is not specified, the drbd kernel threads of a specific
> minor will be pinned on one particular cpu, but across all minors, drbd
> threads will be spread over all cpus.
> At least that was the intention, iirc.
> Actual results of an unspecified cpu-mask (or explicitly specified as 0)
> may even vary with kernel version.
>
>> My DRBD Version is 8.3.9.
>>
>> @linbit: Could this be fixed in User-Guide and man page
>
> Thanks, noted, will be fixed.
>
>> And I'm not sure if it can safely be fixed by setting the mask on running
>> [drbd1_worker], [drbd1_receiver] and [drbd1_asender] tasks like
>> this:
>>
>> taskset -c -p 0-7 <drbdX_yyy pids>
>
> As long as kernel threads don't ignore attempts to set their cpu mask
> from userland, and I don't think they do, this should just work.
>
>> (Because I wouldn't like to shut down drbd resources on the primary.)
>> Or might this trigger some race condition and make drbd hang or show
>> other erratic behaviour?
>
> It won't cause any harm.
>
> But you should just set cpu-mask ff.
>
> Note that it may take a new write request or some other "full round trip"
> through all threads for the change to become visible: to avoid locking
> issues, they all set their own cpumask in their respective "main loop"
> equivalent.
>