I've tested on the stock RHEL 5.3 x86_64 kernel (2.6.18-128.2.1.el5)
with drbd 8.3.2-3. Still having the same issue.
Perhaps this bug is related to the one mentioned in the "drbd 8.3.2
crash on Centos 5 while verify" thread?
Joshua West wrote:
> Just an update... the problem also occurs on 8.2.7. DRBD was built
> against 2.6.18.8 (xen 3.4.1rc8) on x86_64.
>
> Side note... the Recv-Q and Send-Q counts (from netstat -plan) are
> very high for the connections between the two drbd hosts. The 'pe' and
> 'ua' counters in /proc/drbd are also very high.
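(For reference, here is one way to pull those counters out of /proc/drbd; the status line below is a made-up sample for illustration, not output from my hosts.)

```shell
# Extract the pe (pending requests) and ua (unacknowledged requests)
# counters from a DRBD 8.x /proc/drbd status line. The sample line is
# illustrative; on a live host you would read /proc/drbd directly.
echo ' 0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r---
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:92 ua:87 ap:0' \
  | grep -o 'pe:[0-9]* ua:[0-9]*'
# On a real node: grep -o 'pe:[0-9]* ua:[0-9]*' /proc/drbd
```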
>
> Anybody have any thoughts?
>
> I'm going to test on the stock RHEL 5.3 kernel next; will post results.
>
> Thanks.
>
> Joshua West wrote:
>
>> Hey all,
>>
>> I'm attempting to run "drbdadm verify all" with either 8.3.0 or 8.3.2
>> and having no luck. In fact, upon executing that command and waiting a
>> bit, I still see no progress made in /proc/drbd (0% for all resources).
>> Additionally, dmesg starts to receive logs like:
>>
>> drbd0: [drbd0_worker/24497] sock_sendmsg time expired, ko = 4294967204
>> drbd2: [drbd2_worker/25816] sock_sendmsg time expired, ko = 4294967203
>> drbd1: [drbd1_worker/28768] sock_sendmsg time expired, ko = 4294967203
>> drbd0: [drbd0_worker/24497] sock_sendmsg time expired, ko = 4294967203
>> drbd2: [drbd2_worker/25816] sock_sendmsg time expired, ko = 4294967202
>> drbd1: [drbd1_worker/28768] sock_sendmsg time expired, ko = 4294967202
>>
>> and just keeps on going. 'dmesg' output on the peer drbd server looks
>> similar. All drbdadm commands -- on both servers -- freeze at this
>> point, but rebooting just one of the two systems resolves the issue.
>>
>> For reference, my /etc/drbd.conf looks like:
>>
>> global {
>>   usage-count no;
>> }
>>
>> common {
>>   protocol C;
>>   syncer {
>>     rate 1G;
>>     verify-alg sha1;
>>   }
>>   net {
>>     allow-two-primaries;
>>   }
>>   startup {
>>     become-primary-on both;
>>   }
>> }
>>
>> resource vm_ha-test1 {
>>   on xen-ha-f1.unet.brandeis.edu {
>>     device    /dev/drbd0;
>>     disk      /dev/vg0/drbd-vm_ha-test1;
>>     address   129.64.101.47:7789;
>>     meta-disk internal;
>>   }
>>   on xen-ha-g1.unet.brandeis.edu {
>>     device    /dev/drbd0;
>>     disk      /dev/vg0/drbd-vm_ha-test1;
>>     address   129.64.101.49:7789;
>>     meta-disk internal;
>>   }
>> }
>>
>> resource vm_ha-test1_swap {
>>   on xen-ha-f1.unet.brandeis.edu {
>>     device    /dev/drbd1;
>>     disk      /dev/vg0/drbd-vm_ha-test1_swap;
>>     address   129.64.101.47:7790;
>>     meta-disk internal;
>>   }
>>   on xen-ha-g1.unet.brandeis.edu {
>>     device    /dev/drbd1;
>>     disk      /dev/vg0/drbd-vm_ha-test1_swap;
>>     address   129.64.101.49:7790;
>>     meta-disk internal;
>>   }
>> }
>>
>> Any help would be greatly appreciated, as I'd prefer to have nightly
>> cron jobs verifying the state of the drbd resources' data.
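(Once the hang is sorted out, the nightly verify could be scheduled with a crontab entry along these lines; the /sbin path and the 02:00 start time are just assumptions for the sketch.)

```shell
# /etc/crontab entry (sketch): start an online verify of every
# resource each night at 02:00. Adjust the drbdadm path as needed.
0 2 * * *  root  /sbin/drbdadm verify all
```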
>>
>> Thanks.
>>
>>
>>
>
>
>
--
Joshua West
Senior Systems Engineer
Brandeis University
http://www.brandeis.edu