Hi,
I just figured out that this happens only when I detach a node while both
nodes are primary.
Switching the node from primary to secondary before detaching does not
trigger any resync.
From the man page: "In case a primary node leaves the cluster unexpectedly
the areas covered by the active set must be resynced upon rejoin of the
failed node."
But in this case I am detaching manually, so it is not an unexpected
departure of a primary node.
Is this a bug or is it the intended behaviour?
Cheers,
Cristian
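As a sanity check on the numbers, the amount marked out-of-sync lines up with the activity-log geometry, assuming DRBD's defaults of 4 MiB per AL extent and 4 KiB per on-disk bitmap bit (both are assumptions; they are not stated in the log). A minimal sketch of the arithmetic, using the figures from the dmesg output quoted below:

```python
AL_EXTENT_KB = 4 * 1024   # one activity-log extent covers 4 MiB (assumed DRBD default)
BITMAP_BIT_KB = 4         # one on-disk bitmap bit covers 4 KiB (assumed DRBD default)

def al_resync_kb(extents):
    """KB resynced when the given number of AL extents is marked dirty."""
    return extents * AL_EXTENT_KB

# dmesg reports "will sync 200704 KB [50176 bits set]":
print(50176 * BITMAP_BIT_KB)     # 200704 KB, consistent with the bit count
print(al_resync_kb(49))          # 200704 KB, i.e. 49 extents' worth
print(al_resync_kb(49) // 1024)  # 196 MB, the "Marked additional 196 MB" figure
```

(49 is one fewer than the "50 active extents" the log reports; the small discrepancy is not explained by these numbers alone.)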
Cristian Zamfir wrote:
>
>
> Ross S. W. Walker wrote:
>>> -----Original Message-----
>>> From: drbd-user-bounces at lists.linbit.com
>>> [mailto:drbd-user-bounces at lists.linbit.com] On Behalf Of Cristian
>>> Zamfir
>>> Sent: Wednesday, February 14, 2007 12:52 PM
>>> To: drbd-user at lists.linbit.com
>>> Subject: [DRBD-user] out-of-sync based on AL
>>>
>>>
>>> Hi,
>>>
>>> I am trying to understand why the AL is marking 196 MB as out of sync.
>>> I have two connected primaries, A and B. After they are synced I
>>> detach B. In the meantime, the disk on A is not used. I reattach B,
>>> and the on-disk bitmap sees that no bytes are out-of-sync, but the AL
>>> marks 196 MB as out-of-sync.
>>>
>>> In case I am misunderstanding something, could you please explain a
>>> bit how the AL (activity log) works?
>>>
>>> This is what dmesg says on node B:
>>>
>>>
>>> [11989.600828] drbd1: disk( UpToDate -> Diskless )
>>> [11994.007899] drbd1: disk( Diskless -> Attaching )
>>> [11994.040025] drbd1: Found 6 transactions (50 active extents) in
>>> activity log.
>>> [11994.040067] drbd1: max_segment_size ( = BIO size ) = 32768
>>> [11994.050505] drbd1: reading of bitmap took 1 jiffies
>>> [11994.050759] drbd1: recounting of set bits took additional 0 jiffies
>>> [11994.050780] drbd1: 0 KB marked out-of-sync by on disk bit-map.
>>> [11994.050819] drbd1: Marked additional 196 MB as out-of-sync based
>>> on AL.
>>> [11994.066857] drbd1: disk( Attaching -> Negotiating )
>>> [11994.066942] drbd1: Writing meta data super block now.
>>> [11994.067332] drbd1: conn( Connected -> WFBitMapT )
>>> [11994.067358] drbd1: Writing meta data super block now.
>>> [11994.079356] drbd1: conn( WFBitMapT -> WFSyncUUID )
>>> [11994.090853] drbd1: conn( WFSyncUUID -> SyncTarget ) disk(
>>> Negotiating -> Inconsistent )
>>> [11994.090895] drbd1: Began resync as SyncTarget (will sync 200704
>>> KB [50176 bits set]).
>>> [11994.090926] drbd1: Writing meta data super block now.
>>> [12014.611505] drbd1: Resync done (total 20 sec; paused 0 sec; 10032
>>> K/sec)
>>> [12014.611562] drbd1: conn( SyncTarget -> Connected ) disk(
>>> Inconsistent -> UpToDate )
>>> [12014.611601] drbd1: Writing meta data super block now.
>>>
>>
>> Interesting, what method did you use to detach and which version are you
>> running?
>>
>> -Ross
>>
>>
> I use drbdadm detach and drbdadm attach.
> I checked out the latest svn trunk on Mon, 11 Dec 2006, and my kernel
> is 2.6.16.29.
> I am using drbd on top of a 4 GB LVM logical volume with internal
> metadata.
>
> resource "r1" {
>     protocol C;
>     startup {
>         wfc-timeout      0;    ## Infinite!
>         degr-wfc-timeout 120;  ## 2 minutes.
>     }
>     disk {
>         on-io-error detach;
>     }
>     net {
>         # timeout        60;
>         # connect-int    10;
>         # ping-int       10;
>         # max-buffers    2048;
>         # max-epoch-size 2048;
>         allow-two-primaries;
>     }
>     syncer {
>         rate 10M;
>         al-extents 257;
>     }
>
>     on ramree {
>         device    /dev/drbd1;
>         disk      /dev/vgn/vm;
>         address   130.209.253.128:7789;
>         meta-disk internal;
>     }
>
>     on rangatira {
>         device    /dev/drbd1;
>         disk      /dev/vgn/vm;
>         address   130.209.253.129:7789;
>         meta-disk internal;
>     }
> }
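One side note on the syncer settings: al-extents bounds how much data can be marked out-of-sync based on the AL when a primary is lost. A rough sketch of the worst case for the configuration above, assuming the default 4 MiB extent size (an assumption, not stated in the config):

```python
AL_EXTENT_MB = 4      # DRBD activity-log extent size (assumed default, in MiB)

al_extents = 257      # "al-extents 257;" from the syncer section above
worst_case_mb = al_extents * AL_EXTENT_MB
print(worst_case_mb)  # 1028 MB: roughly 1 GB of worst-case AL-driven resync
```

So with this setting, an unexpected loss of a primary could force up to about 1 GB of resync even if the bitmap shows nothing dirty.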
>
>
> Cheers,
>
> Cristian
>
> _______________________________________________
> drbd-user mailing list
> drbd-user at lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user
>