[DRBD-user] [I've speak too quickly... ]OCFS 1.2.3 + DRBD 0.8pre6 + XEN 3.0.2 + DEBIAN sarge 3.1 works ...

Sébastien CRAMATTE s.cramatte at wanadoo.fr
Mon Nov 6 11:57:55 CET 2006



Milind Dumbare wrote:
> I saw one topic on OCFS website about I/O scheduler. Please check that. You 
> might be facing same problem as I have. I have changed io scheduler to 
> deadline as described on OCFS FAQ. "elevator=deadline"
>
>   
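For reference, the scheduler change mentioned above can be made either at boot time or at runtime on a 2.6 kernel. A minimal sketch, assuming the backing device is /dev/sda (adjust to your actual disk):

```shell
# At boot: pass the elevator on the kernel command line,
# e.g. in GRUB's menu.lst:
#   kernel /vmlinuz-2.6.x root=/dev/sda1 ro elevator=deadline

# At runtime, per device, via sysfs
# (the currently active scheduler is shown in brackets):
cat /sys/block/sda/queue/scheduler
echo deadline > /sys/block/sda/queue/scheduler
```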
Do you use the same architecture? I mean Xen + DRBD + OCFS2.
Does this work for you now?

Regards
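PS: for the split-brain shown in the log below, the usual DRBD recovery (as I understand it from the drbdadm man page; the resource name 'r0' is taken from the log, and you must decide yourself which node's changes to throw away) is to discard one node's data and reconnect:

```shell
# On the node whose data will be discarded (the "split-brain victim"):
drbdadm secondary r0
drbdadm -- --discard-my-data connect r0

# On the surviving node (only needed if it is StandAlone):
drbdadm connect r0
```

After this, the victim resynchronises from the survivor and both sides should end up UpToDate/UpToDate.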

> On Monday 06 November 2006 02:07, Sébastien CRAMATTE wrote:
>   
>> I spoke too quickly ...
>> Five minutes later ...
>> I copied (using rsync) my /home directory to the mounted OCFS2
>> folder ... it works quite well.
>> But when I tried to delete some files inside the cluster ...
>> Kernel panic!
>>
>> I don't know whether the problem was DRBD or OCFS2 ...
>> And now DRBD tells me it has an outdated disk ... how can I
>> rebuild it? I mean, how can I bring it back UpToDate?
>> I can't find any documentation about this ... :(
>>
>>
>> Starting DRBD resources:    [ d0 drbd0: disk( Diskless -> Attaching )
>> drbd0: Found 6 transactions (19 active extents) in activity log.
>> drbd0: max_segment_size ( = BIO size ) = 4096
>> drbd0: drbd_bm_resize called with capacity == 4194104
>> drbd0: resync bitmap: bits=524263 words=16384
>> drbd0: size = 2047 MB (2097052 KB)
>> drbd0: reading of bitmap took 0 jiffies
>> drbd0: recounting of set bits took additional 0 jiffies
>> drbd0: 191 MB marked out-of-sync by on disk bit-map.
>> drbd0: Marked additional 64 MB as out-of-sync based on AL.
>> drbd0: disk( Attaching -> UpToDate ) pdsk( DUnknown -> Outdated )
>> drbd0: Writing meta data super block now.
>> n0 drbd0: conn( StandAlone -> Unconnected )
>> drbd0: receiver (re)started
>> drbd0: conn( Unconnected -> WFConnection )
>> ].
>> drbd0: conn( WFConnection -> WFReportParams )
>> drbd0: Handshake successful: DRBD Network Protocol version 85
>> drbd0: Split-Brain detected, dropping connection!
>> drbd0: self
>> E147CD0A49E9C18B:9DFB11BEEE623801:0000000000000004:0000000000000000
>> drbd0: peer
>> 83515216F5F6A08D:9DFB11BEEE623800:0000000000000004:0000000000000000
>> drbd0: conn( WFReportParams -> Disconnecting )
>> drbd0: error receiving ReportState, l: 4!
>> drbd0: asender terminated
>> drbd0: tl_clear()
>> drbd0: Connection closed
>> drbd0: conn( Disconnecting -> StandAlone )
>> drbd0: receiver terminated
>> ..........
>> ***************************************************************
>>  DRBD's startup script waits for the peer node(s) to appear.
>>  - In case this node was already a degraded cluster before the
>>    reboot the timeout is 120 seconds. [degr-wfc-timeout]
>>  - If the peer was available before the reboot the timeout will
>>    expire after 10 seconds. [wfc-timeout]
>>    (These values are for resource 'r0'; 0 sec -> wait forever)
>>  To abort waiting enter 'yes' [  10]:
>> Starting periodic command scheduler: cron.
>> # cat /proc/drbd
>> version: 8.0pre6 (api:85/proto:85)
>> SVN Revision: 2585 build by root at compilbox.telelorca.com, 2006-11-05
>> 16:25:26
>>  0: cs:StandAlone st:Primary/Unknown ds:UpToDate/Outdated   r---
>>     ns:0 nr:0 dw:0 dr:0 al:0 bm:16 lo:0 pe:0 ua:0 ap:0
>>         resync: used:0/31 hits:0 misses:0 starving:0 dirty:0 changed:0
>>         act_log: used:0/257 hits:0 misses:0 starving:0 dirty:0 changed:0
>>
>> ....
>>
>> I need to rebuild it ASAP so I can try to reproduce the crash and send
>> a dump to the DRBD and OCFS2 lists.
>>
>> regards
>>
>> Sébastien CRAMATTE wrote:
>>     
>>> Hi,
>>>
>>> I've just set up:
>>>
>>> OCFS 1.2.3 + DRBD 0.8pre6 + XEN 3.0.2 + DEBIAN sarge 3.1 works ...
>>>
>>> I can correctly mount the drbd0 device with an OCFS2 1.2.3 volume in
>>> active/active mode within two Xen 3.0.2 domUs on different servers.
>>> I've just run some basic tests ...
>>>
>>> I would like to test this setup in depth ... I'm a newbie with DRBD and
>>> OCFS2. What kinds of things should be tested? Any tips or ideas are welcome.
>>>
>>>
>>> Regards
>>>
>>>
>>> _______________________________________________
>>> drbd-user mailing list
>>> drbd-user at lists.linbit.com
>>> http://lists.linbit.com/mailman/listinfo/drbd-user



