Hi,
In /proc/drbd I get oos:500 about every week; it shows up after I run verify. After that I restart the secondary, it comes back with oos:0, and a subsequent verify passes cleanly with oos:0. But after about a week I get oos:500 again. When I write 500 I mean roughly 500; the exact number is a bit different each week.

My stack is OCFS2 on top of DRBD, on top of LVM, on top of an mdadm RAID5. The DRBD metadata is on an LVM volume on a separate single disk: a 47 MB partition, of which 44 MB is used by LVM. I didn't change any of the default sizes in the underlying layers; the mdadm chunk size is 64 KB and the LVM PE size is 4 MB. The DRBD version is 8.3.2rc2.
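In case it matters how I'm reading the numbers, this is roughly how I pull the oos counter out of /proc/drbd on both nodes (just a sketch; the layout it assumes is what my 8.3.x module prints, and I believe oos is reported in KiB, so it may need adjusting on other versions):

#!/usr/bin/env python3
# Sketch: read the oos (out-of-sync) counter per DRBD minor from /proc/drbd.
# Assumes the 8.3.x layout: a device line like " 0: cs:Connected ..." followed
# by a performance-counter line containing "... oos:500".
import re

def oos_counters(path="/proc/drbd"):
    counters = {}
    device = None
    with open(path) as f:
        for line in f:
            m = re.match(r"\s*(\d+):", line)      # device status line, e.g. " 0: cs:Connected ..."
            if m:
                device = int(m.group(1))
                continue
            m = re.search(r"\boos:(\d+)", line)   # counter line; oos is in KiB as far as I know
            if m and device is not None:
                counters[device] = int(m.group(1))
    return counters

if __name__ == "__main__":
    for dev, oos in sorted(oos_counters().items()):
        print(f"drbd{dev}: oos={oos}")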
Is this a serious problem?
I want to run DRBD in dual-primary mode, but because of this problem I am being cautious and currently run it as primary only on node #1.
What can I do to solve this problem?