Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
On Sun, Sep 27, 2009 at 03:06:09AM -0400, sylarrrrrrr at aim.com wrote:
> Hi
>
> In /proc/drbd, I get oos:500 about every week. I get it after I run
> verify. After that I restart the secondary, get oos:0, and then
> verify passes fine with oos:0. But after a week I get oos:500 again.
> When I write 500, I mean about 500; every week the number is a bit
> different. I have ocfs2 on top of drbd, which is on top of lvm, which
> is on top of mdadm raid5. My metadata is on an lvm that is on a
> separate single disk. My metadata is a 47MB partition, of which
> 44MB? is used by LVM. I didn't touch any of the default sizes for
> anything in the underlying systems. I know that the mdadm chunk size
> is 64kB, the LVM PE size is 4MB, and the drbd version is 8.3.2rc2.
>
> Is this a serious problem?

It _may_ be indicative of a problem.

Some threads around that topic:

What causes nodes to become out-of-sync?
http://thread.gmane.org/gmane.linux.network.drbd/15430/

Behaviour of verify: false positives -> true positives
(gmane of this seems currently broken, but there are more archives.)
http://marc.info/?l=drbd-dev&m=122112577026196&w=2
http://marc.info/?l=drbd-dev&m=122112583726317&w=2

tons of out-of-sync sectors detected
http://thread.gmane.org/gmane.linux.network.drbd/15537

and that one:
http://thread.gmane.org/gmane.linux.network.drbd/15167/focus=15171

> I want to run drbd in dual-primary mode, but due to this problem I am
> being cautious and run it as primary only on node #1.
>
> What can I do to solve this problem?

--
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.

__
please don't Cc me, but send to list -- I'm subscribed
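
For keeping an eye on that counter between weekly verify runs, a minimal
monitoring sketch (not from the thread above) could parse /proc/drbd
directly. It assumes the 8.3.x /proc/drbd layout, where each device stanza
starts with "<minor>: cs:..." and a following line carries an "oos:" field;
adjust the path and pattern for other versions.

    #!/usr/bin/env python
    # Sketch: report per-device out-of-sync counters from /proc/drbd.
    # Assumes the DRBD 8.3.x layout, where each device stanza begins with
    # "<minor>: cs:..." and a following line contains "oos:<value>".
    import re
    import sys

    def read_oos(path="/proc/drbd"):
        """Return a dict mapping DRBD minor number -> oos: value."""
        oos = {}
        minor = None
        with open(path) as proc:
            for line in proc:
                m = re.match(r"\s*(\d+): cs:", line)
                if m:
                    minor = int(m.group(1))
                m = re.search(r"\boos:(\d+)", line)
                if m and minor is not None:
                    oos[minor] = int(m.group(1))
        return oos

    if __name__ == "__main__":
        status = 0
        for minor, value in sorted(read_oos().items()):
            print("drbd%d oos:%d" % (minor, value))
            if value > 0:
                status = 1  # non-zero exit if anything is out of sync
        sys.exit(status)

Run from cron on both nodes, the non-zero exit status can be wired into
whatever alerting is already in place, so an oos counter that creeps up
between verify runs is noticed before the next verify.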