On Tue, Nov 17, 2009 at 5:27 PM, Brian Marshall <<a href="mailto:brian@netcents.com">brian@netcents.com</a>> wrote:<br>> Hello,<br>><br>[snip]<br>> I installed the disks at the remote location and brought up the array in<br>
> active-active synchronous mode. <br>[snip]<br>> so I did an invalidate-remote<br>> from the local node and watched in horror as it tried to re-sync the<br>> entire array. <br>[snip]<br><br>from user guide:<br>
<br>invalidate Forces DRBD to consider the data on the local backing storage device<br> as out-of-sync. Therefore DRBD will copy <b>each and every block</b> over<br> from its peer, to bring the local storage device back in sync.<br>
invalidate-remote This command is similar to the invalidate command, however, the<br> peer's backing storage is invalidated and hence rewritten with the data<br> of the local node.<br>
<br>So DRBD is simply doing what it is asked to do...<br>For testing, you could instead issue the "secondary" command on the peer rather than invalidate, write to the only remaining primary, and then re-run "primary" on the peer.<br>
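<br>A rough sketch of that sequence (the resource name "r0", mount point, and node roles are my assumptions, not from the original post):<br>

```shell
# On the peer node: demote it from primary; the resource stays
# connected and replicated, it just stops accepting writes locally.
drbdadm secondary r0

# On the remaining primary: do the test writes. While the peer is
# Secondary but still connected, these replicate normally, so no
# full resync is pending.
dd if=/dev/urandom of=/mnt/drbd-test/testfile bs=1M count=10
sync

# On the peer node: promote it back (a dual-primary setup needs
# "allow-two-primaries" in the resource's net section).
drbdadm primary r0
```

<br>Contrast with "invalidate-remote", which marks the peer's entire backing device out-of-sync and forces a full-device resync.<br>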
<br>but I'm not using ocfs2, and I'm not sure about the ocfs2 DLM behaviour/messages when the secondary command is issued on the peer, or how/whether to recover if in the meantime you also write to the peer node's fs...<br>For sure, chapter 13 of the DRBD user guide would help... '-)<br>