Thank you for your hint. Of course I have mounted the device on machine B
in read-only mode in order not to lose data.

I am still curious how drbd is propagating the changes to the file system
on machine B:

- drbd propagates the changes to machine B (example: a new file arrives
  on the hard disk on B).
- The filesystem on B should therefore be in a current state (example:
  the new file really is stored on B's hard disk).
- But on the OS side, ext3 is not aware of the changes (example: the file
  is not visible).

I can imagine that in a read/write environment this would create
difficulties. But in read-only mode I should not be able to change the
data - or should I? I am looking for a solution that gives at least
limited access from B to the shared filesystem.

Lars Ellenberg wrote:
> On Fri, Oct 12, 2007 at 08:56:39PM +0200, Peter P GMX wrote:
>
>> I am using the ext3 filesystem on top of drbd - on top of lvm2 - on
>> top of RAID-1 hard disks.
>>
>> "what exactly do you mean by "detach"?"
>> I checked it again to be more precise. When I change data on machine A,
>> I have to unmount the drbd volume on machine B and then mount it again
>> in order to reflect the changes.
>>
>> The strange thing is that drbd tells me that everything is in sync,
>> but in fact the filesystem isn't unless I unmount and remount.
>
> to use any one file system
> on more than one node at the same time,
> you need to use a network sharing file system (nfs, samba, etc.)
> or a cluster aware file system (GFS, OCFS, etc.).
>
> if you use ext3/reiserfs/xfs/ any "conventional" file system
> on drbd in "allow-two-primaries", concurrently accessing it
> from both nodes at the same time,
> it will not work.
> no, not even read-only.
> it may crash your machines,
> and it definitely will scramble your data.
>
> this is mentioned in the example config file,
> in the manpages, and in several other places.
>
> repeat.
>
> ## you currently are actively scrambling your data. ##
> ## you currently are actively scrambling your data. ##
> ## you currently are actively scrambling your data. ##
>
> happy cleaning up.
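Since the stack here is ext3 on drbd on lvm2, one workaround sometimes suggested for read access on the secondary is to take an LVM snapshot of drbd's backing LV on B and mount the snapshot instead of the live device. This gives a crash-consistent point-in-time copy without touching what drbd is replicating to. The volume-group, LV, and mount-point names below are placeholders; this is a sketch, not a tested recipe:

```shell
# On machine B (the DRBD secondary). /dev/vg0/drbd-backing is a
# placeholder for the LV that backs the drbd device; adjust names.
lvcreate --snapshot --size 1G --name inspect-snap /dev/vg0/drbd-backing

# Mount the snapshot, never the live backing device. The snapshot is
# writable, so any ext3 journal recovery on mount writes to the
# snapshot only, not to the replicated data.
mkdir -p /mnt/inspect
mount -o ro /dev/vg0/inspect-snap /mnt/inspect

# ... read the point-in-time copy of the data ...

umount /mnt/inspect
lvremove -f /dev/vg0/inspect-snap
```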
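For occasional full access from B, the supported route with a conventional filesystem like ext3 is a switchover: unmount and demote on A, then promote and mount on B, so that exactly one node is primary at any time. The resource name r0 and the mount point are examples, not taken from this thread:

```shell
# On machine A: stop using the filesystem and demote the resource.
umount /mnt/data
drbdadm secondary r0

# On machine B: promote and mount. Because only one node is primary,
# ext3 on B now sees a clean, current view of the data.
drbdadm primary r0
mount /dev/drbd0 /mnt/data
```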
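Dual-primary mode does exist, but as Lars says it is only usable with a cluster-aware filesystem (GFS, OCFS2) on top, never with ext3. For reference, in drbd 8 it is enabled per resource in drbd.conf roughly like this (a minimal fragment, not a complete configuration):

```shell
# /etc/drbd.conf fragment (drbd 8 syntax).
# Only safe with a cluster filesystem such as GFS or OCFS2 on top!
resource r0 {
  net {
    allow-two-primaries;
  }
}
```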