[DRBD-user] Cluster filesystem question

Kushnir, Michael (NIH/NLM/LHC) [C] michael.kushnir at nih.gov
Thu Dec 1 19:58:15 CET 2011

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hi Lars,

I'm a bit confused by this discussion. Can you please clarify the difference?

What I think you are saying is:

OK:
Dual-primary DRBD -> cluster aware something (OCFS, GFS, clvmd, etc...) -> exported via iSCSI on both nodes -> multipathed on the client

Not OK:
Dual-primary DRBD -> not cluster aware (raw LUN, Ext3/4, etc...) -> exported via iSCSI on both nodes -> multipathed on client
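
As a concrete example, the DRBD piece of the "OK" stack would be a dual-primary resource, roughly like this (my sketch from the DRBD 8.3 docs, not a tested config; the resource name is made up):

  resource r0 {
    protocol C;              # synchronous replication; required for dual-primary
    net {
      allow-two-primaries;   # let both nodes be Primary at the same time
    }
    startup {
      become-primary-on both;
    }
    # on-host sections (device, disk, address, meta-disk) omitted
  }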

Is this correct?

Thanks,
Mike

-----Original Message-----
From: Lars Ellenberg [mailto:lars.ellenberg at linbit.com] 
Sent: Thursday, December 01, 2011 11:11 AM
To: drbd-user at lists.linbit.com
Subject: Re: [DRBD-user] Cluster filesystem question

On Wed, Nov 30, 2011 at 12:30:43PM +0200, Kaloyan Kovachev wrote:
> On Tue, 29 Nov 2011 21:17:30 +0100, Florian Haas <florian at hastexo.com>
> wrote:
> > 
> > As this sort of issue currently pops up on IRC every other day, I've 
> > just posted this rant:
> > 
> >
> > http://fghaas.wordpress.com/2011/11/29/dual-primary-drbd-iscsi-and-multipath-dont-do-that/
> > 
> > ... in the hope that at least a few of those who are attempting to 
> > set this up are stopped in their tracks by googling first. Lars, if 
> > you object to this or would like some edits, please let me know or 
> > find me on IRC. Thanks.
> > 
> > Cheers,
> > Florian
> 
> 'doing that' for nearly a year now, so here are my 0.02 ...
> 
> quote from the link above:
> "So please, if you’re seeing Concurrent local write detected or the 
> tell-all DRBD is not a random data generator! message in your logs, 
> don’t come complaining. And even if you don’t see them yet, you will, 
> eventually."
> 
> 'Concurrent local write detected' appeared in the logs at the
> beginning (during the tests), when cman was not yet running on the
> DRBD nodes, but as soon as they became members of the cluster
> accessing the data, the cluster FS locks synchronized the write
> attempts and multipath has been running fine even in multibus
> configuration - no more such messages in the logs. Maybe that is
> because there are mostly reads and fewer writes, while the service
> which does the writes runs on the DRBD node itself and accesses its
> own device only (no multipath).
> 
> it is possible, but only if properly configured:
>  - use GFS2, OCFS2 or another cluster-aware FS with a properly
>    configured cluster
>  - have the cluster manager running on the DRBD nodes, or preferably
>    have DRBD as a service controlled from the cluster
>  - do not start the iSCSI target until both nodes are connected and
>    in Primary/Primary state
>  - have proper fencing and _always_ use resource-and-stonith
>    (sketched below) - no I/O until split-brain is resolved
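
In drbd.conf terms, that fencing item looks roughly like this (a sketch assuming a Pacemaker-managed cluster; the crm-fence-peer scripts ship with recent DRBD 8.3, paths may vary, and cman setups use different handlers):

  resource r0 {
    disk {
      fencing resource-and-stonith;   # freeze I/O on peer loss until the peer is fenced
    }
    handlers {
      fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
      after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
    }
    # rest of the resource omitted
  }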

Cluster FS on Dual-Primary DRBD is ok
(if done right, as you apparently do),
as it is, well, cluster aware.

Independent iSCSI targets on Dual-Primary DRBD are not, as those targets are *not* cluster aware.

That's mainly what the "Don't do that." was about:
Do not expect DRBD to somehow magically make non-cluster-aware things cluster-aware. It does not, cannot, and will not do magic.

> it won't help spread the writes, as they are done on both nodes,
> but it may speed up the reads. If used as storage for VM images, it
> is better to have each VM use its own DRBD device - live migration
> is possible with no risk to the data.
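
Agreed. A per-VM resource is just another small DRBD resource, roughly (a sketch; resource name, devices, hosts and addresses are made up):

  resource vm-web {
    net { allow-two-primaries; }   # needed only while a live migration is in flight
    on node-a {
      device    /dev/drbd10;
      disk      /dev/vg0/vm-web;
      address   10.0.0.1:7710;
      meta-disk internal;
    }
    on node-b {
      device    /dev/drbd10;
      disk      /dev/vg0/vm-web;
      address   10.0.0.2:7710;
      meta-disk internal;
    }
  }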

--
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com