[DRBD-user] Cluster filesystem question

Kaloyan Kovachev kkovachev at varna.net
Wed Nov 30 11:30:43 CET 2011

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


On Tue, 29 Nov 2011 21:17:30 +0100, Florian Haas <florian at hastexo.com>
wrote:
> 
> As this sort of issue currently pops up on IRC every other day, I've
> just posted this rant:
> 
> http://fghaas.wordpress.com/2011/11/29/dual-primary-drbd-iscsi-and-multipath-dont-do-that/
> 
> ... in the hope that at least a few of those who are attempting to set
> this up are stopped in their tracks by googling first. Lars, if you
> object to this or would like some edits, please let me know or find me
> on IRC. Thanks.
> 
> Cheers,
> Florian

'doing that' for nearly a year now, so here are my 0.02 ...

quote from the link above:
"So please, if you’re seeing Concurrent local write detected or the
tell-all DRBD is not a random data generator! message in your logs, don’t
come complaining. And even if you don’t see them yet, you will,
eventually."

'Concurrent local write detected' appeared in the logs at the
beginning (during the tests), while cman was not yet running on the
DRBD nodes. But as soon as they became members of the cluster
accessing the data, the cluster FS locks synchronized the write
attempts and multipath ran fine, even in multibus configuration - no
more such messages in the logs. Maybe that is because there are mostly
reads and few writes, while the service which does the writes runs on
the DRBD node itself and accesses its own device only (no multipath)
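
For reference, the multibus setup mentioned above looks roughly like
this in /etc/multipath.conf - a minimal sketch, where the WWID and the
alias are made-up examples:

    multipaths {
        multipath {
            wwid                  36001405f8a3b2c1d0e9f80000000000
            alias                 clusterdisk
            path_grouping_policy  multibus   # spread I/O over all paths
        }
    }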

It is possible, but only if properly configured:
 - use GFS2, OCFS2 or another cluster-aware FS, with a properly
configured cluster (see the mkfs example below)
 - have the cluster manager running on the DRBD nodes, or preferably
have DRBD run as a service controlled by the cluster
 - do not start the iSCSI target until both nodes are connected and in
Primary/Primary state (see the startup check below)
 - have proper fencing and _always_ use resource-and-stonith - no I/O
until the split-brain is resolved (see the drbd.conf sketch below)
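
A rough sketch of the dual-primary and fencing pieces in drbd.conf
(DRBD 8.3 syntax; the resource name, devices, hostnames and addresses
are examples, and the handlers shown are the Pacemaker ones shipped
with DRBD - with cman/rgmanager an equivalent fence-peer script is
needed):

    resource r0 {
        protocol C;                    # dual-primary needs synchronous replication
        startup {
            become-primary-on both;    # or let the cluster manager promote both nodes
        }
        net {
            allow-two-primaries;
        }
        disk {
            fencing resource-and-stonith;  # freeze I/O until the peer is fenced
        }
        handlers {
            fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
            after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
        }
        on nodea {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   10.0.0.1:7788;
            meta-disk internal;
        }
        on nodeb {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   10.0.0.2:7788;
            meta-disk internal;
        }
    }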
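
And the filesystem plus the startup ordering check - again just a
sketch, with the cluster name, FS label, target init script and device
as placeholders:

    # one-time: create the cluster FS with one journal per node
    mkfs.gfs2 -p lock_dlm -t mycluster:clusterdisk -j 2 /dev/drbd0

    # before starting the iSCSI target, verify the Primary/Primary state
    if grep -q 'cs:Connected ro:Primary/Primary' /proc/drbd; then
        service tgtd start    # or whichever target implementation is in use
    else
        echo "DRBD not in Primary/Primary, not starting the target" >&2
        exit 1
    fi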

It won't help spread the writes, as they are done on both nodes, but
it may speed up the reads.
If used as storage for VM images, it is better to have each VM use its
own DRBD device - live migration is then possible with no risk to the
data.
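
For that setup, each VM gets a small resource stanza of its own - a
sketch with made-up names, minor numbers and ports; both nodes then
need to be Primary on that one resource only for the duration of the
migration:

    resource vm-web1 {
        protocol C;
        net {
            allow-two-primaries;   # used only during live migration
        }
        on nodea {
            device    /dev/drbd10;
            disk      /dev/vg0/vm-web1;
            address   10.0.0.1:7710;
            meta-disk internal;
        }
        on nodeb {
            device    /dev/drbd10;
            disk      /dev/vg0/vm-web1;
            address   10.0.0.2:7710;
            meta-disk internal;
        }
    }

i.e. 'drbdadm primary vm-web1' on the destination right before the
migration and 'drbdadm secondary vm-web1' on the source right after
it.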


