[DRBD-user] Access to the slave node

Ondrej Valousek Ondrej.Valousek at s3group.com
Fri Mar 16 10:11:35 CET 2018


Hi Veit,

Thanks for the detailed reply.
1. Yes, I have tried GFS - the problem is that the whole pacemaker/corosync setup seems a bit difficult and fragile to me. Also, distributed filesystems like GlusterFS/GFS/OCFS will never offer the same performance as a normal filesystem + asynchronous NFS. Hence I am here on the DRBD mailing list, as I need redundancy :)

2. Thanks for the snapshotting hint - it had not occurred to me. I think the metadata won't be a problem, for the reasons you described. I am also aware of the performance penalty.

I will run some more tests on DRBD (possibly with a large send buffer), as I need to be 100% sure it will not corrupt the filesystem if protocol A is used. So far, so good.
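For the record, the relevant drbd.conf fragment would look roughly like this (a sketch only; the resource name r0 and the 10M buffer size are placeholders, not values from my actual setup):

    resource r0 {
      net {
        protocol A;        # asynchronous: a write completes once it reaches the local TCP send buffer
        sndbuf-size 10M;   # enlarged send buffer so write bursts do not stall the application
      }
      # per-node "on" sections unchanged from the existing single-primary setup
    }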

Thanks,
Ondrej

-----Original Message-----
From: Veit Wahlich [mailto:cru.lists at zodia.de] 
Sent: Thursday, March 15, 2018 7:38 PM
To: Ondrej Valousek <Ondrej.Valousek at s3group.com>; drbd-user at lists.linbit.com
Subject: Re: [DRBD-user] Access to the slave node

Hi Ondrej, 

yes, this is perfectly normal in single-primary environments. DRBD simply does not permit access to a resource's block device until the resource is promoted to primary. What you describe would only work in a dual-primary environment, but running such an environment also requires far more precautions than single-primary if you do not want to endanger your data.

Also remember that for many (most?) filesystems, even mounting read-only does not mean that no data is altered; at the very least, metadata such as "last-mounted" attributes is still written, and journal replay might occur. As the fs on the primary is still updated while your read-only side does not expect this, your ro-mount will most likely read garbage at some point and might even freeze the system.
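To illustrate the promotion requirement (a rough sketch; the resource name r0 is assumed, the mount point taken from your mail):

    # on a Secondary, access is refused:
    drbdadm role r0            # reports Secondary (plus the peer's role, depending on DRBD version)
    mount /dev/drbd0 /brick1   # fails with "Wrong medium type"

    # the device becomes accessible only after promotion:
    drbdadm primary r0
    mount /dev/drbd0 /brick1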

There are only a few ways to prevent such situations; I regard the following two as the most useful:

a) Implement a dual-primary environment running a cluster filesystem such as GFS or OCFS2 on top -- this is hard work to learn and build, offers lots of pitfalls that put your data in danger, and is currently limited to 2 nodes, but it allows writing the fs from both sides (see the configuration sketch below).
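For completeness, dual-primary is enabled in the resource's net section, roughly like this (a sketch only; the cluster fs, fencing and cluster manager configuration are mandatory on top and omitted here):

    resource r0 {
      net {
        protocol C;                # dual-primary requires synchronous replication
        allow-two-primaries yes;   # permit both nodes to be Primary at once
      }
      # fencing/cluster integration omitted -- do not run dual-primary without it
    }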

b) Build a single-primary environment like your existing one, but place your DRBD backing devices on a block layer that allows snapshots (e.g. classic LVM, LVM thinp or ZFS) -- when you need to access the primary's data from a secondary, take a snapshot of the backing device on the secondary and mount the snapshot instead of the DRBD volume (see the sketch below).
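With classic LVM this could look roughly as follows (all names are assumed; -o norecovery applies to XFS, ext4 would use -o noload instead):

    # on the secondary: snapshot the backing device, not /dev/drbd0
    lvcreate -s -n r0_snap -L 10G /dev/vg0/r0
    # mount read-only and suppress journal replay so nothing is written
    mount -o ro,norecovery /dev/vg0/r0_snap /mnt/snap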

Addendum to b): The snapshot reflects the state of the fs only at the point in time it was created. You will even be able to mount the snapshot rw without affecting the DRBD volume. If you use a backing device with internal metadata, this metadata will also be present in the snapshot, but most (if not all) Linux filesystems ignore any data at the end of the block device beyond the fs' actual size.

The snapshot will grow as data is written to the DRBD volume and, depending on the snapshot implementation and block size/pointer granularity, will slow down writes to both the DRBD volume and the snapshot for as long as the snapshot exists (due to copy-on-write and/or pointer tracking). So only choose this scenario if you need to read data from the secondary for a limited time (such as for backups), if you are willing to renew the snapshot on a regular basis (see below), or if you can afford to sacrifice possibly a lot of storage and write performance.
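A renewal cycle would then be something like this (same assumed names as in the sketch above):

    umount /mnt/snap
    lvremove -f /dev/vg0/r0_snap
    lvcreate -s -n r0_snap -L 10G /dev/vg0/r0
    mount -o ro,norecovery /dev/vg0/r0_snap /mnt/snap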

Best regards,
// Veit 


-------- Original Message --------
From: Ondrej Valousek <Ondrej.Valousek at s3group.com>
Sent: 15 March 2018 11:21:49 CET
To: "drbd-user at lists.linbit.com" <drbd-user at lists.linbit.com>
Subject: [DRBD-user] Access to the slave node

Hi list,

When trying to mount the filesystem on the slave node (read-only, as I do not want to corrupt the filesystem), I get:

mount: mount /dev/drbd0 on /brick1 failed: Wrong medium type

Is this normal? AFAIK it should be OK to mount the filesystem read-only on the slave node.
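For reference, the node's role can be checked like this (the resource name r0 is assumed here):

    drbdadm role r0   # reports Secondary on this node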
Thanks,

Ondrej


