[DRBD-user] Some info

Gandalf Corvotempesta gandalf.corvotempesta at gmail.com
Wed Oct 11 21:14:58 CEST 2017

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.

So, let's assume a raid -> drbd -> lvm stack.

Starting with a single RAID1, what if I want to add a second RAID1,
converting the existing array to a RAID10? Would drbdadm resize be
enough?
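
Something like this is what I had in mind, if it helps to be concrete
(just a sketch, untested; /dev/md0, the new disks and the resource
name r0 are made-up examples):

  # convert the 2-disk RAID1 to a 2-disk RAID10, then add the new
  # disks and reshape to 4 devices (needs a reasonably recent
  # kernel/mdadm for the RAID10 reshape, as far as I know)
  mdadm --grow /dev/md0 --level=10
  mdadm /dev/md0 --add /dev/sdc /dev/sdd
  mdadm --grow /dev/md0 --raid-devices=4

  # after the backing device has grown on BOTH nodes, let DRBD
  # pick up the new size
  drbdadm resize r0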

Keeping LVM as the upper layer would be best, I think, because it
will let me create logical volumes, snapshots and so on.
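
I mean something like this on top of the DRBD device (names are just
examples):

  pvcreate /dev/drbd0
  vgcreate vg0 /dev/drbd0
  lvcreate -L 100G -n data vg0
  # snapshot of the "data" LV, e.g. before an upgrade
  lvcreate -s -L 10G -n data-snap /dev/vg0/data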

What happens if the local RAID fails completely? Will the upper layer
stay up, with DRBD fetching the data from the other node?
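
From the docs I'm assuming this is what the on-io-error option is for
(my reading, please correct me if I'm wrong):

  resource r0 {
    disk {
      # detach from a failed local backing device and keep serving
      # I/O "diskless" from the peer over the replication link
      on-io-error detach;
    }
  }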

is "raid -> drbd -> lvm" a standard configuration or something bad? I
don't want to put in production something "custom" and not supported.

How do I prevent split-brain? Would bonding the cluster network be
enough? Is there any qdevice or fencing to configure?
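
Would something along these lines be the right direction? (a sketch
based on the DRBD 8.4 docs and the Pacemaker integration; the
policies and handler paths below are my guess, not a tested config):

  resource r0 {
    net {
      # automatic split-brain recovery policies
      after-sb-0pri discard-zero-changes;
      after-sb-1pri discard-secondary;
      after-sb-2pri disconnect;
    }
    disk {
      fencing resource-only;
    }
    handlers {
      # scripts shipped with drbd-utils for Pacemaker-based fencing
      fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
      after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
    }
  }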

2017-10-11 21:07 GMT+02:00 Adam Goryachev <mailinglists at websitemanagers.com.au>:
> On 12/10/17 05:10, Gandalf Corvotempesta wrote:
>> Previously I've asked about DRBDv9+ZFS.
>> Let's assume a more "standard" setup with DRBDv8 + mdadm.
>> What I would like to achieve is a simple redundant SAN. (Is anything
>> preconfigured for this?)
>> Which is best, raid1+drbd+lvm or drbd+raid1+lvm?
>> Any advantage in creating multiple drbd resources? I think that a
>> single DRBD resource is better from an administrative point of view.
>> A simple failover would be enough; I don't need a master-master
>> configuration.
> In my case, the best option was raid + lvm + drbd.
> It allows me to use the LVM tools to easily resize each exported resource
> as required:
> lvextend ...
> drbdadm resize ...
> However, the main reason was to improve drbd "performance": each resource
> gets its own set of counters, instead of a single set of counters for one
> massive resource.
> BTW, how would you configure drbd + raid + lvm ?
> If you run DRBD on a raw drive on each machine and then build raid1 on
> top within each local machine, then when your raw drbd drive dies, the
> second raid member will no longer contain or participate in DRBD, so the
> whole node is failed. This only adds the DR ability to recover the user
> data. I would suggest this should not be a considered configuration at
> all (unless I'm awake too early and am overlooking something).
> Actually, assuming machine1 with disk1 + disk2, and machine2 with disk3 +
> disk4, I guess you could set up drbd1 between disk1 + disk3, and drbd2
> between disk2 + disk4, then create a raid on machine1 with drbd1 + drbd2,
> a raid on machine2 with drbd1 + drbd2, and then use the raid device for
> lvm. You would need double the write bandwidth between the two machines:
> when machine1 is primary and a write arrives for the LV, it is sent to
> the raid layer, which sends the write to both drbd1 and drbd2. Locally
> they are written to disk1 + disk2, but those two writes must also be sent
> over the network to machine2, so they can be written to disk3 (drbd1) and
> disk4 (drbd2). Still not a sensible option IMHO.
> The two valid options would be raid + drbd + lvm or raid + lvm + drbd (or
> just lvm + drbd if you use lvm to handle the raid as well).
> Regards,
> Adam
