[DRBD-user] Some info

Adam Goryachev mailinglists at websitemanagers.com.au
Wed Oct 11 21:07:14 CEST 2017


On 12/10/17 05:10, Gandalf Corvotempesta wrote:
> Previously I've asked about DRBDv9+ZFS.
> Let's assume a more "standard" setup with DRBDv8 + mdadm.
> What I would like to achieve is a simple redundant SAN. (anything
> preconfigured for this ?)
> Which is best, raid1+drbd+lvm or drbd+raid1+lvm?
> Any advantage in creating multiple drbd resources ? I think that a
> single DRBD resource is better from an administrative point of view.
> A simple failover would be enough, I don't need a master-master configuration.
In my case, the best option was raid + lvm + drbd.
It lets me use the lvm tools to easily resize each exported resource as 
required, followed by:
drbdadm resize ...
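For example, with the raid + lvm + drbd stack, growing one exported 
resource looks roughly like this (the VG name "vg0" and resource name 
"export0" are made up for illustration):

```
# On BOTH nodes: grow the LV that backs the DRBD resource
lvextend -L +10G /dev/vg0/export0

# On the primary: tell DRBD to adopt the new backing-device size
drbdadm resize export0
```

The filesystem (or whatever sits on /dev/drbdX) still needs its own 
grow step afterwards.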

However, the main reason was to improve drbd "performance": each 
resource gets its own set of counters, instead of a single set of 
counters for one massive resource.

BTW, how would you configure drbd + raid + lvm ?

If you run DRBD on a raw drive on each machine and then build raid1 on 
top within each local machine, then when your raw drbd drive dies, the 
second raid member will no longer contain or participate in DRBD, so 
the whole node has failed. This only adds DR ability to recover the 
user data. I would suggest this configuration should not be considered 
at all (unless I'm up too early and am overlooking something).

Actually, assuming machine1 with disk1 + disk2 and machine2 with disk3 
+ disk4, I guess you could set up drbd1 between disk1 + disk3 and 
drbd2 between disk2 + disk4, then create raid on machine1 with 
drbd1+drbd2 and raid on machine2 with drbd1+drbd2, and then use the 
raid device for lvm. You would need double the write bandwidth between 
the two machines: when machine1 is primary and a write arrives for the 
LV, it is sent to the raid layer, which sends the write to both drbd1 
and drbd2. Locally, they are written to disk1 + disk2, but both of 
those writes also need to go over the network to machine2, so they can 
be written to disk3 (drbd1) and disk4 (drbd2). Still not a sensible 
option IMHO.

The two valid options would be raid + drbd + lvm or raid + lvm + drbd 
(or just lvm + drbd if you use lvm to handle the raid as well).
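As a sketch of the raid + lvm + drbd layering, a DRBDv8 resource 
backed by an LV (which in turn sits on the md raid1) might look like 
the following; all hostnames, device paths, and addresses are 
hypothetical:

```
# /etc/drbd.d/export0.res  -- one resource per exported LV
resource export0 {
  device    /dev/drbd0;
  disk      /dev/vg0/export0;   # LV carved out of the md raid1 PV
  meta-disk internal;
  on node1 {
    address 10.0.0.1:7789;
  }
  on node2 {
    address 10.0.0.2:7789;
  }
}
```

Defining one such resource per LV (export0, export1, ...) is what 
gives each resource its own counters, as mentioned above.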

