[DRBD-user] Some info

Adam Goryachev mailinglists at websitemanagers.com.au
Thu Oct 12 00:55:55 CEST 2017


On 12/10/17 06:52, Gandalf Corvotempesta wrote:
> 2017-10-11 21:22 GMT+02:00 Adam Goryachev <mailinglists at websitemanagers.com.au>:
>> You can also do that with raid + lvm + drbd... you just need to create a new
>> drbd as you add a new LV, and also resize the drbd after you resize the LV.
> I prefer to keep drbd to a minimum. I'm much more familiar with LVM.
> If not an issue, I prefer to keep the number of drbd resources to the bare minimum.
Except that you should become familiar with DRBD so that when something 
goes wrong, you will be better placed to fix it. If you set it up once 
and don't touch it for three years, then when it breaks you will have no 
idea what to do or even where to start, and you will probably have 
forgotten how it was configured and how it was supposed to work.
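
As a concrete illustration of the raid + lvm + drbd workflow mentioned 
above, resizing one stacked resource might look roughly like this. This 
is only a sketch: the resource name "r0" and the VG/LV names are 
hypothetical, and the backing LV must be grown on both nodes before DRBD 
itself is resized.

```shell
# Grow the backing LV on BOTH nodes first (run on each node):
lvextend -L +50G /dev/vg0/lv_r0

# Then, on the Primary only, tell DRBD to grow the replicated
# device to use the enlarged backing storage:
drbdadm resize r0

# Finally grow whatever sits on top of the DRBD device:
resize2fs /dev/drbd0
```

(The equivalent shrink is much more dangerous and has to be done in the 
opposite order: filesystem first, then DRBD, then the LV.)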
>> If both drives fail on one node, then raid will pass the disk errors up to
>> DRBD, which will mark the local storage as down, and yes, it will read all
>> needed data from remote node (writes are always sent to the remote node).
>> You would probably want to migrate the remote node to primary as quickly as
>> possible, and then work on fixing the storage.
> Why should I migrate the remote node to primary? Any advantage?
Yes, it avoids reads going over the network, reducing latency and 
increasing throughput (depending on the bandwidth between nodes). It 
is not a MUST, just an easy optimisation.
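
The switch-over itself is short, assuming whatever runs on top of the 
DRBD device can be stopped and restarted on the peer (the resource name 
"r0" is hypothetical):

```shell
# On the current Primary (the node whose local storage failed),
# after stopping the services that use the DRBD device:
drbdadm secondary r0

# On the peer node, which still has healthy local storage:
drbdadm primary r0
# ...then start the services there; reads are local again.
```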

>> Yes, it is not some bizarre configuration that has never been seen before.
>> You also haven't mentioned the size of your proposed raid, nor what size you
>> are planning on growing it to?
> Currently, I'm planning to start with 2TB disks. I don't think I'll
> go over 10-12TB
That is significant growth. I would advise planning now how you will 
achieve it. For example, create a 200GB array with DRBD + LVM etc, 
then try to grow the array (add extra 200GB partitions to the drive) 
and make sure everything works as expected. It is a good idea to 
document the process while you are doing this, so that when you need 
it you have a very good idea of how to proceed. (You should still 
re-test it at that time, in case the tools have changed etc.)
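
A rehearsal of the growth path on such a test array might look roughly 
like the following. This is a sketch only, assuming md RAID1 under LVM 
under DRBD; the device names and resource name "r0" are illustrative, 
and the LVM steps need to be repeated on both nodes:

```shell
# 1. Add a larger/extra component and grow the md array:
mdadm --manage /dev/md0 --add /dev/sdc1
mdadm --grow /dev/md0 --size=max

# 2. Let LVM see the larger PV, then grow the backing LV:
pvresize /dev/md0
lvextend -L +200G /dev/vg0/lv_r0

# 3. Grow DRBD (on the Primary only), then the filesystem:
drbdadm resize r0
resize2fs /dev/drbd0
```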

One thing you have ignored is that DRBD behaves differently with a 
single resource as opposed to multiple resources. For me, the 
difference was enough to turn a horrible solution into a viable one: 
for the end users, performance was terrible with a single resource and 
workable with multiple resources (other changes were also made to turn 
it into a highly useful system).
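
For reference, multiple resources are just separate stanzas in the DRBD 
configuration, each with its own backing device, minor number and TCP 
port. A minimal hedged sketch (node names, addresses and LV names are 
illustrative; syntax shown is DRBD 8.4-style):

```
resource r0 {
    device    /dev/drbd0;
    disk      /dev/vg0/lv_r0;
    meta-disk internal;
    on node-a { address 10.0.0.1:7788; }
    on node-b { address 10.0.0.2:7788; }
}

resource r1 {
    device    /dev/drbd1;
    disk      /dev/vg0/lv_r1;
    meta-disk internal;
    on node-a { address 10.0.0.1:7789; }
    on node-b { address 10.0.0.2:7789; }
}
```

Each resource then syncs, fails over and can be made Primary 
independently of the others.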
>> Yes, you will always want multiple network paths between the two nodes, and
>> also fencing. bonding can be used to improve performance, but you should
>> *also* have an additional network or serial or other connection between the
>> two nodes which is used for fencing.
> Ok.
> Any "bare-metal" distribution with DRBD, or a detailed guide on how
> to implement HA?
> Something like FreeNAS, or similar.

No, I just use Debian and configure things as required; for me that is 
the best way to become familiar with the system and be prepared for 
when things break. I would also strongly advise reading the very good 
documentation; try to read through all of it at least once. (Another 
thank you to LINBIT for this documentation!)
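
As an illustration of the fencing advice above, the DRBD side of it is 
a fencing policy plus handler scripts in the resource configuration. A 
sketch only: the exact section the "fencing" keyword lives in varies 
between DRBD versions, and the crm-fence-peer scripts shown assume a 
Pacemaker cluster:

```
resource r0 {
    net {
        fencing resource-only;
    }
    handlers {
        fence-peer          "/usr/lib/drbd/crm-fence-peer.sh";
        after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
    }
    # ... device/disk/on sections as usual ...
}
```

The separate network or serial link is then what the cluster manager 
uses to decide which node survives when the replication link dies.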


Adam Goryachev Website Managers www.websitemanagers.com.au
