[DRBD-user] DRBDv9 - some questions

Adam Goryachev adam at websitemanagers.com.au
Thu Sep 28 15:01:32 CEST 2017

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.



On 28/9/17 22:31, Gandalf Corvotempesta wrote:
> Hi to all,
> Some questions about DRBDv9 (I'm really new to DRBD, and DRBDv9 seems
> to be a major refactor):
>
> Let's assume a 3-node cluster
>
> a) should I use RAID, or would creating resources on raw disks be OK?
The choice is yours. If you build on raw disks, then the failure of a 
disk will cause the failure of the node. If that meets your data 
protection needs, then OK.
Equally, just because you use RAID doesn't mean it improves your data 
protection. RAID0 will make it worse, but might improve performance. 
Personally, RAID10 or RAID6 would be my preferred options: a small 
premium for additional drives buys a massive improvement in data 
resilience, especially when you factor in three servers, each with 
RAID10, for example.
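As a rough sketch (device names are placeholders, written from memory),
the backing device could be an md array which the DRBD resource then
sits on:

  # build a 4-disk RAID10 array to use as the DRBD backing device
  mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde

  # then in the resource definition, point DRBD at the array:
  #     disk      /dev/md0;
  #     meta-disk internal;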
> b) can I use ZFS on top of DRBD?
Yes
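For example (untested sketch, pool/dataset names are just placeholders),
on whichever node is currently Primary for the resource:

  # the DRBD device must be Primary on this node first
  zpool create tank /dev/drbd0
  zfs create tank/data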
> c) what if I have to aggregate multiple resources to increase space?
> Can I use LVM with something like this:
>
> pvcreate /dev/drbd0
> pvcreate /dev/drbd1
> vgcreate vg0 /dev/drbd0 /dev/drbd1
> lvcreate .......
>
> to create a single LV on top of 2 DRBD resources?
I've never tried, but I don't see why not. DRBD provides a block device; 
it doesn't "care" what you are storing on it, whether a FS or LVM...
>
> d) what if node0.disk0 on resource0 fails and node0.disk1 on
> resource1 doesn't?
> My LV spanning both resources will still work as expected, just a
> little slower, as DRBD has to fetch the missing data (due to the
> failed disk0) from one of the other peers?
Normally, resource0 would be on disk0 on both servers, and resource1 
on disk1 on both servers.
So if node0.disk0 fails, then node1.disk0 will still have the data. You 
could move resource0 to primary on node1, which would solve any 
performance issue; in fact, you might see performance improve, since you 
no longer need to write data across to node0.disk0.
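The move itself is just a demote/promote (resource name is a 
placeholder), whether you do it by hand or let your cluster manager 
handle it:

  # on node0 (if it is still up): demote
  drbdadm secondary resource0
  # on node1: promote
  drbdadm primary resource0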
>
> e) AFAIK DRBD is a network block device; to access this "block device"
> should I put NFS/iSCSI/whatever in front of it, or is there
> something integrated in DRBD? (Like NFS inside Gluster)
AFAIK, no, though it depends on your needs/wants. DRBD9 changes things 
a bit in that you can have multiple satellite nodes which do not have 
local storage, but do "use" the DRBD devices.
In 8.4, which I still use in production, I used iSCSI to export the DRBD 
devices to their clients, but I expect that if/when I move to DRBD9, I 
would use diskless satellite nodes and pass the raw DRBD device straight 
to the application. This would remove the iSCSI layer in my system, 
which I hope might improve performance slightly, and will certainly 
reduce complexity (one less piece of software to go wrong).
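For what it's worth, my understanding is that in DRBD9 a diskless 
client/satellite is just another host in the resource definition with no 
backing disk, roughly like this (hostnames, addresses and devices are 
placeholders):

  resource r0 {
      device    /dev/drbd0;
      meta-disk internal;

      on node1 {
          disk     /dev/sdb1;
          address  10.0.0.1:7789;
          node-id  0;
      }
      on node2 {
          disk     /dev/sdb1;
          address  10.0.0.2:7789;
          node-id  1;
      }
      on client1 {
          disk     none;       # permanently diskless, I/O goes over the network
          address  10.0.0.3:7789;
          node-id  2;
      }

      connection-mesh {
          hosts node1 node2 client1;
      }
  }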
> f) is a 3-node cluster still subject to split brain?
I expect not. There are three* possibilities given node1, node2 and node3:
1) node1 by itself, node2 + node3 together
2) all nodes together
3) all nodes alone

* ignoring reshuffles of the same thing, e.g. you could have node1 + 
node3 together and node2 alone, but basically that is still two nodes 
together and one alone.

Clearly, in scenario 1, node1 should shut down (stonith) and leave nodes 
2 + 3 operational, so no split brain.
In scenario 2, everything is working well, so no SB.
In scenario 3, all nodes are alone, so they should all stonith and die. 
This is why you need to build the interconnects so that the connections 
between the nodes are resilient. Don't just plug a single ethernet cable 
from each node into a single switch, or the switch dying will kill all 
your storage. The nodes should have direct connections to each other in 
addition to multiple connections to multiple switches.
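On top of the cabling, newer DRBD9 releases also have a quorum option 
(a sketch from memory, check that your version supports it), so a node 
that loses contact with the majority stops accepting writes rather than 
diverging:

  # inside the resource definition:
  options {
      quorum majority;
      on-no-quorum io-error;    # or suspend-io
  }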

Of course, doing this properly can be difficult and expensive.

Regards,
Adam

-- 
Adam Goryachev
Website Managers
P: +61 2 8304 0000                    adam at websitemanagers.com.au
F: +61 2 8304 0001                     www.websitemanagers.com.au



