[DRBD-user] drbd with a third node diskless - how ?

Pierre-Philipp Braun pbraun at nethence.com
Wed Jun 23 23:04:59 CEST 2021


> At the moment I am doing this as a proof that it is possible to build
> a Proxmox cluster with 2 diskful nodes, 1 quorum node (diskless) and
> replicated storage.  My current hobby cluster has worked this way for
> years.  Proxmox has no solution for this use case.
> Ceph needs too much hardware and seems to be very complicated; ZFS is
> not useful because it is async.

I've seen Ceph RBD working well with Proxmox (and maybe 3 nodes are 
enough) however:

- if we mess something up, it becomes VERY bad.  Ceph is dangerous. 
it's a specialty of its own, not for just any storage engineer.  and 
even so, it would still be scary to me, given its complexity and 
bloated feature set (the original CRUSH paper shows use cases aimed 
at GAFAM instead of explaining what matters, the algorithm itself).

- I suppose its performance is not as good as that of DRBD farms, 
because of the additional processing required by the algorithm and/or 
other overheads related to Ceph's non-trivial design.  I've seen a 
paper from Linbit before comparing the performance of Ceph and DRBD, 
but I cannot find it again.

I also still wonder about the space usage of Ceph block replicas.  I am 
not sure it truly offers the advantages of, say, RAID-5 over RAID-1.  
But this is now really off-topic.

What ZFS feature, tool or wrapper are you referring to?  I remember a 
guy telling me about a convergent FreeBSD ZFS & jail farm many years 
ago, and I still wonder how on earth he managed to do that, as ZFS is 
not network-oriented (or wasn't at the time).

> DRBD would be the best solution, i think. The right tool for this job.

I agree, obviously.  FreeBSD has HAST, though, but no diskless feature.

> Can you give me a hint how to configure that ? Do you use proxmox ? The
> proxmox plugin is only for linstor i think.

On community XEN, guests have a configuration file with a line pointing 
at the virtual disk, e.g.

disk = ['phy:/dev/drbd1,xvda1,w']

so as long as you've allowed dual-primaries (even if that's only 
actually used very temporarily), you're good already, whether it's a 
diskful or diskless DRBD device.
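For reference, the diskless third node itself is a DRBD 9 feature 
("disk none"), so a resource covering both points might be sketched 
roughly like this -- hostnames, addresses, backing devices and the 
resource name are placeholders, not taken from any real setup:

    resource r1 {
        net {
            allow-two-primaries yes;  # needed for temporary dual-primary
        }
        options {
            quorum majority;          # let the diskless node break ties
        }
        on alpha {                    # diskful node (placeholder name)
            device     /dev/drbd1;
            disk       /dev/vg0/r1;
            meta-disk  internal;
            address    10.0.0.1:7789;
            node-id    0;
        }
        on beta {                     # diskful node (placeholder name)
            device     /dev/drbd1;
            disk       /dev/vg0/r1;
            meta-disk  internal;
            address    10.0.0.2:7789;
            node-id    1;
        }
        on gamma {                    # diskless quorum node
            device     /dev/drbd1;
            disk       none;
            address    10.0.0.3:7789;
            node-id    2;
        }
        connection-mesh {
            hosts alpha beta gamma;
        }
    }

The quorum option is what makes the diskless node useful: with three 
nodes and "quorum majority", a partitioned diskful node loses quorum 
and stops writing instead of split-braining.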

-- 
Pierre-Philipp Braun
SMTP Health Campaign: enforce STARTTLS and verify MX certificates
<https://nethence.com/smtp/>


