[DRBD-user] DRBDv9 with iSCSI as scaleout SAN

Yannis Milios yannis.milios at gmail.com
Tue Oct 3 17:56:47 CEST 2017


In addition, since you're using Proxmox, it would be far easier to
set up the native DRBD9 plugin for Proxmox instead of using the iSCSI
method. In that case both DRBD and Proxmox run on the same
servers (a hyper-converged setup). Each VM resides in a separate DRBD9
resource/volume, and you can control the redundancy level as well.
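For reference, the DRBD9 plugin (via drbdmanage on Proxmox 4.x) is configured through a storage.cfg entry roughly like the following; the storage id is hypothetical, and `redundancy` sets how many nodes hold a copy of each volume:

```
# /etc/pve/storage.cfg (excerpt) -- "drbdstorage" is a hypothetical id
drbd: drbdstorage
    redundancy 3
    content images,rootdir
```

With this in place, each VM disk created on that storage becomes its own DRBD9 resource, which is exactly the per-VM layout discussed below.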

On Tue, Oct 3, 2017 at 1:04 PM, Adam Goryachev <
mailinglists at websitemanagers.com.au> wrote:

> Note: all of the below relates to my use of DRBD 8.4 in production. I'm
> assuming most of it will be equally applicable to DRBD9.
> On 3/10/17 19:52, Gandalf Corvotempesta wrote:
>> Just trying to figure out if drbd9 can do the job.
>> Requirement: a scale-out storage for VM image hosting (and other
>> services, but those would be provided by creating, for example, an NFS VM
>> on top of DRBD).
>> Let's assume a 3-nodes DRBDv9 cluster.
>> I would like to share this cluster by using iSCSI (or better protocol, if
>> any).
>> Multiple proxmox nodes sharing this drbd cluster.
>> Probably, one drbd resource is created for each VM.
>> Now, some questions:
>> how can I ensure that my iSCSI target is redundant across all nodes in
>> the cluster ?
> What do you mean by redundant? You only have a single iSCSI server: the
> current DRBD primary. You would use heartbeat or similar to
> automatically stop the iSCSI server, change primary to a different server,
> and then start the iSCSI server on that machine. Your iSCSI clients will get
> no response during this time, which looks to them like a disk stall. Note,
> it's important to do this in the correct order:
> 1) Remove the IP address (or firewall it so that no response is sent back:
> no ICMP port-closed message, no TCP packets, nothing at all).
> 2) Stop iscsi service
> 3) Change to secondary
> 4) Change other server to primary
> 5) Start iscsi service on new primary server
> 6) Add IP address, or fix firewall to allow traffic in/out.
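The six steps above could be sketched as a shell script along the following lines. This is only a sketch: the resource name, floating IP, interface, and service unit name are all hypothetical, and with `DRY_RUN=1` (the default) commands are echoed rather than executed. In a real cluster, heartbeat/pacemaker resource agents would drive these steps rather than a hand-rolled script.

```shell
#!/usr/bin/env bash
# Sketch of the six-step iSCSI failover described above.
# RES, VIP, NIC and the "iscsi-target" unit are hypothetical names.
set -eu
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

RES="vm-disk0"           # hypothetical DRBD resource
VIP="192.0.2.10/24"      # hypothetical floating service IP
NIC="eth0"               # hypothetical interface

demote_old_primary() {
  run ip addr del "$VIP" dev "$NIC"   # 1) drop the IP first: clients see silence, not a TCP reset
  run systemctl stop iscsi-target     # 2) stop the iSCSI target service
  run drbdadm secondary "$RES"        # 3) demote DRBD on this node
}

promote_new_primary() {
  run drbdadm primary "$RES"          # 4) promote DRBD on the new node
  run systemctl start iscsi-target    # 5) start the iSCSI target service
  run ip addr add "$VIP" dev "$NIC"   # 6) restore the IP; clients resume I/O
}
```

Dropping the IP before stopping the target is the key ordering detail: it turns the outage into a perceived stall instead of a connection error that initiators might treat as a hard failure.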
>> When I have to add a fourth or fifth node to the drbd cluster, should I
>> replicate the iscsi target configuration on all of them?
> Yes, you must ensure the iscsi config is identical on every server which
> could potentially become primary.
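Assuming IET as the target (as mentioned later in this thread), "identical config" would mean keeping a file like the following in sync on every node; the IQN and backing device are hypothetical:

```
# /etc/iet/ietd.conf (excerpt) -- must match on every node that can become primary
# IQN and DRBD device below are hypothetical examples
Target iqn.2017-10.local.san:vm-disk0
    Lun 0 Path=/dev/drbd100,Type=blockio
```

Note the target must point at the DRBD device, not the backing disk, so writes always pass through replication.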
>> Will the drbd resources be automatically rebalanced across the new nodes?
> I'm not sure. I suspect you are considering making one of your DRBD nodes
> primary for some of the resources, and another primary for a different
> group of resources, with your peers somehow working out which primary
> to talk to for their iscsi service. This could be possible (though you will
> definitely want to test it first).
> Consider whether each DRBD resource should have a dedicated IP address. You
> would need to somehow dynamically configure the iscsi service (possible
> with IET by messing around in /proc) to listen on this extra IP and serve
> this extra resource, doing it individually for each resource (i.e., the
> above 6 steps would be repeated once per resource). However, I wonder
> whether this would gain you any significant benefit: all data will still
> need to be written to all servers, though I suppose reads would be better
> balanced than with everything on one primary.
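For the IET case specifically, per-resource targets can be created at runtime with `ietadm` rather than by writing to /proc directly. A rough sketch, where the target id, IQN naming scheme, and DRBD device are all hypothetical, and `DRY_RUN=1` (the default) echoes commands instead of running them:

```shell
#!/usr/bin/env bash
# Add one iSCSI target per DRBD resource at runtime via IET's ietadm.
# TID, IQN and backing device are hypothetical; DRY_RUN=1 only echoes.
set -eu
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# add_target <tid> <resource-name> <drbd-device>
add_target() {
  tid="$1"; res="$2"; dev="$3"
  # create the target itself, then attach the DRBD device as LUN 0
  run ietadm --op new --tid="$tid" --params "Name=iqn.2017-10.local.san:$res"
  run ietadm --op new --tid="$tid" --lun=0 --params "Path=$dev,Type=blockio"
}

add_target 2 vm-disk1 /dev/drbd101
```

This would be run on whichever node is primary for that resource, after the promote step of the failover sequence.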
>> Should I change something in the iscsi/proxmox configuration after the
>> rebalance, or is it transparent?
> I'm thinking yes... I suspect your heartbeat layer will need to manage
> these changes for you.
>> Any pitfalls or drawbacks ?
> Lots.... make sure you test.... a lot... including any and all failure
> modes you can think of, as well as a complete failure (all nodes die and
> recover).
> Hopefully someone with more hands on experience with DRBD9 can comment
> further....
> Regards,
> Adam
> _______________________________________________
> drbd-user mailing list
> drbd-user at lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user
