[DRBD-user] DRBD 9 without Linstor

kvaps kvapss at gmail.com
Thu Oct 4 15:02:47 CEST 2018


You're welcome.

With LVM you will end up with a separate resource for each VM.
The only caveat is snapshots: unfortunately they do not work for
LVM storage, and there is no way to use thin LVM in shared mode.
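
A minimal sketch of what that per-VM layout looks like (the VG name
"drbdpool", the resource name "vm-102", and the port are made up for
illustration):

    # on every node: one dedicated LV per VM as the DRBD backing device
    lvcreate -L 32G -n vm-102 drbdpool

    # /etc/drbd.d/vm-102.res -- one small resource per VM
    resource vm-102 {
      meta-disk internal;
      device    /dev/drbd102;
      protocol  C;
      on pve1 { address 192.168.2.11:7102; disk /dev/drbdpool/vm-102; node-id 0; }
      on pve2 { address 192.168.2.12:7102; disk /dev/drbdpool/vm-102; node-id 1; }
      on pve3 { address 192.168.2.13:7102; disk /dev/drbdpool/vm-102; node-id 2; }
      connection-mesh { hosts pve1 pve2 pve3; }
    }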

PS: Oh, I thought I had sent this to the drbd-user list. OK, I'll send a
copy there too :)
- kvaps


On Thu, Oct 4, 2018 at 2:47 PM Yannis Milios <yannis.milios at gmail.com>
wrote:

> Hi Andrey,
>
> Thanks for sharing your setup, which is actually interesting. However, in
> my setup I prefer to have separate DRBD resources per VM, hence the need to
> have something like LINSTOR (or drbdmanage before that) to automate
> resource/volume creation and management. I'm using it for both QEMU and LXC
> and so far it's working great.
> As backing storage I'm using LVM Thin (and sometimes ZFS Thin), which
> means I can have instant snapshots per VM/LXC, with the ability to roll
> back at any time. In this setup there's no need for an iSCSI target as
> in yours, as the LINSTOR plugin for Proxmox takes care of everything.
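>
> For illustration, that snapshot/rollback is plain LVM underneath; roughly
> (the VG and LV names here are just examples):
>
>     # instant snapshot of a VM's thin volume
>     lvcreate -s -n vm-100-snap1 pve/vm-100-disk-1
>
>     # rollback: merge the snapshot back into the origin
>     # (the origin must be inactive; otherwise the merge is deferred
>     # until its next activation)
>     lvconvert --merge pve/vm-100-snap1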
>
> Regards,
> Yannis
>
>
> On Thu, 4 Oct 2018 at 13:37, kvaps <kvapss at gmail.com> wrote:
>
>> Hi, I've been using DRBD 9 on a three-node Proxmox cluster without
>> LINSTOR, and it is working fine.
>>
>> You should probably prepare one big DRBD device replicated across all three nodes.
>>
>>     # cat /etc/drbd.d/tgt1.res
>>     resource tgt1 {
>>       meta-disk internal;
>>       device    /dev/drbd100;
>>       protocol  C;
>>       net {
>>         after-sb-0pri discard-zero-changes;
>>         after-sb-1pri discard-secondary;
>>         after-sb-2pri disconnect;
>>       }
>>       on pve1 {
>>         address   192.168.2.11:7000;
>>         disk      /dev/disk/by-partuuid/95e7eabb-436e-4585-94ea-961ceac936f7;
>>         node-id   0;
>>       }
>>       on pve2 {
>>         address   192.168.2.12:7000;
>>         disk      /dev/disk/by-partuuid/aa7490c0-fe1a-4b1f-ba3f-0ddee07dfee3;
>>         node-id   1;
>>       }
>>       on pve3 {
>>         address   192.168.2.13:7000;
>>         disk      /dev/disk/by-partuuid/847b9713-8c00-48a1-8dff-f84c328b9da2;
>>         node-id   2;
>>       }
>>       connection-mesh {
>>         hosts pve1 pve2 pve3;
>>       }
>>     }
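>>
>> With the same config on all three nodes, initialize the metadata and
>> bring the resource up everywhere, then force one node Primary for the
>> initial sync (the standard drbdadm workflow):
>>
>>     drbdadm create-md tgt1           # on all three nodes
>>     drbdadm up tgt1                  # on all three nodes
>>     drbdadm primary --force tgt1     # on ONE node only
>>     drbdadm status tgt1              # wait until all peers are UpToDate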
>>
>> Then you can create an LXC container which will use this block device as
>> its rootfs.
>>
>>     mkfs -t ext4 /dev/drbd100
>>     wget -P /var/lib/vz/template/cache/ \
>>       http://download.proxmox.com/images/system/ubuntu-16.04-standard_16.04-1_amd64.tar.gz
>>     pct create 101 local:vztmpl/ubuntu-16.04-standard_16.04-1_amd64.tar.gz \
>>       --hostname=tgt1 \
>>       --net0=name=eth0,bridge=vmbr0,gw=192.168.1.1,ip=192.168.1.11/24 \
>>       --rootfs=volume=/dev/drbd100,shared=1
>>     pct start 101
>>
>> Log into the container and create one big file there.
>>
>>     pct exec 101 bash
>>     mkdir -p /data
>>     fallocate -l 740G /data/target1.img
>>
>> Install istgt and configure an iSCSI export for this file.
>>
>>     # cat /etc/istgt/istgt.conf
>>     [Global]
>>       Comment "Global section"
>>       NodeBase "iqn.2018-07.org.example.tgt1"
>>       PidFile /var/run/istgt.pid
>>       AuthFile /etc/istgt/auth.conf
>>       MediaDirectory /var/istgt
>>       LogFacility "local7"
>>       Timeout 30
>>       NopInInterval 20
>>       DiscoveryAuthMethod Auto
>>       MaxSessions 16
>>       MaxConnections 4
>>       MaxR2T 32
>>       MaxOutstandingR2T 16
>>       DefaultTime2Wait 2
>>       DefaultTime2Retain 60
>>       FirstBurstLength 262144
>>       MaxBurstLength 1048576
>>       MaxRecvDataSegmentLength 262144
>>       InitialR2T Yes
>>       ImmediateData Yes
>>       DataPDUInOrder Yes
>>       DataSequenceInOrder Yes
>>       ErrorRecoveryLevel 0
>>     [UnitControl]
>>       Comment "Internal Logical Unit Controller"
>>       AuthMethod CHAP Mutual
>>       AuthGroup AuthGroup10000
>>       Portal UC1 127.0.0.1:3261
>>       Netmask 127.0.0.1
>>     [PortalGroup1]
>>       Comment "SINGLE PORT TEST"
>>       Portal DA1 192.168.1.11:3260
>>     [InitiatorGroup1]
>>       Comment "Initiator Group1"
>>       InitiatorName "ALL"
>>       Netmask 192.168.1.0/24
>>     [LogicalUnit1]
>>       Comment "Hard Disk Sample"
>>       TargetName disk1
>>       TargetAlias "Data Disk1"
>>       Mapping PortalGroup1 InitiatorGroup1
>>       AuthMethod Auto
>>       AuthGroup AuthGroup1
>>       UseDigest Auto
>>       UnitType Disk
>>       LUN0 Storage /data/target1.img Auto
>>
>> In the Proxmox storage interface, connect this export to all three nodes
>> (uncheck "Use LUN Directly").
>>     https://hsto.org/webt/uw/j3/pu/uwj3pusr-nf9bc7neisd5x-fcsg.png
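>>
>> The same can be done from the CLI with pvesm (the storage ID "tgt1-iscsi"
>> is arbitrary; the target IQN follows from NodeBase and TargetName in
>> istgt.conf above; --content none corresponds to the unchecked "Use LUN
>> Directly"):
>>
>>     pvesm add iscsi tgt1-iscsi \
>>       --portal 192.168.1.11 \
>>       --target iqn.2018-07.org.example.tgt1:disk1 \
>>       --content none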
>>
>> After that you can create one shared LVM storage on top of the iSCSI
>> device via the Proxmox interface (tick the "shared" checkbox).
>>     https://hsto.org/webt/j1/ob/mw/j1obmwcwhz-e6krjix72pmiz118.png
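>>
>> The CLI equivalent is roughly this (the VG name "drbdvg" is arbitrary,
>> and the LUN's device path depends on your initiator -- check
>> /dev/disk/by-path/):
>>
>>     # on one node: create a VG on the iSCSI LUN
>>     pvcreate /dev/disk/by-path/ip-192.168.1.11:3260-iscsi-*-lun-0
>>     vgcreate drbdvg /dev/disk/by-path/ip-192.168.1.11:3260-iscsi-*-lun-0
>>
>>     # register it as shared LVM storage for containers and VMs
>>     pvesm add lvm drbd-lvm --vgname drbdvg --shared 1 --content images,rootdir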
>>
>> Also, don't forget to configure HA for your container:
>>
>>     ha-manager groupadd tgt1 --nodes pve1,pve2,pve3 --nofailback=1 --restricted=1
>>     ha-manager add ct:101 --group=tgt1 --max_relocate=3 --max_restart=3
>>
>> After those steps you will have shared storage on all three nodes, so
>> you can create and live-migrate your VMs without any problems, and
>> migrate the iSCSI container itself with only a short downtime.
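>>
>> For example (the VM ID 102 is hypothetical):
>>
>>     qm migrate 102 pve2 --online       # live-migrate a VM
>>     ha-manager migrate ct:101 pve2     # relocate the iSCSI container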
>>
>> Cheers
>> - kvaps
>>
>>
>> On Thu, Oct 4, 2018 at 12:27 PM Yannis Milios <yannis.milios at gmail.com>
>> wrote:
>>
>>> You can, but your life will be miserable without LINSTOR managing the
>>> resources (hence its existence in the first place) ... :)
>>>
>>> On Wed, 3 Oct 2018 at 13:29, M. Jahanzeb Khan <mjahanzebkhan at yahoo.com>
>>> wrote:
>>>
>>>> Hello,
>>>>
>>>> I would like to know whether it is possible to use DRBD 9 on top of
>>>> LVM without using LINSTOR.
>>>> I have a three-node setup and was using drbdmanage before, but now I
>>>> just want to use DRBD 9 without any additional tools.
>>>>
>>>>
>>>> Best regards,
>>>> Jaz
>>>>
>>>>
>>