[DRBD-user] Add new replica to existing resource

robinlee at post.cz robinlee at post.cz
Fri Mar 15 10:05:25 CET 2019


Hello,




OK, I will try to explain it a little bit more, sorry for the very long 
email. This is my second try; the first one didn't arrive.



"On Thu, Mar 14, 2019 at 01:12:53AM +0100, robinlee at post.cz wrote: 
> Hello, 
> 
> I installed DRBD/Linstor on proxmox with redundancy 1, after adding second

> node to the cluster I would like to expand the resources to another node =

> redundancy 2. 
> 
> The new resources seems be OK, but the older one, created with redundancy 
1 
> seems be not mirroing to another side. I just called  
> 
> linstor resource create <second-node-name> vm-108-disk-1 --storage-pool 
> drbdpool 

So far so good, would have done the same. 
"



Yes, this part is probably simple ;-) So here I have one Proxmox machine with
working DRBD and LINSTOR, no problem so far.




I installed a second node and created a Proxmox cluster with the first one. 
I installed linstor-satellite on the second node and added the second node 
(from the first one) to the controller:




linstor node create pve-virt2 <someip>





and I added a second NIC for DRBD traffic.
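
For reference, registering the extra NIC in LINSTOR looked roughly like this; the interface name "data" and the address are placeholders, so treat it as a sketch rather than my exact history:

linstor node interface create pve-virt2 data <ip-of-second-nic>

and later, once the storage pool existed, I pointed the pool at that NIC with the PrefNic property (again from memory, please correct me if the syntax is off):

linstor storage-pool set-property pve-virt2 drbdpool PrefNic data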




Now I created the storage pool on the new node:




vgcreate vg-virt2-sata10 /dev/sda4


lvcreate -L 1.5T -T vg-virt2-sata10/drbdthinpool


linstor sp c lvmthin pve-virt2 drbdpool vg-virt2-sata10/drbdthinpool





Now I had two nodes with the same storage pool:




╭──────────────────────────────────────────────────────────────────╮
┊ Node      ┊ NodeType  ┊ Addresses                       ┊ State  ┊
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡
┊ pve-virt1 ┊ SATELLITE ┊ 10.0.0.16,X.X.X.16:3366 (PLAIN) ┊ Online ┊
┊ pve-virt2 ┊ SATELLITE ┊ 10.0.0.66,X.X.X.66:3366 (PLAIN) ┊ Online ┊
╰──────────────────────────────────────────────────────────────────╯



╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ StoragePool ┊ Node      ┊ Driver        ┊ PoolName                     ┊ FreeCapacity ┊ TotalCapacity ┊ SupportsSnapshots ┊
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡
┊ drbdpool    ┊ pve-virt1 ┊ LvmThinDriver ┊ drbdpool/drbdthinpool        ┊     1.54 TiB ┊      3.09 TiB ┊ true              ┊
┊ drbdpool    ┊ pve-virt2 ┊ LvmThinDriver ┊ vg-virt2-sata10/drbdthinpool ┊     1.22 TiB ┊      1.50 TiB ┊ true              ┊
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯



 

It does not have the same size on both sides, but that is not a problem at the 
moment. I started to add the replicas like this:




linstor resource create pve-virt2 vm-103-disk-1 --storage-pool drbdpool





It seemed to work: on the empty vg-virt2-sata10/drbdthinpool the LVs were 
created with the correct names, and even drbdtop reported some sync.
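
At that time I only watched drbdtop; something like the following would presumably have shown the real resync state and counters, but I did not capture it back then:

drbdadm status vm-103-disk-1
drbdsetup status --verbose --statistics vm-103-disk-1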







 
"
> now I see that the resource has two volumes, but the second one is 
actually 
> empty, the content wasn't copied there. 

Where and how do you see that? "



Now it looks like this:




linstor volume-definition list





╭─────────────────────────────────────────────────────────────╮ 
┊ ResourceName  ┊ VolumeNr ┊ VolumeMinor ┊ Size       ┊ State ┊ 
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡ 
┊ vm-101-disk-1 ┊ 0        ┊ 1001        ┊ 20 GiB     ┊ ok    ┊ 
┊ vm-102-disk-1 ┊ 0        ┊ 1002        ┊ 36 GiB     ┊ ok    ┊ 
┊ vm-103-disk-1 ┊ 0        ┊ 1003        ┊ 20 GiB     ┊ ok    ┊



linstor resource list





╭──────────────────────────────────────────────────────╮ 
┊ ResourceName  ┊ Node      ┊ Port ┊ Usage  ┊    State ┊ 
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡ 
┊ vm-101-disk-1 ┊ pve-virt1 ┊ 7001 ┊ InUse  ┊ UpToDate ┊ 
┊ vm-101-disk-1 ┊ pve-virt2 ┊ 7001 ┊ Unused ┊ UpToDate ┊ 
┊ vm-102-disk-1 ┊ pve-virt1 ┊ 7002 ┊ InUse  ┊ UpToDate ┊ 
┊ vm-102-disk-1 ┊ pve-virt2 ┊ 7002 ┊ Unused ┊ UpToDate ┊ 
┊ vm-103-disk-1 ┊ pve-virt1 ┊ 7003 ┊ InUse  ┊ UpToDate ┊ 
┊ vm-103-disk-1 ┊ pve-virt2 ┊ 7003 ┊ Unused ┊ UpToDate ┊



The Unused volume is on the new Proxmox node. And these Unused logical 
volumes are empty: live migration of the machine leads to an immediate kernel 
panic, and fsck (after disconnecting the LV on the second node) is not able to 
find any filesystem.
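
To be precise about "disconnecting the LV": I took the resource down on pve-virt2 and looked at the backing device directly. The LV name below follows what I assume is LINSTOR's usual <resource>_00000 naming, so read it as a sketch:

drbdadm down vm-103-disk-1                           # on pve-virt2 only
fsck -n /dev/vg-virt2-sata10/vm-103-disk-1_00000     # cannot find any filesystem
drbdadm up vm-103-disk-1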




linstor resource list-volumes





╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ Node      ┊ Resource      ┊ StoragePool ┊ VolumeNr ┊ MinorNr ┊ DeviceName    ┊ Allocated  ┊ InUse  ┊    State ┊
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡
┊ pve-virt1 ┊ vm-101-disk-1 ┊ drbdpool    ┊ 0        ┊ 1001    ┊ /dev/drbd1001 ┊ 5.68 GiB   ┊ InUse  ┊ UpToDate ┊
┊ pve-virt2 ┊ vm-101-disk-1 ┊ drbdpool    ┊ 0        ┊ 1001    ┊ /dev/drbd1001 ┊ 2.48 GiB   ┊ Unused ┊ UpToDate ┊
┊ pve-virt1 ┊ vm-102-disk-1 ┊ drbdpool    ┊ 0        ┊ 1002    ┊ /dev/drbd1002 ┊ 4.08 GiB   ┊ InUse  ┊ UpToDate ┊
┊ pve-virt2 ┊ vm-102-disk-1 ┊ drbdpool    ┊ 0        ┊ 1002    ┊ /dev/drbd1002 ┊ 460.90 MiB ┊ Unused ┊ UpToDate ┊
┊ pve-virt1 ┊ vm-103-disk-1 ┊ drbdpool    ┊ 0        ┊ 1003    ┊ /dev/drbd1003 ┊ 5.30 GiB   ┊ InUse  ┊ UpToDate ┊
┊ pve-virt2 ┊ vm-103-disk-1 ┊ drbdpool    ┊ 0        ┊ 1003    ┊ /dev/drbd1003 ┊ 1.03 GiB   ┊ Unused ┊ UpToDate ┊
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────╯



The interesting part here is that the allocated space isn't the same on both sides.
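
The same mismatch is visible at the LVM level; a quick way to compare it, using the VG names from above (just the standard lvs fields):

lvs -o lv_name,lv_size,data_percent drbdpool           # on pve-virt1
lvs -o lv_name,lv_size,data_percent vg-virt2-sata10    # on pve-virt2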







 
"
 
The Proxmox plugin never creates multi-volume resources. Every disk is a 
new, single-volume, DRBD resource. If not, the plugin has a bug. 
"



It wasn't made using the Proxmox plugin, so there isn't any problem with the 
Proxmox plugin. More precisely, the first working volume was made by the 
plugin, but the second one was added manually.


 
"
If "resource create" created a second volume, which is hard to believe, 
LINSTOR has a bug. 

So please post the res file of that resource (in /var/lib/linstor.d/). 
And also the "drbdsetup status vm-108-disk-1" 
"""






The resource files are shared here: https://pastebin.com/V7mjSCar




The status output is:




drbdsetup status vm-103-disk-1





First node:

vm-103-disk-1 role:Primary 
 disk:UpToDate 
 pve-virt2 role:Secondary 
   peer-disk:UpToDate



Second node:




vm-103-disk-1 role:Secondary 
 disk:UpToDate 
 pve-virt1 role:Primary 
   peer-disk:UpToDate




 

And the last piece of information: I tried to resync the volume, and dmesg 
shows the following. Please note that the amount of data to resync after 
--discard-my-data is far smaller than the volume itself; see this paste:





https://pastebin.com/k0rtZGJH





This dmesg output is from the second node, where I called drbdadm connect 
--discard-my-data vm-106-disk-1.
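
If a full resync is really what is needed here, I assume something like this on the second node would force it (a sketch only, I have not tried it yet, so please correct me if invalidate is the wrong tool):

drbdadm invalidate vm-106-disk-1     # declare the local copy out of date, pull everything from the peer
drbdadm status vm-106-disk-1         # should show SyncTarget until the full resync finishes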




Thank you for your answer.




Sincerely




Robin
"
"