<div dir="ltr">I've switched pve1,pve2 to lvm thin recently just for testing and left pve3 with zfs as a storage back end. However, I really miss some cool zfs features, compared to lvm thin, like on-the-fly compression of zero blocks and its fast,low cost, point in time snapshots... What I don't miss though, is zfs memory consumtion compared to lvm thin :-)</div><br><div class="gmail_quote"><div dir="ltr">On Thu, Jul 26, 2018 at 8:26 AM Roland Kammerer <<a href="mailto:roland.kammerer@linbit.com">roland.kammerer@linbit.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On Wed, Jul 25, 2018 at 08:49:02PM +0100, Yannis Milios wrote:<br>
On Thu, Jul 26, 2018 at 8:26 AM Roland Kammerer <roland.kammerer@linbit.com> wrote:

On Wed, Jul 25, 2018 at 08:49:02PM +0100, Yannis Milios wrote:
> Hello,
> 
> Currently testing 9.0.15-0rc1 on a 3-node PVE cluster.
> 
> Pkg versions:
> ------------------
> cat /proc/drbd
> version: 9.0.15-0rc1 (api:2/proto:86-114)
> GIT-hash: fc844fc366933c60f7303694ca1dea734dcb39bb build by root@pve1,
> 2018-07-23 18:47:08
> Transports (api:16): tcp (9.0.15-0rc1)
> ii python-drbdmanage 0.99.18-1
> ii drbdmanage-proxmox 2.2-1
> ii drbd-utils 9.5.0-1
> ---------------------
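(A listing like the one above can be reproduced with something along these lines:)

  cat /proc/drbd
  dpkg -l python-drbdmanage drbdmanage-proxmox drbd-utils | grep '^ii'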
> Resource=vm-122-disk-1
> Replica count=3
> PVE nodes=pve1,pve2,pve3
> Resource is active on pve2 (Primary); the other two nodes (pve1,pve3) are
> Secondary.
> 
> Tried to live-migrate the VM from pve2 to pve3 and the process got stuck
> just before starting. Inspecting dmesg on both nodes (pve2,pve3) shows the
> following crash:
> 
> pve2 (Primary) node:
> https://privatebin.net/?fb5435a42b431af2#4xZpd9D5bYnB000+H3K0noZmkX20fTwGSziv5oO/Zlg=
> 
> pve3 (Secondary) node:
> https://privatebin.net/?d3b1638fecb6728f#2StXbwDPT0JlFUKf686RJiR+4hl52jEmmij2UTtnSjs=
> 
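(As a reference for reproducing the report above, the resource and node state can be inspected like this; drbdadm status ships with drbd-utils 9, and the resource name is taken from the report:)

  # role/disk/connection state of the resource on this node and its peers
  drbdadm status vm-122-disk-1
  # drbdmanage's view of which nodes the resource is assigned to
  drbdmanage list-assignments
  # recent kernel messages around the hang, with human-readable timestamps
  dmesg -T | grep -i drbd | tail -n 50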
We will look into it more closely. For now, I saw "zfs" in the second trace
and stopped. It is so freaking broken, it is not funny any more (it craps
out with all kinds of BS in our internal infrastructure as well). For
example, we had to go back to a xenial kernel because the bionic kernels'
zfs is that broken :-/ </zfs rant, which I actually really like>
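(For anyone in the same spot, a hypothetical sketch of pinning a known-good older kernel on Ubuntu; the exact version string below is made up:)

  # list installed kernels
  dpkg -l 'linux-image-*' | grep '^ii'
  # boot the known-good xenial-era kernel, then keep apt from replacing it
  apt-mark hold linux-image-4.4.0-131-generic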

Regards, rck
_______________________________________________
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user