Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
On Tue, May 17, 2016 at 8:21 AM, Mats Ramnefors <mats at ramnefors.com> wrote:
> I am testing DRBD 9 and 8.4 in simple two-node active-passive clusters
> with NFS.
>
> Copying files from a third server to the NFS share using dd, I typically
> see an average of 20% CPU load (with v9) on the primary during the
> transfer of larger files, testing with 0.5 and 2 GB.
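>
> (For reference, the copy test was along these lines; the source file and
> mount point are illustrative:)
>
> # copy a ~2 GB file onto the NFS mount, flushing it to disk at the end
> dd if=/path/to/testfile of=/mnt/nfs/testfile bs=1M conv=fsync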
>
> At the very end of the transfer, the DRBD process briefly peaks at
> 70-100% CPU.
>
> This occasionally causes Corosync to believe the node is down. Increasing
> the Corosync token timeout to 2000 ms fixes the symptom, but I am
> wondering about the root cause and any possible fixes.
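>
> (The timeout change, for reference; apart from the token value, the
> settings below are illustrative:)
>
> # /etc/corosync/corosync.conf
> totem {
>     version: 2
>     cluster_name: san_cluster  # hypothetical name
>     token: 2000                # token timeout in ms (default 1000)
> }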
>
> This is the DRBD configuration.
>
> resource san_data {
>     protocol C;
>     meta-disk internal;
>     device /dev/drbd1;
>     disk /dev/nfs/share;
>     net {
>         verify-alg sha1;
>         cram-hmac-alg sha1;
>         shared-secret "****************";
>         after-sb-0pri discard-zero-changes;
>         after-sb-1pri discard-secondary;
>         after-sb-2pri disconnect;
>     }
>     on san1 {
>         address 192.168.1.86:7789;
>     }
>     on san2 {
>         address 192.168.1.87:7789;
>     }
> }
>
> The nodes are two VMs on different ESXi hosts (Dell T620). The hosts are
> very lightly loaded. The network is currently 1 Gb/s through a Catalyst
> switch and does not appear to be saturated.
>
> BTW, when can we expect a DRBD resource agent for v9? It took me a while
> to figure out why DRBD 9 was not working with Pacemaker, and then to find
> a patch for the agent :)
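>
> (For context, this is roughly the usual ocf:linbit:drbd master/slave
> setup that the agent is used for, in crm shell syntax; the resource names
> are illustrative:)
>
> primitive p_drbd_san_data ocf:linbit:drbd \
>     params drbd_resource=san_data \
>     op monitor interval=29s role=Master \
>     op monitor interval=31s role=Slave
> ms ms_drbd_san_data p_drbd_san_data \
>     meta master-max=1 master-node-max=1 clone-max=2 \
>     clone-node-max=1 notify=true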
>
> Cheers,
> Mats
> _______________________________________________
> drbd-user mailing list
> drbd-user at lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user
Hi Mats,
Can you please share the patch if you don't mind?
Thanks,
Igor