<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META http-equiv=Content-Type content="text/html; charset=iso-8859-1">
<META content="MSHTML 6.00.6000.16850" name=GENERATOR>
<STYLE></STYLE>
</HEAD>
<BODY bgColor=#ffffff>
<DIV><FONT face=Arial size=2><SPAN lang=EN>
<P align=left>Hello all,</P>
<P align=left></P>
<P align=left>I have a 3-node synchronization issue when attempting to use a T1.
I don't know whether it is the result of stacked resources or something else.</P>
<P align=left></P>
<P align=left>Our offsite connection destroys our network throughput, but only
when we attempt to synchronize to the offsite box at normal T1 speed.
Understandably, a T1 cannot keep up during the day, but I expected it to
eventually catch up after hours at night.</P>
<P align=left></P>
<P align=left>We run 2 ESXi 4.0 servers with 8 MS servers and storage sitting on
1.5 TB. With onsite bandwidth everything works better than I could have ever
hoped: a clean copy of XP boots in less than 15 seconds, and virtual Terminal
Servers are snappy even sitting on stacked DRBD resources. The block data
changes are usually on the order of some 6 GB daily, which is within the
bandwidth of a T1 during the nightly 12-hour window of roughly 695 MB per hour.
We have a 3-node setup using stacked DRBD 8.3.2: a 64-bit Openfiler 2.3 server
(Xeon, 32 GB RAM, SAS drives) mirroring with protocol C to another 64-bit
Openfiler 2.3 box (dual-core P4, 4 GB RAM, software RAID). The third leg is a
QNAP 509pro converted to 64-bit Openfiler 2.3, connected with protocol A over an
OpenVPN tunnel. OpenVPN usually does an excellent job for me with encryption and
compression. Everything runs perfectly onsite with OpenVPN wide open and no
shaper settings. Virtual servers can be booted and run quite nicely even off of
the little QNAP box during disaster recovery tests, truly a beautiful
thing.</P>
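<P align=left>As a sanity check on that nightly-window arithmetic (a quick
sketch, assuming the full 1.544 Mbit/s T1 line rate, which is what the ~695
MB/hour figure works out to):</P>

```python
# Quick check: can a T1 move ~6 GB of changed blocks in a 12-hour night?
# Assumes the full 1.544 Mbit/s T1 line rate (an assumption; usable
# payload over the VPN will be somewhat less).
T1_BITS_PER_SEC = 1_544_000
bytes_per_hour = T1_BITS_PER_SEC / 8 * 3600     # bytes moved per hour
window_bytes = bytes_per_hour * 12              # 12-hour nightly window
daily_change_bytes = 6e9                        # ~6 GB of block changes

print(f"{bytes_per_hour / 1e6:.0f} MB/hour")    # 695 MB/hour
print(f"{window_bytes / 1e9:.2f} GB per night") # 8.34 GB per night
print(window_bytes > daily_change_bytes)        # True: the T1 should catch up
```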
<P align=left></P>
<P align=left>However, when I moved the offsite box to a remote office over a
T1, the network throughput from the main SAN server went into the dumpster.
Shutting down the offsite DRBD service, or killing the connection to the
offsite box, immediately brings everything back up to speed.</P>
<P align=left></P>
<P align=left>So far I have tried every conceivable bandwidth setting with no
luck. Presently I have the offsite box back in-house; any attempt to get even
close to simulating T1 speeds with the tunnel consistently brings the network to
its knees. On my last attempt I set the DRBD upper sync rate to 56K (roughly 1/3
of 175 KB/s) and the OpenVPN shaper to 175000, which should be a normal T1 rate.
The only thing that seems to help is cranking the bandwidth back up. Why does a
low-bandwidth synchronization destroy the throughput on the network as soon as
things get inconsistent? Shouldn't DRBD be able to just trickle data over a slow
connection?</P>
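<P align=left>For reference, the two throttles use different units: DRBD's
syncer rate defaults to KiB/s, while OpenVPN's --shaper takes bytes per second.
A quick check (my arithmetic, not from the config) of how the two settings line
up:</P>

```python
# Unit check for the two throttles. DRBD "rate 56K" defaults to KiB/s;
# OpenVPN --shaper is specified in bytes per second.
drbd_rate = 56 * 1024        # rate 56K -> 57,344 bytes/s
shaper = 175_000             # --shaper 175000 -> bytes/s, roughly a T1
t1 = 1_544_000 / 8           # T1 line rate in bytes/s (193,000)

print(f"sync rate is {drbd_rate / shaper:.0%} of the shaped tunnel")  # 33%
print(shaper <= t1)          # True: the shaper stays under the T1 line rate
```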
<P align=left></P>
<P align=left>If I have all 3 machines synced, lower the sync rate with
drbdsetup /dev/drbd3 syncer -r 56K, and then lower the bandwidth on the tunnel
to T1 speed, everything is OK until DRBD reports the third leg as inconsistent.
Inconsistent would of course be expected here, since I just did something to
make it that way, such as defragging a drive or whatever. Then, boom, the
network throughput is in the dirt again, until I break the connection to the
third leg and everything pops back up and dusts itself off like nothing
happened. If I increase the bandwidth again and let it resync, it works
perfectly and couldn't care less. There must be some programming issue here, or
there has to be a way to tweak this situation; surely this kind of DRBD setup
should be able to function over a T1-speed connection with protocol A. I was
hoping 8.3.2 would do better than 8.3.1, but it made no improvement. If you say
I don't have enough bandwidth, then for argument's sake say I did have more
bandwidth and the speed on the connection dropped temporarily; we all know how
the telephone companies are. Would DRBD bring the network down because it could
not sync up? This just cannot be right.</P>
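<P align=left>In other words, the test sequence is roughly the following (device
and resource names as in my config below; the drbdadm --stacked forms are from
memory, so treat them as approximate):</P>

```shell
# Throttle resync on the stacked (offsite) device, then shape the tunnel.
drbdsetup /dev/drbd3 syncer -r 56K       # cap resync at 56 KiB/s
# ...lower the OpenVPN shaper to ~175000 bytes/s on the tunnel...

# Watch for the third leg going Inconsistent (e.g. after a defrag).
cat /proc/drbd

# Breaking the third leg restores throughput immediately;
# reconnecting at full bandwidth resyncs without any trouble.
drbdadm --stacked disconnect data-upper
drbdadm --stacked connect data-upper
```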
<P align=left></P>
<P align=left>Any thoughts would be most welcome. I am new to the list and tried
to search to see whether this subject had already been addressed.</P>
<P align=left></P>
<P align=left> </P><FONT face="Courier New" size=2><PRE>
global {
    # minor-count 64;
    # dialog-refresh 5; # 5 seconds
    # disable-ip-verification;
    usage-count ask;
}

common {
    syncer { rate 100M; }
}

resource data-lower {
    protocol C;

    startup {
        wfc-timeout 0;        ## Infinite!
        degr-wfc-timeout 120; ## 2 minutes.
    }

    disk {
        on-io-error detach;
    }

    net {
        # timeout 60;
        # connect-int 10;
        # ping-int 10;
        # max-buffers 2048;
        # max-epoch-size 2048;
    }

    syncer {
    }

    on sas {
        device    /dev/drbd0;
        disk      /dev/volgrp/mirror;
        address   10.10.10.112:7789;
        meta-disk internal;
    }

    on giga {
        device    /dev/drbd0;
        disk      /dev/volgrp/mirror;
        address   10.10.10.111:7789;
        meta-disk internal;
    }
}

resource data-upper {
    protocol A;

    syncer {
        after data-lower;
        rate 56K;
        al-extents 513;
    }

    net {
        shared-secret "LINBIT";
    }

    stacked-on-top-of data-lower {
        device  /dev/drbd3;
        address 192.168.100.1:7788;
    }

    on offsite {
        device    /dev/drbd3;
        disk      /dev/volgrp/mirror;
        address   192.168.100.2:7788; # Public IP of the backup node
        meta-disk internal;
    }
}
</PRE></FONT>
<P align=left></P>
<P align=left> </P>
<P align=left> </P></SPAN></FONT></DIV></BODY></HTML>