On Fri, 11 Jan 2013 07:57:24 -0600 Shaun Thomas <sthomas at optionshouse.com> wrote:

> On 01/10/2013 11:49 PM, Andy Dills wrote:
>
> > So, it seems to me that even with the no-disk-barrier and
> > no-disk-flushes, when I am connected, I am limited (write speed) to
> > the speed of the network connection.
>
> These options are only so you're not *also* constrained by buffer
> flushing and barrier operations. In the default protocol C, all
> writes must be synced on both nodes before they're successful. That
> means your writes will be, at most, the speed of your network
> interface.
>
> We use dual 10Gb NICs on all of our servers, and bond them to
> separate switches for redundancy. I didn't want to go overboard so I
> gave DRBD a bandwidth limit, but I can get 350MBps from each of the 2
> DRBD devices I have set up on our servers, and have plenty left over.
>
> I really wouldn't suggest running DRBD *without* a 10Gb NIC.

While 10Gb NICs are nice, there is also the possibility of one or two
1Gb NICs bonded/trunked, metadata on an SSD, and protocol A. When the
two machines are directly connected and have separate power (or both
are dual-power connected to two UPSes), the chance of losing data by
running protocol A instead of C is fairly low, and you gain a lot in
latency.

BTW: It's nice to see high throughput with dd copying big files, but
that's not what your users will do; they will do lots of small reads
at many different locations. So instead of tuning the throughput,
spend that time tuning the latency. That's what your users will
perceive as the speed of the system.

Have fun,

Arnold
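For what it's worth, the protocol-A-with-external-metadata setup suggested above could be sketched roughly like the resource file below. This is only an illustration, not a tested config: the host names (alpha/bravo), addresses, and device paths are made up, and depending on your DRBD version the protocol statement may belong in the net section instead.

```
# Hypothetical /etc/drbd.d/r0.res -- sketch only, names and paths invented.
resource r0 {
  protocol A;                    # async: local writes complete without
                                 # waiting for the peer to sync

  on alpha {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.1:7789;     # dedicated bonded 1Gb link
    meta-disk /dev/sdc1[0];      # external metadata on an SSD partition
  }
  on bravo {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.2:7789;
    meta-disk /dev/sdc1[0];
  }
}
```

Keep in mind that with protocol A the last few writes in flight can be lost if the primary crashes, which is why the direct connection and independent power mentioned above matter.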
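And to measure what users actually feel, instead of dd you could point fio at the DRBD device with a job file along these lines: small random reads at queue depth 1, which reports latency percentiles rather than sequential throughput. Again just a sketch; the device path is an assumption, and running it read-write against a live device would of course destroy data, so this job is read-only.

```ini
; Hypothetical fio job -- small-random-read latency on the DRBD device.
; Device path is made up; adjust runtime/blocksize to taste.
[drbd-read-latency]
filename=/dev/drbd0
rw=randread
bs=4k
direct=1
ioengine=libaio
iodepth=1
time_based=1
runtime=60
```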