On Thu, Apr 25, 2013 at 06:07:52PM +0200, GG-Net Mailing wrote:
> Hey there,
>
> the speed is significantly higher due to the Samsung 840 SSD's
> performance plus the performance of the raid controller.

Why would you want to combine an SSD Primary with a single-spindle HDD
Secondary?

> And according to you, if I use protocol A or C is pretty much
> irrelevant to the performance of the DRBD, since it's always only as
> fast as the slowest node?

Protocol A changes the latency of a single IO, and is effective mainly
if the working set is typically covered by the activity log but you
have a high-latency link (or Secondary IO subsystem).

It is unlikely to have much impact on sustained throughput (how could
it; replication bandwidth is still the same), or on sustained strictly
random write IOPS, as under continuous load you will still hit
congestion.

If you have a slow link, or a slow Secondary IO subsystem,
protocol A (or, taken further, DRBD Proxy) cannot improve *sustained*
IOPS under continuous heavy load. They can "hide" load peaks,
and reduce average latency.

Maybe these posts (some ascii art) help:
http://article.gmane.org/gmane.linux.network.drbd/24746
http://article.gmane.org/gmane.linux.network.drbd/24894

Also have a look at
http://blogs.linbit.com/p/469/843-random-writes-faster/

Or in short: don't think you can get away with a crappy Secondary
because you "won't ever need it anyways"...
A high-performance Primary plus a crappy Secondary results in a crappy
overall system (except in a few very specific use cases).

Also, DRBD is primarily there to enable failover. To avoid a lot of
pain in the failover scenarios, all nodes in a failover cluster should
be equally powerful.

	Lars

--
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
__
please don't Cc me, but send to list -- I'm subscribed
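
[For readers following along: the replication protocol being discussed
is chosen per resource in the DRBD configuration. A minimal sketch of
what such a resource section can look like (resource name, hostnames,
backing disks, and addresses below are placeholders, not from the
thread):]

```
resource r0 {
  # Protocol C: fully synchronous -- a write completes only after the
  # Secondary has it on stable storage. Protocol A: asynchronous --
  # a write completes once it is on local disk and in the local TCP
  # send buffer, which hides link latency for single IOs but, as the
  # post explains, cannot raise sustained throughput.
  protocol C;

  device    /dev/drbd0;
  disk      /dev/sda7;
  meta-disk internal;

  on alice {
    address 10.1.1.31:7789;
  }
  on bob {
    address 10.1.1.32:7789;
  }
}
```

[Switching `protocol C;` to `protocol A;` is the configuration change
the quoted question is about; per the reply above, it mainly trades
durability guarantees for lower per-IO latency under bursty load.]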