[DRBD-user] Performance

H.D. devnull at deleted.on.request
Fri Apr 27 17:05:01 CEST 2007

On 27.04.2007 16:50, Vampire D wrote:
> We had a DRBD/Heartbeat cluster we were running for a while now using .7 at
> the time.
> For specific reasons, we broke the cluster down to a single machine by
> taking the 2nd one offline.  At that point, we are seeing a 50% gain in
> performance.
> I was not aware that the overhead of DRBD would be that high.
> 
> We were previously using Active/Passive with a single 55G drbd volume on a
> 2.4GHz 2GB dual Raptor server with private Gbit drbd repl link and public
> 100Mb link with heartbeat on both.
> Running a low usage LAMP installation.

I have only limited knowledge on that topic; this is just what I have observed:

- Using a 1000 Hz kernel timer has positive effects.
- Jumbo frames help many workloads, but not all; OLTP does not seem to
benefit from them.
- Bumping up
     net.core.wmem_max
     net.core.rmem_max
     net.ipv4.tcp_rmem
     net.ipv4.tcp_wmem
helps.
- A good hardware RAID controller with a BBU (battery-backed write cache) makes a huge difference.
- echo 10 > /proc/sys/vm/dirty_ratio helps if you are running XFS on a
kernel >= 2.6.20.
- In case you have a hardware controller,
http://www.3ware.com/KB/article.aspx?id=11050 might help.
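To illustrate the sysctl tuning mentioned above, here is a sketch of an /etc/sysctl.conf fragment. The specific byte values are assumptions for a Gbit replication link, not figures from this thread; tune them to your NIC, RAM, and workload.

```
# /etc/sysctl.conf fragment -- example values only, adjust to your hardware.

# Raise the maximum socket buffer sizes the kernel will grant (bytes):
net.core.wmem_max = 16777216
net.core.rmem_max = 16777216

# TCP buffer autotuning limits: min, default, max (bytes):
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

# Start writeback earlier to avoid large flush spikes
# (persistent form of "echo 10 > /proc/sys/vm/dirty_ratio"):
vm.dirty_ratio = 10
```

Apply the settings with `sysctl -p` (as root); the dirty_ratio line only matters on kernels where that knob behaves as described above.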


-- 
Regards,
H.D.
