Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
On Thu, May 27, 2010 at 10:32:58AM +0200, Peter Gyongyosi wrote:
> Hi,
>
> I've managed to solve the issue meanwhile, so I thought I'd post the
> solution here for the record and to help others who might face
> similar problems. It turned out that the problem originates in the
> aacraid kernel driver itself, in which a default was changed
> somewhere between 2.6.27 and 2.6.32 here:
>
> commit d8e965076514dcb16410c0d18c6c8de4dcba19fc
> Author: Leubner, Achim <Achim_Leubner at adaptec.com>
> Date:   Wed Apr 1 07:16:08 2009 -0700
>
>     [SCSI] aacraid driver update
>     changes:
>     - set aac_cache=2 as default value to avoid performance problem
>       (Novell bugzilla #469922)
>
> Backporting that patch or explicitly passing the "cache=2" parameter
> to the aacraid module did the trick, and write performance got
> reasonably high.
>
> Why only my specific setup with RAID5+DRBD could trigger this
> problem (and not native RAID5 or DRBD with any other RAID), I have
> no idea, though. Any input from someone more familiar with how DRBD
> and the aacraid driver work internally would be welcome.

To ensure data integrity, unless explicitly disabled, DRBD uses
"barriers" (BIO_RW_BARRIER) in quite a few situations. These will
likely be translated to FUA and/or SYNCHRONIZE_CACHE SCSI commands.

In most cases, you will have only a partial stripe write, so this
forces the RAID5 (or RAID6) into a full read-modify-write cycle: the
controller must first read the old data and parity, recompute the
parity, and only then write the new data and parity back before the
barrier can complete. For RAID1, that's not so much an issue.

If you configure your controller to ignore these commands (at least
as long as the battery is protecting the cache), this will obviously
improve performance.

--
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.

__
please don't Cc me, but send to list -- I'm subscribed
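
[Editor's note: a minimal sketch of making the "cache=2" workaround
from the quoted message persistent. The file name under
/etc/modprobe.d/ is only a convention, and the exact path differs per
distribution; older systems use /etc/modprobe.conf instead.]

    # /etc/modprobe.d/aacraid.conf  (file name is a convention)
    # set the aacraid write-cache policy module parameter at load time
    options aacraid cache=2

The setting takes effect the next time the module is loaded; if
aacraid drives the boot disk, you will likely also need to rebuild
the initrd/initramfs so it applies at boot.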
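[Editor's note: the barrier/flush behaviour Lars describes can also be
relaxed on the DRBD side instead of in the controller. A sketch using
DRBD 8.3-era drbd.conf syntax; the resource name r0 is a placeholder,
and option names changed in later DRBD versions, so check drbd.conf(5)
for yours. Only safe while a battery-backed write cache is actually
protecting the data.]

    resource r0 {
      disk {
        no-disk-barrier;   # don't submit BIO_RW_BARRIER to the backing device
        no-disk-flushes;   # don't force cache flushes for data writes
        no-md-flushes;     # likewise for DRBD's meta-data writes
      }
      # ... rest of the resource configuration (devices, nodes, addresses)
    }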