Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
On Thu, Jun 04 2015 at 6:21pm -0400,
Ming Lin <mlin at kernel.org> wrote:

> On Thu, Jun 4, 2015 at 2:06 PM, Mike Snitzer <snitzer at redhat.com> wrote:
> >
> > We need to test on large HW raid setups like a Netapp filer (or even
> > local SAS drives connected via some SAS controller).  Like an 8+2 drive
> > RAID6 or 8+1 RAID5 setup.  Testing with MD raid on JBOD setups with 8
> > devices is also useful.  It is larger RAID setups that will be more
> > sensitive to IO sizes being properly aligned on RAID stripe and/or
> > chunk size boundaries.
>
> I'll test it on a large HW raid setup.
>
> Here is a HW RAID5 setup with 19 278G HDDs on a Dell R730xd
> (2 sockets/48 logical cpus/264G mem).
> http://minggr.net/pub/20150604/hw_raid5.jpg
>
> The stripe size is 64K.
>
> I'm going to test ext4/btrfs/xfs on it.
> "bs" set to 1216k (64K * 19 = 1216k) and run 48 jobs.

Definitely an odd blocksize (though a 1280K full stripe is pretty common
for 10+2 HW RAID6 w/ 128K chunk size).

> [global]
> ioengine=libaio
> iodepth=64
> direct=1
> runtime=1800
> time_based
> group_reporting
> numjobs=48
> rw=read
>
> [job1]
> bs=1216K
> directory=/mnt
> size=1G

How does time_based relate to size=1G?  Will it just re-read the same
1 gig file repeatedly?

> Or do you have other suggestions of what tests I should run?

You're welcome to run this job, but I'll also check with others here to
see what fio jobs we used in the recent past when assessing the
performance of the dm-crypt parallelization changes.

Also, a lot of care needs to be taken to eliminate jitter in the system
while the test is running.  We got a lot of good insight from Bart
Van Assche on that and put it into practice.  I'll see if we can
(re)summarize that too.

Mike
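
[Editor's note: the blocksize numbers discussed above can be reconstructed
with a little arithmetic.  The sketch below assumes the quoted "64K stripe
size" refers to the per-drive chunk size; the thread does not say how the
controller reports it, so the figures are illustrative rather than confirmed
for Ming's setup.]

    # Sketch: full-stripe arithmetic for the layouts mentioned in the thread.
    # Assumption: "stripe size 64K" means the per-drive chunk size.

    CHUNK_KB = 64        # per-drive chunk size on the 19-drive RAID5
    DRIVES = 19
    PARITY_RAID5 = 1     # RAID5 uses one drive's worth of each stripe for parity

    full_stripe_kb = (DRIVES - PARITY_RAID5) * CHUNK_KB
    print(f"19-drive RAID5, 64K chunk: full data stripe = {full_stripe_kb}K")  # 1152K

    # Mike's comparison point: 10+2 HW RAID6 with a 128K chunk (10 data drives).
    print(f"10+2 RAID6, 128K chunk: full data stripe = {10 * 128}K")           # 1280K

    # Ming's bs=1216K comes from 19 * 64K, i.e. counting the parity drive too.
    print(f"19 drives * 64K (parity included) = {19 * 64}K")                   # 1216K

Under that assumption, a bs that matches the data portion of the stripe
(1152K here) is what would keep IOs aligned to full-stripe boundaries, which
is the sensitivity Mike raises for larger RAID setups.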