On Fri, Apr 13, 2012 at 08:09:06AM +0200, Dirk Bonenkamp - ProActive wrote:
> Hi All,
>
> I'm still having issues with backing up this file system. I've followed
> the advice and used LVM under my DRBD device (Disk -> LVM -> DRBD -> OCFS2).
>
> I can create a snapshot (I have to run a fsck.ocfs2 on this snapshot
> after cloning the volume every time). I mount the snapshot read-only
> with local file locks.
>
> The performance of this snapshot volume is even worse than the original
> volume... Performance (on the original volume and the snapshot volume)
> seems to degrade as the number of files in a directory rises. When I
> say performance, I mean 'operations where every file needs to be
> checked', like rsync or 'find . -mtime -1 -print'. Performance for my
> application is great (writing a couple of thousand files a day and
> reading a couple of 100,000 a day). dd tests give me 200 MB/s writes
> and 600 MB/s reads.
>
> Am I missing something here, or will this setup just never work for my
> backups...?

stat() can be a costly syscall, even more so on cluster file systems.
I hope you have already mounted with -o noatime?

Even readdir (respectively getdents) is typically more expensive on
cluster file systems. Keeping the number of files per directory
small-ish (whatever that may be for your context) may help; introducing
hierarchical "hashing" subdirectories can help with doing so.

And I'm not even speaking of stat()ing cache-cold, while some other
random IO plus streaming IO happens... (Adding more RAM helps for this
one, as does tuning vm.vfs_cache_pressure and maybe swappiness.)

None of this has anything to do with DRBD, or with streaming IO
"performance". All of this should have been among the first hits when
searching for "OCFS2 slow"...

Why do you think you need/want a cluster file system again?

--
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com
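[Editor's note: the noatime advice above would look like the following; the device and mount point are hypothetical, not taken from the thread.]

```shell
# Mount the OCFS2 volume without access-time updates, so that every
# read (e.g. an rsync scan) does not also trigger a metadata write.
# /dev/drbd0 and /mnt/ocfs2 are placeholder names.
mount -o noatime /dev/drbd0 /mnt/ocfs2

# Or persistently, as a line in /etc/fstab:
# /dev/drbd0  /mnt/ocfs2  ocfs2  noatime  0 0
```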
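[Editor's note: one common way to implement the hierarchical "hashing" subdirectories mentioned above is to derive two directory levels from a hash of the file name, so no single directory accumulates hundreds of thousands of entries. This is a sketch, not the scheme Lars had in mind; the file name is a placeholder.]

```shell
#!/bin/sh
# Spread files over two levels of subdirectories derived from the
# first four hex characters of an md5 of the file name.
name="example-file.dat"                       # hypothetical file name

# First four hex digits of md5("example-file.dat"), e.g. "3fa9".
h=$(printf '%s' "$name" | md5sum | cut -c1-4)

# Split into "xx/yy": strip the last two chars, then the first two.
dir="${h%??}/${h#??}"

mkdir -p "$dir"
echo "$dir/$name"
```

The same name always hashes to the same path, so lookups need no index; rsync and find then walk many small directories instead of one huge one.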
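[Editor's note: the vm.vfs_cache_pressure and swappiness tuning mentioned above is done via sysctl; the values below are illustrative starting points, not recommendations from the thread.]

```shell
# Keep dentry/inode caches around longer, so cache-cold stat() storms
# hit the disk less often. Lower vfs_cache_pressure = reclaim these
# caches less aggressively (kernel default is 100).
sysctl -w vm.vfs_cache_pressure=50

# Prefer dropping page cache over swapping out anonymous memory
# (kernel default is 60). Requires root; persist in /etc/sysctl.conf.
sysctl -w vm.swappiness=10
```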