Note: "permalinks" may not be as permanent as we would like;
direct links to old messages may well be a few messages off.
On Thu, 2007-03-08 at 12:03 -0600, Dan Brown wrote:
> >From Kristian Knudsen on Thursday, March 08:
> >
> > We will look into the csync2 project and make a test setup.
> > Prior we have discarded rsync and other solutions, because it
> > has to be real-time or close to real-time file replication.
>
> There is approximately 6GB of data (~100,000 files) which is first checked,
> and then synced with the other server. Each server runs the sync process
> via cron jobs on opposite minutes (one on odd minutes, the other on even).
> I do not know whether the number of files or the amount of data affects the
> length of time it takes for csync2 to check the filesystem, but up around
> 30GB of data it would take a minimum of 5 minutes to check and sync (with

If it's anything like rsync, then it's normally limited by the sheer
number of files. File size only matters if a file actually needs to be
synced. You might be able to squeeze out a serious performance bump by
using a filesystem that is faster at dealing with large numbers of files.

I'm guessing the limit is not with csync2, but with your filesystem.
csync2 will likely be stuck waiting for the filesystem to return from
essentially a 'find' or an 'ls -lR' of the entire tree.

My cohort here has been playing with XFS lately, and he says that it has
greatly improved access speed to our mp3 pile (rsynced to everyone's
house nightly, with about 60,000 files). He said it especially helped
for folders with tons of files in a single folder (our pile is broken
down into folders from a -> z, so on average about 2k files in each
folder... "t" and "s" are actually quite huge).
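
If you want to test that theory before switching filesystems, you can
time the metadata pass on its own. Here's a rough Python sketch (the
root path is just a placeholder; point it at your real data directory).
It lstat()s every entry under a tree, which is roughly the scan that
csync2 or rsync has to finish before it can sync anything:

```python
import os
import time

def scan_tree(root):
    """Walk a tree and lstat every entry -- roughly the metadata
    pass a sync tool must complete before transferring anything."""
    count = 0
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            os.lstat(os.path.join(dirpath, name))
            count += 1
    return count

# Placeholder path -- substitute the tree you actually replicate.
root = "/var/www"
if os.path.isdir(root):
    start = time.monotonic()
    n = scan_tree(root)
    elapsed = time.monotonic() - start
    print("stat'ed %d entries in %.2fs" % (n, elapsed))
```

If that alone takes minutes on your 30GB tree, the bottleneck is the
filesystem's metadata handling, not csync2, and something like XFS (or
more RAM for the dentry/inode caches) is where I'd look first.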