Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Your network is probably not the bottleneck, I agree. Disk operations
probably are. You do one of the following:

1. read in the data
2. calculate a checksum
3. compare checksums
4. then send to the other machine

or

1. read in the data
2. send the data to the other machine

I think they have tested this and found that it is faster to just
read/send than to go through the rsync algorithm. Again, I would check
the archives to see if you can find more information from someone with
more authority in this area. =)

Just curious: how long does it take you to rsync 2TB with an average
file size of 4 kB?

Jason Gray wrote:
> Curtis Tiffany wrote:
>
>> Check the list archive; you might find discussion regarding the
>> usefulness of such a feature. I believe the consensus was that an
>> rsync-like setup would only be beneficial with a
>> high-latency/low-speed network.
>>
>> Jason Gray wrote:
>>
>>> The only reason I ask is that I have large arrays (2TB) and it
>>> takes 3-4 days to re-sync. If there were a way to re-sync the data
>>> (like rsync using the checksum switch) that only updates changes
>>> rather than the whole array, it would save a considerable amount
>>> of time. The only other option is the sync-skip option, but that
>>> reduces data reliability.
>>> Jason
>
> I have a fast network (cross-over cable on a 1Gb NIC) and a
> reasonably fast array (160MB/s HBA and controller). No matter how
> fast the network is, it's going to take a long time to re-sync 2TB
> of data. I was hoping to reduce this time frame using an incremental
> type of backup system rather than re-mirroring the whole array. Of
> course there are data corruption problems that might creep into this
> situation.
>
> Cheers,
>
> Jason
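As a rough sanity check on the numbers quoted above (a
back-of-the-envelope sketch, not a measurement; 3.5 days simply splits
the reported 3-4 days):

    # Ideal transfer time for the hardware described in the thread:
    # 2 TB of data, a 160 MB/s disk path, a 1 Gb/s NIC.

    DATA = 2 * 1024**4    # 2 TB in bytes
    DISK = 160 * 1024**2  # 160 MB/s HBA/controller
    NET = 1e9 / 8         # 1 Gb/s link, roughly 125 MB/s

    # A streaming full resync is limited by the slower of the two paths.
    ideal_h = DATA / min(DISK, NET) / 3600
    print(f"ideal full resync: {ideal_h:.1f} h")  # ~4.9 h

    # The reported 3-4 days corresponds to a much lower effective rate,
    # so per-block overhead, not raw bandwidth, appears to dominate.
    rate = DATA / (3.5 * 86400) / 1024**2
    print(f"effective rate over 3.5 days: {rate:.1f} MB/s")  # ~7 MB/s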
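The tradeoff between the two strategies listed at the top can also be
modeled roughly. A minimal sketch in Python, under stated assumptions
(the 1% checksum overhead and 1% changed-block figures are purely
illustrative, and the model deliberately ignores the per-block round
trips and CPU cost that further penalize the checksum approach on a
fast LAN; this is not DRBD's actual resync algorithm):

    TB = 1024**4
    MB = 1024**2

    def full_send(data, disk_bps, net_bps):
        """Strategy 2: read everything and send everything. Reading and
        sending can be pipelined, so the slower path sets the rate."""
        return data / min(disk_bps, net_bps)

    def checksum_sync(data, disk_bps, net_bps, changed_frac,
                      cksum_frac=0.01):
        """Strategy 1: read everything (unavoidable), exchange small
        checksums, then send only the blocks that actually differ."""
        read = data / disk_bps
        send = (data * cksum_frac + data * changed_frac) / net_bps
        return read + send

    DATA, DISK = 2 * TB, 160 * MB
    for label, net in (("1 Gb/s LAN", 1e9 / 8), ("10 Mb/s WAN", 10e6 / 8)):
        f = full_send(DATA, DISK, net) / 3600
        c = checksum_sync(DATA, DISK, net, changed_frac=0.01) / 3600
        print(f"{label}: full {f:6.1f} h, checksum (1% changed) {c:5.1f} h")

    # 1 Gb/s LAN: full    4.9 h, checksum (1% changed)   3.7 h
    # 10 Mb/s WAN: full  488.7 h, checksum (1% changed)  13.4 h

On the gigabit cross-over the full read dominates either way and the
overheads this model ignores erase the small remaining gap, which
matches the consensus quoted above; on a slow or high-latency link,
skipping unchanged blocks is a decisive win.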