Hi,

Thank you for your reply!

On 4-4-2012 14:16, Kaloyan Kovachev wrote:
> Hi,
>
> On Wed, 04 Apr 2012 12:15:39 +0200, Dirk Bonenkamp - ProActive
> <dirk at proactive.nl> wrote:
>> Hi,
>>
>> First of all: I posted a similar thread on the OCFS2 mailing list, but
>> I didn't receive much response. This list seems busier, so maybe I'll
>> have more luck over here...
>>
>> I'm having trouble backing up an OCFS2 file system. I'm using rsync and
>> I find it way, way slower than rsyncing a 'traditional' file system.
> With clustered file systems you have additional slowdown from the locking,
> so it is expected.

I do expect some kind of slowdown, but this is a lot... I also see this slowdown with only one node online, which I would guess makes the locking a lot easier / faster for the DLM? I'm not sure how the internals of the DLM work.

>> The OCFS2 filesystem lives on a dual-primary DRBD setup. DRBD runs on
>> hardware RAID6 with dedicated bonded gigabit NICs, and I get a 160 MB/s
>> syncer speed. Read and write speeds on the file system are OK.
>>
>> Some figures:
>>
>> My OCFS2 filesystem is 3.7 TB in size, 200 GB is used, and it holds
>> about 1.5 million files in 95 directories. About 3,000 new files are
>> added each day; few files are changed.
>>
>> Rsyncing this filesystem (directly to the rsync daemon, no ssh shell
>> overhead) over a gigabit connection takes 70 minutes:
>>
>> Number of files: 1495981
>> Number of files transferred: 2944
>> Total file size: 201701039047 bytes
>> Total transferred file size: 613318155 bytes
>> Literal data: 613292255 bytes
>> Matched data: 25900 bytes
>> File list size: 24705311
>> File list generation time: 0.001 seconds
>> File list transfer time: 0.000 seconds
>> Total bytes sent: 118692
>> Total bytes received: 638195567
>>
>> sent 118692 bytes  received 638195567 bytes  154163.57 bytes/sec
>> total size is 201701039047  speedup is 315.99
>>
>> To compare: I have a similar system (the old, non-HA system doing
>> exactly the same job) with an ext3 filesystem. It holds 6.5 million
>> files, 500 GB, with about 10,000 new files a day. A backup done with
>> rsync over ssh on a 100 Mbit line takes 400 seconds.
>>
>> I'd like to know if somebody has encountered similar problems and maybe
>> has some tips / insights for me?
>>
> If you are using DRBD on top of LVM you can make use of snapshots and
> mount the snapshot with local locking. Here is an example from my backup
> script:
>
> lvcreate -s -L 100G -n LVMsnapshot /dev/vg0/lv0
> tunefs.ocfs2 -y -L LVMsnapshot --cloned-volume /dev/vg0/LVMsnapshot
> mount -o ro,localflocks /dev/vg0/LVMsnapshot /mnt/LVMsnapshot/
> rsync -a --delete /mnt/LVMsnapshot/ ${BACKUP_LOCATION}
> umount /mnt/LVMsnapshot
> lvremove -f /dev/vg0/LVMsnapshot

Unfortunately, I don't have my DRBD on top of LVM at this time. I might try it if there are no other options.

Cheers,

Dirk
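[Editor's note] A quick sanity check of the rsync statistics quoted above: rsync's "speedup" figure is simply the total file size divided by the total bytes that actually crossed the wire (sent + received). This sketch just reproduces that arithmetic from the posted numbers; the variable names are mine, not from the original post:

```shell
# Reproduce rsync's reported speedup from the statistics above:
# speedup = total size / (bytes sent + bytes received)
sent=118692
received=638195567
total_size=201701039047
awk -v s="$sent" -v r="$received" -v t="$total_size" \
    'BEGIN { printf "speedup = %.2f\n", t / (s + r) }'
# prints: speedup = 315.99
```

Note that only ~638 MB crossed the wire in those 70 minutes, far below what a gigabit link can carry, which supports the thread's diagnosis: the run is dominated by per-file metadata work (stat'ing ~1.5 million files under cluster locking), not by data volume.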