Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Hi All,

I'm still having issues with backing up this file system. I've followed
the advice and used LVM under my DRBD device (Disk -> LVM -> DRBD ->
OCFS2). I can create a snapshot (I have to run fsck.ocfs2 on the
snapshot after cloning the volume every time) and mount it read-only
with local file locks. The performance of this snapshot volume is even
worse than that of the original volume...

Performance (on both the original volume and the snapshot volume) seems
to degrade as the number of files in a directory rises. By
'performance' I mean operations where every file needs to be checked,
like rsync or 'find . -mtime -1 -print'. Performance for my application
is great (writing a couple of thousand files a day and reading a couple
of hundred thousand a day); dd tests give me 200 MB/s writes and
600 MB/s reads.

Am I missing something here, or will this setup just never work for my
backups...?
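For reference, the snapshot procedure I'm using looks roughly like this
(the volume group, snapshot and mount point names here are just
examples, not my exact setup):

  # Snapshot the LV that sits underneath DRBD
  lvcreate -s -L 100G -n backupsnap /dev/vg0/lv0
  # Give the clone its own label/UUID so OCFS2 treats it as a
  # separate volume
  tunefs.ocfs2 -y -L backupsnap --cloned-volume /dev/vg0/backupsnap
  # I have to run this check after every clone before the snapshot
  # mounts cleanly
  fsck.ocfs2 -y /dev/vg0/backupsnap
  # Mount read-only with local file locks instead of cluster locking
  mount -o ro,localflocks /dev/vg0/backupsnap /mnt/backupsnap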
Kind regards,

Dirk

PS: I understand that this is more an OCFS2 issue than a DRBD thing...

On 4-4-2012 14:16, Kaloyan Kovachev wrote:
> Hi,
>
> On Wed, 04 Apr 2012 12:15:39 +0200, Dirk Bonenkamp - ProActive
> <dirk at proactive.nl> wrote:
>> Hi,
>>
>> First of all: I posted a similar thread on the OCFS2 mailing list,
>> but I didn't receive a lot of response. This list seems to be
>> busier, so maybe I'll have more luck over here...
>>
>> I'm having trouble backing up an OCFS2 file system. I'm using rsync,
>> and I find it way, way slower than rsyncing a 'traditional' file
>> system.
> With clustered file systems you have additional slowdown from the
> locking, so that is expected.
>
>> The OCFS2 filesystem lives on a dual-primary DRBD setup. DRBD runs
>> on hardware RAID6 with dedicated bonded gigabit NICs, and I get a
>> 160 Mb/s syncer speed. Read and write speeds are OK on the file
>> system.
>>
>> Some figures:
>>
>> My OCFS2 filesystem is 3.7 TB in size, of which 200 GB is used, and
>> holds about 1.5 million files in 95 directories. About 3000 new
>> files are added each day; few files are changed.
>>
>> Rsyncing this filesystem (directly to the rsync daemon, no ssh shell
>> overhead) over a Gbit connection takes 70 minutes:
>>
>> Number of files: 1495981
>> Number of files transferred: 2944
>> Total file size: 201701039047 bytes
>> Total transferred file size: 613318155 bytes
>> Literal data: 613292255 bytes
>> Matched data: 25900 bytes
>> File list size: 24705311
>> File list generation time: 0.001 seconds
>> File list transfer time: 0.000 seconds
>> Total bytes sent: 118692
>> Total bytes received: 638195567
>>
>> sent 118692 bytes  received 638195567 bytes  154163.57 bytes/sec
>> total size is 201701039047  speedup is 315.99
>>
>> To compare: I have a similar system (the old, non-HA system doing
>> the exact same thing) with an ext3 filesystem. It holds 6.5 million
>> files and 500 GB, with about 10,000 new files a day. A backup done
>> with rsync through ssh on a 100 Mbit line takes 400 seconds.
>>
>> I'd like to know if somebody has encountered similar problems and
>> maybe has some tips / insights for me?
>>
> If you are using DRBD on top of LVM you can make use of snapshots and
> mount the snapshot with local locking. Here is an example from my
> backup script:
>
> lvcreate -s -L 100G -n LVMsnapshot /dev/vg0/lv0
> tunefs.ocfs2 -y -L LVMsnapshot --cloned-volume /dev/vg0/LVMsnapshot
> mount -o ro,localflocks /dev/vg0/LVMsnapshot /mnt/LVMsnapshot/
> rsync -a --delete /mnt/LVMsnapshot/ ${BACKUP_LOCATION}
> umount /mnt/LVMsnapshot
> lvremove -f /dev/vg0/LVMsnapshot
>
>> Kind regards,
>>
>> Dirk