[DRBD-user] [OT] rsync issues [Was Re: Read performance?]

Ross S. W. Walker rwalker at medallion.com
Tue Jun 5 21:44:07 CEST 2007


> -----Original Message-----
> From: drbd-user-bounces at lists.linbit.com 
> [mailto:drbd-user-bounces at lists.linbit.com] On Behalf Of David Masover
> Sent: Tuesday, June 05, 2007 2:51 PM
> To: drbd-user at lists.linbit.com
> Subject: Re: [DRBD-user] [OT] rsync issues [Was Re: Read performance?]
> On Tuesday 05 June 2007 09:41:44 Leroy van Logchem wrote:
> > > But the "queue" directory doesn't even exist for DRBD:
> > >
> > > root at norton:~# ls /sys/block/drbd0/
> > > dev  holders  range  removable  size  slaves  stat  subsystem  uevent
> > > root at norton:~#
> >
> > Try grep -i sched /var/log/dmesg.
> Tells me what's compiled in, and what's the default scheduler (it's
> deadline, btw).
> > On another note: You might want to try ssync instead of rsync. It
> > starts without building the filelist.
> Very nice. I might have done it that way if I had to do it over again.
> On Tuesday 05 June 2007 10:25:04 Ross S. W. Walker wrote:
> > > generally I recommend deadline.
> > > for your situation (high latency network link) I suggest
> > > small nr_requests.
> > > whether or not that is the real problem, I cannot tell,
> > > this has only been an educated guess.
> >
> > Actually anticipatory might be better here, as it will get the
> > read requests done first; the #1 slow-down is write requests.
> Well, it looks like it's set to deadline now, which is also the
> default for the box. Anticipatory might be better, I'll remember
> that if I need it...

Ok, you set it on the grub kernel command line with elevator=as
for older kernels, or elevator=anticipatory for newer ones.
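For reference, here is one way to check and change the scheduler without
a reboot. The sysfs paths are standard Linux; the device name sda in the
comments is only an example, and as the thread shows, /sys/block/drbd0
has no queue directory, so this tuning applies to the backing device:

```shell
# List the active I/O scheduler for each block device; the name in
# square brackets is the one currently in use.
found=0
for f in /sys/block/*/queue/scheduler; do
    if [ -r "$f" ]; then
        printf '%s: %s\n' "$f" "$(cat "$f")"
        found=$((found + 1))
    fi
done
echo "checked $found device(s)"

# Switch one device at runtime (example device name):
#   echo deadline > /sys/block/sda/queue/scheduler
# Shrink the request queue for a high-latency link, as suggested above:
#   echo 32 > /sys/block/sda/queue/nr_requests
# Or set the default for all devices on the GRUB kernel line:
#   kernel /vmlinuz ... elevator=deadline
```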

I ran deadline on my iSCSI storage servers for a while, until I
noticed how badly writes starve out reads, and how much more important
it is to get a read off of storage quickly than a write, since writes
are mostly done asynchronously anyway.

> > If the link is high-latency I would seriously look into using
> > asynchronous replication in an active-passive setup, then use
> > a network file system with local-cache backing store for sharing
> > the storage... I believe Solaris has NFS caching backing store.
> Erm...  If I understood what you said, I'm not sure I like that way.
> The whole point of DRBD and replication is that if the office burns
> down, we can pick up the box from offsite, physically drive it into
> the office, and bring it up as a DRBD primary, then use BackupPC to
> restore as if we had the original backup server intact.
> I don't like the idea of using a cache for that, and I really don't
> like the idea of asynchronous replication here, unless it's done
> entirely in-order. If the primary goes down, the FS image on the
> secondary must be consistent.

I'm sorry I wasn't clear.

Here is the scenario I'm talking about. You have drbd replicating
the storage between production site A and backup site B. Either on
the same boxes in A and B, or via iSCSI to other servers at sites A
and B, you run NFS servers.

Then in remote sites C, D and E you can mount these NFS shares on
Solaris boxes that provide a nice local-storage backing store which
caches the files as they are accessed, verifying they are current
on each access. The first access will feel the latency, but all
accesses afterward should be quick, unless some other site changes
the file, in which case it will be slow again until the latest copy
is cached.
If site A (primary production) blows up, then heartbeat will fail
over to secondary site B, with a small data loss due to the async
replication if it happens in the middle of the production day. The
NFS clients should then be able to re-establish their connections
to site B and carry on.
Just an idea; of course a real-world implementation will bring
up other issues, but hey, that's why we have a job...
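For the asynchronous replication leg of that setup, a hypothetical
drbd.conf fragment is sketched below; the resource, host, device and
address values are examples, not taken from this thread:

```
resource r0 {
  # Protocol A = asynchronous: a write is considered complete once it
  # is on the local disk and in the TCP send buffer, which is what
  # makes it tolerant of a high-latency link.
  protocol A;
  on site-a {
    device    /dev/drbd0;
    disk      /dev/sda7;
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on site-b {
    device    /dev/drbd0;
    disk      /dev/sda7;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```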

> In any case, thanks for all your help, and in the end, the problem
> is solved by brute force.
> It's only the initial backups which were frustrating, as they took
> days, and any attempt to, say, edit a config file that was stored on
> the DRBD device was almost impossible (I was deliberately
> disconnecting them just to get vim to open; I always hated the vim
> swapfile on slow media).
> Nowadays, even with synchronous replication (over a slow DSL link,
> over a VPN), the whole backup process takes a little under an hour,
> mostly thanks to BackupPC's pooling and compression. Since this
> happens overnight, I'm not even there to notice slow reads.

If you keep backups local and replicate them behind the scenes
to a central repository, that might help.
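A minimal sketch of that idea, assuming a BackupPC pool at its usual
path and a reachable central host (the host and destination path are
hypothetical): an overnight cron entry that mirrors the local pool.

```
# crontab entry on each site's backup server (host/paths are examples):
0 3 * * * rsync -a --delete /var/lib/backuppc/ central:/srv/backups/site-a/
```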


