[DRBD-user] Slow NFS behaviour (0.7.5)

Philipp Reisner philipp.reisner at linbit.com
Wed Nov 24 11:23:21 CET 2004


On Tuesday 23 November 2004 21:09, Todd Denniston wrote:
> Jaroslaw Zachwieja wrote:
> > On wtorek 23 listopada 2004 17:04, Philipp Reisner wrote:
> > > On Tuesday 23 November 2004 18:02, Philipp Reisner wrote:
> > > > Do you mount your local filesystems with the "sync" option ?
> > > > I do not do this. I see this as the common semantics under
> > > > Unix/Linux.
> > >
> > > I wanted to add: the only way to run a "sync"-mounted FS with
> > > tolerable performance is to use a controller with battery-backed
> > > RAM.
> >
> > I've remounted everything with async option and noticed significant
> > performance improvement. Apparently you're right. "sync" is _not_ the
> > way to go with DRBD. "async" is.
> >
> > The loadavg dropped to the manageable range of 2-4 when using both
> > protocols B and C and I'm pretty happy with the setup now.
> >
> > Thanks for the help!
> >
> > Best regards,
> Philipp & Jaroslaw
> Are you talking about
> mount -o sync
> or
> setting an /etc/exports line "share withwho(sync)"
> Back when Lars was explaining data integrity to us on the list, I did
> some testing of _exporting_ sync vs. async while forcing an ungraceful
> failover with protocol C on drbd 0.6.10. With async it is possible to
> lose some of the most recent changes, even though the client machines
> may think the data had reached stable storage.
> Note, I don't believe it was drbd that lost the data; it simply never
> made it from the nfs server program to the drbd device. But the client
> was told "data synced" by the nfs server, so it dropped its copy of the
> data.
> I just did an untar followed by "make oldconfig dep bzImage" of a
> linux-2.2.25 tree on a sync export and on an async export from the same
> server machine (both on top of drbd devices), and found that the async
> export is ~6.9 times faster than the sync export.  We chose to export
> sync the drives where data integrity with the clients is extremely
> important, and async those holding only temporary things (like a build
> that could be restarted after a crash).
> you might also want to be aware that nfs exports defaults have changed at
> some time in the past:
> man exports # on Fedora Core 1
>   async  This option allows the NFS server to violate the NFS
>          protocol...  In releases of nfs-utils up to and including
>          1.0.0, this option was the default.  In this and future
>          releases, sync is the default, and async must be explicitly
>          requested if needed.  To help make system administrators
>          aware of this change, exportfs will issue a warning if
>          neither sync nor async is specified.
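
The export-level distinction described above is set per share in /etc/exports. A hypothetical example (paths and network range are illustrative, not from the thread):

```
# /etc/exports -- hypothetical entries, mixing sync and async exports
/srv/important   192.168.1.0/24(rw,sync)    # client data integrity matters
/srv/scratch     192.168.1.0/24(rw,async)   # restartable builds, faster
```

With these lines, an unclean failover could lose recent writes on /srv/scratch but not writes the server already acknowledged on /srv/important.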

Yes, every application programmer should be aware of the fact that
the return of the write() system call only means that the data is in
the OS's buffers; the return of the fsync()/fdatasync() syscall means
that it is on disk.
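A minimal sketch of that distinction in Python (the file name is illustrative):

```python
import os

# write() returning only means the data is in the OS's buffers.
fd = os.open("demo.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
os.write(fd, b"important data\n")

# fsync() returns only once data and metadata have reached stable
# storage; fdatasync() skips non-essential metadata updates.
os.fsync(fd)
os.close(fd)
```

Without the fsync() call, a crash right after write() returns could lose the data even though the application saw a successful return code.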

If I log in to my workstation by ssh, run tar xvzf some.tar.gz, and press
the reset button while the operation is in progress, I know that the last
files I see in my ssh session will not be on the disk when the machine
comes up again.

On the other hand, if I do a transaction in my PostgreSQL database, I know
that the new version will be permanent as soon as the prompt returns
after the commit. Reason: PostgreSQL called fdatasync() on a file
on the same filesystem -- and the filesystem mount is still async!

Sane applications know about the Unix/Linux file semantics....

IMHO a "sync"-mounted filesystem is a workaround for broken applications.

: Dipl-Ing Philipp Reisner                      Tel +43-1-8178292-50 :
: LINBIT Information Technologies GmbH          Fax +43-1-8178292-82 :
: Schönbrunnerstr 244, 1120 Vienna, Austria    http://www.linbit.com :
