Three questions.

Do you really see that large a performance boost on 2.6?
Does it apply to protocol C as well?
Does anyone know why it is so much faster on 2.6?

Rob

-----Original Message-----
From: drbd-user-bounces@lists.linbit.com on behalf of Bernd Schubert
Sent: Tue 8/2/2005 3:20 PM
To: logch_l@wldelft.nl; drbd-user@linbit.com
Subject: Re: [DRBD-user] Reducing loadavg / iowait? a few more questions

On Tuesday 02 August 2005 22:31, logch_l@wldelft.nl wrote:
> >> -----------------
> >> resource drbd0 {
> >> protocol C;
> >
> > Depending on your requirements regarding data safety, you could try
> > protocol A or B; we are using B and are quite pleased.
>
> Thanks for the suggestions.
>
> about nfsd) I can't export async because we need the most graceful
> fail-over; there are always HPC clients writing.

Hmm, pity. We began using drbd last summer and, after some nasty kernel
bugs, were forced to switch from 2.6 to 2.4; in the meantime there is at
least a workaround for this bug and we can use 2.6 again. Without the async
nfsd export option we probably would have been forced to disable drbd, since
with 2.4 it was by far too slow (around 11-14 MB/s to the drbd device with
2.4, but 40-60 MB/s with 2.6). However, drbd is mostly used for the home
directory, which mainly holds our source code, papers and other relatively
small data (program I/O is stored on other, non-failover devices). With 2.4
the latency of nfs+sync+drbd was so high that compiling our programs became
dramatically slower; that alone would have been the main reason to disable
drbd. With the async option, the latency on the clients went back almost to
the sync+2.6 numbers.
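
Just for reference, the sync/async decision is a per-export option in
/etc/exports; a minimal sketch (the path and client subnet are only
placeholders for whatever you actually export):

    # async: nfsd acknowledges writes before they hit the disk, which
    # gives much lower latency, but the last writes can be lost if the
    # server dies before flushing
    /home   192.168.0.0/255.255.255.0(rw,async)

    # sync: safe, but this is what made nfs+drbd on 2.4 so slow for us
    # /home 192.168.0.0/255.255.255.0(rw,sync)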

>
> drbd protocol) I was wondering about using protocol B after some googling,
> but couldn't get anyone to confirm that it does reduce latency. Regarding

Well, I readily admit that the async option made the biggest difference, but
switching from protocol C to B, and later to A, had a noticeable effect, at
least with 2.4 on our systems.
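
In drbd.conf this is just the protocol line of the resource section, along
the lines of the snippet you quoted (only a sketch; the rest of the section
stays as you have it):

    resource drbd0 {
        # A: write is confirmed once it is in the local TCP send buffer
        # B: write is confirmed once the peer has received it (RAM)
        # C: write is confirmed once the peer has written it to disk
        protocol B;
        # disk/net/syncer sections unchanged
    }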

> safety, our drbd cluster nodes never go down at exactly the same moment
> (the only situation in which a drbd 'write' can get lost using proto B).
>
> Q1: How does using B make a difference?

I'm afraid you will have to test this yourself; I guess it varies strongly
from system to system.

> Q2: Can one change to protocol B on-the-fly using 'drbdadm adjust'?

No clue, maybe the linbit people (Lars?) know?
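If it works at all, I would expect it to be the usual adjust cycle (the
resource name is just the one from your config):

    # change "protocol C;" to "protocol B;" in /etc/drbd.conf, then:
    drbdadm adjust drbd0
    cat /proc/drbd    # verify whether the protocol actually changed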

>
> drbd hot area) Currently we use 521 al-extents for a 1.4 TB device. How
> much difference does it make when we increase this number? (number of
> writes vs. size of al-extents) And can this also be changed on-the-fly?

Also no idea.
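For reference, the parameter sits in the syncer section of drbd.conf; as far
as I know each al-extent covers 4 MB, so a sketch would be (the numbers are
just examples, not a recommendation):

    syncer {
        rate       10M;    # example sync rate only
        al-extents 521;    # 521 * 4 MB = roughly 2 GB of "hot" area
        # a larger (prime) number means fewer meta-data updates for
        # scattered writes, but a longer resync after a primary crash
    }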

I guess that for your problem NFSv3 (and v2, of course) is not the best
option. Personally I have no experience with NFSv4 yet, but judging from the
specifications, especially the client-side caching similar to what AFS has,
it is probably much better suited to your situation. Pity that it is still
under heavy development; however, I will test it in the near future, since
we in principle need its much better security capabilities.
Another option for you would be the highly experimental cachefs, which is,
as far as I know, in -mm, and there are even more experimental nfs patches
for it. Though probably none of those experimental things is really ready
for your or our usage...

Cheers,
Bernd

--
Bernd Schubert
PCI / Theoretische Chemie
Universität Heidelberg
INF 229
69120 Heidelberg
e-mail: bernd.schubert@pci.uni-heidelberg.de
_______________________________________________
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user