Re: [DRBD-user] benchmark / setup Questions

Bernd Oelker [HSP] Bernd.Oelker at hspg.de
Fri Jun 22 22:47:36 CEST 2007

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hello list,

I have read some messages about benchmarks and performance with different
configurations of NICs, servers, Gigabit Ethernet, etc.
Does anybody know whether an Ethernet MTU of 9000 is recommended when using
Gigabit NICs? Larger Ethernet frames should move large amounts of data more
quickly, while ACKs, NACKs, etc. may become more costly.
Is there any practical experience with this network parameter?
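
What I have in mind is roughly the following; eth1 here is only a placeholder
for the dedicated replication link, and of course every NIC and switch on the
path would have to support 9000-byte frames:

    # on both nodes
    ip link set dev eth1 mtu 9000

    # check that 9000-byte frames really pass unfragmented
    # (8972 = 9000 - 20 bytes IP header - 8 bytes ICMP header)
    ping -M do -s 8972 <address-of-the-peer>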

Thanks

Bernd



-----Original Message-----
From: drbd-user-bounces at lists.linbit.com [mailto:drbd-user-bounces at lists.linbit.com] On Behalf Of Lars Ellenberg
Sent: Friday, 22 June 2007 17:38
To: drbd-user at lists.linbit.com
Subject: Re: [DRBD-user] benchmark / setup Questions

On Fri, Jun 22, 2007 at 12:13:06PM +0200, 01flipstar at web.de wrote:
> Hi List,
> 
> I'm new to DRBD and HA Linux, but in the last few days I have spent
> quite some time with my little setup.
> 
> Setup: two PCs, each with:
> ~1 GHz CPU
> ~512 MB RAM
> 100 Mbit/s LAN
> 
> DRBD-0.8.0.3
> Kernel 2.6.21.3
> Suse 10.2
> 
> What I want is an OCFS2 partition on top of a DRBD (active/active) setup.
> 
> Everything works so far, but it was quite hard to find out that:
> 
> - in an active/active setup only protocol C is supported -> found in the 
> source code only (call me an optimist)

hm. I thought that would be obvious ...
I'll add a note to the "allow-two-primaries" paragraph in the man page.
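
for reference, a minimal drbd.conf sketch of such an active/active resource;
host names, devices, disks and addresses below are placeholders, not taken
from your setup:

    resource r0 {
      protocol C;                # the only protocol valid with two primaries
      net {
        allow-two-primaries;     # needed for the active/active OCFS2 setup
      }
      on node1 {
        device    /dev/drbd0;
        disk      /dev/sda7;
        address   10.0.0.1:7788;
        meta-disk internal;
      }
      on node2 {
        device    /dev/drbd0;
        disk      /dev/sda7;
        address   10.0.0.2:7788;
        meta-disk internal;
      }
    }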

> The problem is that when both PCs concurrently write to one file, the 
> performance gets really bad.
> 
> My benchmark writes single lines into a file with write(); over one 
> minute the overall throughput is OK, but some writes take about 
> 300-500 ms.
> 
> I built two kernels, one with preemption and a timer frequency of 1000 Hz 
> and one without preemption and a timer frequency of 100 Hz, but the times 
> did not vary.
> 
> Maybe the DLM is the bottleneck??

of course it is.
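
for illustration only: that benchmark boils down to something like the shell
sketch below; the mount point and file name are placeholders, not from your
post, and each echo is roughly one write() on the already-open descriptor:

    #!/bin/bash
    # crude per-write latency probe: append short lines for one minute
    # and print how long each individual write took (in ms)
    exec 3>> /mnt/ocfs2/latency-test.txt        # placeholder path
    end=$(( $(date +%s) + 60 ))                 # run for one minute
    while [ "$(date +%s)" -lt "$end" ]; do
        t0=$(date +%s%N)
        echo "one short line of test data" >&3
        t1=$(date +%s%N)
        echo $(( (t1 - t0) / 1000000 )) ms
    done
    exec 3>&-

running it on both nodes at the same time against the same file reproduces
the concurrent-writer case.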

> I thought that I could improve the performance by writing the 
> meta-data not to the disk but to a ramdisk -> yes, for testing only :-)

uh oh.
well, you already said you are an optimist.

> because the meta-data gets updated by every change on the disk, right??

no. but it does not matter here anyway. it is drbd meta data.
it has nothing to do with bouncing ocfs locks between nodes.

> so the disk head has to be moved and cannot write sequentially???


> So I tried to add
> 
>     meta-disk /dev/ram0[0];
> 
> to my /etc/drbd.conf.
> When I create the meta-data I get:
> 
>     Command 'drbdmeta /dev/drbd0 v08 /dev/ram0 0 write-dev-uuid 
> 587CBE437E1F2D14' terminated with exit code 255
> 
> The ramdisk is > 128 MB, so this should not be the problem.
> 
> 
> Is there a way to get the meta-data on a ramdisk?

I'm not sure, I never tried it, and it does not make sense anyway.
and it would not help you either.

> Has somebody experiences with ocfs2 and drbd and can give me a hint 
> what to change??

when you care about performance on a cluster file system:
don't write to the same file on more than one node at a time.
try to not even read a file that is currently being written to by someone
else. if you can avoid it, don't even access the same directories from more
than one node if one node is known to create/modify many files there.

and nothing of that has anything to do with drbd, yet.
it is all about the latency of lock bouncing, cache coherence, cache
invalidation, and the re-reads from storage they make necessary.

-- 
: Lars Ellenberg                            Tel +43-1-8178292-0  :
: LINBIT Information Technologies GmbH      Fax +43-1-8178292-82 :
: Vivenotgasse 48, A-1120 Vienna/Europe    http://www.linbit.com :
__
please use the "List-Reply" function of your email client.