Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
----- Original Message -----
> From: "Christian Balzer" <chibi at gol.com>
> To: drbd-user at lists.linbit.com
> Sent: Friday, August 12, 2011 12:58:11 AM
> Subject: Re: [DRBD-user] Directly connected GigE ports bonded together no switch
>
>
> On Wed, 10 Aug 2011 17:20:12 -0400 (EDT) Jake Smith wrote:
>
> [Huge snip]
> >
> >
> > I tuned the MTU on the direct-link bond to 9000 and saw a 10%
> > improvement in throughput. Negligible effect on latency, though.
> >
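For anyone wanting to try the same jumbo-frame change, a minimal sketch of
the commands involved; the interface names (eth1, eth2, bond0) and the peer
address are placeholders rather than anything from this thread, and both
NICs plus the peer node have to support an MTU of 9000:

    # Raise the MTU on the slaves first, then on the bond itself
    ip link set dev eth1 mtu 9000
    ip link set dev eth2 mtu 9000
    ip link set dev bond0 mtu 9000

    # Verify 9000-byte frames actually pass without fragmentation
    # (8972 = 9000 minus 20 bytes IP header and 8 bytes ICMP header)
    ping -M do -s 8972 -c 4 192.168.100.2

The same MTU has to be set on the peer, otherwise throughput drops instead
of improving.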
> > I was getting a consistent 180-185MB/s using the throughput testing
> > script in the DRBD Users Guide with MTU 1500; iperf showed 1.75-1.85Gb/s.
> > After changing the MTU I get 198-199MB/s consistently, with highs of
> > 209-215MB/s. Without DRBD my storage controller delivers 225MB/s, so
> > there's almost no cost on the throughput side now. Iperf was rock solid
> > at 1.97-1.98Gb/s across repeated runs.
> >
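As a rough way to reproduce those two measurements, one could run iperf
across the replication link and a simple direct-I/O write test against the
device. This is only a simplified stand-in for the throughput script in the
DRBD Users Guide; the peer address and device name are placeholders:

    # Network throughput across the bond (start "iperf -s" on the peer first)
    iperf -c 192.168.100.2 -t 30

    # Sequential write throughput, bypassing the page cache. WARNING: this
    # writes directly to the device and destroys any data on it.
    dd if=/dev/zero of=/dev/drbd0 bs=512M count=4 oflag=direct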
>
> These numbers match my similar setup (dual GigE balance-RR replication
> link). If you look back in the archives you can also find my numbers for
> a quad GigE balance-RR link.
>
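For reference, a Debian-style ifupdown fragment for such a directly
connected balance-rr replication link might look like the following;
interface names and addresses are placeholders, not taken from this thread:

    # /etc/network/interfaces (one node; mirror with a different address on the peer)
    auto bond0
    iface bond0 inet static
        address     192.168.100.1
        netmask     255.255.255.0
        mtu         9000
        bond-slaves eth1 eth2
        bond-mode   balance-rr
        bond-miimon 100

balance-rr stripes packets across both ports, which is what lets a single
DRBD connection exceed the throughput of one GigE link.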
> What is more than puzzling to me are these write speeds:
>
> - The initial resync happens at near wire speed (the rate was set to
>   200MB/s, and ethstats output confirms this speed; a config sketch
>   follows after this list).
> - A mkfs (ext4) happens at about the same speed (staring at ethstats).
> - A bonnie++ run on the mounted ext4 FS of the DRBD device clocks in at
>   about 130-150MB/s, depending on trailing winds and the phase of the
>   moon. This bonnie result matches what I see from ethstats.
> - The same bonnie++ run on the underlying backing device delivers about
>   350MB/s.
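The resync rate mentioned in the first bullet would, on a DRBD 8.3-era
setup like this, be set in the resource configuration; the resource name
r0 is a placeholder, and the rest of the resource definition is assumed to
exist already:

    resource r0 {
        # disk, net and on <host> sections unchanged
        syncer {
            rate 200M;   # upper bound for resync traffic, roughly 200MB/s
        }
    }

Running "drbdadm adjust r0" afterwards applies the change to a running
resource.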
I've not used bonnie++ before, but if I'm reading it right I got about
198MB/s. What parameters did you run bonnie++ with? And here's my output so
you can make sure I'm interpreting it correctly:
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
Condor       24048M   861  99 198487  19 80007  10  3715  78 259450  17 585.5  18
Latency              9474us     263ms    1524ms     160ms     279ms     122ms
Version  1.96       ------Sequential Create------ --------Random Create--------
Condor              -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 28774  35 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
Latency               464us     396us     405us    1127us      18us      47us
1.96,1.96,Condor,1,1313173265,24048M,,861,99,198487,19,80007,10,3715,78,259450,17,585.5,18,16,,,,,28774,35,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,9474us,263ms,1524ms,160ms,279ms,122ms,464us,396us,405us,1127us,18us,47us
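Not from the original post, but an invocation along the following lines
would produce output in the shape shown above (the 24048M size and the
16-file column suggest -s 24048 and -n 16); the target directory is a
placeholder:

    # Size should be about twice the machine's RAM so the page cache
    # cannot hide the real disk throughput
    bonnie++ -d /mnt/drbdtest -s 24048 -n 16 -m Condor -u root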
>
> So what is different when a FS is mounted as opposed to the raw
> (DRBD) device? Where are those 50MB/s hiding or getting lost?
>
> Regards,
>
> Christian
> --
> Christian Balzer Network/Systems Engineer
> chibi at gol.com Global OnLine Japan/Fusion Communications
> http://www.gol.com/
> _______________________________________________
> drbd-user mailing list
> drbd-user at lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user