[DRBD-user] drbd resource write performance and sync speeds
chibi at gol.com
Wed Apr 9 11:56:50 CEST 2008
On Wed, 9 Apr 2008 10:49:57 +0200 Ralf Gross wrote:
> Christian Balzer wrote:
> > > If your local I/O subsystem pulls 120MB/s, the expected max DRBD
> > > throughput is around 105 MB/s:
> > >
> > > - disk does 120,
> > > - Gigabit Ethernet realistically does 110,
> > Yeah, various network tests get near that value.
> > > - so the network is your bottleneck,
> > It's a bonded dual GigE. Initially done to give DRBD enough breathing
> > room and left in place to have more redundancy for it and heartbeat.
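Just to spell out that estimate: DRBD can't be faster than the slower
of the disk and the wire, minus whatever the replication protocol costs
on top. A back-of-the-envelope sketch in Python, where the 5% overhead
figure is only my assumption, chosen to land near the 105 MB/s quoted
above:

    def expected_drbd_mb_s(disk_mb_s, wire_mb_s, overhead=0.05):
        # the slower path sets the ceiling; knock a bit off for protocol cost
        return min(disk_mb_s, wire_mb_s) * (1.0 - overhead)

    print(expected_drbd_mb_s(120, 110))   # -> 104.5, roughly the 105 above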
> Are the servers directly connected or are you using a switch to connect?
> I tried the round-robin bonding mode with Cisco switches, but Cisco
> doesn't support rr mode, at least not when I tested it. So in the end
> the connection between my 2 servers was still limited by a single GbE
> link.
Yeah, I heard that from somebody else before, too. But no need
(distance/topology wise) to involve switches and thus more possible
points of failure in my case, so direct connections it is.
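For what it's worth, a quick way to sanity-check what a back-to-back
bond really delivers is a plain memory-to-memory TCP push, so the disks
stay completely out of the picture. A rough Python sketch (port and
transfer size are arbitrary; start it with "server" on one node and
with the peer's address on the other):

    import socket, sys, time

    PORT = 5001                       # arbitrary test port
    CHUNK = b"\0" * (1024 * 1024)     # 1 MiB per send
    TOTAL_MB = 2048                   # enough to smooth out TCP ramp-up

    def server():
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("", PORT))
        s.listen(1)
        conn, addr = s.accept()
        received = 0
        start = time.time()
        while True:
            data = conn.recv(1024 * 1024)
            if not data:
                break
            received += len(data)
        secs = time.time() - start
        print("%.1f MB/s from %s" % (received / 1e6 / secs, addr[0]))

    def client(host):
        s = socket.create_connection((host, PORT))
        for _ in range(TOTAL_MB):
            s.sendall(CHUNK)
        s.close()

    if sys.argv[1:2] == ["server"]:
        server()
    else:
        client(sys.argv[1])

With round-robin bonding even a single TCP stream should go past what
one GigE port can do; if it doesn't, the bond (or the mode) is the
place to look.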
> In a test with 2 x GbE crossover connections, I was able to achieve
> ~170 MB/s with the netpipe benchmark. I didn't test this with drbd,
> but other daemons (smbd, vsftpd) didn't show a huge improvement from
> this raw speed gain. The limit was always ~80 MB/s.
That netpipe result matches mine here. I think I used a proftpd server
for the raw transfer tests (with ssh/scp the CPU was always the limiting
factor), and it was definitely over the speed of a single link. But then
again these machines have 24GB RAM, so they could serve the 8GB test
file without ever hitting the disks. So maybe your 80 MB/s is more a
limit of your storage system's bandwidth?
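If you want a crude check of that, timing a plain sequential write on
the DRBD backing store shows what the disks manage on their own. A
rough Python sketch; path and size are placeholders, and the size
should be well above the machine's RAM so the page cache can't flatter
the result (a proper benchmark like bonnie++ or dd is of course more
thorough):

    import os, sys, time

    path = sys.argv[1] if len(sys.argv) > 1 else "/data/writetest.bin"
    size_mb = int(sys.argv[2]) if len(sys.argv) > 2 else 4096
    chunk = b"\0" * (1024 * 1024)     # write 1 MiB at a time

    start = time.time()
    f = open(path, "wb")
    for _ in range(size_mb):
        f.write(chunk)
    f.flush()
    os.fsync(f.fileno())              # force the data out of the page cache
    f.close()
    secs = time.time() - start
    print("%d MB in %.1f s -> %.1f MB/s" % (size_mb, secs, size_mb / secs))
    os.unlink(path)

If that lands around 80 MB/s as well, the network isn't the limit.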
Greetings from Tokyo,
Christian Balzer Network/Systems Engineer NOC
chibi at gol.com Global OnLine Japan/Fusion Network Services