Re: [DRBD-user] 8.x performance vs 0.7x --- was (not urgent) Request for DRBD Developers

Petersen, Joerg j.petersen at msh.de
Fri May 30 16:32:04 CEST 2008



Hello everyone,

  I'm struggling with the same issue.

With drbd 8.0.6 on CentOS 5.1,
	on an IBM xSeries x3650, ServeRAID-8k-l (with battery), RAID-6, 6x 300 GB/15k disks and Gigabit Ethernet,
	I get ~90 MB/s write throughput with /dev/drbd stacked on an LV.
	>>> optimal, limited only by the Ethernet link.
	(Without DRBD: 200 - 300 MB/s)

With drbd 8.2.6 I'm stuck at 18.0 MB/s;
with the new parameters no-disk-flushes and no-md-flushes
it is still limited to 74 MB/s.

So there's still some loss of speed between 8.0.8 and later versions?

Be careful: with small amounts of data (below 4 GB) the
degradation is not as visible!

Anyone any idea?


Tested with:
-------------
drbdadm primary test;
cat /proc/drbd; 
for i in rootvg/4direct drbd1 ; do echo /dev/$i:; dd if=/dev/zero of=/dev/$i bs=1M count=4096; done

/dev/rootvg/4direct:
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 19.4024 seconds, 221 MB/s
/dev/drbd1:
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 238.107 seconds, 18.0 MB/s
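As a side note, plain dd as used above reports throughput before the last data has necessarily been flushed, which can flatter short runs. A minimal sketch of a flush-inclusive variant (writing to a scratch file /tmp/ddtest instead of the devices above, and assuming GNU coreutils dd, which supports conv=fdatasync):

```shell
# Sketch only: conv=fdatasync makes dd call fdatasync() before it
# reports, so the printed throughput includes flushing the written
# data out of the page cache. /tmp/ddtest is a stand-in scratch file.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync
rm -f /tmp/ddtest
```

On a real device test one would point of= at the device and raise count accordingly.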


  disk {
    on-io-error detach;
    no-disk-flushes;
    no-md-flushes;
  }
===>
/dev/rootvg/4direct:
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 19.5529 seconds, 220 MB/s
/dev/drbd1:
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 57.6269 seconds, 74.5 MB/s
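For completeness, disk/md flushes are not the only throughput knobs in drbd 8.x; a hypothetical tuning fragment (option names as documented for drbd 8, values purely illustrative and not taken from this thread) might look like:

  net {
    sndbuf-size 512k;
    max-buffers 8000;
    max-epoch-size 8000;
  }
  syncer {
    rate 100M;
    al-extents 257;
  }

Whether any of these helps depends on the hardware; they are listed only as things worth ruling out before blaming the version change alone.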


Jörg

-----Original Message-----
From: drbd-user-bounces at lists.linbit.com [mailto:drbd-user-bounces at lists.linbit.com] On behalf of Philipp Reisner
Sent: Saturday, 24 May 2008 21:35
To: drbd-user at lists.linbit.com
Subject: Re: [DRBD-user] 8.x performance vs 0.7x --- was (not urgent) Request for DRBD Developers

[...]
> >>  Personally, I'm still on 8.0.8. Later versions at the time seemed 
> >> to show _VASTLY_ reduced performance on my hardware, so I excluded 
> >> it from the updates. I'm not experiencing any problems, so I'm sticking with it.

That is due to this change:

8.0.9 (api:86/proto:86)
--------
 * In case our backing devices support write barriers and cache
   flushes, we use these means to ensure data integrity in
   the presence of volatile disk write caches and power outages.
[...]

8.0.12 (api:86/proto:86)
--------
[...]
 * Two new config options no-disk-flushes and no-md-flushes to disable
   the use of io subsystem flushes and barrier BIOs.
[...]

But please *read* the documentation about "no-disk-flushes" and understand the implications before you set these options.

    # In case you are sure that your storage subsystem has battery
    # backed up RAM and you know from measurements that it really honors
    # flush instructions by flushing data out from its non volatile
    # write cache to disk, you have double security. You might then
    # reduce this to single security by disabling disk flushes with
    # this option. It might improve performance in this case.
    # ONLY USE THIS OPTION IF YOU KNOW WHAT YOU ARE DOING.
    # no-disk-flushes;
    # no-md-flushes;

-Phil
_______________________________________________
drbd-user mailing list
drbd-user at lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


