[DRBD-user] Some interesting metadata stuff

Gennadiy Nerubayev parakie at gmail.com
Sat May 9 21:57:41 CEST 2009

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


On Sat, May 9, 2009 at 6:14 AM, Lars Ellenberg <lars.ellenberg at linbit.com> wrote:

> On Fri, May 08, 2009 at 06:27:36PM -0400, Gennadiy Nerubayev wrote:
> > Possibly related to my earlier post, but this time I focused on two things
> > primarily: SSDs and metadata.
> >
> > Hardware: DRBD on top of one X25-E in each of the DRBD nodes, configured as
> > a simple volume on a hardware RAID controller, and connected via IP over
> > InfiniBand.
> > Workload: 100% random IO writes, 8KB block size
> >
> > Direct to the disk: 77.78MB/s, average latency 2ms
> > Disconnected DRBD, metadata on ramdisk: 75.64MB/s, average latency 2ms
> > Connected DRBD, metadata on ramdisk: 50.87MB/s, average latency 4ms
> > Disconnected DRBD, metadata internal: 6.25MB/s, average latency 39ms
> > Connected DRBD, metadata internal: 6.20MB/s, average latency 39ms
> > Disconnected DRBD, metadata on a single 15K SAS disk: 43.46MB/s, average latency 5ms
> > Connected DRBD, metadata on a single 15K SAS disk: 39.32MB/s, average latency 5ms
>
> Could you add a "dm linear" to the table?
> i.e. just checking whether one small layer of "virtual" block device
> has any effect on throughput and/or latency.
> dmsetup create experiment <<<"0 $[20 <<21] linear /dev/sdX"
> then do your benchmark against /dev/mapper/experiment
> (just to see if that performs any differently than "raw" /dev/sdX)


Doesn't seem to work:
dmsetup create experiment <<< "0 $[20 <<21] linear /dev/sdb"
device-mapper: reload ioctl failed: Invalid argument
Command failed
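
A guess as to why: the linear target takes five fields, so the "Invalid
argument" above is most likely just the missing offset into the backing
device. Something like this should load (the 20 GiB size and /dev/sdb are
only placeholders):

# table format: <start> <length in sectors> linear <device> <offset>
dmsetup create experiment <<< "0 $[20 << 21] linear /dev/sdb 0"
# or map the whole disk instead of a fixed 20 GiB:
dmsetup create experiment <<< "0 $(blockdev --getsz /dev/sdb) linear /dev/sdb 0"
# clean up afterwards:
dmsetup remove experiment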


> > Full resync speeds for all tests were 180-200MB/s (about what's expected
> > from a single X25-E). There is no difference between flexible and regular
> > metadata for internal or external usage (metadata was recreated for those
> > tests). Interestingly, ~6MB/s is the same speed that I got when testing a
> > 7-disk RAID 0 15K SAS array with internal metadata (everything else the
> > same), and putting the metadata on a ramdisk moved that up to ~35MB/s.
> >
> > So for some reason, even in the case of a very fast SSD, internal metadata
> > performance for random writes is really bad. Putting it on any kind of
> > external disk brings an immediate, dramatic performance increase.
>
> IO scheduler? Try deadline and noop:
> echo deadline > /sys/block/sdX/queue/scheduler


Oooh, aaah:

Disconnected DRBD, metadata internal, deadline: 42.59MB/s, average latency 5ms
Connected DRBD, metadata internal, deadline: 39.34MB/s, average latency 5ms
Disconnected DRBD, metadata internal, noop: 44.06MB/s, average latency 5ms
Connected DRBD, metadata internal, noop: 38.61MB/s, average latency 5ms
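
For anyone wanting to reproduce the workload behind these numbers: it's 100%
random 8KB writes. A rough equivalent with fio would look something like the
following (fio itself, /dev/drbd0, the queue depth, and direct IO are only
illustrative assumptions, not necessarily the tool or settings behind the
numbers above):

# 100% random writes, 8KB blocks, direct IO against the DRBD device
fio --name=randwrite-8k --filename=/dev/drbd0 \
    --rw=randwrite --bs=8k --ioengine=libaio --iodepth=32 \
    --direct=1 --time_based --runtime=60 --group_reporting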

After running the above deadline and noop tests, I went back to cfq and got
results identical to the original test (~6MB/s). The kernel, for reference, is
2.6.29.1.
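
For completeness, checking and switching the scheduler per device looks like
this (sdb here is just a placeholder for the backing disk; the elevator= boot
parameter is one way to make the change persistent):

cat /sys/block/sdb/queue/scheduler      # the active scheduler is shown in brackets
echo deadline > /sys/block/sdb/queue/scheduler
# to set the default for all devices at boot, append to the kernel command line:
#   elevator=deadline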

-Gennadiy