[DRBD-user] Correct way to benchmark IO on DRBD
escario at azylog.net
Tue Feb 20 12:08:15 CET 2018
I'm trying to benchmark my lab setup *correctly*.
Pretty simple: a 2-node Proxmox setup, protocol C, ZFS RAID1 HDD backend with
mirror log and cache on SSDs.
DRBD9 over a 10 Gbps Ethernet network, with latency tuned after reading a lot of
papers on the subject.
What I'm trying: run fio with the parameters below in two VMs running on
hypervisor A and two VMs running on hypervisor B.
The VMs are really simple: Ubuntu 17.10 with 5 GB disks.
fio command line :
fio --name=randrw-test --filename=/tmp/test.dat --size=1G --direct=1 \
    --rw=randrw --rwmixwrite=30 --refill_buffers --norandommap --randrepeat=0 \
    --ioengine=libaio --bs=128k --iodepth=16 --numjobs=1 --time_based \
    --runtime=600 --group_reporting
Which means: 1 GB test file, 70% reads, 30% writes, 128k block size, iodepth 16.
I'm not really sure about the other parameters.
My final goal is to get an idea of how many VMs I will be able to run on these
hypervisors with a typical per-VM workload of ~500 kB/s writes and ~2 MB/s reads.
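If I read the fio man page correctly, its --rate option caps bandwidth, with
comma-separated values for reads and writes, so something like the command below
should pin a single VM close to that target workload. Untested sketch; the job
name is arbitrary:

```shell
# Cap one fio job at ~2 MB/s reads and ~500 kB/s writes (--rate=<read>,<write>)
fio --name=capped --filename=/tmp/test.dat --size=1G --direct=1 \
    --rw=randrw --rwmixwrite=30 --bs=128k --ioengine=libaio --iodepth=16 \
    --rate=2m,500k --time_based --runtime=600
```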
What would be *really* cool: the ability to instantiate a bunch of VMs running
this workload and see when the hypervisors become overloaded. Even cooler: a
dynamic workload around a threshold (500 kB/s at one moment, then +/-10%
randomness a minute later).
Does anyone have an example of such a piece of code?
How do you benchmark your disks for a 'real life' workload?
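Here is a rough sketch of the loop I have in mind (the script and its variable
names are mine, not an existing tool): every iteration starts a fresh short fio
run whose rates are jittered +/-10% around the baselines, using fio's
--rate=<read>,<write> bandwidth cap.

```shell
#!/usr/bin/env bash
# Hypothetical dynamic-workload driver: restart a 60 s fio run each minute
# with read/write rates jittered +/-10% around the target baselines.
set -u

BASE_READ_KB=2000        # ~2 MB/s baseline read rate
BASE_WRITE_KB=500        # ~500 kB/s baseline write rate
ITERATIONS="${ITERATIONS:-3}"
DRYRUN="${DRYRUN:-1}"    # default: only print the commands; set to 0 to run fio

for ((i = 0; i < ITERATIONS; i++)); do
    pct=$(( (RANDOM % 21) - 10 ))                          # -10 .. +10 percent
    read_kb=$(( BASE_READ_KB + BASE_READ_KB * pct / 100 ))
    write_kb=$(( BASE_WRITE_KB + BASE_WRITE_KB * pct / 100 ))

    cmd=(fio --name=dynwork --filename=/tmp/test.dat --size=1G --direct=1
         --rw=randrw --rwmixwrite=30 --bs=128k --ioengine=libaio --iodepth=16
         --rate="${read_kb}k,${write_kb}k" --time_based --runtime=60)

    if [ "$DRYRUN" = "1" ]; then
        echo "${cmd[@]}"                                   # preview the command
    else
        "${cmd[@]}"                                        # actually run fio
    fi
done
```

Running several copies of this in parallel (one per VM) and adding VMs until
latency degrades would, I think, give the overload point I'm after.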
Thank you !