Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
> What do the experts think: Should this be sufficient to get the performance of a single SATA disk without DRBD?
Probably not; nothing will.
I'm using DRBD in primary/primary mode to host KVM images on a two-node cluster (with drbd 8.3.12; drbd 8.4.1 has some performance issues).
I have switched to SSDs myself (in RAID 5). This improved VM performance (I guess because reads are much faster), but the DRBD syncer speed did not improve. I even installed a 10G network backbone with 10G network adapters in the servers, but the syncer speed still does not go beyond 110 MB/s.
I had Linbit look at this setup, but they could not get a higher syncer speed with protocol C either. I think the problem is that the syncer uses a single thread and is therefore limited by the processing power of one CPU. Turning off power management and IRQ balancing helped a little, but not much.
I have spent ages trying to increase the syncer rate; for now it seems limited to 110 MB/s.
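For reference, this is how I keep an eye on the actual resync speed and temporarily bump the rate while testing (DRBD 8.3 commands; the 200M value is only an example):

# show per-resource status, including the current resync speed
cat /proc/drbd

# temporarily override the syncer rate (reverted by the next "drbdadm adjust")
drbdsetup /dev/drbd0 syncer -r 200M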
This is my latest drbd.conf:
#
# You can find an example in /usr/share/doc/drbd.../drbd.conf.example
#include "drbd.d/global_common.conf";
#include "drbd.d/*.res";
#
# please have a look at the example configuration file in
# /usr/share/doc/drbd83/drbd.conf
#
global {
minor-count 64;
usage-count yes;
}
common {
syncer {
rate 110M;
verify-alg crc32c;
#csums-alg sha1; # do not use, slow performance
al-extents 3733;
cpu-mask 3;
}
}
resource VMstore1 {
protocol C;
startup {
wfc-timeout 1800; # 30 min
degr-wfc-timeout 120; # 2 minutes.
wait-after-sb;
become-primary-on both;
}
disk {
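# note: disabling barriers/flushes is only safe if the controller has a battery-backed write cache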
no-disk-barrier;
no-disk-flushes;
}
net {
max-buffers 8000;
max-epoch-size 8000;
sndbuf-size 0;
allow-two-primaries;
after-sb-0pri discard-zero-changes;
after-sb-1pri discard-secondary;
after-sb-2pri disconnect;
}
syncer {
cpu-mask 3;
}
on vmhost6a.vdl-fittings.local {
device /dev/drbd0;
disk /dev/sdb1;
address 192.168.100.37:7788;
meta-disk internal;
}
on vmhost6b.vdl-fittings.local {
device /dev/drbd0;
disk /dev/sdb1;
address 192.168.100.38:7788;
meta-disk internal;
}
}
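After editing drbd.conf I apply the changes and measure what the syncer actually achieves roughly like this (a sketch, using the resource name from above):

# load the changed settings into the running resource
drbdadm adjust VMstore1

# start an online verify (verify-alg is set above) and watch the throughput
drbdadm verify VMstore1
watch -n1 cat /proc/drbd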
Best regards,
Maurits van de Lande
-----Original Message-----
From: drbd-user-bounces at lists.linbit.com [mailto:drbd-user-bounces at lists.linbit.com] On behalf of Lukas Gradl
Sent: Tuesday, 3 April 2012 11:54
To: drbd-user at lists.linbit.com
Subject: Re: [DRBD-user] Hardware-recomendation needed
> I'm not sure I understand the question, sorry.
>
> DRBD isn't much slower than the native disk performance, provided your
> network is fast enough. So the question is less about DRBD's
> performance than about the performance you need from the storage.
> If a standard SATA drive's performance is fine, then it's all you need.
I followed the discussion about switch or no switch.
But I'm still stuck with my questions...
For use with KVM with automatic failover I need a primary/primary setup, so AFAIK protocol C is required.
According to my benchmarks, DRBD is much slower in that setup than native HDD performance, and changing the network setup from a 1 GBit direct link to two bonded interfaces doesn't increase speed.
As we only have space for one 3.5" HDD (the other bay is used by the boot SSD), I'm unable to install a RAID 5 setup.
So I'm thinking about installing two SSDs per server, using a 2x2.5" to 1x3.5" adapter, and leaving 20% of each SSD's space unpartitioned because of the lack of TRIM support.
Then I would create two DRBD devices to store the KVM images on, as sketched below.
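Roughly something like this per server (device names and the vm1/vm2 resource names are just placeholders, and assume matching resource definitions in drbd.conf):

# leave ~20% of each SSD unpartitioned (no TRIM passthrough)
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary 1MiB 80%
parted -s /dev/sdc mklabel gpt
parted -s /dev/sdc mkpart primary 1MiB 80%

# one DRBD resource per SSD
drbdadm create-md vm1 && drbdadm up vm1
drbdadm create-md vm2 && drbdadm up vm2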
Money-wise this is not cheap, but OK with our budget.
What do the experts think: Should this be sufficient to get the performance of a single SATA disk without DRBD?
regards
Lukas
--
--------------------------
software security networks
Lukas Gradl <proxmox#ssn.at>
Eduard-Bodem-Gasse 6
A - 6020 Innsbruck
Tel: +43-512-214040-0
Fax: +43-512-214040-21
--------------------------