[DRBD-user] Hardware recommendation needed

Lukas Gradl proxmox at ssn.at
Wed Apr 4 03:16:18 CEST 2012

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


On Tuesday, 03.04.2012 at 12:03 +0200, Felix Frank wrote:
> Hi,
> 
> On 04/03/2012 11:53 AM, Lukas Gradl wrote:
> > For use with KVM with automatic failover I need a primary/primary setup,
> > so AFAIK protocol C is required.
> 
> For dual-primary it is required, yes. You do need dual-primary for live
> migrations. You do *not* need it for automatic failover (in failure
> scenarios, live migration won't do you any good, anyway).
> 
> If live migration isn't an issue for you, single-primary is perfectly
> fine! You still want protocol C though :-)
> 
> > According to my benchmarks DRBD is much slower in that setup than native
> > HDD performance and changing the Network-Setup from 1GBit direct link to
> > 2 bonded interfaces doesn't increase speed.
> 
> Have you identified the exact bottleneck inside your DRBD setup?
> Have you done analysis according to
> http://www.drbd.org/users-guide/ch-benchmark.html?

Yes.

I benchmarked exactly as described in that doc. 

Throughput hardly changes: 85 MB/s on the raw device, 83 MB/s on the
DRBD device
(average of 5 runs of "dd if=/dev/zero of=/dev/drbd1 bs=512M count=1
oflag=direct").

Latency, however, suffers badly: writing the 1000 512-byte blocks took
0.05397 s on the raw device versus 12.757 s on the DRBD device
(average of 5 runs of "dd if=/dev/zero of=/dev/drbd1 bs=512 count=1000
oflag=direct").

Additionally, I tried DRBD with internal metadata on the SATA disk and
with external metadata on the boot SSD; there was no significant difference.

My drbd.conf looks like this (r0 keeps its metadata externally on
/dev/sda3, the boot SSD; r1 uses internal metadata):

global { 
        usage-count no; 
}
common { 
        protocol C;
        syncer { 
                rate 120M; 
                al-extents 3389;
        } 
        startup {
                wfc-timeout  15;
                degr-wfc-timeout 60;
                become-primary-on both;
        }
        net {
                cram-hmac-alg sha1;
                shared-secret "secret";
                allow-two-primaries;
                after-sb-0pri discard-zero-changes;
                after-sb-1pri discard-secondary;
                after-sb-2pri disconnect; 
                sndbuf-size 512k;
        }
}
resource r0 {
        on vm01 {
                device /dev/drbd0;
                disk /dev/sdb3;
                address 10.254.1.101:7780;
                meta-disk /dev/sda3[0];
        }
        on vm02 {
                device /dev/drbd0;
                disk /dev/sdb3;
                address 10.254.1.102:7780;
                meta-disk /dev/sda3[0];
        }
}
resource r1 {
        on vm01 {
                device /dev/drbd1;
                disk /dev/sdb1;
                address 10.254.1.101:7781;
                meta-disk internal;
        }
        on vm02 {
                device /dev/drbd1;
                disk /dev/sdb1;
                address 10.254.1.102:7781;
                meta-disk internal;
        }
}

The two nodes are linked by a direct Gigabit connection used
exclusively by DRBD.
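
For what it's worth, the link itself can be sanity-checked with
something like the following (just a sketch; IPs as in the config
above, and iperf would have to be installed on both nodes). With
protocol C every write has to wait for the peer's ACK, so the round
trip on this link adds directly to the per-write latency:

# round-trip time on the replication link (run on vm01)
ping -c 100 -q 10.254.1.102

# raw TCP throughput over the same link
# on vm02: iperf -s
# on vm01: iperf -c 10.254.1.102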

> 
> > What do the experts think: Should this be sufficient to get the
> > performance of a single SATA disk without DRBD?
> 
> I don't really feel addressed ;-) but here's my 2 cents:
> 
> If DRBD performance with rotational disks is dissatisfactory, I wouldn't
> count on faster disks somehow solving the problem. You *may* save enough
> latency to make the setup worthwhile, but myself, I'd rather keep trying
> to root out the main problem.

I would like to do so, but I have no real idea what the problem might be.

regards
Lukas


-- 
--------------------------
software security networks
Lukas Gradl <proxmox#ssn.at>
Eduard-Bodem-Gasse 6
A - 6020 Innsbruck
Tel: +43-512-214040-0
Fax: +43-512-214040-21
--------------------------



