[DRBD-user] performance help with new servers

Roof, Morey R. MRoof at admin.nmt.edu
Fri Sep 30 00:22:33 CEST 2011

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hi Michael,

 

This was quite interesting to read as it isn't that different from my
setup.  The raw device performance of the RAID is about 550MB/s on
writes and about 750MB/s on reads.  I haven't seen a 9260-8i with 1GB
of RAM.  Where did you find that card?  I'm using the LSI 9260-8i with
512MB.
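
In case anyone wants to compare numbers, raw figures like those can be
reproduced with plain dd runs against the backing device, along these
lines (just a sketch; /dev/sdb1 stands in for whatever partition your
array is on, direct I/O bypasses the page cache, and the write test
destroys whatever is on the device):

# sequential write, 1M blocks, direct I/O
dd if=/dev/zero of=/dev/sdb1 bs=1M count=10000 oflag=direct

# sequential read, 1M blocks, direct I/O
dd if=/dev/sdb1 of=/dev/null bs=1M count=10000 iflag=direct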

 

I'm also using Seagate Constellation drives.  However, the big
difference between our machines is that you have a Myricom card while
I've got an Intel 82599-based 10GbE card.  On my HP servers I always
used the Myricom cards, as they work extremely well, but on this order
someone got sold on saving money by using systems with the Intel LOM
setup.

 

Have others had a lot of trouble using Intel-based NICs?

 

Thanks,

Morey

 

________________________________

From: Kushnir, Michael (NIH/NLM/LHC) [C]
[mailto:michael.kushnir at nih.gov] 
Sent: Thursday, September 29, 2011 4:14 PM
To: Roof, Morey R.; drbd-user at lists.linbit.com
Subject: RE: performance help with new servers

 

Hi Morey,

 

It really depends on your RAID config and your network card.  I am
using Dell PE-C 2100s with LSI's 9260-8i RAID card with 1GB of RAM on
board.  I have write-back caching enabled and read-ahead caching as
well.  My RAID10 set on each server is made up of 10 x 1TB Seagate
Constellation 3Gb/s SATA drives with a 1MB stripe size.  My 10GbE
cards are Myricom SFP+ cards.  On the initial sync I was able to
squeeze about 350MB/s transfer speeds between the servers.  RAID 5 and
6 will give you worse performance; RAID 10, 50, and 60 should do much
better.  Newer LSI cards also have the option of adding SSDs as a read
and/or write cache for your RAID array, plus various SSD protection
features to make them last.  You will need the LSI GUI for Linux to
find these features easily.  Also, I've heard of notoriously poor
performance with 1st-gen Intel cards on Linux.
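
If you want to double-check what the controller is actually doing, and
you have LSI's MegaCLI command-line tool installed, something like the
following should show the current cache policy and BBU state (just a
sketch; the binary name varies by package, e.g. MegaCli or MegaCli64):

# show logical drive settings, including the current cache policy
MegaCli64 -LDInfo -Lall -aALL

# show battery backup unit status
MegaCli64 -AdpBbuCmd -GetBbuStatus -aALL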

 

My setup is still pre-production, so my drbd.conf file is not finalized
and has some inadvisable entries, but please take a look:

 

global {

# minor-count 64;

# dialog-refresh 5; # 5 seconds

# disable-ip-verification;

usage-count no;

}

 

common {

 

syncer {

        rate 100M; # My bottleneck is the RAID array, so this is capped
        # so I can still use the storage during a sync.

        #al-extents 257;

        }

 

handlers {

        #fence-peer "/usr/lib64/heartbeat/drbd-peer-outdater -t 5";

        #pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";

        #pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";

        #local-io-error "echo o > /proc/sysrq-trigger ; halt -f";

        }

 

 

disk {

        fencing resource-only;

        no-disk-barrier;

        #use-bmbv;

        on-io-error detach;

        no-disk-flushes;

        }

 

net {

        # data-integrity-alg md5;

        allow-two-primaries;

        after-sb-0pri discard-older-primary;

        after-sb-1pri discard-secondary;

        #after-sb-0pri disconnect;

        #after-sb-1pri disconnect;

        after-sb-2pri disconnect;

        #rr-conflict disconnect;

 

 

}

 

startup {

        wfc-timeout 5;

        degr-wfc-timeout 5;

        become-primary-on both;

        }

}

 

resource drbd0 {

protocol C;

 

on **** {

  device /dev/drbd0;

  disk /dev/sdb1;

  address ****;

  meta-disk internal;

}

 

on ****{

  device /dev/drbd0;

  disk /dev/sdb1;

  address ****;

  meta-disk internal;

  }

}
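
One more thing I have been meaning to try, though it is not in the
config above: on a 10GbE link the DRBD 8.3 buffer tunables can make a
real difference.  Something along these lines is only a starting point,
not a tested recommendation, and the values are illustrative:

common {

syncer {
        rate 300M;       # raise the resync rate if the array can keep up
        al-extents 3389; # larger activity log, fewer metadata updates
        }

net {
        max-buffers 8000;    # more buffers for a fast link
        max-epoch-size 8000; # allow larger write epochs
        sndbuf-size 0;       # 0 lets the kernel auto-tune the TCP send buffer
        }

}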

 

Best,

Michael

 

 

From: Roof, Morey R. [mailto:MRoof at admin.nmt.edu] 
Sent: Thursday, September 29, 2011 1:48 PM
To: drbd-user at lists.linbit.com
Subject: [DRBD-user] performance help with new servers

 

Hi Everyone,

 

I got some new servers recently and they have LSI MegaRAID-based cards
(with battery backup units), 10K SAS drives, and 10GbE cards.  At the
moment the performance is quite a bit less than I would like, about
90MB/s with DRBD protocol C.  My older HP servers do much better than
this, so if anyone has similar servers and is getting much better
performance, could you share your drbd.conf file to help me figure out
what I'm missing?
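
If it helps, I can also check the raw 10GbE path between the boxes
independently of DRBD with a plain iperf run, roughly like this
(hostname is a placeholder):

# on the peer node
iperf -s

# on this node, 30 second test with 4 parallel streams
iperf -c peer-hostname -t 30 -P 4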

 

Thanks,

Morey


