On Tue, Jun 24, 2008 at 12:06 PM, <justin.kinney at academy.com> wrote:

> > I've deployed DRBD on HP DL360's and DL380's. I've used internal
> > meta w/o any noticeable loss of performance. The typical setup is
> > usually (4) 72GB 15K SFF SAS drives in a RAID-10 or RAID-5 for DRBD.
> > Array -> LVM -> DRBD -> file system. In all cases, there is at least
>
> I'm setting up a very similar system here, using DL360s. But, I'm using
> MSA-30's instead of local disks.
>
> > 256MB of cache on the controller and the BBWC is installed and
> > write-cache enabled. Drive-level cache is disabled of course.
>
> By write-cache enabled, do you mean enabled through the controller
> configuration? or in DRBD?

Through the controller.

> Same question for drive-level cache. Is that at the controller or kernel?

Controller.

> > Sequential write throughput is better than 150MBps native so the
> > bottleneck becomes the 1Gbps xover link between the boxes -- which
> > is fine for these environments.
>
> I'm seeing 100MB/s during complete syncs over bonded gig-e links, so I'm
> hoping for a little better performance.

I have not deployed DRBD w/bonded interfaces yet so I'm not sure what to
expect there.

You should verify what your storage is capable of outside of DRBD.
Depending on how many spindles you have in your array and what RAID level
you've chosen, you will get well above 100MB/s for sequential writes.
Check to make sure you have BBWCs on your controllers and that write-cache
is enabled. That's where the big performance comes from. You can check
right through the OS if you've installed the CLI (hpacucli -- pretty sure
that will work for the MSA 30).

- Chris
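[Editor's note] Verifying raw sequential-write throughput outside DRBD, as
Chris suggests, can be done with a quick dd run. A minimal sketch follows;
the TESTFILE path and SIZE_MB value are placeholders (not from the thread)
-- point TESTFILE at a filesystem on the array under test and use a size
well above the controller cache for a realistic number.

```shell
#!/bin/sh
# Quick sequential-write check outside DRBD -- a rough sketch, not a full
# benchmark. TESTFILE and SIZE_MB are hypothetical defaults; adjust both
# for your environment.
TESTFILE=${TESTFILE:-/tmp/ddtest}
SIZE_MB=${SIZE_MB:-256}

# conv=fdatasync makes dd flush data to stable storage before it reports
# throughput, so the MB/s figure is not inflated by the page cache.
dd if=/dev/zero of="$TESTFILE" bs=1M count="$SIZE_MB" conv=fdatasync

# Clean up the test file.
rm -f "$TESTFILE"
```

On the HP side, something like `hpacucli ctrl all show config detail`
(exact syntax may vary by tool version) should report whether the BBWC is
present and the array accelerator / write cache is enabled.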