Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Dear Lars,

The backend devices are on an IBM ServeRAID-8k SAS Controller
<http://www-132.ibm.com/webapp/wcs/stores/servlet/ProductDisplay?productId=4611686018425211442&storeId=1&langId=-1&catalogId=-840>
with a 256MB battery-backed write cache. Are you saying I should not have seen a
performance hit with a battery-backed write cache?

I now tried with internal metadata and the options no-disk-flushes; no-md-flushes;
and it now runs at almost native speed. The results are similar to when I had the
metadata on separate storage but still left disk flushes and md flushes on. So I
don't understand - does DRBD request a flush from the backend device on every write
request? That sounds like synchronous writes...

Also, I read that if I do have a reliable battery-backed write cache I can use
no-disk-flushes; no-md-flushes;, but I wonder whether it is only the controller
(with the battery-backed write cache) that is protected. Don't hard drives have a
volatile write cache of their own? So in the case where the controller passed the
data to the HD but the HD kept it in its own cache and there was a power outage,
would I get corruption?

On Fri, Jun 20, 2008 at 3:45 PM, Lars Ellenberg <lars.ellenberg at linbit.com> wrote:

> On Thu, Jun 19, 2008 at 09:28:58PM +0200, Marcelo Azevedo wrote:
> > After placing the metadata on a different spindle (HD), I was able to reach
> > almost native speed (1-2 MB/s less).
> >
> > With internal metadata I was topping out at around half of the native speed,
> > ~37 MB/s: physical partition -> drbd -> ext3,
> > versus ~63 MB/s with physical partition -> ext3,
> > and 61 MB/s with external metadata. Now the same is true for another strong machine.
> >
> > This other machine has hardware RAID1 with two:
> > Cheetah T10 series, 146GB, Serial Attached SCSI
> > Interface Speed: 3Gb/s
> > Spindle Rotation Speed: 15,000 RPM
> > Performance: 10K
> > on an IBM 2 GHz Xeon server with 2 dual CPU packages, each CPU with 4 cores,
> > IBM ServeRAID SCSI controllers, and 4GB of RAM.
> > Native speed - hardware RAID1 -> physical partition -> ext3 - is around 110 MB/s
> > (still, isn't this a bit slow for this HD?).
> > Hardware RAID1 -> physical partition -> drbd -> ext3 gives 101 MB/s with
> > external metadata on a USB2-connected 7,200 rpm SATA HD.
> >
> > Now this is the crazy part: ~8.3 MB/s write speed with internal metadata,
> > and 150 MB/s read speed.
> > This test was repeated with bonnie, iozone and dd; all showed around the same
> > numbers. I mean, why the huge jump from 8 MB/s to 100 MB/s when using external
> > metadata, and shouldn't it be STRESSED in the docs, or when starting the program,
> > that putting the metadata on external media improves performance significantly?
> > Still, I don't understand why I was able to reach only 8 MB/s write speed on this
> > strong server - maybe because of the hardware RAID1 underneath?
>
> http://www.drbd.org/users-guide/ch-internals.html#s-metadata
>   -> internal meta data
>   -> disadvantages
>
> "head movements" aka seek time.
>
> If you use internal meta data on the same single spindle,
> without a decent battery-backed write cache,
> you want to configure a large-ish al-extents,
> so DRBD meta data updates happen infrequently.
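
For context, here is a minimal sketch of a resource with the options being discussed,
assuming DRBD 8.x configuration syntax; the resource name, hostnames, devices,
addresses and the al-extents value are placeholders, not taken from the setup above:

    resource r0 {
      disk {
        # Skip flushes to the data and meta data devices. Only safe when the
        # write cache in the I/O path is non-volatile (battery backed) and the
        # drives' own volatile caches are disabled or covered by the controller.
        no-disk-flushes;
        no-md-flushes;
      }
      syncer {
        # With internal meta data on a single spindle and no battery-backed cache,
        # a larger activity log means less frequent meta data updates (fewer seeks).
        al-extents 1801;   # example value, larger than the 8.x default
      }
      on alpha {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.1.1:7788;
        meta-disk internal;
      }
      on bravo {
        device    /dev/drbd0;
        disk      /dev/sdc1;
        address   192.168.1.2:7788;
        meta-disk internal;
      }
    }

Note that disabling flushes only moves the durability guarantee to the controller
cache; the drives' own write caches would typically be disabled through the RAID
controller's configuration or with a tool such as hdparm/sdparm.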