Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Rik,

DRBD will be a bottleneck only if your hard drives can write faster than 120MB/s. This is because the theoretical maximum throughput of a gigabit card is 1000Mbps, which corresponds to around 120MB/s (megabytes per second). So if your computers are configured with a dedicated gigabit link, that link is going to be your bottleneck when using protocol C on DRBD. You may get higher speeds by using protocols A or B, but those are not as good as protocol C for data integrity.

I am attaching some benchmarks that I made on my servers using bonnie++. My configuration is 2 dual-core AMD Opteron 270s, 4GB RAM, and an Areca ARC-1160 with 15 250GB hard drives: 2 drives are mirrored for the OS, 12 are in RAID 10, and 1 is a hot spare. Please note that my drives are only SATA 150, so I am not getting the maximum speed I could if I had used SATA 3Gbps; at the time the servers were put together, SATA 3Gbps drives were either unavailable or too expensive.

Benchmark explained: I created the biggest RAID array I could out of the 12 drives and then made partitions every 1TB. That is why you will see part1 and part2 for RAID 10, and part1, part2 and part3 for RAID 5 and 6. I then ran bonnie++ 3 times on each partition and put the averaged results in the attached table.

You can clearly see from the first two benchmark results how DRBD limits the write speed on RAID 10. Writing directly to the disk with no DRBD (first row in the table), I get 231,276 K/sec writes. With DRBD on the same RAID 10 (row 2) I get 123,738 K/sec writes. But wait: 123,738 K/sec looks *incredibly* similar to 120MB/s, which is the maximum theoretical bandwidth of the gigabit network cards I use for the DRBD connection.

In any case, writes of 123,738 K/sec are not bad at all; that is still very fast. I can tell you that this DRBD setup beats the crappy Dell PowerVault 220S with SCSI drives that I have configured in cluster mode to provide HA without DRBD. The 220S has 12 147GB SCSI drives configured in RAID 5, and my DRBD/Areca/SATA setup beats the Dell by a factor of almost 5 speed-wise. The other problem with the Dell setup is that the PowerVault 220S itself is a single point of failure. DRBD rocks!!!

If your Oracle application can live with 120MB/s writes, I would say go ahead and use DRBD. If it cannot, then you either need to upgrade to 10Gbps NICs and check whether DRBD can sustain those speeds (I think I recently saw a posting mentioning a limitation on the order of hundreds of MB/s for the DRBD link, something like 500 or 700; check the mailing list), or just don't use DRBD.
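For anyone who wants to sanity-check the gigabit arithmetic, here is a minimal sketch (Python). It is a pure unit conversion: it treats bonnie++'s K/sec as thousands of bytes per second (an assumption; bonnie++ may report binary kilobytes) and ignores TCP/IP and DRBD framing overhead, which would only push the real usable ceiling a little lower.

    # Why a dedicated gigabit link caps DRBD protocol C writes near 120MB/s.
    # Pure unit conversion; protocol overhead is ignored, so the real
    # usable ceiling sits slightly below the raw wire figure.

    link_mbps = 1000                 # GigE nominal line rate (megabits/s)
    link_ceiling = link_mbps / 8.0   # 125 MB/s, absolute wire maximum

    with_drbd = 123738 / 1000.0      # bonnie++ write, RAID 10 + DRBD (MB/s)
    without_drbd = 231276 / 1000.0   # same array, no DRBD (MB/s)

    print(f"link ceiling : {link_ceiling:.0f} MB/s")
    print(f"with DRBD    : {with_drbd:.1f} MB/s (pinned at the network)")
    print(f"without DRBD : {without_drbd:.1f} MB/s (disk-limited)")

The DRBD number lands essentially on the wire ceiling, which is the whole point: the disks can do almost double that, so the replication link, not DRBD itself, is what limits you.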
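The averaging step itself is trivial, but if you want to reproduce it from bonnie++'s machine-readable output (-q mode prints one CSV row per run), something like the following works. This is only a sketch of my workflow, not part of any benchmark script; it assumes each of the 3 runs appended one CSV row to the same file, and it averages columns generically rather than hard-coding any particular bonnie++ column layout.

    # Average each numeric column across bonnie++ CSV rows (one row per
    # run); non-numeric fields (name, version) keep the first row's value.
    import csv
    import sys
    from statistics import mean

    def average_runs(path):
        with open(path, newline="") as f:
            rows = [row for row in csv.reader(f) if row]
        out = []
        for col in zip(*rows):
            try:
                out.append("%.0f" % mean(float(v) for v in col))
            except ValueError:
                out.append(col[0])
        return out

    if __name__ == "__main__":
        # usage: python average_runs.py raid10_part1.csv
        print(",".join(average_runs(sys.argv[1])))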
>> >> >>-----Original Message----- >>From: drbd-user-bounces at lists.linbit.com on behalf >>of Rik Herrin >>Sent: Thu 3/9/2006 3:06 AM >>To: drbd-user at lists.linbit.com >>Subject: [DRBD-user] Anyone Get Write Speeds over >>150MB/s Writes using DRBD? >> >>Hi, >> I am currently evaluating the use of drbd for >>real-time replication of an Oracle DB. The hardware >>that the Oracle DB will be running on involves 2 >>dual-core AMD Opterons, 4 GB RAM, an LSI MegaRAID >>320-2X SCSI RAID Controller with 512MB NVRAM, and >>10k >>SCSI Hard drives (8 of them). An Intel Pro/1000 MT >>Quad Pro Server Adapter card will be used for >>networking, with 2 ports dedicated to the DRBD >>connection to a similarly configured machine. Would >>drbd be a bottleneck in this configuration? The NIC >>should be able to deliver about 180MB/s or so and so >>should the SCSI RAID controller. Anyone have >>experience with this type of hardware? Thank you >>for >>your time. >> >>__________________________________________________ >>Do You Yahoo!? >>Tired of spam? Yahoo! Mail has the best spam >>protection around >>http://mail.yahoo.com >>_______________________________________________ >>drbd-user mailing list >>drbd-user at lists.linbit.com >>http://lists.linbit.com/mailman/listinfo/drbd-user >> >> > > > > __________________________________________________ > Do You Yahoo!? > Tired of spam? Yahoo! Mail has the best spam protection around > http://mail.yahoo.com > _______________________________________________ > drbd-user mailing list > drbd-user at lists.linbit.com > http://lists.linbit.com/mailman/listinfo/drbd-user -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.linbit.com/pipermail/drbd-user/attachments/20060310/34d83529/attachment.html>