Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Same observations here. Large linear writes on a single-spindle backing device are throttled to less than 50% of their non-DRBD transfer rate when using internal metadata. After moving the metadata to an SSD, almost 80-90% of the original write rate is reached again (which is in fact a performance gain of 2x). But in contrast to any SSD caching solution, there is no acceleration beyond the backing device's original performance: you can only reduce the DRBD-induced losses.

Regards,
Holger

On 27.02.2013 23:13, Arnold Krille wrote:
> On Wed, 27 Feb 2013 18:32:07 +0100 Lionel Sausin <ls at numerigraphe.com>
> wrote:
>> I wouldn't expect anything like the gains of
>> bcache/flashcache/EnhanceIO. Normally internal metadata are just as
>> fast, thanks to the write cache of your disks and RAID adapter. Those
>> are much faster than SSDs, and metadata are small enough.
>> However, you may benefit from external metadata when those caches
>> are saturated by writes (high throughput for a long time).
>> If you do have an SSD and expect big writes, give it a try and please
>> tell us if it really makes a difference.
> My experience with an SSD for (external) meta-data is that the improvement
> is quite large!
> You won't get faster continuous writes, that is still limited by the
> hdd. But you get much faster random writes, and the reason is this:
> - With internal meta-data on a hdd, each write (or each group of writes
>   up to a barrier) is followed by a disk seek to the end of the disk
>   where the meta-data lives, followed by a seek back to where you are
>   writing. And then you mix in random writes at random positions...
> - With external meta-data on another hdd, your data disk doesn't have
>   to seek to the end of the disk anymore, step one of improvement.
> - With external meta-data on an ssd, you are only left with the seeks
>   of your normal random writes.
>
> With today's disks and normal usage (unless you are Netflix or Google),
> the real speed improvement your users see/feel is not higher throughput
> but lower latency.
>
> Of course, using internal meta-data with the whole partition on an ssd
> gives you the best performance, but not everyone can buy enough ssds to
> create a mirrored 6TB array of ssd.
>
> 3x2TB hdd + 160GB ssd (for the meta-data and the speed-loving databases),
> times two, on the other hand, is actually affordable...
>
> As to the original author's question: there is a man page for drbdmeta
> which describes the options to dump and restore the meta-data of an
> offline drbd. The procedure is (a sketch of the commands follows below):
> - stop the drbd volume
> - dump the meta-data
> - change the config to point the meta-data to the new place
> - restore the meta-data
> - restart the drbd volume
> - wait for the sync (only incremental, not a full sync) and repeat on
>   the other node
>
> I did that with several volumes when our ssds arrived. Test the steps
> with a scrap drbd volume before doing the procedure on production data,
> to be sure.
>
> Have fun,
>
> Arnold
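[Editorial note appended for reference] A minimal sketch of the dump/restore steps above as a shell session. The resource name (r0), DRBD device (/dev/drbd0), backing disk (/dev/sdb1), SSD partition (/dev/ssd1) and the v08 metadata format are all assumptions for illustration; verify the exact drbdmeta invocation for your setup against "man drbdmeta" (a dry run with "drbdadm -d dump-md <resource>" prints the matching drbdmeta command), and rehearse on a scrap volume first.

  # Hypothetical example: move DRBD metadata from internal (on /dev/sdb1)
  # to an external SSD partition (/dev/ssd1, index 0) for resource "r0".
  # Do this one node at a time, preferably starting on the Secondary.

  drbdadm down r0                   # stop the drbd volume

  # Dump the current internal metadata to a file.
  # (If the activity log is reported as dirty, you may need
  #  "drbdmeta /dev/drbd0 v08 /dev/sdb1 internal apply-al" first.)
  drbdmeta /dev/drbd0 v08 /dev/sdb1 internal dump-md > /tmp/r0-metadata

  # Edit the resource configuration so the metadata lives on the SSD, e.g.
  #   meta-disk /dev/ssd1[0];
  # (depending on your DRBD version you may need to initialize the new
  #  area first with "drbdmeta /dev/drbd0 v08 /dev/ssd1 0 create-md"),
  # then import the dumped metadata into the new location:
  drbdmeta /dev/drbd0 v08 /dev/ssd1 0 restore-md /tmp/r0-metadata

  drbdadm up r0                     # restart the volume; only an
                                    # incremental resync should follow,
                                    # watch it e.g. in /proc/drbd

Once the first node has finished its incremental resync, repeat the same steps on the other node.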