Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
On Wednesday, 22 October 2014, 18:48:38, Meij, Henk wrote:
> Hello All, new to the list but not drbd. We currently run a few very
> small drbd setups.
>
> We are planning on moving our "research computing storage" to drbd.
> This would be 4 integrated server/storage modules grouped in 2 drbd
> pairs on CentOS. Each drbd unit will have 112 TB usable storage area
> under RAID 60 (LSI) with 2 global spares. We are planning on a
> straightforward setup: /dev/sdb1 -> /dev/drbd0 -> xfs; all on 4 TB
> 7.2K SATA disks.
>
> Is anybody running 100+ TB drbd installations and what throughput are
> you achieving on first initialization?

Your options are:

1) Do the initial resync (sketch [1] below).
   PRO: Everything is in sync
   CON: Might take weeks

2) Skip the initial resync (sketch [2] below).
   PRO: Quick setup
   CON: Space not allocated by the XFS is not in sync

2a) Initialize both backend devices to a known state, i.e.

      dd if=/dev/zero of=/dev/sdb1 bs=$((1024*1024)) oflag=direct

    ...and skip the initial resync as in option 2.
    PRO: Possibly quicker than option 1
    PRO: Everything is in sync
    CON: Still takes a while (unit = days)

3) In case your /dev/sdb1 is actually thinly provisioned: start a
   regular resync, then discard the whole device immediately afterwards
   (e.g. via mkfs.xfs; sketch [3] below). The discard operations get
   replicated by DRBD. (For DRBD a discard is a giant write that
   happens to transmit in a blink over the network link, and usually
   gets executed faster than regular writes by the backing device.)
   These discard ops contribute to the initial resync, so DRBD will
   display a crazy resync rate of many gigabytes per second...
   PRO: Everything is in sync
   CON: None.

3a) If it is not thinly provisioned, use LVM and a thin LV to make it
    thinly provisioned (sketch [4] below).

You will find the details for options 1 and 2 in the drbdsetup
manpage; look for "new-current-uuid".

PS: When you have done it, please share your experience with the list.

Best regards, Phil
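
[1] A minimal sketch of option 1, the full initial sync. The resource
name "r0" and DRBD 8.4 command syntax are assumptions, not something
taken from Henk's setup:

  # on both nodes: write the DRBD metadata and bring the resource up
  drbdadm create-md r0
  drbdadm up r0

  # on ONE node only: force it to Primary; this marks the local disk
  # UpToDate and starts the full-device sync to the peer
  drbdadm primary --force r0

Progress and the achieved sync rate show up in /proc/drbd.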
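
[2] Skipping the initial resync (options 2 and 2a), same assumptions
as in [1]. This is the --clear-bitmap trick the drbdsetup manpage
describes under "new-current-uuid"; the exact drbdadm spelling differs
a bit between versions, so check your manpage. For option 2a, run the
dd shown above on both nodes first:

  # on both nodes: metadata, bring it up; both sides are Inconsistent
  drbdadm create-md r0
  drbdadm up r0

  # on ONE node: generate a new current UUID and clear the bitmap,
  # i.e. declare both (identical) sides to be in sync already
  drbdadm -- --clear-bitmap new-current-uuid r0

  # promote normally; no full sync is triggered
  drbdadm primary r0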
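
[3] The discard variant of option 3, assuming the filesystem ends up
on /dev/drbd0. mkfs.xfs discards the device by default when it
advertises discard support; blkdiscard from util-linux is the explicit
way:

  # start the regular initial resync as in [1]; then, on the Primary:
  mkfs.xfs /dev/drbd0      # discards the device, then makes the fs

  # or, discard explicitly and tell mkfs not to do it again
  blkdiscard /dev/drbd0
  mkfs.xfs -K /dev/drbd0

The replicated discards are what produce the many-gigabytes-per-second
resync rate mentioned above.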
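
[4] Making a non-thin /dev/sdb1 thinly provisioned via an LVM thin
pool (option 3a). The VG/LV names and the virtual size here are made
up for the example:

  pvcreate /dev/sdb1
  vgcreate vg0 /dev/sdb1

  # thin pool over most of the VG; LVM carves its metadata LV
  # out of the remaining space
  lvcreate --type thin-pool -l 95%FREE -n pool0 vg0

  # thin LV with a virtual size of 110T, to be used as the DRBD
  # backing device instead of /dev/sdb1
  lvcreate --thin -V 110T -n drbd_backing vg0/pool0

Point the "disk" option of the DRBD resource at /dev/vg0/drbd_backing
afterwards.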