[DRBD-user] 14TB storage with drbd82

Andrei Neagoe anne at imc.nl
Mon Jul 28 11:19:34 CEST 2008


Actually... I'm using the same thing. If you look at the pvscan output 
you'll see that all 7 drbd devices are assigned to the ftp VG. As a 
filesystem I'm using XFS; growing online - starting with a drbd resize, 
then lvextend and finally xfs_growfs - works perfectly (and really 
fast). The thing is that in the future, if I want to expand the size 
and add an EXP3000 to the existing setup, I'll need to create at least 
4 more drbd devices to support that, and I don't know what the device 
limit is (if any, apart from memory considerations).
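For the record, the grow sequence I mean looks roughly like this. It's a 
minimal sketch, not our exact commands: the VG/LV names (ftp/ftp), the 
drbd device and the mount point are taken from the output further down, 
but the drbd resource name "r0" is made up. DRY_RUN=1 (the default) only 
prints the commands instead of running them.

```shell
# Sketch of the online-grow sequence, assuming VG "ftp", LV "ftp",
# /dev/drbd0 and mount point /var/ftp from this thread; the drbd
# resource name "r0" is an assumption. DRY_RUN=1 just prints commands.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# 1. After growing the backing LUN on the storage box, let DRBD pick
#    up the new size (run on the primary, with both nodes connected):
run drbdadm resize r0

# 2. Tell LVM the physical volume underneath got bigger:
run pvresize /dev/drbd0

# 3. Extend the logical volume over the new free extents:
run lvextend -l +100%FREE /dev/ftp/ftp

# 4. Grow XFS while it is mounted; xfs_growfs takes the mount point:
run xfs_growfs /var/ftp
```

Adding an EXP3000 would be a different path: pvcreate on each new drbd 
device and then vgextend ftp, rather than resizing the existing ones.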

Cheers,
Andrei.


Lee Christie wrote:
> We're running 8 separate devices. We chose to implement it slightly
> differently from you - we use LVM logical volumes with drbd on top of
> those, to stay flexible with resizing. Presumably you don't want/need
> to resize your drbd devices online?
>
> All works fine though.
>
>   
>> -----Original Message-----
>> From: drbd-user-bounces at lists.linbit.com
>> [mailto:drbd-user-bounces at lists.linbit.com] On Behalf Of Andrei Neagoe
>> Sent: 28 July 2008 10:02
>> To: drbd-user at lists.linbit.com
>> Subject: [DRBD-user] 14TB storage with drbd82
>>
>> Hi,
>>
>> I've managed to build up a storage cluster using drbd82 and
>> IBM's ds3200
>> storage boxes. Since there was a limit on the available storage that
>> could be replicated on drbd82 (8TB by Lars's latest update) I had to
>> create separate devices and use LVM on top of them. Here is what the
>> setup looks like:
>>
>> [root at leviathan cluster]# cat /proc/drbd
>> version: 8.2.6 (api:88/proto:86-88)
>> GIT-hash: 3e69822d3bb4920a8c1bfdf7d647169eba7d2eb4 build by
>> buildsvn at c5-x8664-build, 2008-06-26 19:33:19
>>  0: cs:Connected st:Secondary/Primary ds:UpToDate/UpToDate C r---
>>     ns:2134836056 nr:1853314662 dw:1853315170 dr:2134907815 al:12
>> bm:130324 lo:0 pe:0 ua:0 ap:0 oos:0
>>  1: cs:Connected st:Secondary/Primary ds:UpToDate/UpToDate C r---
>>     ns:2134971458 nr:2118749947 dw:2118881761 dr:2134913166 al:45
>> bm:130342 lo:0 pe:0 ua:0 ap:0 oos:0
>>  2: cs:Connected st:Secondary/Primary ds:UpToDate/UpToDate C r---
>>     ns:2134847300 nr:2118119779 dw:2118120107 dr:2134913953 al:13
>> bm:130338 lo:0 pe:0 ua:0 ap:0 oos:0
>>  3: cs:Connected st:Secondary/Primary ds:UpToDate/UpToDate C r---
>>     ns:2134830488 nr:2120638053 dw:2120638113 dr:2134838392 al:1
>> bm:130301 lo:0 pe:0 ua:0 ap:0 oos:0
>>  4: cs:Connected st:Secondary/Primary ds:UpToDate/UpToDate C r---
>>     ns:2134830488 nr:2118368965 dw:2118369025 dr:2134838392 al:1
>> bm:130301 lo:0 pe:0 ua:0 ap:0 oos:0
>>  5: cs:Connected st:Secondary/Primary ds:UpToDate/UpToDate C r---
>>     ns:2134830488 nr:2118615810 dw:2118615870 dr:2134838392 al:1
>> bm:130301 lo:0 pe:0 ua:0 ap:0 oos:0
>>  6: cs:Connected st:Secondary/Primary ds:UpToDate/UpToDate C r---
>>     ns:2134830488 nr:2119457671 dw:2119457731 dr:2134838392 al:1
>> bm:130301 lo:0 pe:0 ua:0 ap:0 oos:0
>>
>> [root at leviathan ~]# pvscan
>>   PV /dev/drbd0   VG ftp   lvm2 [1.99 TB / 0    free]
>>   PV /dev/drbd1   VG ftp   lvm2 [1.99 TB / 0    free]
>>   PV /dev/drbd2   VG ftp   lvm2 [1.99 TB / 0    free]
>>   PV /dev/drbd3   VG ftp   lvm2 [1.99 TB / 0    free]
>>   PV /dev/drbd4   VG ftp   lvm2 [1.99 TB / 0    free]
>>   PV /dev/drbd5   VG ftp   lvm2 [1.99 TB / 0    free]
>>   PV /dev/drbd6   VG ftp   lvm2 [1.99 TB / 0    free]
>>   Total: 7 [13.92 TB] / in use: 7 [13.92 TB] / in no VG: 0 [0   ]
>>
>> [root at leviathan ~]# df -h
>> Filesystem            Size  Used Avail Use% Mounted on
>> ... OUTPUT OMITTED ...
>> /dev/mapper/ftp-ftp    14T   14T  360G  98% /var/ftp
>>
>> So far everything seems quite stable; I managed to fill the storage
>> with dd, so it's actually being used. Throughput is around 70 MB/s
>> (over a 1 Gbit crossover link).
>> My question now is about scalability: Lars mentioned that it's
>> possible to do this, but that eventually we'd run into another
>> limit. At this point I wonder what that limit could be, so I can
>> take it into account. Has anybody managed to use drbd82 with more
>> than 7 active devices? How stable was it?
>>
>> Thanks,
>> Andrei.
>> _______________________________________________
>> drbd-user mailing list
>> drbd-user at lists.linbit.com
>> http://lists.linbit.com/mailman/listinfo/drbd-user
>>
>>     
>
>   


