[DRBD-user] xfs stripe size and drbd

Mrten mrten+drbd at ii.nl
Mon Jul 4 11:44:12 CEST 2011

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hi,

When I create an XFS filesystem on a DRBD device that sits on top of a
RAID 0 md set, mkfs.xfs either picks an unexpected stripe unit and
stripe width, or complains when I force the values I expect:


mdadm --create --level=0 --chunk=32 -n2 /dev/md3 /dev/sda5 /dev/sdb5
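
For reference: with a 32k chunk over two disks I would expect su=32k
and sw=2, i.e. a 64k full stripe. What the md device actually
advertises to the layers above it can be double-checked through the
standard block-layer topology files in sysfs (expected values in the
comments):

cat /sys/block/md3/queue/minimum_io_size    # chunk size, expect 32768
cat /sys/block/md3/queue/optimal_io_size    # chunk * nr of disks, expect 65536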


[default drbd create sequence, creates /dev/drbd0]


mkfs.xfs -d su=32k,sw=2 -l su=32k /dev/drbd0 -f
mkfs.xfs: Specified data stripe unit 64 is not the same as the volume stripe unit 512
mkfs.xfs: Specified data stripe width 128 is not the same as the volume stripe width 1024
meta-data=/dev/drbd0             isize=256    agcount=32, agsize=41839528 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=1338864896, imaxpct=5
         =                       sunit=8      swidth=16 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
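
If I convert correctly, the numbers in those two warnings are in
512-byte sectors: the su=32k I asked for is 64 sectors (64 * 512 =
32k), while /dev/drbd0 apparently reports a stripe unit of 512 sectors
(256k) and a stripe width of 1024 sectors (512k), eight times the
geometry of the raid set underneath.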

Is this expected?


Ubuntu natty, kernel 2.6.38.

Detailed information:


root@zenith:/srv# cat /proc/drbd
version: 8.3.9 (api:88/proto:86-95)
srcversion: CF228D42875CF3A43F2945A 
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:3138434 nr:0 dw:3138434 dr:471 al:799 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0


root@zenith:/# mdadm --detail /dev/md3
/dev/md3:
        Version : 1.2
  Creation Time : Mon Jul  4 00:03:06 2011
     Raid Level : raid0
     Array Size : 5355623296 (5107.52 GiB 5484.16 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Mon Jul  4 00:03:06 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 32K

           Name : zenith:3  (local to host zenith)
           UUID : ee12b6e7:e6e6d769:492c130f:9409508e
         Events : 0

    Number   Major   Minor   RaidDevice State
       0       8        5        0      active sync   /dev/sda5
       1       8       21        1      active sync   /dev/sdb5



root@zenith:/# mkfs.xfs /dev/drbd0 -f
meta-data=/dev/drbd0             isize=256    agcount=32, agsize=41839552 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=1338864954, imaxpct=5
         =                       sunit=64     swidth=128 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=64 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0


(sunit=64 blocks = 256k, swidth=128 blocks = 512k)
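
As far as I know mkfs.xfs takes sunit/swidth from the I/O topology
hints the block device exports, so the hints of both layers can be
compared directly, e.g. with blockdev from util-linux:

blockdev --getiomin --getioopt /dev/md3
blockdev --getiomin --getioopt /dev/drbd0

Given the sunit=64/swidth=128 blocks above I would expect 262144 and
524288 from drbd0 here, against 32768 and 65536 from md3.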


root@zenith:/# drbdadm down r0

root@zenith:/# mkfs.xfs /dev/md3 -f
meta-data=/dev/md3               isize=256    agcount=32, agsize=41840808 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=1338905824, imaxpct=5
         =                       sunit=8      swidth=16 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0


(sunit=8 blocks = 32k, swidth=16 blocks = 64k, matching the raid set)
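
So if I am reading this right: md3 advertises the 32k/64k geometry I
expect, drbd0 on top of it advertises 256k/512k, and mkfs.xfs simply
follows whatever the device reports.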

thanks,
Mrten.


