[DRBD-user] Using LVM with DRBD/RHEL 5

Lars Ellenberg lars.ellenberg at linbit.com
Thu May 26 11:26:39 CEST 2011



On Wed, May 25, 2011 at 08:31:23PM +0900, Junko IKEDA wrote:
> Hi again,
> 
> This is an issue with PV synchronization.
> We tried to set this up according to the following guide:
> http://www.drbd.org/users-guide/s-lvm-drbd-as-pv.html
> 
> DRBD version 8.3.10
> OS version RHEL5.2
> kernel version 2.6.18-92.EL5 (x86_64)
> LVM version LVM2(included in RHEL 5.2)
> Hardware information;
> I/O scheduler: deadline
> partition size: 100GB
> CPU: 6 core x 2
> Memory: 12GiB
> HDD: SAS 10000rpm, 300GiB
> 
> DRBD works well, but there is a performance problem.
>  We run "dd" like this:
>  # dd if=/dev/zero of=/mnt/dd/10GB.dat bs=1M count=10000 oflag=direct
> 
>  then "iostat" shows strange output:
>  # iostat -xmd 10 /dev/cciss/c0d3
>  ------------------------------------
>                | wMB/s |  w/s
>  ------------------------------------
>  hdd           |  68.6 |  805
>  drbd          |  47.3 |  910
>  lvm on drbd   |  31.8 | 8140
>  ------------------------------------


This is on a 2.6.38 kernel, low-end test system,
with drbd 8.3.10 (actually, it's 8.3.11rc, but that won't matter).
The io stack is: sda -> lvm -> drbd -> lvm,
going through the page cache (dd conv=fsync):
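
For reference, the two runs below were driven by something like this
(mount point and sizes are placeholders, not the exact values used):

  # buffered write through the page cache, flushed once at the end:
  dd if=/dev/zero of=/mnt/test/file bs=1M count=1000 conv=fsync
  # same write with O_DIRECT, bypassing the page cache:
  dd if=/dev/zero of=/mnt/test/file bs=1M count=1000 oflag=direct
  # in a second shell, watch the whole stack while dd runs:
  iostat -xmd 10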

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0,00 24329,00    0,00 1271,00     0,00   100,00   161,13     8,82    6,94   0,33  42,00
dm-6              0,00     0,00    0,00 25600,00     0,00   100,00     8,00   235,36    9,19   0,02  42,00
drbd0             0,00     0,00    0,00 25600,00     0,00   100,00     8,00   301,28    9,20   0,04  95,20
dm-11             0,00     0,00    0,00 25600,00     0,00   100,00     8,00   235,72    9,21   0,02  42,00

The page cache submits 4k (8-sector) requests;
sda is the first queue with an actual io scheduler,
so that is where the bios get merged into larger (~80k) requests.
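
The numbers line up once you remember that avgrq-sz is reported in
512-byte sectors (my arithmetic below, not part of the iostat output):

  # 8 sectors * 512 bytes = 4096 bytes, i.e. one page per request
  echo "8 * 512" | bc
  # 161.13 sectors * 512 bytes ~= 80 KiB per merged request on sda
  echo "161.13 * 512 / 1024" | bc -l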

Same setup, directio:
Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0,00   400,00    0,00  495,00     0,00    99,00   409,60     1,57    3,18   0,75  37,20
dm-6              0,00     0,00    0,00  900,00     0,00   100,00   227,56     2,81    3,12   0,41  37,20
drbd0             0,00     0,00    0,00  900,00     0,00   100,00   227,56     8,62    3,12   1,11 100,00
dm-11             0,00     0,00    0,00  900,00     0,00   100,00   227,56     2,81    3,12   0,41  37,20

The request size is around 128k in the bio-based virtual drivers
(a limitation of drbd 8.3.10; we relax that to whatever the kernel
supports in 8.4), and those requests still get merged to ~200k on
average on sda, which in this deployment apparently has a 256k
limit per request.
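
If you want to see what the backing device advertises, the per-queue
request size limits can usually be read from sysfs (device name is an
example; for the cciss device above, sysfs spells it cciss!c0d3):

  # hard limit the hardware/driver accepts per request, in KiB:
  cat /sys/block/sda/queue/max_hw_sectors_kb
  # current soft limit the block layer will merge up to, in KiB:
  cat /sys/block/sda/queue/max_sectors_kb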

(Why drbd utilization always seems to show up as 100%, I don't know;
we probably do something wrong in the request accounting.)

> In the case of "lvm on drbd", w/s shows a relatively large value compared
> to the other cases; nevertheless I/O is not so busy, yet %util increases
> to 100%, so DRBD sync performance drops significantly.

If, with direct io, you do not get requests larger than 4k in the
"virtual" layers, your (in-kernel) device mapper and/or DRBD is
too old.

If they don't even get merged into larger requests in the "real"
device queue, then something is wrong there as well.
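
A quick way to check both points is to watch avgrq-sz and the merge
counters per layer while the dd runs, and to confirm the bottom queue
actually has an elevator (a sketch; adjust device names to your stack):

  # with oflag=direct, avgrq-sz (in 512-byte sectors) should be well
  # above 8 on the dm-*/drbd devices if large bios are passed through:
  iostat -xmd 10
  # wrqm/s > 0 plus a large avgrq-sz on the physical device means
  # merging works there; the active io scheduler shows up in brackets:
  cat /sys/block/sda/queue/scheduler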

> There is no I/O error, but Pacemaker detects it as a DRBD error (monitor
> timed out).

What exactly is timing out,
and what is the timeout?
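
If the crm shell is in use, the monitor operation and its timeout as
configured for the DRBD resource can be listed with something like
this (the resource agent name is an assumption on my part):

  # show the DRBD resource definition including its "op monitor" lines:
  crm configure show | grep -A4 'ocf:linbit:drbd'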

> We also tried the same test on RHEL 5.5 and RHEL 6.1,
> but the result was the same.
> Has anyone ever seen a similar problem?
> The LVM2 included in RHEL 5.2 might be too old, but RHEL 6.1 behaves the
> same; are there any tips for physical volume settings?
> 
> Best Regards,
> Junko IKEDA
> 
> NTT DATA INTELLILINK CORPORATION

-- 
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com


