[DRBD-user] Kernel memory leak with 8.3?

Lars Ellenberg lars.ellenberg at linbit.com
Thu May 23 17:26:58 CEST 2013

On Tue, May 21, 2013 at 01:08:21PM -0700, Lin Zhao wrote:
> Figured this out. It's caused by the huge buffer size, which is a kind of
> cache, but for disk metadata. The kernel won't release buffered memory
> until something else needs it, so *buffered* memory accumulates over time.
> 
> I ran a program that allocates a big chunk of memory, and the buffer size
> went down to less than 1 MB.

echo 3 > /proc/sys/vm/drop_caches

Nothing to do with DRBD, though.
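
The numbers in the attached output add up, too: Buffers is 54272396 kB
(roughly 52 GB) and Slab is 10988324 kB (roughly 10.5 GB, almost all of
it buffer_head), which together account for about 62 of the 63 GB shown
as "used". That is the normal block-device buffer cache on the
secondary's backing disk, not a leak.

If you want to watch the effect yourself, something along these lines
should do it (a rough sketch; standard procfs paths, run as root):

  # before: how much is sitting in the buffer cache
  grep -E '^(MemFree|Buffers|Cached):' /proc/meminfo

  # flush dirty data first, then drop clean pagecache plus
  # reclaimable slab (dentries/inodes): 1=pagecache, 2=slab, 3=both
  sync
  echo 3 > /proc/sys/vm/drop_caches

  # after: Buffers should be back down near zero
  grep -E '^(MemFree|Buffers|Cached):' /proc/meminfo

Keep in mind that drop_caches only throws away clean cache; it is a
diagnostic aid, not something to schedule, because the kernel reclaims
this memory by itself as soon as applications actually need it.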

> On Tue, May 21, 2013 at 12:14 PM, Lin Zhao <lin at groupon.com> wrote:
> 
> > I've been running a DRBD setup for 3 months, and recently noticed high
> > kernel memory usage on the secondary machines.
> >
> > The secondary machine runs very light user applications, but total memory
> > usage reaches as much as 60G.
> >
> > Is there a known issue with a kernel memory leak? Attaching top, slabtop
> > and meminfo from my backup machine. You can see that the processes in
> > *top* show very small RES sizes, but system memory usage reaches 63G.
> > Can you identify something obvious?
> >
> > top:
> > last pid:  5836;  load avg:  0.11,  0.17,  0.12;  up 250+16:35:23        19:12:32
> > 326 processes: 1 running, 325 sleeping
> > CPU states:  0.0% user,  0.0% nice,  0.0% system,  100% idle,  0.0% iowait
> > Kernel: 106 ctxsw, 1019 intr
> > Memory: 63G used, 426M free, 52G buffers, 179M cached
> > Swap: 116K used, 8000M free
> >
> >   PID USERNAME  THR PRI NICE  SIZE   RES   SHR STATE   TIME    CPU COMMAND
> > 25785 ganglia     1  15    0  140M 8460K 3476K sleep  88:42  0.00% gmond
> >  4730 root        1  16    0   88M 3368K 2636K sleep   0:00  0.00% sshd
> >  4732 lin         1  15    0   88M 1836K 1084K sleep   0:00  0.00% sshd
> >  4733 lin         1  16    0   65M 1596K 1272K sleep   0:00  0.00% bash
> >  7523 root        1  15    0   65M 1596K 1272K sleep   0:00  0.00% bash
> >  5500 root        1  15    0   61M 1208K  644K sleep   2:20  0.00% sshd
> >  5834 root        1  15    0   61M  848K  336K sleep   0:00  0.00% crond
> >  8785 root        1  16    0   61M 1024K  516K sleep   0:03  0.00% crond
> >  7493 root        1  15    0   51M 1372K 1036K sleep   0:00  0.00% login
> >  5066 root        3  20    0   28M  576K  448K sleep   0:00  0.00% brcm_iscsiuio
> >  8886 root        1  15    0   23M 1984K 1464K sleep   0:00  0.00% ntpd
> >  1798 root        1  11   -4   12M  776K  456K sleep   0:00  0.00% udevd
> >  5072 root        1   5  -10   12M 4452K 3164K sleep   0:00  0.00% iscsid
> >  5071 root        1  18    0   12M  652K  416K sleep   0:00  0.00% iscsid
> >  5718 lin         1  15    0   11M 1152K  848K run     0:00  0.00% top
> > 12349 root        1  15    0   11M 1532K  612K sleep   2:54  0.00% syslogd
> >     1 root        1  15    0   10M  752K  632K sleep   4:17  0.00% init
> >  5835 root        1  19    0 8688K 1072K  924K sleep   0:00  0.00% sh
> >  7301 root        1  19    0 3808K  532K  448K sleep   0:00  0.00% mingetty
> >  7300 root        1  18    0 3808K  532K  448K sleep   0:00  0.00% mingetty
> >  7299 root        1  17    0 3808K  532K  448K sleep   0:00  0.00% mingetty
> >  7298 root        1  16    0 3808K  532K  448K sleep   0:00  0.00% mingetty
> >  7302 root        1  18    0 3808K  528K  448K sleep   0:00  0.00% mingetty
> >  7303 root        1  18    0 3808K  528K  448K sleep   0:00  0.00% mingetty
> >  5836 root        1  19    0 3808K  484K  408K sleep   0:00  0.00% sleep
> >  1744 root        1  10   -5    0K    0K    0K sleep 649:36  0.00% md2_raid10
> >  6586 root        1  15    0    0K    0K    0K sleep 227:57  0.00% drbd1_receiver
> >  6587 root        1  -3    0    0K    0K    0K sleep  72:20  0.00% drbd1_asender
> >  1740 root        1  10   -5    0K    0K    0K sleep  64:18  0.00% md1_raid10
> >  1750 root        1  10   -5    0K    0K    0K sleep  16:02  0.00% kjournald
> >
> > slabtop:
> > Active / Total Objects (% used)    : 108378294 / 108636165 (99.8%)
> >  Active / Total Slabs (% used)      : 2746709 / 2746710 (100.0%)
> >  Active / Total Caches (% used)     : 100 / 150 (66.7%)
> >  Active / Total Size (% used)       : 10273556.84K / 10298936.27K (99.8%)
> >  Minimum / Average / Maximum Object : 0.02K / 0.09K / 128.00K
> >
> >      OBJS    ACTIVE  USE OBJ SIZE   SLABS OBJ/SLAB CACHE SIZE NAME
> > 108327280 108091540  20%    0.09K 2708182       40  10832728K buffer_head
> >    228606    228226  99%    0.52K   32658        7    130632K radix_tree_node
> >      9856      9832  99%    0.09K     224       44       896K sysfs_dir_cache
> >      7847      3871  49%    0.06K     133       59       532K size-64
> >      7596      5889  77%    0.21K     422       18      1688K dentry_cache
> >      6300      5208  82%    0.12K     210       30       840K size-128
> >      4368      3794  86%    0.03K      39      112       156K size-32
> >      3150      2793  88%    0.25K     210       15       840K size-256
> >      3068      2563  83%    0.06K      52       59       208K Acpi-Operand
> >      2904      1253  43%    0.17K     132       22       528K vm_area_struct
> >      2376      2342  98%    1.00K     594        4      2376K size-1024
> >      2304       380  16%    0.02K      16      144        64K anon_vma
> >      2256      1852  82%    0.08K      47       48       188K selinux_inode_security
> >      2121      1943  91%    0.55K     303        7      1212K inode_cache
> >      1776      1463  82%    0.50K     222        8       888K size-512
> >      1710       705  41%    0.25K     114       15       456K filp
> >      1698      1642  96%    0.58K     283        6      1132K proc_inode_cache
> >      1632      1606  98%    2.00K     816        2      3264K size-2048
> >      1590      1147  72%    0.25K     106       15       424K skbuff_head_cache
> >      1584       324  20%    0.02K      11      144        44K numa_policy
> >      1180       359  30%    0.06K      20       59        80K delayacct_cache
> >      1140      1101  96%    0.74K     228        5       912K ext3_inode_cache
> >      1080      1049  97%    0.09K      27       40       108K drbd_ee
> >      1054      1024  97%    0.11K      31       34       124K drbd_req
> >      1010       339  33%    0.02K       5      202        20K biovec-1
> >      1008       888  88%    0.03K       9      112        36K Acpi-Namespace
> >       944       335  35%    0.06K      16       59        64K pid
> >       650       514  79%    0.75K     130        5       520K shmem_inode_cache
> >       630       542  86%    0.12K      21       30        84K bio
> >       558       353  63%    0.81K      62        9       496K signal_cache
> >       496       496 100%    4.00K     496        1      1984K size-4096
> >       410       351  85%    1.84K     205        2       820K task_struct
> >       399       355  88%    2.06K     133        3      1064K sighand_cache
> >       354        54  15%    0.06K       6       59        24K fs_cache
> >
> > meminfo:
> > MemTotal:     65996216 kB
> > MemFree:        436188 kB
> > Buffers:      54272396 kB
> > Cached:         183784 kB
> > SwapCached:          0 kB
> > Active:         324660 kB
> > Inactive:     54143868 kB
> > HighTotal:           0 kB
> > HighFree:            0 kB
> > LowTotal:     65996216 kB
> > LowFree:        436188 kB
> > SwapTotal:     8192504 kB
> > SwapFree:      8192388 kB
> > Dirty:               0 kB
> > Writeback:           0 kB
> > AnonPages:       12320 kB
> > Mapped:           8312 kB
> > Slab:         10988324 kB
> > PageTables:       1584 kB
> > NFS_Unstable:        0 kB
> > Bounce:              0 kB
> > CommitLimit:  41190612 kB
> > Committed_AS:    44772 kB
> > VmallocTotal: 34359738367 kB
> > VmallocUsed:    267000 kB
> > VmallocChunk: 34359471059 kB
> > HugePages_Total:     0
> > HugePages_Free:      0
> > HugePages_Rsvd:      0
> > Hugepagesize:     2048 kB
> >
> > --
> > Lin Zhao
> > Project Lead of Messagebus
> > https://wiki.groupondev.com/Message_Bus
> > 3101 Park Blvd, Palo Alto, CA 94306
> >
> 
> 
> 
> -- 
> Lin Zhao
> Project Lead of Messagebus
> https://wiki.groupondev.com/Message_Bus
> 3101 Park Blvd, Palo Alto, CA 94306



-- 
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
__
please don't Cc me, but send to list   --   I'm subscribed


