  3. 05 Sep, 2023 1 commit
    • rcu: dump vmalloc memory info safely · c83ad36a
      Zqiang authored
      Currently, a double invocation of call_rcu() dumps the memory info of
      the rcu_head object.  If the object was not allocated from the slab
      allocator, vmalloc_dump_obj() is invoked, and the vmap_area_lock
      spinlock must be held.  Since call_rcu() can be invoked from interrupt
      context, this opens up spinlock deadlock scenarios.
      
      And in a PREEMPT_RT kernel, the rcutorture test also triggers the
      following lockdep warning:
      
      BUG: sleeping function called from invalid context at kernel/locking/spinlock_rt.c:48
      in_atomic(): 1, irqs_disabled(): 1, non_block: 0, pid: 1, name: swapper/0
      preempt_count: 1, expected: 0
      RCU nest depth: 1, expected: 1
      3 locks held by swapper/0/1:
       #0: ffffffffb534ee80 (fullstop_mutex){+.+.}-{4:4}, at: torture_init_begin+0x24/0xa0
       #1: ffffffffb5307940 (rcu_read_lock){....}-{1:3}, at: rcu_torture_init+0x1ec7/0x2370
       #2: ffffffffb536af40 (vmap_area_lock){+.+.}-{3:3}, at: find_vmap_area+0x1f/0x70
      irq event stamp: 565512
      hardirqs last  enabled at (565511): [<ffffffffb379b138>] __call_rcu_common+0x218/0x940
      hardirqs last disabled at (565512): [<ffffffffb5804262>] rcu_torture_init+0x20b2/0x2370
      softirqs last  enabled at (399112): [<ffffffffb36b2586>] __local_bh_enable_ip+0x126/0x170
      softirqs last disabled at (399106): [<ffffffffb43fef59>] inet_register_protosw+0x9/0x1d0
      Preemption disabled at:
      [<ffffffffb58040c3>] rcu_torture_init+0x1f13/0x2370
      CPU: 0 PID: 1 Comm: swapper/0 Tainted: G        W          6.5.0-rc4-rt2-yocto-preempt-rt+ #15
      Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.16.2-0-gea1b7a073390-prebuilt.qemu.org 04/01/2014
      Call Trace:
       <TASK>
       dump_stack_lvl+0x68/0xb0
       dump_stack+0x14/0x20
       __might_resched+0x1aa/0x280
       ? __pfx_rcu_torture_err_cb+0x10/0x10
       rt_spin_lock+0x53/0x130
       ? find_vmap_area+0x1f/0x70
       find_vmap_area+0x1f/0x70
       vmalloc_dump_obj+0x20/0x60
       mem_dump_obj+0x22/0x90
       __call_rcu_common+0x5bf/0x940
       ? debug_smp_processor_id+0x1b/0x30
       call_rcu_hurry+0x14/0x20
       rcu_torture_init+0x1f82/0x2370
       ? __pfx_rcu_torture_leak_cb+0x10/0x10
       ? __pfx_rcu_torture_leak_cb+0x10/0x10
       ? __pfx_rcu_torture_init+0x10/0x10
       do_one_initcall+0x6c/0x300
       ? debug_smp_processor_id+0x1b/0x30
       kernel_init_freeable+0x2b9/0x540
       ? __pfx_kernel_init+0x10/0x10
       kernel_init+0x1f/0x150
       ret_from_fork+0x40/0x50
       ? __pfx_kernel_init+0x10/0x10
       ret_from_fork_asm+0x1b/0x30
       </TASK>
      
      The previous patch fixes this by using the deadlock-safe, best-effort
      version of find_vm_area().  However, in case of failure, print the fact
      that the pointer was a vmalloc pointer, so that we print at least
      something.
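The try-lock idea can be sketched in userspace terms. This is a hedged model, not the kernel patch itself: `try_dump_vmalloc_info()` and `mem_dump_obj_model()` are illustrative names, and a pthread mutex stands in for vmap_area_lock.

```c
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t vmap_area_lock = PTHREAD_MUTEX_INITIALIZER;

/* Model of the best-effort dump: if the lock cannot be taken
 * (e.g. we may be in a context where waiting on it is unsafe),
 * fail instead of risking a deadlock. */
static bool try_dump_vmalloc_info(const void *addr)
{
	if (pthread_mutex_trylock(&vmap_area_lock) != 0)
		return false;          /* lock busy: bail out safely */
	printf("vmalloc area info for %p\n", addr);
	pthread_mutex_unlock(&vmap_area_lock);
	return true;
}

/* Caller: on failure, still report that this was a vmalloc pointer. */
static void mem_dump_obj_model(const void *addr)
{
	if (!try_dump_vmalloc_info(addr))
		printf("%p is a vmalloc address (no further info)\n", addr);
}
```

The point is that the dump degrades gracefully rather than spinning on a lock that may already be held in the current context.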
      
      Link: https://lkml.kernel.org/r/20230904180806.1002832-2-joel@joelfernandes.org
      Fixes: 98f18083 ("mm: Make mem_dump_obj() handle vmalloc() memory")
      Signed-off-by: Zqiang <qiang.zhang1211@gmail.com>
      Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
      Reported-by: Zhen Lei <thunder.leizhen@huaweicloud.com>
      Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Cc: Paul E. McKenney <paulmck@kernel.org>
      Cc: Uladzislau Rezki (Sony) <urezki@gmail.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  5. 22 Aug, 2023 1 commit
    • parisc: Use generic mmap top-down layout and brk randomization · 3033cd43
      Helge Deller authored
      
      parisc uses a top-down layout by default that exactly fits the generic
      functions, so get rid of the arch-specific code and use the generic
      version by selecting ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT.
      
      Note that on parisc the stack always grows up, and an "unlimited stack"
      simply means that the value defined in CONFIG_STACK_MAX_DEFAULT_SIZE_MB
      should be used.  So RLIM_INFINITY is not an indicator to use the legacy
      memory layout.
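The core of the change is essentially a Kconfig select; a sketch of the relevant hunk follows (the surrounding lines are illustrative, not the exact file contents):

```kconfig
# arch/parisc/Kconfig (illustrative context)
config PARISC
	def_bool y
	select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
```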
      Signed-off-by: Helge Deller <deller@gmx.de>
  11. 30 Nov, 2022 1 commit
    • mm,thp,rmap: simplify compound page mapcount handling · cb67f428
      Hugh Dickins authored
      Compound page (folio) mapcount calculations have been different for anon
      and file (or shmem) THPs, and involved the obscure PageDoubleMap flag. 
      And each huge mapping and unmapping of a file (or shmem) THP involved
      atomically incrementing and decrementing the mapcount of every subpage of
      that huge page, dirtying many struct page cachelines.
      
      Add subpages_mapcount field to the struct folio and first tail page, so
      that the total of subpage mapcounts is available in one place near the
      head: then page_mapcount() and total_mapcount() and page_mapped(), and
      their folio equivalents, are so quick that anon and file and hugetlb don't
      need to be optimized differently.  Delete the unloved PageDoubleMap.
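The bookkeeping described above can be modeled minimally as follows. This is a sketch: the field and function names follow the commit text, but the real struct page/folio layout and the hugetlb special case are omitted.

```c
/* Minimal model of the two counters kept near the head page. */
struct folio_model {
	int compound_mapcount;   /* pmd-level mappings of the whole huge page */
	int subpages_mapcount;   /* running total of pte-level subpage mappings */
};

/* total_mapcount() no longer walks every subpage: the totals live
 * in one place, so the query is O(1) instead of O(nr_subpages). */
static int total_mapcount_model(const struct folio_model *folio)
{
	return folio->compound_mapcount + folio->subpages_mapcount;
}
```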
      
      page_add and page_remove rmap functions must now maintain the
      subpages_mapcount as well as the subpage _mapcount, when dealing with pte
      mappings of huge pages; and correct maintenance of NR_ANON_MAPPED and
      NR_FILE_MAPPED statistics still needs reading through the subpages, using
      nr_subpages_unmapped() - but only when first or last pmd mapping finds
      subpages_mapcount raised (double-map case, not the common case).
      
      But are those counts (used to decide when to split an anon THP, and in
      vmscan's pagecache_reclaimable heuristic) correctly maintained?  Not
      quite: since page_remove_rmap() (and also split_huge_pmd()) is often
      called without page lock, there can be races when a subpage pte mapcount
      0<->1 while compound pmd mapcount 0<->1 is scanning - races which the
      previous implementation had prevented.  The statistics might become
      inaccurate, and even drift down until they underflow through 0.  That is
      not good enough, but is better dealt with in a followup patch.
      
      Update a few comments on first and second tail page overlaid fields. 
      hugepage_add_new_anon_rmap() has to "increment" compound_mapcount, but
      subpages_mapcount and compound_pincount are already correctly at 0, so
      delete its reinitialization of compound_pincount.
      
      A simple 100 X munmap(mmap(2GB, MAP_SHARED|MAP_POPULATE, tmpfs), 2GB) took
      18 seconds on small pages, and used to take 1 second on huge pages, but
      now takes 119 milliseconds on huge pages.  Mapping by pmds a second time
      used to take 860ms and now takes 92ms; mapping by pmds after mapping by
      ptes (when the scan is needed) used to take 870ms and now takes 495ms. 
      But there might be some benchmarks which would show a slowdown, because
      tail struct pages now fall out of cache until final freeing checks them.
      
      Link: https://lkml.kernel.org/r/47ad693-717-79c8-e1ba-46c3a6602e48@google.com
      
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: James Houghton <jthoughton@google.com>
      Cc: John Hubbard <jhubbard@nvidia.com>
      Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
      Cc: Miaohe Lin <linmiaohe@huawei.com>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Mina Almasry <almasrymina@google.com>
      Cc: Muchun Song <songmuchun@bytedance.com>
      Cc: Naoya Horiguchi <naoya.horiguchi@linux.dev>
      Cc: Peter Xu <peterx@redhat.com>
      Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Yang Shi <shy828301@gmail.com>
      Cc: Zach O'Keefe <zokeefe@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  17. 19 May, 2022 1 commit
    • random: move randomize_page() into mm where it belongs · 5ad7dd88
      Jason A. Donenfeld authored
      
      randomize_page is an mm function. It is documented like one. It contains
      the history of one. It has the naming convention of one. It looks
      just like another very similar function in mm, randomize_stack_top().
      And it has always been maintained and updated by mm people. There is no
      need for it to be in random.c. In the "which shape does not look like
      the other ones" test, pointing to randomize_page() is correct.
      
      So move randomize_page() into mm/util.c, right next to the similar
      randomize_stack_top() function.
      
      This commit contains no actual code changes.
      
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
  20. 24 Apr, 2022 1 commit
    • kvmalloc: use vmalloc_huge for vmalloc allocations · 9becb688
      Linus Torvalds authored
      Since commit 559089e0 ("vmalloc: replace VM_NO_HUGE_VMAP with
      VM_ALLOW_HUGE_VMAP"), the use of hugepage mappings for vmalloc is an
      opt-in strategy, because it caused a number of problems that weren't
      noticed until x86 enabled it too.
      
      One of the issues was fixed by Nick Piggin in commit 3b8000ae
      ("mm/vmalloc: huge vmalloc backing pages should be split rather than
      compound"), but I'm still worried about page protection issues, and
      VM_FLUSH_RESET_PERMS in particular.
      
      However, like the hash table allocation case (commit f2edd118:
      "page_alloc: use vmalloc_huge for large system hash"), the use of
      kvmalloc() should be safe from any such games, since the returned
      pointer might be a SLUB allocation, and as such no user should
      reasonably be using it in any odd ways.
      
      We also know that the allocations are fairly large, since it falls back
      to the vmalloc case only when a kmalloc() fails.  So using a hugepage
      mapping seems both safe and relevant.
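As a sketch of why huge mappings are relevant here, the fallback shape can be modeled in userspace. The names are stand-ins for the kernel API, and `kmalloc_model()`'s size cap is an arbitrary illustration:

```c
#include <stdlib.h>

/* Stand-in allocators for illustration only. */
static void *kmalloc_model(size_t size)
{
	/* model the kmalloc size limit: large requests fail here */
	return size <= (size_t)(128 * 1024) ? malloc(size) : NULL;
}

static void *vmalloc_huge_model(size_t size)
{
	return malloc(size); /* the vmalloc fallback, huge mappings allowed */
}

/* kvmalloc only reaches the vmalloc path when kmalloc fails, so that
 * path sees mostly large allocations -- which is exactly where a
 * hugepage mapping pays off. */
static void *kvmalloc_model(size_t size)
{
	void *p = kmalloc_model(size);
	return p ? p : vmalloc_huge_model(size);
}
```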
      
      This patch does show a weakness in the opt-in strategy: since the opt-in
      flag is in the 'vm_flags', not the usual gfp_t allocation flags, very
      few of the usual interfaces actually expose it.
      
      That's not much of an issue in this case that already used one of the
      fairly specialized low-level vmalloc interfaces for the allocation, but
      for a lot of other vmalloc() users that might want to opt in, it's going
      to be very inconvenient.
      
      We'll either have to fix any compatibility problems, or expose it in the
      gfp flags (__GFP_COMP would have made a lot of sense) to allow normal
      vmalloc() users to use hugepage mappings.  That said, the cases that
      really matter were probably already taken care of by the hash table
      allocation.
      
      Link: https://lore.kernel.org/all/20220415164413.2727220-1-song@kernel.org/
      Link: https://lore.kernel.org/all/CAHk-=whao=iosX1s5Z4SF-ZGa-ebAukJoAdUJFk5SPwnofV+Vg@mail.gmail.com/
      
      
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Paul Menzel <pmenzel@molgen.mpg.de>
      Cc: Song Liu <songliubraving@fb.com>
      Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  23. 04 Mar, 2022 1 commit
    • mm: Consider __GFP_NOWARN flag for oversized kvmalloc() calls · 0708a0af
      Daniel Borkmann authored
      syzkaller was recently triggering an oversized kvmalloc() warning via
      xdp_umem_create().
      
      The triggered warning was added back in 7661809d ("mm: don't allow
      oversized kvmalloc() calls").  The warning for huge kvmalloc sizes was
      added as a reaction to a security bug where the size was more than
      UINT_MAX but not everything was prepared to handle unsigned long sizes.
      
      Anyway, the AF_XDP related call trace from this syzkaller report was:
      
        kvmalloc include/linux/mm.h:806 [inline]
        kvmalloc_array include/linux/mm.h:824 [inline]
        kvcalloc include/linux/mm.h:829 [inline]
        xdp_umem_pin_pages net/xdp/xdp_umem.c:102 [inline]
        xdp_umem_reg net/xdp/xdp_umem.c:219 [inline]
        xdp_umem_create+0x6a5/0xf00 net/xdp/xdp_umem.c:252
        xsk_setsockopt+0x604/0x790 net/xdp/xsk.c:1068
        __sys_setsockopt+0x1fd/0x4e0 net/socket.c:2176
        __do_sys_setsockopt net/socket.c:2187 [inline]
        __se_sys_setsockopt net/socket.c:2184 [inline]
        __x64_sys_setsockopt+0xb5/0x150 net/socket.c:2184
        do_syscall_x64 arch/x86/entry/common.c:50 [inline]
        do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
        entry_SYSCALL_64_after_hwframe+0x44/0xae
      
      Björn mentioned that requests for >2GB allocation can still be valid:
      
        The structure that is being allocated is the page-pinning accounting.
        AF_XDP has an internal limit of U32_MAX pages, which is *a lot*, but
        still fewer than what memcg allows (PAGE_COUNTER_MAX is a LONG_MAX/
        PAGE_SIZE on 64 bit systems). [...]
      
        I could just change from U32_MAX to INT_MAX, but as I stated earlier
        that has a hacky feeling to it. [...] From my perspective, the code
        isn't broken, with the memcg limits in consideration. [...]
      
      Linus says:
      
        [...] Pretty much every time this has come up, the kernel warning has
        shown that yes, the code was broken and there really wasn't a reason
        for doing allocations that big.
      
        Of course, some people would be perfectly fine with the allocation
        failing, they just don't want the warning. I didn't want __GFP_NOWARN
        to shut it up originally because I wanted people to see all those
        cases, but these days I think we can just say "yeah, people can shut
        it up explicitly by saying 'go ahead and fail this allocation, don't
        warn about it'".
      
        So enough time has passed that by now I'd certainly be ok with [it].
      
      Thus allow call-sites to silence such userspace triggered splats if the
      allocation requests have __GFP_NOWARN. For xdp_umem_pin_pages()'s call
      to kvcalloc() this is already the case, so nothing else needed there.
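The resulting behavior can be sketched as follows. This is a userspace model, not the kernel code: `GFP_NOWARN_MODEL` stands in for __GFP_NOWARN, and malloc() stands in for the allocation itself.

```c
#include <limits.h>
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

#define GFP_NOWARN_MODEL 0x1u   /* stand-in for __GFP_NOWARN */

/* Model of the oversized-request check: the allocation still fails,
 * but the warning is only emitted when the caller did not pass NOWARN. */
static void *kvmalloc_model(size_t size, unsigned int flags)
{
	if (size > (size_t)INT_MAX) {
		if (!(flags & GFP_NOWARN_MODEL))
			fprintf(stderr, "kvmalloc: oversized allocation\n");
		return NULL;         /* fail either way */
	}
	return malloc(size);
}
```

So a call-site like xdp_umem_pin_pages() that already passes NOWARN gets a clean failure instead of a splat.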
      
      Fixes: 7661809d ("mm: don't allow oversized kvmalloc() calls")
      Reported-by: syzbot+11421fbbff99b989670e@syzkaller.appspotmail.com
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Tested-by: syzbot+11421fbbff99b989670e@syzkaller.appspotmail.com
      Cc: Björn Töpel <bjorn@kernel.org>
      Cc: Magnus Karlsson <magnus.karlsson@intel.com>
      Cc: Willy Tarreau <w@1wt.eu>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: Andrii Nakryiko <andrii@kernel.org>
      Cc: Jakub Kicinski <kuba@kernel.org>
      Cc: David S. Miller <davem@davemloft.net>
      Link: https://lore.kernel.org/bpf/CAJ+HfNhyfsT5cS_U9EC213ducHs9k9zNxX9+abqC0kTrPbQ0gg@mail.gmail.com
      Link: https://lore.kernel.org/bpf/20211201202905.b9892171e3f5b9a60f9da251@linux-foundation.org
      
      Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  28. 24 Sep, 2021 1 commit
    • mm: fix uninitialized use in overcommit_policy_handler · bcbda810
      Chen Jun authored
      We get an unexpected value of /proc/sys/vm/overcommit_memory after
      running the following program:
      
        int main()
        {
            int fd = open("/proc/sys/vm/overcommit_memory", O_RDWR);
            write(fd, "1", 1);
            write(fd, "2", 1);
            close(fd);
        }
      
      write(fd, "2", 1) will pass *ppos = 1 to proc_dointvec_minmax.
      proc_dointvec_minmax will return 0 without setting new_policy.
      
        t.data = &new_policy;
        ret = proc_dointvec_minmax(&t, write, buffer, lenp, ppos)
            -->do_proc_dointvec
               -->__do_proc_dointvec
                    if (write) {
                      if (proc_first_pos_non_zero_ignore(ppos, table))
                        goto out;
      
        sysctl_overcommit_memory = new_policy;
      
      so sysctl_overcommit_memory will be set to an uninitialized value.
      
      Check whether new_policy has been changed by proc_dointvec_minmax.
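The shape of the fix can be modeled like this. The names are illustrative; the real code checks the proc_dointvec_minmax() return value together with a sentinel-initialized new_policy in the same spirit:

```c
/* Parser stand-ins: the second models the *ppos != 0 case where
 * proc_dointvec_minmax() returns 0 without writing a value. */
static int parse_writes_two(int *out) { *out = 2; return 0; }
static int parse_skips(int *out)      { (void)out; return 0; }

/* Only commit the new policy when the parser actually wrote one. */
static int overcommit_policy_model(int (*parse)(int *out), int *policy)
{
	int new_policy = -1;        /* sentinel: "parser wrote nothing" */
	int ret = parse(&new_policy);

	if (ret || new_policy == -1)
		return ret;         /* error or no value: keep old policy */

	*policy = new_policy;
	return 0;
}
```

With the sentinel check, the second write() in the reproducer leaves the old policy in place instead of committing stack garbage.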
      
      Link: https://lkml.kernel.org/r/20210923020524.13289-1-chenjun102@huawei.com
      Fixes: 56f3547b ("mm: adjust vm_committed_as_batch according to vm...
  29. 02 Sep, 2021 1 commit
    • mm: don't allow oversized kvmalloc() calls · 7661809d
      Linus Torvalds authored
      
      'kvmalloc()' is a convenience function for people who want to do a
      kmalloc() but fall back on vmalloc() if there aren't enough physically
      contiguous pages, or if the allocation is larger than what kmalloc()
      supports.
      
      However, let's make sure it doesn't get _too_ easy to do crazy things
      with it.  In particular, don't allow big allocations that could be due
      to integer overflow or underflow.  So make sure the allocation size fits
      in an 'int', to protect against trivial integer conversion issues.
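A small example of the "trivial integer conversion issues" being guarded against (illustrative userspace code, not from the patch):

```c
#include <limits.h>
#include <stddef.h>

/* A signed count that went negative becomes a gigantic size_t at the
 * allocation boundary. */
static size_t buggy_request_size(int nelems, size_t elem_size)
{
	return (size_t)nelems * elem_size;  /* nelems == -1 -> near SIZE_MAX */
}

/* The new check rejects any request that does not fit in an int. */
static int fits_in_int(size_t size)
{
	return size <= (size_t)INT_MAX;
}
```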
      Acked-by: Willy Tarreau <w@1wt.eu>
      Cc: Kees Cook <keescook@chromium.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  30. 09 Aug, 2021 1 commit
    • mm: Add kvrealloc() · de2860f4
      Dave Chinner authored
      During log recovery of an XFS filesystem with 64kB directory
      buffers, rebuilding a buffer split across two log records results
      in a memory allocation warning from krealloc like this:
      
      xfs filesystem being mounted at /mnt/scratch supports timestamps until 2038 (0x7fffffff)
      XFS (dm-0): Unmounting Filesystem
      XFS (dm-0): Mounting V5 Filesystem
      XFS (dm-0): Starting recovery (logdev: internal)
      ------------[ cut here ]------------
      WARNING: CPU: 5 PID: 3435170 at mm/page_alloc.c:3539 get_page_from_freelist+0xdee/0xe40
      .....
      RIP: 0010:get_page_from_freelist+0xdee/0xe40
      Call Trace:
       ? complete+0x3f/0x50
       __alloc_pages+0x16f/0x300
       alloc_pages+0x87/0x110
       kmalloc_order+0x2c/0x90
       kmalloc_order_trace+0x1d/0x90
       __kmalloc_track_caller+0x215/0x270
       ? xlog_recover_add_to_cont_trans+0x63/0x1f0
       krealloc+0x54/0xb0
       xlog_recover_add_to_cont_trans+0x63/0x1f0
       xlog_recovery_process_trans+0xc1/0xd0
       xlog_recover_process_ophdr+0x86/0x130
       xlog_recover_process_data+0x9f/0x160
       xlog_recover_process+0xa2/0x120
       xlog_do_recovery_pass+0x40b/0x7d0
       ? __irq_work_queue_local+0x4f/0x60
       ? irq_work_queue+0x3a/0x50
       xlog_do_log_recovery+0x70/0x150
       xlog_do_recover+0x38/0x1d0
       xlog_recover+0xd8/0x170
       xfs_log_mount+0x181/0x300
       xfs_mountfs+0x4a1/0x9b0
       xfs_fs_fill_super+0x3c0/0x7b0
       get_tree_bdev+0x171/0x270
       ? suffix_kstrtoint.constprop.0+0xf0/0xf0
       xfs_fs_get_tree+0x15/0x20
       vfs_get_tree+0x24/0xc0
       path_mount+0x2f5/0xaf0
       __x64_sys_mount+0x108/0x140
       do_syscall_64+0x3a/0x70
       entry_SYSCALL_64_after_hwframe+0x44/0xae
      
      Essentially, we are taking a multi-order allocation from kmem_alloc()
      (which has an open coded no fail, no warn loop) and then
      reallocating it out to 64kB using krealloc(__GFP_NOFAIL) and that is
      then triggering the above warning.
      
      This is a regression caused by converting this code from an open
      coded no fail/no warn reallocation loop to using __GFP_NOFAIL.
      
      What we actually need here is kvrealloc(), so that if contiguous
      page allocation fails we fall back to vmalloc() and we don't
      get nasty warnings happening in XFS.
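The kvrealloc() semantics wanted here can be sketched in userspace. In this model malloc()/free() stand in for the kvmalloc()/kvfree() pair, and __GFP_NOFAIL semantics are not modeled:

```c
#include <stdlib.h>
#include <string.h>

/* Grow a buffer through the kmalloc-or-vmalloc path: allocate the new
 * size, copy the old contents, free the old buffer. */
static void *kvrealloc_model(void *p, size_t oldsize, size_t newsize)
{
	void *newp;

	if (oldsize >= newsize)
		return p;              /* still fits, nothing to do */

	newp = malloc(newsize);        /* kvmalloc(): may fall back to vmalloc */
	if (!newp)
		return NULL;

	memcpy(newp, p, oldsize);
	free(p);                       /* kvfree() the old buffer */
	return newp;
}
```

Because the new allocation goes through the kvmalloc path, a failed contiguous-page attempt falls back to vmalloc instead of tripping the page allocator warning.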
      
      Fixes: 771915c4 ("xfs: remove kmem_realloc()")
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Acked-by: Mel Gorman <mgorman@techsingularity.net>
      Reviewed-by: Darrick J. Wong <djwong@kernel.org>
      Signed-off-by: Darrick J. Wong <djwong@kernel.org>
  32. 01 Jul, 2021 1 commit
    • mm: introduce page_offline_(begin|end|freeze|thaw) to synchronize setting PageOffline() · 82840451
      David Hildenbrand authored
      A driver might set a page logically offline -- PageOffline() -- and turn
      the page inaccessible in the hypervisor; after that, access to page
      content can be fatal.  One example is virtio-mem; while unplugged memory
      -- marked as PageOffline() -- can currently be read in the hypervisor,
      this will no longer be the case in the future; for example, when having
      a virtio-mem device backed by huge pages in the hypervisor.
      
      Some special PFN walkers -- i.e., /proc/kcore -- read content of random
      pages after checking PageOffline(); however, these PFN walkers can race
      with drivers that set PageOffline().
      
      Let's introduce page_offline_(begin|end|freeze|thaw) for synchronizing.
      
      page_offline_freeze()/page_offline_thaw() allows for a subsystem to
      synchronize with such drivers, achieving that a page cannot be set
      PageOffline() while frozen.
      
      page_offline_begin()/page_offline_end() is used by drivers that care about
      such races when setting a page PageOffline().
      
      For simplicity, use a rwsem for now; neither drivers nor users are
      performance sensitive.
      
      Link: https://lkml.kernel.org/r/20210526093041.8800-5-david@redhat.com
      
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>
      Reviewed-by: Oscar Salvador <osalvador@suse.de>
      Cc: Aili Yao <yaoaili@kingsoft.com>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Alex Shi <alex.shi@linux.alibaba.com>
      Cc: Haiyang Zhang <haiyangz@microsoft.com>
      Cc: Jason Wang <jasowang@redhat.com>
      Cc: Jiri Bohac <jbohac@suse.cz>
      Cc: "K. Y. Srinivasan" <kys@microsoft.com>
      Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
      Cc: "Michael S. Tsirkin" <mst@redhat.com>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Naoya Horiguchi <naoya.horiguchi@nec.com>
      Cc: Roman Gushchin <guro@fb.com>
      Cc: Stephen Hemminger <sthemmin@microsoft.com>
      Cc: Steven Price <steven.price@arm.com>
      Cc: Wei Liu <wei.liu@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  33. 10 May, 2021 1 commit
    • mm/slub: Add Support for free path information of an object · e548eaa1
      Maninder Singh authored
      
      This commit enables a stack dump for the last free of an object:
      
      slab kmalloc-64 start c8ab0140 data offset 64 pointer offset 0 size 64 allocated at meminfo_proc_show+0x40/0x4fc
      [   20.192078]     meminfo_proc_show+0x40/0x4fc
      [   20.192263]     seq_read_iter+0x18c/0x4c4
      [   20.192430]     proc_reg_read_iter+0x84/0xac
      [   20.192617]     generic_file_splice_read+0xe8/0x17c
      [   20.192816]     splice_direct_to_actor+0xb8/0x290
      [   20.193008]     do_splice_direct+0xa0/0xe0
      [   20.193185]     do_sendfile+0x2d0/0x438
      [   20.193345]     sys_sendfile64+0x12c/0x140
      [   20.193523]     ret_fast_syscall+0x0/0x58
      [   20.193695]     0xbeeacde4
      [   20.193822]  Free path:
      [   20.193935]     meminfo_proc_show+0x5c/0x4fc
      [   20.194115]     seq_read_iter+0x18c/0x4c4
      [   20.194285]     proc_reg_read_iter+0x84/0xac
      [   20.194475]     generic_file_splice_read+0xe8/0x17c
      [   20.194685]     splice_direct_to_actor+0xb8/0x290
      [   20.194870]     do_splice_direct+0xa0/0xe0
      [   20.195014]     do_sendfile+0x2d0/0x438
      [   20.195174]     sys_sendfile64+0x12c/0x140
      [   20.195336]     ret_fast_syscall+0x0/0x58
      [   20.195491]     0xbeeacde4
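Usage note (an assumption based on SLUB's existing debug machinery, not stated explicitly in this commit): recording alloc/free tracks depends on SLUB user tracking, so the affected cache typically needs to be booted with something like:

```
slub_debug=U,kmalloc-64
```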
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Co-developed-by: Vaneet Narang <v.narang@samsung.com>
      Signed-off-by: Vaneet Narang <v.narang@samsung.com>
      Signed-off-by: Maninder Singh <maninder1.s@samsung.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>