1. 06 Apr, 2022 3 commits
    • mm/slub: use stackdepot to save stack trace in objects · 5cf909c5
      Oliver Glitta authored
      Many stack traces are similar, so many of the per-object arrays are
      duplicates. Stackdepot saves each unique stack only once.
      
      Replace the addrs field in struct track with a depot_stack_handle_t
      handle, and use stackdepot to save the stack trace.
      
      The benefits are smaller memory overhead and the possibility of
      aggregating per-cache statistics in the following patch using the
      stackdepot handle instead of matching stacks manually.
      
      [ vbabka@suse.cz: rebase to 5.17-rc1 and adjust accordingly ]
      
      This was initially merged as commit 78869146 and reverted by commit
      ae14c63a due to several issues that should now be fixed.
      The problem of unconditional stackdepot memory overhead has been
      addressed by commit 2dba5eb1 ("lib/stackdepot: allow optional init
      and stack_table allocation by kvmalloc()"), so the dependency on
      stackdepot results in extra memory usage only when slab cache
      tracking is actually enabled, and not for all CONFIG_SLUB_DEBUG
      builds.
      The build failures on some architectures were also addressed, and the
      reported issue with xfs/433 test did not reproduce on 5.17-rc1 with this
      patch.
      Signed-off-by: Oliver Glitta <glittao@gmail.com>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-and-tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
    • mm/slub: move struct track init out of set_track() · 0cd1a029
      Vlastimil Babka authored
      set_track() either zeroes out the struct track or fills it, depending
      on the addr parameter. This is unnecessary, as only one place calls it
      for initialization: init_tracking(). We can simply do the zeroing
      there, with a single memset() that covers both TRACK_ALLOC and
      TRACK_FREE, as they are adjacent.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-and-tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      Acked-by: David Rientjes <rientjes@google.com>
    • lib/stackdepot: allow requesting early initialization dynamically · a5f1783b
      Vlastimil Babka authored
      In a later patch we want to add stackdepot support for object owner
      tracking in slub caches, which is enabled by slub_debug boot parameter.
      This creates a bootstrap problem, as some caches are created early in
      boot, when slab_is_available() is false, and thus stack_depot_init()
      tries to use memblock. But, as reported by Hyeonggon Yoo [1], we are
      already past memblock_free_all() at that point. Ideally the memblock
      allocation should fail, yet it succeeds; the system then crashes
      later, which is a separately handled issue.
      
      To resolve this bootstrap issue in a robust way, this patch adds another
      way to request stack_depot_early_init(), which happens at a well-defined
      point of time. In addition to build-time CONFIG_STACKDEPOT_ALWAYS_INIT,
      code that's e.g. processing boot parameters (which happens early enough)
      can call a new function stack_depot_want_early_init(), which sets a flag
      that stack_depot_early_init() will check.
      
      In this patch we also convert page_owner to this approach. While it
      doesn't have the same bootstrap issue as slub, it's also functionality
      enabled by a boot param, so it can request stack_depot_early_init()
      with memblock allocation instead of later initialization with
      kvmalloc().
      
      As suggested by Mike, make stack_depot_early_init() only attempt
      memblock allocation and stack_depot_init() only attempt kvmalloc().
      Also change the latter to kvcalloc(). In both cases we can drop the
      explicit array zeroing, as the allocations already do it.
      
      As suggested by Marco, provide empty implementations of the init
      functions for !CONFIG_STACKDEPOT builds to simplify the callers.
      
      [1] https://lore.kernel.org/all/YhnUcqyeMgCrWZbd@ip-172-31-19-208.ap-northeast-1.compute.internal/

      Reported-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      Suggested-by: Mike Rapoport <rppt@linux.ibm.com>
      Suggested-by: Marco Elver <elver@google.com>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Marco Elver <elver@google.com>
      Reviewed-and-tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>
      Acked-by: David Rientjes <rientjes@google.com>
  2. 03 Apr, 2022 8 commits
  3. 02 Apr, 2022 29 commits