1. 30 Nov, 2022 4 commits
    • mm, slob: rename CONFIG_SLOB to CONFIG_SLOB_DEPRECATED · 149b6fa2
      Vlastimil Babka authored
      As explained in [1], we would like to remove SLOB if possible.
      
      - There are no known users that need its somewhat lower memory footprint
        so much that they cannot handle SLUB (after some modifications by the
        previous patches) instead.
      
      - It is an extra maintenance burden, and a number of features are
        incompatible with it.
      
      - It blocks the API improvement of allowing kfree() on objects allocated
        via kmem_cache_alloc().
      
      As the first step, rename the CONFIG_SLOB option in the slab allocator
      configuration choice to CONFIG_SLOB_DEPRECATED. Add CONFIG_SLOB
      depending on CONFIG_SLOB_DEPRECATED as an internal option to avoid code
      churn. This will cause existing .config files and defconfigs with
      CONFIG_SLOB=y to silently switch to the default (and recommended
      replacement) SLUB, while still allowing SLOB to be configured by anyone
      who notices and needs it. Those users should contact the slab maintainers
      and linux-mm@kvack.org as explained in the updated help text. If there are
      no valid objections, the plan is to update the existing defconfigs to SLUB
      and remove SLOB in a few cycles.
      
      To make SLUB a more suitable replacement for SLOB, a CONFIG_SLUB_TINY
      option was introduced to limit SLUB's memory overhead. There are a number
      of defconfigs specifying CONFIG_SLOB=y. As part of this patch, update them
      to select CONFIG_SLUB and CONFIG_SLUB_TINY instead.
      
      [1] https://lore.kernel.org/all/b35c3f82-f67b-2103-7d82-7a7ba7521439@suse.cz/
      
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Aaro Koskinen <aaro.koskinen@iki.fi>
      Cc: Janusz Krzysztofik <jmkrzyszt@gmail.com>
      Cc: Tony Lindgren <tony@atomide.com>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Josh Triplett <josh@joshtriplett.org>
      Cc: Conor Dooley <conor@kernel.org>
      Cc: Damien Le Moal <damien.lemoal@opensource.wdc.com>
      Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Aaro Koskinen <aaro.koskinen@iki.fi> # OMAP1
      Reviewed-by: Damien Le Moal <damien.lemoal@opensource.wdc.com> # riscv k210
      Acked-by: Arnd Bergmann <arnd@arndb.de> # arm
      Acked-by: Roman Gushchin <roman.gushchin@linux.dev>
      Acked-by: Mike Rapoport <rppt@linux.ibm.com>
      Reviewed-by: Christoph Lameter <cl@linux.com>
    • mm, slub: don't aggressively inline with CONFIG_SLUB_TINY · be784ba8
      Vlastimil Babka authored
      SLUB fastpaths use __always_inline to avoid function calls. With
      CONFIG_SLUB_TINY we would rather save memory. Add a
      __fastpath_inline macro that is __always_inline normally but empty with
      CONFIG_SLUB_TINY.
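      
      A minimal sketch of the idea described above (simplified; the real
      definition lives in mm/slub.c):
      
        /* Inline the fastpaths normally, but let the compiler outline
         * them when CONFIG_SLUB_TINY trades speed for text size. */
        #ifndef CONFIG_SLUB_TINY
        #define __fastpath_inline __always_inline
        #else
        #define __fastpath_inline
        #endif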
      
      bloat-o-meter results on x86_64 mm/slub.o:
      
      add/remove: 3/1 grow/shrink: 1/8 up/down: 865/-1784 (-919)
      Function                                     old     new   delta
      kmem_cache_free                               20     281    +261
      slab_alloc_node.isra                           -     245    +245
      slab_free.constprop.isra                       -     231    +231
      __kmem_cache_alloc_lru.isra                    -     128    +128
      __kmem_cache_release                          88      83      -5
      __kmem_cache_create                         1446    1436     -10
      __kmem_cache_free                            271     142    -129
      kmem_cache_alloc_node                        330     127    -203
      kmem_cache_free_bulk.part                    826     613    -213
      __kmem_cache_alloc_node                      230      10    -220
      kmem_cache_alloc_lru                         325      12    -313
      kmem_cache_alloc                             325      10    -315
      kmem_cache_free.part                         376       -    -376
      Total: Before=26103, After=25184, chg -3.52%
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Mike Rapoport <rppt@linux.ibm.com>
      Reviewed-by: Christoph Lameter <cl@linux.com>
      Acked-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
    • mm, slub: remove percpu slabs with CONFIG_SLUB_TINY · 0af8489b
      Vlastimil Babka authored
      SLUB gets most of its scalability from percpu slabs. However, for
      CONFIG_SLUB_TINY the goal is minimal memory overhead, not scalability.
      Thus, #ifdef out the whole kmem_cache_cpu percpu structure and the
      associated code. In addition to the slab page savings, this reduces
      percpu allocator usage and code size.
      
      This change builds on recent commit c7323a5a ("mm/slub: restrict
      sysfs validation to debug caches and make it safe"), as caches with
      debugging enabled also avoid percpu slabs, and all allocation and
      freeing ends up working with the partial list. With a bit more
      refactoring in the preceding patches, the same code paths are used
      with CONFIG_SLUB_TINY.
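      
      As a rough illustration of the approach (simplified; not the exact
      slub_def.h layout, other fields omitted):
      
        struct kmem_cache {
        #ifndef CONFIG_SLUB_TINY
                /* percpu fastpath state, compiled out for SLUB_TINY */
                struct kmem_cache_cpu __percpu *cpu_slab;
        #endif
                /* ... the remaining cache metadata is unchanged ... */
        };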
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Mike Rapoport <rppt@linux.ibm.com>
      Reviewed-by: Christoph Lameter <cl@linux.com>
    • mm, slub: split out allocations from pre/post hooks · 56d5a2b9
      Vlastimil Babka authored
      In the following patch we want to introduce CONFIG_SLUB_TINY allocation
      paths that don't use the percpu slab. To prepare, refactor the
      allocation functions:
      
      Split out __slab_alloc_node() from slab_alloc_node() where the former
      does the actual allocation and the latter calls the pre/post hooks.
      
      Analogously, split out __kmem_cache_alloc_bulk() from
      kmem_cache_alloc_bulk().
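      
      Roughly, the new shape looks like this (a simplified sketch: argument
      lists are trimmed and the real functions take more parameters;
      slab_pre_alloc_hook()/slab_post_alloc_hook() are the existing SLUB
      hook helpers):
      
        static __fastpath_inline void *slab_alloc_node(struct kmem_cache *s,
                                                       gfp_t gfpflags, int node)
        {
                void *object;
      
                /* pre hooks: may fail the allocation or redirect the cache */
                s = slab_pre_alloc_hook(s, gfpflags);
                if (unlikely(!s))
                        return NULL;
      
                /* the actual allocation, with or without percpu slabs */
                object = __slab_alloc_node(s, gfpflags, node);
      
                /* post hooks: zeroing, KASAN, memcg accounting */
                slab_post_alloc_hook(s, gfpflags, 1, &object);
                return object;
        }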
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Mike Rapoport <rppt@linux.ibm.com>
      Reviewed-by: Christoph Lameter <cl@linux.com>
      Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
  2. 27 Nov, 2022 8 commits
  3. 07 Nov, 2022 1 commit
  4. 06 Nov, 2022 1 commit
    • mm/slab_common: Restore passing "caller" for tracing · 32868715
      Kees Cook authored
      The "caller" argument was accidentally being ignored in a few places
      that were recently refactored. Restore the use of these "caller"
      arguments instead of _RET_IP_.
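      
      The pattern being fixed, as a simplified hypothetical example
      (alloc_helper(), public_alloc() and trace_alloc() are illustrative
      names, not the kernel's actual functions):
      
        static void *alloc_helper(struct kmem_cache *s, gfp_t flags,
                                  unsigned long caller)
        {
                void *ret = kmem_cache_alloc(s, flags);
      
                /* was effectively trace_alloc(_RET_IP_, ...), which records
                 * the helper itself rather than the original call site */
                trace_alloc(caller, ret);
                return ret;
        }
      
        void *public_alloc(struct kmem_cache *s, gfp_t flags)
        {
                return alloc_helper(s, flags, _RET_IP_);
        }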
      
      Fixes: 11e9734b ("mm/slab_common: unify NUMA and UMA version of tracepoints")
      Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Roman Gushchin <roman.gushchin@linux.dev>
      Cc: linux-mm@kvack.org
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Acked-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
  5. 04 Nov, 2022 1 commit
  6. 03 Nov, 2022 1 commit
  7. 23 Oct, 2022 9 commits
  8. 22 Oct, 2022 15 commits