1. 06 Jan, 2022 24 commits
    • mm/kasan: Convert to struct folio and struct slab · 6e48a966
      Matthew Wilcox (Oracle) authored
      KASAN accesses some slab-related struct page fields, so convert it to
      struct slab. Some places are a bit simplified thanks to
      kasan_addr_to_slab() encapsulating the PageSlab flag check through
      virt_to_slab(). When resolving an object address to either a real slab
      or a large kmalloc page, use struct folio as the intermediate type for
      testing the slab flag, to avoid an unnecessary implicit
      compound_head() call.
      
      [ vbabka@suse.cz: use struct folio, adjust to differences in previous
        patches ]
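
      A minimal sketch of the pattern described above (kernel context; not the
      exact code): resolve the address to a folio once, test the slab flag
      there, and only then cast to struct slab, so no implicit compound_head()
      hides in a page-flag test. virt_to_folio() and folio_slab() are the
      wrappers introduced by "mm: Split slab into its own type" below.

      static inline struct slab *virt_to_slab(const void *addr)
      {
              struct folio *folio = virt_to_folio(addr);

              if (!folio_test_slab(folio))
                      return NULL;            /* e.g. a large kmalloc page */

              return folio_slab(folio);
      }

      struct slab *kasan_addr_to_slab(const void *addr)
      {
              /* address-range validity check elided in this sketch */
              return virt_to_slab(addr);
      }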
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Konovalov <andreyknvl@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: <kasan-dev@googlegroups.com>
      6e48a966
    • mm/slob: Convert SLOB to use struct slab and struct folio · 50757018
      Matthew Wilcox (Oracle) authored
      Use struct slab throughout the SLOB allocator. Where a non-slab page can
      appear, use struct folio instead of struct page.
      
      [ vbabka@suse.cz: don't introduce wrappers for PageSlobFree in mm/slab.h
        just for the single callers being wrappers in mm/slob.c ]
      
      [ Hyeonggon Yoo <42.hyeyoo@gmail.com>: fix NULL pointer dereference ]
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      50757018
    • mm/memcg: Convert slab objcgs from struct page to struct slab · 4b5f8d9a
      Vlastimil Babka authored
      page->memcg_data is used with the MEMCG_DATA_OBJCGS flag only for slab
      pages, so convert all the related infrastructure to struct slab. Also
      use struct folio instead of struct page when resolving object pointers.
      
      This is not just a mechanical change of types and names. Now in
      mem_cgroup_from_obj() we use folio_test_slab() to decide whether to
      interpret the folio as a real slab rather than a large kmalloc, instead
      of relying on the MEMCG_DATA_OBJCGS bit that used to be checked in
      page_objcgs_check(). The same applies in memcg_slab_free_hook(), where
      we can encounter kmalloc_large() pages (there the folio slab flag check
      is implied by virt_to_slab()). As a result, page_objcgs_check() can be
      dropped rather than converted.
      
      To avoid include cycles, move the inline definition of slab_objcgs()
      from memcontrol.h to mm/slab.h.
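
      A hedged sketch of the new decision flow in mem_cgroup_from_obj()
      (simplified from the real function; the accounting details of the
      non-slab branch are glossed over and folio_memcg() stands in for the
      exact accessor used there):

      struct mem_cgroup *mem_cgroup_from_obj(void *p)
      {
              struct folio *folio = virt_to_folio(p);

              if (folio_test_slab(folio)) {
                      struct slab *slab = folio_slab(folio);
                      struct obj_cgroup **objcgs = slab_objcgs(slab);
                      unsigned int off;

                      if (!objcgs)
                              return NULL;

                      off = obj_to_index(slab->slab_cache, slab, p);
                      return objcgs[off] ? obj_cgroup_memcg(objcgs[off]) : NULL;
              }

              /* Large kmalloc page: the memcg, if charged, hangs off the folio. */
              return folio_memcg(folio);
      }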
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
      Cc: <cgroups@vger.kernel.org>
      4b5f8d9a
    • mm: Convert struct page to struct slab in functions used by other subsystems · 40f3bf0c
      Vlastimil Babka authored
      KASAN, KFENCE and memcg interact with SLAB or SLUB internals through
      the functions nearest_obj(), obj_to_index() and objs_per_slab(), which
      take struct page as a parameter. Convert them to struct slab, including
      all callers, through the coccinelle semantic patch below.
      
      // Options: --include-headers --no-includes --smpl-spacing include/linux/slab_def.h include/linux/slub_def.h mm/slab.h mm/kasan/*.c mm/kfence/kfence_test.c mm/memcontrol.c mm/slab.c mm/slub.c
      // Note: needs coccinelle 1.1.1 to avoid breaking whitespace
      
      @@
      @@
      
      -objs_per_slab_page(
      +objs_per_slab(
       ...
       )
       { ... }
      
      @@
      @@
      
      -objs_per_slab_page(
      +objs_per_slab(
       ...
       )
      
      @@
      identifier fn =~ "obj_to_index|objs_per_slab";
      @@
      
       fn(...,
      -   const struct page *page
      +   const struct slab *slab
          ,...)
       {
      <...
      (
      - page_address(page)
      + slab_address(slab)
      |
      - page
      + slab
      )
      ...>
       }
      
      @@
      identifier fn =~ "nearest_obj";
      @@
      
       fn(...,
      -   struct page *page
      +   const struct slab *slab
          ,...)
       {
      <...
      (
      - page_address(page)
      + slab_address(slab)
      |
      - page
      + slab
      )
      ...>
       }
      
      @@
      identifier fn =~ "nearest_obj|obj_to_index|objs_per_slab";
      expression E;
      @@
      
       fn(...,
      (
      - slab_page(E)
      + E
      |
      - virt_to_page(E)
      + virt_to_slab(E)
      |
      - virt_to_head_page(E)
      + virt_to_slab(E)
      |
      - page
      + page_slab(page)
      )
        ,...)
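
      For orientation, this is roughly how two of the helpers look to callers
      after the conversion (SLUB variants; the SLAB ones change the same way,
      and the bodies shown here are illustrative rather than verbatim):

      static inline int objs_per_slab(const struct kmem_cache *cache,
                                      const struct slab *slab)
      {
              return slab->objects;
      }

      static inline unsigned int obj_to_index(const struct kmem_cache *cache,
                                              const struct slab *slab, void *obj)
      {
              if (is_kfence_address(obj))
                      return 0;
              return __obj_to_index(cache, slab_address(slab), obj);
      }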
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Julia Lawall <julia.lawall@inria.fr>
      Cc: Luis Chamberlain <mcgrof@kernel.org>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Konovalov <andreyknvl@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Marco Elver <elver@google.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
      Cc: <kasan-dev@googlegroups.com>
      Cc: <cgroups@vger.kernel.org>
      40f3bf0c
    • mm/slab: Finish struct page to struct slab conversion · dd35f71a
      Vlastimil Babka authored
      Change cache_free_alien() to use slab_nid(virt_to_slab()). Otherwise
      this is just an update of comments and some remaining variable names.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      dd35f71a
    • mm/slab: Convert most struct page to struct slab by spatch · 7981e67e
      Vlastimil Babka authored
      The majority of the conversion from struct page to struct slab in SLAB
      internals can be delegated to the coccinelle semantic patch below. This
      includes renaming variables with 'page' in their names to 'slab', and
      similar changes.
      
      Big thanks to Julia Lawall and Luis Chamberlain for help with
      coccinelle.
      
      // Options: --include-headers --no-includes --smpl-spacing mm/slab.c
      // Note: needs coccinelle 1.1.1 to avoid breaking whitespace, and ocaml for the
      // embedded script
      
      // build list of functions for applying the next rule
      @initialize:ocaml@
      @@
      
      let ok_function p =
        not (List.mem (List.hd p).current_element ["kmem_getpages";"kmem_freepages"])
      
      // convert the type in selected functions
      @@
      position p : script:ocaml() { ok_function p };
      @@
      
      - struct page@p
      + struct slab
      
      @@
      @@
      
      -PageSlabPfmemalloc(page)
      +slab_test_pfmemalloc(slab)
      
      @@
      @@
      
      -ClearPageSlabPfmemalloc(page)
      +slab_clear_pfmemalloc(slab)
      
      @@
      @@
      
      obj_to_index(
       ...,
      - page
      + slab_page(slab)
      ,...)
      
      // for all functions, change any "struct slab *page" parameter to "struct slab
      // *slab" in the signature, and generally all occurences of "page" to "slab" in
      // the body - with some special cases.
      @@
      identifier fn;
      expression E;
      @@
      
       fn(...,
      -   struct slab *page
      +   struct slab *slab
          ,...)
       {
      <...
      (
      - int page_node;
      + int slab_node;
      |
      - page_node
      + slab_node
      |
      - page_slab(page)
      + slab
      |
      - page_address(page)
      + slab_address(slab)
      |
      - page_size(page)
      + slab_size(slab)
      |
      - page_to_nid(page)
      + slab_nid(slab)
      |
      - virt_to_head_page(E)
      + virt_to_slab(E)
      |
      - page
      + slab
      )
      ...>
       }
      
      // rename a function parameter
      @@
      identifier fn;
      expression E;
      @@
      
       fn(...,
      -   int page_node
      +   int slab_node
          ,...)
       {
      <...
      - page_node
      + slab_node
      ...>
       }
      
      // functions converted by previous rules that were temporarily called using
      // slab_page(E) so we want to remove the wrapper now that they accept struct
      // slab ptr directly
      @@
      identifier fn =~ "index_to_obj";
      expression E;
      @@
      
       fn(...,
      - slab_page(E)
      + E
       ,...)
      
      // functions that were returning struct page ptr and now will return struct
      // slab ptr, including slab_page() wrapper removal
      @@
      identifier fn =~ "cache_grow_begin|get_valid_first_slab|get_first_slab";
      expression E;
      @@
      
       fn(...)
       {
      <...
      - slab_page(E)
      + E
      ...>
       }
      
      // rename any former struct page * declarations
      @@
      @@
      
      struct slab *
      -page
      +slab
      ;
      
      // all functions (with exceptions) with a local "struct slab *page" variable
      // that will be renamed to "struct slab *slab"
      @@
      identifier fn !~ "kmem_getpages|kmem_freepages";
      expression E;
      @@
      
       fn(...)
       {
      <...
      (
      - page_slab(page)
      + slab
      |
      - page_to_nid(page)
      + slab_nid(slab)
      |
      - kasan_poison_slab(page)
      + kasan_poison_slab(slab_page(slab))
      |
      - page_address(page)
      + slab_address(slab)
      |
      - page_size(page)
      + slab_size(slab)
      |
      - page->pages
      + slab->slabs
      |
      - page = virt_to_head_page(E)
      + slab = virt_to_slab(E)
      |
      - virt_to_head_page(E)
      + virt_to_slab(E)
      |
      - page
      + slab
      )
      ...>
       }
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      Cc: Julia Lawall <julia.lawall@inria.fr>
      Cc: Luis Chamberlain <mcgrof@kernel.org>
      7981e67e
    • mm/slab: Convert kmem_getpages() and kmem_freepages() to struct slab · 42c0faac
      Vlastimil Babka authored
      These functions sit at the boundary to the page allocator. Also use a
      folio internally to avoid an extra compound_head() when dealing with
      page flags.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      42c0faac
    • mm/slub: Finish struct page to struct slab conversion · c2092c12
      Vlastimil Babka authored
      Update comments mentioning pages to mention slabs where appropriate,
      as well as some goto labels.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      c2092c12
    • mm/slub: Convert most struct page to struct slab by spatch · bb192ed9
      Vlastimil Babka authored
      The majority of the conversion from struct page to struct slab in SLUB
      internals can be delegated to the coccinelle semantic patch below. This
      includes renaming variables with 'page' in their names to 'slab', and
      similar changes.
      
      Big thanks to Julia Lawall and Luis Chamberlain for help with
      coccinelle.
      
      // Options: --include-headers --no-includes --smpl-spacing include/linux/slub_def.h mm/slub.c
      // Note: needs coccinelle 1.1.1 to avoid breaking whitespace, and ocaml for the
      // embedded script
      
      // build list of functions to exclude from applying the next rule
      @initialize:ocaml@
      @@
      
      let ok_function p =
        not (List.mem (List.hd p).current_element ["nearest_obj";"obj_to_index";"objs_per_slab_page";"__slab_lock";"__slab_unlock";"free_nonslab_page";"kmalloc_large_node"])
      
      // convert the type from struct page to struct slab in all functions except the
      // list from previous rule
      // this also affects struct kmem_cache_cpu, but that's ok
      @@
      position p : script:ocaml() { ok_function p };
      @@
      
      - struct page@p
      + struct slab
      
      // in struct kmem_cache_cpu, change the name from page to slab
      // the type was already converted by the previous rule
      @@
      @@
      
      struct kmem_cache_cpu {
      ...
      -struct slab *page;
      +struct slab *slab;
      ...
      }
      
      // there are many places that use c->page which is now c->slab after the
      // previous rule
      @@
      struct kmem_cache_cpu *c;
      @@
      
      -c->page
      +c->slab
      
      @@
      @@
      
      struct kmem_cache {
      ...
      - unsigned int cpu_partial_pages;
      + unsigned int cpu_partial_slabs;
      ...
      }
      
      @@
      struct kmem_cache *s;
      @@
      
      - s->cpu_partial_pages
      + s->cpu_partial_slabs
      
      @@
      @@
      
      static void
      - setup_page_debug(
      + setup_slab_debug(
       ...)
       {...}
      
      @@
      @@
      
      - setup_page_debug(
      + setup_slab_debug(
       ...);
      
      // for all functions (with exceptions), change any "struct slab *page"
      // parameter to "struct slab *slab" in the signature, and generally all
      // occurrences of "page" to "slab" in the body - with some special cases.
      
      @@
      identifier fn !~ "free_nonslab_page|obj_to_index|objs_per_slab_page|nearest_obj";
      @@
       fn(...,
      -   struct slab *page
      +   struct slab *slab
          ,...)
       {
      <...
      - page
      + slab
      ...>
       }
      
      // similar to previous but the param is called partial_page
      @@
      identifier fn;
      @@
      
       fn(...,
      -   struct slab *partial_page
      +   struct slab *partial_slab
          ,...)
       {
      <...
      - partial_page
      + partial_slab
      ...>
       }
      
      // similar to previous but for functions that take pointer to struct page ptr
      @@
      identifier fn;
      @@
      
       fn(...,
      -   struct slab **ret_page
      +   struct slab **ret_slab
          ,...)
       {
      <...
      - ret_page
      + ret_slab
      ...>
       }
      
      // functions converted by previous rules that were temporarily called using
      // slab_page(E) so we want to remove the wrapper now that they accept struct
      // slab ptr directly
      @@
      identifier fn =~ "slab_free|do_slab_free";
      expression E;
      @@
      
       fn(...,
      - slab_page(E)
      + E
        ,...)
      
      // similar to previous but for another pattern
      @@
      identifier fn =~ "slab_pad_check|check_object";
      @@
      
       fn(...,
      - folio_page(folio, 0)
      + slab
        ,...)
      
      // functions that were returning struct page ptr and now will return struct
      // slab ptr, including slab_page() wrapper removal
      @@
      identifier fn =~ "allocate_slab|new_slab";
      expression E;
      @@
      
       static
      -struct slab *
      +struct slab *
       fn(...)
       {
      <...
      - slab_page(E)
      + E
      ...>
       }
      
      // rename any former struct page * declarations
      @@
      @@
      
      struct slab *
      (
      - page
      + slab
      |
      - partial_page
      + partial_slab
      |
      - oldpage
      + oldslab
      )
      ;
      
      // this has to be separate from the previous rule as page and page2 appear on
      // the same line
      @@
      @@
      
      struct slab *
      -page2
      +slab2
      ;
      
      // similar but with initial assignment
      @@
      expression E;
      @@
      
      struct slab *
      (
      - page
      + slab
      |
      - flush_page
      + flush_slab
      |
      - discard_page
      + slab_to_discard
      |
      - page_to_unfreeze
      + slab_to_unfreeze
      )
      = E;
      
      // convert most of struct page to struct slab usage inside functions (with
      // exceptions), including specific variable renames
      @@
      identifier fn !~ "nearest_obj|obj_to_index|objs_per_slab_page|__slab_(un)*lock|__free_slab|free_nonslab_page|kmalloc_large_node";
      expression E;
      @@
      
       fn(...)
       {
      <...
      (
      - int pages;
      + int slabs;
      |
      - int pages = E;
      + int slabs = E;
      |
      - page
      + slab
      |
      - flush_page
      + flush_slab
      |
      - partial_page
      + partial_slab
      |
      - oldpage->pages
      + oldslab->slabs
      |
      - oldpage
      + oldslab
      |
      - unsigned int nr_pages;
      + unsigned int nr_slabs;
      |
      - nr_pages
      + nr_slabs
      |
      - unsigned int partial_pages = E;
      + unsigned int partial_slabs = E;
      |
      - partial_pages
      + partial_slabs
      )
      ...>
       }
      
      // this has to be split out from the previous rule so that lines containing
      // multiple matching changes will be fully converted
      @@
      identifier fn !~ "nearest_obj|obj_to_index|objs_per_slab_page|__slab_(un)*lock|__free_slab|free_nonslab_page|kmalloc_large_node";
      @@
      
       fn(...)
       {
      <...
      (
      - slab->pages
      + slab->slabs
      |
      - pages
      + slabs
      |
      - page2
      + slab2
      |
      - discard_page
      + slab_to_discard
      |
      - page_to_unfreeze
      + slab_to_unfreeze
      )
      ...>
       }
      
      // after we simply changed all occurrences of page to slab, some usages need
      // adjustment for slab-specific functions, or use slab_page() wrapper
      @@
      identifier fn !~ "nearest_obj|obj_to_index|objs_per_slab_page|__slab_(un)*lock|__free_slab|free_nonslab_page|kmalloc_large_node";
      @@
      
       fn(...)
       {
      <...
      (
      - page_slab(slab)
      + slab
      |
      - kasan_poison_slab(slab)
      + kasan_poison_slab(slab_page(slab))
      |
      - page_address(slab)
      + slab_address(slab)
      |
      - page_size(slab)
      + slab_size(slab)
      |
      - PageSlab(slab)
      + folio_test_slab(slab_folio(slab))
      |
      - page_to_nid(slab)
      + slab_nid(slab)
      |
      - compound_order(slab)
      + slab_order(slab)
      )
      ...>
       }
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      Cc: Julia Lawall <julia.lawall@inria.fr>
      Cc: Luis Chamberlain <mcgrof@kernel.org>
      bb192ed9
    • mm/slub: Convert pfmemalloc_match() to take a struct slab · 01b34d16
      Matthew Wilcox (Oracle) authored
      Preparatory for the mass conversion. Use the new slab_test_pfmemalloc()
      helper. As it doesn't do VM_BUG_ON(!PageSlab()), we no longer need the
      pfmemalloc_match_unsafe() variant.
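
      The converted helper is small enough to sketch in full (details such as
      unlikely() placement may differ from the actual code):

      static inline bool pfmemalloc_match(struct slab *slab, gfp_t gfpflags)
      {
              if (unlikely(slab_test_pfmemalloc(slab)))
                      return gfp_pfmemalloc_allowed(gfpflags);

              return true;
      }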
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      01b34d16
    • mm/slub: Convert __free_slab() to use struct slab · 4020b4a2
      Vlastimil Babka authored
      __free_slab() is on the boundary between struct slab and struct page,
      so start with struct slab, but convert to a folio for working with
      flags, and use folio_page() to call functions that require struct page.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      4020b4a2
    • mm/slub: Convert alloc_slab_page() to return a struct slab · 45387b8c
      Vlastimil Babka authored
      Preparatory; callers convert back to struct page for now.
      
      Also move setting page flags to alloc_slab_page() where we still operate
      on a struct page. This means the page->slab_cache pointer is now set
      later than the PageSlab flag, which could theoretically confuse some pfn
      walker assuming PageSlab means there would be a valid cache pointer. But
      as the code had no barriers and used __set_bit() anyway, it could have
      happened already, so there shouldn't be such a walker.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      45387b8c
    • mm/slub: Convert print_page_info() to print_slab_info() · fb012e27
      Matthew Wilcox (Oracle) authored
      Improve the type safety and prepare for further conversion. For flags
      access, convert to folio internally.
      
      [ vbabka@suse.cz: access flags via folio_flags() ]
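
      A sketch of the result, assuming the %pGp printk format for page flags
      and the folio_flags() accessor mentioned in the note (the exact fields
      printed may differ):

      static void print_slab_info(const struct slab *slab)
      {
              struct folio *folio = (struct folio *)slab_folio(slab);

              pr_err("Slab 0x%p objects=%u used=%u fp=0x%p flags=%pGp\n",
                     slab, slab->objects, slab->inuse, slab->freelist,
                     folio_flags(folio, 0));
      }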
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      fb012e27
    • mm/slub: Convert __slab_lock() and __slab_unlock() to struct slab · 0393895b
      Vlastimil Babka authored
      These functions operate on the PG_locked page flag, but make them accept
      struct slab to encapsulate this implementation detail.
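
      A sketch of the converted pair: the PG_locked bit spinlock still lives
      in the underlying page flags, but the slab_page() cast is now confined
      to these two helpers:

      static void __slab_lock(struct slab *slab)
      {
              struct page *page = slab_page(slab);

              VM_BUG_ON_PAGE(PageTail(page), page);
              bit_spin_lock(PG_locked, &page->flags);
      }

      static void __slab_unlock(struct slab *slab)
      {
              struct page *page = slab_page(slab);

              VM_BUG_ON_PAGE(PageTail(page), page);
              __bit_spin_unlock(PG_locked, &page->flags);
      }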
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      0393895b
    • mm/slub: Convert kfree() to use a struct slab · d835eef4
      Matthew Wilcox (Oracle) authored
      Convert kfree(), kmem_cache_free() and ___cache_free() to resolve object
      addresses to struct slab, using folio as intermediate step where needed.
      Keep passing the result as struct page for now in preparation for mass
      conversion of internal functions.
      
      [ vbabka@suse.cz: Use folio as intermediate step when checking for
        large kmalloc pages, and when freeing them - rename
        free_nonslab_page() to free_large_kmalloc() that takes struct folio ]
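
      A simplified sketch of the resulting kfree() flow in SLUB (tracing and
      KFENCE handling omitted); note the slab is still handed to slab_free()
      as a struct page via slab_page(), per the "for now" above:

      void kfree(const void *x)
      {
              struct folio *folio;
              struct slab *slab;
              void *object = (void *)x;

              if (unlikely(ZERO_OR_NULL_PTR(x)))
                      return;

              folio = virt_to_folio(x);
              if (unlikely(!folio_test_slab(folio))) {
                      /* not a slab: a large kmalloc compound page */
                      free_large_kmalloc(folio, object);
                      return;
              }

              slab = folio_slab(folio);
              slab_free(slab->slab_cache, slab_page(slab), object, NULL, 1, _RET_IP_);
      }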
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      d835eef4
    • mm/slub: Convert detached_freelist to use a struct slab · cc465c3b
      Matthew Wilcox (Oracle) authored
      This gives us a little bit of extra type safety, as we know that
      nobody called virt_to_page() instead of virt_to_head_page().
      
      [ vbabka@suse.cz: Use folio as intermediate step when filtering out
        large kmalloc pages ]
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      cc465c3b
    • mm: Convert check_heap_object() to use struct slab · 0b3eb091
      Matthew Wilcox (Oracle) authored
      Ensure that we're not seeing a tail page inside __check_heap_object() by
      converting to a slab instead of a page.  Take the opportunity to mark
      the slab as const since we're not modifying it.  Also move the
      declaration of __check_heap_object() to mm/slab.h so it's not available
      to the wider kernel.
      
      [ vbabka@suse.cz: in check_heap_object() only convert to struct slab for
        actual PageSlab pages; use folio as intermediate step instead of page ]
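
      A sketch of the converted dispatch (the HIGHMEM-aware address handling
      and the checks for non-slab memory are elided here):

      static void check_heap_object(const void *ptr, unsigned long n, bool to_user)
      {
              struct folio *folio;

              if (!virt_addr_valid(ptr))
                      return;

              folio = virt_to_folio(ptr);

              if (folio_test_slab(folio)) {
                      /* Check slab allocator for flags and size. */
                      __check_heap_object(ptr, n, folio_slab(folio), to_user);
              }
              /* else: the existing non-slab page checks, unchanged */
      }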
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      0b3eb091
    • mm: Use struct slab in kmem_obj_info() · 7213230a
      Matthew Wilcox (Oracle) authored
      All three implementations of slab support kmem_obj_info() which reports
      details of an object allocated from the slab allocator.  By using the
      slab type instead of the page type, we make it obvious that this can
      only be called for slabs.
      
      [ vbabka@suse.cz: also convert the related kmem_valid_obj() to folios ]
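
      The related kmem_valid_obj() mentioned in the note now reduces to a
      folio flag test; a sketch:

      bool kmem_valid_obj(void *object)
      {
              struct folio *folio;

              /* Some arches consider ZERO_SIZE_PTR to be a valid address. */
              if (object < (void *)PAGE_SIZE || !virt_addr_valid(object))
                      return false;

              folio = virt_to_folio(object);
              return folio_test_slab(folio);
      }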
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      7213230a
    • mm: Convert __ksize() to struct slab · 0c24811b
      Matthew Wilcox (Oracle) authored
      In SLUB, use folios, and struct slab to access slab_cache field.
      In SLOB, use folios to properly resolve pointers beyond
      PAGE_SIZE offset of the object.
      
      [ vbabka@suse.cz: use folios, and only convert folio_test_slab() == true
        folios to struct slab ]
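
      A sketch of the SLUB version after the change, assuming slab_ksize()
      for the cache-backed case:

      size_t __ksize(const void *object)
      {
              struct folio *folio;

              if (unlikely(object == ZERO_SIZE_PTR))
                      return 0;

              folio = virt_to_folio(object);

              if (unlikely(!folio_test_slab(folio)))
                      return folio_size(folio);       /* large kmalloc */

              return slab_ksize(folio_slab(folio)->slab_cache);
      }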
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      0c24811b
    • mm: Convert virt_to_cache() to use struct slab · 82c1775d
      Matthew Wilcox (Oracle) authored
      This function is entirely self-contained, so it can be converted from
      page to slab.
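
      The whole converted helper, roughly as it ends up in mm/slab.h:

      static inline struct kmem_cache *virt_to_cache(const void *obj)
      {
              struct slab *slab;

              slab = virt_to_slab(obj);
              if (WARN_ONCE(!slab, "%s: Object is not a Slab page!\n", __func__))
                      return NULL;

              return slab->slab_cache;
      }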
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      82c1775d
    • mm: Convert [un]account_slab_page() to struct slab · b918653b
      Matthew Wilcox (Oracle) authored
      Convert the parameter of these functions from struct page to struct
      slab, and drop _page from the names. For now their callers just convert
      page to slab.
      
      [ vbabka@suse.cz: replace existing functions instead of calling them ]
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      b918653b
    • mm: Split slab into its own type · d122019b
      Matthew Wilcox (Oracle) authored
      Make struct slab independent of struct page. It still uses the
      underlying memory in struct page for storing slab-specific data, but
      slab and slub can now be weaned off using struct page directly.  Some of
      the wrapper functions (slab_address() and slab_order()) still need to
      cast to struct folio, but this is a significant disentanglement.
      
      [ vbabka@suse.cz: Rebase on folios, use folio instead of page where
        possible.
      
        Do not duplicate flags field in struct slab, instead make the related
        accessors go through slab_folio(). For testing pfmemalloc use the
        folio_*_active flag accessors directly so the PageSlabPfmemalloc
        wrappers can be removed later.
      
        Make folio_slab() expect only folio_test_slab() == true folios and
        virt_to_slab() return NULL when folio_test_slab() == false.
      
        Move struct slab to mm/slab.h.
      
        Don't represent pages that are not true slab pages with struct slab;
        those are just compound pages obtained directly from the page
        allocator (by a large kmalloc() in SLUB and SLOB). ]
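
      A hedged sketch of the resulting wrapper style (the field layout of
      struct slab and the type/flag sanity checks of the real converters are
      omitted): struct slab reuses the memory of the folio that backs it, so
      folio_slab()/slab_folio() are casts, and accessors such as
      slab_address() and slab_order() reach the generic helpers by going back
      through struct folio.

      #define folio_slab(folio)       ((struct slab *)(folio))
      #define slab_folio(s)           ((struct folio *)(s))

      static inline void *slab_address(const struct slab *slab)
      {
              return folio_address(slab_folio(slab));
      }

      static inline unsigned int slab_order(const struct slab *slab)
      {
              return folio_order(slab_folio(slab));
      }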
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      d122019b
    • mm/slub: Make object_err() static · ae16d059
      Vlastimil Babka authored
      There are no callers outside of mm/slub.c anymore.
      
      Move freelist_corrupted(), which calls object_err(), to avoid the need
      for a forward declaration.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      ae16d059
    • mm/slab: Dissolve slab_map_pages() in its caller · c7981543
      Vlastimil Babka authored
      The function no longer does what its name and comment suggest; it just
      sets two struct page fields, which can be done directly in its sole
      caller.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      c7981543