Commit c91bdc93 authored by Johannes Weiner, committed by Andrew Morton

mm: memcontrol: don't allocate cgroup swap arrays when memcg is disabled

Patch series "memcg swap fix & cleanups".


This patch (of 4):

Since commit 2d1c4980 ("mm: memcontrol: make swap tracking an integral
part of memory control"), the cgroup swap arrays are used to track memory
ownership at the time of swap readahead and swapoff, even if swap space
*accounting* has been turned off by the user via swapaccount=0 (which sets
cgroup_memory_noswap).

However, the patch was overzealous: by simply dropping the
cgroup_memory_noswap conditionals in the swapon, swapoff and uncharge
paths, it caused the cgroup arrays to be allocated even when the memory
controller as a whole is disabled.  This is a waste of that memory.
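
To put a rough number on that waste: the arrays cost about one 2-byte
swap_cgroup entry per swap slot, allocated a page at a time, plus the
vcalloc'd array of per-page pointers.  The back-of-the-envelope sketch
below is illustrative only and assumes 4 KiB pages and an 8 GiB swap
device (neither figure comes from the patch itself):

/* Userspace-only estimate of the swap_cgroup footprint for one swap
 * device; assumes 4 KiB pages and 2-byte swap_cgroup entries. */
#include <stdio.h>

int main(void)
{
	const unsigned long long page_size   = 4096;          /* assumed PAGE_SIZE */
	const unsigned long long sc_per_page = page_size / 2; /* slots per backing page */
	const unsigned long long swap_bytes  = 8ULL << 30;    /* example: 8 GiB of swap */

	unsigned long long max_pages = swap_bytes / page_size;
	/* one backing page per SC_PER_PAGE slots, plus the pointer array */
	unsigned long long map_pages = (max_pages + sc_per_page - 1) / sc_per_page;
	unsigned long long bytes = map_pages * page_size + map_pages * sizeof(void *);

	printf("~%llu KiB allocated at swapon, unused when memcg is disabled\n",
	       bytes / 1024);
	return 0;
}

For 8 GiB of swap this works out to roughly 4 MiB per swapon, which is
the allocation the restored checks below avoid when the controller is off.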

Restore the mem_cgroup_disabled() checks, previously implied by
cgroup_memory_noswap, in the swapon, swapoff, and swap_entry_free
callbacks.
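
Note that this is a different knob from cgroup_memory_noswap:
swapaccount=0 only turns off swap accounting, whereas
mem_cgroup_disabled() is true when the memory controller itself is not
in use (e.g. booting with cgroup_disable=memory).  For reference, the
check being restored is roughly the following, a simplified rendering of
the helper in include/linux/memcontrol.h (the !CONFIG_MEMCG stub simply
returns true):

static inline bool mem_cgroup_disabled(void)
{
	/* true when the memory controller's static branch is not enabled */
	return !cgroup_subsys_enabled(memory_cgrp_subsys);
}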

Link: https://lkml.kernel.org/r/20220926135704.400818-1-hannes@cmpxchg.org
Link: https://lkml.kernel.org/r/20220926135704.400818-2-hannes@cmpxchg.org
Fixes: 2d1c4980 ("mm: memcontrol: make swap tracking an integral part of memory control")
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reported-by: Hugh Dickins <hughd@google.com>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Acked-by: Hugh Dickins <hughd@google.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent f7c5b1aa
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -7459,6 +7459,9 @@ void __mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
 	struct mem_cgroup *memcg;
 	unsigned short id;
 
+	if (mem_cgroup_disabled())
+		return;
+
 	id = swap_cgroup_record(entry, 0, nr_pages);
 	rcu_read_lock();
 	memcg = mem_cgroup_from_id(id);
--- a/mm/swap_cgroup.c
+++ b/mm/swap_cgroup.c
@@ -170,6 +170,9 @@ int swap_cgroup_swapon(int type, unsigned long max_pages)
 	unsigned long length;
 	struct swap_cgroup_ctrl *ctrl;
 
+	if (mem_cgroup_disabled())
+		return 0;
+
 	length = DIV_ROUND_UP(max_pages, SC_PER_PAGE);
 
 	array = vcalloc(length, sizeof(void *));
@@ -204,6 +207,9 @@ void swap_cgroup_swapoff(int type)
 	unsigned long i, length;
 	struct swap_cgroup_ctrl *ctrl;
 
+	if (mem_cgroup_disabled())
+		return;
+
 	mutex_lock(&swap_cgroup_mutex);
 	ctrl = &swap_cgroup_ctrl[type];
 	map = ctrl->map;