Commit 3fcebf90 authored by David Hildenbrand, committed by Linus Torvalds

mm/memory_hotplug: improved dynamic memory group aware "auto-movable" online policy

Currently, the "auto-movable" online policy does not allow for hotplugged
KERNEL (ZONE_NORMAL) memory to increase the amount of MOVABLE memory we
can have, primarily because there is no coordination across memory
devices and we don't want to create zone imbalances accidentally when
unplugging memory.

However, within a single memory device it's different.  Let's allow
KERNEL memory within a dynamic memory group to increase the amount of
MOVABLE memory within the same memory group.  The only thing we have to
take care of is that the managing driver avoids zone imbalances by
unplugging MOVABLE memory first; otherwise, there can be corner cases
where unplugging memory could result in (accidental) zone imbalances.

virtio-mem is the only user of dynamic memory groups and recently added
support for prioritizing unplug of ZONE_MOVABLE over ZONE_NORMAL, so we
don't need a new toggle to enable it for dynamic memory groups.

We limit this handling to dynamic memory groups, because:

* We want to keep the runtime overhead for collecting stats when
  onlining a single memory block small.  We tend to have only a handful of
  dynamic memory groups, but we can have quite a few static memory groups
  (e.g., 256 DIMMs).

* It doesn't make too much sense for static memory groups, as we try
  to online all applicable memory blocks either completely to ZONE_MOVABLE
  or not at all.  In ordinary operation, we won't have a mixture of zones
  within a static memory group.

When adding memory to a dynamic memory group, we'll first online memory to
ZONE_MOVABLE as long as early KERNEL memory allows for it.  Then, we'll
online the next unit(s) to ZONE_NORMAL, until we can online the next
unit(s) to ZONE_MOVABLE.

For a simple virtio-mem device with a MOVABLE:KERNEL ratio of 3:1, it will
result in a layout like:

  [M][M][M][M][M][M][M][M][N][M][M][M][N][M][M][M]...
  ^ movable memory due to early kernel memory
			   ^ allows for more movable memory ...
			      ^-----^ ... here
				       ^ allows for more movable memory ...
				          ^-----^ ... here

While the created layout is sub-optimal when it comes to contiguous zones,
it gives us the maximum flexibility when dynamically growing/shrinking a
device; we can grow small VMs really big in small steps, and still shrink
reliably to e.g., 1/4 of the maximum VM size in this example, removing
full memory blocks along with meta data more reliably.
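
The arithmetic behind this layout can be replayed with a minimal userspace
sketch (not kernel code).  The 128 MiB unit size, the 342 MiB of early
KERNEL memory and the percent-based ratio of 300 below are illustrative
assumptions, chosen so that the output matches the example above; the
kernel's auto_movable_ratio is likewise expressed in percent:

  /* Toy simulation of the per-unit decision for one dynamic memory group. */
  #include <stdio.h>

  #define UNIT_MIB		128	/* size of one hot(un)plugged unit */
  #define EARLY_KERNEL_MIB	342	/* early (boot) KERNEL memory */
  #define RATIO_PERCENT		300	/* 3 MiB MOVABLE per 1 MiB KERNEL */

  int main(void)
  {
  	unsigned long group_movable = 0, group_kernel = 0;
  	int i;

  	for (i = 0; i < 16; i++) {
  		/*
  		 * KERNEL memory inside the dynamic group counts towards the
  		 * ratio, in addition to early KERNEL memory.
  		 */
  		unsigned long kernel = EARLY_KERNEL_MIB + group_kernel;
  		unsigned long max_movable = kernel * RATIO_PERCENT / 100;

  		if (group_movable + UNIT_MIB <= max_movable) {
  			group_movable += UNIT_MIB;
  			printf("[M]");
  		} else {
  			group_kernel += UNIT_MIB;
  			printf("[N]");
  		}
  	}
  	/* Prints: [M][M][M][M][M][M][M][M][N][M][M][M][N][M][M][M] */
  	printf("\n");
  	return 0;
  }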

Mark dynamic memory groups in the xarray such that we can efficiently
iterate over them when collecting stats.  In usual setups, we have one
virtio-mem device per NUMA node, and usually only a small number of NUMA
nodes.
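
For reference, the mark-and-iterate xarray idiom this relies on, condensed
into a kernel-style sketch: the two helpers below are hypothetical
stand-ins, while the patch itself sets the mark in memory_group_register()
and walks the marked entries in walk_dynamic_memory_groups():

  #include <linux/memory.h>
  #include <linux/xarray.h>

  static DEFINE_XARRAY_FLAGS(memory_groups, XA_FLAGS_ALLOC);
  #define MEMORY_GROUP_MARK_DYNAMIC	XA_MARK_1

  /* At registration time: remember that this mgid refers to a dynamic group. */
  static void memory_group_mark_dynamic(int mgid)
  {
  	xa_set_mark(&memory_groups, mgid, MEMORY_GROUP_MARK_DYNAMIC);
  }

  /* When collecting stats: visit only the entries carrying the mark. */
  static void for_each_dynamic_group(void (*fn)(struct memory_group *, void *),
  				    void *arg)
  {
  	struct memory_group *group;
  	unsigned long index;

  	xa_for_each_marked(&memory_groups, index, group,
  			   MEMORY_GROUP_MARK_DYNAMIC)
  		fn(group, arg);
  }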

Note: for now, there seems to be no compelling reason to make this
behavior configurable.

Link: https://lkml.kernel.org/r/20210806124715.17090-10-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Hui Zhu <teawater@gmail.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: Len Brown <lenb@kernel.org>
Cc: Marek Kedzierski <mkedzier@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 445fcf7c
@@ -86,6 +86,7 @@ static DEFINE_XARRAY(memory_blocks);
  * Memory groups, indexed by memory group id (mgid).
  */
 static DEFINE_XARRAY_FLAGS(memory_groups, XA_FLAGS_ALLOC);
+#define MEMORY_GROUP_MARK_DYNAMIC	XA_MARK_1
 
 static BLOCKING_NOTIFIER_HEAD(memory_chain);
@@ -939,6 +940,8 @@ static int memory_group_register(struct memory_group group)
 	if (ret) {
 		kfree(new_group);
 		return ret;
+	} else if (group.is_dynamic) {
+		xa_set_mark(&memory_groups, mgid, MEMORY_GROUP_MARK_DYNAMIC);
 	}
 	return mgid;
 }
@@ -1044,3 +1047,30 @@ struct memory_group *memory_group_find_by_id(int mgid)
 {
 	return xa_load(&memory_groups, mgid);
 }
+
+/*
+ * This is an internal helper only to be used in core memory hotplug code to
+ * walk all dynamic memory groups excluding a given memory group, either
+ * belonging to a specific node, or belonging to any node.
+ */
+int walk_dynamic_memory_groups(int nid, walk_memory_groups_func_t func,
+			       struct memory_group *excluded, void *arg)
+{
+	struct memory_group *group;
+	unsigned long index;
+	int ret = 0;
+
+	xa_for_each_marked(&memory_groups, index, group,
+			   MEMORY_GROUP_MARK_DYNAMIC) {
+		if (group == excluded)
+			continue;
+#ifdef CONFIG_NUMA
+		if (nid != NUMA_NO_NODE && group->nid != nid)
+			continue;
+#endif /* CONFIG_NUMA */
+		ret = func(group, arg);
+		if (ret)
+			break;
+	}
+	return ret;
+}
@@ -146,6 +146,9 @@ extern int memory_group_register_static(int nid, unsigned long max_pages);
 extern int memory_group_register_dynamic(int nid, unsigned long unit_pages);
 extern int memory_group_unregister(int mgid);
 struct memory_group *memory_group_find_by_id(int mgid);
+typedef int (*walk_memory_groups_func_t)(struct memory_group *, void *);
+int walk_dynamic_memory_groups(int nid, walk_memory_groups_func_t func,
+			       struct memory_group *excluded, void *arg);
 #endif /* CONFIG_MEMORY_HOTPLUG_SPARSE */
 
 #ifdef CONFIG_MEMORY_HOTPLUG
...
@@ -752,11 +752,44 @@ static void auto_movable_stats_account_zone(struct auto_movable_stats *stats,
 #endif /* CONFIG_CMA */
 	}
 }
 
-static bool auto_movable_can_online_movable(int nid, unsigned long nr_pages)
+struct auto_movable_group_stats {
+	unsigned long movable_pages;
+	unsigned long req_kernel_early_pages;
+};
+
+static int auto_movable_stats_account_group(struct memory_group *group,
+					    void *arg)
+{
+	const int ratio = READ_ONCE(auto_movable_ratio);
+	struct auto_movable_group_stats *stats = arg;
+	long pages;
+
+	/*
+	 * We don't support modifying the config while the auto-movable online
+	 * policy is already enabled. Just avoid the division by zero below.
+	 */
+	if (!ratio)
+		return 0;
+
+	/*
+	 * Calculate how many early kernel pages this group requires to
+	 * satisfy the configured zone ratio.
+	 */
+	pages = group->present_movable_pages * 100 / ratio;
+	pages -= group->present_kernel_pages;
+	if (pages > 0)
+		stats->req_kernel_early_pages += pages;
+	stats->movable_pages += group->present_movable_pages;
+	return 0;
+}
+
+static bool auto_movable_can_online_movable(int nid, struct memory_group *group,
+					    unsigned long nr_pages)
 {
-	struct auto_movable_stats stats = {};
 	unsigned long kernel_early_pages, movable_pages;
+	struct auto_movable_group_stats group_stats = {};
+	struct auto_movable_stats stats = {};
 	pg_data_t *pgdat = NODE_DATA(nid);
 	struct zone *zone;
 	int i;
@@ -777,6 +810,21 @@ static bool auto_movable_can_online_movable(int nid, unsigned long nr_pages)
 	kernel_early_pages = stats.kernel_early_pages;
 	movable_pages = stats.movable_pages;
 
+	/*
+	 * Kernel memory inside dynamic memory group allows for more MOVABLE
+	 * memory within the same group. Remove the effect of all but the
+	 * current group from the stats.
+	 */
+	walk_dynamic_memory_groups(nid, auto_movable_stats_account_group,
+				   group, &group_stats);
+	if (kernel_early_pages <= group_stats.req_kernel_early_pages)
+		return false;
+	kernel_early_pages -= group_stats.req_kernel_early_pages;
+	movable_pages -= group_stats.movable_pages;
+
+	if (group && group->is_dynamic)
+		kernel_early_pages += group->present_kernel_pages;
+
 	/*
 	 * Test if we could online the given number of pages to ZONE_MOVABLE
 	 * and still stay in the configured ratio.
@@ -834,6 +882,10 @@ static struct zone *default_kernel_zone_for_pfn(int nid, unsigned long start_pfn
  * with unmovable allocations). While there are corner cases where it might
  * still work, it is barely relevant in practice.
  *
+ * Exceptions are dynamic memory groups, which allow for more MOVABLE
+ * memory within the same memory group -- because in that case, there is
+ * coordination within the single memory device managed by a single driver.
+ *
  * We rely on "present pages" instead of "managed pages", as the latter is
  * highly unreliable and dynamic in virtualized environments, and does not
  * consider boot time allocations. For example, memory ballooning adjusts the
@@ -899,12 +951,12 @@ static struct zone *auto_movable_zone_for_pfn(int nid,
 	 * nobody interferes, all will be MOVABLE if possible.
 	 */
 	nr_pages = max_pages - online_pages;
-	if (!auto_movable_can_online_movable(NUMA_NO_NODE, nr_pages))
+	if (!auto_movable_can_online_movable(NUMA_NO_NODE, group, nr_pages))
 		goto kernel_zone;
 
 #ifdef CONFIG_NUMA
 	if (auto_movable_numa_aware &&
-	    !auto_movable_can_online_movable(nid, nr_pages))
+	    !auto_movable_can_online_movable(nid, group, nr_pages))
 		goto kernel_zone;
 #endif /* CONFIG_NUMA */
...