Commit c78e9363 authored by Mel Gorman, committed by Linus Torvalds

mm: do not walk all of system memory during show_mem

It has been reported on very large machines that show_mem is taking almost
5 minutes to display information.  This is a serious problem if there is
an OOM storm.  The bulk of the cost is in show_mem doing a very expensive
PFN walk to give us the following information:

  Total RAM:       Also available as totalram_pages
  Highmem pages:   Also available as totalhigh_pages
  Reserved pages:  Can be inferred from the zone structure
  Shared pages:    PFN walk required
  Unshared pages:  PFN walk required
  Quick pages:     Per-cpu walk required

Only the shared/unshared page counts require a full PFN walk, but that
information is useless.  It is also inaccurate as page pins of unshared
pages would be accounted for as shared.  Even if the information was
accurate, I'm struggling to think how the shared/unshared information
could be useful for debugging OOM conditions.  Maybe it was useful before
rmap existed when reclaiming shared pages was costly but it is less
relevant today.

The PFN walk could be optimised a bit but why bother as the information is
useless.  This patch deletes the PFN walker and infers the total RAM,
highmem and reserved page counts from struct zone.  It omits the
shared/unshared page usage on the grounds that it is useless.  It also
corrects the reporting of HighMem as HighMem/MovableOnly as ZONE_MOVABLE
has similar problems to HighMem with respect to lowmem/highmem exhaustion.
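
In condensed form, the inference amounts to the per-zone accounting sketched
below (locking and printing omitted; note that this sketch accumulates the
reserved count across zones with +=, whereas the hunk below assigns it per
zone):

  unsigned long total = 0, reserved = 0, highmem = 0;
  pg_data_t *pgdat;
  int zoneid;

  for_each_online_pgdat(pgdat) {
          for (zoneid = 0; zoneid < MAX_NR_ZONES; zoneid++) {
                  struct zone *zone = &pgdat->node_zones[zoneid];

                  if (!populated_zone(zone))
                          continue;

                  /* pages present in the zone vs pages handed to the buddy allocator */
                  total += zone->present_pages;
                  reserved += zone->present_pages - zone->managed_pages;

                  /* reported as HighMem/MovableOnly */
                  if (is_highmem_idx(zoneid))
                          highmem += zone->present_pages;
          }
  }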
Signed-off-by: Mel Gorman <mgorman@suse.de>
Cc: David Rientjes <rientjes@google.com>
Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 4a099fb4
@@ -12,8 +12,7 @@
 void show_mem(unsigned int filter)
 {
 	pg_data_t *pgdat;
-	unsigned long total = 0, reserved = 0, shared = 0,
-		nonshared = 0, highmem = 0;
+	unsigned long total = 0, reserved = 0, highmem = 0;
 
 	printk("Mem-Info:\n");
 	show_free_areas(filter);
@@ -22,43 +21,27 @@ void show_mem(unsigned int filter)
 		return;
 
 	for_each_online_pgdat(pgdat) {
-		unsigned long i, flags;
+		unsigned long flags;
+		int zoneid;
 
 		pgdat_resize_lock(pgdat, &flags);
-		for (i = 0; i < pgdat->node_spanned_pages; i++) {
-			struct page *page;
-			unsigned long pfn = pgdat->node_start_pfn + i;
-
-			if (unlikely(!(i % MAX_ORDER_NR_PAGES)))
-				touch_nmi_watchdog();
-
-			if (!pfn_valid(pfn))
+		for (zoneid = 0; zoneid < MAX_NR_ZONES; zoneid++) {
+			struct zone *zone = &pgdat->node_zones[zoneid];
+			if (!populated_zone(zone))
 				continue;
 
-			page = pfn_to_page(pfn);
-
-			if (PageHighMem(page))
-				highmem++;
+			total += zone->present_pages;
+			reserved = zone->present_pages - zone->managed_pages;
 
-			if (PageReserved(page))
-				reserved++;
-			else if (page_count(page) == 1)
-				nonshared++;
-			else if (page_count(page) > 1)
-				shared += page_count(page) - 1;
-
-			total++;
+			if (is_highmem_idx(zoneid))
+				highmem += zone->present_pages;
 		}
 		pgdat_resize_unlock(pgdat, &flags);
 	}
 
 	printk("%lu pages RAM\n", total);
-#ifdef CONFIG_HIGHMEM
-	printk("%lu pages HighMem\n", highmem);
-#endif
+	printk("%lu pages HighMem/MovableOnly\n", highmem);
 	printk("%lu pages reserved\n", reserved);
-	printk("%lu pages shared\n", shared);
-	printk("%lu pages non-shared\n", nonshared);
 #ifdef CONFIG_QUICKLIST
 	printk("%lu pages in pagetable cache\n",
 		quicklist_total_size());
...
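
After the change, the summary printed by show_mem() reduces to the counters
above.  On a system without highmem, the tail of the report would look roughly
like this (free-area dump elided; the numbers are invented for illustration):

  Mem-Info:
  ...
  4194304 pages RAM
  0 pages HighMem/MovableOnly
  81320 pages reserved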