Commit 436c6541 authored by Hugh Dickins, committed by Linus Torvalds

memcgroup: fix zone isolation OOM

mem_cgroup_charge_common shows a tendency to OOM without good reason when
a memhog goes well beyond its rss limit but with plenty of swap available.
Seen on x86 but not on PowerPC; seen when the next patch omits swapcache
from memcgroup, but we presume it can happen without.

mem_cgroup_isolate_pages is not quite satisfying reclaim's criteria for OOM
avoidance.  Already it has to scan beyond the nr_to_scan limit when it
finds a !LRU page, an active page when handling inactive, or an inactive
page when handling active.  It needs to do exactly the same when it finds a
page from the wrong zone (the x86 tests had two zones, the PowerPC tests
had only one).

Don't increment scan and then decrement it in these cases; just move the
increment down.  Fix the recent off-by-one when checking against
nr_to_scan.  Cut out "Check if the meta page went away from under us",
presumably left over from early debugging: no amount of such checks could
save us if this list really were being updated without locking.
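
For illustration, here is a standalone userspace sketch of the accounting
change (a simplified model, not the kernel code: the 1000-entry list
alternating between the target zone and another zone, and nr_to_scan = 32,
are invented for the example):

#include <stdio.h>

int main(void)
{
	unsigned long nr_to_scan = 32, scan, taken, i;

	/* Old accounting: every entry looked at spends scan budget,
	 * even a wrong-zone page that is merely skipped, and the
	 * "scan++ > nr_to_scan" test overshoots by one. */
	taken = 0;
	for (i = 0, scan = 0; i < 1000; i++) {
		if (scan++ > nr_to_scan)
			break;
		if (i % 2)		/* wrong zone: skipped... */
			continue;	/* ...but budget already spent */
		taken++;
	}
	printf("old: isolated %lu of %lu wanted\n", taken, nr_to_scan);

	/* New accounting: scan is only incremented once a page has
	 * passed every filter, so skips cost nothing. */
	taken = 0;
	for (i = 0, scan = 0; i < 1000; i++) {
		if (scan >= nr_to_scan)
			break;
		if (i % 2)
			continue;	/* skipped for free */
		scan++;
		taken++;
	}
	printf("new: isolated %lu of %lu wanted\n", taken, nr_to_scan);
	return 0;
}

This prints "old: isolated 17 of 32 wanted" against "new: isolated 32 of
32 wanted": the old loop gives up after examining nr_to_scan + 1 entries
no matter how few it isolated, which is what lets reclaim conclude,
wrongly, that nothing more can be freed.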

This change does make the unlimited scan while holding two spinlocks
even worse - bad for latency and bad for containment; but that's a
separate issue which is better left to be fixed a little later.
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Cc: Pavel Emelianov <xemul@openvz.org>
Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Paul Menage <menage@google.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Kirill Korotaev <dev@sw.ru>
Cc: Herbert Poetzl <herbert@13thfloor.at>
Cc: David Rientjes <rientjes@google.com>
Cc: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent ff7283fa
@@ -260,24 +260,20 @@ unsigned long mem_cgroup_isolate_pages(unsigned long nr_to_scan,
 	spin_lock(&mem_cont->lru_lock);
 	scan = 0;
 	list_for_each_entry_safe_reverse(pc, tmp, src, lru) {
-		if (scan++ > nr_to_scan)
+		if (scan >= nr_to_scan)
 			break;
 		page = pc->page;
 		VM_BUG_ON(!pc);
 
-		if (unlikely(!PageLRU(page))) {
-			scan--;
+		if (unlikely(!PageLRU(page)))
 			continue;
-		}
 
 		if (PageActive(page) && !active) {
 			__mem_cgroup_move_lists(pc, true);
-			scan--;
 			continue;
 		}
 		if (!PageActive(page) && active) {
 			__mem_cgroup_move_lists(pc, false);
-			scan--;
 			continue;
 		}
 
@@ -288,13 +284,8 @@ unsigned long mem_cgroup_isolate_pages(unsigned long nr_to_scan,
 		if (page_zone(page) != z)
 			continue;
 
-		/*
-		 * Check if the meta page went away from under us
-		 */
-		if (!list_empty(&pc->lru))
-			list_move(&pc->lru, &pc_list);
-		else
-			continue;
+		scan++;
+		list_move(&pc->lru, &pc_list);
 
 		if (__isolate_lru_page(page, mode) == 0) {
 			list_move(&page->lru, dst);