Commit f98192b6 authored by Oleg Nesterov, committed by Sasha Levin

uprobes: Fix the memcg accounting

[ Upstream commit 6c4687cc ]

__replace_page() wrongly calls mem_cgroup_cancel_charge() on the "success" path;
it should only do this if page_check_address() fails.

This means that every enable/disable leads to an unbalanced mem_cgroup_uncharge()
from put_page(old_page); it is trivial to underflow page_counter->count and
trigger OOM.
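
For context, the charge API introduced by 00501b53 ("mm: memcontrol: rewrite charge API")
expects every mem_cgroup_try_charge() to be balanced by exactly one of
mem_cgroup_commit_charge() (on success, after which the charge is released only when the
page is finally freed) or mem_cgroup_cancel_charge() (on failure, when the page is never
installed). The sketch below only illustrates that contract, using the two-argument
signatures that appear in this backport's diff; it is not the actual __replace_page()
code, and install_new_page() is a hypothetical stand-in for the page-table work:

#include <linux/mm.h>
#include <linux/memcontrol.h>

/* Minimal sketch of the try/commit/cancel contract (not actual kernel code). */
static int charge_and_map_new_page(struct mm_struct *mm, struct page *kpage)
{
	struct mem_cgroup *memcg;
	int err;

	/* Reserve the memcg charge up front, before taking any locks. */
	err = mem_cgroup_try_charge(kpage, mm, GFP_KERNEL, &memcg);
	if (err)
		return err;

	if (!install_new_page(mm, kpage)) {	/* hypothetical helper */
		/* The new page was never mapped: drop the reservation here. */
		mem_cgroup_cancel_charge(kpage, memcg);
		return -EAGAIN;
	}

	/*
	 * Success: make the charge permanent.  From here on it is released
	 * only when the page's last reference is dropped, so also calling
	 * mem_cgroup_cancel_charge() would uncharge the counter a second
	 * time -- the underflow this patch fixes.
	 */
	mem_cgroup_commit_charge(kpage, memcg, false);
	return 0;
}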
Reported-and-tested-by: Brenden Blanco <bblanco@plumgrid.com>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Reviewed-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Michal Hocko <mhocko@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vladimir Davydov <vdavydov@virtuozzo.com>
Cc: stable@vger.kernel.org # 3.17+
Fixes: 00501b53 ("mm: memcontrol: rewrite charge API")
Link: http://lkml.kernel.org/r/20160817153629.GB29724@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
parent cc2082d1
@@ -179,8 +179,10 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
 	mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
 	err = -EAGAIN;
 	ptep = page_check_address(page, mm, addr, &ptl, 0);
-	if (!ptep)
+	if (!ptep) {
+		mem_cgroup_cancel_charge(kpage, memcg);
 		goto unlock;
+	}
 
 	get_page(kpage);
 	page_add_new_anon_rmap(kpage, vma, addr);
@@ -207,7 +209,6 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
 
 	err = 0;
  unlock:
-	mem_cgroup_cancel_charge(kpage, memcg);
 	mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
 	unlock_page(page);
 	return err;