Commit 6b4ebc3a authored by Davidlohr Bueso, committed by Linus Torvalds

mm,vmacache: optimize overflow system-wide flushing

For single-threaded workloads, we can avoid flushing and iterating through
the entire task list, making the whole function much faster and requiring
only a single atomic read of mm_users.
Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
Suggested-by: Oleg Nesterov <oleg@redhat.com>
Cc: Aswin Chandramouleeswaran <aswin@hp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 4f115147
@@ -17,6 +17,16 @@ void vmacache_flush_all(struct mm_struct *mm)
 {
 	struct task_struct *g, *p;
 
+	/*
+	 * Single threaded tasks need not iterate the entire
+	 * list of process. We can avoid the flushing as well
+	 * since the mm's seqnum was increased and don't have
+	 * to worry about other threads' seqnum. Current's
+	 * flush will occur upon the next lookup.
+	 */
+	if (atomic_read(&mm->mm_users) == 1)
+		return;
+
 	rcu_read_lock();
 	for_each_process_thread(g, p) {
 		/*
...