Commit 421d9919 authored by Christoph Lameter, committed by Linus Torvalds

quicklist: Set tlb->need_flush if pages are remaining in quicklist 0

This ensures that the quicklists are drained. Otherwise draining may only
occur when the processor reaches an idle state.

Fixes fatal leakage of pgd_t's on 2.6.22 and later.
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Reported-by: Dhaval Giani <dhaval@linux.vnet.ibm.com>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 3811dbf6
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -14,6 +14,7 @@
 #define _ASM_GENERIC__TLB_H

 #include <linux/swap.h>
+#include <linux/quicklist.h>
 #include <asm/pgalloc.h>
 #include <asm/tlbflush.h>
@@ -85,6 +86,9 @@ tlb_flush_mmu(struct mmu_gather *tlb, unsigned long start, unsigned long end)
 static inline void
 tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start, unsigned long end)
 {
+#ifdef CONFIG_QUICKLIST
+	tlb->need_flush += &__get_cpu_var(quicklist)[0].nr_pages != 0;
+#endif
 	tlb_flush_mmu(tlb, start, end);

 	/* keep the page table cache within bounds */
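
For illustration only, here is a minimal user-space sketch of the idea behind the hunk above. It is not kernel code: struct quicklist, struct mmu_gather, the single-entry quicklist[] array, and the assumption that the quicklist drain only runs when a flush is actually pending are simplifications made for this sketch. It shows why counting leftover quicklist-0 pages as flush work keeps tlb_flush_mmu() from returning early, so cached page-table pages are reclaimed during teardown rather than lingering until the CPU next enters the idle loop.

/* quicklist_need_flush_sketch.c - illustrative user-space model, not kernel code. */
#include <stdio.h>

struct quicklist {
	int nr_pages;			/* freed page-table pages cached for reuse */
};

struct mmu_gather {
	int need_flush;			/* nonzero when there is flush work pending */
};

/* Stand-in for the per-CPU quicklist array; one CPU, one list here. */
static struct quicklist quicklist[1];

static void quicklist_trim(void)
{
	/* Model of the drain: hand the cached pages back to the allocator. */
	printf("trimming %d cached page(s)\n", quicklist[0].nr_pages);
	quicklist[0].nr_pages = 0;
}

static void tlb_flush_mmu(struct mmu_gather *tlb)
{
	if (!tlb->need_flush)
		return;			/* early exit: nothing flushed, nothing drained */
	tlb->need_flush = 0;
	/* (The TLB flush itself would happen here.) */
	quicklist_trim();		/* assumption for the sketch: drain rides on the flush path */
}

static void tlb_finish_mmu(struct mmu_gather *tlb)
{
	/* The fix: pages left in quicklist 0 count as pending flush work. */
	tlb->need_flush += quicklist[0].nr_pages != 0;
	tlb_flush_mmu(tlb);
}

int main(void)
{
	struct mmu_gather tlb = { .need_flush = 0 };

	quicklist[0].nr_pages = 3;	/* e.g. pgd pages freed earlier in teardown */
	tlb_finish_mmu(&tlb);		/* drains now rather than at the next idle loop */
	return 0;
}

Compiled with a plain cc and run, the sketch reports the three cached pages being trimmed; remove the need_flush line and tlb_flush_mmu() returns early, so nothing is drained.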