Commit 2e7c21ea authored by Andrew Morton, committed by Linus Torvalds

[PATCH] exit_mmap fix for 64bit->32bit execs

The recent exit_mmap() changes broke PPC64 when 64-bit applications exec
32-bit ones.  ia32-on-ia64 was broken as well.

What is happening is that load_elf_binary() sets TIF_32BIT (via
SET_PERSONALITY) _before_ running exit_mmap().  So when we're unmapping the
vma's of the old image, we are running under the new image's personality.

This causes PPC64 to pass a 32-bit TASK_SIZE to unmap_vmas(), even when the
execing process had a 64-bit image.  Because unmap_vmas() is not provided
with the correct virtual address span it does not unmap all the old image's
vma's and we go BUG_ON(mm->map_count) in exit_mmap().

The early SET_PERSONALITY() is required before we look up the interpreter
because the lookup of the executable has to happen under the alternate root
which SET_PERSONALITY() may set.

Unfortunately this means that we're running flush_old_exec() under the new
exec's personality.  Hence this bug.

So what the patch does is to simply pass ~0UL into unmap_vmas(), which tells
it to unmap everything regardless of the current personality.  This is what
the old open-coded VMA killer was doing.

There remains the problem that some architectures are sometimes passing the
incorrect TASK_SIZE into tlb_finish_mmu().  They've always been doing that.
parent b91c1b1b
@@ -1265,8 +1265,9 @@ void exit_mmap(struct mm_struct *mm)
 	tlb = tlb_gather_mmu(mm, 1);
 	flush_cache_mm(mm);
+	/* Use ~0UL here to ensure all VMAs in the mm are unmapped */
 	mm->map_count -= unmap_vmas(&tlb, mm, mm->mmap, 0,
-					TASK_SIZE, &nr_accounted);
+					~0UL, &nr_accounted);
 	vm_unacct_memory(nr_accounted);
 	BUG_ON(mm->map_count);	/* This is just debugging */
 	clear_page_tables(tlb, FIRST_USER_PGD_NR, USER_PTRS_PER_PGD);