Commit 47da2976 authored by Johannes Berg, committed by Richard Weinberger

um: mm: check more comprehensively for stub changes

If userspace tries to change the stub, we need to kill it,
because otherwise it can escape the virtual machine. In a
few cases the stub checks were insufficient, e.g. if userspace
just tries to

	mmap(0x100000 - 0x1000, 0x3000, ...)

it could succeed in getting a new private/anonymous mapping
that replaces the stubs. Fix this by checking everywhere, and
by checking for _overlap_, not just direct changes.

Cc: stable@vger.kernel.org
Fixes: 3963333f ("uml: cover stubs with a VMA")
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
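
To illustrate the difference between the old containment check and the new overlap check, here is a minimal standalone sketch (not kernel code); the STUB_START/STUB_END values are assumptions chosen to match the commit message's example, not taken from the kernel headers:

/*
 * Minimal sketch contrasting the old and new checks.
 * STUB_START/STUB_END values are assumed, purely for illustration.
 */
#include <stdbool.h>
#include <stdio.h>

#define STUB_START 0x100000UL	/* assumed stub base */
#define STUB_END   0x102000UL	/* assumed stub end (two pages) */

/* Old style: only rejects requests that *start* inside the stub area. */
static bool old_rejects(unsigned long addr, unsigned long len)
{
	return addr >= STUB_START && addr < STUB_END;
}

/* New style: rejects any request whose [addr, addr + len) overlaps the stubs. */
static bool new_rejects(unsigned long addr, unsigned long len)
{
	return addr + len > STUB_START && addr < STUB_END;
}

int main(void)
{
	/*
	 * The mapping from the commit message: it starts one page below
	 * the stubs but is large enough to cover them completely.
	 */
	unsigned long addr = 0x100000 - 0x1000, len = 0x3000;

	printf("old check rejects: %d\n", old_rejects(addr, len));	/* prints 0 */
	printf("new check rejects: %d\n", new_rejects(addr, len));	/* prints 1 */
	return 0;
}

(In flush_tlb_page() below, a plain containment check suffices, presumably because it operates on a single page-aligned address rather than a range.)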
parent e1e22d0d
@@ -125,6 +125,9 @@ static int add_mmap(unsigned long virt, unsigned long phys, unsigned long len,
 	struct host_vm_op *last;
 	int fd = -1, ret = 0;
 
+	if (virt + len > STUB_START && virt < STUB_END)
+		return -EINVAL;
+
 	if (hvc->userspace)
 		fd = phys_mapping(phys, &offset);
 	else
@@ -162,7 +165,7 @@ static int add_munmap(unsigned long addr, unsigned long len,
 	struct host_vm_op *last;
 	int ret = 0;
 
-	if ((addr >= STUB_START) && (addr < STUB_END))
+	if (addr + len > STUB_START && addr < STUB_END)
 		return -EINVAL;
 
 	if (hvc->index != 0) {
@@ -192,6 +195,9 @@ static int add_mprotect(unsigned long addr, unsigned long len,
 	struct host_vm_op *last;
 	int ret = 0;
 
+	if (addr + len > STUB_START && addr < STUB_END)
+		return -EINVAL;
+
 	if (hvc->index != 0) {
 		last = &hvc->ops[hvc->index - 1];
 		if ((last->type == MPROTECT) &&
@@ -472,6 +478,10 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long address)
 	struct mm_id *mm_id;
 
 	address &= PAGE_MASK;
+
+	if (address >= STUB_START && address < STUB_END)
+		goto kill;
+
 	pgd = pgd_offset(mm, address);
 	if (!pgd_present(*pgd))
 		goto kill;