Commit 333291ce authored by Andrii Nakryiko, committed by Alexei Starovoitov

bpf: Fix bug in mmap() implementation for BPF array map

The mmap() subsystem allows a user-space application to memory-map a region
with an initial page offset. This wasn't taken into account in the initial
implementation of BPF array memory-mapping, so the wrong pages, ignoring the
requested page shift, were memory-mapped into user-space. This patch fixes
that gap and adds a selftest for the scenario.
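For illustration only (not part of the patch), here is a minimal user-space
sketch of the affected scenario, assuming libbpf's bpf_map_create() API; the
map name and sizing are hypothetical:

#include <unistd.h>
#include <sys/mman.h>
#include <bpf/bpf.h>

int main(void)
{
	long page_size = sysconf(_SC_PAGE_SIZE);
	LIBBPF_OPTS(bpf_map_create_opts, opts, .map_flags = BPF_F_MMAPABLE);
	/* hypothetical map: one entry whose value spans 4 pages */
	int fd = bpf_map_create(BPF_MAP_TYPE_ARRAY, "mmap_demo", sizeof(__u32),
				4 * page_size, 1, &opts);
	if (fd < 0)
		return 1;

	/* Request the second page of the value area (pg_off = 1 page).
	 * Before this fix, the kernel ignored vm_pgoff and mapped the
	 * first page here instead.
	 */
	void *p = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED,
		       fd, page_size);
	if (p == MAP_FAILED)
		return 1;

	munmap(p, page_size);
	close(fd);
	return 0;
}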

Fixes: fc970227 ("bpf: Add mmap() support for BPF_MAP_TYPE_ARRAY")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20200512235925.3817805-1-andriin@fb.com
parent 23ad0466
--- a/kernel/bpf/arraymap.c
+++ b/kernel/bpf/arraymap.c
@@ -486,7 +486,12 @@ static int array_map_mmap(struct bpf_map *map, struct vm_area_struct *vma)
 	if (!(map->map_flags & BPF_F_MMAPABLE))
 		return -EINVAL;
 
-	return remap_vmalloc_range(vma, array_map_vmalloc_addr(array), pgoff);
+	if (vma->vm_pgoff * PAGE_SIZE + (vma->vm_end - vma->vm_start) >
+	    PAGE_ALIGN((u64)array->map.max_entries * array->elem_size))
+		return -EINVAL;
+
+	return remap_vmalloc_range(vma, array_map_vmalloc_addr(array),
+				   vma->vm_pgoff + pgoff);
 }
 
 const struct bpf_map_ops array_map_ops = {
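The added check rejects a request that would run past the end of the array's
data pages; as a worked example with hypothetical numbers: if
max_entries * elem_size spans 3 pages, a 3-page mapping at vm_pgoff = 1 page
gives 1 * PAGE_SIZE + 3 * PAGE_SIZE = 4 pages, which exceeds
PAGE_ALIGN(3 pages), so the mmap() fails with -EINVAL, while the same 3-page
mapping at vm_pgoff = 0 is allowed. The second change forwards vma->vm_pgoff
(plus the internal pgoff that skips the struct bpf_array header) to
remap_vmalloc_range(), so the requested page shift is actually honored.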
--- a/tools/testing/selftests/bpf/prog_tests/mmap.c
+++ b/tools/testing/selftests/bpf/prog_tests/mmap.c
@@ -217,6 +217,14 @@ void test_mmap(void)
 	munmap(tmp2, 4 * page_size);
 
+	/* map all 4 pages, but with pg_off=1 page, should fail */
+	tmp1 = mmap(NULL, 4 * page_size, PROT_READ, MAP_SHARED | MAP_FIXED,
+		    data_map_fd, page_size /* initial page shift */);
+	if (CHECK(tmp1 != MAP_FAILED, "adv_mmap7", "unexpected success")) {
+		munmap(tmp1, 4 * page_size);
+		goto cleanup;
+	}
+
 	tmp1 = mmap(NULL, map_sz, PROT_READ, MAP_SHARED, data_map_fd, 0);
 	if (CHECK(tmp1 == MAP_FAILED, "last_mmap", "failed %d\n", errno))
 		goto cleanup;