Commit 06fe9b1d authored by Jens Axboe

io_uring: don't attempt to mmap larger than what the user asks for

If IORING_FEAT_SINGLE_MMAP is ignored, as can happen if an application
uses an ancient liburing or performs the setup manually, then three
mmap calls are required to map the rings into userspace. The kernel
will still have collapsed the mappings, but userspace may ask to map
them individually. If so, then we should not use the full number of
ring pages, as it may exceed the partial mapping. Doing so yields an
-EFAULT from vm_insert_pages(), as we pass in more pages than what the
application asked for.

Cap the number of pages to match what the application asked for in
that particular mapping operation.
Reported-by: Lucas Mülling <lmulling@proton.me>
Link: https://github.com/axboe/liburing/issues/1157
Fixes: 3ab1db3c ("io_uring: get rid of remap_pfn_range() for mapping rings/sqes")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
parent 1613e604
@@ -244,6 +244,7 @@ __cold int io_uring_mmap(struct file *file, struct vm_area_struct *vma)
 	struct io_ring_ctx *ctx = file->private_data;
 	size_t sz = vma->vm_end - vma->vm_start;
 	long offset = vma->vm_pgoff << PAGE_SHIFT;
+	unsigned int npages;
 	void *ptr;

 	ptr = io_uring_validate_mmap_request(file, vma->vm_pgoff, sz);
@@ -253,8 +254,8 @@ __cold int io_uring_mmap(struct file *file, struct vm_area_struct *vma)
 	switch (offset & IORING_OFF_MMAP_MASK) {
 	case IORING_OFF_SQ_RING:
 	case IORING_OFF_CQ_RING:
-		return io_uring_mmap_pages(ctx, vma, ctx->ring_pages,
-						ctx->n_ring_pages);
+		npages = min(ctx->n_ring_pages, (sz + PAGE_SIZE - 1) >> PAGE_SHIFT);
+		return io_uring_mmap_pages(ctx, vma, ctx->ring_pages, npages);
 	case IORING_OFF_SQES:
 		return io_uring_mmap_pages(ctx, vma, ctx->sqe_pages,
 					   ctx->n_sqe_pages);