Commit 2b1667e5 authored by Björn Töpel, committed by Alexei Starovoitov

xsk: Fix number of pinned pages/umem size discrepancy

For AF_XDP sockets, there was a discrepancy between the number of
pinned pages and the size of the umem region.

The size of the umem region is used to validate the AF_XDP descriptor
addresses. The logic that pinned the pages covered by the region only
took whole pages into consideration, creating a mismatch between the
size and pinned pages. A user could then pass AF_XDP addresses outside
the range of pinned pages, but still within the size of the region,
crashing the kernel.

This change correctly calculates the number of pages to be
pinned. Further, the size check for the aligned mode is
simplified. Now the code simply checks if the size is divisible by the
chunk size.

Fixes: bbff2f32 ("xsk: new descriptor addressing scheme")
Reported-by: Ciara Loftus <ciara.loftus@intel.com>
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Tested-by: Ciara Loftus <ciara.loftus@intel.com>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20200910075609.7904-1-bjorn.topel@gmail.com
parent fde6dedf
@@ -303,10 +303,10 @@ static int xdp_umem_account_pages(struct xdp_umem *umem)
 
 static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
 {
-	bool unaligned_chunks = mr->flags & XDP_UMEM_UNALIGNED_CHUNK_FLAG;
-	u32 chunk_size = mr->chunk_size, headroom = mr->headroom;
+	u32 npgs_rem, chunk_size = mr->chunk_size, headroom = mr->headroom;
+	bool unaligned_chunks = mr->flags & XDP_UMEM_UNALIGNED_CHUNK_FLAG;
 	u64 npgs, addr = mr->addr, size = mr->len;
-	unsigned int chunks, chunks_per_page;
+	unsigned int chunks, chunks_rem;
 	int err;
 
 	if (chunk_size < XDP_UMEM_MIN_CHUNK_SIZE || chunk_size > PAGE_SIZE) {
@@ -336,19 +336,18 @@ static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
 	if ((addr + size) < addr)
 		return -EINVAL;
 
-	npgs = size >> PAGE_SHIFT;
+	npgs = div_u64_rem(size, PAGE_SIZE, &npgs_rem);
+	if (npgs_rem)
+		npgs++;
+
 	if (npgs > U32_MAX)
 		return -EINVAL;
 
-	chunks = (unsigned int)div_u64(size, chunk_size);
+	chunks = (unsigned int)div_u64_rem(size, chunk_size, &chunks_rem);
 	if (chunks == 0)
 		return -EINVAL;
 
-	if (!unaligned_chunks) {
-		chunks_per_page = PAGE_SIZE / chunk_size;
-		if (chunks < chunks_per_page || chunks % chunks_per_page)
-			return -EINVAL;
-	}
+	if (!unaligned_chunks && chunks_rem)
+		return -EINVAL;
 
 	if (headroom >= chunk_size - XDP_PACKET_HEADROOM)
 		return -EINVAL;