Commit 2e50b261 authored by Inbar Karmy, committed by Saeed Mahameed

net/mlx5e: Set page to null in case dma mapping fails

Currently, when dma mapping fails, put_page() is called,
but the page pointer is not set to NULL. Later, in the page-reuse
handling in mlx5e_free_rx_descs(), mlx5e_page_release() is called a
second time, improperly doing a dma_unmap (of a never-mapped address)
and an extra put_page().
Prevent this by setting the page pointer to NULL when dma_map_page()
fails.

Fixes: accd5883 ("net/mlx5e: Introduce RX Page-Reuse")
Signed-off-by: Inbar Karmy <inbark@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Cc: kernel-team@fb.com
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
parent 2a8d6065
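
To see why the stale pointer is harmful, here is a minimal sketch of
the release side (abbreviated and simplified for illustration; the
helper name and the surrounding teardown logic are paraphrased from
the driver, not quoted verbatim):

	/* The release helper unconditionally unmaps and drops a
	 * reference for whatever page the descriptor points at.
	 */
	static void page_release_sketch(struct mlx5e_rq *rq,
					struct mlx5e_dma_info *dma_info)
	{
		dma_unmap_page(rq->pdev, dma_info->addr,
			       RQ_PAGE_SIZE(rq), rq->buff.map_dir);
		put_page(dma_info->page);
	}

	/* Teardown treats a non-NULL page pointer as "this descriptor
	 * still owns a mapped page". A dangling pointer left behind by
	 * a failed dma_map_page() therefore triggers a second release:
	 * an unmap of a never-mapped address plus an extra put_page().
	 */
	if (dma_info->page)
		page_release_sketch(rq, dma_info);
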
@@ -215,22 +215,20 @@ static inline bool mlx5e_rx_cache_get(struct mlx5e_rq *rq,
 static inline int mlx5e_page_alloc_mapped(struct mlx5e_rq *rq,
					  struct mlx5e_dma_info *dma_info)
 {
-	struct page *page;
-
 	if (mlx5e_rx_cache_get(rq, dma_info))
		return 0;
 
-	page = dev_alloc_pages(rq->buff.page_order);
-	if (unlikely(!page))
+	dma_info->page = dev_alloc_pages(rq->buff.page_order);
+	if (unlikely(!dma_info->page))
		return -ENOMEM;
 
-	dma_info->addr = dma_map_page(rq->pdev, page, 0,
+	dma_info->addr = dma_map_page(rq->pdev, dma_info->page, 0,
				      RQ_PAGE_SIZE(rq), rq->buff.map_dir);
	if (unlikely(dma_mapping_error(rq->pdev, dma_info->addr))) {
-		put_page(page);
+		put_page(dma_info->page);
+		dma_info->page = NULL;
		return -ENOMEM;
	}
 
-	dma_info->page = page;
-
	return 0;
 }
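
The fix follows a common kernel error-path idiom: after cleaning up on
failure, leave the object in a state that later teardown code can
recognize as "nothing to release". A self-contained sketch of the
pattern with hypothetical names (buf_info, buf_alloc_mapped, and
buf_free_mapped are illustrative, not driver code):

	#include <linux/dma-mapping.h>
	#include <linux/gfp.h>
	#include <linux/mm.h>

	struct buf_info {
		struct page *page;
		dma_addr_t addr;
	};

	static int buf_alloc_mapped(struct device *dev, struct buf_info *bi)
	{
		bi->page = dev_alloc_pages(0);
		if (unlikely(!bi->page))
			return -ENOMEM;

		bi->addr = dma_map_page(dev, bi->page, 0, PAGE_SIZE,
					DMA_FROM_DEVICE);
		if (unlikely(dma_mapping_error(dev, bi->addr))) {
			put_page(bi->page);
			bi->page = NULL;	/* mark: nothing left to release */
			return -ENOMEM;
		}

		return 0;
	}

	static void buf_free_mapped(struct device *dev, struct buf_info *bi)
	{
		if (!bi->page)	/* error path already released everything */
			return;
		dma_unmap_page(dev, bi->addr, PAGE_SIZE, DMA_FROM_DEVICE);
		put_page(bi->page);
		bi->page = NULL;
	}

With this shape, buf_free_mapped() is safe to call once per descriptor
during teardown regardless of whether the mapping step ever succeeded.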