Commit 36798d5a authored by Artemy Kovalyov, committed by Jason Gunthorpe

RDMA/umem: Fix ib_umem_find_best_pgsz()

Except for the last entry, the ending iova alignment sets the maximum
possible page size as the low bits of the iova must be zero when starting
the next chunk.

Fixes: 4a353399 ("RDMA/umem: Add API to find best driver supported page size in an MR")
Link: https://lore.kernel.org/r/20200128135612.174820-1-leon@kernel.org
Signed-off-by: Artemy Kovalyov <artemyko@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Tested-by: Gal Pressman <galpress@amazon.com>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
parent ea660ad7
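The reasoning in the commit message can be pictured with a small, purely illustrative user-space sketch (the boundary value below is made up; this is not kernel code): once an interior chunk ends at an iova whose low bits are non-zero for a given page size, that page size can no longer be used, because the next chunk would have to begin in the middle of a page.

/* Illustrative only: check which page sizes survive an interior chunk
 * boundary at iova 0x216000 (a made-up example value).
 */
#include <stdio.h>

int main(void)
{
	unsigned long end_iova = 0x216000;	/* interior chunk boundary */
	unsigned long pgsz;

	for (pgsz = 1UL << 12; pgsz <= 1UL << 21; pgsz <<= 1)
		printf("%7lu KiB page: %s\n", pgsz >> 10,
		       (end_iova & (pgsz - 1)) ? "blocked by boundary" : "usable");
	return 0;
}

With this example boundary, 4 KiB and 8 KiB pages remain usable while 16 KiB and larger are ruled out, which is exactly the constraint the patch encodes by OR-ing the ending iova into the mask.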
@@ -166,10 +166,13 @@ unsigned long ib_umem_find_best_pgsz(struct ib_umem *umem,
 		 * for any address.
 		 */
 		mask |= (sg_dma_address(sg) + pgoff) ^ va;
-		if (i && i != (umem->nmap - 1))
-			/* restrict by length as well for interior SGEs */
-			mask |= sg_dma_len(sg);
 		va += sg_dma_len(sg) - pgoff;
+		/* Except for the last entry, the ending iova alignment sets
+		 * the maximum possible page size as the low bits of the iova
+		 * must be zero when starting the next chunk.
+		 */
+		if (i != (umem->nmap - 1))
+			mask |= va;
 		pgoff = 0;
 	}
 	best_pg_bit = rdma_find_pg_bit(mask, pgsz_bitmap);
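For reference, below is a rough, self-contained user-space model of the mask trick this hunk relies on; the chunk array, the iova, and the supported-page-size bitmap are invented for the example, and the first-chunk pgoff handling is omitted, so it only approximates what ib_umem_find_best_pgsz() and rdma_find_pg_bit() actually do.

/* Rough user-space model of the patched loop (illustrative only; the
 * chunk array, iova and supported bitmap below are made up, and the
 * first-chunk pgoff handling is left out for brevity).
 */
#include <stdio.h>

struct chunk { unsigned long dma_addr, len; };

int main(void)
{
	struct chunk chunks[] = {
		{ 0x7f0000216000UL, 0x6000 },	/* interior chunk */
		{ 0x7f0000400000UL, 0x4000 },	/* last chunk */
	};
	unsigned long supported = (1UL << 12) | (1UL << 16) | (1UL << 21);
	unsigned long va = 0x10216000;		/* iova the MR starts at */
	unsigned long mask = 0, best;
	unsigned int i, n = sizeof(chunks) / sizeof(chunks[0]);

	for (i = 0; i < n; i++) {
		/* low bits where the dma address and the iova differ rule
		 * out the corresponding page sizes */
		mask |= chunks[i].dma_addr ^ va;
		va += chunks[i].len;
		/* except for the last entry, the ending iova alignment
		 * also bounds the page size */
		if (i != n - 1)
			mask |= va;
	}
	/* the largest candidate page size is the lowest set bit of mask
	 * (capped at 2 MiB here); then fall back to a supported size */
	best = 1UL << __builtin_ctzl(mask | (1UL << 21));
	while (best && !(supported & best))
		best >>= 1;
	printf("best page size: 0x%lx\n", best);	/* 0x1000 here */
	return 0;
}

In this made-up layout the interior chunk ends at an iova aligned only to 16 KiB and the supported sizes are 4 KiB, 64 KiB and 2 MiB, so the model falls back to 4 KiB, mirroring how the patched code lets the ending iova of interior SGEs, rather than their raw length, limit the chosen page size.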