Commit 7872168a authored by Shiraz Saleem, committed by Jason Gunthorpe

RDMA/umem: Handle page combining avoidance correctly in ib_umem_add_sg_table()

The flag update_cur_sg tracks whether contiguous pages from a new set of
page_list pages can be merged into the SGE passed into
ib_umem_add_sg_table(). If this flag is true but the total segment length
would exceed the max_seg_size supported by HW, we avoid combining into that
SGE, move to a new SGE (x), and merge the 'len' pages into it. However, if
i < npages, the next iteration can incorrectly merge its 'len' contiguous
pages into x instead of into a new SGE, since update_cur_sg is still true.

Always reset update_cur_sg to false after the check that merges pages into
the first SGE passed in to ib_umem_add_sg_table(). Also, prevent a new
SGE's segment length from ever exceeding the HW max_seg_sz.
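
As a rough illustration of the second part of the fix, the new loop bound
means a single run of contiguous pages can never exceed
max_seg_sz >> PAGE_SHIFT pages. A minimal user-space sketch of that
arithmetic (the 4K page size, i.e. PAGE_SHIFT of 12, is an assumption,
not part of the patch):

 #include <stdio.h>

 #define PAGE_SHIFT 12	/* assumption: 4K pages, typical on x86-64 */

 int main(void)
 {
 	unsigned long max_seg_sz = 64 * 1024;	/* the hfi1 default cited below */

 	/* The added bound "len < (max_seg_sz >> PAGE_SHIFT)" stops a run
 	 * after this many pages, so a new SGE can never exceed max_seg_sz. */
 	printf("max pages per SGE: %lu\n", max_seg_sz >> PAGE_SHIFT); /* 16 */
 	return 0;
 }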

This bug causes a crash on hfi1, where max_seg_sz defaults to 64K: when
__ib_umem_release() unfolds the SGEs, one of them points to a bad page
pointer.

 TEST comp-wfr.perfnative.STL-22166-WDT _ perftest native 2-Write_4097QP_4MB STARTING at 1555387093
 BUG: Bad page state in process ib_write_bw  pfn:7ebca0
 page:ffffcd675faf2800 count:0 mapcount:1 mapping:0000000000000000 index:0x1
 flags: 0x17ffffc0000000()
 raw: 0017ffffc0000000 dead000000000100 dead000000000200 0000000000000000
 raw: 0000000000000001 0000000000000000 0000000000000000 0000000000000000
 page dumped because: nonzero mapcount
 CPU: 18 PID: 15853 Comm: ib_write_bw Tainted: G    B             5.1.0-rc4 #1
 Hardware name: Intel Corporation S2600CWR/S2600CW, BIOS SE5C610.86B.01.01.0014.121820151719 12/18/2015
 Call Trace:
  dump_stack+0x5a/0x73
  bad_page+0xf5/0x10f
  free_pcppages_bulk+0x62c/0x680
  free_unref_page+0x54/0x70
  __ib_umem_release+0x148/0x1a0 [ib_uverbs]
  ib_umem_release+0x22/0x80 [ib_uverbs]
  rvt_dereg_mr+0x67/0xb0 [rdmavt]
  ib_dereg_mr_user+0x37/0x60 [ib_core]
  destroy_hw_idr_uobject+0x1c/0x50 [ib_uverbs]
  uverbs_destroy_uobject+0x2e/0x180 [ib_uverbs]
  uobj_destroy+0x4d/0x60 [ib_uverbs]
  __uobj_get_destroy+0x33/0x50 [ib_uverbs]
  __uobj_perform_destroy+0xa/0x30 [ib_uverbs]
  ib_uverbs_dereg_mr+0x66/0x90 [ib_uverbs]
  ib_uverbs_write+0x3e1/0x500 [ib_uverbs]
  vfs_write+0xad/0x1b0
  ksys_write+0x5a/0xd0
  do_syscall_64+0x5b/0x180
  entry_SYSCALL_64_after_hwframe+0x44/0xa9

Fixes: d10bcf94 ("RDMA/umem: Combine contiguous PAGE_SIZE regions in SGEs")
Tested-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
parent 923abb9d
@@ -101,18 +101,22 @@ static struct scatterlist *ib_umem_add_sg_table(struct scatterlist *sg,
 		 * at i
 		 */
 		for (len = 0; i != npages &&
-			      first_pfn + len == page_to_pfn(page_list[i]);
+			      first_pfn + len == page_to_pfn(page_list[i]) &&
+			      len < (max_seg_sz >> PAGE_SHIFT);
 		     len++)
 			i++;

 		/* Squash N contiguous pages from page_list into current sge */
-		if (update_cur_sg &&
-		    ((max_seg_sz - sg->length) >= (len << PAGE_SHIFT))) {
-			sg_set_page(sg, sg_page(sg),
-				    sg->length + (len << PAGE_SHIFT), 0);
-			update_cur_sg = false;
-			continue;
+		if (update_cur_sg) {
+			if ((max_seg_sz - sg->length) >= (len << PAGE_SHIFT)) {
+				sg_set_page(sg, sg_page(sg),
+					    sg->length + (len << PAGE_SHIFT),
+					    0);
+				update_cur_sg = false;
+				continue;
+			}
+			update_cur_sg = false;
 		}

 		/* Squash N contiguous pages into next sge or first sge */
 		if (!first)
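
To trace the fixed control flow outside the kernel, here is a
self-contained user-space model of the loop in the hunk above. It is a
sketch only: struct seg and add_segments() are hypothetical stand-ins for
scatterlist and ib_umem_add_sg_table(), and plain pfn values replace
struct page.

 #include <stdio.h>
 #include <stdbool.h>

 #define PAGE_SHIFT	12		/* assumption: 4K pages */

 struct seg {				/* stand-in for one SGE */
 	unsigned long first_pfn;
 	unsigned long length;		/* bytes */
 };

 /*
  * Fold runs of contiguous pfns from page_list into segments, mirroring
  * the fixed logic: a run is capped at max_seg_sz >> PAGE_SHIFT pages, and
  * update_cur_sg is cleared after the first merge attempt whether or not
  * the merge into the current segment succeeded.
  */
 static int add_segments(const unsigned long *page_list, int npages,
 			unsigned long max_seg_sz, struct seg *out,
 			int nsegs, bool update_cur_sg)
 {
 	int i = 0;

 	while (i != npages) {
 		unsigned long first_pfn = page_list[i];
 		unsigned long len;

 		/* Count contiguous pfns, at most one full segment's worth */
 		for (len = 0; i != npages &&
 			      first_pfn + len == page_list[i] &&
 			      len < (max_seg_sz >> PAGE_SHIFT);
 		     len++)
 			i++;

 		if (update_cur_sg) {
 			/* As in the kernel, the caller sets update_cur_sg
 			 * only when this first run continues out[nsegs - 1]. */
 			struct seg *cur = &out[nsegs - 1];

 			if ((max_seg_sz - cur->length) >= (len << PAGE_SHIFT)) {
 				cur->length += len << PAGE_SHIFT;
 				update_cur_sg = false;
 				continue;
 			}
 			/* The fix: clear the flag even when the merge is
 			 * refused; otherwise the next (possibly
 			 * non-contiguous) run would be merged into the
 			 * fresh SGE "x" started below. */
 			update_cur_sg = false;
 		}

 		/* Open a new segment for this run */
 		out[nsegs].first_pfn = first_pfn;
 		out[nsegs].length = len << PAGE_SHIFT;
 		nsegs++;
 	}
 	return nsegs;
 }

 int main(void)
 {
 	unsigned long pfns[20];
 	struct seg segs[8];
 	int n, k;

 	for (k = 0; k < 20; k++)	/* 20 contiguous pages */
 		pfns[k] = 1000 + k;

 	/* With max_seg_sz = 64K (16 pages) the run must split 16 + 4 */
 	n = add_segments(pfns, 20, 64 * 1024, segs, 0, false);
 	for (k = 0; k < n; k++)
 		printf("seg %d: pfn %lu, %lu bytes\n",
 		       k, segs[k].first_pfn, segs[k].length);
 	return 0;
 }

Run as-is this splits 20 contiguous pages into a 64K and a 16K segment;
the key difference from the pre-fix code is the second
update_cur_sg = false, without which a later, possibly non-contiguous run
could still be folded into the SGE "x" described in the commit message.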