- 17 Sep, 2020 7 commits
-
-
Jason Gunthorpe authored
Two places were open coding this sequence, and also pull in cma_leave_roce_mc_group() which was called only once. Link: https://lore.kernel.org/r/20200902081122.745412-8-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
There is no kernel user of RDMA CM multicast so this code managing the multicast subscription of the kernel-only internal QP is dead. Remove it. This makes the bug fixes in the next patches much simpler. Link: https://lore.kernel.org/r/20200902081122.745412-7-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
These are the same thing, except that cma_ndev_work doesn't have a state transition. Signal no state transition by setting old_state and new_state == 0. In all cases the handler function should not be called once rdma_destroy_id() has progressed past setting the state. Link: https://lore.kernel.org/r/20200902081122.745412-6-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
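The unified work item then carries an optional transition; a sketch of the struct this implies (field names follow cma.c, with 0 meaning "no transition"):

    struct cma_work {
        struct work_struct      work;
        struct rdma_id_private  *id;
        enum rdma_cm_state      old_state;  /* 0 == no transition */
        enum rdma_cm_state      new_state;  /* 0 == no transition */
        struct rdma_cm_event    event;
    };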
-
Jason Gunthorpe authored
The only place that still uses it is rdma_join_multicast() which is only doing a sanity check that the caller hasn't done something wrong and doesn't need the spinlock. At least in the case of rdma_join_multicast() the information it needs will remain until the ID is destroyed once it enters these states. Similarly there is no reason to check for these specific states in the handler callback, instead use the usual check for a destroyed id under the handler_mutex. Link: https://lore.kernel.org/r/20200902081122.745412-5-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
There is a strange unlocked read of the ID state when checking for reuseaddr. This is because an ID cannot be reusable once it becomes a listening ID. Instead of using the state to exclude reuse, just clear it as part of rdma_listen()'s flow to convert reusable into not reusable. Once an ID goes to listen there is no way back out, and the only use of reusable is on the bind_list check. Finally, update the checks under handler_mutex to use READ_ONCE and audit that once RDMA_CM_LISTEN is observed in a req callback it is stable under the handler_mutex. Link: https://lore.kernel.org/r/20200902081122.745412-4-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
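The resulting pattern in the request handlers, as a minimal sketch (assuming the cma.c names listen_id->state and handler_mutex):

    mutex_lock(&listen_id->handler_mutex);
    if (READ_ONCE(listen_id->state) != RDMA_CM_LISTEN) {
        /* ID destroyed or never listening: drop the request */
        mutex_unlock(&listen_id->handler_mutex);
        return -ECONNABORTED;
    }
    /* RDMA_CM_LISTEN is stable from here: leaving it also takes handler_mutex */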
-
Jason Gunthorpe authored
Re-organize things so the state variable is not read unlocked. The first attempt to go directly from ADDR_BOUND immediately tells us if the ID is already bound, if we can't do that then the attempt inside rdma_bind_addr() to go from IDLE to ADDR_BOUND confirms the ID needs binding. Link: https://lore.kernel.org/r/20200902081122.745412-3-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
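Both attempts go through the locked compare-and-exchange primitive, roughly like this sketch modeled on cma.c's cma_comp_exch():

    static int cma_comp_exch(struct rdma_id_private *id_priv,
                             enum rdma_cm_state comp, enum rdma_cm_state exch)
    {
        unsigned long flags;
        int ret;

        spin_lock_irqsave(&id_priv->lock, flags);
        ret = (id_priv->state == comp);
        if (ret)
            id_priv->state = exch;  /* transition only from the expected state */
        spin_unlock_irqrestore(&id_priv->lock, flags);
        return ret;
    }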
-
Jason Gunthorpe authored
It is currently a bit confusing, but the design is if the handler_mutex is held, and the state is in RDMA_CM_CONNECT, then the state cannot leave RDMA_CM_CONNECT without also serializing with the handler_mutex. Make this clearer by adding a direct assertion, fixing the usage in rdma_connect and generally using READ_ONCE to read the state value. Link: https://lore.kernel.org/r/20200902081122.745412-2-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
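The assertion amounts to something like this sketch:

    /* Only meaningful while handler_mutex is held; RDMA_CM_CONNECT cannot
     * be left without also holding handler_mutex. */
    lockdep_assert_held(&id_priv->handler_mutex);
    WARN_ON(READ_ONCE(id_priv->state) != RDMA_CM_CONNECT);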
-
- 16 Sep, 2020 3 commits
-
-
Liu Shixin authored
Use DEFINE_SEQ_ATTRIBUTE macro to simplify the code. Link: https://lore.kernel.org/r/20200916025022.3992627-1-liushixin2@huawei.com Signed-off-by: Liu Shixin <liushixin2@huawei.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
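For reference, the macro folds the usual open()/file_operations boilerplate into one line; a sketch with hypothetical stats_* iterator callbacks (the macro derives stats_open and stats_fops from stats_sops):

    static const struct seq_operations stats_sops = {
        .start = stats_start,
        .next  = stats_next,
        .stop  = stats_stop,
        .show  = stats_show,
    };
    DEFINE_SEQ_ATTRIBUTE(stats);  /* generates stats_open() and stats_fops */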
-
Parav Pandit authored
hw_context stores a pci_device pointer. Use the correct type instead of void * and avoid typecasts. Link: https://lore.kernel.org/r/20200914123528.270382-1-parav@mellanox.com Signed-off-by: Parav Pandit <parav@nvidia.com> Acked-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
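The change boils down to this pattern (illustrative only):

    /* before: every user had to cast */
    void *hw_context;
    struct pci_dev *pdev = (struct pci_dev *)hw_context;

    /* after: the field carries its real type, no casts */
    struct pci_dev *hw_context;
    struct pci_dev *pdev = hw_context;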
-
Yuval Basson authored
Implement the XRC specific verbs. The additional QP type introduced new logic to the rest of the verbs that now require distinguishing whether a QP has an "RQ" or an "SQ" or both. Link: https://lore.kernel.org/r/20200722102339.30104-1-ybason@marvell.com Signed-off-by: Michal Kalderon <mkalderon@marvell.com> Signed-off-by: Yuval Basson <ybason@marvell.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
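A hedged sketch of the RQ/SQ distinction this adds (helper names hypothetical): an XRC target QP owns no SQ, and an XRC initiator or SRQ-attached QP owns no RQ:

    static bool qedr_qp_has_sq(struct qedr_qp *qp)
    {
        return qp->qp_type != IB_QPT_XRC_TGT;
    }

    static bool qedr_qp_has_rq(struct qedr_qp *qp)
    {
        return qp->qp_type != IB_QPT_XRC_INI && !qp->srq;
    }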
-
- 11 Sep, 2020 21 commits
-
-
Michal Kalderon authored
Alignment of parameters was off by one. Link: https://lore.kernel.org/r/20200902165741.8355-9-michal.kalderon@marvell.com Signed-off-by: Michal Kalderon <michal.kalderon@marvell.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Michal Kalderon authored
commit 59e8970b ("RDMA/qedr: Return max inline data in QP query result") changed query_qp max_inline size to return the max RoCE inline size. When iWARP was introduced, this should have been modified to return the max inline size based on protocol. This size is cached in the device attributes. Fixes: 69ad0e7f ("RDMA/qedr: Add support for iWARP in user space") Link: https://lore.kernel.org/r/20200902165741.8355-8-michal.kalderon@marvell.com Signed-off-by: Michal Kalderon <michal.kalderon@marvell.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Michal Kalderon authored
Currently iWARP does not support MTU change. Notify the user on MTU change that a reload of qedr is required for the new MTU to take effect, and display the correct active MTU. Fixes: f5b1b177 ("RDMA/qedr: Add iWARP support in existing verbs") Link: https://lore.kernel.org/r/20200902165741.8355-7-michal.kalderon@marvell.com Signed-off-by: Michal Kalderon <michal.kalderon@marvell.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Michal Kalderon authored
MTU change via ethtool is currently not supported for iWARP. Notify the qedr driver so that appropriate logging can take place. Link: https://lore.kernel.org/r/20200902165741.8355-6-michal.kalderon@marvell.com Signed-off-by: Michal Kalderon <michal.kalderon@marvell.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Michal Kalderon authored
In iWARP, accept could be called after a QP is already destroyed. In this case an error should be returned and not success. Fixes: 82af6d19 ("RDMA/qedr: Fix synchronization methods and memory leaks in qedr") Link: https://lore.kernel.org/r/20200902165741.8355-5-michal.kalderon@marvell.com Signed-off-by: Michal Kalderon <michal.kalderon@marvell.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Michal Kalderon authored
dev->attr.page_size_caps was used uninitialized when setting device attributes. Fixes: ec72fce4 ("qedr: Add support for RoCE HW init") Link: https://lore.kernel.org/r/20200902165741.8355-4-michal.kalderon@marvell.com Signed-off-by: Michal Kalderon <michal.kalderon@marvell.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Michal Kalderon authored
Change the doorbell setting so that the maximum of the last and current values is written. This is to avoid doorbells being lost. Fixes: a7efd777 ("qedr: Add support for PD,PKEY and CQ verbs") Link: https://lore.kernel.org/r/20200902165741.8355-3-michal.kalderon@marvell.com Signed-off-by: Michal Kalderon <michal.kalderon@marvell.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
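The idea as a small sketch (names hypothetical):

    /* Never ring the doorbell with a value older than one already written;
     * otherwise a racing, newer write could be overtaken and lost. */
    u32 val = max_t(u32, last_db_val, cur_val);
    writel(val, db_addr);
    last_db_val = val;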
-
Michal Kalderon authored
The qedr_qp structure wasn't freed when the protocol was RoCE. kmemleak output when running a basic RoCE scenario:

    unreferenced object 0xffff927ad7e22c00 (size 1024):
      comm "ib_send_bw", pid 7082, jiffies 4384133693 (age 274.698s)
      hex dump (first 32 bytes):
        00 b0 cd a2 79 92 ff ff 00 3f a1 a2 79 92 ff ff  ....y....?..y...
        00 ee 5c dd 80 92 ff ff 00 f6 5c dd 80 92 ff ff  ..\.......\.....
      backtrace:
        [<00000000b2ba0f35>] qedr_create_qp+0xb3/0x6c0 [qedr]
        [<00000000e85a43dd>] ib_uverbs_handler_UVERBS_METHOD_QP_CREATE+0x555/0xad0 [ib_uverbs]
        [<00000000fee4d029>] ib_uverbs_cmd_verbs+0xa5a/0xb80 [ib_uverbs]
        [<000000005d622660>] ib_uverbs_ioctl+0xa4/0x110 [ib_uverbs]
        [<00000000eb4cdc71>] ksys_ioctl+0x87/0xc0
        [<00000000abe6b23a>] __x64_sys_ioctl+0x16/0x20
        [<0000000046e7cef4>] do_syscall_64+0x4d/0x90
        [<00000000c6948f76>] entry_SYSCALL_64_after_hwframe+0x44/0xa9

Fixes: 1212767e ("qedr: Add wrapping generic structure for qpidr and adjust idr routines.") Link: https://lore.kernel.org/r/20200902165741.8355-2-michal.kalderon@marvell.com Signed-off-by: Michal Kalderon <michal.kalderon@marvell.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
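The shape of the fix, sketched: iWARP QPs are reference counted and freed when the last reference drops, while RoCE QPs must be freed directly in the destroy path:

    if (rdma_protocol_iwarp(&dev->ibdev, 1))
        qedr_iw_qp_rem_ref(&qp->ibqp);  /* freed when the last ref drops */
    else
        kfree(qp);                      /* RoCE: this free was missing */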
-
Bob Pearson authored
Add work completion opcodes to a new ib_uverbs_wc_opcode enum in ib_user_verbs.h. This plays the same role as ib_uverbs_wr_opcode, documenting the opcodes in the user space API. Assigned the IB_WC_XXX opcodes in ib_verbs.h to the IB_UVERBS_WC_XXX values where they are defined. This follows the same pattern as the IB_WR_XXX opcodes. This fixes an incorrect value for LSO that had crept in but is not currently being used. Added a missing IB_WR_BIND_MW opcode in ib_verbs.h. Link: https://lore.kernel.org/r/20200903224039.437391-2-rpearson@hpe.com Signed-off-by: Bob Pearson <rpearson@hpe.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
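The pattern, abridged: the uAPI enum carries the ABI values and the kernel enum aliases them (values as in ib_user_verbs.h):

    /* ib_user_verbs.h */
    enum ib_uverbs_wc_opcode {
        IB_UVERBS_WC_SEND       = 0,
        IB_UVERBS_WC_RDMA_WRITE = 1,
        IB_UVERBS_WC_RDMA_READ  = 2,
        /* ... */
    };

    /* ib_verbs.h: kernel names pinned to the uAPI values */
    enum ib_wc_opcode {
        IB_WC_SEND       = IB_UVERBS_WC_SEND,
        IB_WC_RDMA_WRITE = IB_UVERBS_WC_RDMA_WRITE,
        IB_WC_RDMA_READ  = IB_UVERBS_WC_RDMA_READ,
        /* ... */
    };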
-
Jason Gunthorpe authored
This is always the same value as the IOVA masked by the page size, just use that clearer calculation directly. It is unclear if the ocrdma hardware can actually support a true fbo; if so, it could use a different algorithm to compute the best page size. Link: https://lore.kernel.org/r/17-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
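That is, the replacement is the one-line mask:

    /* the first-byte offset inside the HW page is implied by the IOVA */
    u64 fbo = iova & (page_size - 1);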
-
Jason Gunthorpe authored
zbva is always false, so fbo is never read. A 'zero-based-virtual-address' is simply IOVA == 0, and the driver already supports this. Link: https://lore.kernel.org/r/16-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com Acked-by: Michal Kalderon <michal.kalderon@marvell.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
For the calls linked to mlx4_ib_umem_calc_optimal_mtt_size() use ib_umem_num_dma_blocks() inside the function; it is just some weird static default. All other places are just using it with PAGE_SIZE; switch to ib_umem_num_dma_blocks(). As this is the last call site, remove ib_umem_page_count(). Link: https://lore.kernel.org/r/15-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
This driver always uses PAGE_SIZE. Link: https://lore.kernel.org/r/14-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
This driver always uses a DMA array made up of PAGE_SIZE elements, so just use ib_umem_num_dma_blocks(). Since rdma_for_each_dma_block() always iterates exactly ib_umem_num_dma_blocks() there is no need for the early exit check in build_user_pbes(), delete it. Link: https://lore.kernel.org/r/13-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
mtr_umem_page_count() does the same thing, replace it with the core code. Also, ib_umem_find_best_pgsz() should always be called to check that the umem meets the page_size requirement. If there is a limited set of page_sizes that work, the pgsz_bitmap should be set to that set. 0 is a failure and the umem cannot be used. Lightly tidy the control flow to implement this flow properly. Link: https://lore.kernel.org/r/12-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com Acked-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
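The calling convention this enforces, sketched (the bitmap here is an illustrative set):

    /* only these HW page sizes work for this table format (example set) */
    unsigned long pgsz_bitmap = SZ_4K | SZ_64K | SZ_2M;
    unsigned long page_size;

    page_size = ib_umem_find_best_pgsz(umem, pgsz_bitmap, virt);
    if (!page_size)
        return -EINVAL;  /* 0 means the umem cannot meet the requirement */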
-
Jason Gunthorpe authored
ib_umem_page_count() returns the number of 4k entries required for a DMA map, but bnxt_re already computes a variable page size. The correct API to determine the size of the page table array is ib_umem_num_dma_blocks(). Fix the overallocation of the page array in fill_umem_pbl_tbl() when working with larger page sizes by using the right function. Lightly re-organize this function to make it clearer. Replace the other calls to ib_umem_num_pages(). Fixes: d8558251 ("RDMA/bnxt_re: Use core helpers to get aligned DMA address") Link: https://lore.kernel.org/r/11-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com Acked-by: Selvin Xavier <selvin.xavier@broadcom.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
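The sizing rule, against the core helpers (sketch):

    /* one slot per HW-sized DMA block, not per 4k entry */
    umem_pgs = ib_umem_num_dma_blocks(umem, page_size);
    pbl_tbl = kcalloc(umem_pgs, sizeof(u64), GFP_KERNEL);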
-
Jason Gunthorpe authored
The length of the list populated by qedr_populate_pbls() should be calculated using ib_umem_num_dma_blocks() with the same size/shift passed to qedr_populate_pbls(). Link: https://lore.kernel.org/r/10-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com Acked-by: Michal Kalderon <michal.kalderon@marvell.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
This loop is splitting the DMA SGL into pg_shift sized pages, use the core code for this directly. Link: https://lore.kernel.org/r/9-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com Acked-by: Michal Kalderon <michal.kalderon@marvell.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
If ib_umem_find_best_pgsz() returns > PAGE_SIZE then the equation here is not correct. 'start' should be 'virt'. Change it to use the core code for page_num and the canonical calculation of page_shift. Fixes: eb52c033 ("RDMA/i40iw: Use core helpers to get aligned DMA address within a supported page size") Link: https://lore.kernel.org/r/8-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
If ib_umem_find_best_pgsz() returns > PAGE_SIZE then the equation here is not correct. 'start' should be 'virt'. Change it to use the core code for page_num and the canonical calculation of page_shift. Fixes: 40ddb3f0 ("RDMA/efa: Use API to get contiguous memory blocks aligned to device supported page size") Link: https://lore.kernel.org/r/7-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com Tested-by: Gal Pressman <galpress@amazon.com> Acked-by: Gal Pressman <galpress@amazon.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
ib_umem_num_pages() should only be used by things working with the SGL in CPU pages directly. Drivers building DMA lists should use the new ib_umem_num_dma_blocks() which returns the number of blocks rdma_umem_for_each_dma_block() will return. To make this general for DMA drivers requires a different implementation. Computing DMA block count based on umem->address only works if the requested page size is < PAGE_SIZE and/or the IOVA == umem->address. Instead the number of DMA pages should be computed in the IOVA address space, not umem->address. Thus the IOVA has to be stored inside the umem so it can be used for these calculations. For now set it to umem->address by default and fix it up if ib_umem_find_best_pgsz() was called. This allows drivers to be converted to ib_umem_num_dma_blocks() safely. Link: https://lore.kernel.org/r/6-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
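A sketch of the IOVA-space computation this makes possible (mirrors the helper's arithmetic):

    static size_t num_dma_blocks(u64 iova, size_t length, unsigned long pgsz)
    {
        /* count blocks covered in IOVA space, not CPU-address space */
        return (ALIGN(iova + length, pgsz) - ALIGN_DOWN(iova, pgsz)) / pgsz;
    }
    /* e.g. iova 0xf000, length 0x2000: 2 blocks at pgsz 0x1000, and still
     * 2 blocks at pgsz 0x10000 because the range crosses the 0x10000 line */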
-
- 09 Sep, 2020 9 commits
-
-
Jason Gunthorpe authored
Generally drivers should be using this core helper to split up the umem into DMA pages. These drivers are all probably wrong in some way to pass PAGE_SIZE in as the HW page size. Either the driver doesn't support other page sizes and it should use 4096, or the driver does support other page sizes and should use ib_umem_find_best_pgsz() to select the best HW page size from the HW supported set. The only case where it could be correct is if the HW has a global setting for PAGE_SIZE set at driver initialization time. Link: https://lore.kernel.org/r/5-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
This helper does the same as rdma_for_each_block(), except it works on a umem. This simplifies most of the call sites. Link: https://lore.kernel.org/r/4-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com Acked-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com> Acked-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
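Typical call-site shape after conversion (sketch; pas_array is a hypothetical destination array of DMA addresses):

    struct ib_block_iter biter;
    u64 *pas = pas_array;

    rdma_umem_for_each_dma_block(umem, &biter, page_size)
        *pas++ = rdma_block_iter_dma_address(&biter);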
-
Jason Gunthorpe authored
The calculation in rdma_find_pg_bit() is fairly complicated, and the function is never called anywhere else. Inline a simpler version into ib_umem_find_best_pgsz(). Link: https://lore.kernel.org/r/3-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
rdma_for_each_block() makes assumptions about how the SGL is constructed that don't work if the block size is below the page size used to build the SGL. The rules for umem SGL construction require that the SGs all be PAGE_SIZE aligned and we don't encode the actual byte offset of the VA range inside the SGL using offset and length. So rdma_for_each_block() has no idea where the actual starting/ending point is to compute the first/last block boundary if the starting address should be within an SGL. Fixing the SGL construction turns out to be really hard, and will be the subject of other patches. For now block smaller pages. Fixes: 4a353399 ("RDMA/umem: Add API to find best driver supported page size in an MR") Link: https://lore.kernel.org/r/2-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com Reviewed-by: Leon Romanovsky <leonro@nvidia.com> Reviewed-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
It is possible for a single SGL to span an aligned boundary, e.g. if the SGL is 61440 -> 90112 then the length is 28672, which currently limits the block size to 32k. With a 32k page size the two covering blocks will be: 32768->65536 and 65536->98304. However, the correct answer is a 128K block size which will span the whole 28672 bytes in a single block. Instead of limiting based on length, figure out which high IOVA bits don't change between the start and end addresses. That is the highest useful page size. Fixes: 4a353399 ("RDMA/umem: Add API to find best driver supported page size in an MR") Link: https://lore.kernel.org/r/1-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com Reviewed-by: Leon Romanovsky <leonro@nvidia.com> Reviewed-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
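The bit trick, sketched with the commit's own numbers:

    /* the highest IOVA bit that differs across the range bounds the block size */
    u64 bits = iova ^ (iova + length - 1);
    unsigned long best_pgsz = 1UL << fls64(bits);
    /* iova = 61440 (0xf000), length = 28672: bits = 0xf000 ^ 0x15fff,
     * fls64(bits) == 17, so the best block size is 1 << 17 = 128K */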
-
Leon Romanovsky authored
Change counters to return failure like any other verbs destroy; however, this flow shouldn't return an error at all. Link: https://lore.kernel.org/r/20200907120921.476363-10-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
Make this interface symmetrical to other destroy paths. Fixes: a49b1dc7 ("RDMA: Convert destroy_wq to be void") Link: https://lore.kernel.org/r/20200907120921.476363-9-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
Update XRCD destroy flow to allow command failure. Fixes: 28ad5f65 ("RDMA: Move XRCD to be under ib_core responsibility") Link: https://lore.kernel.org/r/20200907120921.476363-8-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
Like any other verbs object, a CQ shouldn't fail during destroy, but mlx5_ib didn't follow this contract when IB verbs objects were mixed with DEVX. Such a mix leads to a situation where FW and kernel are fully interdependent on the reference counting of each side. Kernel verbs and drivers that don't have DEVX flows shouldn't fail. Fixes: e39afe3d ("RDMA: Convert CQ allocations to be under core responsibility") Link: https://lore.kernel.org/r/20200907120921.476363-7-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
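The resulting driver-interface contract, sketched:

    /* destroy_cq may now fail, but only for user objects mixed with DEVX;
     * kernel-owned CQs must still return 0 */
    int (*destroy_cq)(struct ib_cq *cq, struct ib_udata *udata);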
-