- 16 Jun, 2021 14 commits
-
-
Jason Gunthorpe authored
This is being used to implement both the port and device global stats, which is causing some confusion in the drivers. For instance EFA and i40iw both seem to be misusing the device stats. Split it into two ops so drivers that don't support one or the other can leave the op NULL'd, making the calling code a little simpler to understand.
Link: https://lore.kernel.org/r/1955c154197b2a159adc2dc97266ddc74afe420c.1623427137.git.leonro@nvidia.com
Tested-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
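A minimal sketch of what the split can look like from a driver's point of view; the op and callback names below are illustrative of the idea (one op for per-port stats, one for device-wide stats), not the exact fields added by this patch:

    #include <rdma/ib_verbs.h>

    /* hypothetical driver counters */
    static const char * const foo_stat_names[] = { "tx_pkts", "rx_pkts" };

    /* per-port stats callback; port_num type assumed */
    static struct rdma_hw_stats *foo_alloc_hw_port_stats(struct ib_device *ibdev,
                                                         u32 port_num)
    {
            return rdma_alloc_hw_stats_struct(foo_stat_names,
                                              ARRAY_SIZE(foo_stat_names),
                                              RDMA_HW_STATS_DEFAULT_LIFESPAN);
    }

    static const struct ib_device_ops foo_dev_ops = {
            .alloc_hw_port_stats = foo_alloc_hw_port_stats,
            /* device-wide stats op left NULL: the core simply skips it */
    };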
-
Bob Pearson authored
Check that an MR has no bound MWs before allowing a dereg or invalidate operation.
Link: https://lore.kernel.org/r/20210608042552.33275-11-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
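A hedged sketch of the check, assuming the MR carries a counter of currently bound MWs (the field name num_mw is an assumption, not necessarily the rxe code):

    /* refuse dereg/invalidate while memory windows are still bound */
    static int rxe_check_mr_has_no_mw(struct rxe_mr *mr)
    {
            if (atomic_read(&mr->num_mw) > 0) {
                    pr_warn_once("%s: MR still has bound MWs\n", __func__);
                    return -EINVAL;
            }
            return 0;
    }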
-
Bob Pearson authored
Add code to implement memory access through memory windows.
Link: https://lore.kernel.org/r/20210608042552.33275-10-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Bob Pearson authored
Implement invalidate MW and clean up the invalidate MR operation. Add code to perform remote invalidation for send with invalidate and code to perform local invalidation. Delete some blank lines in rxe_loc.h.
Link: https://lore.kernel.org/r/20210608042552.33275-9-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Bob Pearson authored
Add support for bind MW work requests from user space. Since rdma/core does not support bind mw in ib_send_wr there is no way to support bind mw in kernel space. Add a bind_mw local operation in rxe_req.c, the bind_mw WR operation in rxe_opcode.c and the bind_mw WC in rxe_comp.c. Add additional fields to rxe_mw in rxe_verbs.h. Add an rxe_do_dealloc_mw() subroutine to clean up an mw when rxe_dealloc_mw is called. Add code to implement the bind_mw operation in rxe_mw.c.
Link: https://lore.kernel.org/r/20210608042552.33275-8-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Bob Pearson authored
Simplify rxe_requester() by moving the local operations to a subroutine. Add an error return for an illegal send WR opcode. Move next_index ahead of rxe_run_task, which fixes a small bug where work completions were delayed until after the next wqe, which was not the intended behavior. Let errors return their own WC status; previously all errors were reported as protection errors, which was incorrect. Change the return of errors from rxe_do_local_ops() to err:, which causes an immediate completion; without this an error on a last WR may get lost. Change fill_packet() to finish_packet(), which is more accurate.
Fixes: 8700e2e7c485 ("The software RoCE driver")
Link: https://lore.kernel.org/r/20210608042552.33275-7-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Bob Pearson authored
Rxe has two mask bits WR_LOCAL_MASK and WR_REG_MASK with WR_REG_MASK used to indicate any local operation and WR_LOCAL_MASK unused. This patch replaces both of these with one mask bit WR_LOCAL_OP_MASK which is clearer.
Link: https://lore.kernel.org/r/20210608042552.33275-6-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Bob Pearson authored
Add the ib_alloc_mw and ib_dealloc_mw verbs APIs. Add a new file rxe_mw.c focused on MWs. Change the 8 bit random key generator. Add a cleanup routine for MWs and add the verbs routines to ib_device_ops.
Link: https://lore.kernel.org/r/20210608042552.33275-5-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
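A short sketch of how such verbs are typically exposed through ib_device_ops; the rxe handler names are assumptions for illustration:

    #include <rdma/ib_verbs.h>

    static const struct ib_device_ops rxe_mw_ops = {
            .alloc_mw   = rxe_alloc_mw,     /* backs ibv_alloc_mw() from user space */
            .dealloc_mw = rxe_dealloc_mw,   /* backs ibv_dealloc_mw() */
    };

    /* registered once on the device, e.g. during driver init: */
    /* ib_set_device_ops(ibdev, &rxe_mw_ops); */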
-
Bob Pearson authored
Currently the rxe driver has a rxe_mw struct object but nothing about memory windows is enabled. This patch turns on memory windows and does some minor cleanup. Set the device attribute in rxe.c so max_mw = MAX_MW. Change parameters in rxe_param.h so that MAX_MW is the same as MAX_MR, and reduce the number of MRs and MWs to 4K from 256K. Add device capability bits for type 2a and 2b memory windows. Remove RXE_MR_TYPE_MW from the rxe_mr_type enum.
Link: https://lore.kernel.org/r/20210608042552.33275-4-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
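An illustrative sketch of advertising memory window support in the device attributes; the helper name is made up, and MAX_MW stands in for the rxe_param.h constant mentioned above:

    #include <rdma/ib_verbs.h>

    static void rxe_set_mw_caps(struct ib_device_attr *attr)
    {
            /* advertise type 1 plus type 2a/2b memory windows */
            attr->device_cap_flags |= IB_DEVICE_MEM_WINDOW |
                                      IB_DEVICE_MEM_WINDOW_TYPE_2A |
                                      IB_DEVICE_MEM_WINDOW_TYPE_2B;
            attr->max_mw = MAX_MW;  /* rxe_param.h constant */
    }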
-
Bob Pearson authored
Modify rxe_add_index() and rxe_add_key() to return an error if the index or key is already present in the pool. Currently they print a warning and silently fail, with bad consequences for the caller.
Link: https://lore.kernel.org/r/20210608042552.33275-3-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
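A rough sketch of the intended behaviour (the insert helper and its return convention are assumptions, not the actual rxe_pool.c code): propagate the failure instead of warning and falling through.

    static int rxe_add_index(struct rxe_pool_entry *elem)
    {
            int err = insert_index(elem->pool, elem);   /* hypothetical helper */

            if (err) {
                    pr_warn("%s: index already in pool\n", __func__);
                    return err;     /* previously: warn and silently continue */
            }
            return 0;
    }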
-
Bob Pearson authored
Add fields to struct rxe_send_wr in rdma_user_rxe.h to support bind MW work requests.
Link: https://lore.kernel.org/r/20210608042552.33275-2-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
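A hypothetical illustration of the kind of information a bind MW work request needs to carry in the user ABI; these member names are assumptions, not the exact layout added to rdma_user_rxe.h:

    #include <linux/types.h>

    struct example_bind_mw_wr {
            __aligned_u64 addr;     /* start of the window inside the MR */
            __aligned_u64 length;   /* length of the window */
            __u32 mr_lkey;          /* MR the window is bound to */
            __u32 mw_rkey;          /* current rkey of the MW */
            __u32 rkey;             /* new rkey to assign on bind */
            __u32 access;           /* remote access rights for the window */
    };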
-
Bob Pearson authored
Currently the rdma_rxe driver attempts to protect atomic responder resources by taking a reference to the qp which is only freed when the resource is recycled for a new read or atomic operation. This means that in normal circumstances there is almost always an extra qp reference once an atomic operation has been executed which prevents cleaning up the qp and associated pd and cqs when the qp is destroyed. This patch removes the call to rxe_add_ref() in send_atomic_ack() and the call to rxe_drop_ref() in free_rd_atomic_resource(). If the qp is destroyed while a peer is retrying an atomic op it will cause the operation to fail which is acceptable.
Link: https://lore.kernel.org/r/20210604230558.4812-1-rpearsonhpe@gmail.com
Reported-by: Zhu Yanjun <zyjzyj2000@gmail.com>
Fixes: 86af6176 ("IB/rxe: remove unnecessary skb_clone")
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Xi Wang authored
All functions of HIP09's ROCEE share on-chip resources for all QPs, so the driver needs to configure the resource index and number for each function during the init stage.
Link: https://lore.kernel.org/r/1622541427-42193-1-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
The mlx5_ib_bind_slave_port() doesn't remove the multiport device from the unaffiliated list, but mlx5_ib_unbind_slave_port() does. This unbalanced flow led to a situation where mlx5_ib_unaffiliated_port_list was changed during iteration.
Fixes: 32f69e4b ("{net, IB}/mlx5: Manage port association for multiport RoCE")
Link: https://lore.kernel.org/r/2726e6603b1e6ecfe76aa5a12a063af72173bcf7.1622477058.git.leonro@nvidia.com
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 10 Jun, 2021 2 commits
-
-
Shiraz Saleem authored
The level1 PBL info address is stored as a u64. This requires casting through a uintptr_t before it is used as a pointer type, and leads to compiler warnings such as this when the uintptr_t cast is missing:

drivers/infiniband/hw/irdma/hw.c: In function 'irdma_destroy_virt_aeq':
drivers/infiniband/hw/irdma/hw.c:579:23: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
579 | dma_addr_t *pg_arr = (dma_addr_t *)aeq->palloc.level1.addr;

This can be fixed using an intermediate uintptr_t, but it is better to fix the structure irdma_pble_info to store the address as a u64 * and the VA it is assigned in irdma_chunk as a void *. This greatly reduces the casting on this address.
Fixes: 44d9e529 ("RDMA/irdma: Implement device initialization definitions")
Link: https://lore.kernel.org/r/20210609234924.938-1-shiraz.saleem@intel.com
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
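A simplified sketch of the two options, with made-up variable names; the point is that keeping the VA in a pointer-typed member removes the cast chain at every use site:

    /* VA kept as a plain u64: using it as a pointer needs an intermediate
     * uintptr_t to avoid -Wint-to-pointer-cast on 32-bit builds */
    u64 addr_as_u64 = chunk_vaddr;                      /* hypothetical value */
    dma_addr_t *pg_arr = (dma_addr_t *)(uintptr_t)addr_as_u64;

    /* VA kept with a pointer type in the structure: no cast needed */
    void *addr_as_ptr = chunk_va;                       /* hypothetical value */
    dma_addr_t *pg_arr2 = addr_as_ptr;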
-
Jason Gunthorpe authored
It turns out this is only being used to store the LID for SIDR mode to search the RB tree for request de-duplication. Store the LID value directly and don't pretend it is a GID.
Link: https://lore.kernel.org/r/2e7c87b6f662c90c642fc1838e363ad3e6ef14a4.1623236345.git.leonro@nvidia.com
Reviewed-by: Mark Zhang <markzhang@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 08 Jun, 2021 13 commits
-
-
Shiraz Saleem authored
Use list_last_entry and list_first_entry instead of using prev and next pointers.
Link: https://lore.kernel.org/r/20210608211415.680-1-shiraz.saleem@intel.com
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
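An illustrative sketch (element type and list name are made up, and the list is assumed non-empty): the helpers make the intent explicit and hide the container_of arithmetic.

    struct example_entry {
            struct list_head list;
            u32 id;
    };

    /* open-coded, via raw prev/next pointers: */
    struct example_entry *first = container_of(head.next, struct example_entry, list);
    struct example_entry *last  = container_of(head.prev, struct example_entry, list);

    /* with the list helpers: */
    first = list_first_entry(&head, struct example_entry, list);
    last  = list_last_entry(&head, struct example_entry, list);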
-
Baokun Li authored
Use list_move() instead of list_del() + list_add().
Link: https://lore.kernel.org/r/20210608031041.2820429-1-libaokun1@huawei.com
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Baokun Li <libaokun1@huawei.com>
Acked-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
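A minimal before/after sketch (variable names are illustrative); the two forms are equivalent, but list_move() expresses the intent in a single call:

    /* before: */
    list_del(&req->list);
    list_add(&req->list, &dst_list);

    /* after: */
    list_move(&req->list, &dst_list);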
-
Weihang Li authored
The refcount_t API will WARN on underflow and overflow of a reference counter, and avoid use-after-free risks.
Link: https://lore.kernel.org/r/1622194663-2383-8-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
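The same conversion pattern underlies this whole series of commits. A hedged sketch with made-up object and function names: an atomic_t used purely as a reference count becomes a refcount_t, which WARNs on overflow and on dropping below zero instead of silently wrapping.

    #include <linux/refcount.h>
    #include <linux/slab.h>

    struct foo_obj {
            refcount_t refcount;
    };

    static void foo_init(struct foo_obj *obj)
    {
            refcount_set(&obj->refcount, 1);        /* object starts life at 1 */
    }

    static void foo_get(struct foo_obj *obj)
    {
            refcount_inc(&obj->refcount);           /* WARNs if it was already 0 */
    }

    static void foo_put(struct foo_obj *obj)
    {
            if (refcount_dec_and_test(&obj->refcount))
                    kfree(obj);                     /* last reference dropped */
    }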
-
Weihang Li authored
The refcount_t API will WARN on underflow and overflow of a reference counter, and avoid use-after-free risks.
Link: https://lore.kernel.org/r/1622194663-2383-13-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Weihang Li authored
The refcount_t API will WARN on underflow and overflow of a reference counter, and avoid use-after-free risks.
Link: https://lore.kernel.org/r/1622194663-2383-12-git-send-email-liweihang@huawei.com
Cc: Potnuri Bharat Teja <bharat@chelsio.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Weihang Li authored
The refcount_t API will WARN on underflow and overflow of a reference counter, and avoid use-after-free risks.
Link: https://lore.kernel.org/r/1622194663-2383-11-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Weihang Li authored
The refcount_t API will WARN on underflow and overflow of a reference counter, and avoid use-after-free risks.
Link: https://lore.kernel.org/r/1622194663-2383-10-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Weihang Li authored
The refcount_t API will WARN on underflow and overflow of a reference counter, and avoid use-after-free risks.
Link: https://lore.kernel.org/r/1622194663-2383-9-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Weihang Li authored
The refcount_t API will WARN on underflow and overflow of a reference counter, and avoid use-after-free risks.
Link: https://lore.kernel.org/r/1622194663-2383-6-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Weihang Li authored
The refcount_t API will WARN on underflow and overflow of a reference counter, and avoid use-after-free risks.
Link: https://lore.kernel.org/r/1622194663-2383-5-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
The member is never used, delete it.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Weihang Li authored
The refcount_t API will WARN on underflow and overflow of a reference counter, and avoid use-after-free risks. Increasing a refcount_t from 0 to 1 is treated as a potential use-after-free, so the counter should be set to 1 directly during initialization.
Link: https://lore.kernel.org/r/1622194663-2383-3-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Weihang Li authored
The refcount_t API will WARN on underflow and overflow of a reference counter, and avoid use-after-free risks.
Link: https://lore.kernel.org/r/1622194663-2383-2-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 07 Jun, 2021 5 commits
-
-
Kamal Heib authored
There is a typo in the returned error code sign from irdma_modify_qp() when the attr_mask is not supported - Fix it.
Fixes: b48c24c2 ("RDMA/irdma: Implement device supported verb APIs")
Link: https://lore.kernel.org/r/20210607221543.254144-1-kamalheib1@gmail.com
Signed-off-by: Kamal Heib <kamalheib1@gmail.com>
Acked-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
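The general shape of this class of bug, with an assumed mask name and an illustrative errno (not the exact irdma code): a positive errno is returned where callers expect the kernel's negative-errno convention.

    if (attr_mask & ~IRDMA_MODIFY_QP_SUPPORTED_MASK)    /* mask name assumed */
            return EOPNOTSUPP;                          /* wrong: positive value */

    /* fixed: */
    if (attr_mask & ~IRDMA_MODIFY_QP_SUPPORTED_MASK)
            return -EOPNOTSUPP;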
-
Colin Ian King authored
There is a spelling mistake in a literal string. Fix it.
Link: https://lore.kernel.org/r/20210607113345.82206-1-colin.king@canonical.com
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Colin Ian King authored
The variable val is being initialized with a value that is never read; it is updated again later on. The assignment is redundant and can be removed.
Link: https://lore.kernel.org/r/20210605131347.26293-1-colin.king@canonical.com
Addresses-Coverity: ("Unused value")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Colin Ian King authored
A single statement is indented one level too deeply, clean up the code by removing the extraneous tab.
Link: https://lore.kernel.org/r/20210605130400.25987-1-colin.king@canonical.com
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Colin Ian King authored
The shifting of the u8 integer info->map[i] to the left will be promoted to a 32 bit signed int and then sign-extended to a u64. In the event that the top bit of the u8 is set, all the upper 32 bits of the u64 end up also being set because of the sign-extension. Fix this by casting the u8 values to a u64 before the left shift.
Link: https://lore.kernel.org/r/20210605122059.25105-1-colin.king@canonical.com
Addresses-Coverity: ("Unintentional integer overflow / bad shift operation")
Fixes: 3f49d684 ("RDMA/irdma: Implement HW Admin Queue OPs")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
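A self-contained user-space illustration of the sign-extension hazard described above (the values and shift amount are made up, not taken from the driver):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint8_t map = 0x80;             /* top bit set */
            uint64_t bad  = map << 24;      /* u8 -> signed int -> sign-extended */
            uint64_t good = (uint64_t)map << 24;

            printf("bad  = 0x%016llx\n", (unsigned long long)bad);
            /* typically prints 0xffffffff80000000 */
            printf("good = 0x%016llx\n", (unsigned long long)good);
            /* prints 0x0000000080000000 */
            return 0;
    }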
-
- 04 Jun, 2021 1 commit
-
-
Jiapeng Chong authored
The error code is missing in this code path, so 0 will be returned. Add the error code '-EINVAL' to the return value 'ret'. This eliminates the following smatch warning:

drivers/infiniband/hw/cxgb4/qp.c:298 create_qp() warn: missing error code 'ret'.

Link: https://lore.kernel.org/r/1622545669-20625-1-git-send-email-jiapeng.chong@linux.alibaba.com
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
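A minimal sketch of this bug class (all names assumed, not the cxgb4 code): the error path must set ret before jumping to cleanup, otherwise the function returns 0 on failure.

    static int create_thing(struct thing *t)
    {
            int ret = 0;

            t->buf = alloc_buf();
            if (!t->buf) {
                    ret = -EINVAL;  /* without this line, failure returned 0 */
                    goto err_free;
            }
            return 0;

    err_free:
            cleanup(t);
            return ret;
    }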
-
- 03 Jun, 2021 5 commits
-
-
Devesh Sharma authored
Updated the maintainers list and removed non-active members.
Link: https://lore.kernel.org/r/20210603131534.982257-3-devesh.sharma@broadcom.com
Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Devesh Sharma authored
Enable atomic operations for Gen P5 devices when the underlying platform supports global atomic ops.
Link: https://lore.kernel.org/r/20210603131534.982257-2-devesh.sharma@broadcom.com
Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Kamal Heib authored
To avoid the following failure when trying to load the rdma_rxe module while IPv6 is disabled, add a check for EAFNOSUPPORT and ignore that failure; also delete the needless debug print from rxe_setup_udp_tunnel().

$ modprobe rdma_rxe
modprobe: ERROR: could not insert 'rdma_rxe': Operation not permitted

Fixes: dfdd6158 ("IB/rxe: Fix kernel panic in udp_setup_tunnel")
Link: https://lore.kernel.org/r/20210603090112.36341-1-kamalheib1@gmail.com
Reported-by: Yi Zhang <yi.zhang@redhat.com>
Signed-off-by: Kamal Heib <kamalheib1@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
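A hedged sketch of the approach, not the exact rxe code (the helper name is hypothetical): an IPv6 tunnel socket failing with -EAFNOSUPPORT is treated as "IPv6 not available" instead of aborting module load.

    sock = setup_ipv6_tunnel();             /* hypothetical helper */
    if (IS_ERR(sock)) {
            if (PTR_ERR(sock) == -EAFNOSUPPORT)
                    sock = NULL;            /* IPv6 disabled: run IPv4-only */
            else
                    return PTR_ERR(sock);   /* any other error is still fatal */
    }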
-
Bob Pearson authored
In order to prevent user space from modifying the index that belongs to the kernel for shared queues, let the kernel use a local copy of the index and copy any new values of that index to the shared rxe_queue_buf struct. This adds more switch statements which decreases the performance of the queue API. Move the type into the parameter list for these functions so that the compiler can optimize out the switch statements when the explicit type is known. Modify all the calls in the driver on performance paths to pass in the explicit queue type.
Link: https://lore.kernel.org/r/20210527194748.662636-4-rpearsonhpe@gmail.com
Link: https://lore.kernel.org/linux-rdma/20210526165239.GP1002214@@nvidia.com/
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
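A rough sketch of the idea; the enum values, helper name, and struct fields below are assumptions for illustration. Because the queue type is a compile-time constant at most call sites, passing it explicitly lets the compiler fold away the type dispatch on the hot path.

    enum queue_type {
            QUEUE_TYPE_KERNEL,
            QUEUE_TYPE_TO_USER,
            QUEUE_TYPE_FROM_USER,
    };

    static inline u32 load_producer_index(struct rxe_queue *q, enum queue_type type)
    {
            if (type == QUEUE_TYPE_FROM_USER)
                    /* index owned by user space: read it from the shared buffer */
                    return smp_load_acquire(&q->buf->producer_index);

            /* kernel-owned queues use the kernel's local copy of the index */
            return q->producer_index;
    }

    /* caller passes the explicit, constant type on the hot path: */
    prod = load_producer_index(q, QUEUE_TYPE_KERNEL);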
-
Bob Pearson authored
Modify the queue APIs to protect all user space index loads with smp_load_acquire() and all user space index stores with smp_store_release(). Base this on the types of the queues which can be one of ..KERNEL, ..FROM_USER, ..TO_USER. Kernel space indices are protected by locks which also provide memory barriers.
Link: https://lore.kernel.org/r/20210527194748.662636-3-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
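A minimal sketch of the acquire/release pairing on a shared queue index; the field names are assumptions, only the barrier pattern is the point. The producer publishes the entry before the index with smp_store_release(), and the consumer reads the index with smp_load_acquire() before reading the entry, so the entry contents are guaranteed to be visible.

    /* producer side (e.g. kernel filling a ..TO_USER queue): */
    q->buf->data[prod] = wqe;                       /* write the entry first */
    smp_store_release(&q->buf->producer_index, (prod + 1) % q->num_elem);

    /* consumer side (index last written by user space on a ..FROM_USER queue): */
    cons = smp_load_acquire(&q->buf->consumer_index);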
-