- 04 Mar, 2022 11 commits
-
-
Wenpeng Liang authored
Abstract the alloc_cqc() into several parts and separate the process unrelated to allocating CQC.

Link: https://lore.kernel.org/r/20220302064830.61706-10-liangwenpeng@huawei.com
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Chengchang Tang authored
Abstract the alloc_srqc() into several parts and separate the alloc_srqn() from the alloc_srqc().

Link: https://lore.kernel.org/r/20220302064830.61706-9-liangwenpeng@huawei.com
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Wenpeng Liang authored
hns_roce_alloc_cmd_mailbox() never returns NULL, so the check should use IS_ERR(), and the encoded error should be propagated as the function's return value.

Link: https://lore.kernel.org/r/20220302064830.61706-8-liangwenpeng@huawei.com
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
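A minimal sketch of the corrected pattern; the enclosing function and the hr_dev variable are assumed context:

    struct hns_roce_cmd_mailbox *mailbox;

    mailbox = hns_roce_alloc_cmd_mailbox(hr_dev);
    if (IS_ERR(mailbox))                     /* never NULL, so check with IS_ERR() */
            return PTR_ERR(mailbox);         /* propagate the encoded error code */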
-
Chengchang Tang authored
Remove duplicate code for creating and destroying hardware contexts via mailbox.

Link: https://lore.kernel.org/r/20220302064830.61706-7-liangwenpeng@huawei.com
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Chengchang Tang authored
The current mailbox functions have too many parameters, making the code difficult to maintain. Construct a new structure, mbox_msg, to pass the information needed by the mailbox.

Link: https://lore.kernel.org/r/20220302064830.61706-6-liangwenpeng@huawei.com
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
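A sketch of what such a bundling structure might look like; the field names and widths below are illustrative, not taken from the patch:

    struct mbox_msg {
            u64 in_param;   /* input mailbox DMA address or immediate data */
            u64 out_param;  /* output mailbox DMA address, if any */
            u8 cmd;         /* mailbox opcode */
            u32 tag;        /* index of the object the command acts on */
            u16 token;      /* completion token for event-driven commands */
            u8 event_en;    /* wait for an event or poll for completion */
    };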
-
Wenpeng Liang authored
The "op" field of the mailbox occupies 8 bits, so the parameter "op" should be of type u8. Link: https://lore.kernel.org/r/20220302064830.61706-5-liangwenpeng@huawei.comSigned-off-by: Wenpeng Liang <liangwenpeng@huawei.com> Reviewed-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Wenpeng Liang authored
The parameter "out_param" of the mailbox is always null when the context is destroyed. So remove the function parameter "mailbox". Link: https://lore.kernel.org/r/20220302064830.61706-4-liangwenpeng@huawei.comSigned-off-by: Wenpeng Liang <liangwenpeng@huawei.com> Reviewed-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Chengchang Tang authored
The value of the function parameter "timeout" is always the same, so there is no need to specify it on every call. Remove it.

Link: https://lore.kernel.org/r/20220302064830.61706-3-liangwenpeng@huawei.com
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Haoyue Xu <xuhaoyue1@hisilicon.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Chengchang Tang authored
The parameter "op_modifier" is only used for HIP06. It is useless for HIP08 and later versions. After removing HIP06, this parameter is no longer used, so remove it. Link: https://lore.kernel.org/r/20220302064830.61706-2-liangwenpeng@huawei.comSigned-off-by: Chengchang Tang <tangchengchang@huawei.com> Signed-off-by: Haoyue Xu <xuhaoyue1@hisilicon.com> Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com> Reviewed-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Yajun Deng authored
ib_destroy_qp() is called by ib_create_qp_user() on error. It contains ib_qp_usecnt_dec(), but at that point ib_qp_usecnt_inc() has not been called yet. Move ib_qp_usecnt_inc() into create_qp() so the counter stays balanced.

Fixes: d2b10794 ("RDMA/core: Create clean QP creations interface for uverbs")
Link: https://lore.kernel.org/r/20220303024232.2847388-1-yajun.deng@linux.dev
Signed-off-by: Yajun Deng <yajun.deng@linux.dev>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
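A hedged sketch of the balanced pattern this establishes; surrounding error handling of the real functions is omitted:

    /* in create_qp(), once the QP has been created successfully */
    ib_qp_usecnt_inc(qp);

    /* ib_create_qp_user() error path */
    ib_destroy_qp(qp);      /* internally calls ib_qp_usecnt_dec(), now balanced */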
-
Mike Marciniszyn authored
The AIP code signals the phys_mtu in the following query_port() fragment:

    props->phys_mtu = HFI1_CAP_IS_KSET(AIP) ? hfi1_max_mtu :
                      ib_mtu_enum_to_int(props->max_mtu);

Using the largest MTU possible should not depend on AIP. Fix by unconditionally using the hfi1_max_mtu value.

Fixes: 6d72344c ("IB/ipoib: Increase ipoib Datagram mode MTU's upper limit")
Link: https://lore.kernel.org/r/1644348309-174874-1-git-send-email-mike.marciniszyn@cornelisnetworks.com
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@cornelisnetworks.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
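Per the description above, the resulting assignment is simply unconditional (a sketch of the fix, not the exact hunk):

    props->phys_mtu = hfi1_max_mtu;   /* always report the largest MTU, regardless of AIP */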
-
- 28 Feb, 2022 5 commits
-
-
Yajun Deng authored
rdma_zalloc_drv_obj() in __ib_alloc_pd() already zeroes the pd, so there is no need to explicitly set members of the pd object to NULL. And uverbs_free_pd() already returns busy when pd->usecnt is set, so there is no need to add a warning.

Link: https://lore.kernel.org/r/20220223074901.201506-1-yajun.deng@linux.dev
Signed-off-by: Yajun Deng <yajun.deng@linux.dev>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Mustafa Ismail authored
The PD id is masked with 0x7fff, while the PD id can be 18 bits wide on GEN2 HW. Remove the masking, as it should not be needed and can cause an incorrect PD id to be used.

Fixes: b48c24c2 ("RDMA/irdma: Implement device supported verb APIs")
Link: https://lore.kernel.org/r/20220225163211.127-4-shiraz.saleem@intel.com
Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
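A sketch of the kind of change described, with hypothetical variable names; the point is that a 15-bit mask truncates an 18-bit GEN2 PD id:

    /* before: 0x7fff keeps only 15 bits, corrupting 18-bit GEN2 PD ids */
    ctx_pd_id = pd_id & 0x7fff;

    /* after: use the PD id as-is */
    ctx_pd_id = pd_id;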
-
Mustafa Ismail authored
Using the PCI_FUNC macro in a VM when the device is in passthrough mode does not provide the real function instance. This means that currently, devices will not probe unless the instance in the VM matches the instance on the host. Fix this by getting the pf_id from the LAN during the probe.

Fixes: 8498a30e ("RDMA/irdma: Register auxiliary driver and implement private channel OPs")
Link: https://lore.kernel.org/r/20220225163211.127-3-shiraz.saleem@intel.com
Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Mustafa Ismail authored
Currently, events on vlan netdevs are ignored. Fix this by finding the real netdev so that notifications for vlan netdevs are processed as well.

Fixes: 915cc7ac ("RDMA/irdma: Add miscellaneous utility definitions")
Link: https://lore.kernel.org/r/20220225163211.127-2-shiraz.saleem@intel.com
Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
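One common way RDMA drivers map a vlan netdev back to its real device is rdma_vlan_dev_real_dev(); a hedged sketch of the idea, with a made-up helper name rather than the exact code from the patch:

    #include <rdma/ib_addr.h>

    static struct net_device *irdma_get_real_netdev(struct net_device *netdev)
    {
            /* vlan notifications arrive on the vlan device; non-vlan
             * devices simply map to themselves */
            struct net_device *real_dev = rdma_vlan_dev_real_dev(netdev);

            return real_dev ? real_dev : netdev;
    }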
-
Zhu Yanjun authored
The function irdma_create_mg_ctx always returns 0, so make it void and delete the return value check.

Link: https://lore.kernel.org/r/20220224182832.3896686-1-yanjun.zhu@linux.dev
Signed-off-by: Zhu Yanjun <yanjun.zhu@linux.dev>
Acked-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 24 Feb, 2022 7 commits
-
-
Zhu Yanjun authored
The union irdma_sockaddr is used frequently. So move it to the header file.

Link: https://lore.kernel.org/r/20220223024252.3873736-4-yanjun.zhu@linux.dev
Signed-off-by: Zhu Yanjun <yanjun.zhu@linux.dev>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
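The union bundles the IPv4 and IPv6 address forms; a sketch of its likely shape (the member names here are assumptions):

    union irdma_sockaddr {
            struct sockaddr_in saddr_in;    /* IPv4 */
            struct sockaddr_in6 saddr_in6;  /* IPv6 */
    };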
-
Zhu Yanjun authored
Originally the variable saddr was used to check the type of the network. Now the variable net_type does the same work, so saddr is removed.

Link: https://lore.kernel.org/r/20220223024252.3873736-3-yanjun.zhu@linux.dev
Signed-off-by: Zhu Yanjun <yanjun.zhu@linux.dev>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Zhu Yanjun authored
The member variable net_type is used to check the type of the network.

Link: https://lore.kernel.org/r/20220223024252.3873736-2-yanjun.zhu@linux.dev
Signed-off-by: Zhu Yanjun <yanjun.zhu@linux.dev>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Bob Pearson authored
Finish adding comment headers to the subroutines in rxe_mcast.c and make minor API cleanups.

Link: https://lore.kernel.org/r/20220223230706.50332-5-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Bob Pearson authored
Collect the cleanup code for struct rxe_mca into a subroutine, __rxe_cleanup_mca(), called from rxe_detach_mcg() in rxe_mcast.c.

Link: https://lore.kernel.org/r/20220223230706.50332-4-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Bob Pearson authored
Collect the initialization code for struct rxe_mca into a subroutine, __rxe_init_mca(), to clean up rxe_attach_mcg() in rxe_mcast.c. Check the limit on the total number of attached QPs.

Link: https://lore.kernel.org/r/20220223230706.50332-3-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Bob Pearson authored
Print a warning if memory allocated by mcast is not cleared when the rxe driver is unloaded.

Link: https://lore.kernel.org/r/20220223230706.50332-2-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 23 Feb, 2022 9 commits
-
-
Shiraz Saleem authored
As irdma_status_code is replaced with an int, there is no need for two variables to hold error codes. Remove the excess variable in functions where this occurs. Also, remove any redundant initializations which are no longer needed.

Link: https://lore.kernel.org/r/20220217151851.1518-4-shiraz.saleem@intel.com
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Shiraz Saleem authored
All functions now return Linux error codes. Propagate the return from these functions as opposed to converting them to generic values.

Link: https://lore.kernel.org/r/20220217151851.1518-3-shiraz.saleem@intel.com
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
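A before/after sketch of the propagation pattern, using a hypothetical callee name:

    int err;

    /* before: the callee's error code was flattened into a generic value */
    if (irdma_setup_foo(dev))
            return -ENOMEM;

    /* after: propagate the error the callee already returns */
    err = irdma_setup_foo(dev);
    if (err)
            return err;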
-
Shiraz Saleem authored
Replace use of the custom irdma_status_code with Linux error codes. Remove enum irdma_status_code and the header in which it is defined.

Link: https://lore.kernel.org/r/20220217151851.1518-2-shiraz.saleem@intel.com
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Bart Van Assche authored
Make it more clear what the different ib_srp data structures represent.

Link: https://lore.kernel.org/r/20220215210511.28303-2-bvanassche@acm.org
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Aharon Landau authored
The mkc is the key for the mkey cache and is therefore created on every attempt to get a cache mkey. pcie_relaxed_ordering_enabled() is called while the mkc is being set up, but its result is only needed when IB_ACCESS_RELAXED_ORDERING is set, and it is an expensive call (26 us). Reorder the code so the driver calls it only when it is needed.

Link: https://lore.kernel.org/r/684be1366cb1d4f05aa3e78986205e4bc410443a.1644947594.git.leonro@nvidia.com
Signed-off-by: Aharon Landau <aharonl@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
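A hedged sketch of the reordering: the ~26 us PCIe query is only paid when relaxed ordering was actually requested. The capability and mkc field names shown are the usual mlx5 relaxed-ordering bits, included for illustration rather than copied from the patch:

    if (acc & IB_ACCESS_RELAXED_ORDERING) {
            /* expensive PCIe config-space query, now done only when needed */
            bool ro_pci = pcie_relaxed_ordering_enabled(dev->mdev->pdev);

            if (ro_pci && MLX5_CAP_GEN(dev->mdev, relaxed_ordering_write))
                    MLX5_SET(mkc, mkc, relaxed_ordering_write, 1);
            if (ro_pci && MLX5_CAP_GEN(dev->mdev, relaxed_ordering_read))
                    MLX5_SET(mkc, mkc, relaxed_ordering_read, 1);
    }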
-
Aharon Landau authored
Currently, ent->xlt stores the translation table size. This data should not be stored in the cache entry but should be written directly to the mailbox. Store ndescs instead, and deduce the translation table size from it according to the access mode.

Link: https://lore.kernel.org/r/e9dbfaa1f279793a6bd28ee5a31cb4f0f0d70f05.1644947594.git.leonro@nvidia.com
Signed-off-by: Aharon Landau <aharonl@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Aharon Landau authored
When allocating an MR from the cache, the driver calls get_cache_mr() and, in case of failure, retries with create_cache_mr(). This is the flow of mlx5_mr_cache_alloc(), so use it instead.

Link: https://lore.kernel.org/r/53c85fcd4de6ec9de0b8e6cbb1bf5d5fe19900c3.1644947594.git.leonro@nvidia.com
Signed-off-by: Aharon Landau <aharonl@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Aharon Landau authored
When an ODP MR cache entry is empty and an allocation from it is attempted, increment the ent->miss counter and call queue_adjust_cache_locked() to verify the entry is balanced.

Fixes: aad719dc ("RDMA/mlx5: Allow MRs to be created in the cache synchronously")
Link: https://lore.kernel.org/r/09503e295276dcacc92cb1d8aef1ad0961c99dc1.1644947594.git.leonro@nvidia.com
Signed-off-by: Aharon Landau <aharonl@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Aharon Landau authored
delayed_cache_work_func() and cache_work_func() are both wrappers around __cache_work_func(). Instead of keeping a special non-delayed work item, use the delayed work with a delay of 0.

Link: https://lore.kernel.org/r/18b6ae205e75f087aa4a2a05c81ea8b66d8d88dc.1644947594.git.leonro@nvidia.com
Signed-off-by: Aharon Landau <aharonl@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
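The workqueue API makes this natural: queueing a delayed_work with a delay of 0 runs it as soon as possible. A generic sketch with illustrative names, not the actual mlx5 identifiers:

    /* a single delayed_work replaces the separate "immediate" work item */
    INIT_DELAYED_WORK(&ent->dwork, cache_work_wrapper);

    queue_delayed_work(cache_wq, &ent->dwork, 0);                      /* run now */
    queue_delayed_work(cache_wq, &ent->dwork, msecs_to_jiffies(300));  /* run later */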
-
- 17 Feb, 2022 1 commit
-
-
Håkon Bugge authored
XRC INI QPs should be able to adjust their local ACK timeout.

Fixes: 2c1619ed ("IB/cma: Define option to set ack timeout and pack tos_set")
Link: https://lore.kernel.org/r/1644421175-31943-1-git-send-email-haakon.bugge@oracle.com
Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>
Suggested-by: Avneesh Pant <avneesh.pant@oracle.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
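The knob involved is rdma_set_ack_timeout() on the RDMA CM id; a usage sketch, where the cm_id and the timeout value are illustrative:

    /* local ACK timeout is 4.096 us * 2^value; 14 gives roughly 67 ms,
     * now also honoured for XRC INI QPs */
    ret = rdma_set_ack_timeout(cm_id, 14);
    if (ret)
            pr_warn("rdma_set_ack_timeout failed: %d\n", ret);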
-
- 16 Feb, 2022 7 commits
-
-
Bob Pearson authored
Finish removing mcg from rxe pools. Replace the rxe pool reference counting with krefs, and replace rxe_alloc with kzalloc.

Link: https://lore.kernel.org/r/20220208211644.123457-8-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Bob Pearson authored
Now that rxe_mcast.c has its own red-black tree support, there is no longer any requirement for keyed objects in rxe pools. This patch removes the key APIs and related code.

Link: https://lore.kernel.org/r/20220208211644.123457-7-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Bob Pearson authored
Continue decoupling mcg from rxe pools. Create red-black tree code in rxe_mcast.c to hold the mcg index, and replace the pool key calls with calls to the local red-black tree routines.

Link: https://lore.kernel.org/r/20220208211644.123457-6-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
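A sketch of the sort of red-black tree lookup this introduces, keyed by the multicast GID; the struct layout and field names are assumptions, not the patch itself:

    static struct rxe_mcg *__rxe_lookup_mcg(struct rxe_dev *rxe,
                                            union ib_gid *mgid)
    {
            struct rb_node *node = rxe->mcg_tree.rb_node;

            while (node) {
                    struct rxe_mcg *mcg = rb_entry(node, struct rxe_mcg, node);
                    int cmp = memcmp(&mcg->mgid, mgid, sizeof(*mgid));

                    if (cmp > 0)
                            node = node->rb_left;
                    else if (cmp < 0)
                            node = node->rb_right;
                    else
                            return mcg;     /* found */
            }

            return NULL;
    }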
-
Bob Pearson authored
Replace int num_qp in struct rxe_mcg by atomic_t qp_num.

Link: https://lore.kernel.org/r/20220208211644.123457-5-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Bob Pearson authored
Replace 'grp' with 'mcg' and 'mce' with 'mca', and shorten the subroutine names in rxe_mcast.c. These names are more in line with the other object names used.

Link: https://lore.kernel.org/r/20220208211644.123457-4-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Bob Pearson authored
Remove rxe_mca (formerly rxe_mc_elem) from rxe pools and use kzalloc and kfree to allocate and free it in rxe_mcast.c. Call kzalloc outside of spinlocks to avoid having to use GFP_ATOMIC.

Link: https://lore.kernel.org/r/20220208211644.123457-3-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
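A hedged sketch of the allocate-before-lock pattern described; the lock flavour and helper names are illustrative:

    struct rxe_mca *mca;

    mca = kzalloc(sizeof(*mca), GFP_KERNEL);   /* sleeping alloc, no lock held */
    if (!mca)
            return -ENOMEM;

    spin_lock_bh(&rxe->mcg_lock);
    if (__rxe_already_attached(mcg, qp)) {     /* hypothetical re-check under the lock */
            spin_unlock_bh(&rxe->mcg_lock);
            kfree(mca);                        /* lost the race; drop the spare */
            return 0;
    }
    /* ... link mca into the mcg's list ... */
    spin_unlock_bh(&rxe->mcg_lock);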
-
Bob Pearson authored
Replace mcg->mcg_lock and mc_grp_pool->pool_lock with rxe->mcg_lock. This is the first of several steps intended to decouple the mc_grp and mc_elem objects from the rxe pool code.

Link: https://lore.kernel.org/r/20220208211644.123457-2-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-