- 22 Jun, 2021 26 commits
-
-
Bob Pearson authored
For IPv4 packets sent on the wire the rxe driver calls ip_local_out(), which immediately calls __ip_local_out(), which sets iph->tot_len and calls ip_send_check(). This code is duplicated in prepare4(). On the loopback path the IP header checksum and tot_len fields are not used, so they do not need to be set. Remove this redundant code. Fixes: 8700e3e7 ("Soft RoCE driver") Link: https://lore.kernel.org/r/20210618045742.204195-3-rpearsonhpe@gmail.com Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
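For reference, __ip_local_out() in net/ipv4/ip_output.c already performs roughly the following on the wire path (a trimmed sketch, not the full function), which is why repeating it in prepare4() was redundant:

    int __ip_local_out(struct net *net, struct sock *sk, struct sk_buff *skb)
    {
            struct iphdr *iph = ip_hdr(skb);

            /* the core stack fills in the length and header checksum */
            iph->tot_len = htons(skb->len);
            ip_send_check(iph);
            ...
    }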
-
Bob Pearson authored
In send_atomic_ack() in rxe_resp.c there is code copying ack_pkt into the skb->cb[]. This doesn't do anything useful because the cb[] is not used in the transmit path by the rxe driver. Remove this code. Fixes: 4c93496f ("IB/rxe: do not copy extra stack memory to skb") Link: https://lore.kernel.org/r/20210618045742.204195-2-rpearsonhpe@gmail.com Signed-off-by: Bob Pearson <rpearson@hpe.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Lang Cheng authored
ULPs can get more error information about the CQ through verbs instead of relying on driver prints. Link: https://lore.kernel.org/r/1624362836-11631-1-git-send-email-liweihang@huawei.com Signed-off-by: Lang Cheng <chenglang@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Shiraz Saleem authored
iwmr->page_size stores the return value of ib_umem_find_best_pgsz() and may be zero when used in ib_umem_num_dma_blocks(), causing a divide-by-zero error. Fix this by erroring out of irdma_reg_user when 0 is returned from ib_umem_find_best_pgsz(). Link: https://lore.kernel.org/r/20210622175232.439-3-tatyana.e.nikolova@intel.com Reported-by: coverity-bot <keescook+coverity-bot@chromium.org> Addresses-Coverity-ID: 1505149 ("Integer handling issues") Fixes: b48c24c2 ("RDMA/irdma: Implement device supported verb APIs") Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
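A minimal sketch of the guard; pgsz_bitmap, err, and the error label are placeholders, not the exact driver code:

    iwmr->page_size = ib_umem_find_best_pgsz(region, pgsz_bitmap, virt);
    if (unlikely(!iwmr->page_size)) {
            /* no supported page size fits this umem: bail out here instead of
             * letting ib_umem_num_dma_blocks() divide by zero later */
            err = -EOPNOTSUPP;
            goto error;
    }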
-
Lang Cheng authored
'orignal' should be 'original'. Fixes: 9a443537 ("IB/hns: Add driver files for hns RoCE driver") Link: https://lore.kernel.org/r/1624011020-16992-11-git-send-email-liweihang@huawei.com Signed-off-by: Lang Cheng <chenglang@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Yixing Liu authored
The QP type has already been checked in check_send_valid(); if it is not RC, the UD/GSI branch will be taken. Link: https://lore.kernel.org/r/1624011020-16992-10-git-send-email-liweihang@huawei.com Signed-off-by: Yixing Liu <liuyixing1@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Wenpeng Liang authored
The process of flushing CQEs can be encapsulated into a function to reduce duplicate code. Link: https://lore.kernel.org/r/1624011020-16992-9-git-send-email-liweihang@huawei.com Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Yangyang Li authored
hns_roce_init_qp_table() can only return 0; since the return value is never meaningful, change the function to return void. Link: https://lore.kernel.org/r/1624011020-16992-8-git-send-email-liweihang@huawei.com Signed-off-by: Yangyang Li <liyangyang20@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
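The change is just the prototype (sketch; the body is unchanged apart from dropping the return statement):

    /* before: always returned 0 */
    int hns_roce_init_qp_table(struct hns_roce_dev *hr_dev);

    /* after: callers no longer check a value that can only be 0 */
    void hns_roce_init_qp_table(struct hns_roce_dev *hr_dev);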
-
Xi Wang authored
Remove unused members in EQ context structure. Fixes: 782832f2 ("RDMA/hns: Simplify the function config_eqc()") Link: https://lore.kernel.org/r/1624011020-16992-7-git-send-email-liweihang@huawei.com Signed-off-by: Xi Wang <wangxi11@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Yangyang Li authored
When query_qp is called from userspace, max_send_wr and max_send_sge are set to 0 by the kernel driver. However, userspace does not use these two return values from the kernel driver, but uses its own calculated values, so there is no need for special treatment. Fixes: 926a01dc ("RDMA/hns: Add QP operations support for hip08 SoC") Link: https://lore.kernel.org/r/1624011020-16992-6-git-send-email-liweihang@huawei.com Signed-off-by: Yangyang Li <liyangyang20@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Yangyang Li authored
Some kernel ULPs rely on the values returned in qp_init_attr, so fill in its members. Fixes: 926a01dc ("RDMA/hns: Add QP operations support for hip08 SoC") Link: https://lore.kernel.org/r/1624011020-16992-5-git-send-email-liweihang@huawei.com Signed-off-by: Yangyang Li <liyangyang20@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Yixing Liu authored
Remove redundant print and fix a character type mismatch. Fixes: 0e0ab04b ("RDMA/hns: Refactor the MTR creation flow") Link: https://lore.kernel.org/r/1624011020-16992-4-git-send-email-liweihang@huawei.com Signed-off-by: Yixing Liu <liuyixing1@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Yixing Liu authored
A random value will be returned if the checked condition is not met, so the return variable needs to be initialized. Fixes: 9ea9a53e ("RDMA/hns: Add mapped page count checking for MTR") Link: https://lore.kernel.org/r/1624011020-16992-3-git-send-email-liweihang@huawei.com Signed-off-by: Yixing Liu <liuyixing1@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Lang Cheng authored
When a non-inline WR reuses a WQE that was last used for an inline WR, the stale inline flag should be cleared. Fixes: 62490fd5 ("RDMA/hns: Avoid unnecessary memset on WQEs in post_send") Link: https://lore.kernel.org/r/1624011020-16992-2-git-send-email-liweihang@huawei.com Signed-off-by: Lang Cheng <chenglang@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
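An illustrative shape of the fix; the flag helpers are placeholders for the actual hns_roce WQE bit manipulation:

    if (wr->send_flags & IB_SEND_INLINE) {
            set_wqe_inline_flag(wqe);       /* placeholder helper */
    } else {
            /* the WQE memory is reused without a memset, so clear the
             * stale inline bit left over from a previous inline WR */
            clear_wqe_inline_flag(wqe);     /* placeholder helper */
    }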
-
Jason Gunthorpe authored
Aharon Landau says: ==================== If the device supports only a real-time timestamp, the kernel will fail to create the QP even though rdma-core requested that timestamp type. This is because the device returns a free-running timestamp and the conversion from free-running to real-time is performed in user space. This series fixes it by returning the real-time timestamp directly from the device. ==================== * mlx5_realtime_ts: RDMA/mlx5: Support real-time timestamp directly from the device RDMA/mlx5: Refactor get_ts_format functions to simplify code Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
Linux 5.13-rc7 Needed for dependencies in following patches. Merge conflict in rxe_comp.c resolved by combining both patches. Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Aharon Landau authored
Currently, if the user asks for a real-time timestamp, the device will return a free-running one, and the timestamp will be translated to real-time in user space. When the device supports only the real-time timestamp and not the free-running one, creation of the QP will fail even though the user requested the supported real-time type. To prevent this, return the real-time timestamp directly from the device. Link: https://lore.kernel.org/r/c6cfc8e6f038575c5c2de6505830f7e74e4de80d.1623829775.git.leonro@nvidia.com Signed-off-by: Aharon Landau <aharonl@nvidia.com> Reviewed-by: Maor Gottlieb <maorg@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Kees Cook authored
In preparation for FORTIFY_SOURCE performing compile-time and run-time field bounds checking for memcpy(), memmove(), and memset(), avoid intentionally reading across neighboring array fields. Without a flexible array, this looks like an attempt to perform a memcpy() read beyond the end of the packet->mad.data array: drivers/infiniband/core/user_mad.c: memcpy(packet->msg->mad, packet->mad.data, IB_MGMT_MAD_HDR); Switch from [0] to [] to use the appropriately handled type for trailing bytes. Link: https://lore.kernel.org/r/20210616202615.1247242-1-keescook@chromium.org Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
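The shape of the change, sketched on the uapi struct (from memory, so treat the field list as approximate):

    struct ib_user_mad {
            struct ib_user_mad_hdr hdr;
            __aligned_u64 data[];   /* was data[0]; a flexible array member tells
                                     * the compiler the trailing MAD bytes are an
                                     * intentionally variable-length region */
    };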
-
Aharon Landau authored
QPC, SQC and RQC timestamp formats and capabilities are always equal because they represent general hardware support, so instead of duplicating code, merge them into a general enum and shared logic. Signed-off-by: Aharon Landau <aharonl@nvidia.com> Reviewed-by: Maor Gottlieb <maorg@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
-
Xiao Yang authored
rxe_mr_init_user() always returns a fixed -EINVAL when ib_umem_get() fails, so it's hard for the user to know which actual error happened in ib_umem_get(). For example, ib_umem_get() will return -EOPNOTSUPP when trying to pin pages on a DAX file. Return the actual error as mlx4/mlx5 do. Link: https://lore.kernel.org/r/20210621071456.4259-1-ice_yangxiao@163.com Signed-off-by: Xiao Yang <yangx.jy@fujitsu.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
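Roughly, the error path changes from returning a hard-coded -EINVAL to propagating PTR_ERR() (a sketch; the exact arguments and label are illustrative):

    umem = ib_umem_get(ibpd->device, start, length, access);
    if (IS_ERR(umem)) {
            pr_warn("%s: Unable to pin memory region err = %d\n",
                    __func__, (int)PTR_ERR(umem));
            err = PTR_ERR(umem);    /* e.g. -EOPNOTSUPP for a DAX file, not -EINVAL */
            goto err_out;
    }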
-
Kees Cook authored
In preparation for FORTIFY_SOURCE performing compile-time and run-time field bounds checking for memcpy(), memmove(), and memset(), avoid intentionally writing across neighboring array fields. Use the ether_addr_copy() helper instead, as already done for smac. Link: https://lore.kernel.org/r/20210616203744.1248551-1-keescook@chromium.org Signed-off-by: Kees Cook <keescook@chromium.org> Reviewed-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
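ether_addr_copy() is the dedicated 6-byte MAC copy helper from <linux/etherdevice.h>; a minimal usage sketch with illustrative variable names:

    #include <linux/etherdevice.h>

    /* before: memcpy(dest_mac, src_mac, ETH_ALEN) can look to FORTIFY like a
     * write across neighbouring struct fields */
    ether_addr_copy(dest_mac, src_mac);   /* copies exactly ETH_ALEN (6) bytes */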
-
Jack Wang authored
With fast memory registration on write requests, rnbd-clt can do bigger IO without splitting. rnbd-clt can now query rtrs-clt for max_segments instead of using BMAX_SEGMENTS. BMAX_SEGMENTS is no longer needed, so remove it. Link: https://lore.kernel.org/r/20210621055340.11789-6-jinpu.wang@ionos.com Cc: Jens Axboe <axboe@kernel.dk> Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com> Reviewed-by: Md Haris Iqbal <haris.iqbal@cloud.ionos.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jack Wang authored
As we can do fast memory registration on write, we can increase max_segments; default to 512K. Link: https://lore.kernel.org/r/20210621055340.11789-5-jinpu.wang@ionos.com Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com> Reviewed-by: Md Haris Iqbal <haris.iqbal@cloud.ionos.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jack Wang authored
With fast memory registration on the write path, each request needs less memory, so max_send_sge can be reduced to save memory usage. Also convert the kmalloc_array to kcalloc. Link: https://lore.kernel.org/r/20210621055340.11789-4-jinpu.wang@ionos.com Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com> Reviewed-by: Md Haris Iqbal <haris.iqbal@cloud.ionos.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jack Wang authored
With fast memory registration in the write path, we can reduce memory consumption by using a smaller max_send_sge, support IO bigger than 116 KB (29 segments * 4 KB) without splitting, and make the IO path more symmetric. To avoid occasional MR registration failures, wait for the invalidation to finish before registering a new MR. Introduce a refcount and only finish the request when both the local invalidation and the IO reply have arrived. Link: https://lore.kernel.org/r/20210621055340.11789-3-jinpu.wang@ionos.com Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com> Signed-off-by: Md Haris Iqbal <haris.iqbal@ionos.com> Signed-off-by: Dima Stepanov <dmitrii.stepanov@ionos.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jack Wang authored
Introduce a tail WR that can be posted as the last WR, so that a later patch can send the local-invalidate WR after the RDMA WR. While at it, also fix a coding style issue. Link: https://lore.kernel.org/r/20210621055340.11789-2-jinpu.wang@ionos.com Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com> Reviewed-by: Md Haris Iqbal <haris.iqbal@cloud.ionos.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
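A sketch of the idea: the caller passes an optional tail WR that is chained after the data WR so it is posted last. The function name and parameters here are illustrative, not the actual rtrs code:

    static int rtrs_post_rdma_write(struct ib_qp *qp, struct ib_send_wr *wr,
                                    struct ib_send_wr *tail)
    {
            const struct ib_send_wr *bad_wr;

            if (tail)
                    wr->next = tail;   /* e.g. a local-invalidate WR, sent last */

            return ib_post_send(qp, wr, &bad_wr);
    }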
-
- 21 Jun, 2021 14 commits
-
-
Devesh Sharma authored
Change the ucontext ABI response structure to pass wqe_mode to the user library. A flag in comp_mask indicates the presence of wqe_mode. Also move the wqe-mode ABI to uapi/rdma/bnxt_re-abi.h. Link: https://lore.kernel.org/r/20210616202817.1185276-1-devesh.sharma@broadcom.com Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com> Reviewed-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Anand Khoje authored
pahole shows two 4-byte holes in struct ib_port_data, after pkey_list_lock and netdev_lock respectively. Moving netdev_lock to be right after pkey_list_lock shaves eight bytes off the struct. Link: https://lore.kernel.org/r/20210616154509.1047-3-anand.a.khoje@oracle.com Suggested-by: Haakon Bugge <haakon.bugge@oracle.com> Signed-off-by: Anand Khoje <anand.a.khoje@oracle.com> Reviewed-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
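Conceptually (a sketch, not the exact struct layout): a 4-byte spinlock_t followed by an 8-byte-aligned member leaves a 4-byte hole, and this happened twice; placing the two locks next to each other packs them into one 8-byte slot:

    struct ib_port_data {
            ...
            spinlock_t pkey_list_lock;   /* 4 bytes on non-debug configs */
            spinlock_t netdev_lock;      /* now fills what used to be a hole */
            struct list_head pkey_list;
            ...
    };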
-
Anand Khoje authored
Remove the port validity check from ib_get_cached_subnet_prefix(); it is not needed because "port_num" is already valid. Link: https://lore.kernel.org/r/20210616154509.1047-2-anand.a.khoje@oracle.com Suggested-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Anand Khoje <anand.a.khoje@oracle.com> Signed-off-by: Haakon Bugge <haakon.bugge@oracle.com> Reviewed-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
Compilation with W=1 produces warnings similar to the one below. drivers/infiniband/ulp/ipoib/ipoib_main.c:320: warning: This comment starts with '/**', but isn't a kernel-doc comment. Refer to Documentation/doc-guide/kernel-doc.rst. All such occurrences were found with the following one-liner: git grep -A 1 "\/\*\*" drivers/infiniband/ Link: https://lore.kernel.org/r/e57d5f4ddd08b7a19934635b44d6d632841b9ba7.1623823612.git.leonro@nvidia.com Reviewed-by: Jack Wang <jinpu.wang@ionos.com> #rtrs Reviewed-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
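The fix pattern is simply to drop the second '*' when a comment is not actually kernel-doc (sketch; the function name is hypothetical):

    /** some_helper - looks like kernel-doc to scripts/kernel-doc, so W=1 warns */

    /*
     * some_helper - plain comment opener, no warning
     */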
-
Yangyang Li authored
Switch xrcd index allocation and release from hns own bitmap interface to IDA interface. Link: https://lore.kernel.org/r/1623325814-55737-7-git-send-email-liweihang@huawei.com Signed-off-by: Yangyang Li <liyangyang20@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Yangyang Li authored
Switch pd index allocation and release from hns own bitmap interface to IDA interface. Link: https://lore.kernel.org/r/1623325814-55737-6-git-send-email-liweihang@huawei.com Signed-off-by: Yangyang Li <liyangyang20@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
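The IDA API handles the index bookkeeping the custom bitmap used to do; a minimal usage sketch (names are illustrative, not the hns driver's):

    #include <linux/idr.h>

    static DEFINE_IDA(pd_ida);

    static int alloc_pdn(unsigned int max_pds, unsigned long *pdn)
    {
            /* allocate a free index in [0, max_pds - 1] */
            int id = ida_alloc_range(&pd_ida, 0, max_pds - 1, GFP_KERNEL);

            if (id < 0)
                    return id;
            *pdn = id;
            return 0;
    }

    static void free_pdn(unsigned long pdn)
    {
            ida_free(&pd_ida, pdn);
    }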
-
Yangyang Li authored
Switch mtpt index allocation and release from hns own bitmap interface to IDA interface. Link: https://lore.kernel.org/r/1623325814-55737-5-git-send-email-liweihang@huawei.com Signed-off-by: Yangyang Li <liyangyang20@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Yangyang Li authored
Round-robin (RR) is no longer used in the allocation of the bitmap table, and all the function input parameters that use this mechanism are BITMAP_NO_RR. The code that defines and uses the RR needs to be deleted. Link: https://lore.kernel.org/r/1623325814-55737-4-git-send-email-liweihang@huawei.com Signed-off-by: Yangyang Li <liyangyang20@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Yangyang Li authored
hns_roce_bitmap_free_range() is only called inside hns_roce_bitmap_free(), and the input parameter "cnt" is set to a constant 1. In addition, the driver does not use alloc_range scenarios, so free_range does not need to exist. Link: https://lore.kernel.org/r/1623325814-55737-3-git-send-email-liweihang@huawei.com Signed-off-by: Yangyang Li <liyangyang20@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Yangyang Li authored
The function is no longer used. Link: https://lore.kernel.org/r/1623325814-55737-2-git-send-email-liweihang@huawei.com Signed-off-by: Yangyang Li <liyangyang20@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Wenpeng Liang authored
Some format specifiers use '%u' for 'int' and '%d' for 'unsigned int'; they should be fixed. Link: https://lore.kernel.org/r/1623325232-30900-1-git-send-email-liweihang@huawei.com Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
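The rule being enforced, as a trivial sketch:

    int srq_depth = -1;
    unsigned int sge_cnt = 3;

    pr_info("depth = %d, sges = %u\n", srq_depth, sge_cnt);  /* %d for int, %u for unsigned int */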
-
Xi Wang authored
Remove unused members in srq context structure. Link: https://lore.kernel.org/r/1624262443-24528-10-git-send-email-liweihang@huawei.com Signed-off-by: Xi Wang <wangxi11@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Yixing Liu authored
Use hr_reg_write() instead of roce_set_field(). Link: https://lore.kernel.org/r/1624262443-24528-9-git-send-email-liweihang@huawei.com Signed-off-by: Yixing Liu <liuyixing1@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Yixing Liu authored
Use "hr_reg_write" to replace "roce_set_filed". Link: https://lore.kernel.org/r/1624262443-24528-8-git-send-email-liweihang@huawei.comSigned-off-by: Yixing Liu <liuyixing1@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-