- 28 May, 2021 25 commits
-
Guoqing Jiang authored
The function is just a wrapper around rtrs_clt_close_conns(), so call rtrs_clt_close_conns() directly.

Link: https://lore.kernel.org/r/20210528113018.52290-10-jinpu.wang@ionos.com
Signed-off-by: Guoqing Jiang <guoqing.jiang@ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Gioh Kim <gi-oh.kim@ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Guoqing Jiang authored
The two wrappers are not needed since we can call rtrs_{start,stop}_hb directly.

Link: https://lore.kernel.org/r/20210528113018.52290-9-jinpu.wang@ionos.com
Signed-off-by: Guoqing Jiang <guoqing.jiang@ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Gioh Kim <gi-oh.kim@ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Dima Stepanov authored
While analyzing with checkpatch, the following warning was found:

WARNING:STRLCPY: Prefer strscpy over strlcpy - see: https://lore.kernel.org/r/CAHk-=wgfRnXz0W3D37d01q3JFkr_i_uTL=V6A6G1oUZcprmknw@mail.gmail.com/

Fix it by using strscpy calls instead of strlcpy.

Link: https://lore.kernel.org/r/20210528113018.52290-8-jinpu.wang@ionos.com
Signed-off-by: Dima Stepanov <dmitrii.stepanov@ionos.com>
Signed-off-by: Gioh Kim <gi-oh.kim@ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
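A minimal sketch of the conversion pattern (hypothetical buffer names, not the actual hunk from this patch):

    #include <linux/string.h>

    static void copy_session_name(char *dst, size_t dst_sz, const char *src)
    {
        /* Before: strlcpy(dst, src, dst_sz);
         * strlcpy() always returns strlen(src), so it walks the whole
         * source string even when the destination is smaller.
         */

        /* After: strscpy() is bounded by the destination size and returns
         * the number of bytes copied, or -E2BIG if the source did not fit.
         */
        strscpy(dst, src, dst_sz);
    }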
-
Gioh Kim authored
Define MIN_CHUNK_SIZE to replace the hard-coded number. We need 4k for metadata, so MIN_CHUNK_SIZE should be at least 8k.

Link: https://lore.kernel.org/r/20210528113018.52290-7-jinpu.wang@ionos.com
Signed-off-by: Gioh Kim <gi-oh.kim@ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Gioh Kim authored
Max IB immediate data size is 2^28 (MAX_IMM_PAYL_BITS) and the minimum chunk size is 4096 (2^12). Therefore the maximum sess_queue_depth is 65536 (2^16).

Link: https://lore.kernel.org/r/20210528113018.52290-6-jinpu.wang@ionos.com
Signed-off-by: Gioh Kim <gi-oh.kim@ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
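The bound follows from the arithmetic above; a hedged sketch of how it can be expressed (MIN_CHUNK_BITS is an illustrative name, not necessarily the driver's):

    /* Immediate data encodes chunk offsets, so roughly:
     *   sess_queue_depth * min_chunk_size <= 2^MAX_IMM_PAYL_BITS
     * With MAX_IMM_PAYL_BITS = 28 and a 4096-byte (2^12) minimum chunk:
     *   max sess_queue_depth = 2^(28 - 12) = 2^16 = 65536
     */
    #define MAX_IMM_PAYL_BITS     28
    #define MIN_CHUNK_BITS        12
    #define MAX_SESS_QUEUE_DEPTH  (1 << (MAX_IMM_PAYL_BITS - MIN_CHUNK_BITS))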
-
Guoqing Jiang authored
There is no need to use a double switch to check state changes everywhere; change them to "if" statements to reduce the code size.

Link: https://lore.kernel.org/r/20210528113018.52290-5-jinpu.wang@ionos.com
Signed-off-by: Guoqing Jiang <guoqing.jiang@ionos.com>
Reviewed-by: Md Haris Iqbal <haris.iqbal@ionos.com>
Signed-off-by: Gioh Kim <gi-oh.kim@ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
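A generic sketch of the simplification (the state names and transition are hypothetical, not the exact rtrs code):

    enum example_state {
        STATE_IDLE,
        STATE_CONNECTING,
    };

    static bool connecting_allowed(enum example_state old_state,
                                   enum example_state new_state)
    {
        /* Before: a switch on new_state with a nested switch on old_state,
         * even though only one old state makes the transition legal.
         *
         * After: the same legality check collapses to a single expression.
         */
        return new_state == STATE_CONNECTING && old_state == STATE_IDLE;
    }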
-
Md Haris Iqbal authored
It was difficult to find out why establishing an RDMA connection failed. This patch adds messages showing which function failed and why.

Link: https://lore.kernel.org/r/20210528113018.52290-4-jinpu.wang@ionos.com
Signed-off-by: Md Haris Iqbal <haris.iqbal@ionos.com>
Signed-off-by: Gioh Kim <gi-oh.kim@ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Gioh Kim authored
The client receives the queue_depth value from the server, so there is no need to use the MAX_SESS_QUEUE_DEPTH value.

Link: https://lore.kernel.org/r/20210528113018.52290-3-jinpu.wang@ionos.com
Signed-off-by: Gioh Kim <gi-oh.kim@ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Guoqing Jiang authored
We can go to the reject_w_err label directly after initializing err with -ECONNRESET.

Link: https://lore.kernel.org/r/20210528113018.52290-2-jinpu.wang@ionos.com
Reviewed-by: Md Haris Iqbal <haris.iqbal@ionos.com>
Signed-off-by: Guoqing Jiang <guoqing.jiang@ionos.com>
Signed-off-by: Gioh Kim <gi-oh.kim@ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
YueHaibing authored
Use DEVICE_ATTR_*() helpers instead of plain DEVICE_ATTR, which makes the code a bit shorter and easier to read.

Link: https://lore.kernel.org/r/20210528125750.20788-1-yuehaibing@huawei.com
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
YueHaibing authored
Use the DEVICE_ATTR_RO() helper instead of plain DEVICE_ATTR(), which makes the code a bit shorter and easier to read.

Link: https://lore.kernel.org/r/20210526132949.20184-1-yuehaibing@huawei.com
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
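For illustration, the helper collapses the explicit declaration into one line; the attribute name and value below are hypothetical, not taken from this driver:

    #include <linux/device.h>
    #include <linux/sysfs.h>

    static ssize_t board_id_show(struct device *dev,
                                 struct device_attribute *attr, char *buf)
    {
        return sysfs_emit(buf, "example\n");    /* hypothetical value */
    }

    /* Before:
     *   static DEVICE_ATTR(board_id, 0444, board_id_show, NULL);
     *
     * After: DEVICE_ATTR_RO() implies mode 0444 and expects the show
     * callback to be named board_id_show.
     */
    static DEVICE_ATTR_RO(board_id);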
-
YueHaibing authored
Use DEVICE_ATTR_*() helpers instead of plain DEVICE_ATTR, which makes the code a bit shorter and easier to read.

Link: https://lore.kernel.org/r/20210526132753.3092-1-yuehaibing@huawei.com
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Håkon Bugge authored
Both the PKEY and GID tables in an HCA can hold on the order of hundreds of entries. Reading them is expensive, partly because the API for retrieving them only returns a single entry at a time. Further, on certain implementations, e.g., CX-3, the VFs are paravirtualized in this respect and have to rely on the PF driver to perform the read, which again demands VF to PF communication.

IB Core's cache is refreshed on all events. Hence, filter the refresh of the PKEY and GID caches based on the event received being IB_EVENT_PKEY_CHANGE and IB_EVENT_GID_CHANGE respectively.

Fixes: 1da177e4 ("Linux-2.6.12-rc2")
Link: https://lore.kernel.org/r/1621964949-28484-1-git-send-email-haakon.bugge@oracle.com
Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
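A hedged sketch of the filtering idea (the handler name and surrounding code are illustrative, not the actual ib_core implementation):

    #include <rdma/ib_verbs.h>

    static void example_cache_event(struct ib_event *event)
    {
        switch (event->event) {
        case IB_EVENT_PKEY_CHANGE:
            /* re-read only the PKEY table of the affected port */
            break;
        case IB_EVENT_GID_CHANGE:
            /* re-read only the GID table of the affected port */
            break;
        default:
            /* other events no longer trigger a PKEY/GID cache refresh */
            break;
        }
    }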
-
Xi Wang authored
The capability configurations of PFs and VFs are coupled. Decouple them by abstracting some functions and reorganizing the configuration process.

Link: https://lore.kernel.org/r/1621860428-58009-1-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Yixing Liu <liuyixing1@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Max Gurtovoy authored
This will allow both IPv4 and IPv6 sockets to bind a single port at the same time. The same behaviour is implemented in the NVMe/RDMA target.

Link: https://lore.kernel.org/r/20210524085225.29064-1-mgurtovoy@nvidia.com
Reviewed-by: Alaa Hleihel <alaa@nvidia.com>
Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Bart Van Assche authored
Define .init_cmd_priv and .exit_cmd_priv callback functions in struct scsi_host_template. Set .cmd_size such that the SCSI core allocates per-command private data. Use scsi_cmd_priv() to access that private data. Remove the req_ring pointer from struct srp_rdma_ch since it is no longer necessary. Convert srp_alloc_req_data() and srp_free_req_data() into functions that initialize one instance of the SRP-private command data. This is a micro-optimization since this patch removes several pointer dereferences from the hot path.

Note: due to commit e73a5e8e ("scsi: core: Only return started requests from scsi_host_find_tag()"), it is no longer necessary to protect the completion path against duplicate responses.

Link: https://lore.kernel.org/r/20210524041211.9480-6-bvanassche@acm.org
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
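A rough sketch of the mechanism described above (the private struct and all names are illustrative, not the actual SRP code):

    #include <linux/types.h>
    #include <scsi/scsi_cmnd.h>
    #include <scsi/scsi_host.h>

    struct example_cmd_priv {               /* per-command driver data */
        u64 tag;
    };

    static int example_init_cmd_priv(struct Scsi_Host *shost,
                                     struct scsi_cmnd *cmd)
    {
        struct example_cmd_priv *priv = scsi_cmd_priv(cmd);

        priv->tag = 0;                      /* initialize per-command state */
        return 0;
    }

    static struct scsi_host_template example_sht = {
        .cmd_size      = sizeof(struct example_cmd_priv),
        .init_cmd_priv = example_init_cmd_priv,
        /* .exit_cmd_priv releases whatever init allocated, if anything */
    };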
-
Bart Van Assche authored
Only allocate a memory registration list if it will be used and if it will be freed.

Link: https://lore.kernel.org/r/20210524041211.9480-5-bvanassche@acm.org
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Fixes: f273ad4f ("RDMA/srp: Remove support for FMR memory registration")
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Bart Van Assche authored
Applying the __packed attribute to an entire data structure results in suboptimal code on architectures that do not support unaligned accesses. Hence apply the __packed attribute only to those data members that are not naturally aligned.

Link: https://lore.kernel.org/r/20210524041211.9480-4-bvanassche@acm.org
Cc: Nicolas Morey-Chaisemartin <nmoreychaisemartin@suse.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
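An illustrative sketch of the difference (struct and member names are hypothetical, not the SRP wire structures):

    #include <linux/types.h>

    /* Whole-struct packing: every access may be generated as byte loads
     * and stores on architectures without unaligned access support, even
     * for members that are already naturally aligned.
     */
    struct example_hdr_packed {
        u8     opcode;
        u8     flags;
        __be16 length;
        __be64 tag;
    } __packed;

    /* Member-level packing: only the member that would otherwise need
     * padding loses its natural alignment; the rest keep normal accesses.
     */
    struct example_hdr {
        u8     opcode;
        u8     flags;
        __be16 length;
        __be64 tag __packed;    /* sits at offset 4, no padding inserted */
    };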
-
Bart Van Assche authored
Before modifying how the __packed attribute is used, add compile time size checks for the structures that will be modified.

Link: https://lore.kernel.org/r/20210524041211.9480-3-bvanassche@acm.org
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Cc: Nicolas Morey-Chaisemartin <nmoreychaisemartin@suse.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
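A sketch of the kind of check being added; the struct and its expected size are placeholders, not the actual SRP wire structures:

    #include <linux/build_bug.h>
    #include <linux/types.h>

    struct example_wire_iu {        /* placeholder wire structure */
        u8     opcode;
        u8     reserved[7];
        __be64 tag;
        u8     payload[48];
    };

    /* Fails the build if the on-the-wire layout ever changes size. */
    static_assert(sizeof(struct example_wire_iu) == 64,
                  "example_wire_iu must stay 64 bytes");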
-
Bart Van Assche authored
Since ib_get_len() only has one caller, move it from a header file into a .c file. Additionally, remove the superfluous u16 cast. That cast was introduced by commit 7dafbab3 ("IB/hfi1: Add functions to parse BTH/IB headers").

Link: https://lore.kernel.org/r/20210524041211.9480-2-bvanassche@acm.org
Cc: Don Hiatt <don.hiatt@intel.com>
Cc: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Xi Wang authored
Move the HIP06 related code to the hw v1 source file for HEM.

Link: https://lore.kernel.org/r/1621589395-2435-6-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Weihang Li authored
refcount_t is better than a plain integer for reference counting: it WARNs on overflow/underflow and avoids use-after-free risks.

Link: https://lore.kernel.org/r/1621589395-2435-5-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
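A brief sketch of the API difference (the object and helper names are hypothetical):

    #include <linux/refcount.h>
    #include <linux/slab.h>

    struct example_obj {
        refcount_t refcount;    /* was: int refcount, initialized via refcount_set(&obj->refcount, 1) */
    };

    static void example_get(struct example_obj *obj)
    {
        refcount_inc(&obj->refcount);       /* WARNs instead of silently wrapping */
    }

    static void example_put(struct example_obj *obj)
    {
        if (refcount_dec_and_test(&obj->refcount))
            kfree(obj);                     /* last reference dropped */
    }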
-
Xi Wang authored
The HEM page size for the QPC timer and CQC timer is always 4K, and there is no need for the hns driver to calculate a different size; otherwise the ROCEE may access an invalid address.

Fixes: 719d1341 ("RDMA/hns: Remove duplicated hem page size config code")
Link: https://lore.kernel.org/r/1621589395-2435-4-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Xi Wang authored
Split hem_list_alloc_root_bt() into several small functions to make the code flow clearer.

Link: https://lore.kernel.org/r/1621589395-2435-3-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Xi Wang authored
The base address table is allocated by the DMA allocator, and the size is always aligned to PAGE_SIZE. If a fixed size is used to allocate the table, the number of base address entries stored in the table will be smaller than the number that can actually be stored.

Link: https://lore.kernel.org/r/1621589395-2435-2-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 26 May, 2021 1 commit
-
Maor Gottlieb authored
Change all the places in the mlx5_ib driver to take the QP type from the mlx5_ib_qp struct, except the QP initialization flow. This ensures that we check the right QP type also for vendor-specific QPs.

Link: https://lore.kernel.org/r/b2e16cd65b59cd24fa81c01c7989248da44e58ea.1621413899.git.leonro@nvidia.com
Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 20 May, 2021 13 commits
-
Lang Cheng authored
The hcr_mutex was used to serialize mailbox post. Now that mailbox supports concurrency, this variable is no longer useful.

Fixes: a389d016 ("RDMA/hns: Enable all CMDQ context")
Link: https://lore.kernel.org/r/1621482876-35780-4-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Lang Cheng authored
The CRQ of the CMDQ is unused, so remove the code related to it.

Link: https://lore.kernel.org/r/1621482876-35780-3-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Lang Cheng authored
The same name represents opposite meanings in the new and old drivers, which is hard to maintain, so rename them to PI/CI.

Link: https://lore.kernel.org/r/1621482876-35780-2-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Xi Wang authored
The timeout link table only works on the HIP08 ES version, and the hns driver only supports the CS version of HIP08, so delete the related code. Then simplify the buffer allocation for the link table to make the code more readable.

Link: https://lore.kernel.org/r/1621481751-27375-1-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Tian Tao authored
If we go to the err label, abort will be assigned a value of 1, so there is no need to assign it a value of 1 here.

Link: https://lore.kernel.org/r/1621503577-18093-1-git-send-email-tiantao6@hisilicon.com
Signed-off-by: Tian Tao <tiantao6@hisilicon.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Shaokun Zhang authored
The functions 'init_credit_return' and 'sc_return_credits' are each declared twice; remove the repeated declarations.

Link: https://lore.kernel.org/r/1621417415-3772-1-git-send-email-zhangshaokun@hisilicon.com
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Guenter Roeck authored
The result of container_of() operations is never NULL unless the first element of the embedding structure is extracted. This is either not the case here, or the pointer passed to container_of() is known to be not NULL. The NULL checks are therefore unnecessary and misleading. Remove them.

The changes in this patch were made automatically with the following Coccinelle script:

@@
type t;
identifier v;
statement s;
@@

<+...
(
  t v = container_of(...);
|
  v = container_of(...);
)
  ... when != v
- if (\( !v \| v == NULL \) ) s
...+>

Link: https://lore.kernel.org/r/20210514153606.1377119-1-linux@roeck-us.net
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
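For illustration, the reason such checks are dead code (hypothetical structures, not from any of the patched drivers):

    #include <linux/kernel.h>

    struct example_inner { int x; };

    struct example_outer {
        long before;                    /* member is not the first field */
        struct example_inner member;
    };

    static struct example_outer *to_outer(struct example_inner *p)
    {
        /* container_of() subtracts offsetof(struct example_outer, member)
         * from p; for any valid non-NULL p the result cannot be NULL, so
         * an "if (!outer)" check after this call can never fire.
         */
        return container_of(p, struct example_outer, member);
    }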
-
Lang Cheng authored
The old version of ib_umem_get() needed the udata as a parameter, but now these udata parameters are unnecessary.

Fixes: c320e527 ("IB: Allow calls to ib_umem_get from kernel ULPs")
Link: https://lore.kernel.org/r/1620807142-39157-5-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Reviewed-by: Zhu Yanjun <zyjzyj2000@gmail.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
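For context, a hedged sketch of the call-shape change introduced by commit c320e527 that makes these udata parameters dead (the wrapper and variable names are illustrative):

    #include <rdma/ib_umem.h>

    static struct ib_umem *example_pin(struct ib_device *ibdev,
                                       unsigned long start, size_t length,
                                       int access_flags)
    {
        /* Old call shape (before c320e527) took a struct ib_udata *:
         *   ib_umem_get(udata, start, length, access_flags);
         * The new call shape takes the ib_device directly, so driver
         * helpers no longer need to carry a udata just for this call.
         */
        return ib_umem_get(ibdev, start, length, access_flags);
    }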
-
Lang Cheng authored
The old version of ib_umem_get() needed the udata as a parameter, but now these udata parameters are unnecessary.

Fixes: c320e527 ("IB: Allow calls to ib_umem_get from kernel ULPs")
Link: https://lore.kernel.org/r/1620807142-39157-4-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Acked-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Lang Cheng authored
The old version of ib_umem_get() needed the udata as a parameter, but now these udata parameters are unnecessary.

Fixes: c320e527 ("IB: Allow calls to ib_umem_get from kernel ULPs")
Link: https://lore.kernel.org/r/1620807142-39157-3-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Lang Cheng authored
The old version of ib_umem_get() needed the udata as a parameter, but now these udata parameters are unnecessary.

Fixes: c320e527 ("IB: Allow calls to ib_umem_get from kernel ULPs")
Link: https://lore.kernel.org/r/1620807142-39157-2-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Sergey Gorenko authored
The new bit in the comp_mask is needed to mark that kernel supports SQD2RTS transition for the modify QP command.

Link: https://lore.kernel.org/r/7ce705fedac1b2b8e3a2f4013e04244dc5946344.1620641808.git.leonro@nvidia.com
Reviewed-by: Evgenii Kochetov <evgeniik@nvidia.com>
Signed-off-by: Sergey Gorenko <sergeygo@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Sergey Gorenko authored
The transition of the QP state from SQD to RTS is allowed by the IB specification. The hardware also supports that, but it is not implemented in mlx5_ib. This commit adds the SQD2RTS command to the modify QP in mlx5_ib to support the missing feature. The feature is required by the signature pipelining API that will be added to rdma-core.

Link: https://lore.kernel.org/r/ab4876360bfba0e9d64a5e8599438e32e0cb351e.1620641808.git.leonro@nvidia.com
Reviewed-by: Evgenii Kochetov <evgeniik@nvidia.com>
Signed-off-by: Sergey Gorenko <sergeygo@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 11 May, 2021 1 commit
-
Zhen Lei authored
The result of an expression consisting of a single relational operator is already of the bool type and does not need to be evaluated explicitly. No functional change.

Link: https://lore.kernel.org/r/20210510120635.3636-1-thunder.leizhen@huawei.com
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
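A one-line illustration of the cleanup (hypothetical function, not from the patched driver):

    static bool example_is_ready(int state)
    {
        /* Before: return (state == 1) ? true : false;
         * The comparison already yields a bool, so the ternary is redundant.
         */
        return state == 1;
    }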
-