- 13 Sep, 2019 8 commits
-
-
Sergey Gorenko authored
The maximum supported IO size for the iSER driver is 8 MB, limited by the ISCSI_ISER_MAX_SG_TABLESIZE macro, but the driver is able to handle 16 MB IOs without any significant changes. Raising this limit can be useful for storage arrays that are fine-tuned for IOs larger than 8 MB. This commit allows the maximum IO size to be configured up to 16 MB using the max_sectors module parameter.
Link: https://lore.kernel.org/r/20190912103534.18210-1-sergeygo@mellanox.com
Signed-off-by: Sergey Gorenko <sergeygo@mellanox.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Acked-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
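A minimal sketch of how such a tunable is typically exposed and set; the declaration below is illustrative, not the driver's actual code, and only the parameter name max_sectors comes from the commit message.

    #include <linux/module.h>

    /* Illustrative only: a module parameter capping the IO size at 16 MB worth
     * of 512-byte sectors (16 MB / 512 B = 32768 sectors).
     */
    static unsigned int max_sectors = 32768;
    module_param(max_sectors, uint, 0444);
    MODULE_PARM_DESC(max_sectors, "Maximum number of sectors in a single IO");

    /* A user would then pick the limit at load time, e.g.:
     *   modprobe ib_iser max_sectors=32768
     */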
-
Bernard Metzler authored
Use the correct kmap()/kunmap() flow to determine the page address used for CRC computation. Using page_address() is wrong, since the page might be in highmem.
Fixes: b9be6f18 ("rdma/siw: transmit path")
Link: https://lore.kernel.org/r/20190909132427.30264-1-bmt@zurich.ibm.com
Reported-by: Krishnamraju Eraparaju <krishna2@chelsio.com>
Signed-off-by: Bernard Metzler <bmt@zurich.ibm.com>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
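A minimal sketch of the kmap()/kunmap() pattern the fix refers to, assuming a crypto shash descriptor is already set up; the function name is hypothetical and this is not the siw transmit-path code.

    #include <linux/highmem.h>
    #include <crypto/hash.h>

    /* Feed one page fragment into a CRC computation. page_address() can return
     * NULL for a highmem page with no permanent kernel mapping, so the page is
     * mapped temporarily with kmap() for the duration of the update.
     */
    static int crc_update_page(struct shash_desc *desc, struct page *page,
                               unsigned int offset, unsigned int len)
    {
        void *kaddr = kmap(page);       /* valid for highmem pages too */
        int rv = crypto_shash_update(desc, kaddr + offset, len);

        kunmap(page);                   /* drop the temporary mapping */
        return rv;
    }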
-
Navid Emamdoost authored
In bnxt_re_create_srq(), when ib_copy_to_udata() fails the allocated memory should be released by a goto to the fail label.
Fixes: 37cb11ac ("RDMA/bnxt_re: Add SRQ support for Broadcom adapters")
Link: https://lore.kernel.org/r/20190910222120.16517-1-navid.emamdoost@gmail.com
Signed-off-by: Navid Emamdoost <navid.emamdoost@gmail.com>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
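A hedged sketch of the unwind pattern the fix applies; the function and response structure are made up for illustration and this is not the actual bnxt_re code.

    #include <linux/slab.h>
    #include <rdma/ib_verbs.h>

    /* Once memory is allocated, every later failure must branch to the common
     * cleanup label instead of returning directly, or the allocation leaks.
     */
    static int create_object(struct ib_udata *udata, void **out)
    {
        struct { u32 id; } resp = { .id = 1 };
        void *buf;
        int rc;

        buf = kzalloc(64, GFP_KERNEL);
        if (!buf)
            return -ENOMEM;

        rc = ib_copy_to_udata(udata, &resp, sizeof(resp));
        if (rc)
            goto fail;      /* was effectively: return rc; which leaked buf */

        *out = buf;         /* ownership passes to the caller on success */
        return 0;
    fail:
        kfree(buf);
        return rc;
    }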
-
Arnd Bergmann authored
It's never a good idea to put a 1000-byte buffer on the kernel stack. The compiler warns about this instance when usnic_ib_log_vf() gets inlined into usnic_ib_pci_probe():
drivers/infiniband/hw/usnic/usnic_ib_main.c:543:12: error: stack frame size of 1044 bytes in function 'usnic_ib_pci_probe' [-Werror,-Wframe-larger-than=]
As this is only called for debugging purposes in the setup path, it's trivial to convert to a dynamic allocation.
Link: https://lore.kernel.org/r/20190906155730.2750200-1-arnd@arndb.de
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
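A minimal sketch of the conversion, under the assumption that the helper only emits debug output; the names are illustrative, not the actual usnic code.

    #include <linux/device.h>
    #include <linux/slab.h>

    /* Replace a large on-stack buffer with a heap allocation. A failed
     * allocation is acceptable here because the helper is debug-only.
     */
    static void log_vf_info(struct device *dev)
    {
        char *buf = kzalloc(1000, GFP_KERNEL);  /* was: char buf[1000]; */

        if (!buf)
            return;
        /* ... format VF state into buf ... */
        dev_dbg(dev, "%s\n", buf);
        kfree(buf);
    }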
-
Jason Gunthorpe authored
length is a size_t which is unsigned int on 32 bit:
../drivers/infiniband/core/umem_odp.c: In function 'ib_init_umem_odp':
../include/linux/overflow.h:59:15: warning: comparison of distinct pointer types lacks a cast
   59 |  (void) (&__a == &__b); \
      |               ^~
../drivers/infiniband/core/umem_odp.c:220:7: note: in expansion of macro 'check_add_overflow'
Fixes: 204e3e56 ("RDMA/odp: Check for overflow when computing the umem_odp end")
Link: https://lore.kernel.org/r/20190908080726.30017-1-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
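check_add_overflow() requires both operands and the result pointer to share one type; a minimal sketch of the kind of fix involved, with illustrative names rather than the actual umem_odp code:

    #include <linux/errno.h>
    #include <linux/overflow.h>

    /* Mixing size_t (unsigned int on 32-bit) with unsigned long makes
     * check_add_overflow() emit "comparison of distinct pointer types", so
     * promote everything to a single type before the check.
     */
    static int compute_end(unsigned long addr, size_t length, unsigned long *end)
    {
        /* was: check_add_overflow(addr, length, end) with mismatched types */
        if (check_add_overflow(addr, (unsigned long)length, end))
            return -EOVERFLOW;
        return 0;
    }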
-
YueHaibing authored
Use devm_platform_ioremap_resource() to simplify the code a bit. This is detected by coccinelle.
Link: https://lore.kernel.org/r/20190906141727.26552-1-yuehaibing@huawei.com
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
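A generic before/after sketch of the simplification this kind of Coccinelle rule flags (not the driver's actual probe code):

    #include <linux/io.h>
    #include <linux/platform_device.h>

    /* Before: fetch the resource, then map it. */
    static void __iomem *map_regs_old(struct platform_device *pdev)
    {
        struct resource *res = platform_get_resource(pdev, IORESOURCE_MEM, 0);

        return devm_ioremap_resource(&pdev->dev, res);
    }

    /* After: the helper folds both steps into a single call. */
    static void __iomem *map_regs_new(struct platform_device *pdev)
    {
        return devm_platform_ioremap_resource(pdev, 0);
    }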
-
Guoqing Jiang authored
Update the documentation since the functions below have been renamed.
Fixes: 0a18cfe4 ("IB/core: Rename ib_create_ah to rdma_create_ah")
Fixes: 67b985b6 ("IB/core: Rename ib_modify_ah to rdma_modify_ah")
Fixes: bfbfd661 ("IB/core: Rename ib_query_ah to rdma_query_ah")
Fixes: 36523159 ("IB/core: Rename ib_destroy_ah to rdma_destroy_ah")
Link: https://lore.kernel.org/r/20190903124519.28318-1-guoqing.jiang@cloud.ionos.com
Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Håkon Bugge authored
In addr_handler(), assuming status == 0 and the device already has been acquired (id_priv->cma_dev != NULL), we get the following incorrect "error" message:
RDMA CM: ADDR_ERROR: failed to resolve IP. status 0
Fixes: 498683c6 ("IB/cma: Add debug messages to error flows")
Link: https://lore.kernel.org/r/20190902092731.1055757-1-haakon.bugge@oracle.com
Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
- 28 Aug, 2019 11 commits
-
-
Lijun Ou authored
We used the wrong shifts when setting qp_attr->qp_access_flag; this patch swaps V2_QP_RRE_S and V2_QP_RWE_S to fix it.
Fixes: 2a3d923f ("RDMA/hns: Replace magic numbers with #defines")
Signed-off-by: Weihang Li <liweihang@hisilicon.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Link: https://lore.kernel.org/r/1566393276-42555-10-git-send-email-oulijun@huawei.com
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Wenpeng Liang authored
Delete the assignment of srq->ibsrq.ext.xrc.srq_num, because this value is not used.
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Link: https://lore.kernel.org/r/1566393276-42555-9-git-send-email-oulijun@huawei.com
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Wenpeng Liang authored
If the value of srqwqe_buf_pg_sz is zero, then in hns_roce_create_srq() npages is equivalent to ib_umem_page_count() and page_shift is equivalent to PAGE_SHIFT, so remove the redundant calculation.
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Link: https://lore.kernel.org/r/1566393276-42555-8-git-send-email-oulijun@huawei.com
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Lang Cheng authored
If the hardware is resetting, the driver should not perform the mailbox operation, so the function-clear path needs to check for this before issuing it.
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Link: https://lore.kernel.org/r/1566393276-42555-7-git-send-email-oulijun@huawei.com
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Lang Cheng authored
Sparse complains about mixed u32 and __le32 usage in the driver. roce_set_field() is meant only for __le32 data destined for the hardware; if a variable is never delivered to the hardware, the __le32 type and its related conversions are not required.
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Link: https://lore.kernel.org/r/1566393276-42555-6-git-send-email-oulijun@huawei.com
Signed-off-by: Doug Ledford <dledford@redhat.com>
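A minimal sketch of the convention this cleanup enforces; the structure and field names are illustrative, not the actual hns_roce descriptor layout.

    #include <linux/types.h>
    #include <asm/byteorder.h>

    struct hw_desc {
        __le32 dword0;      /* reaches the device: stays little-endian */
    };

    /* CPU-side bookkeeping stays plain u32; the endian conversion happens only
     * at the hardware boundary, which keeps sparse quiet.
     */
    static void fill_desc(struct hw_desc *desc, u32 opcode)
    {
        desc->dword0 = cpu_to_le32(opcode);
    }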
-
Lijun Ou authored
Use meaningful macros instead of magic numbers for readability.
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Lang Chen <chenglang@huawei.com>
Link: https://lore.kernel.org/r/1566393276-42555-5-git-send-email-oulijun@huawei.com
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Lang Cheng authored
Change the type of some members of hns_roce_av from __le32 to u32/u8, and split sl_tclass_flowlabel into three separate fields.
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Link: https://lore.kernel.org/r/1566393276-42555-4-git-send-email-oulijun@huawei.com
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Jason Gunthorpe authored
Michael Guralnik says:
====================
The series adds support for on-demand paging for DC transport. As DC is a mlx-only transport, the capabilities are exposed to the user using DEVX objects and later on through mlx5dv_query_device.
====================
Based on the mlx5-next branch from git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux for dependencies
* branch 'mlx5-odp-dc':
  IB/mlx5: Add page fault handler for DC initiator WQE
  IB/mlx5: Remove check of FW capabilities in ODP page fault handling
  net/mlx5: Set ODP capabilities for DC transport to max
-
Michael Guralnik authored
Parsing DC initiator WQEs upon page fault requires skipping an address vector segment, as in UD WQEs.
Link: https://lore.kernel.org/r/20190819120815.21225-4-leon@kernel.org
Signed-off-by: Michael Guralnik <michaelgur@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Michael Guralnik authored
As page fault handling is initiated by FW, there is no need to check that the ODP supports the operation and transport.
Link: https://lore.kernel.org/r/20190819120815.21225-3-leon@kernel.org
Signed-off-by: Michael Guralnik <michaelgur@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Michael Guralnik authored
In mlx5_core initialization, query the maximum ODP capabilities for the DC transport from FW and set them as the current capabilities.
Signed-off-by: Michael Guralnik <michaelgur@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
-
- 27 Aug, 2019 4 commits
-
-
Qian Cai authored
In file included from ./arch/powerpc/include/asm/paca.h:15, from ./arch/powerpc/include/asm/current.h:13, from ./include/linux/thread_info.h:21, from ./include/asm-generic/preempt.h:5, from ./arch/powerpc/include/generated/asm/preempt.h:1, from ./include/linux/preempt.h:78, from ./include/linux/spinlock.h:51, from ./include/linux/wait.h:9, from ./include/linux/completion.h:12, from ./include/linux/mlx5/driver.h:37, from drivers/net/ethernet/mellanox/mlx5/core/lib/eq.h:6, from drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c:33:
In function 'strncpy', inlined from 'mlx5_fw_tracer_save_trace' at drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c:549:2, inlined from 'mlx5_tracer_print_trace' at drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c:574:2:
./include/linux/string.h:305:9: warning: '__builtin_strncpy' output may be truncated copying 256 bytes from a string of length 511 [-Wstringop-truncation]
  return __builtin_strncpy(p, q, size);
         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Fix it by using strscpy_pad(), introduced in commit 458a3bf8 ("lib/string: Add strscpy_pad() function"), which always NUL-terminates the string and avoids possibly leaking data through the ring buffer, where a non-admin account might enable these events through perf.
Fixes: fd1483fe ("net/mlx5: Add support for FW reporter dump")
Signed-off-by: Qian Cai <cai@lca.pw>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
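A minimal sketch of the substitution, assuming an illustrative 256-byte destination rather than the driver's actual tracer structures:

    #include <linux/string.h>

    #define TRACE_LINE_LEN 256      /* illustrative size, not the driver's macro */

    /* strscpy_pad() truncates safely, always NUL-terminates, and zero-fills the
     * remainder of the destination, so stale bytes can never leak out through
     * the trace ring buffer.
     */
    static void save_trace_line(char dst[TRACE_LINE_LEN], const char *msg)
    {
        strscpy_pad(dst, msg, TRACE_LINE_LEN);  /* was: strncpy(dst, msg, TRACE_LINE_LEN) */
    }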
-
Markus Elfring authored
The dev_kfree_skb() function also performs input parameter validation, so the NULL test around the affected calls is not needed. This issue was detected by using the Coccinelle software.
Link: https://lore.kernel.org/r/16df4c50-1f61-d7c4-3fc8-3073666d281d@web.de
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
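A generic before/after sketch of the pattern Coccinelle flags here (hypothetical helper, not the actual driver code):

    #include <linux/skbuff.h>

    /* dev_kfree_skb() already tolerates a NULL skb, so the caller-side guard
     * adds nothing.
     */
    static void drop_pending_skb(struct sk_buff *skb)
    {
        /* was:
         *      if (skb)
         *              dev_kfree_skb(skb);
         */
        dev_kfree_skb(skb);
    }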
-
Gal Pressman authored
Use the FIELD_SIZEOF() macro instead of hard coding the member size in the field_avail() macro.
Link: https://lore.kernel.org/r/20190826115350.21718-3-galpress@amazon.com
Reviewed-by: Daniel Kranzdorf <dkkranzd@amazon.com>
Reviewed-by: Firas JahJah <firasj@amazon.com>
Signed-off-by: Gal Pressman <galpress@amazon.com>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
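A hedged sketch of a field_avail()-style helper built on FIELD_SIZEOF(); the structure and the resp_has_field() macro are hypothetical, only FIELD_SIZEOF() itself is the kernel macro (sizeof(((t *)0)->f), later renamed sizeof_field()).

    #include <linux/kernel.h>   /* FIELD_SIZEOF() lived here at the time */
    #include <linux/stddef.h>
    #include <linux/types.h>

    struct cmd_resp {
        u32 flags;
        u32 reserved;
    };

    /* Does the user-provided response length cover this member? The member
     * size now comes from the structure definition, not a hard-coded constant.
     */
    #define resp_has_field(resp_len, field) \
        ((resp_len) >= offsetof(struct cmd_resp, field) + \
                       FIELD_SIZEOF(struct cmd_resp, field))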
-
Gal Pressman authored
The EFA driver is not a kverbs provider, so the check for the MR umem is redundant.
Link: https://lore.kernel.org/r/20190826115350.21718-2-galpress@amazon.com
Reviewed-by: Firas JahJah <firasj@amazon.com>
Reviewed-by: Yossi Leybovich <sleybo@amazon.com>
Signed-off-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
- 26 Aug, 2019 2 commits
-
-
Mark Zhang authored
Currently, user applications can only steer TCP/IP (NIC RX/RX) traffic. This patch adds RDMA_RX as a new flow type to allow the user to insert steering rules to control RDMA traffic. Two destinations are supported (but not set at the same time): a devx flow table object and a QP.
Signed-off-by: Mark Zhang <markz@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Link: https://lore.kernel.org/r/20190819113626.20284-4-leon@kernel.org
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Doug Ledford authored
Bring in the latest mlx5-next branch, as the RDMA RX RoCE Steering Support patch series requires it (the first two patches are in mlx5-next, the final patch is in the RDMA tree).
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
- 21 Aug, 2019 15 commits
-
-
Jason Gunthorpe authored
Jason Gunthorpe says:
====================
This is a collection of general cleanups for ODP to clarify some of the flows around umem creation and use of the interval tree.
====================
The branch is based on v5.3-rc5 due to dependencies
* odp_fixes:
  RDMA/mlx5: Use odp instead of mr->umem in pagefault_mr
  RDMA/mlx5: Use ib_umem_start instead of umem.address
  RDMA/core: Make invalidate_range a device operation
  RDMA/odp: Use kvcalloc for the dma_list and page_list
  RDMA/odp: Check for overflow when computing the umem_odp end
  RDMA/odp: Provide ib_umem_odp_release() to undo the allocs
  RDMA/odp: Split creating a umem_odp from ib_umem_get
  RDMA/odp: Make the three ways to create a umem_odp clear
  RMDA/odp: Consolidate umem_odp initialization
  RDMA/odp: Make it clearer when a umem is an implicit ODP umem
  RDMA/odp: Iterate over the whole rbtree directly
  RDMA/odp: Use the common interval tree library instead of generic
  RDMA/mlx5: Fix MR npages calculation for IB_ACCESS_HUGETLB
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Jason Gunthorpe authored
These are the same thing since mr always comes from odp->private. It is confusing to reference the same memory via two names.
Link: https://lore.kernel.org/r/20190819111710.18440-13-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Jason Gunthorpe authored
These are subtly different: the address is the original VA requested during umem_get(), while ib_umem_start() is that value rounded to the proper page size, i.e. the true start of the umem's DMA map.
Link: https://lore.kernel.org/r/20190819111710.18440-12-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
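A conceptual sketch of the distinction, using the generic ALIGN helpers rather than the actual ib_umem inlines (whose exact form changed during this series); the helper names below are hypothetical.

    #include <linux/kernel.h>   /* ALIGN() / ALIGN_DOWN() */

    /* umem->address keeps the raw VA the user asked for; ib_umem_start() is the
     * page-aligned value, i.e. where the DMA mapping really begins. Roughly:
     */
    static unsigned long umem_dma_start(unsigned long address,
                                        unsigned long page_size)
    {
        return ALIGN_DOWN(address, page_size);  /* what ib_umem_start() reports */
    }

    static unsigned long umem_dma_end(unsigned long address, unsigned long length,
                                      unsigned long page_size)
    {
        return ALIGN(address + length, page_size); /* one past the last mapped byte */
    }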
-
Moni Shoua authored
The callback function 'invalidate_range' is implemented in a driver so the place for it is in the ib_device_ops structure and not in ib_ucontext.
Signed-off-by: Moni Shoua <monis@mellanox.com>
Reviewed-by: Guy Levi <guyle@mellanox.com>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Link: https://lore.kernel.org/r/20190819111710.18440-11-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Jason Gunthorpe authored
There is no specific need for these to be in the vmalloc space; let the system decide automatically how to do the allocation.
Link: https://lore.kernel.org/r/20190819111710.18440-10-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
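A minimal sketch of the kvcalloc()/kvfree() pairing, with illustrative parameters rather than the exact umem_odp fields:

    #include <linux/mm.h>
    #include <linux/slab.h>

    /* kvcalloc() lets the allocator choose: kmalloc for small arrays, with a
     * vmalloc fallback for large ones. Free with kvfree(), which handles both.
     */
    static int alloc_page_lists(struct page ***pages, dma_addr_t **dmas,
                                size_t npages)
    {
        *pages = kvcalloc(npages, sizeof(**pages), GFP_KERNEL);
        *dmas = kvcalloc(npages, sizeof(**dmas), GFP_KERNEL);
        if (!*pages || !*dmas) {
            kvfree(*pages);
            kvfree(*dmas);
            return -ENOMEM;
        }
        return 0;
    }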
-
Jason Gunthorpe authored
Since the page size can be extended in the ODP case by IB_ACCESS_HUGETLB the existing overflow checks done by ib_umem_get() are not sufficient. Check for overflow again. Further, remove the unchecked math from the inlines and just use the precomputed value stored in the interval_tree_node.
Link: https://lore.kernel.org/r/20190819111710.18440-9-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Jason Gunthorpe authored
Now that there are allocator APIs that return the ib_umem_odp directly it should be freed through a umem_odp free'er as well.
Link: https://lore.kernel.org/r/20190819111710.18440-8-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Jason Gunthorpe authored
This is the last creation API that is overloaded for both; there is very little code sharing, and a driver has to be specifically ready for a umem_odp to be created in order to use the odp version.
Link: https://lore.kernel.org/r/20190819111710.18440-7-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Jason Gunthorpe authored
The three paths to build the umem_odps are kind of muddled, they are:
- As a normal ib_mr umem
- As a child in an implicit ODP umem tree
- As the root of an implicit ODP umem tree
Only the first two are actually umem's, the last is an abuse. The implicit case can only be triggered by explicit driver request, it should never be co-mingled with the normal case. While we are here, make sensible function names and add some comments to make this clearer.
Link: https://lore.kernel.org/r/20190819111710.18440-6-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Jason Gunthorpe authored
This is done in two different places; consolidate all the post-allocation initialization into a single function.
Link: https://lore.kernel.org/r/20190819111710.18440-5-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Jason Gunthorpe authored
Implicit ODP umems are special: they don't have any page lists, they don't exist in the interval tree, and they are never DMA mapped. Instead of trying to guess this based on a zero length, use an explicit flag. Further, do not allow non-implicit umems to be 0 size.
Link: https://lore.kernel.org/r/20190819111710.18440-4-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Jason Gunthorpe authored
Instead of intersecting a full interval, just iterate over every element directly. This is faster and clearer.
Link: https://lore.kernel.org/r/20190819111710.18440-3-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Jason Gunthorpe authored
ODP works with userspace VAs in the interval tree, which always fit into an unsigned long, so we can use the common code. This comes at the cost of a 16-byte increase in the ib_umem_odp struct size due to storing the interval tree start/last in addition to the umem addr/length. However, these values were computed and are performance critical for the interval lookup, so this seems like a worthwhile trade-off. Removes 2k of .text from the kernel.
Link: https://lore.kernel.org/r/20190819111710.18440-2-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
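A small sketch of the stock interval tree helpers the common library provides (generic usage, not the umem_odp wrapper code; the track_range() helper is hypothetical):

    #include <linux/errno.h>
    #include <linux/interval_tree.h>

    /* The generic library keys on unsigned long [start, last] directly, which
     * is enough for userspace VAs.
     */
    static int track_range(struct rb_root_cached *root,
                           struct interval_tree_node *node,
                           unsigned long va, unsigned long len)
    {
        /* stabbing query: refuse ranges that overlap an existing entry */
        if (interval_tree_iter_first(root, va, va + len - 1))
            return -EEXIST;

        node->start = va;
        node->last = va + len - 1;      /* inclusive last byte */
        interval_tree_insert(node, root);
        return 0;
    }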
-
Mark Zhang authored
Use different namespaces for bypass and switchdev loopback because they have different priorities and default table miss action requirements:
1. bypass: multiple priorities are supported, with MLX5_FLOW_TABLE_MISS_ACTION_DEF as the default table miss action;
2. switchdev loopback: a single priority is supported, with MLX5_FLOW_TABLE_MISS_ACTION_SWITCH_DOMAIN as the default table miss action.
Signed-off-by: Mark Zhang <markz@mellanox.com>
Reviewed-by: Mark Bloch <markb@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
-
Mark Zhang authored
Currently all the namespaces under the same steering domain share the same default table miss action; however, in some situations (e.g., RDMA RX) different actions are required. This patch adds a per-namespace default table miss action instead of using the miss action of the steering domain.
Signed-off-by: Mark Zhang <markz@mellanox.com>
Reviewed-by: Mark Bloch <markb@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
-