- 07 Jun, 2021 4 commits
-
-
Colin Ian King authored
There is a spelling mistake in a literal string. Fix it. Link: https://lore.kernel.org/r/20210607113345.82206-1-colin.king@canonical.com Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Colin Ian King authored
The variable val is being initialized with a value that is never read; it is updated later on. The assignment is redundant and can be removed. Link: https://lore.kernel.org/r/20210605131347.26293-1-colin.king@canonical.com Addresses-Coverity: ("Unused value") Signed-off-by: Colin Ian King <colin.king@canonical.com> Acked-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Colin Ian King authored
A single statement is indented one level too deeply; clean up the code by removing the extraneous tab. Link: https://lore.kernel.org/r/20210605130400.25987-1-colin.king@canonical.com Signed-off-by: Colin Ian King <colin.king@canonical.com> Acked-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Colin Ian King authored
Shifting the u8 integer info->map[i] to the left promotes it to a 32-bit signed int, which is then sign-extended to a u64. If the top bit of the u8 is set, all the upper 32 bits of the u64 also end up being set because of the sign-extension. Fix this by casting the u8 values to a u64 before the left shift. Link: https://lore.kernel.org/r/20210605122059.25105-1-colin.king@canonical.com Addresses-Coverity: ("Unintentional integer overflow / bad shift operation") Fixes: 3f49d684 ("RDMA/irdma: Implement HW Admin Queue OPs") Signed-off-by: Colin Ian King <colin.king@canonical.com> Acked-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
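A minimal sketch of the promotion hazard and the fix (the variable name and shift amount are illustrative, not taken from the irdma source):

    u8  map = 0x80;          /* top bit set */
    u64 reg = 0;

    /* Buggy: 'map' is promoted to a 32-bit signed int before the shift;
     * the OR into a u64 then sign-extends, setting the upper 32 bits. */
    reg |= map << 24;

    /* Fixed: widen to u64 first so the shift and the OR stay unsigned. */
    reg |= (u64)map << 24;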
-
- 04 Jun, 2021 1 commit
-
-
Jiapeng Chong authored
The error code is missing in this code scenario so 0 will be returned. Add the error code '-EINVAL' to the return value 'ret'. Eliminates the following smatch warning: drivers/infiniband/hw/cxgb4/qp.c:298 create_qp() warn: missing error code 'ret'. Link: https://lore.kernel.org/r/1622545669-20625-1-git-send-email-jiapeng.chong@linux.alibaba.com Reported-by: Abaci Robot <abaci@linux.alibaba.com> Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
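A hedged sketch of this class of fix, assuming the usual goto-unwind error path (identifiers are illustrative, not copied from cxgb4):

    if (!resource_ok) {
        ret = -EINVAL;   /* previously missing: 'ret' was still 0 here */
        goto err_free;   /* so this error path returned "success" */
    }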
-
- 03 Jun, 2021 6 commits
-
-
Devesh Sharma authored
Updated the maintainers list and removed non-active members. Link: https://lore.kernel.org/r/20210603131534.982257-3-devesh.sharma@broadcom.com Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Devesh Sharma authored
Enable atomic operations for Gen P5 devices if the underlying platform supports global atomic ops. Link: https://lore.kernel.org/r/20210603131534.982257-2-devesh.sharma@broadcom.com Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Kamal Heib authored
To avoid the following failure when trying to load the rdma_rxe module while IPv6 is disabled, add a check for EAFNOSUPPORT and ignore the failure; also delete the needless debug print from rxe_setup_udp_tunnel(). $ modprobe rdma_rxe modprobe: ERROR: could not insert 'rdma_rxe': Operation not permitted Fixes: dfdd6158 ("IB/rxe: Fix kernel panic in udp_setup_tunnel") Link: https://lore.kernel.org/r/20210603090112.36341-1-kamalheib1@gmail.com Reported-by: Yi Zhang <yi.zhang@redhat.com> Signed-off-by: Kamal Heib <kamalheib1@gmail.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
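A sketch of the resulting behaviour, assuming the rxe_setup_udp_tunnel() helper named above returns an ERR_PTR() on failure (exact signature and surrounding flow are simplified):

    sk6 = rxe_setup_udp_tunnel(&init_net, htons(ROCE_V2_UDP_DPORT), true);
    if (IS_ERR(sk6)) {
        if (PTR_ERR(sk6) == -EAFNOSUPPORT) {
            /* no IPv6 stack: carry on with IPv4 only instead of failing */
            pr_warn("IPv6 is not supported, skipping IPv6 tunnel\n");
            sk6 = NULL;
        } else {
            return PTR_ERR(sk6);
        }
    }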
-
Bob Pearson authored
In order to prevent user space from modifying the index that belongs to the kernel for shared queues, let the kernel use a local copy of the index and copy any new values of that index to the shared rxe_queue_buf struct. This adds more switch statements which decreases the performance of the queue API. Move the type into the parameter list for these functions so that the compiler can optimize out the switch statements when the explicit type is known. Modify all the calls in the driver on performance paths to pass in the explicit queue type. Link: https://lore.kernel.org/r/20210527194748.662636-4-rpearsonhpe@gmail.com Link: https://lore.kernel.org/linux-rdma/20210526165239.GP1002214@nvidia.com/ Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
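A sketch of the idea using simplified structures (struct and enum names are assumptions, not the real rxe definitions): the kernel advances a private copy of its index and only publishes it to the shared buffer, and the type is an explicit parameter so the switch folds away when callers pass a constant.

    static inline void queue_advance_producer(struct example_queue *q,
                                              enum example_queue_type type)
    {
        switch (type) {
        case QUEUE_TYPE_TO_USER:
            /* producer index belongs to the kernel: bump the private copy,
             * then publish it to the buffer shared with user space */
            q->index = (q->index + 1) & q->index_mask;
            smp_store_release(&q->buf->producer_index, q->index);
            break;
        case QUEUE_TYPE_KERNEL:
            /* kernel-only queue: the queue lock is the only protection needed */
            q->buf->producer_index = (q->buf->producer_index + 1) & q->index_mask;
            break;
        default:
            break;
        }
    }

    /* hot-path callers pass a literal type so the compiler drops the other cases */
    queue_advance_producer(cq->queue, QUEUE_TYPE_TO_USER);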
-
Bob Pearson authored
Modify the queue APIs to protect all user space index loads with smp_load_acquire() and all user space index stores with smp_store_release(). Base this on the types of the queues which can be one of ..KERNEL, ..FROM_USER, ..TO_USER. Kernel space indices are protected by locks which also provide memory barriers. Link: https://lore.kernel.org/r/20210527194748.662636-3-rpearsonhpe@gmail.com Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
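The load side of the same pattern, sketched with assumed names: an index written by user space is read with smp_load_acquire() to pair with user space's release store, while kernel-owned indices are read plainly because the queue lock already orders them.

    static inline u32 queue_get_producer(const struct example_queue *q,
                                         enum example_queue_type type)
    {
        switch (type) {
        case QUEUE_TYPE_FROM_USER:
            /* written by user space: ordered load */
            return smp_load_acquire(&q->buf->producer_index);
        case QUEUE_TYPE_TO_USER:
        case QUEUE_TYPE_KERNEL:
        default:
            /* written by the kernel under a lock */
            return q->buf->producer_index;
        }
    }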
-
Bob Pearson authored
To create optimal code we only want to use smp_load_acquire() and smp_store_release() for user indices in the rxe_queue APIs, since kernel indices are protected by locks which also act as memory barriers. By adding a type to the queues we can determine which indices need to be protected. Link: https://lore.kernel.org/r/20210527194748.662636-2-rpearsonhpe@gmail.com Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
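A sketch of the type tag recorded at queue creation time (enumerator names and comments are illustrative):

    enum example_queue_type {
        QUEUE_TYPE_KERNEL,     /* both indices owned by the kernel, lock-protected */
        QUEUE_TYPE_TO_USER,    /* kernel produces, user space consumes */
        QUEUE_TYPE_FROM_USER,  /* user space produces, kernel consumes */
    };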
-
- 02 Jun, 2021 26 commits
-
-
Jason Gunthorpe authored
Shiraz Saleem says: ==================== Add Intel Ethernet Protocol Driver for RDMA (irdma) The following patch series introduces a unified Intel Ethernet Protocol Driver for RDMA (irdma) for the X722 iWARP device and a new E810 device which supports iWARP and RoCEv2. The irdma module replaces the legacy i40iw module for X722 and extends the ABI already defined for i40iw. It is backward compatible with the legacy X722 rdma-core provider (libi40iw). X722 and E810 are PCI network devices that are RDMA capable. The RDMA block of this parent device is represented via an auxiliary device exported to 'irdma' using the core auxiliary bus infrastructure recently added for the 5.11 kernel. The parent PCI netdev drivers 'i40e' and 'ice' register auxiliary RDMA devices with private data/ops encapsulated that bind to auxiliary drivers registered in the irdma module. Currently, the default is RoCEv2 for E810. Runtime support for a protocol switch to iWARP will be made available via devlink in a future patch. ==================== Link: https://lore.kernel.org/r/20210602205138.889-1-shiraz.saleem@intel.com Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> * branch 'irdma': RDMA/irdma: Update MAINTAINERS file RDMA/irdma: Add irdma Kconfig/Makefile and remove i40iw RDMA/irdma: Add ABI definitions RDMA/irdma: Add dynamic tracing for CM RDMA/irdma: Add miscellaneous utility definitions RDMA/irdma: Add user/kernel shared libraries RDMA/irdma: Add RoCEv2 UD OP support RDMA/irdma: Implement device supported verb APIs RDMA/irdma: Add PBLE resource manager RDMA/irdma: Add connection manager RDMA/irdma: Add QoS definitions RDMA/irdma: Add privileged UDA queue implementation RDMA/irdma: Add HMC backing store setup functions RDMA/irdma: Implement HW Admin Queue OPs RDMA/irdma: Implement device initialization definitions RDMA/irdma: Register auxiliary driver and implement private channel OPs i40e: Register auxiliary devices to provide RDMA i40e: Prep i40e header for aux bus conversion ice: Register auxiliary device to provide RDMA ice: Implement iidc operations ice: Initialize RDMA support iidc: Introduce iidc.h i40e: Replace one-element array with flexible-array member
-
Shiraz Saleem authored
Add maintainer entry for irdma driver. Link: https://lore.kernel.org/r/20210602205138.889-17-shiraz.saleem@intel.com Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com> Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Shiraz Saleem authored
Add Kconfig and Makefile to build irdma driver. Remove i40iw driver and add an alias in irdma. Remove legacy exported symbols i40e_register_client and i40e_unregister_client from i40e as they are no longer used. irdma is the replacement driver that supports X722. Link: https://lore.kernel.org/r/20210602205138.889-16-shiraz.saleem@intel.com Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Mustafa Ismail authored
Add ABI definitions for irdma. Link: https://lore.kernel.org/r/20210602205138.889-15-shiraz.saleem@intel.com Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com> Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Michael J. Ruhl authored
Add dynamic tracing functionality to debug connection management issues. Link: https://lore.kernel.org/r/20210602205138.889-14-shiraz.saleem@intel.com Signed-off-by: "Michael J. Ruhl" <michael.j.ruhl@intel.com> Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Mustafa Ismail authored
Add miscellaneous utility functions and headers. Link: https://lore.kernel.org/r/20210602205138.889-13-shiraz.saleem@intel.com Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com> Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Mustafa Ismail authored
Building the WQE descriptors for different verb operations is similar in kernel and user-space. Add these shared libraries. Link: https://lore.kernel.org/r/20210602205138.889-12-shiraz.saleem@intel.com Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com> Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Mustafa Ismail authored
Add the header, data structures and functions to populate the WQE descriptors and issue the Control QP commands that support RoCEv2 UD operations. Link: https://lore.kernel.org/r/20210602205138.889-11-shiraz.saleem@intel.com Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com> Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Mustafa Ismail authored
Implement device supported verb APIs. The supported APIs vary based on the underlying transport the ibdev is registered as (i.e. iWARP or RoCEv2). Link: https://lore.kernel.org/r/20210602205138.889-10-shiraz.saleem@intel.com Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com> Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Mustafa Ismail authored
Implement a Physical Buffer List Entry (PBLE) resource manager to manage a pool of PBLE HMC resource objects. Link: https://lore.kernel.org/r/20210602205138.889-9-shiraz.saleem@intel.com Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com> Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Mustafa Ismail authored
Add connection management (CM) implementation for iWARP including accept, reject, connect, create_listen, destroy_listen and CM utility functions. Link: https://lore.kernel.org/r/20210602205138.889-8-shiraz.saleem@intel.com Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com> Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Mustafa Ismail authored
Add definitions for managing the RDMA HW work scheduler (WS) tree. A WS node is created via a control QP operation with the bandwidth allocation, arbitration scheme, and traffic class of the QP specified. The Qset handle returned associates the QoS parameters for the QP. The Qset is registered with the LAN and an equivalent node is created in the LAN packet scheduler tree. Link: https://lore.kernel.org/r/20210602205138.889-7-shiraz.saleem@intel.com Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com> Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Mustafa Ismail authored
Implement privileged UDA queues to handle iWARP connection packets and receive exceptions. Link: https://lore.kernel.org/r/20210602205138.889-6-shiraz.saleem@intel.com Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com> Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Mustafa Ismail authored
HW uses host memory as a backing store for a number of protocol context objects and queue state tracking. The Host Memory Cache (HMC) is a component responsible for managing these objects stored in host memory. Add the functions and data structures to manage the allocation of backing pages used by the HMC for the various objects. Link: https://lore.kernel.org/r/20210602205138.889-5-shiraz.saleem@intel.com Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com> Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Mustafa Ismail authored
The driver posts privileged commands to the HW Admin Queue (Control QP or CQP) to request administrative actions from the HW. Implement create/destroy of CQP and the supporting functions, data structures and headers to handle the different CQP commands. Link: https://lore.kernel.org/r/20210602205138.889-4-shiraz.saleem@intel.com Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com> Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Mustafa Ismail authored
Implement device initialization routines, interrupt set-up, and allocate object bit-map tracking structures. Also, add device specific attributes and register definitions. Link: https://lore.kernel.org/r/20210602205138.889-3-shiraz.saleem@intel.com [flexible array transformation] Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org> Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com> Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Mustafa Ismail authored
Register auxiliary drivers which can attach to auxiliary RDMA devices from Intel PCI netdev drivers i40e and ice. Implement the private channel ops, and register net notifiers. Link: https://lore.kernel.org/r/20210602205138.889-2-shiraz.saleem@intel.com [flexible array transformation] Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org> Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com> Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Mark Zhang authored
During cm_dev deregistration in cm_remove_one(), the cm_device and cm_ports will be freed; after that they must not be accessed. The mad_agent needs to be protected as well. This patch adds a cm_device kref to protect cm_dev and cm_ports, and a mad_agent_lock spinlock to protect the mad_agent. Link: https://lore.kernel.org/r/501ba7a2ff203dccd0e6755d3f93329772adce52.1622629024.git.leonro@nvidia.com Signed-off-by: Mark Zhang <markzhang@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
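A hedged sketch of the refcounting half of that change (struct layout and helper names are illustrative, not the actual ib_cm code):

    struct cm_device {
        struct kref kref;
        /* ports, ib_device pointer, ... */
    };

    static void cm_device_release(struct kref *kref)
    {
        kfree(container_of(kref, struct cm_device, kref));
    }

    /* every user of cm_dev holds a reference for the duration of the access */
    kref_get(&cm_dev->kref);
    /* ... use cm_dev and its ports ... */
    kref_put(&cm_dev->kref, cm_device_release);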
-
Mark Zhang authored
The cm_init_av_for_lap() and cm_init_av_by_path() function calls have the following issues: 1. Both of them might sleep and should not be called under a spinlock. 2. The access of cm_id_priv->av should be under cm_id_priv->lock, which means it can't be initialized directly. This patch splits the calling of the two functions into two parts: the first initializes an AV outside of the spinlock, the second copies the AV to cm_id_priv->av under the spinlock. Fixes: e1444b5a ("IB/cm: Fix automatic path migration support") Link: https://lore.kernel.org/r/038fb8ad932869b4548b0c7708cab7f76af06f18.1622629024.git.leonro@nvidia.com Signed-off-by: Mark Zhang <markzhang@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
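A sketch of the two-step pattern (the real helper signatures carry more arguments; names are trimmed for illustration): build the AV in a local variable with no lock held, then copy it into cm_id_priv->av under the lock that guards it.

    struct cm_av av = {};
    int ret;

    /* may sleep (path/route resolution), so no spinlock is held here */
    ret = cm_init_av_by_path(path, sgid_attr, &av);
    if (ret)
        return ret;

    spin_lock_irq(&cm_id_priv->lock);
    cm_id_priv->av = av;            /* publish under cm_id_priv->lock */
    spin_unlock_irq(&cm_id_priv->lock);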
-
Mark Zhang authored
The mad_agent parameter is redundant since the struct ib_mad_send_buf already has a pointer to it. Link: https://lore.kernel.org/r/0987c784b25f7bfa72f78691f50cff066de587e1.1622629024.git.leonro@nvidia.com Signed-off-by: Mark Zhang <markzhang@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Mark Zhang authored
This reverts commit 9db0ff53, which wasn't a full fix and still causes the following panic: panic @ time 1605623870.843, thread 0xfffffeb63b552000: vm_fault_lookup: fault on nofault entry, addr: 0xfffffe811a94e000 time = 1605623870 cpuid = 9, TSC = 0xb7937acc1b6 Panic occurred in module kernel loaded at 0xffffffff80200000: Stack: -------------------------------------------------- kernel:vm_fault+0x19da kernel:vm_fault_trap+0x6e kernel:trap_pfault+0x1f1 kernel:trap+0x31e kernel:cm_destroy_id+0x38c kernel:rdma_destroy_id+0x127 kernel:sdp_shutdown_task+0x3ae kernel:taskqueue_run_locked+0x10b kernel:taskqueue_thread_loop+0x87 kernel:fork_exit+0x83 Link: https://lore.kernel.org/r/4346449a7cdacc7a4eedc89cb1b42d8434ec9814.1622629024.git.leonro@nvidia.com Signed-off-by: Mark Zhang <markzhang@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
Now that all the free paths are explicit, cm_free_msg() will only be called for msgs allocated with cm_alloc_msg(), so we can assume the context is set. Place it after the allocation function it is paired with for clarity. Also remove a bogus NULL assignment in one place after a cancel. This does nothing other than prevent completions from becoming events, but changing the state already did that. Link: https://lore.kernel.org/r/082fd3552be0d1a2c19b1c4cefb5f3f0e3e68e82.1622629024.git.leonro@nvidia.com Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
There are now three destroy functions for the cm_msg, and all places except the general send completion handler use the correct function. Fix cm_send_handler() to detect which kind of message is being completed and destroy it using the correct function with the correct locking. Link: https://lore.kernel.org/r/62a507195b8db85bb11228d0c6e7fa944204bf12.1622629024.git.leonro@nvidia.com Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
This is being used with two quite different flows: one attaches the message to the priv and the other does not. Ensure the message attach is consistently done under the spinlock and ensure that the free on error always detaches the message from the cm_id_priv, also always under lock. This makes read/write to the cm_id_priv->msg consistently locked and consistently NULL'd when the message is freed, even in all error paths. Link: https://lore.kernel.org/r/f692b8c89eecb34fd82244f317e478bea6c97688.1622629024.git.leonro@nvidia.com Signed-off-by: Mark Zhang <markzhang@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
This is not a functional change, but it helps make the purpose of all the cm_free_msg() calls clearer. In this case a response msg has a NULL context[0], and is never placed in cm_id_priv->msg. Link: https://lore.kernel.org/r/5cd53163be7df0a94f0d4ef7294546bc674fb74a.1622629024.git.leonro@nvidia.com Signed-off-by: Mark Zhang <markzhang@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
The mlx4 and mlx5 drivers implement the WQ input checks differently. Instead of duplicating the mlx4 logic in mlx5, prepare the input in a central place. The mlx5 implementation didn't check the validity of the state input. It is not a real bug because the FW checks it, but it is still worth fixing. Fixes: f213c052 ("IB/uverbs: Add WQ support") Link: https://lore.kernel.org/r/ac41ad6a81b095b1a8ad453dcf62cf8d3c5da779.1621413310.git.leonro@nvidia.com Reported-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
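An illustrative sketch only (not the exact uverbs code): range-check the state attributes once in the shared layer, against the ib_wq_state values, before calling the driver's modify hook.

    if ((attr_mask & IB_WQ_CUR_STATE) && wq_attr.curr_wq_state > IB_WQS_ERR)
        return -EINVAL;

    if ((attr_mask & IB_WQ_STATE) && wq_attr.wq_state > IB_WQS_ERR)
        return -EINVAL;

    ret = wq->device->ops.modify_wq(wq, &wq_attr, attr_mask, &udata);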
-
- 31 May, 2021 1 commit
-
-
Jack Wang authored
drivers/infiniband/ulp/rtrs/rtrs-clt.c:1786:19: warning: result of comparison of constant 'MAX_SESS_QUEUE_DEPTH' (65536) with expression of type 'u16' (aka 'unsigned short') is always false [-Wtautological-constant-out-of-range-compare] To fix it, limit MAX_SESS_QUEUE_DEPTH to the u16 max, which is 65535, and drop the check in rtrs-clt, since a u16 value can never exceed it. Link: https://lore.kernel.org/r/20210531122835.58329-1-jinpu.wang@ionos.com Signed-off-by: Jack Wang <jinpu.wang@ionos.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
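The shape of the fix, sketched (the only concrete value is taken from the warning text): cap the constant at what a u16 can hold, which makes the rtrs-clt range check provably false and removable.

    /* was 65536, which a u16 queue depth can never reach */
    #define MAX_SESS_QUEUE_DEPTH 65535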
-
- 29 May, 2021 2 commits
-
-
Shiraz Saleem authored
Convert i40e to use the auxiliary bus infrastructure to export the RDMA functionality of the device to the RDMA driver. Register an i40e client auxiliary RDMA device on the auxiliary bus for each PCIe device function, for the new auxiliary rdma driver (irdma) to attach to. The global i40e_register_client and i40e_unregister_client symbols will be obsoleted once irdma replaces i40iw in the kernel for the X722 device. Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
-
Shiraz Saleem authored
Add the definitions to the i40e client header file in preparation to convert i40e to use the new auxiliary bus infrastructure. This header is shared between the 'i40e' Intel networking driver providing RDMA support and the 'irdma' driver. Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
-