- 31 Aug, 2020 1 commit
-
Weihang Li authored
The UDP source port number in RoCE v2 is used to create entropy for network routers (ECMP), load balancers and 802.3ad link aggregation switching that are not aware of RoCE IB headers. Now that the IB core provides an interface to get a hashed value for it, the hns driver no longer needs a fixed value in the QPC and UD WQE; the port number is set dynamically. For the QPC of RC QPs, the value is hashed from the flow_label if the user passes one in, or from the remote and local QPNs otherwise. For UD WQEs, it is set according to the flow_label or as a random value.
Link: https://lore.kernel.org/r/1598002289-8611-1-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
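The IB core interface referred to here folds a 20-bit flow label into the RoCE v2 dynamic UDP source port range (0xC000-0xFFFF). A minimal sketch of that hashing, approximating the rdma_calc_flow_label() and rdma_flow_label_to_udp_sport() helpers in include/rdma/ib_verbs.h (exact constants in the tree may differ):

    /* Derive a flow label from the local/remote QPNs when the user
     * did not supply one. */
    static inline u32 calc_flow_label(u32 lqpn, u32 rqpn)
    {
            u64 v = (u64)lqpn * rqpn;

            v ^= v >> 20;
            v ^= v >> 40;
            return (u32)(v & 0x000FFFFF);   /* 20-bit GRH flow label */
    }

    /* Fold the flow label into the dynamic UDP source port range. */
    static inline u16 flow_label_to_udp_sport(u32 fl)
    {
            u32 fl_low = fl & 0x03FFF, fl_high = fl & 0xFC000;

            fl_low ^= fl_high >> 14;
            return (u16)(fl_low | 0xC000);  /* ports 0xC000..0xFFFF */
    }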
-
- 27 Aug, 2020 19 commits
-
Bob Pearson authored
Fixed several minor checkpatch warnings in existing rxe source.
Link: https://lore.kernel.org/r/20200820224638.3212-3-rpearson@hpe.com
Signed-off-by: Bob Pearson <rpearson@hpe.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Lang Cheng authored
It should be considered an illegal operation if the ULP attempts to modify a QP from another state to the current hardware state. Otherwise, the ULP could modify some fields of the QPC at any time. For example, for a QP in the RTS state, modifying it from RTR to RTS can change the PSN, which is never what is expected.
Fixes: 9a443537 ("IB/hns: Add driver files for hns RoCE driver")
Link: https://lore.kernel.org/r/1598353674-24270-1-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
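One plausible shape for such a guard, as a hedged sketch rather than the literal hns code (variable names follow the usual modify_qp convention):

    /* cur_state is what the driver knows the hardware is in; new_state
     * comes from the ULP. A "transition" that lands on the current
     * state means the ULP's claimed source state was wrong; refuse it
     * (same-state RESET->RESET remains harmless). */
    if (cur_state == new_state && cur_state != IB_QPS_RESET) {
            ret = -EPERM;
            goto out;
    }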
-
Kamal Heib authored
Now that query_pkey() is no longer mandatory in the RDMA core, this callback can be removed from the usnic provider. The libfabric userspace never touches the pkey.
Link: https://lore.kernel.org/r/20200820125346.111902-1-kamalheib1@gmail.com
Signed-off-by: Kamal Heib <kamalheib1@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
Use cancel_work_sync() to ensure that the wq is not running and simply assign NULL to ctx->cm_id to indicate if the work ran or not. Delete the close_wq since flush_workqueue() is no longer needed.
Link: https://lore.kernel.org/r/20200818120526.702120-15-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
When a new connection is established the RDMA CM creates a new cm_id and passes it through to the event handler. However, inside the UCMA the new ID is not assigned a ucma_context until the user retrieves the event from a syscall. This creates a weird edge condition where a cm_id's context can continue to point at the listening_id that created it, and a number of additional edge conditions on event-list cleanup related to destroying half-created IDs. There is also a race condition in ucma_get_event() where cm_id->context is assigned without holding the handler_mutex. Simplify all of this by creating the ucma_context inside the event handler itself, eliminating the edge case of a half-created cm_id. All cm_ids can be uniformly destroyed via __destroy_id() or via the close_work.
Link: https://lore.kernel.org/r/20200818120526.702120-14-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
Since the backlog is now an atomic, the file->mut only protects the event_list and ctx_list. Narrow its scope to make that clear.
Link: https://lore.kernel.org/r/20200818120526.702120-13-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
There is no reason to grab the file->mut just to do this inc/dec work. Use an atomic.
Link: https://lore.kernel.org/r/20200818120526.702120-12-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
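A minimal sketch of the pattern (field and limit names are illustrative, not the literal ucma code):

    /* Admission check without file->mut: the atomic counter itself
     * is the synchronization. */
    if (atomic_add_return(1, &listen_ctx->backlog) > max_backlog) {
            atomic_dec(&listen_ctx->backlog);
            return -ENOMEM;
    }
    /* ... later, once userspace consumes the connection request: */
    atomic_dec(&listen_ctx->backlog);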
-
Jason Gunthorpe authored
All entry points to the rdma_cm from a ULP must be single threaded, even on this error unwind. Add the missing locking.
Fixes: 7c119107 ("RDMA/ucma: Put a lock around every call to the rdma_cm layer")
Link: https://lore.kernel.org/r/20200818120526.702120-11-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
This value is locked under the file->mut; ensure it is held whenever touching it. The case in ucma_migrate_id() is a race, while in ucma_free_uctx() it is already impossible for the write side to run; there the movement is just for clarity.
Fixes: 88314e4d ("RDMA/cma: add support for rdma_migrate_id()")
Link: https://lore.kernel.org/r/20200818120526.702120-10-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
ctx->file is changed under the file->mut lock by ucma_migrate_id(), which is impossible to lock correctly. Instead, change ctx->file under the handler_lock and ctx_table lock, and revise all places touching ctx->file to use this locking when reading it.
Link: https://lore.kernel.org/r/20200818120526.702120-9-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
The only reader of destroying is inside a handler under the handler_mutex, so directly use the handler_mutex when setting it instead of the larger file->mut. As the refcount could be zero here and the cm_id already freed, an additional refcount grab around the locking is required to touch the cm_id.
Link: https://lore.kernel.org/r/20200818120526.702120-8-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
In almost all cases rdma_accept() is called under the handler_mutex by ULPs from their handler callbacks. The one exception was ucma, which did not hold the handler_mutex. To improve the understandability of the locking scheme, obtain the mutex for ucma as well. This also improves ucma by allowing it to use the handler_mutex directly for some of its internal locking against the handler callbacks instead of the global file->mut lock. There does not seem to be a serious bug here, other than a DISCONNECT event being deliverable concurrently with accept succeeding.
Link: https://lore.kernel.org/r/20200818120526.702120-7-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
It is not really necessary to keep a linked list of mcs associated with each context when we can just scan the xarray to find the right things. This removes another overloading of file->mut by relying on the xarray locking for mc instead.
Link: https://lore.kernel.org/r/20200818120526.702120-6-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
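A sketch of the scan that replaces the per-context list (names are illustrative; the in-tree iteration is similar in spirit):

    struct ucma_multicast *mc;
    unsigned long index;

    /* Find every multicast owned by this ctx straight from the
     * xarray; the xarray's internal locking covers the walk. */
    xa_for_each(&multicast_table, index, mc) {
            if (mc->ctx != ctx)
                    continue;
            /* tear down mc */
    }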
-
Jason Gunthorpe authored
The store to ctx->cm_id was based on the idea that _ucma_find_context() would not return the ctx until it was fully set up. Without locking this doesn't work properly. Split things so that the xarray slot is allocated with NULL to reserve the ID, and only once everything is final are the cm_id set and the ctx stored. Along the way this shows that the error unwind in ucma_get_event() for a newly created ctx is wrong; fix it up.
Link: https://lore.kernel.org/r/20200818120526.702120-5-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
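The reserve-then-publish idiom, as a hedged sketch (table and field names are assumptions):

    /* Reserve an ID while the ctx is still half-built; readers that
     * find the slot see NULL and treat it as not-present. */
    if (xa_alloc(&ctx_table, &ctx->id, NULL, xa_limit_32b, GFP_KERNEL))
            goto err_free;
    /* ... finish initialization ... */

    /* Publish only when fully initialized. */
    ctx->cm_id = cm_id;
    xa_store(&ctx_table, ctx->id, ctx, GFP_KERNEL);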
-
Jason Gunthorpe authored
ucma_close() is open coding the tail end of ucma_destroy_id(); consolidate this duplicated code into a function.
Link: https://lore.kernel.org/r/20200818120526.702120-4-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
During the file_operations release function it is already not possible for write() to be running concurrently; remove the extra locking around the ctx_list.
Link: https://lore.kernel.org/r/20200818120526.702120-3-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
Both ucma_destroy_id() and ucma_close_id() (triggered from an event via a wq) can drive the refcount to zero. ucma_get_ctx() was wrongly assuming that the refcount can only go to zero from ucma_destroy_id(), which also removes it from the xarray. Use refcount_inc_not_zero() instead.
Link: https://lore.kernel.org/r/20200818120526.702120-2-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
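A sketch of the lookup with the race closed (close to, but hedged as not exactly, the ucma code):

    static struct ucma_context *ucma_get_ctx(struct ucma_file *file, int id)
    {
            struct ucma_context *ctx;

            xa_lock(&ctx_table);
            ctx = _ucma_find_context(id, file);
            if (!IS_ERR(ctx) && !refcount_inc_not_zero(&ctx->ref))
                    /* Found, but a concurrent destroy already hit zero. */
                    ctx = ERR_PTR(-ENXIO);
            xa_unlock(&ctx_table);
            return ctx;
    }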
-
Mark Zhang authored
When DCT QPs work in RoCE LAG mode:
1. DCT creation is allowed only when it is supported
2. The "port" of a DCT QP is assigned in a round-robin way
Link: https://lore.kernel.org/r/20200818115245.700581-3-leon@kernel.org
Signed-off-by: Mark Zhang <markz@mellanox.com>
Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
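A hedged sketch of the round-robin assignment (field names are assumptions, not the literal mlx5 code):

    /* Spread DCTs across the LAG member ports, 1-based. */
    qp->port = (atomic_inc_return(&dev->lag_port_rr) %
                dev->num_lag_ports) + 1;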
-
Mark Zhang authored
DCI QP supports tx_affinity as well.
Link: https://lore.kernel.org/r/20200818115245.700581-2-leon@kernel.org
Signed-off-by: Mark Zhang <markz@mellanox.com>
Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 24 Aug, 2020 12 commits
-
Chuck Lever authored
Surface the operation of MAD exchanges during connection establishment. Some samples:

    [root@klimt ~]# trace-cmd report -F ib_cma
    cpus=4
      kworker/0:4-123   [000]  60.677388: icm_send_rep:  local_id=1965336542 remote_id=1096195961 state=REQ_RCVD lap_state=LAP_UNINIT
    kworker/u8:11-391   [002]  60.678808: icm_send_req:  local_id=1982113758 remote_id=0 state=IDLE lap_state=LAP_UNINIT
      kworker/0:4-123   [000]  60.679652: icm_send_rtu:  local_id=1982113758 remote_id=1079418745 state=REP_RCVD lap_state=LAP_UNINIT
             nfsd-1954  [001]  60.691350: icm_send_rep:  local_id=1998890974 remote_id=1129750393 state=MRA_REQ_SENT lap_state=LAP_UNINIT
             nfsd-1954  [003]  62.017931: icm_send_drep: local_id=1998890974 remote_id=1129750393 state=TIMEWAIT lap_state=LAP_UNINIT

Link: https://lore.kernel.org/r/159767240197.2968.12048458026453596018.stgit@klimt.1015granger.net
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Chuck Lever authored
In the interest of converging on a common instrumentation infrastructure, modernize the pr_debug() call sites added by commit 119bf817 ("IB/cm: Add debug prints to ib_cm"). The new tracepoints appear in a new "ib_cma" subsystem. The conversion is somewhat mechanical. Someone more familiar with the semantics of the recorded information might suggest additional data capture. Some benefits include:
- Tracepoints enable "always on" reporting of these errors
- The error records are structured and compact
- Tracepoints provide hooks for eBPF scripts

Sample output:

    nfsd-1954  [003]  62.017901: icm_dreq_skipped: local_id=1998890974 remote_id=1129750393 state=DREQ_RCVD lap_state=LAP_UNINIT

Link: https://lore.kernel.org/r/159767239665.2968.10613294222688696646.stgit@klimt.1015granger.net
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
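For readers unfamiliar with the mechanism: such a conversion declares events with the kernel's TRACE_EVENT() macro. A self-contained illustrative sketch (the real ib_cma events record more fields and take different arguments):

    #include <linux/tracepoint.h>

    TRACE_EVENT(icm_dreq_skipped,
            TP_PROTO(u32 local_id, u32 remote_id),
            TP_ARGS(local_id, remote_id),
            TP_STRUCT__entry(
                    __field(u32, local_id)
                    __field(u32, remote_id)
            ),
            TP_fast_assign(
                    __entry->local_id = local_id;
                    __entry->remote_id = remote_id;
            ),
            TP_printk("local_id=%u remote_id=%u",
                      __entry->local_id, __entry->remote_id)
    );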
-
Chuck Lever authored
Refactor: Make it globally available in the utilities header.
Link: https://lore.kernel.org/r/159767239131.2968.9520990257041764685.stgit@klimt.1015granger.net
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Gal Pressman authored
The alloc ucontext flow is always called with a valid udata, so there's no need to test whether it's NULL. While at it, the 'udata->outlen' check is removed as well: we copy the minimum of the response size and outlen, so with a zero outlen zero bytes are copied.
Link: https://lore.kernel.org/r/20200818110835.54299-1-galpress@amazon.com
Reviewed-by: Firas JahJah <firasj@amazon.com>
Reviewed-by: Yossi Leybovich <sleybo@amazon.com>
Signed-off-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
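The copy pattern that makes the outlen check redundant, sketched with an illustrative response struct named resp:

    /* Copies at most outlen bytes; outlen == 0 copies nothing, so no
     * separate zero check is needed. */
    err = ib_copy_to_udata(udata, &resp,
                           min(sizeof(resp), udata->outlen));
    if (err)
            goto err_out;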
-
Kamal Heib authored
Fix the kernel-doc documentation by matching the function definitions with their documentation.
Link: https://lore.kernel.org/r/20200820123512.105193-1-kamalheib1@gmail.com
Signed-off-by: Kamal Heib <kamalheib1@gmail.com>
Acked-by: Adit Ranadive <aditr@vmware.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Mohammad Heib authored
Creating an rxe device on top of a vlan interface will create a non-functional device that has an empty gids table and can't be used for rdma cm communication. This is caused by the logic in enum_all_gids_of_dev_cb()/is_eth_port_of_netdev(), which only considers networks connected to "upper devices" of the configured network device, resulting in an empty set of gids for a vlan interface; attempts to connect via this rdma device then fail in cm_init_av_for_response because no gids can be resolved. Apparently, this behavior was implemented to fit the HW-RoCE devices that create a RoCE device per port; therefore rxe must behave the same way and create a device per real device only. In order to communicate via a vlan interface, the user must use the gid index of the vlan address instead of creating an rxe over the vlan.
Link: https://lore.kernel.org/r/20200811150415.3693-1-goody698@gmail.com
Signed-off-by: Mohammad Heib <goody698@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Håkon Bugge authored
When scheduling delayed work to clean up the cache, if the entry has already been scheduled for deletion, adjust the delay instead.
Fixes: 3cf69cc8 ("IB/mlx4: Add CM paravirtualization")
Link: https://lore.kernel.org/r/20200803061941.1139994-7-haakon.bugge@oracle.com
Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
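A hedged sketch of the adjustment (the timeout constant is a hypothetical name):

    /* Entry already queued for deletion: push the eviction out
     * instead of queueing a second work item. */
    mod_delayed_work(system_wq, &ent->timeout,
                     msecs_to_jiffies(CM_CLEANUP_CACHE_TIMEOUT_MS));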
-
Håkon Bugge authored
A CM REJ packet whose reason is timeout is a special beast in the sense that it carries neither a Remote Communication ID nor a Remote Port GID. Using CX-3 virtual functions, either from a bare-metal machine or pass-through from a VM, MAD packets are proxied through the PF driver. Since the VF drivers have separate name spaces for MAD Transaction IDs (TIDs), the PF driver has to re-map the TIDs and keep the bookkeeping in a cache. This proxying does not handle said REJ packets. If the active side abandons its connection attempt after having sent a REQ, it will send a REJ with the reason being timeout. This can be provoked by a simple user-verbs program, which ends up doing:

    rdma_connect(cm_id, &conn_param);
    rdma_destroy_id(cm_id);

using the async librdmacm API. With dynamic debug prints enabled in the mlx4_ib driver, we then see:

    mlx4_ib_demux_cm_handler: Couldn't find an entry for pv_cm_id 0x0, attr_id 0x12

The solution is to introduce a radix-tree. When a REQ packet is received and handled in mlx4_ib_demux_cm_handler(), we know the connecting peer's para-virtual cm_id and the destination slave, so we insert an entry into the tree with that information. We also schedule work to remove this entry from the tree and free it, in order to avoid a memory leak. When a REJ packet with reason timeout is received, we can look up the slave in the tree and deliver the packet to the correct slave. When a duplicate REQ packet is received, the entry is already in the tree; in this case, we adjust the delayed work to avoid evicting the entry prematurely. When cleaning up, we simply traverse the tree and modify any delayed work to use a zero delay. A subsequent flush of the system_wq ensures all entries are wiped out.
Fixes: 3cf69cc8 ("IB/mlx4: Add CM paravirtualization")
Link: https://lore.kernel.org/r/20200803061941.1139994-6-haakon.bugge@oracle.com
Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
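A minimal sketch of the lookup structure described above, keyed by the peer's para-virtual cm_id (names and the xarray choice are illustrative, not the literal mlx4 code):

    struct rej_tmout_entry {
            int                  slave;
            u32                  rem_pv_cm_id;
            struct delayed_work  timeout;   /* evicts and frees the entry */
    };

    /* On REQ: remember which slave owns this remote pv cm_id. */
    err = xa_err(xa_store(&sriov->xa_rej_tmout,
                          (unsigned long)rem_pv_cm_id, ent, GFP_KERNEL));

    /* On REJ(timeout): recover the destination slave. */
    ent = xa_load(&sriov->xa_rej_tmout, (unsigned long)rem_pv_cm_id);
    slave = ent ? ent->slave : -ENOENT;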
-
Håkon Bugge authored
The mlx4 driver will proxy MAD packets through the PF driver. A VM or an instantiated VF will send its MAD packets to the PF driver using loop-back. The PF driver will be informed by an interrupt, but defers the handling and polling of CQEs to a worker thread running on an ordered work-queue. Consider the following scenario: the VMs, in short proximity in time (for example due to a network event), send many MAD packets to the PF driver. Let's say there are K VMs, each sending N packets. The interrupt from the first VM will start the worker thread, which will poll N CQEs. A common case here is that the PF driver multiplexes the packets received from the VMs out on the wire QP. But before the wire QP has returned a send CQE and its associated interrupt, the other K - 1 VMs have sent their N packets as well. The PF driver now has to multiplex K * N packets out on the wire QP, whose send-queue has a finite capacity. So, in this scenario, if K * N is larger than the send-queue capacity of the wire QP, MAD packets get dropped on the floor with this dynamic debug message:

    mlx4_ib_multiplex_mad: failed sending GSI to wire on behalf of slave 2 (-11)

and this despite the fact that the wire send-queue may have regained capacity, which the PF driver doesn't know because the wire send CQEs have not yet been polled. We can also have a similar scenario inbound, with a wire recv-queue larger than the tunnel QP's send-queue. If many remote peers send MAD packets to the very same VM, the tunnel send-queue destined to that VM can appear full to the PF driver. This starvation is fixed by introducing separate work queues for the wire QPs vs. the tunnel QPs. With this fix, using a dual-ported HCA with 8 VFs instantiated, we could run cmtime on each of the 18 interfaces towards a similarly configured peer, each cmtime instance with 800 QPs (14400 QPs in all), without a single CM packet getting lost.
Fixes: 3cf69cc8 ("IB/mlx4: Add CM paravirtualization")
Link: https://lore.kernel.org/r/20200803061941.1139994-5-haakon.bugge@oracle.com
Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
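A hedged sketch of the split (queue and field names are assumptions):

    /* One ordered queue per direction, so tunnel-QP work can no
     * longer starve wire-QP completion processing. */
    ctx->wq    = alloc_ordered_workqueue("mlx4_ib_tunnel%d", 0, port);
    ctx->wi_wq = alloc_ordered_workqueue("mlx4_ib_wire%d", 0, port);

    /* In the CQ interrupt handler, route to the matching queue: */
    queue_work(is_wire_qp ? ctx->wi_wq : ctx->wq, &pv_qp->work);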
-
Håkon Bugge authored
Using CX-3 in virtualized mode, MAD packets are proxied through the PF driver. The feed is N tunnel QPs, and what is received from the VFs is multiplexed out on the wire QP. Since this is a many-to-one scenario, it is better to have separate initialization parameters for the two usages. The number of wire and tunnel bufs is raised to 2K and 512, respectively. With this set of parameters, a system consisting of eight physical servers, each with eight VMs, and 14 I/O servers (BM) can run switch fail-over without seeing:

    mlx4_ib_demux_mad: failed sending GSI to slave 3 via tunnel qp (-11)

or

    mlx4_ib_multiplex_mad: failed sending GSI to wire on behalf of slave 2 (-11)

Fixes: 3cf69cc8 ("IB/mlx4: Add CM paravirtualization")
Link: https://lore.kernel.org/r/20200803061941.1139994-4-haakon.bugge@oracle.com
Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
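The sizing split, sketched with illustrative macro names (the values come from the commit message above):

    #define NUM_WIRE_BUFS    2048   /* many-to-one: the wire QP */
    #define NUM_TUNNEL_BUFS   512   /* per tunnel QP */

    nmbr_bufs = is_tunnel_qp ? NUM_TUNNEL_BUFS : NUM_WIRE_BUFS;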
-
Håkon Bugge authored
Using CX-3 in virtualized mode, MAD packets are proxied through the PF driver. However, the handling lacks support for the MRA (Message Receipt Acknowledgment) packet. With dynamic debug enabled, we see tons of:

    mlx4_ib_multiplex_cm_handler: id{slave: 7, sl_cm_id: 0x8fcb45a0} is NULL! attr_id: 0x11

Fixes: 3cf69cc8 ("IB/mlx4: Add CM paravirtualization")
Link: https://lore.kernel.org/r/20200803061941.1139994-3-haakon.bugge@oracle.com
Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Håkon Bugge authored
Add a missing check for success after the call to mlx4_ib_send_to_wire() in mlx4_ib_multiplex_mad(). Amend the existing pr_debug() in mlx4_ib_multiplex_cm_handler() and mlx4_ib_demux_cm_handler() to include attr_id on a lookup failure. Remove two noisy pr_debug() calls in mad.c.
Fixes: 3cf69cc8 ("IB/mlx4: Add CM paravirtualization")
Link: https://lore.kernel.org/r/20200803061941.1139994-2-haakon.bugge@oracle.com
Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 18 Aug, 2020 8 commits
-
Jason Gunthorpe authored
Previous commits removed all references to the /sys/class/infiniband_cm/ directory represented by the cm_class symbol. Remove the directory and cm_class.
Fixes: a1a8e4a8 ("rdma: Delete the ib_ucm module")
Link: https://lore.kernel.org/r/0-v1-90096a98c476+205-remove_cm_leftovers_jgg@nvidia.com
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Max Gurtovoy authored
The isert_post_recv() function prints an error in case of failure, so there is no need for the callers to add another print.
Link: https://lore.kernel.org/r/20200805121231.166162-2-maxg@mellanox.com
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Acked-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Maor Gottlieb authored
In order to allow sniffing when the RDMA device is in switchdev mode, don't set the source port when creating the sniffer rule.
Link: https://lore.kernel.org/r/20200803060214.15328-1-leon@kernel.org
Signed-off-by: Maor Gottlieb <maorg@mellanox.com>
Reviewed-by: Mark Bloch <markb@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Mark Zhang authored
Support 56, 25, 100, 200 and 50Gbps IB rates in the mlx5 driver.
Link: https://lore.kernel.org/r/20200802081712.1993490-1-leon@kernel.org
Signed-off-by: Mark Zhang <markz@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Gal Pressman authored
This patch introduces the ability to configure SRD QPs with the RNR retry parameter when issuing a modify QP command. In addition, a capability bit was added to report support to the userspace library.
Link: https://lore.kernel.org/r/20200731060420.17053-5-galpress@amazon.com
Reviewed-by: Firas JahJah <firasj@amazon.com>
Reviewed-by: Yossi Leybovich <sleybo@amazon.com>
Signed-off-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
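A hedged sketch of the two pieces (names follow the EFA style but are assumptions here, not the literal driver code):

    /* In modify QP: forward the verbs RNR retry count to the device. */
    if (qp_attr_mask & IB_QP_RNR_RETRY)
            params.rnr_retry = qp_attr->rnr_retry;

    /* In query device: advertise the support bit to userspace. */
    resp.device_caps |= EFA_QUERY_DEVICE_CAPS_RNR_RETRY;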
-
Gal Pressman authored
This precursory patch adds the SRD QP type state machine, which is currently identical to that of the UD QP type. A following patch is going to change the SRD QP state machine to support RNR retry modifications.
Link: https://lore.kernel.org/r/20200731060420.17053-4-galpress@amazon.com
Reviewed-by: Firas JahJah <firasj@amazon.com>
Reviewed-by: Yossi Leybovich <sleybo@amazon.com>
Signed-off-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Gal Pressman authored
The modify QP bitmask was not consistent with other bitmasks used in the device interface. Remove the bitmask enum and allow usage with EFA_GET/SET.
Link: https://lore.kernel.org/r/20200731060420.17053-3-galpress@amazon.com
Reviewed-by: Firas JahJah <firasj@amazon.com>
Reviewed-by: Yossi Leybovich <sleybo@amazon.com>
Signed-off-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Gal Pressman authored
Instead of adding a new function for each capability added, introduce a generic helper to query device capabilities.
Link: https://lore.kernel.org/r/20200731060420.17053-2-galpress@amazon.com
Reviewed-by: Firas JahJah <firasj@amazon.com>
Reviewed-by: Yossi Leybovich <sleybo@amazon.com>
Signed-off-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
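A hedged sketch of what such a helper can look like (macro and field names are assumptions):

    /* Test a device capability bit reported by the admin queue. */
    #define EFA_DEV_CAP(dev, cap) \
            ((dev)->dev_attr.device_caps & \
             EFA_ADMIN_FEATURE_DEVICE_ATTR_DESC_##cap##_MASK)

    /* Usage: */
    if (EFA_DEV_CAP(dev, RNR_RETRY))
            resp.device_caps |= EFA_QUERY_DEVICE_CAPS_RNR_RETRY;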
-