- 29 Sep, 2020 2 commits
-
-
Leon Romanovsky authored
Remove duplication of mlx5_ib_qp and mlx5_ib_gsi_qp fields. This change restores the memory footprint of the mlx5_ib QP to what it was before the GSI QP was embedded. Link: https://lore.kernel.org/r/20200926102450.2966017-3-leon@kernel.org Reviewed-by: Maor Gottlieb <maorg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
The GSI QPs have a different create flow from the regular QPs, but this is not really needed. Update the code to use mlx5_ib_qp as the storage class for everything outside of the GSI calls. Link: https://lore.kernel.org/r/20200926102450.2966017-2-leon@kernel.org Reviewed-by: Maor Gottlieb <maorg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 25 Sep, 2020 1 commit
-
-
Liu Shixin authored
sizeof() when applied to a pointer typed expression should give the size of the pointed data, even if the data is a pointer. Fixes: e1f24a79 ("IB/mlx5: Support congestion related counters") Link: https://lore.kernel.org/r/20200917081354.2083293-1-liushixin2@huawei.com Signed-off-by: Liu Shixin <liushixin2@huawei.com> Acked-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
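A minimal userspace sketch of the pitfall this class of fix addresses; the struct and names are illustrative, not the driver's code:

  #include <stdio.h>
  #include <stdlib.h>

  struct counter_desc { const char *name; };

  int main(void)
  {
          struct counter_desc *descs;
          size_t num = 16;

          /* Correct: sizeof(*descs) is the element size. The slip
           * sizeof(descs) also compiles, but yields the size of a
           * pointer, which is what the static checker flags. */
          descs = calloc(num, sizeof(*descs));
          if (!descs)
                  return 1;

          printf("element size %zu vs pointer size %zu\n",
                 sizeof(*descs), sizeof(descs));
          free(descs);
          return 0;
  }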
-
- 24 Sep, 2020 13 commits
-
-
Weihang Li authored
HIP08 supports RC inline only up to a size of 32 bytes, and all data is put into the SQ WQE. For HIP09, this capability is extended to 1024 bytes; if the data is longer than 32 bytes, it is filled into the extended SGE space. Link: https://lore.kernel.org/r/1599744069-9968-1-git-send-email-liweihang@huawei.com Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Weihang Li authored
The sq_sig_type field should be filled in when querying a QP, otherwise users may get a wrong value. Fixes: 926a01dc ("RDMA/hns: Add QP operations support for hip08 SoC") Link: https://lore.kernel.org/r/1600509802-44382-9-git-send-email-liweihang@huawei.com Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Weihang Li authored
The hardware adds the AckReq flag to the BTH header according to the value of ack_req_freq, in order to request an ACK from the responder for packets carrying this flag. ack_req_freq should be greater than or equal to lp_pktn_ini instead of being a fixed value. Fixes: 7b9bd73e ("RDMA/hns: Fix wrong assignment of lp_pktn_ini in QPC") Link: https://lore.kernel.org/r/1600509802-44382-8-git-send-email-liweihang@huawei.com Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Wenpeng Liang authored
The rnr_retry value returned to the user is not correct; it should be taken from a different field in the QPC. Fixes: bfe86035 ("RDMA/hns: Fix cast from or to restricted __le32 for driver") Link: https://lore.kernel.org/r/1600509802-44382-7-git-send-email-liweihang@huawei.com Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jiaran Zhang authored
calc_pg_sz() may hit an arithmetic overflow if PAGE_SIZE is 64 KB and hop_num is 2, because all variables involved in the calculation are defined as int. So change the types of bt_chunk_size, buf_chunk_size and obj_per_chunk_default to u64. Fixes: ba6bb7e9 ("RDMA/hns: Add interfaces to get pf capabilities from firmware") Link: https://lore.kernel.org/r/1600509802-44382-6-git-send-email-liweihang@huawei.com Signed-off-by: Jiaran Zhang <zhangjiaran@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
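A standalone demonstration of the overflow class being fixed, using illustrative numbers rather than the hns firmware values:

  #include <stdio.h>

  int main(void)
  {
          unsigned int page_size = 64 * 1024;   /* 64 KB */
          unsigned int per_hop = page_size / 8; /* 8-byte addresses per hop */

          /* Two hops: per_hop * per_hop objects, each chunk page_size
           * bytes. In 32-bit arithmetic the product (2^42) wraps to 0. */
          unsigned long long wrong = per_hop * per_hop * page_size;
          unsigned long long right =
                  (unsigned long long)per_hop * per_hop * page_size;

          printf("32-bit math: %llu\n", wrong); /* 0 */
          printf("64-bit math: %llu\n", right); /* 4398046511104 */
          return 0;
  }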
-
Jiaran Zhang authored
According to the RoCE v1 specification, the sl (service level) values 0-7 are mapped directly to priorities 0-7 respectively, and sl 8-15 are reserved. The driver should verify whether the value of sl is larger than 7; if so, an error should be returned. Fixes: 926a01dc ("RDMA/hns: Add QP operations support for hip08 SoC") Link: https://lore.kernel.org/r/1600509802-44382-5-git-send-email-liweihang@huawei.com Signed-off-by: Jiaran Zhang <zhangjiaran@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
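The shape of the validation being added, as a hedged sketch; the constant and function names are illustrative, not the hns driver's:

  #include <linux/errno.h>
  #include <linux/types.h>

  #define DEMO_MAX_SL 7   /* RoCE v1: SL 0-7 map to priorities, 8-15 reserved */

  static int demo_check_sl(u8 sl)
  {
          if (sl > DEMO_MAX_SL)
                  return -EINVAL; /* reserved SL requested */
          return 0;
  }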
-
Lang Cheng authored
Change "initialze" to "initialize". Fixes: 8f3e9f3e ("IB/hns: Add code for refreshing CQ CI using TPTR") Link: https://lore.kernel.org/r/1600509802-44382-4-git-send-email-liweihang@huawei.comSigned-off-by: Lang Cheng <chenglang@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Yangyang Li authored
HIP08 doesn't support modifying the maximum number of outstanding WRs in an SRQ. However, the driver does not return a failure message, so users may mistakenly think that the resize was executed successfully. The driver therefore needs to intercept this operation. Fixes: ffb1308b ("RDMA/hns: Move SRQ code to the reasonable place") Link: https://lore.kernel.org/r/1600509802-44382-3-git-send-email-liweihang@huawei.com Signed-off-by: Yangyang Li <liyangyang20@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Weihang Li authored
According to the IB specifications, the verbs should return an immediate error when the user sets an unsupported opcode. Furthermore, refactor the opcode handling in the post_send path to make the differences between opcodes clearer. Link: https://lore.kernel.org/r/1600509802-44382-2-git-send-email-liweihang@huawei.com Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Yangyang Li authored
For HIP09, the size of the SCCC (Soft Congestion Control Context) is increased from 32 bytes to 64 bytes, and the hardware gets the SCCC configuration from the driver instead of using a fixed value. Link: https://lore.kernel.org/r/1600245806-56321-5-git-send-email-liweihang@huawei.com Signed-off-by: Yangyang Li <liyangyang20@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Wenpeng Liang authored
The new version of the RoCEE supports QPCs of 256B or 512B, so that HIP09 can support new congestion control algorithms by using the larger QPC size. Link: https://lore.kernel.org/r/1600245806-56321-4-git-send-email-liweihang@huawei.com Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com> Signed-off-by: Yangyang Li <liyangyang20@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Wenpeng Liang authored
The new version of the RoCEE supports CQEs of 32B or 64B; bus performance can be improved by using the larger CQE size. Link: https://lore.kernel.org/r/1600245806-56321-3-git-send-email-liweihang@huawei.com Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Wenpeng Liang authored
The new version of the RoCEE supports CEQEs of 4B or 64B and AEQEs of 16B or 64B; bus performance can be improved by using the larger EQE sizes. Link: https://lore.kernel.org/r/1600245806-56321-2-git-send-email-liweihang@huawei.com Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 22 Sep, 2020 9 commits
-
-
Julia Lawall authored
sg_init_table() zeroes its first argument, so the allocation of that argument doesn't have to. The semantic patch that makes this change is as follows (http://coccinelle.lip6.fr/):

// <smpl>
@@
expression x,n,flags;
@@
x =
-	kcalloc
+	kmalloc_array
	(n,sizeof(*x),flags)
...
sg_init_table(x,n)
// </smpl>

Link: https://lore.kernel.org/r/1600601186-7420-6-git-send-email-Julia.Lawall@inria.fr Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr> Acked-by: Gal Pressman <galpress@amazon.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
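For illustration, a hedged before/after of the pattern; the function is hypothetical, not one of the converted call sites:

  #include <linux/scatterlist.h>
  #include <linux/slab.h>

  static struct scatterlist *demo_alloc_sg(unsigned int nents)
  {
          struct scatterlist *sg;

          /* Before: sg = kcalloc(nents, sizeof(*sg), GFP_KERNEL); */
          sg = kmalloc_array(nents, sizeof(*sg), GFP_KERNEL);
          if (!sg)
                  return NULL;

          /* sg_init_table() zeroes the whole table and marks its end,
           * so the zeroing kcalloc() did was redundant. */
          sg_init_table(sg, nents);
          return sg;
  }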
-
Sindhu, Devale authored
Occasionally an ib_write_bw crash is seen due to access of a pd object in i40iw_sc_qp_destroy after it is freed. Destroy QP is not synchronous in i40iw, so the iwqp object could be referencing a pd object that was freed by the IB core as a result of a successful return from i40iw_destroy_qp. Wait in i40iw_destroy_qp until all QP references are released, and destroy the QP and its associated resources before returning. Switch to the refcount API instead of the atomic API for lifetime management of the QP.

RIP: 0010:i40iw_sc_qp_destroy+0x4b/0x120 [i40iw]
[...]
RSP: 0018:ffffb4a7042e3ba8 EFLAGS: 00010002
RAX: 0000000000000000 RBX: 0000000000000001 RCX: dead000000000122
RDX: ffffb4a7042e3bac RSI: ffff8b7ef9b1e940 RDI: ffff8b7efbf09080
RBP: 0000000000000000 R08: 0000000000000001 R09: 0000000000000000
R10: 8080808080808080 R11: 0000000000000010 R12: ffff8b7efbf08050
R13: 0000000000000001 R14: ffff8b7f15042928 R15: ffff8b7ef9b1e940
FS:  0000000000000000(0000) GS:ffff8b7f2fa00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000400 CR3: 000000020d60a006 CR4: 00000000001606e0
Call Trace:
 i40iw_exec_cqp_cmd+0x4d3/0x5c0 [i40iw]
 ? try_to_wake_up+0x1ea/0x5d0
 ? __switch_to_asm+0x40/0x70
 i40iw_process_cqp_cmd+0x95/0xa0 [i40iw]
 i40iw_handle_cqp_op+0x42/0x1a0 [i40iw]
 ? cm_event_handler+0x13c/0x1f0 [iw_cm]
 i40iw_rem_ref+0xa0/0xf0 [i40iw]
 cm_work_handler+0x99c/0xd10 [iw_cm]
 process_one_work+0x1a1/0x360
 worker_thread+0x30/0x380
 ? process_one_work+0x360/0x360
 kthread+0x10c/0x130
 ? kthread_park+0x80/0x80
 ret_from_fork+0x35/0x40

Fixes: d3749841 ("i40iw: add files for iwarp interface") Link: https://lore.kernel.org/r/20200916131811.2077-1-shiraz.saleem@intel.com Reported-by: Kamal Heib <kheib@redhat.com> Signed-off-by: Sindhu, Devale <sindhu.devale@intel.com> Signed-off-by: Shiraz, Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
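A rough sketch of the destroy-side pattern described here, using the refcount API; the structure and helpers are illustrative stand-ins, not the i40iw code:

  #include <linux/refcount.h>
  #include <linux/completion.h>

  struct demo_qp {
          refcount_t refcnt;
          struct completion free_qp;
  };

  static void demo_qp_init(struct demo_qp *qp)
  {
          refcount_set(&qp->refcnt, 1);   /* creation reference */
          init_completion(&qp->free_qp);
  }

  static void demo_qp_put(struct demo_qp *qp)
  {
          if (refcount_dec_and_test(&qp->refcnt))
                  complete(&qp->free_qp);
  }

  static void demo_destroy_qp(struct demo_qp *qp)
  {
          demo_qp_put(qp);                    /* drop the creation reference */
          wait_for_completion(&qp->free_qp);  /* wait out async users, e.g. CM work */
          /* only now tear down HW state; the PD is still valid here */
  }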
-
Daniel Kranzdorf authored
Add separate stats types for send messages and RDMA read work requests. Link: https://lore.kernel.org/r/20200915141449.8428-3-galpress@amazon.com Signed-off-by: Daniel Kranzdorf <dkkranzd@amazon.com> Reviewed-by: Yossi Leybovich <sleybo@amazon.com> Signed-off-by: Gal Pressman <galpress@amazon.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Gal Pressman authored
The keep alive received counter is a software stat, keep it grouped with all other software stats. Since all stored stats are software stats, remove the efa_sw_stats struct and use efa_stats instead. Link: https://lore.kernel.org/r/20200915141449.8428-2-galpress@amazon.com Reviewed-by: Daniel Kranzdorf <dkkranzd@amazon.com> Reviewed-by: Yossi Leybovich <sleybo@amazon.com> Signed-off-by: Gal Pressman <galpress@amazon.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
Use rdma_restrack_set_name() and rdma_restrack_parent_name() instead of tricky uses of rdma_restrack_attach_task()/rdma_restrack_uadd(). This uniformly makes all restracks added using rdma_restrack_add(). Link: https://lore.kernel.org/r/20200922091106.2152715-6-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
Have a single rdma_restrack_add() that adds an entry; there is no reason to split the user/kernel paths here, as rdma_restrack_set_task() is responsible for this difference. This patch prepares the code for the future requirement of making restrack mandatory for managing IB objects. Link: https://lore.kernel.org/r/20200922091106.2152715-5-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
Refactor the restrack code to make sure the kref inside the restrack entry properly kref's the object in which it is embedded. This slight change is needed for future conversions of MR and QP, which are refcounted before the release and kfree. The ideal flow from the ib_core perspective is as follows:

 * Allocate an ib_* structure with rdma_zalloc_*.
 * Set everything that is known to ib_core on that newly created object.
 * Initialize the kref with restrack help.
 * Call the driver-specific allocation functions.
 * Insert into the restrack DB.
 ....
 * Return and release the restrack with restrack_put.

Largely this means a rdma_restrack_new() should be called near allocating the containing structure. Link: https://lore.kernel.org/r/20200922091106.2152715-4-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
The restrack is going to manage the memory of all IB objects and must be called before an object is created. The GSI QP in mlx5_ib is split between creating a dummy interface and the HW object beneath it, which was achieved by a double call to ib_create_qp(). In order to skip such a reentry, call the internal driver create_qp code directly. Link: https://lore.kernel.org/r/20200922091106.2152715-3-leon@kernel.org Reviewed-by: Mark Zhang <markz@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
Update the code to have a destroy pattern similar to other IB objects. This change creates an asymmetry in the rdma_id_private create flow in order to make sure that memory is managed by restrack. Link: https://lore.kernel.org/r/20200922091106.2152715-2-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 18 Sep, 2020 14 commits
-
-
Jason Gunthorpe authored
ucma_destroy_id() assumes that all things accessing the ctx will do so via the xarray. This assumption is violated only in the case the FD is being closed, when the ctx is reached via the ctx_list. Normally this is OK since ucma_destroy_id() cannot run concurrently with release(); however, when ucma_migrate_id() is involved this can be violated, as the close of the 2nd FD can run concurrently with destroy on the first:

	CPU0                              CPU1
	ucma_destroy_id(fda)
	                                  ucma_migrate_id(fda -> fdb)
	                                    ucma_get_ctx()
	xa_lock()
	 _ucma_find_context()
	 xa_erase()
	xa_unlock()
	                                    xa_lock()
	                                     ctx->file = new_file
	                                     list_move()
	                                    xa_unlock()
	                                    ucma_put_ctx()

	                                  ucma_close(fdb)
	                                   _destroy_id()
	                                   kfree(ctx)

	_destroy_id()
	 wait_for_completion()
	 // boom, ctx was freed

The ctx->file must be modified under the handler and xa_lock, and prior to modification the ID must be rechecked to ensure it is still reachable from cur_file, ie there is no parallel destroy or migrate. To make this work, remove the double locking and streamline the control flow. The double locking was obsoleted by the handler lock now directly preventing new uevents from being created, and the ctx_list cannot be read while holding fgets on both files. Removing the double locking also removes the need to check for the same file. Fixes: 88314e4d ("RDMA/cma: add support for rdma_migrate_id()") Link: https://lore.kernel.org/r/0-v1-05c5a4090305+3a872-ucma_syz_migrate_jgg@nvidia.com Reported-and-tested-by: syzbot+cc6fc752b3819e082d0c@syzkaller.appspotmail.com Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
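A self-contained sketch of the recheck-under-lock rule the fix enforces; every name below is an illustrative stand-in for the ucma internals, not the actual driver code:

  #include <linux/errno.h>
  #include <linux/list.h>
  #include <linux/xarray.h>

  struct demo_file { struct list_head ctx_list; };
  struct demo_ctx {
          unsigned long id;
          struct demo_file *file;
          struct list_head list;
  };

  static DEFINE_XARRAY(demo_table);

  static int demo_migrate(struct demo_ctx *ctx, struct demo_file *cur,
                          struct demo_file *new_file)
  {
          xa_lock(&demo_table);
          /* Recheck under the same lock destroy uses: the ID must still
           * be in the table and still owned by cur, i.e. no concurrent
           * destroy or migrate won the race. */
          if (xa_load(&demo_table, ctx->id) != ctx || ctx->file != cur) {
                  xa_unlock(&demo_table);
                  return -ENOENT;
          }
          ctx->file = new_file;
          list_move_tail(&ctx->list, &new_file->ctx_list);
          xa_unlock(&demo_table);
          return 0;
  }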
-
Jason Gunthorpe authored
Once a mkey is created it can be modified using UMR. This is desirable for performance reasons. However, different hardware has restrictions on what modifications are possible using UMR. Make sense of these checks:

 - mlx5_ib_can_reconfig_with_umr() returns true if the access flags can be altered. Most cases create MRs using 0 access flags (now made clear by consistent use of set_mkc_access_pd_addr_fields()), but the old logic here was tormented. Make it clear that this is checking if the current access_flags can be modified using UMR to different access_flags. It is always OK to use UMR to change flags that all HW supports.

 - mlx5_ib_can_load_pas_with_umr() returns true if UMR can be used to enable and update the PAS/XLT. Enabling requires updating the entity size, so UMR ends up completely disabled on this old hardware. Make it clear why it is disabled. FRWR, ODP and cache always require mlx5_ib_can_load_pas_with_umr().

 - mlx5_ib_pas_fits_in_mr() is used to tell if an existing MR can be resized to hold a new PAS list. This only works for cached MRs because we don't store the PAS list size in other cases.

To be very clear, arrange things so any pre-created MRs in the cache check the newly requested access_flags before allowing the MR to leave the cache. If UMR cannot set the required access_flags, the cache fails to create the MR. This in turn means relaxed ordering and atomic are now correctly blocked early for implicit ODP on older HW. Link: https://lore.kernel.org/r/20200914112653.345244-6-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
set_reg_wr() always fails if !umr_modify_entity_size_disabled because mlx5_ib_can_use_umr() always fails. Without set_reg_wr() IB_WR_REG_MR doesn't work and that means the device should not advertise IB_DEVICE_MEM_MGT_EXTENSIONS. Fixes: 841b07f9 ("IB/mlx5: Block MR WR if UMR is not possible") Link: https://lore.kernel.org/r/20200914112653.345244-5-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
Any mkey that is not enabled and assigned to userspace should have the PD set to a kernel owned PD. When cache entries are created for the first time the PDN is set to 0, which is probably a kernel PD, but be explicit. When an MR is registered using the hybrid reg_create with UMR xlt & enable, the disabled mkey is pointing at the user PD; keep it pointing at the kernel PD until a UMR enables it and sets the user PD. Fixes: 9ec4483a ("IB/mlx5: Move MRs to a kernel PD when freeing them to the MR cache") Link: https://lore.kernel.org/r/20200914112653.345244-4-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
reg_create() open codes this helper, use the shared code. Link: https://lore.kernel.org/r/20200914112653.345244-3-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
alloc_mr_from_cache() no longer returns EAGAIN, this is just dead code now. Fixes: aad719dc ("RDMA/mlx5: Allow MRs to be created in the cache synchronously") Link: https://lore.kernel.org/r/20200914112653.345244-2-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Lijun Ou authored
hip06 does not support IB_WR_LOCAL_INV, so the ps_opcode should be set to an invalid value instead of being left uninitialized. Fixes: 9a443537 ("IB/hns: Add driver files for hns RoCE driver") Fixes: a2f3d447 ("RDMA/hns: Avoid unncessary initialization") Link: https://lore.kernel.org/r/1600350615-115217-1-git-send-email-oulijun@huawei.com Signed-off-by: Lijun Ou <oulijun@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Keita Suzuki authored
When xa_insert() fails, the resources acquired in qedr_create_qp() should also be freed. However, the current implementation does not handle the error. Fix this by adding a new goto label that calls qedr_free_qp_resources(). Fixes: 1212767e ("qedr: Add wrapping generic structure for qpidr and adjust idr routines.") Link: https://lore.kernel.org/r/20200911125159.4577-1-keitasuzuki.park@sslab.ics.keio.ac.jp Signed-off-by: Keita Suzuki <keitasuzuki.park@sslab.ics.keio.ac.jp> Acked-by: Michal Kalderon <michal.kalderon@marvell.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
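The unwind pattern being added, sketched with illustrative helpers rather than qedr's real ones:

  #include <linux/errno.h>
  #include <linux/types.h>
  #include <linux/xarray.h>

  struct demo_qp { int dummy; };

  static int demo_alloc_qp_resources(struct demo_qp *qp) { return 0; }
  static void demo_free_qp_resources(struct demo_qp *qp) { }

  static int demo_create_qp(struct xarray *qps, struct demo_qp *qp, u32 id)
  {
          int rc = demo_alloc_qp_resources(qp);

          if (rc)
                  return rc;

          rc = xa_insert(qps, id, qp, GFP_KERNEL);
          if (rc)
                  goto err_free;  /* previously the resources leaked here */

          return 0;

  err_free:
          demo_free_qp_resources(qp);
          return rc;
  }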
-
Potnuri Bharat Teja authored
Receive-side delayed ACK mode is needed only for certain area networks/connections. Therefore disable it by default. Link: https://lore.kernel.org/r/20200909153707.2795-1-bharat@chelsio.com Signed-off-by: Potnuri Bharat Teja <bharat@chelsio.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
Leon Romanovsky says:

====================
This series from Alex extends the software steering interface to support devices with the extra capability "sw_owner_v2", which will replace the existing "sw_owner".
====================

Based on the mlx5-next branch at git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux due to dependencies.

* branch 'mlx5_sw_owner_v2':
  RDMA/mlx5: Expose TIR and QP ICM address for sw_owner_v2 devices
  RDMA/mlx5: Allow DM allocation for sw_owner_v2 enabled devices
  RDMA/mlx5: Add sw_owner_v2 bit capability
-
Jason Gunthorpe authored
Leon Romanovsky says:

====================
IBTA declares speed as 16 bits, but the kernel stores it in a u8. This series fixes the in-kernel declaration while keeping the external interface intact.
====================

Based on the mlx5-next branch at git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux due to dependencies.

* branch 'mlx5_active_speed':
  RDMA: Fix link active_speed size
  RDMA/mlx5: Delete duplicated mlx5_ptys_width enum
  net/mlx5: Refactor query port speed functions
-
Aharon Landau authored
According to the IB spec, the active_speed size should be u16 and not u8 as before. Change it to allow further extensions of the offered speeds. Link: https://lore.kernel.org/r/20200917090223.1018224-4-leon@kernel.org Signed-off-by: Aharon Landau <aharonl@mellanox.com> Reviewed-by: Michael Guralnik <michaelgur@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Alex Vesker authored
Expose the ICM address to access TIR and QP; this will allow sw_owner_v2 devices to steer traffic to TIRs and QPs, the same as is done with the sw_owner capability. Link: https://lore.kernel.org/r/20200903073857.1129166-4-leon@kernel.org Signed-off-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Alex Vesker authored
sw_owner_v2 will replace sw_owner for future devices; this means that if sw_owner_v2 is set, sw_owner should be ignored. DM allocation is required for sw_owner_v2 devices to function. Link: https://lore.kernel.org/r/20200903073857.1129166-3-leon@kernel.org Signed-off-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 17 Sep, 2020 1 commit
-
-
Alex Vesker authored
Add the sw_owner_v2 bit capability, which will be enabled for future devices, replacing the sw_owner bit. Signed-off-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
-