- 02 Jan, 2017 40 commits
-
Santosh Shilimkar authored
When an application sends an RDS RDMA composite message, consisting of an RDMA transfer followed by a non-RDMA payload, it expects to be notified *only* when the full message gets delivered. RDS RDMA notification doesn't behave this way, though. Thanks to Venkat for debugging and root-causing the issue, where only the first part of the message (RDMA) was successfully delivered but delivery of the remaining payload failed. In that case, the application should not be notified with a false positive of message delivery success. Fix this by making sure the user gets notified only after the full message is delivered. Reviewed-by: Venkat Venkatsubra <venkat.x.venkatsubra@oracle.com> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
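A minimal user-space sketch of the notification flow this fixes, using the uapi <linux/rds.h> (socket setup, the payload iovec and error handling are omitted; treat the exact cmsg plumbing as an assumption to verify against the header):

    /* Ask for a completion notification on a composite RDMA message. */
    struct rds_rdma_args *args;
    char ctl[CMSG_SPACE(sizeof(*args))];
    struct msghdr msg = { .msg_control = ctl, .msg_controllen = sizeof(ctl) };
    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

    cmsg->cmsg_level = SOL_RDS;
    cmsg->cmsg_type  = RDS_CMSG_RDMA_ARGS;
    cmsg->cmsg_len   = CMSG_LEN(sizeof(*args));
    args = (struct rds_rdma_args *)CMSG_DATA(cmsg);
    args->flags      = RDS_RDMA_READWRITE | RDS_RDMA_NOTIFY_ME;
    args->user_token = 42;     /* echoed back in the status notification */
    /* msg.msg_iov would carry the trailing non-RDMA payload */
    sendmsg(fd, &msg, 0);

A later recvmsg() carries the RDMA status control message (struct rds_rdma_notify: user_token plus status); with this fix it is queued only once the trailing payload has been delivered as well.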
-
Santosh Shilimkar authored
Based on the available device vectors, allocate CQs accordingly to get a better spread of completion vectors, which helps performance a great deal. Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
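A kernel-side sketch of the spreading idea (the round-robin policy over num_comp_vectors is an assumption, not necessarily the exact heuristic used):

    /* Spread connection CQs across the device's completion vectors
     * instead of piling them all on vector 0. */
    struct ib_cq_init_attr attr = {
            .cqe         = cq_entries,
            .comp_vector = conn_index % ib_dev->num_comp_vectors,
    };

    cq = ib_create_cq(ib_dev, comp_handler, event_handler, conn, &attr);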
-
Santosh Shilimkar authored
Track the IB receive cache total, incoming, and frag allocations. Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
-
Santosh Shilimkar authored
Useful to know the active and passive endpoints in an RDS IB connection. Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
-
Santosh Shilimkar authored
In the absence of extension headers, the message log will keep flooding the console. Since the MRs can be cleaned up even without use_once, this isn't really an error case, so make it a debug message. Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
-
Santosh Shilimkar authored
MR invalidation in RDS is done in a background thread, not in the data path like registration. So break the dependency between them, which helps remove the performance bottleneck. Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
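In outline, the data path now only queues the MR and leaves the invalidation to the pool's flush worker (names as in net/rds/ib_rdma.c; simplified, details are an assumption):

    /* data path: park the MR on the drop list and return immediately */
    llist_add(&ibmr->llnode, &pool->drop_list);

    /* background: kick the worker that performs the actual invalidation */
    queue_delayed_work(rds_ib_mr_wq, &pool->flush_worker, 0);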
-
Santosh Shilimkar authored
The first message to a remote node should prompt a new connection even if it is an RDMA operation. For an RDMA operation, the MR mapping can fail because the connection is not yet up. Since connection establishment is asynchronous, make sure a map failure caused by the unavailable connection reaches the user as an appropriate error code. Before returning to the user, trigger the connection so that it's ready for the next retry. Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
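The shape of the fix, sketched (rds_conn_up() and rds_conn_connect_if_down() exist in net/rds; the exact errno returned is an assumption):

    /* MR mapping raced with connection establishment: kick an async
     * connect so the next retry finds the connection up, and surface
     * a retryable error to the caller. */
    if (!rds_conn_up(conn)) {
            rds_conn_connect_if_down(conn);
            return -EOPNOTSUPP;     /* assumed retryable map-failure errno */
    }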
-
Qing Huang authored
This prevents RDS from handling incoming RDMA packets before RDS has finished initializing its recv/send components. Signed-off-by: Qing Huang <qing.huang@oracle.com> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
-
Santosh Shilimkar authored
Fixes warning: Using plain integer as NULL pointer Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
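The sparse warning flags assignments like the first line below (struct name illustrative):

    struct rds_sock *rs = 0;     /* sparse: Using plain integer as NULL pointer */
    struct rds_sock *ok = NULL;  /* the clean form */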
-
Santosh Shilimkar authored
Transport retry is not very useful since it indicates packet loss in the fabric, so it's better to fail over fast rather than retry for longer. Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
-
Santosh Shilimkar authored
Also use pr_* for it. Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
-
Santosh Shilimkar authored
Fixes the below warnings:
warning: symbol 'rds_send_probe' was not declared. Should it be static?
warning: symbol 'rds_send_ping' was not declared. Should it be static?
warning: symbol 'rds_tcp_accept_one_path' was not declared. Should it be static?
warning: symbol 'rds_walk_conn_path_info' was not declared. Should it be static?
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
-
Santosh Shilimkar authored
It's useful to know the IP address when RDS fails to bind a connection. Thus, adding it to the error message. Orabug: 21894138 Reviewed-by: Wei Lin Guay <wei.lin.guay@oracle.com> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
-
David S. Miller authored
Saeed Mahameed says: ==================== Mellanox mlx5 core and ODP updates 2017-01-01

The following eleven patches mainly come from Artemy Kovalyov, who expanded mlx5 on-demand-paging (ODP) support. In addition there are three cleanup patches which don't change any functionality, but are needed to align the codebase prior to accepting other patches.

A memory region (MR) in IB can be huge, and the ODP (on-demand paging) technique allows the use of unpinned memory, which can be consumed and released on demand. This lets applications avoid pinning down the underlying physical pages of the address space, and saves them the need to track the validity of the mappings. Rather, the HCA requests the latest translations from the OS when pages are not present, and the OS invalidates translations which are no longer valid due to either non-present pages or mapping changes.

In the existing ODP implementation, applications still need to register memory buffers for communication, though registered memory regions need not have valid mappings at registration time.

This patch set performs the following steps to expand the current ODP implementation:

1. It refactors UMR to support large regions, by introducing a generic function to perform HCA translation table modifications. This function supports both atomic and process contexts and is not limited by the number of modified entries. It makes it possible to enable reallocated memory regions of arbitrary size, so MR cache buckets are added to support MRs of up to 16GB.

2. It changes the page fault event format and refactors the page fault logic, together with the addition of atomic support.

3. It prepares mlx5 core code to support implicit registration with simplified and relaxed semantics. Implicit ODP semantics allow applications to provide a special memory key that represents their complete address space. Thus all IO accesses referencing this key (with the proper access rights associated with the key) need not register any virtual address range.

Thanks, Artemy, Ilya and Leon

v1->v2:
- Don't use 'inline' in .c files
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
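For a sense of the user-visible API, explicit ODP registration through libibverbs looks roughly as follows, and the implicit variant (a single key covering the whole address space, which step 3 above prepares for) drops the per-range call entirely; availability is device-dependent:

    /* Explicit ODP: register a range without pinning its pages. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_ON_DEMAND);

    /* Implicit ODP: one key covering the entire address space. */
    struct ibv_mr *whole = ibv_reg_mr(pd, NULL, SIZE_MAX,
                                      IBV_ACCESS_LOCAL_WRITE |
                                      IBV_ACCESS_ON_DEMAND);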
-
Artemy Kovalyov authored
Add "type" field to mlx5_core MKEY struct. Check whether page fault happens on MKEY corresponding to MR. Signed-off-by: Artemy Kovalyov <artemyko@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Artemy Kovalyov authored
Handle ODP atomic operations. When the initiator of an RDMA atomic operation uses an ODP MR to provide the source data, handle the page fault properly. Signed-off-by: Artemy Kovalyov <artemyko@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Artemy Kovalyov authored
* Update the page fault event according to the latest specification.
* Separate the code paths for the page fault EQ, completion EQ and async EQ.
* Move the page fault handling work queue from an mlx5_ib static variable into the mlx5_core page fault EQ.
* Allocate memory to store ODP events dynamically as the events arrive; since this happens in atomic context, use a mempool.
* Make the mlx5_ib page fault handler run in process context.
Signed-off-by: Artemy Kovalyov <artemyko@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
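The atomic-context allocation pattern from the fourth bullet, sketched with the generic mempool API (struct and pool names are illustrative, not the driver's):

    struct pfault_evt {
            struct work_struct work;
            /* ... copied page fault event fields ... */
    };

    static mempool_t *pfault_pool;   /* mempool_create_kmalloc_pool(16,
                                      * sizeof(struct pfault_evt)) at init */

    /* EQ handler (atomic context): stash the event and defer the work */
    struct pfault_evt *evt = mempool_alloc(pfault_pool, GFP_ATOMIC);
    if (evt) {
            INIT_WORK(&evt->work, pfault_work_fn);  /* runs in process ctx,
                                                     * ends with mempool_free */
            queue_work(pfault_wq, &evt->work);
    }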
-
Artemy Kovalyov authored
Update the PAGE_FAULT_RESUME command layout. Three bit fields describing a page fault (rdma, rdma_write, req_res) gave 8 possible combinations, of which only a few were legal. They are now interpreted as a three-bit type field, where the former legal combinations turn into the corresponding types and the unused ones are added as new types. Signed-off-by: Artemy Kovalyov <artemyko@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Artemy Kovalyov authored
In this change we turn mlx5_ib_update_mtt() into a generic mlx5_ib_update_xlt() that performs HCA translation table modifications, supports both atomic and process contexts, and is not limited by the number of modified entries. Using this function, we increase the preallocated MRs up to 16GB. Signed-off-by: Artemy Kovalyov <artemyko@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
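For scale: with 4 KiB pages, a 16GB region spans 16 GiB / 4 KiB = 4,194,304 translation entries, which is why the update path must not be bounded by the number of entries modified per post.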
-
Artemy Kovalyov authored
Make use of extended UMR translation offset. Signed-off-by: Artemy Kovalyov <artemyko@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Artemy Kovalyov authored
* Update struct mlx5_wqe_umr_ctrl_seg.
* Currently the UMR send_flags target only certain use cases: enabling/disabling a cached MR, and modifying the XLT for ODP. Making the flags independent makes UMR more flexible, allowing arbitrary manipulations.
* Since different UMR formats have different entry sizes, a UMR request should receive the exact size of the translation table update instead of the number of entries. Rename the npages field to xlt_size in struct mlx5_umr_wr and update the relevant code accordingly.
* Add support for the length64 bit.
Signed-off-by: Artemy Kovalyov <artemyko@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Artemy Kovalyov authored
This patch adds the following items to the IFC file.
1. The MLX5_MKC_ACCESS_MODE_KSM enum value for creating KSM memory keys. KSM access mode is used when an indirect MKey is associated with fixed-size memory entries.
2. The null_mkey field, which is used to indicate non-present KLM/KSM entries; accessing such an entry causes the device to generate a page fault event.
3. struct mlx5_ifc_cmd_hca_cap_bits capability bits indicating that the related value/field is supported:
* fixed_buffer_size - MLX5_MKC_ACCESS_MODE_KSM
* umr_extended_translation_offset - translation_offset_42_16 in the UMR ctrl segment
* null_mkey - null_mkey in QUERY_SPECIAL_CONTEXTS
Signed-off-by: Artemy Kovalyov <artemyko@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Binoy Jayan authored
Clean up the following common code (which posts a list of work requests to the send queue of the specified QP) at various places, and add a helper function 'mlx5_ib_post_send_wait' that implements it:
- Initialize 'mlx5_ib_umr_context' on the stack
- Assign mlx5_umr_wr->wr.wr_cqe to umr_context.cqe
- Acquire the semaphore
- Call ib_post_send with a single ib_send_wr
- wait_for_completion()
- Check umr_context.status
- Release the semaphore
A sketch of the resulting helper follows this list.
Signed-off-by: Binoy Jayan <binoy.jayan@linaro.org> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
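Put together, the helper follows exactly the steps above (names mirror the mlx5 driver, but treat the precise signatures as assumptions):

    static int mlx5_ib_post_send_wait(struct mlx5_ib_dev *dev,
                                      struct mlx5_umr_wr *umrwr)
    {
            struct mlx5_ib_umr_context umr_context;  /* on the stack */
            struct ib_send_wr *bad;
            int err;

            mlx5_ib_init_umr_context(&umr_context);
            umrwr->wr.wr_cqe = &umr_context.cqe;     /* route the completion */

            down(&dev->umrc.sem);                    /* bound outstanding UMRs */
            err = ib_post_send(dev->umrc.qp, &umrwr->wr, &bad);
            if (!err) {
                    wait_for_completion(&umr_context.done);
                    if (umr_context.status != IB_WC_SUCCESS)
                            err = -EFAULT;           /* UMR completed in error */
            }
            up(&dev->umrc.sem);
            return err;
    }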
-
Leon Romanovsky authored
The order of features exposed by the private mlx5-abi.h file is: CQE zipping, packet pacing, multi-packet WQE. The internal order implemented in mlx5_ib_query_device() is: multi-packet WQE, CQE zipping, packet pacing. Such a difference hurts code readability, so sync them, with mlx5-abi.h (exposed to userspace) as the primary order. This commit doesn't change any functionality. Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Max Gurtovoy authored
Fix offset for reserved fields. Fixes: 7486216b ("{net,IB}/mlx5: mlx5_ifc updates") Fixes: b4ff3a36 ("net/mlx5: Use offset based reserved field names in the IFC header file") Fixes: 7d5e1423 ("net/mlx5: Update mlx5_ifc hardware features") Signed-off-by: Max Gurtovoy <maxg@mellanox.com> Reviewed-by: Artemy Kovalyov <artemyko@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Merge tag 'wireless-drivers-next-for-davem-2017-01-02' of git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/wireless-drivers-next

Kalle Valo says: ==================== wireless-drivers-next patches for 4.11

The most notable change here is the inclusion of airtime fairness scheduling in ath9k. It prevents slow clients from hogging all the airtime and unfairly slowing down faster clients. Otherwise, smaller changes and cleanup.

Major changes:

ath9k
* clean up eeprom endian handling
* add airtime fairness scheduling

ath10k
* fix issues for the new QCA9377 firmware version
* support dev_coredump() for firmware crash dumps
* enable channel 169 on the 5 GHz band
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Niklas Cassel authored
For core revision 3.x, Address-Aligned Beats is available in two registers. The DT property snps,aal was created for AAL in the DMA bus register, which is a read/write bit. The DT property snps,axi_all was created for AXI_AAL in the AXI bus mode register, which is a read-only bit that reflects the value of AAL in the DMA bus register. Since the value of snps,axi_all is never used in the driver, and since the property was created for a bit that is read-only, it should be safe to remove the property. Acked-by: Giuseppe Cavallaro <peppe.cavallaro@st.com> Signed-off-by: Niklas Cassel <niklas.cassel@axis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Yuval Mintz says: ==================== qed*: Driver updates

The more interesting changes in this series include:
- Restructuring of the qede files - qede_main.c has grown big, and this series splits it into 3 parts [patches #2 and #3].
- Some significant changes in the API through which the RSS indirection table gets configured [#8].
- Support for ndo_set_vf_trust() [#9], which regulates which VFs are allowed to use promisc/multi-promisc mode.

It also contains various minor changes to qed/qede, as well as non-functional changes [#1, #12] to complement other changes.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Mintz, Yuval authored
Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ram Amrani authored
If qedr isn't part of the kernel then don't allocate RDMA resources for it in qed. Signed-off-by: Ram Amrani <Ram.Amrani@cavium.com> Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Mintz, Yuval authored
Currently, multicast traffic wouldn't be routed internally to a listener; instead, it would only be sent to the network via the physical carrier. Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Mintz, Yuval authored
Trusted VFs would be allowed to receive promiscuous and multicast promiscuous data. Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
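This rides on the standard ndo_set_vf_trust() netdev hook (the handler name below is an assumption for this driver):

    /* in qede's struct net_device_ops */
    .ndo_set_vf_trust = qede_set_vf_trust,

An administrator then toggles it per VF with iproute2, e.g. "ip link set dev eth0 vf 0 trust on".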
-
Mintz, Yuval authored
A step toward making qede agnostic to the queue configuration in firmware/hardware - let the RSS indirection table use queue handles instead of actual queue indices. Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
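Conceptually the table now stores handles that qed resolves to firmware queue IDs at configuration time; a sketch (the helper name is an assumption; the table-size constant matches the driver's 128-entry indirection table):

    /* Fill the RSS indirection table with queue handles, not raw
     * hardware queue indices; qed translates the handles late. */
    for (i = 0; i < QED_RSS_IND_TABLE_SIZE; i++)
            rss->rss_ind_table[i] =
                    qede_get_rx_queue_handle(edev, i % num_rx_queues);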
-
Mintz, Yuval authored
Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Manish Chopra authored
When the driver receives a recognized encapsulated packet, it needs to set the skb->encapsulation field as well. Signed-off-by: Manish Chopra <Manish.Chopra@cavium.com> Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
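The fix boils down to marking the skb once the hardware has flagged a recognized tunnel, roughly:

    /* Rx path: hardware parsed and recognized an encapsulated frame */
    if (tunnel_recognized) {            /* illustrative condition */
            skb->encapsulation = 1;     /* inner headers are present/valid */
    }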
-
Mintz, Yuval authored
During the Rx flow, the driver allocates a replacement buffer each time it consumes an Rx buffer. Failing to do so, it would consume the currently processed buffer and re-post it on the ring; as a result, the Rx ring is always completely full [from the driver's POV]. We now allow the Rx ring to shorten by doing the re-allocations at the end of the NAPI run. The only limitation is that we still want to make sure that each time we reallocate, we'd still have sufficient elements in the Rx ring to guarantee that FW would be able to post additional data and trigger an interrupt. Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
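The deferred-refill pattern, in outline (all names illustrative; the real code also handles allocation-failure accounting):

    /* NAPI poll: consume Rx buffers without per-buffer reallocation */
    work_done = qede_process_rx_ring(fp, budget);

    /* Refill once per poll, keeping enough buffers posted that FW can
     * still place data and raise the next interrupt. */
    while (rx_buffers_posted(rxq) < rxq->num_rx_buffers)
            if (qede_alloc_rx_buffer(rxq))
                    break;  /* allocation failed; retry on the next poll */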
-
Mintz, Yuval authored
Today qede requests contexts that would suffice for 64 'whole' combined queues [192, meant for 64 rx, tx and xdp tx queues], but registers the netdev and limits the number of queues based on information received from qed. In turn, qed doesn't take contexts into account when informing qede how many queues it can support. This would lead to a configuration problem if the user tries to configure more than 64 combined queues on an interface [or more than 96 in case xdp isn't enabled]. Since we don't have a management firmware that actually provides so many interrupt lines to a single device, we're currently safe, but that's about to change soon. The new maximum is hence changed:
- For RoCE devices, the limit remains 64.
- For non-RoCE devices, the limit might be higher [depending on the actual configuration of the device].
qed would start enforcing that limit in both scenarios. Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
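The arithmetic behind the numbers above: each combined queue consumes one rx, one tx and one xdp tx context, so 64 × 3 = 192 contexts; without XDP each queue needs only two, so the same 192 contexts cover 96 combined queues.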
-
Mintz, Yuval authored
This takes the various filtering logic of the driver and moves them into their own dedicated file - qede_filter.c. Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Mintz, Yuval authored
This adds a new file qede_fp.c and relocates the datapath-related logic into it [from qede_main.c]. Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Mintz, Yuval authored
Since the submission of the qedr driver, there's inconsistency in the licensing of the various qed/qede files - some are GPLv2 and some are dual-license. Since qedr requires dual-license and it's dependent on both, we're updating the licensing of all qed/qede source files. Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-