- 09 Mar, 2017 40 commits
-
-
Thomas Petazzoni authored
In preparation for the introduction of PPv2.2 support in the mvpp2 driver, this commit adds a hw_version field to the struct mvpp2, and uses the .data field of the DT match table to fill it in. Having the MVPP21 and MVPP22 definitions available will allow us to start adding the necessary conditional code to support PPv2.2. Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net>
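A minimal sketch of the kind of change described here, assuming of_device_get_match_data() is used to read the .data value; the enum values, compatible strings and probe details are illustrative, not taken from the patch:

    /* Sketch only: names and values below are assumptions. */
    #include <linux/of_device.h>
    #include <linux/platform_device.h>

    enum { MVPP21 = 21, MVPP22 = 22 };

    struct mvpp2 {
            /* ... existing fields ... */
            int hw_version;
    };

    static const struct of_device_id mvpp2_match[] = {
            { .compatible = "marvell,armada-375-pp2", .data = (void *)MVPP21 },
            { .compatible = "marvell,armada-7k-pp22", .data = (void *)MVPP22 },
            { }
    };

    static int mvpp2_probe(struct platform_device *pdev)
    {
            struct mvpp2 *priv;

            priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
            if (!priv)
                    return -ENOMEM;

            /* The DT match table's .data field selects PPv2.1 vs PPv2.2. */
            priv->hw_version = (unsigned long)of_device_get_match_data(&pdev->dev);

            /* ... rest of probe ... */
            return 0;
    }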
-
Thomas Petazzoni authored
The PPv2.2 IP has a different TX and RX descriptor layout compared to PPv2.1. In order to prepare for the introduction of PPv2.2 support in mvpp2, this commit adds accessors for the different fields of the TX and RX descriptors, and changes the code to use them. For now, the mvpp2_port argument passed to the accessors is not used, but it will be used in a follow-up to update the descriptor according to the version of the IP being used. Apart from the mechanical changes to use the newly introduced accessors, a few other changes, needed to use the accessors, are made:
- The mvpp2_txq_inc_put() function now takes a mvpp2_port as its first argument, as it is needed to use the accessors.
- Similarly, mvpp2_bm_cookie_build() gains a mvpp2_port first argument, for the same reason.
- In mvpp2_rx_error(), instead of accessing the RX descriptor in each case of the switch, we introduce a local variable to store the packet size.
- In mvpp2_tx_frag_process() and mvpp2_tx(), instead of accessing the packet size from the TX descriptor, we use the actual value available in the function, which is used to set the TX descriptor packet size a few lines before.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net>
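As a rough illustration of the accessor pattern, here is a sketch with assumed field names (the real PPv2.1 descriptor layout is not reproduced here):

    /* Sketch: field names are illustrative. The port argument is unused for
     * now; a follow-up can use it to pick the PPv2.1 or PPv2.2 layout. */
    #include <linux/types.h>

    struct mvpp2_tx_desc {
            u32 command;
            u8  packet_offset;
            u8  phys_txq;
            u16 data_size;
            u32 buf_dma_addr;
            u32 buf_cookie;
            u32 reserved[3];
    };

    static void mvpp2_txdesc_size_set(struct mvpp2_port *port,
                                      struct mvpp2_tx_desc *tx_desc,
                                      size_t size)
    {
            tx_desc->data_size = size;
    }

    static size_t mvpp2_txdesc_size_get(struct mvpp2_port *port,
                                        struct mvpp2_tx_desc *tx_desc)
    {
            return tx_desc->data_size;
    }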
-
Thomas Petazzoni authored
The RX descriptors of the PPv2 hardware store several pieces of information, amongst which:
- the DMA address of the buffer in which the data has been received
- a "cookie" field, left to the use of the driver, and not used by the hardware
In the current implementation, the "cookie" field is used to store the virtual address of the buffer, so that in the receive completion path, we can easily get the virtual address of the buffer that corresponds to a completed RX descriptor. On PPv2.1, used on 32-bit platforms, those two fields are 32-bit wide, which is enough to store a DMA address in the first field and a virtual address in the second field. On PPv2.2, used on 64-bit platforms, these two fields have been extended to 40 bits. While 40 bits is enough to store a DMA address (as long as the DMA mask is 40 bits or lower), it is not enough to store a virtual address. Therefore, the "cookie" field can no longer be used to store the virtual address of the buffer. However, as Russell King pointed out, the RX buffers are always allocated in the kernel linear mapping, so using phys_to_virt() on the physical address of the RX buffer is possible and correct. Therefore, this commit changes the driver to use the "cookie" field to store the physical address instead of the virtual address, and phys_to_virt() is used in the receive completion path to retrieve the virtual address from the physical address. It is important to realize that the DMA address and the physical address are two different things, which is why we store both in the RX descriptors. While those addresses may be identical in some situations, they remain two distinct concepts, and both addresses should be handled separately. Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net>
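A hedged sketch of the idea, assuming an RX descriptor with buf_dma_addr and buf_cookie fields (the names are illustrative):

    /* Sketch only: descriptor field names are assumptions. */
    #include <linux/types.h>
    #include <asm/io.h>             /* phys_to_virt() */

    struct mvpp2_rx_desc_sketch {
            u32 buf_dma_addr;       /* consumed by the hardware for DMA */
            u32 buf_cookie;         /* driver-owned, now holds the phys addr */
            /* ... */
    };

    static void *mvpp2_rx_buf_virt(struct mvpp2_rx_desc_sketch *rx_desc)
    {
            /* RX buffers live in the kernel linear mapping, so translating
             * the stored physical address back to a virtual one is valid. */
            return phys_to_virt(rx_desc->buf_cookie);
    }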
-
Thomas Petazzoni authored
The mvpp2_txq_pend_desc_num_get() function only selects a TX queue, and reads the number of pending descriptors. It is used in only one place, in mvpp2_txq_clean(), where the TX queue has already been selected by a write to MVPP2_TXQ_NUM_REG. Therefore, this function is useless, and the caller can simply read the value of the MVPP2_TXQ_PENDING_REG register instead. Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net>
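The simplified call site in mvpp2_txq_clean() might then read roughly as follows (a sketch; mvpp2_write()/mvpp2_read() are the driver's register accessors, and the masking is an assumption):

    /* Sketch: select the TX queue, then read the pending-descriptor count
     * directly instead of going through a helper. */
    mvpp2_write(port->priv, MVPP2_TXQ_NUM_REG, txq->id);
    pending = mvpp2_read(port->priv, MVPP2_TXQ_PENDING_REG) &
              MVPP2_TXQ_PENDING_MASK;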
-
Thomas Petazzoni authored
This register is no longer used since commit edc660fa ("net: mvpp2: replace TX coalescing interrupts with hrtimer"). Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Thomas Petazzoni authored
The "buffer header" functionality is a functionality used by the hardware to split an incoming packets over multiple BM buffers if they are not large enough. However, the mvpp2 driver guarantees that a pool of BM buffers has buffers with a size large enough to store MTU-sized packets. Therefore, this functionality is completely unused, and the code can be removed, and we should never get a descriptor with bit MVPP2_RXD_BUF_HDR set. Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Thomas Petazzoni authored
As indicated by Russell King, the mvpp2 driver currently uses "phys" or "phys_addr" in many places to store what really is a DMA address. This commit clarifies this by using "dma" or "dma_addr" where appropriate. This is especially important as we are going to introduce more changes where the distinction between physical address and DMA address will be key. Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Thomas Petazzoni authored
The Marvell PPv2 Device Tree binding was so far only used to describe the PPv2.1 network controller, used in the Marvell Armada 375. A new version of this IP block, PPv2.2, is used in the Marvell Armada 7K/8K processor. This commit extends the existing binding so that it can also be used to describe PPv2.2 hardware. Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Eric Dumazet says: ==================== mlx4: order-0 allocations and page recycling
As mentioned half a year ago, we had better switch the mlx4 driver to order-0 allocations and page recycling. This reduces the vulnerability surface thanks to better skb->truesize tracking, and provides better performance in most cases (33 Gbit for one TCP flow on my lab hosts). I will provide for linux-4.13 a patch on top of this series, trying to improve data locality as described in https://www.spinics.net/lists/netdev/msg422258.html
v2 provides a new ethtool -S counter (rx_alloc_pages) and code factorization, plus Tariq's fix.
v3 includes various fixes based on Tariq's tests and feedback from Saeed and Tariq.
v4 is rebased on net-next for inclusion in linux-4.12, as requested by Tariq.
Worth noting this patch series deletes ~250 lines of code ;)
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
We should keep a single way to build skbs, regardless of whether GRO is on or off. Note that I made sure to defer as much as possible the point where we need to pull data from the frame, so that any future prefetch() we might add is more effective. These skb attributes derive from the CQE or ring: ip_summed, csum, hash, vlan offload, hwtstamps, queue_mapping. As a bonus, this patch removes the mlx4 dependency on eth_get_headlen(), which is very often broken enough to give us headaches. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Avoiding a boolean test in the fast path is not worth duplicating the packet-building code for the GRO-on and GRO-off cases. If this proves to be a problem, we might later use a jump label. The next patch will remove this duplicated code and ease code review. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
We need to compute the frame virtual address at different points. Do it once. The following patch will use the new va address for validate_loopback(). Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Instead of fetching the DMA address from rx_desc->data[0].addr, prefer using frags[0].dma + frags[0].page_offset to avoid a potential cache line miss. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
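In code, the change amounts to something like the following sketch (the frag bookkeeping field names follow the commit text; the surrounding dma_sync call and priv fields are assumptions):

    /* Sketch: derive the buffer DMA address from the frag bookkeeping that
     * is already hot in cache, rather than re-reading the RX descriptor. */
    dma_addr_t dma = frags[0].dma + frags[0].page_offset;

    dma_sync_single_range_for_cpu(priv->ddev, frags[0].dma,
                                  frags[0].page_offset,
                                  priv->frag_info[0].frag_size,
                                  DMA_FROM_DEVICE);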
-
Eric Dumazet authored
This new counter tracks the number of pages that we allocated for one port.
lpaa24:~# ethtool -S eth0 | egrep 'rx_alloc_pages|rx_packets'
     rx_packets: 306755183
     rx_alloc_pages: 932897
Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Same technique as in some Intel drivers, for arches where PAGE_SIZE = 4096. In most cases, pages are reused because they were consumed before we could loop around the RX ring. This brings back performance, and even improves it: a single TCP flow reaches 30 Gbit on my hosts.
v2: added a full memset() in mlx4_en_free_frag(), as Tariq found it was needed if we switch to a large MTU, since priv->log_rx_info can change dynamically.
Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
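The recycling test itself boils down to a reference-count check; here is a minimal sketch of that idea (names are illustrative, and a real driver also checks additional conditions such as pfmemalloc pages):

    #include <linux/mm.h>

    /* Sketch: a page can be reused for the next RX descriptor only when the
     * driver holds the sole reference to it, i.e. the network stack has
     * already dropped its share. */
    static bool rx_page_can_be_recycled(struct page *page)
    {
            return page_count(page) == 1;
    }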
-
Eric Dumazet authored
Use of order-3 pages is problematic in some cases. This patch might introduce three kinds of regression:
1) a CPU performance regression, but we will add page recycling later and performance should come back.
2) a TCP receiver could grow its receive window slightly more slowly, because the skb->len/skb->truesize ratio will decrease. This is mostly OK; we prefer being conservative to not risk OOM, and we can eventually tune TCP better in the future. This is consistent with other drivers using 2048 bytes per ethernet frame.
3) because we allocate one page per RX slot, we consume more memory for the ring buffers. XDP already had this constraint anyway.
Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
We will soon use order-0 pages, and frag truesize will more precisely match real sizes. In the new model, we prefer to use <= 2048 byte fragments, so that we can use the page-recycling technique on PAGE_SIZE=4096 arches. We will still pack as many frames as possible on arches with big pages, like PowerPC. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
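A sketch of the size policy described above (the 2048-byte figure comes from the commit text; the helper name and alignment are assumptions):

    #include <linux/kernel.h>
    #include <linux/cache.h>

    #define MLX4_FRAG_PREF_SIZE 2048        /* assumption: preferred frag size */

    /* Sketch: cap fragments at 2048 bytes so two of them fit in a 4K page
     * (enabling page recycling); bigger pages simply pack more frags. */
    static unsigned int frag_stride(unsigned int frame_size)
    {
            return ALIGN(min_t(unsigned int, frame_size, MLX4_FRAG_PREF_SIZE),
                         SMP_CACHE_BYTES);
    }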
-
Eric Dumazet authored
We only need to store the page and dma address. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
No need to duplicate it per RX queue / frags. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Using per frag storage for frag_prefix_size is really silly. mlx4_en_complete_rx_desc() has all needed info already. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
This is really a port attribute, no need to duplicate it per RX queue and per frag. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
No need to duplicate it for all queues and frags. num_frags & log_rx_info become u8 to save space. u8 accesses are a bit faster than u16 anyway. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Jiri Pirko says: ==================== mlxsw: cosmetics Couple of cosmetic mlxsw patches ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
The overrun ignore bit isn't supported by the device's firmware and was recently removed from the programmer's reference manual (PRM). Remove it from the driver as well. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
Commit dd82364c ("mlxsw: Flip to the new dev walk API") made some small changes in the mlxsw code, but it did not respect the naming conventions. Fix this now. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Reviewed-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
__sk_dst_set() must be called while we own the socket. We can get proper lockdep coverage using lockdep_sock_is_held() and rcu_dereference_protected(). Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
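The annotation pattern looks roughly like this (an abridged sketch of the helper in include/net/sock.h; lockdep_sock_is_held() and rcu_dereference_protected() are the real helpers, the rest of the body is trimmed):

    /* Sketch: the old dst is fetched under a lockdep-checked annotation,
     * documenting that the socket lock must be held by the caller. */
    static inline void __sk_dst_set(struct sock *sk, struct dst_entry *dst)
    {
            struct dst_entry *old_dst;

            old_dst = rcu_dereference_protected(sk->sk_dst_cache,
                                                lockdep_sock_is_held(sk));
            rcu_assign_pointer(sk->sk_dst_cache, dst);
            dst_release(old_dst);
    }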
-
David S. Miller authored
Jiri Pirko says: ==================== flow dissector improvements This patchset follows up on the discussion about future extensions of the flow dissector and tries to address the mentioned concerns. Some parts are cut out into sub-functions. Also, the processing of ARP and MPLS headers is made dependent on the user actually requiring the dissected values. This prepares the code for future extensions to dissect IPv6 ND messages, TCP flags, etc. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
Make the main flow_dissect function a bit smaller and move the GRE dissection into a separate function. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
Align with "ip_proto_again" label used in the same function and rename vague "again" to "proto_again". Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
Currently, when an unexpected element appears in the GRE header, we break out so that the L4 ports are processed. But since the ports are processed unconditionally, random values will certainly be dissected. Fix this by just bailing out in such situations. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
Make the main flow_dissect function a bit smaller and move the MPLS dissection into a separate function. Along with that, do the MPLS header processing only when the flow dissection user requires it. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
Make the main flow_dissect function a bit smaller and move the ARP dissection into a separate function. Along with that, do the ARP header processing only when the flow dissection user requires it. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
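The "only when requested" part relies on the dissector's key check; below is a condensed sketch of what such a helper inside net/core/flow_dissect.c-style code might look like (dissector_uses_key() and FLOW_DISSECTOR_KEY_ARP follow the flow dissector API; the parsing body is omitted):

    /* Sketch: skip all ARP parsing work unless the caller asked for the
     * FLOW_DISSECTOR_KEY_ARP fields. */
    static bool sketch_flow_dissect_arp(const struct sk_buff *skb,
                                        struct flow_dissector *flow_dissector,
                                        void *target_container)
    {
            struct flow_dissector_key_arp *key_arp;

            if (!dissector_uses_key(flow_dissector, FLOW_DISSECTOR_KEY_ARP))
                    return true;

            key_arp = skb_flow_dissector_target(flow_dissector,
                                                FLOW_DISSECTOR_KEY_ARP,
                                                target_container);
            /* ... parse the ARP header into *key_arp ... */
            return true;
    }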
-
Philippe Reynes authored
The ethtool api {get|set}_settings is deprecated. We move this driver to the new api {get|set}_link_ksettings. As I don't have the hardware, I'd be very pleased if someone could test this patch. Signed-off-by: Philippe Reynes <tremyfr@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
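For a phylib-backed driver, this kind of conversion typically boils down to switching the ethtool_ops entries; a generic sketch follows (it is not the code of this particular driver, and drivers without phylib instead fill in the link_ksettings fields and link-mode bitmaps by hand):

    #include <linux/ethtool.h>
    #include <linux/netdevice.h>
    #include <linux/phy.h>

    /* Sketch: wrap the phylib link_ksettings helpers. */
    static int sketch_get_link_ksettings(struct net_device *dev,
                                         struct ethtool_link_ksettings *cmd)
    {
            if (!dev->phydev)
                    return -ENODEV;
            return phy_ethtool_ksettings_get(dev->phydev, cmd);
    }

    static int sketch_set_link_ksettings(struct net_device *dev,
                                         const struct ethtool_link_ksettings *cmd)
    {
            if (!dev->phydev)
                    return -ENODEV;
            return phy_ethtool_ksettings_set(dev->phydev, cmd);
    }

    static const struct ethtool_ops sketch_ethtool_ops = {
            /* .get_settings / .set_settings are replaced by: */
            .get_link_ksettings = sketch_get_link_ksettings,
            .set_link_ksettings = sketch_set_link_ksettings,
    };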
-
Philippe Reynes authored
The ethtool api {get|set}_settings is deprecated. We move this driver to the new api {get|set}_link_ksettings. As I don't have the hardware, I'd be very pleased if someone could test this patch. Signed-off-by: Philippe Reynes <tremyfr@gmail.com> Tested-by: Geoff Levand <geoff@infradead.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Philippe Reynes authored
The ethtool api {get|set}_settings is deprecated. We move this driver to the new api {get|set}_link_ksettings. As I don't have the hardware, I'd be very pleased if someone could test this patch. Signed-off-by: Philippe Reynes <tremyfr@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Philippe Reynes authored
The ethtool api {get|set}_settings is deprecated. We move this driver to the new api {get|set}_link_ksettings. As I don't have the hardware, I'd be very pleased if someone could test this patch. Signed-off-by: Philippe Reynes <tremyfr@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Philippe Reynes authored
The ethtool api {get|set}_settings is deprecated. We move this driver to the new api {get|set}_link_ksettings. As I don't have the hardware, I'd be very pleased if someone could test this patch. Signed-off-by: Philippe Reynes <tremyfr@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Philippe Reynes authored
The ethtool api {get|set}_settings is deprecated. We move this driver to the new api {get|set}_link_ksettings. As I don't have the hardware, I'd be very pleased if someone could test this patch. Signed-off-by: Philippe Reynes <tremyfr@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Philippe Reynes authored
The ethtool api {get|set}_settings is deprecated. We move this driver to the new api {get|set}_link_ksettings. As I don't have the hardware, I'd be very pleased if someone could test this patch. Signed-off-by: Philippe Reynes <tremyfr@gmail.com> Tested-by: Robert Jarzmik <robert.jarzmik@free.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Philippe Reynes authored
The ethtool api {get|set}_settings is deprecated. We move this driver to the new api {get|set}_link_ksettings. As I don't have the hardware, I'd be very pleased if someone could test this patch. Signed-off-by: Philippe Reynes <tremyfr@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-