- 09 Jul, 2019 10 commits
-
-
John Hurley authored
Open vSwitch provides code to pop an MPLS header from a packet. In preparation for supporting this in TC, move the pop code to an skb helper that can be reused. Remove the now-unused update_ethertype() static function from OvS. Signed-off-by: John Hurley <john.hurley@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Reviewed-by: Willem de Bruijn <willemb@google.com> Acked-by: Cong Wang <xiyou.wangcong@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
John Hurley authored
Open vSwitch provides code to push an MPLS header onto a packet. In preparation for supporting this in TC, move the push code to an skb helper that can be reused. Signed-off-by: John Hurley <john.hurley@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Reviewed-by: Willem de Bruijn <willemb@google.com> Acked-by: Cong Wang <xiyou.wangcong@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
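A minimal sketch of how a caller might use the two helpers these entries introduce; the signatures are assumptions inferred from the changelog text, not confirmed by it:

```c
#include <linux/if_ether.h>
#include <linux/skbuff.h>

/* Sketch only: push one MPLS label stack entry, then pop it again.
 * Assumed signatures: skb_mpls_push(skb, lse, proto) and
 * skb_mpls_pop(skb, next_proto). */
static int mpls_roundtrip(struct sk_buff *skb, __be32 lse)
{
	int err;

	err = skb_mpls_push(skb, lse, htons(ETH_P_MPLS_UC));
	if (err)
		return err;

	/* Restore the original ethertype when removing the label. */
	return skb_mpls_pop(skb, htons(ETH_P_IP));
}
```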
-
git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
David S. Miller authored
Two cases of overlapping changes, nothing fancy. Signed-off-by: David S. Miller <davem@davemloft.net>
-
Willem de Bruijn authored
skb_warn_bad_offload and netdev_rx_csum_fault trigger on hard-to-debug issues. Dump more state and the header. Optionally dump the entire packet and linear segment. This is required to debug checksum bugs that may include bytes past skb_tail_pointer(). Both call sites call this function inside a net_ratelimit() block. Limit the full packet log further to a hard limit of can_dump_full (5). Based on an earlier patch by Cong Wang, see link below. Changes v1 -> v2: - dump frag_list only on full_pkt Link: https://patchwork.ozlabs.org/patch/1000841/ Signed-off-by: Willem de Bruijn <willemb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
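The call-site pattern described above might look like the sketch below; the three-argument form of skb_dump() (log level, skb, full-packet flag) is inferred from the changelog:

```c
#include <linux/net.h>
#include <linux/skbuff.h>

/* Sketch: rate-limited dump of skb state, optionally the whole packet,
 * including bytes past skb_tail_pointer(). */
static void report_bad_csum(const struct sk_buff *skb)
{
	if (net_ratelimit())
		skb_dump(KERN_ERR, skb, true /* full_pkt */);
}
```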
-
Willem de Bruijn authored
Processes can request ipv6 flowlabels with cmsg IPV6_FLOWINFO. If not set, by default an autogenerated flowlabel is selected. Explicit flowlabels require a control operation per label plus a datapath check on every connection (every datagram if unconnected). This is particularly expensive on unconnected sockets multiplexing many flows, such as QUIC. In the common case, where no lease is exclusive, the check can be safely elided, as both lease request and check trivially succeed. Indeed, autoflowlabel does the same even with exclusive leases. Elide the check if no process has requested an exclusive lease. fl6_sock_lookup previously returned either a reference to a lease or NULL to denote failure. Modify it to return a real error and update all callers. On a NULL return, they can use the label and will elide the atomic_dec in fl6_sock_release. This is an optimization. Robust applications still have to revert to requesting leases if the fast path fails due to an exclusive lease. Changes RFC -> v1: - use static_key_false_deferred to rate limit jump label operations - call static_key_deferred_flush to stop timers on exit - move decrement out of RCU context - defer optimization also if opt data is associated with a lease - updated all fl6_sock_lookup callers, not just udp Signed-off-by: Willem de Bruijn <willemb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
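A sketch of the deferred static key the changelog describes; the key name and wrapper function are assumptions, while DEFINE_STATIC_KEY_DEFERRED_FALSE comes from linux/jump_label_ratelimit.h:

```c
#include <linux/jump_label_ratelimit.h>

/* Hypothetical key: enabled only while some process holds an exclusive
 * flowlabel lease; decrements are rate limited to once per HZ. */
static DEFINE_STATIC_KEY_DEFERRED_FALSE(fl6_exclusive, HZ);

static bool fl6_check_needed(void)
{
	/* Fast path: with no exclusive lease anywhere, both the lease
	 * request and the datapath check trivially succeed, so skip them. */
	return static_branch_unlikely(&fl6_exclusive.key);
}
```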
-
Christoph Paasch authored
If an app is playing tricks to reuse a socket via tcp_disconnect(), bytes_acked/received needs to be reset to 0. Otherwise tcp_info will report the sum of the current and the old connection. Cc: Eric Dumazet <edumazet@google.com> Fixes: 0df48c26 ("tcp: add tcpi_bytes_acked to tcp_info") Fixes: bdd1f9ed ("tcp: add tcpi_bytes_received to tcp_info") Signed-off-by: Christoph Paasch <cpaasch@apple.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
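The fix amounts to clearing the counters that feed tcpi_bytes_acked/tcpi_bytes_received when the socket is recycled; a sketch of the reset (placement inside tcp_disconnect() assumed):

```c
#include <linux/tcp.h>

/* Sketch: reset per-connection byte counters on tcp_disconnect(). */
static void tcp_reset_byte_counters(struct tcp_sock *tp)
{
	tp->bytes_acked = 0;
	tp->bytes_received = 0;
}
```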
-
Vincent Bernat authored
IFLA_BOND_PEER_NOTIF_DELAY was set to the value of downdelay instead of peer_notif_delay. After this change, the correct value is exported. Fixes: 07a4ddec ("bonding: add an option to specify a delay between peer notifications") Signed-off-by: Vincent Bernat <vincent@bernat.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Al Viro authored
socket->wq is assign-once, set when we are initializing both the struct socket it's in and the struct socket_wq it points to. As a matter of fact, the only reason for the separate allocation was the ability to RCU-delay freeing of socket_wq. RCU-delaying the freeing of socket itself gets rid of that need, so we can just fold struct socket_wq into the end of struct socket and simplify life both for sock_alloc_inode() (one allocation instead of two) and for the tun/tap oddballs, where we used to embed struct socket and struct socket_wq into the same structure (now embedding just the struct socket). Note that the reference to struct socket_wq in struct sock does remain a reference - that's unchanged. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
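The resulting layout, sketched from the description (field list abbreviated to well-known members):

```c
/* Sketch: struct socket_wq folded into the end of struct socket, so
 * sock_alloc_inode() needs one allocation instead of two. */
struct socket {
	socket_state		state;
	short			type;
	unsigned long		flags;
	struct file		*file;
	struct sock		*sk;
	const struct proto_ops	*ops;
	struct socket_wq	wq;	/* was: struct socket_wq *wq */
};
```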
-
Al Viro authored
We do have an RCU-delayed part there already (freeing the wq), so it's not like the pipe situation; moreover, it might be worth considering co-allocating wq with the rest of struct sock_alloc. ->sk_wq in struct sock would remain a pointer as it is, but the object it normally points to would be co-allocated with struct socket... Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
-
git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
David S. Miller authored
Daniel Borkmann says: ==================== pull-request: bpf-next 2019-07-09 The following pull-request contains BPF updates for your *net-next* tree. The main changes are:
1) Lots of libbpf improvements: i) addition of new APIs to attach BPF programs to tracing entities such as {k,u}probes or tracepoints, ii) improve specification of BTF-defined maps by eliminating the need for data initialization for some of the members, iii) addition of a high-level API for setting up and polling perf buffers for BPF event output helpers, all from Andrii.
2) Add "prog run" subcommand to bpftool in order to test-run programs through the kernel testing infrastructure of BPF, from Quentin.
3) Improve verifier for BPF sockaddr programs to support 8-byte stores for user_ip6 and msg_src_ip6 members given clang tends to generate such stores, from Stanislav.
4) Enable the new BPF JIT zero-extension optimization for further riscv64 ALU ops, from Luke.
5) Fix a bpftool json JIT dump crash on powerpc, from Jiri.
6) Fix an AF_XDP race in generic XDP's receive path, from Ilya.
7) Various smaller fixes from Ilya, Yue and Arnd.
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
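Item 1(iii) refers to libbpf's new high-level perf buffer API; a usage sketch based on that API (map fd and callback names are placeholders):

```c
#include <bpf/libbpf.h>

static void on_sample(void *ctx, int cpu, void *data, __u32 size)
{
	/* Consume one record emitted via bpf_perf_event_output(). */
}

int poll_events(int map_fd)
{
	struct perf_buffer_opts opts = { .sample_cb = on_sample };
	struct perf_buffer *pb;

	pb = perf_buffer__new(map_fd, 8 /* pages per CPU */, &opts);
	if (libbpf_get_error(pb))
		return -1;

	while (perf_buffer__poll(pb, 100 /* timeout, ms */) >= 0)
		;	/* callbacks fire from inside poll */

	perf_buffer__free(pb);
	return 0;
}
```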
-
- 08 Jul, 2019 30 commits
-
-
Ilya Maximets authored
Unlike driver mode, generic xdp receive can be triggered by different threads on different CPU cores at the same time, leading to breakage of the fill and rx queues. For example, this can happen while sending packets from two processes to the first interface of a veth pair while the other end of it is open with an AF_XDP socket. We need to take a lock for each generic receive to avoid the race. Fixes: c497176c ("xsk: add Rx receive functions and poll support") Signed-off-by: Ilya Maximets <i.maximets@samsung.com> Acked-by: Magnus Karlsson <magnus.karlsson@intel.com> Tested-by: William Tu <u9012063@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
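A sketch of the serialization this adds; the per-socket lock mirrors the description, while the field and helper names are assumptions:

```c
#include <linux/spinlock.h>

/* Generic XDP may deliver to the same AF_XDP socket from several CPUs
 * at once, so updates of the fill and rx rings must be serialized. */
static int xsk_generic_rcv(struct xdp_sock *xs, struct xdp_buff *xdp)
{
	int err;

	spin_lock_bh(&xs->rx_lock);
	err = __xsk_rcv(xs, xdp);	/* touches fill and rx rings */
	spin_unlock_bh(&xs->rx_lock);
	return err;
}
```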
-
David S. Miller authored
Stephen Suryaputra says: ==================== net: Multipath hashing on inner L3 This series extends commit 363887a2 ("ipv4: Support multipath hashing on inner IP pkts for GRE tunnel") to include support when the outer L3 is IPv6 and to consider the case where the inner L3 is a different version from the outer L3, such as IPv6 tunneled by IPv4 GRE or vice versa. It also includes kselftest scripts to test the use cases. v2: Clarify the commit messages in the commits in this series to use the term tunneled by IPv4 GRE or by IPv6 GRE so that it's clear which one is the inner and which one is the outer (per David Miller). ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Stephen Suryaputra authored
Add selftest scripts for multipath hashing on inner IP pkts when there is a single GRE tunnel but there are multiple underlay routes to reach the other end of the tunnel. Four cases are covered in these scripts:
- IPv4 inner, IPv4 outer
- IPv6 inner, IPv4 outer
- IPv4 inner, IPv6 outer
- IPv6 inner, IPv6 outer
Reviewed-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: Stephen Suryaputra <ssuryaextr@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Stephen Suryaputra authored
Add the same support as commit 363887a2 ("ipv4: Support multipath hashing on inner IP pkts for GRE tunnel") for outer IPv6. The hashing considers both IPv4 and IPv6 pkts when they are tunneled by IPv6 GRE. Signed-off-by: Stephen Suryaputra <ssuryaextr@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Stephen Suryaputra authored
Commit 363887a2 ("ipv4: Support multipath hashing on inner IP pkts for GRE tunnel") supports multipath policy value 2, Layer 3 or inner Layer 3 if present, but it only considers inner IPv4. There is a use case where IPv6 is tunneled by IPv4 GRE, so add the ability to hash on inner IPv6 addresses. Fixes: 363887a2 ("ipv4: Support multipath hashing on inner IP pkts for GRE tunnel") Signed-off-by: Stephen Suryaputra <ssuryaextr@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
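A sketch of the inner-header branch this adds to the IPv4 multipath hash; the flow dissector types are real, the surrounding structure is assumed:

```c
#include <net/flow_dissector.h>

/* Sketch: when policy 2 dissects an IPv6 payload inside an IPv4 GRE
 * packet, hash on the inner IPv6 addresses. */
static u32 multipath_hash_inner(const struct sk_buff *skb)
{
	struct flow_keys keys, hash_keys;

	memset(&hash_keys, 0, sizeof(hash_keys));
	skb_flow_dissect_flow_keys(skb, &keys, 0);

	if (keys.control.addr_type == FLOW_DISSECTOR_KEY_IPV6_ADDRS) {
		hash_keys.control.addr_type = FLOW_DISSECTOR_KEY_IPV6_ADDRS;
		hash_keys.addrs.v6addrs.src = keys.addrs.v6addrs.src;
		hash_keys.addrs.v6addrs.dst = keys.addrs.v6addrs.dst;
	}
	return flow_hash_from_keys(&hash_keys);
}
```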
-
Wen Yang authored
The phy_dn variable is still being used in of_phy_connect() after the of_node_put() call, which may result in use-after-free. Fixes: 1dd2d06c ("net: Rework pasemi_mac driver to use of_mdio infrastructure") Signed-off-by: Wen Yang <wen.yang99@zte.com.cn> Cc: "David S. Miller" <davem@davemloft.net> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Luis Chamberlain <mcgrof@kernel.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: netdev@vger.kernel.org Cc: linux-kernel@vger.kernel.org Signed-off-by: David S. Miller <davem@davemloft.net>
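A sketch of the corrected ordering; the callback and interface mode are placeholders (pasemi_adjust_link is hypothetical), the pattern is the fix described above:

```c
#include <linux/of_mdio.h>

/* Sketch: drop the of_node reference only after its last use, i.e.
 * after of_phy_connect() has consumed it. */
static int mac_attach_phy(struct net_device *netdev, struct device_node *np)
{
	struct device_node *phy_dn = of_parse_phandle(np, "phy-handle", 0);
	struct phy_device *phydev;

	phydev = of_phy_connect(netdev, phy_dn, &pasemi_adjust_link, 0,
				PHY_INTERFACE_MODE_SGMII);
	of_node_put(phy_dn);	/* safe: of_phy_connect() is done with it */
	return phydev ? 0 : -ENODEV;
}
```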
-
Wen Yang authored
There is a possible use-after-free issue in axienet_probe():

1701: np = of_parse_phandle(pdev->dev.of_node, "axistream-connected", 0);
1702: if (np) {
      ...
1787:     of_node_put(np);                 ---> released here
1788:     lp->eth_irq = platform_get_irq(pdev, 0);
1789: } else {
      ...
1801: }
1802: if (IS_ERR(lp->dma_regs)) {
      ...
1805:     of_node_put(np);                 ---> double released here
1806:     goto free_netdev;
1807: }

We solve this problem by removing the unnecessary of_node_put(). Fixes: 28ef9ebd ("net: axienet: make use of axistream-connected attribute optional") Signed-off-by: Wen Yang <wen.yang99@zte.com.cn> Cc: Anirudha Sarangi <anirudh@xilinx.com> Cc: John Linn <John.Linn@xilinx.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Michal Simek <michal.simek@xilinx.com> Cc: Robert Hancock <hancock@sedsystems.ca> Cc: netdev@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Reviewed-by: Robert Hancock <hancock@sedsystems.ca> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ilya Leoshkevich authored
Fix endianness issue: passing a pointer to 64-bit fd as a 32-bit key does not work on big-endian architectures. So cast fd to 32-bits when necessary. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
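The class of bug, sketched in userspace terms (the map layout is a placeholder): on big-endian, pointing a 32-bit key at a wider integer reads the wrong half, so copy into an explicitly 32-bit variable first:

```c
#include <bpf/bpf.h>

/* Sketch: the map expects a 32-bit key, so narrow the fd explicitly
 * instead of passing a pointer into a 64-bit variable. */
int mark_fd(int map_fd, long long fd, int value)
{
	__u32 key = (__u32)fd;	/* explicit, endian-safe narrowing */

	return bpf_map_update_elem(map_fd, &key, &value, BPF_ANY);
}
```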
-
Kweh Hock Leong authored
DWMAC4 is capable of supporting Clause 45 MDIO communication. This patch enables the feature in stmmac_mdio_write() and stmmac_mdio_read() by following the phy_write_mmd() and phy_read_mmd() mdiobus read/write implementation format. Reviewed-by: Li, Yifan <yifan2.li@intel.com> Signed-off-by: Kweh Hock Leong <hock.leong.kweh@intel.com> Signed-off-by: Ong Boon Leong <boon.leong.ong@intel.com> Signed-off-by: Voon Weifeng <weifeng.voon@intel.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
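Clause 45 accesses encode the MMD device address into the register argument, mirroring the phy_read_mmd()/phy_write_mmd() convention; a sketch (MII_ADDR_C45 per the kernel headers of this era):

```c
#include <linux/mdio.h>
#include <linux/phy.h>

/* Sketch: read an MMD register over Clause 45 by folding the device
 * address into regnum, as the mmd accessors do. */
static int c45_read(struct mii_bus *bus, int phyaddr, int devad, u16 regnum)
{
	return mdiobus_read(bus, phyaddr,
			    MII_ADDR_C45 | (devad << 16) | regnum);
}
```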
-
Taehee Yoo authored
Use the netif_is_ovs_port() function instead of open code. This patch doesn't change the logic. Signed-off-by: Taehee Yoo <ap420073@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jesper Dangaard Brouer authored
In this release cycle the number of NIC drivers using page_pool will likely reach four. It is about time to add a maintainer entry. Add myself and Ilias. Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Acked-by: Ilias Apalodimas <ilias.apalodimas@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Maxime Chevallier says: ==================== net: mvpp2: Add classification based on the ETHER flow This series adds support for classification of the ETHER flow in the mvpp2 driver. The first patch allows detecting when a user specifies a flow_type that isn't supported by the driver, while the second adds support for this flow_type by adding the mapping between the ETHER_FLOW enum value and the relevant classifier flow entries. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Maxime Chevallier authored
Users can specify classification actions based on the 'ether' flow type. In that case, this will apply to all ethernet traffic, superseding flows such as 'udp4' or 'tcp6'. Add support for this flow type in the PPv2 classifier, by mapping the ETHER_FLOW value to the corresponding entries in the classifier. Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Maxime Chevallier authored
Add a missing check to detect flow types that we don't support, so that the user can be informed of this. Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Frank de Brabander authored
If mmap() fails it returns MAP_FAILED, which is defined as ((void *) -1). The current if-statement incorrectly tests if *ring is NULL. Fixes: 358be656 ("selftests/net: add txring_overwrite") Signed-off-by: Frank de Brabander <debrabander@gmail.com> Acked-by: Willem de Bruijn <willemb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
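The corrected check, sketched with its surrounding mmap() call (length and fd arguments are placeholders):

```c
#include <errno.h>
#include <error.h>
#include <sys/mman.h>

/* mmap() signals failure with MAP_FAILED ((void *)-1), never NULL. */
static void map_ring(char **ring, size_t len, int fd)
{
	*ring = mmap(0, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (*ring == MAP_FAILED)
		error(1, errno, "mmap ring");
}
```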
-
David S. Miller authored
Stefano Garzarella says: ==================== vsock/virtio: several fixes in the .probe() and .remove() During the review of "[PATCH] vsock/virtio: Initialize core virtio vsock before registering the driver", Stefan pointed out some possible issues in the .probe() and .remove() callbacks of the virtio-vsock driver. This series tries to solve these issues:
- Patch 1 adds RCU critical sections to avoid use-after-free of the 'the_virtio_vsock' pointer.
- Patch 2 stops the workers before calling vdev->config->reset(vdev), to be sure that no one is accessing the device.
- Patch 3 moves the flush of works to the end of the .remove() to avoid use-after-free of the 'vsock' object.
v3:
- Patch 1: use rcu_dereference_protected() to get the the_virtio_vsock value in virtio_vsock_probe() [Jason]
v2: https://patchwork.kernel.org/cover/11022343/
v1: https://patchwork.kernel.org/cover/10964733/
Before this series the guest crashes in a few seconds. After this series the test runs (~12h) without issues. Tested on an SMP guest (-smp 4 -monitor tcp:127.0.0.1:1234,server,nowait) with these scripts to stress the .probe()/.remove() path:
- guest:
while true; do
    cat /dev/urandom | nc-vsock -l 4321 > /dev/null &
    cat /dev/urandom | nc-vsock -l 5321 > /dev/null &
    cat /dev/urandom | nc-vsock -l 6321 > /dev/null &
    cat /dev/urandom | nc-vsock -l 7321 > /dev/null &
    wait
done
- host:
while true; do
    cat /dev/urandom | nc-vsock 3 4321 > /dev/null &
    cat /dev/urandom | nc-vsock 3 5321 > /dev/null &
    cat /dev/urandom | nc-vsock 3 6321 > /dev/null &
    cat /dev/urandom | nc-vsock 3 7321 > /dev/null &
    sleep 2
    echo "device_del v1" | nc 127.0.0.1 1234
    sleep 1
    echo "device_add vhost-vsock-pci,id=v1,guest-cid=3" | nc 127.0.0.1 1234
    sleep 1
done
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Stefano Garzarella authored
This patch moves the flush of works to after vdev->config->del_vqs(vdev), because we need to be sure that no workers run before we free the 'vsock' object. Since we stopped the workers using the [tx|rx|event]_run flags, we are sure no one is accessing the device while we are calling vdev->config->reset(vdev), so we can safely move the workers' flush. Before vdev->config->del_vqs(vdev), workers can be scheduled by VQ callbacks, so we must flush them after del_vqs() to avoid a use-after-free of the 'vsock' object. Suggested-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Stefano Garzarella <sgarzare@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Stefano Garzarella authored
Before calling vdev->config->reset(vdev), we need to be sure that no one is accessing the device; for this reason, we add new variables to the struct virtio_vsock to stop the workers during the .remove(). This patch also adds a few comments before vdev->config->reset(vdev) and vdev->config->del_vqs(vdev). Suggested-by: Stefan Hajnoczi <stefanha@redhat.com> Suggested-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Stefano Garzarella <sgarzare@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Stefano Garzarella authored
Some callbacks used by the upper layers can run while we are in the .remove(). A potential use-after-free can happen, because we free the_virtio_vsock without knowing whether the callbacks are over or not. To solve this issue we move the assignment of the_virtio_vsock to the end of .probe(), when we have finished all the initialization, and to the beginning of .remove(), before releasing resources. For the same reason, we do the same for vdev->priv. We use RCU to be sure that all callbacks that use the_virtio_vsock have ended before freeing it. This is not required for callbacks that use vdev->priv, because after vdev->config->del_vqs() we are sure that they have ended and will no longer be invoked. We also take the mutex during the .remove() to prevent .probe() from running while we are resetting the device. Signed-off-by: Stefano Garzarella <sgarzare@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
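The publish/unpublish pattern described above, sketched with the identifiers from the changelog; locking and the surrounding probe/remove bodies are elided:

```c
#include <linux/rcupdate.h>

static struct virtio_vsock *the_virtio_vsock;	/* RCU-protected */

static void publish(struct virtio_vsock *vsock)
{
	/* .probe(): expose the pointer only after full initialization. */
	rcu_assign_pointer(the_virtio_vsock, vsock);
}

static void unpublish(void)
{
	/* .remove(): hide it first, then wait out all RCU readers
	 * before tearing anything down. */
	rcu_assign_pointer(the_virtio_vsock, NULL);
	synchronize_rcu();
}
```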
-
David S. Miller authored
Benedikt Spranger says: ==================== Document the configuration of b53 This is the third round to document the configuration of a b53-supported switch.
v2..v3:
- fix a typo
- improve b53 configuration in the DSA_TAG_PROTO_NONE showcase
- grade up from RFC to patch for mainline inclusion
v1..v2:
- split out generic parts of the configuration
- target comments by Andrew Lunn and Florian Fainelli
- make changes visible to build system
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Benedikt Spranger authored
Document the b53-specific configuration requirements of the driver. Signed-off-by: Benedikt Spranger <b.spranger@linutronix.de> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Benedikt Spranger authored
Document DSA-tagged and VLAN-based switch configuration through showcases. Signed-off-by: Benedikt Spranger <b.spranger@linutronix.de> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Wei Yongjun authored
Fix to return negative error code -EINVAL from the error handling case instead of 0, as done elsewhere in this function. Fixes: 1f35a56c ("nfp: tls: add/delete TLS TX connections") Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Michael Chan says: ==================== bnxt_en: Add XDP_REDIRECT support. This patch series adds XDP_REDIRECT support by Andy Gospodarek. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andy Gospodarek authored
This removes contention over page allocation for XDP_REDIRECT actions by adding page_pool support per queue for the driver. The performance for XDP_REDIRECT actions scales linearly with the number of cores performing redirect actions when using the page pools instead of the standard page allocator. v2: Fix up the error path from XDP registration, noted by Ilias Apalodimas. Signed-off-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
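Per-queue pool creation might look like the sketch below; the page_pool_params fields match the API of this period, while the bnxt field names are assumptions:

```c
#include <net/page_pool.h>

/* One pool per RX ring removes page-allocator contention when many
 * cores run XDP_REDIRECT concurrently. */
static int bnxt_alloc_rx_page_pool(struct bnxt *bp,
				   struct bnxt_rx_ring_info *rxr)
{
	struct page_pool_params pp = {
		.pool_size	= bp->rx_ring_size,
		.nid		= dev_to_node(&bp->pdev->dev),
		.dev		= &bp->pdev->dev,
		.dma_dir	= DMA_BIDIRECTIONAL,
	};

	rxr->page_pool = page_pool_create(&pp);
	return PTR_ERR_OR_ZERO(rxr->page_pool);
}
```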
-
Andy Gospodarek authored
This adds basic support for XDP_REDIRECT in the bnxt_en driver. Next patch adds the more optimized page pool support. Signed-off-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
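The datapath half of XDP_REDIRECT support usually reduces to a new action case in the RX handler; a sketch with driver-specific names assumed:

```c
#include <linux/filter.h>

/* Sketch of the RX-path dispatch for the new action. */
static bool handle_xdp_redirect(struct net_device *dev,
				struct bpf_prog *prog, struct xdp_buff *xdp)
{
	if (xdp_do_redirect(dev, xdp, prog))
		return false;		/* caller drops the buffer */

	xdp_do_flush_map();		/* flush per-CPU redirect state */
	return true;
}
```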
-
Michael Chan authored
__bnxt_xmit_xdp() is used by XDP_TX and ethtool loopback packet transmit. Refactor it so that it can be re-used by the XDP_REDIRECT logic. Restructure the TX interrupt handler logic to cleanly separate XDP_TX logic in preparation for XDP_REDIRECT. Acked-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andy Gospodarek authored
Rename bnxt_xmit_xdp to __bnxt_xmit_xdp to get ready for XDP_REDIRECT support and to reduce confusion/namespace collisions. Signed-off-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Ivan Khoronzhuk says: ==================== net: ethernet: ti: cpsw: Add XDP support This patchset adds XDP support for the TI cpsw driver and bases it on the page_pool allocator. It was verified on af_xdp socket drop, af_xdp l2f, ebpf XDP_DROP, XDP_REDIRECT, XDP_PASS, XDP_TX. It was verified with the following configs enabled:
CONFIG_JIT=y
CONFIG_BPFILTER=y
CONFIG_BPF_SYSCALL=y
CONFIG_XDP_SOCKETS=y
CONFIG_BPF_EVENTS=y
CONFIG_HAVE_EBPF_JIT=y
CONFIG_BPF_JIT=y
CONFIG_CGROUP_BPF=y
Link on previous v7: https://lkml.org/lkml/2019/7/4/715
Also regular tests with iperf2 were done in order to verify the impact on regular netstack performance, compared with the base commit: https://pastebin.com/JSMT0iZ4
v8..v9:
- fix warnings on arm64 caused by typos in type casting
v7..v8:
- corrected dma calculation based on headroom instead of hard start
- minor comment changes
v6..v7:
- rolled back to v4 solution but with small modification
- picked up patch: https://www.spinics.net/lists/netdev/msg583145.html
- added changes related to netsec fix and cpsw
v5..v6:
- do changes so that rx_dev is kept the same while the redirect/flush cycle runs
- dropped net: ethernet: ti: davinci_cpdma: return handler status
- other changes described in patches
v4..v5:
- added two preliminary patches:
  net: ethernet: ti: davinci_cpdma: allow desc split while down
  net: ethernet: ti: cpsw_ethtool: allow res split while down
- added xdp allocator refcnt on xdp level, avoiding page pool refcnt
- moved flush status as separate argument for cpdma_chan_process
- reworked cpsw code according to last changes to allocator
- added missed statistic counter
v3..v4:
- added page pool user counter
- use same pool for ndevs in dual mac
- restructured page pool create/destroy according to the last changes in API
v2..v3:
- each rxq and ndev has its own page pool
v1..v2:
- combined xdp_xmit functions
- used page allocation w/o refcnt juggle
- unmapped page for skb netstack
- moved rxq/page pool allocation to open/close pair
- added several preliminary patches:
  net: page_pool: add helper function to retrieve dma addresses
  net: page_pool: add helper function to unmap dma addresses
  net: ethernet: ti: cpsw: use cpsw as drv data
  net: ethernet: ti: cpsw_ethtool: simplify slave loops
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ivan Khoronzhuk authored
Add XDP support based on an rx page_pool allocator, one frame per page. The page pool allocator is used with the assumption that only one rx_handler is running simultaneously. DMA map/unmap is reused from the page pool despite there being no need to map the whole page. Due to the specifics of cpsw, the same TX/RX handler can be used by 2 network devices, so special fields are added to the buffer to identify the interface the frame is destined to. Thus XDP works for both interfaces, which allows testing xdp redirect between the two interfaces easily. Also, each rx queue has its own page pool, but it is common for both netdevs. The XDP prog is common for all channels until appropriate changes are added in the XDP infrastructure. Also, once page_pool recycling becomes part of the skb netstack, some simplifications can be added, like removing page_pool_release_page() before skb receive. In order to keep rx_dev while redirecting, which can somehow be used in the future, do the flush in the rx_handler, which allows keeping the rx dev the same while redirecting. This conforms with tracing rx_dev, as pointed out by Jesper. Also, there is a probability that the XDP generic code can be extended to support multi-ndev drivers like this one, using the same rx queue for several ndevs, based on switchdev for instance or else. In this case, the driver can be modified as exposed here: https://lkml.org/lkml/2019/7/3/243 Acked-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-