- 04 Aug, 2020 40 commits
-
-
Christophe JAILLET authored
The wrappers in include/linux/pci-dma-compat.h should go away.

The patch has been generated with the coccinelle script below and has been hand modified to replace GFP_ with a correct flag. It has been compile tested.

When memory is allocated in 'wanxl_pci_init_one()', GFP_KERNEL can be used because it is a probe function and no lock is acquired. Moreover, just a few lines above, GFP_KERNEL is already used.

@@
@@
-    PCI_DMA_BIDIRECTIONAL
+    DMA_BIDIRECTIONAL

@@
@@
-    PCI_DMA_TODEVICE
+    DMA_TO_DEVICE

@@
@@
-    PCI_DMA_FROMDEVICE
+    DMA_FROM_DEVICE

@@
@@
-    PCI_DMA_NONE
+    DMA_NONE

@@
expression e1, e2, e3;
@@
-    pci_alloc_consistent(e1, e2, e3)
+    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3;
@@
-    pci_zalloc_consistent(e1, e2, e3)
+    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3, e4;
@@
-    pci_free_consistent(e1, e2, e3, e4)
+    dma_free_coherent(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_map_single(e1, e2, e3, e4)
+    dma_map_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_single(e1, e2, e3, e4)
+    dma_unmap_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4, e5;
@@
-    pci_map_page(e1, e2, e3, e4, e5)
+    dma_map_page(&e1->dev, e2, e3, e4, e5)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_page(e1, e2, e3, e4)
+    dma_unmap_page(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_map_sg(e1, e2, e3, e4)
+    dma_map_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_sg(e1, e2, e3, e4)
+    dma_unmap_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_single_for_cpu(e1, e2, e3, e4)
+    dma_sync_single_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_single_for_device(e1, e2, e3, e4)
+    dma_sync_single_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_sg_for_cpu(e1, e2, e3, e4)
+    dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_sg_for_device(e1, e2, e3, e4)
+    dma_sync_sg_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2;
@@
-    pci_dma_mapping_error(e1, e2)
+    dma_mapping_error(&e1->dev, e2)

@@
expression e1, e2;
@@
-    pci_set_dma_mask(e1, e2)
+    dma_set_mask(&e1->dev, e2)

@@
expression e1, e2;
@@
-    pci_set_consistent_dma_mask(e1, e2)
+    dma_set_coherent_mask(&e1->dev, e2)

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
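For illustration, a minimal before/after of the kind of call this script rewrites; the function below is a hypothetical probe-path helper, not code from the wanxl driver:

```c
#include <linux/pci.h>
#include <linux/dma-mapping.h>

/* Hypothetical probe-path allocation (illustrative only, not from wanxl).
 * The script turns pci_alloc_consistent(pdev, size, &handle) into
 * dma_alloc_coherent(&pdev->dev, size, &handle, GFP_), and the GFP_
 * placeholder is then hand-edited to GFP_KERNEL, which is safe in probe
 * context where sleeping is allowed and no spinlock is held.
 */
static void *example_alloc(struct pci_dev *pdev, dma_addr_t *dma_handle)
{
	return dma_alloc_coherent(&pdev->dev, PAGE_SIZE, dma_handle, GFP_KERNEL);
}
```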
-
Stephen Hemminger authored
If the accelerated networking SRIOV VF device has lost carrier, use the synthetic network device, which is available as a backup path. This is a rare case: if the VF link goes down, the VMBus device will normally lose external connectivity as well. But if the communication is between two VMs on the same host, the VMBus device will still work. Reported-by: "Shah, Ashish N" <ashish.n.shah@intel.com> Fixes: 0c195567 ("netvsc: transparent VF management") Signed-off-by: Stephen Hemminger <stephen@networkplumber.org> Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
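A minimal sketch of the fallback decision, assuming a hypothetical path-selection helper; the real netvsc transmit logic is more involved:

```c
#include <linux/netdevice.h>

/* Illustrative only -- not the actual netvsc implementation.
 * Only use the accelerated VF netdev when it is up *and* reports carrier;
 * otherwise fall back to the synthetic VMBus device.
 */
static struct net_device *pick_tx_path(struct net_device *vf_netdev,
				       struct net_device *synthetic_netdev)
{
	if (vf_netdev && netif_running(vf_netdev) && netif_carrier_ok(vf_netdev))
		return vf_netdev;

	return synthetic_netdev;
}
```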
-
YueHaibing authored
Fix smatch warning: drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c:2419 alloc_channel() warn: passing zero to 'ERR_PTR' setup_dpcon() should return ERR_PTR(err) instead of zero in error handling case. Fixes: d7f5a9d8 ("dpaa2-eth: defer probe on object allocate") Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
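The pattern being fixed looks roughly like the sketch below; the names are illustrative, not copied from dpaa2-eth:

```c
#include <linux/err.h>

struct example_channel;	/* opaque in this sketch */

/* Illustrative error path: if setup fails, propagate the error via
 * ERR_PTR(err) instead of ERR_PTR(0). IS_ERR(ERR_PTR(0)) is false, so a
 * zero value would look like a valid pointer to callers checking IS_ERR().
 */
static struct example_channel *alloc_channel_sketch(int err)
{
	if (err)
		return ERR_PTR(err);	/* was effectively ERR_PTR(0) */
	return NULL;			/* stand-in for the built object */
}
```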
-
Stefan Roese authored
I just recently noticed that ethernet does not work anymore since v5.5 on the GARDENA smart Gateway, which is based on the AT91SAM9G25. Debugging showed that the "GEM bits" in the NCFGR register are now unconditionally accessed, which is incorrect for the !macb_is_gem() case. This patch adds the macb_is_gem() checks back to the code (in macb_mac_config() & macb_mac_link_up()), so that the GEM register bits are not accessed in this case any more. Fixes: 7897b071 ("net: macb: convert to phylink") Signed-off-by: Stefan Roese <sr@denx.de> Cc: Reto Schneider <reto.schneider@husqvarnagroup.com> Cc: Alexandre Belloni <alexandre.belloni@bootlin.com> Cc: Nicolas Ferre <nicolas.ferre@microchip.com> Cc: David S. Miller <davem@davemloft.net> Signed-off-by: David S. Miller <davem@davemloft.net>
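A hedged sketch of the guard being restored; the helper names follow the macb driver's conventions, but the function itself is illustrative, not the actual patch:

```c
/* Sketch only: GEM-specific NCFGR bits are only valid on GEM hardware,
 * so gate the access on macb_is_gem(). On plain MACB IP (e.g. the
 * AT91SAM9G25) the register must be written without those bits.
 */
static void example_update_ncfgr(struct macb *bp, u32 gem_only_bits)
{
	u32 ncfgr = macb_readl(bp, NCFGR);

	if (macb_is_gem(bp))
		ncfgr |= gem_only_bits;

	macb_writel(bp, NCFGR, ncfgr);
}
```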
-
git://git.kernel.org/pub/scm/linux/kernel/git/pablo/nf
David S. Miller authored
Pablo Neira Ayuso says:

====================
Netfilter fixes for net

The following patchset contains Netfilter fixes for net:

1) Flush the cleanup xtables worker to make sure destructors have completed, from Florian Westphal.

2) iifgroup is matching erroneously, also from Florian.

3) Add selftest for meta interface matching, from Florian Westphal.

4) Move nf_ct_offload_timeout() to header, from Roi Dayan.

5) Call nf_ct_offload_timeout() from flow_offload_add() to make sure garbage collection does not evict offloaded flow, from Roi Dayan.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Xin Long authored
A deadlock was triggered on the thunderx driver:

        CPU0                                            CPU1
        ----                                            ----
 [01]   lock(&(&nic->rx_mode_wq_lock)->rlock);
 [11]                                                   lock(&(&mc->mca_lock)->rlock);
 [12]                                                   lock(&(&nic->rx_mode_wq_lock)->rlock);
 [02]   <Interrupt> lock(&(&mc->mca_lock)->rlock);

The path for each is:

 [01] worker_thread() -> process_one_work() -> nicvf_set_rx_mode_task()
 [02] mld_ifc_timer_expire()
 [11] ipv6_add_dev() -> ipv6_dev_mc_inc() -> igmp6_group_added() ->
 [12] dev_mc_add() -> __dev_set_rx_mode() -> nicvf_set_rx_mode()

To fix it, disable bottom halves on [01] so that the timer on [02] cannot fire until rx_mode_wq_lock is released, i.e. use spin_lock_bh() instead of spin_lock(). Thanks to Paolo for helping with this.

v1->v2:
 - post to netdev.

Reported-by: Rafael P. <rparrazo@redhat.com>
Tested-by: Dean Nelson <dnelson@redhat.com>
Fixes: 469998c8 ("net: thunderx: prevent concurrent data re-writing by nicvf_set_rx_mode")
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
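A minimal sketch of the locking change; the surrounding function is illustrative, not the actual nicvf code:

```c
#include <linux/spinlock.h>

/* Sketch: the work handler must block softirqs while holding the lock,
 * because the same lock is also taken from timer/softirq context via the
 * multicast path. Plain spin_lock() would let that softirq preempt the
 * holder on the same CPU and deadlock.
 */
static void rx_mode_task_sketch(spinlock_t *rx_mode_wq_lock)
{
	spin_lock_bh(rx_mode_wq_lock);	/* was: spin_lock() */
	/* ... copy/update the RX mode state ... */
	spin_unlock_bh(rx_mode_wq_lock);
}
```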
-
David S. Miller authored
Stefano Brivio says:

====================
Support PMTU discovery with bridged UDP tunnels

Currently, PMTU discovery for UDP tunnels only works if packets are routed to the encapsulating interfaces, not bridged. This results from the fact that we generally don't have valid routes to the senders we can use to relay ICMP and ICMPv6 errors, and makes PMTU discovery completely non-functional for VXLAN and GENEVE ports of both regular bridges and Open vSwitch instances.

If the sender is local, and packets are forwarded to the port by a regular bridge, all it takes is to generate a corresponding route exception on the encapsulating device. The bridge then finds the route exception carrying the PMTU value estimate as it forwards frames, and relays ICMP messages back to the socket of the local sender. Patch 1/6 fixes this case.

If the sender resides on another node, we actually need to reply to IP and IPv6 packets ourselves and send these ICMP or ICMPv6 errors back, using the same encapsulating device. Patch 2/6, based on an original idea by Florian Westphal, adds the needed functionality, while patches 3/6 and 4/6 add matching support for VXLAN and GENEVE.

Finally, 5/6 and 6/6 introduce selftests for all combinations of inner and outer IP versions, covering both VXLAN and GENEVE, with both regular bridges and Open vSwitch instances.

v2: Add helper to check for any bridge port, skip oif check for PMTU routes for bridge ports only, split IPv4 and IPv6 helpers and functions (all suggested by David Ahern)
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Stefano Brivio authored
The new tests check that IP and IPv6 packets exceeding the local PMTU estimate, forwarded by an Open vSwitch instance from another node, result in the correct route exceptions being created, and that communication with end-to-end fragmentation, over GENEVE and VXLAN Open vSwitch ports, is now possible as a result of PMTU discovery. Signed-off-by: Stefano Brivio <sbrivio@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Stefano Brivio authored
The new tests check that IP and IPv6 packets exceeding the local PMTU estimate, both locally generated and forwarded by a bridge from another node, result in the correct route exceptions being created, and that communication with end-to-end fragmentation over VXLAN and GENEVE tunnels is now possible as a result of PMTU discovery. Some of the existing setup functions aren't generic enough to simply add a namespace and a bridge to the existing routing setup; this rework is in progress, and we can easily shrink this once more generic topology functions are available. Signed-off-by: Stefano Brivio <sbrivio@redhat.com> Reviewed-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Stefano Brivio authored
If the interface is a bridge or Open vSwitch port, and we can't forward a packet because it exceeds the local PMTU estimate, trigger an ICMP or ICMPv6 reply to the sender, using the same interface to forward it back. If metadata collection is enabled, set destination and source addresses for the flow as if we were receiving the packet, so that Open vSwitch can match the ICMP error against the existing association. v2: Use netif_is_any_bridge_port() (David Ahern) Signed-off-by: Stefano Brivio <sbrivio@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Stefano Brivio authored
If the interface is a bridge or Open vSwitch port, and we can't forward a packet because it exceeds the local PMTU estimate, trigger an ICMP or ICMPv6 reply to the sender, using the same interface to forward it back. If metadata collection is enabled, reverse destination and source addresses, so that Open vSwitch is able to match this packet against the existing, reverse flow. v2: Use netif_is_any_bridge_port() (David Ahern) Signed-off-by: Stefano Brivio <sbrivio@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Stefano Brivio authored
It's currently possible to bridge Ethernet tunnels carrying IP packets directly to external interfaces without assigning them addresses and routes on the bridged network itself: this is the case for UDP tunnels bridged with a standard bridge or by Open vSwitch.

PMTU discovery is currently broken with those configurations, because the encapsulation effectively decreases the MTU of the link, and while we are able to account for this using PMTU discovery on the lower layer, we don't have a way to relay ICMP or ICMPv6 messages needed by the sender, because we don't have valid routes to it.

On the other hand, as a tunnel endpoint, we can't fragment packets as a general approach: this is for instance clearly forbidden for VXLAN by RFC 7348, section 4.3:

  VTEPs MUST NOT fragment VXLAN packets. Intermediate routers may fragment encapsulated VXLAN packets due to the larger frame size. The destination VTEP MAY silently discard such VXLAN fragments.

The same paragraph recommends that the MTU over the physical network accommodates for encapsulations, but this isn't a practical option for complex topologies, especially for typical Open vSwitch use cases. Further, it states that:

  Other techniques like Path MTU discovery (see [RFC1191] and [RFC1981]) MAY be used to address this requirement as well.

Now, PMTU discovery already works for routed interfaces: we get route exceptions created by the encapsulation device as they receive ICMP Fragmentation Needed and ICMPv6 Packet Too Big messages, and we already rebuild those messages with the appropriate MTU and route them back to the sender.

Add the missing bits for bridged cases:

- checks in skb_tunnel_check_pmtu() to understand if it's appropriate to trigger a reply according to RFC 1122 section 3.2.2 for ICMP and RFC 4443 section 2.4 for ICMPv6. This function is already called by UDP tunnels.

- a new function generating those ICMP or ICMPv6 replies. We can't reuse icmp_send() and icmp6_send() as we don't see the sender as a valid destination. This doesn't need to be generic, as we don't cover any other type of ICMP errors, given that we only provide an encapsulation function to the sender.

While at it, make the MTU check in skb_tunnel_check_pmtu() accurate: we might receive GSO buffers here, and the passed headroom already includes the inner MAC length, so we don't have to account for it a second time (that would imply three MAC headers on the wire, but there are just two).

This issue became visible while bridging IPv6 packets with 4500 bytes of payload over GENEVE using IPv4 with a PMTU of 4000. Given the 50 bytes of encapsulation headroom, we would advertise the MTU as 3950, and we would reject fragmented IPv6 datagrams of 3958 bytes size on the wire. We're exclusively dealing with network MTU here, though, so we could get Ethernet frames up to 3964 octets in that case.

v2:
- moved skb_tunnel_check_pmtu() to ip_tunnel_core.c (David Ahern)
- split IPv4/IPv6 functions (David Ahern)

Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
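As a worked example of the headroom arithmetic described above (the numbers come from the commit message; the check itself is a simplified sketch and the headroom breakdown is an assumption, not the actual skb_tunnel_check_pmtu() code):

```c
#include <stdio.h>

int main(void)
{
	/* GENEVE over IPv4 with a PMTU of 4000 on the lower path and
	 * 50 bytes of encapsulation headroom (assumed split for this
	 * sketch: outer IPv4 20 + UDP 8 + GENEVE 8 + inner Ethernet 14).
	 */
	const int outer_pmtu = 4000;
	const int headroom   = 50;	/* already includes the 14-byte inner MAC */
	const int inner_mac  = 14;

	const int network_mtu = outer_pmtu - headroom;	      /* 3950 */
	const int max_frame   = network_mtu + inner_mac;      /* 3964 octets */

	const int frame_on_wire = 3958;			      /* bridged frame */
	const int network_size  = frame_on_wire - inner_mac;  /* 3944 */

	/* Accurate check: compare the network-layer size against the
	 * network MTU. Counting the inner MAC a second time would compare
	 * 3958 against 3950 and wrongly reject a frame that fits.
	 */
	printf("fits: %s (frame %d, limit %d octets)\n",
	       network_size <= network_mtu ? "yes" : "no",
	       frame_on_wire, max_frame);
	return 0;
}
```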
-
Stefano Brivio authored
Currently, processes sending traffic to a local bridge with an encapsulation device as a port don't get ICMP errors if they exceed the PMTU of the encapsulated link. David Ahern suggested this as a hack, but it actually looks like the correct solution: when we update the PMTU for a given destination by means of updating or creating a route exception, the encapsulation might trigger this because of PMTU discovery happening either on the encapsulation device itself, or its lower layer. This happens on bridged encapsulations only. The output interface shouldn't matter, because we already have a valid destination. Drop the output interface restriction from the associated route lookup. For UDP tunnels, we will now have a route exception created for the encapsulation itself, with a MTU value reflecting its headroom, which allows a bridge forwarding IP packets originated locally to deliver errors back to the sending socket. The behaviour is now consistent with IPv6 and verified with selftests pmtu_ipv{4,6}_br_{geneve,vxlan}{4,6}_exception introduced later in this series. v2: - reset output interface only for bridge ports (David Ahern) - add and use netif_is_any_bridge_port() helper (David Ahern) Suggested-by: David Ahern <dsahern@gmail.com> Signed-off-by: Stefano Brivio <sbrivio@redhat.com> Reviewed-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
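The netif_is_any_bridge_port() helper mentioned in the changelog presumably combines the existing bridge-port and Open vSwitch-port checks, along these lines (a sketch based on the description above, not copied from the tree):

```c
#include <linux/netdevice.h>

/* Presumed definition: true if the device is attached to either a regular
 * bridge or an Open vSwitch instance; used to decide whether the PMTU
 * route lookup may ignore the output interface.
 */
static inline bool netif_is_any_bridge_port(const struct net_device *dev)
{
	return netif_is_bridge_port(dev) || netif_is_ovs_port(dev);
}
```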
-
David S. Miller authored
Merge tag 'wireless-drivers-next-2020-08-04' of git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/wireless-drivers-next

Kalle Valo says:

====================
wireless-drivers-next patches for v5.9

Second set of patches for v5.9. mt76 has most of the patches this time. Otherwise it's just smaller fixes and cleanups to other drivers.

There was a major conflict in the mt76 driver between wireless-drivers and wireless-drivers-next. I solved that by merging the former into the latter.

Major changes:

rtw88
* add support for ieee80211_ops::change_interface
* add support for enabling and disabling beacon
* add debugfs file for testing h2c

mt76
* ARP filter offload for 7663
* runtime power management for 7663
* testmode support for mfg calibration
* support for more channels
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joe Perches authored
Use netdev_<level> in place of VELOCITY_PRT. Use pr_<level> in place of printk(KERN_<LEVEL> ...).

Miscellanea:
o Add pr_fmt to prefix pr_<level> output with "via-velocity: "
o Remove now unused functions and macros
o Realign some logging lines
o Remove devname where pr_<level> is also used

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
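A minimal sketch of the conversion pattern; the message text is made up for illustration and is not taken from via-velocity:

```c
#include <linux/netdevice.h>

/* Schematic before/after:
 *
 *   printk(KERN_ERR "%s: link down\n", dev->name);   // old style
 *
 * netdev_err() prefixes the driver and device name for us, and pr_<level>
 * output is prefixed via a pr_fmt() definition at the top of the file.
 */
static void report_link_down(struct net_device *dev)
{
	netdev_err(dev, "link down\n");
}
```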
-
David S. Miller authored
Luo bin says: ==================== hinic: mailbox channel enhancement Add support for generating a mailbox random ID for each VF to ensure that mailbox messages from the VF are valid, and make the PF check whether a cmd from the VF is supported before passing it to the hardware. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Luo bin authored
The PF should check whether a cmd from the VF is supported and whether its content is valid before passing it to the hardware. Signed-off-by: Luo bin <luobin9@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Luo bin authored
Add support for generating a mailbox random ID for each VF to ensure that mailbox messages the PF receives come from the correct VF. Signed-off-by: Luo bin <luobin9@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
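A hedged sketch of the idea: the PF hands each VF a random token and later rejects mailbox messages that do not carry it. Names and message layout are illustrative, not the hinic implementation:

```c
#include <linux/random.h>
#include <linux/types.h>

struct vf_mbox_state {
	u32  random_id;		/* token the PF expects from this VF */
	bool id_valid;
};

/* PF side: generate and record a fresh token for a VF. */
static u32 vf_mbox_gen_random_id(struct vf_mbox_state *vf)
{
	vf->random_id = get_random_u32();
	vf->id_valid = true;
	return vf->random_id;	/* sent to the VF over the mailbox */
}

/* PF side: accept a VF message only if it echoes the expected token. */
static bool vf_mbox_check_random_id(const struct vf_mbox_state *vf, u32 id)
{
	return vf->id_valid && id == vf->random_id;
}
```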
-
git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/wireless-drivers.git
Kalle Valo authored
The mt76 driver had major conflicts within the mt7615 directory. To make it easier for everyone, merge wireless-drivers into wireless-drivers-next and resolve those conflicts.
-
David S. Miller authored
drivers/net/ethernet/sfc/ef100_nic.c:835:3: error: 'const struct efx_nic_type' has no member named 'filter_rfs_expire_one'
  835 |  .filter_rfs_expire_one = efx_mcdi_filter_rfs_expire_one,
      |   ^~~~~~~~~~~~~~~~~~~~~
>> drivers/net/ethernet/sfc/ef100_nic.c:835:27: error: initialization of 'void (*)(struct efx_nic *, u32)' {aka 'void (*)(struct efx_nic *, unsigned int)'} from incompatible pointer type 'bool (*)(struct efx_nic *, u32, unsigned int)' {aka '_Bool (*)(struct efx_nic *, unsigned int, unsigned int)'} [-Werror=incompatible-pointer-types]
  835 |  .filter_rfs_expire_one = efx_mcdi_filter_rfs_expire_one,
      |                           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
David S. Miller authored
Daniel Borkmann says:

====================
pull-request: bpf-next 2020-08-04

The following pull-request contains BPF updates for your *net-next* tree.

We've added 73 non-merge commits during the last 9 day(s) which contain a total of 135 files changed, 4603 insertions(+), 1013 deletions(-).

The main changes are:

1) Implement bpf_link support for XDP. Also add LINK_DETACH operation for the BPF syscall allowing processes with BPF link FD to force-detach, from Andrii Nakryiko.

2) Add BPF iterator for map elements and to iterate all BPF programs for efficient in-kernel inspection, from Yonghong Song and Alexei Starovoitov.

3) Separate bpf_get_{stack,stackid}() helpers for perf events in BPF to avoid unwinder errors, from Song Liu.

4) Allow cgroup local storage map to be shared between programs on the same cgroup. Also extend BPF selftests with coverage, from YiFei Zhu.

5) Add BPF exception tables to ARM64 JIT in order to be able to JIT BPF_PROBE_MEM load instructions, from Jean-Philippe Brucker.

6) Follow-up fixes on BPF socket lookup in combination with reuseport group handling. Also add related BPF selftests, from Jakub Sitnicki.

7) Allow to use socket storage in BPF_PROG_TYPE_CGROUP_SOCK-typed programs for socket create/release as well as bind functions, from Stanislav Fomichev.

8) Fix an info leak in xsk_getsockopt() when retrieving XDP stats via old struct xdp_statistics, from Peilin Ye.

9) Fix PT_REGS_RC{,_CORE}() macros in libbpf for MIPS arch, from Jerry Crunchtime.

10) Extend BPF kernel test infra with skb->family and skb->{local,remote}_ip{4,6} fields and allow user space to specify skb->dev via ifindex, from Dmitry Yakunin.

11) Fix a bpftool segfault due to missing program type name and make it more robust to prevent them in future gaps, from Quentin Monnet.

12) Consolidate cgroup helper functions across selftests and fix a v6 localhost resolver issue, from John Fastabend.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux
David S. Miller authored
Saeed Mahameed says:

====================
mlx5-updates-2020-08-03

This patchset introduces some updates to the mlx5 driver.

1) Jakub converts mlx5 to use the new udp tunnel infrastructure, starting with a hack to allow drivers to request a static configuration of the default vxlan port, followed by a patch that converts mlx5.

2) Parav implements the change_carrier ndo for VF eswitch representors, to speed up link state control of representor netdevices.

3) Alex Vesker makes a simple update to software steering to fix an issue with the push vlan action sequence.

4) Leon removes a redundant dump stack on an error flow.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Edward Cree says:

====================
sfc: driver for EF100 family NICs, part 2

This series implements the data path and various other functionality for Xilinx/Solarflare EF100 NICs.

Changed from v2:
 * Improved error handling of design params (patch #3)
 * Removed 'inline' from .c file in patch #4
 * Don't report common stats to ethtool -S (patch #8)

Changed from v1:
 * Fixed build errors on CONFIG_RFS_ACCEL=n (patch #5) and 32-bit (patch #8)
 * Dropped patch #10 (ethtool ops) as it's buggy and will need a bigger rework to fix.
====================

Acked-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Edward Cree authored
We don't yet have a .sriov_configure() to create them, though. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Edward Cree authored
We'll need it later, for VF representors. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Edward Cree authored
Self-tests for event and interrupt reception and NVRAM. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Edward Cree authored
MAC stats work much the same as on EF10, with a periodic DMA to a region specified via an MCDI. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Edward Cree authored
Bring down the TX and RX queues at ifdown, so that we can then fini the EVQs (otherwise the MC would return EBUSY because they're still in use). Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Edward Cree authored
Includes RSS spreading. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Edward Cree authored
Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Edward Cree authored
Includes checksum offload and TSO, so declare those in our netdev features. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Edward Cree authored
Several parts of the EF100 architecture are parameterised (to allow varying capabilities on FPGAs according to resource constraints), and these parameters are exposed to the driver through a TLV-encoded region of the BAR. For the most part we either don't care about these values at all or just need to sanity-check them against the driver's assumptions, but there are a number of TSO limits which we record so that we will be able to check against them in the TX path when handling GSO skbs. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
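For context, a TLV-encoded region is typically walked along these lines; this is a generic sketch under assumed 16-bit type/length field widths, not the EF100 layout or the sfc driver's parser:

```c
#include <linux/types.h>

/* Generic TLV walk: each entry is a type, a length, then 'length' bytes of
 * value. The little-endian 16-bit type/length fields are an assumption made
 * only for this sketch.
 */
static void tlv_walk_sketch(const u8 *buf, size_t len,
			    void (*cb)(u16 type, const u8 *val, u16 vlen))
{
	size_t off = 0;

	while (off + 4 <= len) {
		u16 type = buf[off] | (buf[off + 1] << 8);
		u16 vlen = buf[off + 2] | (buf[off + 3] << 8);

		if (off + 4 + vlen > len)
			break;		/* truncated entry: stop parsing */
		cb(type, &buf[off + 4], vlen);
		off += 4 + vlen;
	}
}
```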
-
Edward Cree authored
In the future, EF100 is planned to have a credit-based scheme for handling unsolicited events, which drivers will need to use in order to function correctly. However, current EF100 hardware does not yet generate unsolicited events and the credit scheme has not yet been implemented in firmware. To prevent compatibility problems later if the current driver is used with future firmware which does implement it, we check for the corresponding capability flag (which that future firmware will set), and if found, we refuse to probe. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Edward Cree authored
Early in EF100 development there was a different format of event descriptor; if the NIC is somehow running the very old firmware which will use that format, fail the probe. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiafei Pan authored
The driver calls napi_schedule_irqoff() from a context where, in RT, hardirqs are not disabled, since the IRQ handler is force-threaded. In the call path of this function, __raise_softirq_irqoff() is modifying its per-CPU mask of pending softirqs that must be processed, using or_softirq_pending(). The or_softirq_pending() function is not atomic, but since interrupts are supposed to be disabled, nobody should be preempting it, and the operation should be safe. Nonetheless, when running with hardirqs on, as in the PREEMPT_RT case, it isn't safe, and the pending softirqs mask can get corrupted, resulting in softirqs being lost and never processed. To have common code that works with PREEMPT_RT and with mainline Linux, we can use plain napi_schedule() instead. The difference is that napi_schedule() (via __napi_schedule) also calls local_irq_save, which disables hardirqs if they aren't already. But, since they already are disabled in non-RT, this means that in practice we don't see any measurable difference in throughput or latency with this patch. Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiafei Pan authored
The driver calls napi_schedule_irqoff() from a context where, in RT, hardirqs are not disabled, since the IRQ handler is force-threaded. In the call path of this function, __raise_softirq_irqoff() is modifying its per-CPU mask of pending softirqs that must be processed, using or_softirq_pending(). The or_softirq_pending() function is not atomic, but since interrupts are supposed to be disabled, nobody should be preempting it, and the operation should be safe. Nonetheless, when running with hardirqs on, as in the PREEMPT_RT case, it isn't safe, and the pending softirqs mask can get corrupted, resulting in softirqs being lost and never processed. To have common code that works with PREEMPT_RT and with mainline Linux, we can use plain napi_schedule() instead. The difference is that napi_schedule() (via __napi_schedule) also calls local_irq_save, which disables hardirqs if they aren't already. But, since they already are disabled in non-RT, this means that in practice we don't see any measurable difference in throughput or latency with this patch. Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
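Both of the commits above apply the same one-line change; a minimal sketch of a force-threaded IRQ handler using it (driver names and structure are illustrative):

```c
#include <linux/interrupt.h>
#include <linux/netdevice.h>

/* Illustrative RX interrupt handler: with PREEMPT_RT the handler runs in a
 * thread with hardirqs enabled, so napi_schedule() (which disables local
 * interrupts itself) is used instead of napi_schedule_irqoff().
 */
static irqreturn_t example_rx_irq(int irq, void *data)
{
	struct napi_struct *napi = data;

	napi_schedule(napi);	/* was: napi_schedule_irqoff(napi) */
	return IRQ_HANDLED;
}
```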
-
David S. Miller authored
net: dsa: loop: Preparatory changes for 802.1Q data path

Florian Fainelli says:

====================
These patches are all meant to help pave the way for a 802.1Q data path added to the mockup driver, making it more useful than just testing for configuration. Sending those out now since there is no real need to wait.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Florian Fainelli authored
We only support DSA_LOOP_NUM_PORTS in the switch; do not tell the DSA core to allocate up to DSA_MAX_PORTS, which is nearly double that (11 vs. 6). Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Florian Fainelli authored
For now we simply store the port MTU into a per-port member. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
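A hedged sketch of what storing the MTU per port looks like via the DSA .port_change_mtu operation; the structure and field names are illustrative, not the actual dsa_loop code:

```c
#include <net/dsa.h>

struct loop_port_sketch {
	int mtu;	/* last MTU requested for this port */
};

/* Sketch of a DSA .port_change_mtu implementation that only records the
 * value; a mockup switch has no hardware register to program.
 */
static int loop_port_change_mtu_sketch(struct dsa_switch *ds, int port,
				       int new_mtu)
{
	struct loop_port_sketch *ports = ds->priv;

	ports[port].mtu = new_mtu;
	return 0;
}
```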
-
Florian Fainelli authored
In preparation for adding support for a mockup data path, move the driver data structures to include/linux/dsa/loop.h such that we can share them between net/dsa/ and drivers/net/dsa/ later on. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-