- 17 Nov, 2022 1 commit
-
-
Li zeming authored
The subh.addip_hdr pointer is also of type (struct sctp_addiphdr *), so it does not require a cast. Signed-off-by: Li zeming <zeming@nfschina.com> Link: https://lore.kernel.org/r/20221115020705.3220-1-zeming@nfschina.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
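A minimal sketch of the kind of change this is (context simplified; union sctp_params does carry a struct sctp_addiphdr *addip_hdr member):

    #include <linux/sctp.h>

    static struct sctp_addiphdr *example(union sctp_params subh)
    {
            /* before: return (struct sctp_addiphdr *)subh.addip_hdr;
             * after: both sides already have the same type, so no cast
             */
            return subh.addip_hdr;
    }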
-
- 16 Nov, 2022 39 commits
-
-
Jacob Keller authored
The mlxsw adjfine implementation in the spectrum_ptp.c file converts scaled_ppm into ppb before updating a cyclecounter multiplier using the standard "base * ppb / 1 billion" calculation. This can be re-written to use adjust_by_scaled_ppm, directly using the scaled parts per million and reducing the amount of code required to express this calculation. We still calculate the parts per billion for passing into mlxsw_sp_ptp_phc_adjfreq because this function requires the input to be in parts per billion. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Cc: Amit Cohen <amcohen@nvidia.com> Cc: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Tested-by: Ido Schimmel <idosch@nvidia.com> Link: https://lore.kernel.org/r/20221114213701.815132-1-jacob.e.keller@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
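As a rough sketch of what the conversion looks like (driver-specific names are paraphrased and the "before" helper is hypothetical; adjust_by_scaled_ppm() is the generic helper from include/linux/math64.h):

    #include <linux/math64.h>
    #include <linux/time64.h>

    /* before: convert scaled_ppm to ppb, then open-code base * ppb / 1e9 */
    static u64 mult_from_ppb(u64 base_mult, s64 ppb)        /* hypothetical */
    {
            u64 adj = div_u64(base_mult * abs(ppb), NSEC_PER_SEC);

            return ppb < 0 ? base_mult - adj : base_mult + adj;
    }

    /* after: one call applies the scaled-ppm adjustment directly */
    static u64 mult_from_scaled_ppm(u64 base_mult, long scaled_ppm)
    {
            return adjust_by_scaled_ppm(base_mult, scaled_ppm);
    }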
-
Eric Dumazet authored
Annotate the lockless read of queue->synflood_warned; the xchg() that follows provides the needed data-race resolution. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
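The annotated pattern, sketched from the description (the surrounding context is illustrative):

    /* The lockless READ_ONCE() skips the atomic in the common,
     * already-warned case; the xchg() that follows is what actually
     * serialises emission of the first warning.
     */
    if (!READ_ONCE(queue->synflood_warned) &&
        xchg(&queue->synflood_warned, 1) == 0)
            net_info_ratelimited("Possible SYN flooding on port %d\n", port);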
-
Li zeming authored
The valptr pointer is of type (void *), so assignments to it from other pointer types do not need a cast. Signed-off-by: Li zeming <zeming@nfschina.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Eric Dumazet says: ==================== net: add atomic dev->stats infra Long-standing KCSAN issues are caused by data-races around some dev->stats changes. Most performance-critical paths already use per-cpu or per-queue variables. It is reasonable (and more correct) to use atomic operations for the slow paths. The first patch adds the infrastructure, then three patches address the most common paths that syzbot is playing with. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Most code paths in tunnels are lockless (e.g. NETIF_F_LLTX in tx). Adopt the SMP-safe DEV_STATS_INC() to update dev->stats fields. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Most code paths in tunnels are lockless (e.g. NETIF_F_LLTX in tx). Adopt the SMP-safe DEV_STATS_{INC|ADD}() to update dev->stats fields. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
syzbot/KCSAN reported that multiple cpus are updating dev->stats.tx_errors concurrently. This is because sit tunnels are NETIF_F_LLTX, meaning their ndo_start_xmit() is not protected by a spinlock. While the original KCSAN report was about the tx path, the rx path has the same issue. Reported-by: syzbot <syzkaller@googlegroups.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
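The shape of the conversion in these tunnel patches, as a sketch (DEV_STATS_INC() is the helper added by the infrastructure patch of this series):

    /* before: a plain increment, which races when ndo_start_xmit()
     * runs without a lock (NETIF_F_LLTX)
     */
    dev->stats.tx_errors++;

    /* after: an SMP-safe atomic update of the same field */
    DEV_STATS_INC(dev, tx_errors);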
-
Eric Dumazet authored
Long-standing KCSAN issues are caused by data-races around some dev->stats changes. Most performance-critical paths already use per-cpu or per-queue variables. It is reasonable (and more correct) to use atomic operations for the slow paths. This patch adds a union for each field of net_device_stats, so that we can convert paths that are not yet protected by a spinlock or a mutex. netdev_stats_to_stats64() no longer needs an #if BITS_PER_LONG == 64 block. Note that the memcpy() we were using on 64bit arches had no provision to avoid load-tearing, while atomic_long_read() provides the needed protection at no cost. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
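Sketched for a single field (the real patch applies such a union to every net_device_stats field via a macro; DEV_STATS_INC()/DEV_STATS_ADD() are the names the patch introduces):

    struct net_device_stats {
            union {
                    unsigned long   rx_errors;
                    atomic_long_t   __rx_errors;
            };
            /* ... one such union per field ... */
    };

    #define DEV_STATS_INC(DEV, FIELD) \
            atomic_long_inc(&(DEV)->stats.__##FIELD)
    #define DEV_STATS_ADD(DEV, FIELD, VAL) \
            atomic_long_add((VAL), &(DEV)->stats.__##FIELD)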
-
David S. Miller authored
Eric Dumazet says: ==================== net: more try_cmpxchg() conversions Adopt try_cmpxchg() and friends in more places, as this is preferred nowadays. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Adopt atomic64_try_cmpxchg() and remove the loop, to make the intent more obvious. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
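The generic shape of these try_cmpxchg() conversions, as a sketch (not the exact call site):

    atomic64_t v;
    const s64 flag = 1;
    s64 old, new, tmp;

    /* before: open-coded cmpxchg loop, refreshing 'old' by hand */
    old = atomic64_read(&v);
    for (;;) {
            tmp = atomic64_cmpxchg(&v, old, old | flag);
            if (tmp == old)
                    break;
            old = tmp;
    }

    /* after: try_cmpxchg() updates 'old' on failure, so the intent
     * reads as a single conditional update
     */
    old = atomic64_read(&v);
    do {
            new = old | flag;
    } while (!atomic64_try_cmpxchg(&v, &old, new));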
-
Eric Dumazet authored
This makes the code a bit cleaner. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
This makes the code slightly more efficient. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Adopting atomic_try_cmpxchg() makes the code cleaner. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Adopt atomic_try_cmpxchg() which is slightly more efficient. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: David Ahern <dsahern@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Adopt atomic_long_try_cmpxchg() in mm_account_pinned_pages() as it is slightly more efficient. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Willem de Bruijn <willemb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
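Sketched from the function named above (field names paraphrased; the bounds check is why the loop needs an early exit, which try_cmpxchg() expresses cleanly):

    old_pg = atomic_long_read(&user->locked_vm);
    do {
            new_pg = old_pg + num_pg;
            if (new_pg > max_pg)    /* would exceed the pinned-page limit */
                    return -ENOBUFS;
            /* on failure, old_pg is refreshed with the current value */
    } while (!atomic_long_try_cmpxchg(&user->locked_vm, &old_pg, new_pg));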
-
Vladimir Oltean authored
RFC 2863 says: "The lowerLayerDown state is also a refinement on the down state. This new state indicates that this interface runs 'on top of' one or more other interfaces (see ifStackTable) and that this interface is down specifically because one or more of these lower-layer interfaces are down."

DSA interfaces are virtual network devices, stacked on top of the DSA master, but they have a physical MAC, with a PHY that reports a real link status. But since DSA (perhaps improperly) uses an iflink to describe the relationship to its master since commit c0840801 ("dsa: set ->iflink on slave interfaces to the ifindex of the parent"), default_operstate() will misinterpret this to mean that every time the carrier of a DSA interface is not ok, it is because the master is not ok.

In fact, since commit c0a8a9c2 ("net: dsa: automatically bring user ports down when master goes down"), DSA cannot even in theory be in the lowerLayerDown state, because it just calls dev_close_many(), thereby going down, when the master goes down.

We could revert the commit that creates an iflink between a DSA user port and its master, especially since we now have the alternative IFLA_DSA_MASTER, which has fewer side effects. But there may be tooling in use which relies on the iflink, which has existed since 2009. We could also probably do something local within DSA to overwrite what rfc2863_policy() did, in a way similar to hsr_set_operstate(), but this seems like a hack.

What seems appropriate is to follow the iflink and check the carrier status of that interface as well. If that is down too, keep reporting lowerLayerDown; otherwise just down.

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
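A simplified sketch of the resulting policy (names follow net/core/link_watch.c; the dormant/testing states handled by the real function are omitted, and the exact patch may differ in detail):

    static unsigned char default_operstate(const struct net_device *dev)
    {
            if (netif_carrier_ok(dev))
                    return IF_OPER_UP;

            if (dev->ifindex != dev_get_iflink(dev)) {
                    const struct net_device *lower;

                    /* Only claim lowerLayerDown when the iflink peer
                     * really has no carrier either; otherwise the fault
                     * is local and plain "down" is the honest answer.
                     */
                    lower = __dev_get_by_index(dev_net(dev),
                                               dev_get_iflink(dev));
                    if (lower && !netif_carrier_ok(lower))
                            return IF_OPER_LOWERLAYERDOWN;
            }

            return IF_OPER_DOWN;
    }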
-
David S. Miller authored
Kuniyuki Iwashima says: ==================== udp: Introduce optional per-netns hash table. This series is the UDP version of the per-netns ehash series [0], which was initially in the same patch set. [1] The notable difference with TCP is that the max table size is 64K and the min size is 128. This is because the possible hash range by udp_hashfn() always fits in 64K within the same netns, and because we want to keep the bitmap in udp_lib_get_port() on the stack. Also, the UDP per-netns table isolates both the 1-tuple and 2-tuple tables. For details, please see the last patch.

patches 1-4: prep for the per-netns hash table
patch 5: add the per-netns hash table

[0]: https://lore.kernel.org/netdev/20220908011022.45342-1-kuniyu@amazon.com/
[1]: https://lore.kernel.org/netdev/20220826000445.46552-1-kuniyu@amazon.com/
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Kuniyuki Iwashima authored
The maximum hash table size is 64K due to the nature of the protocol. [0] It's smaller than TCP, and fewer sockets can cause a performance drop. On an EC2 c5.24xlarge instance (192 GiB memory), after running iperf3 in different netns, creating 32Mi sockets without data transfer in the root netns causes a regression for iperf3's connection.

    uhash_entries  sockets  length  Gbps
              64K        1       1  5.69
                       1Mi      16  5.27
                       2Mi      32  4.90
                       4Mi      64  4.09
                       8Mi     128  2.96
                      16Mi     256  2.06
                      32Mi     512  1.12

The per-netns hash table breaks the lengthy lists into shorter ones. It is useful on a multi-tenant system with thousands of netns. With smaller hash tables, we can look up sockets faster, isolate noisy neighbours, and reduce lock contention.

The max size of the per-netns table is 64K as well. This is because the possible hash range by udp_hashfn() always fits in 64K within the same netns and we cannot make full use of buckets beyond 64K.

    /* 0 < num < 64K  ->  X < hash < X + 64K */
    (num + net_hash_mix(net)) & mask;

Also, the min size is 128. We use a bitmap to search for an available port in udp_lib_get_port(). To keep the bitmap on the stack and not fire the CONFIG_FRAME_WARN error at build time, we round up the table size to 128.

The sysctl usage is the same as with TCP:

    $ dmesg | cut -d ' ' -f 6- | grep "UDP hash"
    UDP hash table entries: 65536 (order: 9, 2097152 bytes, vmalloc)

    # sysctl net.ipv4.udp_hash_entries
    net.ipv4.udp_hash_entries = 65536  # can be changed by uhash_entries

    # sysctl net.ipv4.udp_child_hash_entries
    net.ipv4.udp_child_hash_entries = 0  # disabled by default

    # ip netns add test1
    # ip netns exec test1 sysctl net.ipv4.udp_hash_entries
    net.ipv4.udp_hash_entries = -65536  # share the global table

    # sysctl -w net.ipv4.udp_child_hash_entries=100
    net.ipv4.udp_child_hash_entries = 100

    # ip netns add test2
    # ip netns exec test2 sysctl net.ipv4.udp_hash_entries
    net.ipv4.udp_hash_entries = 128  # own a per-netns table with 2^n buckets

We could optimise the hash table lookup/iteration further by removing the netns comparison for the per-netns one in the future. Also, we could optimise the sparse udp_hslot layout by putting it in udp_table.

[0]: https://lore.kernel.org/netdev/4ACC2815.7010101@gmail.com/

Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
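The sizing rules above, as a hypothetical sketch (the helper name and macro names are invented for illustration; roundup_pow_of_two() and the 128-bucket floor are as described):

    #include <linux/log2.h>
    #include <linux/minmax.h>

    #define UDP_HTAB_MIN  128U    /* keeps udp_lib_get_port()'s bitmap on the stack */
    #define UDP_HTAB_MAX  65536U  /* hash range within a netns never exceeds 64K */

    static unsigned int udp_hash_entries_normalise(unsigned int requested)
    {
            /* e.g. a request for 100 buckets becomes 128 (2^n) */
            return roundup_pow_of_two(clamp(requested, UDP_HTAB_MIN,
                                            UDP_HTAB_MAX));
    }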
-
Kuniyuki Iwashima authored
We will soon introduce an optional per-netns hash table for UDP. This means we cannot use udp_table directly in most places. Instead, access it via net->ipv4.udp_table. Direct access to udp_table remains valid only while initialising udp_table itself and while creating/destroying each netns. Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Kuniyuki Iwashima authored
We will soon introduce an optional per-netns hash table for UDP. This means we cannot use the global udp_seq_afinfo.udp_table to fetch a UDP hash table. Instead, set udp_seq_afinfo.udp_table to NULL for UDP and get a proper table from net->ipv4.udp_table. Note that we still need udp_seq_afinfo.udp_table for UDP-Lite. Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Kuniyuki Iwashima authored
We will soon introduce an optional per-netns hash table for UDP. This means we cannot use the global sk->sk_prot->h.udp_table to fetch a UDP hash table. Instead, set sk->sk_prot->h.udp_table to NULL for UDP and get a proper table from net->ipv4.udp_table. Note that we still need sk->sk_prot->h.udp_table for UDP-Lite. Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
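A sketch of the accessor pattern this enables (mirrors the behaviour described: UDP falls back to the per-netns table, UDP-Lite keeps its global one):

    static struct udp_table *udp_get_table_prot(struct sock *sk)
    {
            /* NULL for UDP (use the per-netns table);
             * still set for UDP-Lite (use its global table)
             */
            return sk->sk_prot->h.udp_table ? : sock_net(sk)->ipv4.udp_table;
    }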
-
Kuniyuki Iwashima authored
This patch adds no functional change; it cleans up some functions that the following patches touch so that they are tidy and easy to review/revert. The change mainly keeps reverse christmas tree order, as illustrated below. Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
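For readers unfamiliar with the convention, reverse christmas tree order simply means local variable declarations sorted longest line first:

    struct udp_hslot *hslot2;
    struct net *net;
    int ret;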
-
David S. Miller authored
Edward Cree says: ==================== sfc: TC offload counters EF100 hardware supports attaching counters to action-sets in the MAE. Use these counters to implement stats for TC flower offload. The counters are delivered to the host over a special hardware RX queue which should only ever receive counter update messages, not 'real' network packets. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Edward Cree authored
On FLOW_CLS_STATS, look up the MAE counter by TC cookie, and report the change in packet and byte count since the last time FLOW_CLS_STATS read them. Signed-off-by: Edward Cree <ecree.xilinx@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
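The reporting step, sketched (flow_stats_update() is the standard TC offload stats API; the delta bookkeeping fields are paraphrased from the description):

    /* Report only the delta accumulated since the previous
     * FLOW_CLS_STATS call, then remember the new baseline.
     */
    flow_stats_update(&tc->stats,
                      cnt->byte_count - cnt->old_byte_count,
                      cnt->packet_count - cnt->old_packet_count,
                      0, cnt->touched, FLOW_ACTION_HW_STATS_DELAYED);
    cnt->old_byte_count = cnt->byte_count;
    cnt->old_packet_count = cnt->packet_count;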
-
Edward Cree authored
Currently the only actions supported are COUNT and DELIVER, which can only happen in the right order; but when more actions are added, it will be necessary to check that they are only used in the same order in which the hardware performs them (since the hardware API takes an action *set* in which the order is implicit). For instance, a VLAN pop must not follow a VLAN push. Most practical use-cases should be unaffected by these restrictions. Add a function efx_tc_flower_action_order_ok() that checks whether it is appropriate to add a specified action to the existing action-set. Signed-off-by: Edward Cree <ecree.xilinx@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
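One way the ordering check can be expressed, as a sketch (enum values and struct fields are illustrative; the real driver encodes the MAE's fixed pipeline order):

    enum efx_tc_action_order {           /* in hardware execution order */
            EFX_TC_AO_COUNT,
            EFX_TC_AO_DELIVER,
    };

    static bool efx_tc_flower_action_order_ok(const struct efx_tc_action_set *act,
                                              enum efx_tc_action_order new_act)
    {
            switch (new_act) {
            case EFX_TC_AO_COUNT:
                    if (act->count)         /* at most one counter per set */
                            return false;
                    fallthrough;            /* COUNT must precede DELIVER */
            case EFX_TC_AO_DELIVER:
                    return !act->deliver;   /* deliver ends the action-set */
            default:
                    return false;
            }
    }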
-
Edward Cree authored
The only actions that expect stats (that sfc HW supports) are gact shot (drop), mirred redirect and mirred mirror. Since these are 'deliverish' actions that end an action-set, we require at most one counter per action-set. Signed-off-by: Edward Cree <ecree.xilinx@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Edward Cree authored
Add the packet and byte counts to the software running total, and store the latest jiffies every time the counter is bumped. Signed-off-by: Edward Cree <ecree.xilinx@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Edward Cree authored
efx_tc_flower_get_counter_index() will create an MAE counter mapped to the passed (TC filter) cookie, or increment the reference if one already exists for that cookie. Signed-off-by: Edward Cree <ecree.xilinx@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
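The lookup-or-create behaviour, sketched (structure layout and table name are paraphrased; the mapping is keyed by the TC cookie in an rhashtable):

    struct efx_tc_counter_index {
            unsigned long cookie;           /* TC filter cookie, table key */
            struct rhash_head linkage;
            refcount_t ref;
            u32 fw_id;                      /* counter allocated in the MAE */
    };

    ctr = rhashtable_lookup_fast(&efx->tc->counter_id_ht, &cookie, params);
    if (ctr) {
            refcount_inc(&ctr->ref);        /* existing mapping, share it */
            return ctr;
    }
    /* ... otherwise allocate an MAE counter, refcount_set(&ctr->ref, 1),
     * and insert the new entry into the rhashtable ...
     */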
-
Edward Cree authored
Nothing populates them yet. Signed-off-by: Edward Cree <ecree.xilinx@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Edward Cree authored
Currently there is no counter-allocating machinery to connect the resulting counter update values to; that will be added in a subsequent patch. Signed-off-by: Edward Cree <ecree.xilinx@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Edward Cree authored
Start and stop MAE counter streaming, and grant credits. Signed-off-by: Edward Cree <ecree.xilinx@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Edward Cree authored
The TC extra channel will need its own special RX handling, which must operate before any code that expects the RX buffer to contain a network packet; buffers on this RX queue contain MAE counter packets in a special format that does not resemble an Ethernet frame, and many fields of the RX packet prefix are not populated. The USER_MARK field, however, is populated with the generation count from the counter subsystem, which needs to be passed on to the RX handler. Signed-off-by: Edward Cree <ecree.xilinx@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Edward Cree authored
The TC extra channel needs to do extra work in efx_{start,stop}_channels() to start/stop MAE counter streaming from the hardware. Add callbacks for it to implement. Signed-off-by: Edward Cree <ecree.xilinx@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Edward Cree authored
EF100 hardware streams MAE counter updates to the driver over a dedicated RX queue; however, the MCPU is not able to detect when RX buffers have been posted to the ring. Thus, the driver must call MC_CMD_MAE_COUNTERS_STREAM_GIVE_CREDITS; this patch adds the infrastructure to support that to the core RXQ handling code. Signed-off-by: Edward Cree <ecree.xilinx@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Edward Cree authored
Macro PREFIX_WIDTH_MASK uses unsigned long arithmetic for a shift of up to 32 bits, which breaks on 32-bit systems. This did not previously show up as we weren't using any fields of width 32, but we now need to access ESF_GZ_RX_PREFIX_USER_MARK. Change it to unsigned long long. Signed-off-by: Edward Cree <ecree.xilinx@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
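The essence of the fix (macro body paraphrased): for a 32-bit-wide field, 1UL << 32 shifts by the full width of a 32-bit long, which is undefined behaviour in C; unsigned long long is guaranteed to be at least 64 bits.

    /* before (broken on 32-bit systems for 32-bit-wide fields):
     *   #define PREFIX_WIDTH_MASK(_f)  ((1UL << _f ## _WIDTH) - 1)
     */
    #define PREFIX_WIDTH_MASK(_f)   ((1ULL << _f ## _WIDTH) - 1)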
-
Jakub Kicinski authored
Vladimir Oltean says: ==================== Remove phylink_validate() from Felix DSA driver The Felix DSA driver still uses its own phylink_validate() procedure rather than the (relatively newly introduced) phylink_generic_validate() because the latter did not cater for the case where a PHY provides rate matching between the Ethernet cable side speed and the SERDES side speed (and does not advertise other speeds except for the SERDES speed). This changed with Sean Anderson's generic support for rate matching PHYs in phylib and phylink: https://patchwork.kernel.org/project/netdevbpf/cover/20220920221235.1487501-1-sean.anderson@seco.com/ Building upon that support, this patch set makes Linux understand that the PHYs used in combination with the Felix DSA driver (SCH-30841 riser card with AQR412 PHY, used with SERDES protocol 0x7777 - 4x2500base-x, plugged into LS1028A-QDS) do support PAUSE rate matching. This requires Aquantia PHY driver support for new PHY IDs. To activate the rate matching support in phylink, config->mac_capabilities must be populated. Coincidentally, this also opts the Felix driver into the generic phylink validation. Next, code that is no longer necessary is eliminated. This includes the Felix driver validation procedures for VSC9959 and VSC9953, the workaround in the Ocelot switch library to leave RX flow control always enabled, as well as DSA plumbing necessary for a custom phylink validation procedure to be propagated to the hardware driver level. Many thanks go to Sean Anderson for providing generic support for rate matching. ==================== Link: https://lore.kernel.org/r/20221114170730.2189282-1-vladimir.oltean@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Vladimir Oltean authored
As of now, no DSA driver uses a custom link mode validation procedure anymore. So remove this DSA operation and let phylink determine what is supported based on config->mac_capabilities (if provided by the driver). Leave a comment explaining why we kept the code we did, and noting that there is more work to do. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Vladimir Oltean authored
As phylink gained generic support for PHYs with rate matching via PAUSE frames, the phylink_mac_link_up() method will be called with the maximum speed and with rx_pause=true if rate matching is in use. This means that setups with 2500base-x as the SERDES protocol between the MAC/PCS and the PHY now work with no need for the driver to do anything special. Tested with fsl-ls1028a-qds-7777.dts. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Vladimir Oltean authored
Drop the custom implementation of phylink_validate() in favor of the generic one, which requires config->mac_capabilities to be set. This was used up until now because of the possibility of being paired with Aquantia PHYs with support for rate matching. The phylink framework gained generic support for these, and knows to advertise all 10/100/1000 lower speed link modes when our SERDES protocol is 2500base-x (fixed speed). Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
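The opt-in, sketched (the MAC_* capability flags are phylink's; the exact set for VSC9959/VSC9953 may differ):

    static void felix_phylink_get_caps(struct dsa_switch *ds, int port,
                                       struct phylink_config *config)
    {
            /* Populating mac_capabilities opts the driver into
             * phylink_generic_validate(); with a 2500base-x SERDES and a
             * rate-matching PHY, phylink itself advertises 10/100/1000.
             */
            config->mac_capabilities = MAC_ASYM_PAUSE | MAC_SYM_PAUSE |
                                       MAC_10 | MAC_100 | MAC_1000FD |
                                       MAC_2500FD;
    }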
-