- 08 Apr, 2022 40 commits
-
Ido Schimmel authored
For better error reporting to user space, add an extack message when mirred action offload fails. Currently, the failure cannot be triggered, but add a message in case the action is extended in the future to support more than ingress/egress mirror/redirect. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
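For context, a minimal sketch of what such extack-aware action offload setup looks like (illustrative only — the helper names follow include/net/tc_act/tc_mirred.h, but this is not the upstream hunk, and the real callback also handles the ingress variants):

```c
#include <net/flow_offload.h>
#include <net/tc_act/tc_mirred.h>

/* Illustrative sketch, not the exact upstream diff. */
static int mirred_offload_act_setup_sketch(struct tc_action *act,
					   struct flow_action_entry *entry,
					   struct netlink_ext_ack *extack)
{
	if (is_tcf_mirred_egress_redirect(act)) {
		entry->id = FLOW_ACTION_REDIRECT;
		entry->dev = tcf_mirred_dev(act);
	} else if (is_tcf_mirred_egress_mirror(act)) {
		entry->id = FLOW_ACTION_MIRRED;
		entry->dev = tcf_mirred_dev(act);
	} else {
		/* Cannot currently trigger, but reports clearly if mirred
		 * grows modes beyond ingress/egress mirror/redirect. */
		NL_SET_ERR_MSG_MOD(extack, "Unsupported mirred offload");
		return -EOPNOTSUPP;
	}
	return 0;
}
```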
-
Ido Schimmel authored
For better error reporting to user space, add extack messages when gact action offload fails. Example:

# echo 1 > /sys/kernel/tracing/events/netlink/netlink_extack/enable

# tc filter add dev dummy0 ingress pref 1 proto all matchall skip_sw action continue
Error: cls_matchall: Failed to setup flow action.
We have an error talking to the kernel

# cat /sys/kernel/tracing/trace_pipe
tc-181 [002] b..1. 105.493450: netlink_extack: msg=act_gact: Offload of "continue" action is not supported
tc-181 [002] ..... 105.493466: netlink_extack: msg=cls_matchall: Failed to setup flow action

# tc filter add dev dummy0 ingress pref 1 proto all matchall skip_sw action reclassify
Error: cls_matchall: Failed to setup flow action.
We have an error talking to the kernel

# cat /sys/kernel/tracing/trace_pipe
tc-183 [002] b..1. 124.126477: netlink_extack: msg=act_gact: Offload of "reclassify" action is not supported
tc-183 [002] ..... 124.126489: netlink_extack: msg=cls_matchall: Failed to setup flow action

# tc filter add dev dummy0 ingress pref 1 proto all matchall skip_sw action pipe action drop
Error: cls_matchall: Failed to setup flow action.
We have an error talking to the kernel

# cat /sys/kernel/tracing/trace_pipe
tc-185 [002] b..1. 137.097791: netlink_extack: msg=act_gact: Offload of "pipe" action is not supported
tc-185 [002] ..... 137.097804: netlink_extack: msg=cls_matchall: Failed to setup flow action

Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
The callback is used by various actions to populate the flow action structure prior to offload. Pass extack to this callback so that the various actions will be able to report accurate error messages to user space. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
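A sketch of the resulting callback shape (parameter list simplified from include/net/act_api.h; treat the details as illustrative):

```c
#include <linux/netlink.h>
#include <net/act_api.h>

/* Sketch: the per-action offload setup hook now receives extack, so
 * each action can attach a precise error message on failure. */
struct tc_action_ops_sketch {
	int (*offload_act_setup)(struct tc_action *act, void *entry_data,
				 u32 *index_inc, bool bind,
				 struct netlink_ext_ack *extack);
};
```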
-
Ido Schimmel authored
The verbose flag was added in commit 81c7288b ("sched: cls: enable verbose logging") to avoid suppressing logging of error messages that occur "when the rule is not to be exclusively executed by the hardware". However, such error messages are currently suppressed when setup of flow action fails. Take the verbose flag into account to avoid suppressing error messages. This is done by using the extack pointer initialized by tc_cls_common_offload_init(), which performs the necessary checks.

Before:

# tc filter add dev dummy0 ingress pref 1 proto ip flower dst_ip 198.51.100.1 action police rate 100Mbit burst 10000
# tc filter add dev dummy0 ingress pref 2 proto ip flower verbose dst_ip 198.51.100.1 action police rate 100Mbit burst 10000

After:

# tc filter add dev dummy0 ingress pref 1 proto ip flower dst_ip 198.51.100.1 action police rate 100Mbit burst 10000
# tc filter add dev dummy0 ingress pref 2 proto ip flower verbose dst_ip 198.51.100.1 action police rate 100Mbit burst 10000
Warning: cls_flower: Failed to setup flow action.

Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
The verbose flag was added in commit 81c7288b ("sched: cls: enable verbose logging") to avoid suppressing logging of error messages that occur "when the rule is not to be exclusively executed by the hardware". However, such error messages are currently suppressed when setup of flow action fails. Take the verbose flag into account to avoid suppressing error messages. This is done by using the extack pointer initialized by tc_cls_common_offload_init(), which performs the necessary checks. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
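For reference, the gating helper works essentially like this (a sketch based on the description above; the real inline lives in include/net/pkt_cls.h) — extack is only exposed to the offload path for skip_sw rules or when "verbose" was requested:

```c
#include <net/pkt_cls.h>

static inline void
tc_cls_common_offload_init(struct flow_cls_common_offload *cls_common,
			   const struct tcf_proto *tp, u32 flags,
			   struct netlink_ext_ack *extack)
{
	cls_common->chain_index = tp->chain->index;
	cls_common->protocol = tp->protocol;
	cls_common->prio = tp->prio >> 16;
	/* Only hand extack to drivers when the error may be reported. */
	if (tc_skip_sw(flags) || flags & TCA_CLS_FLAGS_VERBOSE)
		cls_common->extack = extack;
}
```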
-
git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue David S. Miller authored
Tony Nguyen says: ==================== 100GbE Intel Wired LAN Driver Updates 2022-04-07

Alexander Lobakin says:

This hunts down several places around packet templates/dummies for switch rules which are either repetitive, fragile or just not really readable code. It's a common need to add new packet templates and to review such changes as well; try to simplify both with the help of a pair of macros and aliases. ice_find_dummy_packet() became very complex at this point, with tons of nested if-elses. It clearly showed this approach does not scale, so convert its logic to a simple mask-match + static const array. bloat-o-meter is happy about that (built w/ LLVM 13):

add/remove: 0/1 grow/shrink: 1/1 up/down: 2/-1058 (-1056)
Function                      old    new   delta
ice_fill_adv_dummy_packet     289    291      +2
ice_adv_add_update_vsi_list   201      -    -201
ice_add_adv_rule             2950   2093    -857
Total: Before=414512, After=413456, chg -0.25%

add/remove: 53/52 grow/shrink: 0/0 up/down: 4660/-3988 (672)
RO Data                       old    new   delta
ice_dummy_pkt_profiles          -    672    +672
Total: Before=37895, After=38567, chg +1.77%

Diffstat also looks nice, and adding new packet templates now takes fewer lines. We'll probably come out with dynamic template crafting in a while, but for now let's improve what we have currently. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
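A sketch of the mask-match + static const array idea (attribute bits, template arrays, and names below are hypothetical, not the ice definitions): each profile records which packet attributes its template requires, profiles are ordered from most to least specific, and lookup is a linear scan ending at a catch-all.

```c
#include <linux/bits.h>
#include <linux/types.h>

/* Hypothetical attribute bits and template arrays for illustration. */
#define ATTR_IPV6	BIT(0)
#define ATTR_TCP	BIT(1)

static const u8 ipv6_tcp_pkt[74];	/* template bytes elided */
static const u8 ipv6_pkt[54];
static const u8 ipv4_pkt[42];		/* catch-all template */

struct dummy_pkt_profile {
	u32 match;	/* attributes this template requires */
	const u8 *pkt;
	u16 pkt_len;
};

/* Ordered from most to least specific; the last entry matches anything. */
static const struct dummy_pkt_profile dummy_pkt_profiles[] = {
	{ ATTR_IPV6 | ATTR_TCP, ipv6_tcp_pkt, sizeof(ipv6_tcp_pkt) },
	{ ATTR_IPV6,            ipv6_pkt,     sizeof(ipv6_pkt) },
	{ 0,                    ipv4_pkt,     sizeof(ipv4_pkt) },
};

static const struct dummy_pkt_profile *find_dummy_packet(u32 attrs)
{
	const struct dummy_pkt_profile *p = dummy_pkt_profiles;

	/* Mask-match: take the first profile whose required bits are all
	 * present in @attrs; the zero-mask catch-all always terminates. */
	while (p->match && (p->match & attrs) != p->match)
		p++;
	return p;
}
```

This is why bloat-o-meter shows code shrinking while read-only data grows: the nested if-else logic moves into the const table.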
-
David S. Miller authored
Potin Lai says: ==================== mdio: aspeed: Add Clause 45 support for Aspeed MDIO

This patch series adds Clause 45 support to the Aspeed MDIO driver and separates the c22 and c45 implementations into different functions.

LINK: [v1] https://lore.kernel.org/all/20220329161949.19762-1-potin.lai@quantatw.com/
LINK: [v2] https://lore.kernel.org/all/20220406170055.28516-1-potin.lai@quantatw.com/

Changes v2 --> v3:
- sort local variable sequence in reverse Christmas tree format

Changes v1 --> v2:
- add C45 to probe_capabilities
- break one patch into 3 small patches
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Potin Lai authored
Add Clause 45 support to the Aspeed MDIO driver. Signed-off-by: Potin Lai <potin.lai@quantatw.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Potin Lai authored
Add the following functions to move the implementation out of aspeed_mdio_read() and aspeed_mdio_write().

c22:
- aspeed_mdio_read_c22()
- aspeed_mdio_write_c22()

c45:
- aspeed_mdio_read_c45()
- aspeed_mdio_write_c45()

Signed-off-by: Potin Lai <potin.lai@quantatw.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
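A sketch of the dispatch after the split (in this kernel era, a C45 access is flagged by the MII_ADDR_C45 bit in the register number; driver internals elided):

```c
#include <linux/mdio.h>
#include <linux/phy.h>

static int aspeed_mdio_read(struct mii_bus *bus, int addr, int regnum)
{
	/* Route C45 accesses to the dedicated helper, C22 otherwise. */
	if (regnum & MII_ADDR_C45)
		return aspeed_mdio_read_c45(bus, addr, regnum);
	return aspeed_mdio_read_c22(bus, addr, regnum);
}

static int aspeed_mdio_write(struct mii_bus *bus, int addr, int regnum, u16 val)
{
	if (regnum & MII_ADDR_C45)
		return aspeed_mdio_write_c45(bus, addr, regnum, val);
	return aspeed_mdio_write_c22(bus, addr, regnum, val);
}
```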
-
Potin Lai authored
Add aspeed_mdio_op() and aspeed_mdio_get_data() for register access. aspeed_mdio_op() handles operations: it writes the command to the control register, then checks and waits until the operation is finished (bit 31 is cleared). aspeed_mdio_get_data() fetches the result of the operation from the data register. Signed-off-by: Potin Lai <potin.lai@quantatw.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
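A sketch of the two helpers under the behavior the message describes (register offsets, field masks, and poll intervals here are illustrative assumptions; readl_poll_timeout() and FIELD_GET() are real kernel APIs):

```c
#include <linux/bitfield.h>
#include <linux/io.h>
#include <linux/iopoll.h>
#include <linux/phy.h>

/* Illustrative register layout, not verified against the datasheet. */
#define ASPEED_MDIO_CTRL		0x0
#define ASPEED_MDIO_CTRL_FIRE		BIT(31)
#define ASPEED_MDIO_DATA		0x4
#define ASPEED_MDIO_DATA_MIIRDATA	GENMASK(15, 0)
#define ASPEED_MDIO_INTERVAL_US		100
#define ASPEED_MDIO_TIMEOUT_US		2000

struct aspeed_mdio {
	void __iomem *base;
};

static int aspeed_mdio_op(struct mii_bus *bus, u32 cmd)
{
	struct aspeed_mdio *ctx = bus->priv;
	u32 ctrl;

	/* Fire the operation... */
	iowrite32(cmd | ASPEED_MDIO_CTRL_FIRE, ctx->base + ASPEED_MDIO_CTRL);

	/* ...then wait for the controller to clear bit 31. */
	return readl_poll_timeout(ctx->base + ASPEED_MDIO_CTRL, ctrl,
				  !(ctrl & ASPEED_MDIO_CTRL_FIRE),
				  ASPEED_MDIO_INTERVAL_US,
				  ASPEED_MDIO_TIMEOUT_US);
}

static int aspeed_mdio_get_data(struct mii_bus *bus)
{
	struct aspeed_mdio *ctx = bus->priv;
	u32 data = ioread32(ctx->base + ASPEED_MDIO_DATA);

	/* The read result sits in the low bits of the data register. */
	return FIELD_GET(ASPEED_MDIO_DATA_MIIRDATA, data);
}
```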
-
Jakub Kicinski authored
The driver for ATM Ambassador devices spews build warnings on microblaze. The virt_to_bus() calls discard the volatile keyword. The right thing to do would be to migrate this driver to a modern DMA API, but it seems unlikely anyone is actually using it. There had been no fixes or functional changes here since the git era began. In fact, it sounds like the FW loading was broken from 2008 'til 2012 - see commit fcdc90b0 ("atm: forever loop loading ambassador firmware"). Let's remove this driver; there isn't much changing in the APIs, and if users come forward we can apologize and revert. Link: https://lore.kernel.org/all/20220321144013.440d7fc0@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com/ Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Michael Chan says: ==================== bnxt: Support XDP multi buffer

This series adds XDP multi buffer support, allowing MTU to go beyond the page size limit.

v4: Rebase with latest net-next
v3: Simplify page mode buffer size calculation; check to make sure the XDP program supports multipage packets
v2: Fix uninitialized variable warnings in patches 1 and 10
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andy Gospodarek authored
Allow aggregation buffers to be in place in the receive path and allow XDP programs to be attached when using a larger-than-4k MTU. v3: Add a check to make sure the XDP program supports multipage packets. Signed-off-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andy Gospodarek authored
This patch adds the following features:
- Support for XDP_TX and XDP_DROP actions when using an xdp_buff with frags
- Support for freeing all frags attached to an xdp_buff
- Cleanup of TX ring buffers after transmits complete
- Slight change in the definition of bnxt_sw_tx_bd since nr_frags and RX producer may both need to be used
- Clearing out skb_shared_info at the end of the buffer

v2: Fix uninitialized variable warning in bnxt_xdp_buff_frags_free().

Signed-off-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
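As a sketch, freeing every frag hanging off an xdp_buff looks roughly like this with the generic multi-buffer helpers (the real bnxt_xdp_buff_frags_free() differs in how pages are returned to the rings):

```c
#include <linux/skbuff.h>
#include <net/xdp.h>

static void xdp_buff_frags_free_sketch(struct xdp_buff *xdp)
{
	struct skb_shared_info *sinfo;
	int i;

	if (!xdp_buff_has_frags(xdp))
		return;

	/* The shared_info lives at the tail of the xdp_buff's head page. */
	sinfo = xdp_get_shared_info_from_buff(xdp);
	for (i = 0; i < sinfo->nr_frags; i++) {
		struct page *page = skb_frag_page(&sinfo->frags[i]);

		__free_page(page);	/* sketch: driver recycling elided */
	}
	sinfo->nr_frags = 0;
	sinfo->xdp_frags_size = 0;
}
```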
-
Andy Gospodarek authored
Since we now have an xdp_buff with frags, there needs to be a way to convert that into a valid sk_buff in the event that XDP_PASS is the resulting operation. This adds a new rx_skb_func for when the netdev has an MTU that prevents the packets from sitting in a single page. This also makes sure that GRO/LRO stay disabled even when using the aggregation ring for large buffers. v3: Use BNXT_PAGE_MODE_BUF_SIZE for build_skb Signed-off-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
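A hedged sketch of this XDP_PASS conversion using the generic multi-buffer helpers (the truesize accounting is a rough approximation and the real bnxt function differs):

```c
#include <linux/skbuff.h>
#include <net/xdp.h>

static struct sk_buff *build_skb_from_xdp_frags(struct xdp_buff *xdp)
{
	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
	unsigned int headroom = xdp->data - xdp->data_hard_start;
	struct sk_buff *skb;

	/* Reuse the head page; shared_info already sits at its tail. */
	skb = build_skb(xdp->data_hard_start, xdp->frame_sz);
	if (!skb)
		return NULL;

	skb_reserve(skb, headroom);
	skb_put(skb, xdp->data_end - xdp->data);

	/* Hand the already-populated frag array over to the skb. */
	if (xdp_buff_has_frags(xdp))
		xdp_update_skb_shared_info(skb, sinfo->nr_frags,
					   sinfo->xdp_frags_size,
					   sinfo->nr_frags * PAGE_SIZE,
					   xdp_buff_is_frag_pfmemalloc(xdp));
	return skb;
}
```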
-
Andy Gospodarek authored
If we are using aggregation rings with XDP enabled, allocate page buffers for the aggregation rings from the page_pool. Signed-off-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andy Gospodarek authored
Modify ring header data split and jumbo parameters to account for the fact that the design for XDP multibuffer puts close to the first 4k of data in a page and the remaining portions of the packet go in the aggregation ring. v3: Simplified code around initial buffer size calculation Signed-off-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andy Gospodarek authored
Set the pfmemalloc flag in the xdp buff so that this can be copied to the skb if needed for an XDP_PASS action. Signed-off-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andy Gospodarek authored
This patch adds a new function that will read pages from the aggregation ring and create an xdp_buff with frags based on the entries in the aggregation ring. Signed-off-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
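A sketch of the per-page step of such a function (ring traversal, DMA sync, and bnxt naming elided; the helpers used are the generic XDP multi-buffer API):

```c
#include <linux/skbuff.h>
#include <net/xdp.h>

static void xdp_attach_agg_page(struct xdp_buff *xdp, struct page *page,
				unsigned int offset, unsigned int len)
{
	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
	skb_frag_t *frag;

	/* First aggregation entry: start a fresh frag list on the buff. */
	if (!xdp_buff_has_frags(xdp)) {
		sinfo->nr_frags = 0;
		sinfo->xdp_frags_size = 0;
		xdp_buff_set_frags_flag(xdp);
	}

	frag = &sinfo->frags[sinfo->nr_frags++];
	__skb_frag_set_page(frag, page);
	skb_frag_off_set(frag, offset);
	skb_frag_size_set(frag, len);
	sinfo->xdp_frags_size += len;
}
```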
-
Andy Gospodarek authored
Clarify that this is reading buffers from the aggregation ring. Signed-off-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andy Gospodarek authored
Rather than operating on an sk_buff, add frags from the aggregation ring into the frags of an skb_shared_info. This will allow the caller to use either an sk_buff or xdp_buff. Signed-off-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andy Gospodarek authored
This will be used to determine if bnxt_rx_xdp should be called rather than calling it every time. Signed-off-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andy Gospodarek authored
Move initialization of xdp_buff outside of bnxt_rx_xdp to prepare for allowing bnxt_rx_xdp to operate on multibuffer xdp_buffs. v2: Fix uninitialized variable warning in bnxt_xdp.c. v3: Add new define BNXT_PAGE_MODE_BUF_SIZE Signed-off-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Jakub Kicinski says: ==================== tls: rx: random refactoring part 1 TLS Rx refactoring. Part 1 of 3. A couple of features to follow. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
Instead of tls_device poking into internals of the message, return 1 from tls_device_decrypted() if the device handled the decryption. Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
Use early return and a jump label to remove two indentation levels. No functional changes. Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
We inform the applications that data is available when the record is received. Decryption happens inline inside recvmsg or splice call. Generating another wakeup inside the decryption handler seems pointless as someone must be actively reading the socket if we are executing this code. Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
The TLS 1.3 padding-length logic searches for the content_type byte from the end of the plaintext. IMHO the code is easier to parse if we calculate an offset and decrement it, rather than try to maintain a positive offset from the end of the record, called "back". Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
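A sketch of the backward scan being described, in the offset-decrementing style the message argues for (the function name is hypothetical; TLS 1.3 places zero padding after the inner content-type byte):

```c
#include <linux/errno.h>
#include <linux/types.h>

static int tls13_strip_padding(const u8 *text, int len, u8 *content_type)
{
	int offset = len - 1;

	/* Walk back over the zero padding... */
	while (offset >= 0 && text[offset] == 0)
		offset--;
	if (offset < 0)
		return -EBADMSG;	/* record was nothing but padding */

	/* ...the last non-zero byte is the inner content type. */
	*content_type = text[offset];
	return offset;			/* length of the real content */
}
```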
-
Jakub Kicinski authored
TLS 1.3 has to strip padding, and it starts out 16 bytes from the end of the record. Make it clear this is because of the auth tag. Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
We set the record type in tls_read_size(), so we can init the tlm->decrypted field there as well. Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
Similar justification to the previous change: the information about decryption status belongs in the skb. Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
Original TLS implementation was handling one record at a time. It stashed the type of the record inside tls context (per socket structure) for convenience. When async crypto support was added [1] the author had to use skb->cb to store the type per-message. The use of skb->cb overlaps with strparser, however, so a hybrid approach was taken where type is stored in context while parsing (since we parse a message at a time) but once parsed its copied to skb->cb. Recently a workaround for sockmaps [2] exposed the previously private struct _strp_msg and started a trend of adding user fields directly in strparser's header. This is cleaner than storing information about an skb in the context. This change is not strictly necessary, but IMHO the ownership of the context field is confusing. Information naturally belongs to the skb. [1] commit 94524d8f ("net/tls: Add support for async decryption of tls records") [2] commit b2c46181 ("bpf, sockmap: sk_skb data_end access incorrect when src_reg = dst_reg") Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
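A sketch of the resulting layout (simplified relative to the upstream structs; the point is that per-record state now rides in skb->cb immediately extending strparser's strp_msg):

```c
#include <linux/skbuff.h>
#include <net/strparser.h>

/* Simplified vs. upstream: per-record TLS state stored in skb->cb. */
struct tls_msg {
	struct strp_msg rxm;	/* strparser's portion of skb->cb */
	u8 control;		/* TLS record type */
	u8 decrypted;		/* set once the record is decrypted */
};

static inline struct tls_msg *tls_msg(struct sk_buff *skb)
{
	/* rxm is the first member, so the strp_msg pointer doubles as
	 * the tls_msg pointer. */
	return (struct tls_msg *)strp_msg(skb);
}
```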
-
Jakub Kicinski authored
Pointless else branch after goto makes the code harder to refactor down the line. Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
'recv_end:' checks num_async and decrypted, and is then followed by the 'end' label. Since we know that decrypted and num_async are 0 at the start we can jump to 'end'. Move the init of decrypted and num_async to let the compiler catch if I'm wrong. Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net Jakub Kicinski authored
No conflicts. Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net Linus Torvalds authored
Pull networking fixes from Paolo Abeni:
"Including fixes from bpf and netfilter.

Current release - new code bugs:
 - mctp: correct mctp_i2c_header_create result
 - eth: fungible: fix reference to __udivdi3 on 32b builds
 - eth: micrel: remove latencies support lan8814

Previous releases - regressions:
 - bpf: resolve to prog->aux->dst_prog->type only for BPF_PROG_TYPE_EXT
 - vrf: fix packet sniffing for traffic originating from ip tunnels
 - rxrpc: fix a race in rxrpc_exit_net()
 - dsa: revert "net: dsa: stop updating master MTU from master.c"
 - eth: ice: fix MAC address setting

Previous releases - always broken:
 - tls: fix slab-out-of-bounds bug in decrypt_internal
 - bpf: support dual-stack sockets in bpf_tcp_check_syncookie
 - xdp: fix coalescing for page_pool fragment recycling
 - ovs: fix leak of nested actions
 - eth: sfc:
    - add missing xdp queue reinitialization
    - fix using uninitialized xdp tx_queue
 - eth: ice:
    - clear default forwarding VSI during VSI release
    - fix broken IFF_ALLMULTI handling
    - synchronize_rcu() when terminating rings
 - eth: qede: confirm skb is allocated before using
 - eth: aqc111: fix out-of-bounds accesses in RX fixup
 - eth: slip: fix NPD bug in sl_tx_timeout()"

* tag 'net-5.18-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (61 commits)
  drivers: net: slip: fix NPD bug in sl_tx_timeout()
  bpf: Adjust bpf_tcp_check_syncookie selftest to test dual-stack sockets
  bpf: Support dual-stack sockets in bpf_tcp_check_syncookie
  myri10ge: fix an incorrect free for skb in myri10ge_sw_tso
  net: usb: aqc111: Fix out-of-bounds accesses in RX fixup
  qede: confirm skb is allocated before using
  net: ipv6mr: fix unused variable warning with CONFIG_IPV6_PIMSM_V2=n
  net: phy: mscc-miim: reject clause 45 register accesses
  net: axiemac: use a phandle to reference pcs_phy
  dt-bindings: net: add pcs-handle attribute
  net: axienet: factor out phy_node in struct axienet_local
  net: axienet: setup mdio unconditionally
  net: sfc: fix using uninitialized xdp tx_queue
  rxrpc: fix a race in rxrpc_exit_net()
  net: openvswitch: fix leak of nested actions
  net: ethernet: mv643xx: Fix over zealous checking of_get_mac_address()
  net: openvswitch: don't send internal clone attribute to the userspace.
  net: micrel: Fix KS8851 Kconfig
  ice: clear cmd_type_offset_bsz for TX rings
  ice: xsk: fix VSI state check in ice_xsk_wakeup()
  ...
-
GONG, Ruiqi authored
Simply use kmemdup instead of explicitly allocating and copying memory. Generated by: scripts/coccinelle/api/memdup.cocci Signed-off-by: GONG, Ruiqi <gongruiqi1@huawei.com> Link: https://lore.kernel.org/r/20220406114629.182833-1-gongruiqi1@huawei.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
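The general shape of the memdup.cocci transformation (buf, src, and len are placeholders, not the driver's identifiers):

```c
#include <linux/slab.h>
#include <linux/string.h>

void *buf;

/* before: open-coded allocate-and-copy */
buf = kmalloc(len, GFP_KERNEL);
if (!buf)
	return -ENOMEM;
memcpy(buf, src, len);

/* after: kmemdup() collapses both steps, same failure semantics */
buf = kmemdup(src, len, GFP_KERNEL);
if (!buf)
	return -ENOMEM;
```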
-
Andrea Parri (Microsoft) authored
This is useful for debugging purposes. Notice that the packet descriptor is in "private" guest memory, so Hyper-V cannot tamper with it. While at it, remove two unnecessary u64-casts. Signed-off-by: Andrea Parri (Microsoft) <parri.andrea@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Xiaomeng Tong authored
The define for_each_pci_dev(d) is: while ((d = pci_get_device(PCI_ANY_ID, PCI_ANY_ID, d)) != NULL) Thus, the list iterator 'd' is always non-NULL inside the loop, so it doesn't need to be checked. Just remove the unnecessary NULL check. Also remove the unnecessary initializer, because the list iterator is always initialized. Signed-off-by: Xiaomeng Tong <xiam0nd.tong@gmail.com> Link: https://lore.kernel.org/r/20220406015921.29267-1-xiam0nd.tong@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
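A hypothetical call site, before and after (count_device() is a placeholder; whether the NULL initializer can also go depends on the surrounding code, since the macro feeds the iterator back into pci_get_device() as its search cursor):

```c
#include <linux/pci.h>

/* before: the in-loop check is dead code, since the loop body only
 * runs when pci_get_device() returned a device. */
struct pci_dev *pdev = NULL;

for_each_pci_dev(pdev) {
	if (!pdev)		/* never true here */
		continue;
	count_device(pdev);
}

/* after */
for_each_pci_dev(pdev)
	count_device(pdev);
```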
-
Robin Murphy authored
Even if an IOMMU might be present for some PCI segment in the system, that doesn't necessarily mean it provides translation for the device we care about. It appears that what we care about here is specifically whether DMA mapping ops involve any IOMMU overhead or not, so check for translation actually being active for our device. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Acked-by: Edward Cree <ecree.xilinx@gmail.com> Link: https://lore.kernel.org/r/7350f957944ecfce6cce90f422e3992a1f428775.1649166055.git.robin.murphy@arm.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
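A sketch of the distinction (details differ from the actual sfc change; iommu_get_domain_for_dev() and IOMMU_DOMAIN_IDENTITY are real kernel APIs):

```c
#include <linux/device.h>
#include <linux/iommu.h>

static bool dev_dma_is_translated(struct device *dev)
{
	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);

	/* Translation only adds DMA-mapping overhead when a non-identity
	 * (i.e. non-passthrough) domain is attached to this device;
	 * an IOMMU elsewhere on the bus is irrelevant. */
	return domain && domain->type != IOMMU_DOMAIN_IDENTITY;
}
```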
-