- 08 Apr, 2022 36 commits
-
-
David S. Miller authored
Michael Chan says: ==================== bnxt: Support XDP multi buffer This series adds XDP multi buffer support, allowing the MTU to go beyond the page size limit. v4: Rebase with latest net-next v3: Simplify page mode buffer size calculation; check to make sure the XDP program supports multipage packets v2: Fix uninitialized variable warnings in patches 1 and 10. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andy Gospodarek authored
Allow aggregation buffers to be in place in the receive path and allow XDP programs to be attached when using an MTU larger than 4k. v3: Add a check to make sure the XDP program supports multipage packets. Signed-off-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
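A minimal sketch of what such a guard can look like, assuming the xdp_has_frags flag that multi-buffer-aware BPF programs set in struct bpf_prog_aux; the function name, the BNXT_MAX_PAGE_MODE_MTU limit and the warning text are illustrative, not the driver's actual code.

    /* Hedged sketch: refuse to attach a program that cannot handle
     * multi-page packets when the configured MTU needs aggregation buffers.
     */
    static int xdp_check_mtu_vs_prog(struct net_device *dev, struct bpf_prog *prog)
    {
        if (prog && dev->mtu > BNXT_MAX_PAGE_MODE_MTU &&
            !prog->aux->xdp_has_frags) {
            netdev_warn(dev, "MTU %d too large for an XDP program without frags support\n",
                        dev->mtu);
            return -EOPNOTSUPP;
        }
        return 0;
    }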
-
Andy Gospodarek authored
This patch adds the following features: - Support for XDP_TX and XDP_DROP action when using xdp_buff with frags - Support for freeing all frags attached to an xdp_buff - Cleanup of TX ring buffers after transmits complete - Slight change in definition of bnxt_sw_tx_bd since nr_frags and RX producer may both need to be used - Clear out skb_shared_info at the end of the buffer v2: Fix uninitialized variable warning in bnxt_xdp_buff_frags_free(). Signed-off-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
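A sketch of what freeing all frags attached to a multi-buffer xdp_buff can look like, built on the generic xdp_buff helpers; returning the pages through page_pool_recycle_direct() is an assumption about how the driver allocated them.

    /* Hedged sketch: release every frag page attached to a multi-buffer
     * xdp_buff, e.g. after an XDP_DROP verdict or a failed XDP_TX.
     */
    static void xdp_buff_frags_release(struct page_pool *pool, struct xdp_buff *xdp)
    {
        struct skb_shared_info *shinfo;
        int i;

        if (!xdp_buff_has_frags(xdp))
            return;

        shinfo = xdp_get_shared_info_from_buff(xdp);
        for (i = 0; i < shinfo->nr_frags; i++) {
            struct page *page = skb_frag_page(&shinfo->frags[i]);

            page_pool_recycle_direct(pool, page);
        }
        shinfo->nr_frags = 0;
    }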
-
Andy Gospodarek authored
Since we have an xdp_buff with frags there needs to be a way to convert that into a valid sk_buff in the event that XDP_PASS is the resulting operation. This adds a new rx_skb_func when the netdev has an MTU that prevents the packets from sitting in a single page. This also makes sure that GRO/LRO stay disabled even when using the aggregation ring for large buffers. v3: Use BNXT_PAGE_MODE_BUF_SIZE for build_skb Signed-off-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
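A rough sketch of the xdp_buff-to-sk_buff conversion for the multi-buffer case, using build_skb() and the generic xdp_update_skb_shared_info() helper; the buf_size argument and the truesize math are placeholders for whatever the driver actually uses.

    /* Hedged sketch: turn a multi-buffer xdp_buff into an sk_buff on
     * XDP_PASS, handing the frag pages over to the skb's shared info.
     */
    static struct sk_buff *xdp_multi_buff_to_skb(struct xdp_buff *xdp,
                                                 unsigned int buf_size)
    {
        struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
        unsigned int nr_frags = sinfo->nr_frags;
        unsigned int frags_len = sinfo->xdp_frags_size;
        struct sk_buff *skb;

        skb = build_skb(xdp->data_hard_start, buf_size);
        if (!skb)
            return NULL;

        skb_reserve(skb, xdp->data - xdp->data_hard_start);
        skb_put(skb, xdp->data_end - xdp->data);

        if (xdp_buff_has_frags(xdp))
            xdp_update_skb_shared_info(skb, nr_frags, frags_len,
                                       nr_frags * buf_size,
                                       xdp_buff_is_frag_pfmemalloc(xdp));
        return skb;
    }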
-
Andy Gospodarek authored
If we are using aggregation rings with XDP enabled, allocate page buffers for the aggregation rings from the page_pool. Signed-off-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andy Gospodarek authored
Modify ring header data split and jumbo parameters to account for the fact that the XDP multibuffer design places roughly the first 4k of data in a single page, with the remaining portions of the packet going into the aggregation ring. v3: Simplified code around initial buffer size calculation Signed-off-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andy Gospodarek authored
Set the pfmemalloc flag in the xdp_buff so that it can be copied to the skb if needed for an XDP_PASS action. Signed-off-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andy Gospodarek authored
This patch adds a new function that will read pages from the aggregation ring and create an xdp_buff with frags based on the entries in the aggregation ring. Signed-off-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andy Gospodarek authored
Clarify that this is reading buffers from the aggregation ring. Signed-off-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andy Gospodarek authored
Rather than operating on an sk_buff, add frags from the aggregation ring into the frags of an skb_shared_info. This will allow the caller to use either an sk_buff or xdp_buff. Signed-off-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andy Gospodarek authored
This will be used to determine if bnxt_rx_xdp should be called rather than calling it every time. Signed-off-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andy Gospodarek authored
Move initialization of the xdp_buff outside of bnxt_rx_xdp to prepare for allowing bnxt_rx_xdp to operate on multibuffer xdp_buffs. v2: Fix uninitialized variable warnings in bnxt_xdp.c. v3: Add new define BNXT_PAGE_MODE_BUF_SIZE Signed-off-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
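The generic helpers for doing that initialization in the RX path look roughly like this; rx_init_xdp_buff() is a hypothetical wrapper and the headroom handling is illustrative.

    /* Hedged sketch: prepare the xdp_buff where the buffer is received,
     * so the XDP-run helper only gets an already-initialized buffer.
     */
    static void rx_init_xdp_buff(struct xdp_buff *xdp, struct xdp_rxq_info *rxq,
                                 void *data_ptr, unsigned int headroom,
                                 unsigned int len, u32 frame_sz)
    {
        xdp_init_buff(xdp, frame_sz, rxq);
        xdp_prepare_buff(xdp, data_ptr - headroom, headroom, len, false);
    }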
-
David S. Miller authored
Jakub Kicinski says: ==================== tls: rx: random refactoring part 1 TLS Rx refactoring. Part 1 of 3. A couple of features to follow. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
Instead of tls_device poking into the internals of the message, return 1 from tls_device_decrypted() if the device handled the decryption. Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
Use early return and a jump label to remove two indentation levels. No functional changes. Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
We inform the applications that data is available when the record is received. Decryption happens inline inside recvmsg or splice call. Generating another wakeup inside the decryption handler seems pointless as someone must be actively reading the socket if we are executing this code. Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
The TLS 1.3 padding-length logic searches for the content_type from the end of the text. IMHO the code is easier to parse if we calculate an offset and decrement it, rather than trying to maintain a positive offset from the end of the record called "back". Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
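For reference, in TLS 1.3 the real content type is the last non-zero byte of the decrypted inner plaintext, followed only by zero padding. A hedged sketch of the backward scan described here, operating on a plain buffer rather than the kernel's skb/strparser structures:

    /* Hedged sketch: find the TLS 1.3 inner content type and the number
     * of trailing bytes to strip; 'len' excludes the auth tag.
     */
    static int tls13_strip_padding(const u8 *plaintext, int len, u8 *content_type)
    {
        int offset = len - 1;

        while (offset >= 0 && plaintext[offset] == 0)
            offset--;                   /* skip zero padding bytes */
        if (offset < 0)
            return -EBADMSG;            /* record was all padding */

        *content_type = plaintext[offset];
        return len - offset;            /* content type byte + padding */
    }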
-
Jakub Kicinski authored
TLS 1.3 has to strip padding, and it starts out 16 bytes from the end of the record. Make it clear this is because of the auth tag. Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
We set the record type in tls_read_size(), so we may as well init the tlm->decrypted field there. Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
Similar justification to previous change, the information about decryption status belongs in the skb. Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
The original TLS implementation was handling one record at a time. It stashed the type of the record inside the tls context (per-socket structure) for convenience. When async crypto support was added [1] the author had to use skb->cb to store the type per message. The use of skb->cb overlaps with strparser, however, so a hybrid approach was taken where the type is stored in the context while parsing (since we parse a message at a time) but once parsed it is copied to skb->cb. Recently a workaround for sockmaps [2] exposed the previously private struct _strp_msg and started a trend of adding user fields directly in strparser's header. This is cleaner than storing information about an skb in the context. This change is not strictly necessary, but IMHO the ownership of the context field is confusing; the information naturally belongs to the skb. [1] commit 94524d8f ("net/tls: Add support for async decryption of tls records") [2] commit b2c46181 ("bpf, sockmap: sk_skb data_end access incorrect when src_reg = dst_reg") Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
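After a refactor along these lines, the per-record state can sit right next to strparser's message state in skb->cb. A hedged sketch of the shape; the field set is illustrative rather than the exact final kernel layout.

    /* Hedged sketch: keep TLS per-record state alongside strparser's
     * struct strp_msg, which already lives in skb->cb.
     */
    struct tls_msg {
        struct strp_msg rxm;    /* must stay first: strparser owns it */
        u8 control;             /* TLS record type */
        u8 decrypted;           /* 1 once the record has been decrypted */
    };

    static inline struct tls_msg *tls_msg(struct sk_buff *skb)
    {
        return (struct tls_msg *)strp_msg(skb);
    }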
-
Jakub Kicinski authored
Pointless else branch after goto makes the code harder to refactor down the line. Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
'recv_end:' checks num_async and decrypted, and is then followed by the 'end' label. Since we know that decrypted and num_async are 0 at the start we can jump to 'end'. Move the init of decrypted and num_async to let the compiler catch if I'm wrong. Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net. No conflicts. Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Linus Torvalds authored
Merge tag 'net-5.18-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Paolo Abeni:
"Including fixes from bpf and netfilter.

Current release - new code bugs:
- mctp: correct mctp_i2c_header_create result
- eth: fungible: fix reference to __udivdi3 on 32b builds
- eth: micrel: remove latencies support lan8814

Previous releases - regressions:
- bpf: resolve to prog->aux->dst_prog->type only for BPF_PROG_TYPE_EXT
- vrf: fix packet sniffing for traffic originating from ip tunnels
- rxrpc: fix a race in rxrpc_exit_net()
- dsa: revert "net: dsa: stop updating master MTU from master.c"
- eth: ice: fix MAC address setting

Previous releases - always broken:
- tls: fix slab-out-of-bounds bug in decrypt_internal
- bpf: support dual-stack sockets in bpf_tcp_check_syncookie
- xdp: fix coalescing for page_pool fragment recycling
- ovs: fix leak of nested actions
- eth: sfc:
  - add missing xdp queue reinitialization
  - fix using uninitialized xdp tx_queue
- eth: ice:
  - clear default forwarding VSI during VSI release
  - fix broken IFF_ALLMULTI handling
  - synchronize_rcu() when terminating rings
- eth: qede: confirm skb is allocated before using
- eth: aqc111: fix out-of-bounds accesses in RX fixup
- eth: slip: fix NPD bug in sl_tx_timeout()"

* tag 'net-5.18-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (61 commits)
drivers: net: slip: fix NPD bug in sl_tx_timeout()
bpf: Adjust bpf_tcp_check_syncookie selftest to test dual-stack sockets
bpf: Support dual-stack sockets in bpf_tcp_check_syncookie
myri10ge: fix an incorrect free for skb in myri10ge_sw_tso
net: usb: aqc111: Fix out-of-bounds accesses in RX fixup
qede: confirm skb is allocated before using
net: ipv6mr: fix unused variable warning with CONFIG_IPV6_PIMSM_V2=n
net: phy: mscc-miim: reject clause 45 register accesses
net: axiemac: use a phandle to reference pcs_phy
dt-bindings: net: add pcs-handle attribute
net: axienet: factor out phy_node in struct axienet_local
net: axienet: setup mdio unconditionally
net: sfc: fix using uninitialized xdp tx_queue
rxrpc: fix a race in rxrpc_exit_net()
net: openvswitch: fix leak of nested actions
net: ethernet: mv643xx: Fix over zealous checking of_get_mac_address()
net: openvswitch: don't send internal clone attribute to the userspace.
net: micrel: Fix KS8851 Kconfig
ice: clear cmd_type_offset_bsz for TX rings
ice: xsk: fix VSI state check in ice_xsk_wakeup()
...
-
GONG, Ruiqi authored
Simply use kmemdup instead of explicitly allocating and copying memory. Generated by: scripts/coccinelle/api/memdup.cocci Signed-off-by: GONG, Ruiqi <gongruiqi1@huawei.com> Link: https://lore.kernel.org/r/20220406114629.182833-1-gongruiqi1@huawei.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
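The coccinelle transformation being applied is essentially the following pattern; the variable names are generic.

    /* Before: open-coded allocate + copy */
    dst = kmalloc(len, GFP_KERNEL);
    if (!dst)
        return -ENOMEM;
    memcpy(dst, src, len);

    /* After: kmemdup() does both steps, error handling stays the same */
    dst = kmemdup(src, len, GFP_KERNEL);
    if (!dst)
        return -ENOMEM;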
-
Andrea Parri (Microsoft) authored
This is useful for debugging purposes. Notice that the packet descriptor is in "private" guest memory, so Hyper-V cannot tamper with it. While at it, remove two unnecessary u64 casts. Signed-off-by: Andrea Parri (Microsoft) <parri.andrea@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Xiaomeng Tong authored
The define for_each_pci_dev(d) is: while ((d = pci_get_device(PCI_ANY_ID, PCI_ANY_ID, d)) != NULL) Thus, the list iterator 'd' is always non-NULL so it doesn't need to be checked. So just remove the unnecessary NULL check. Also remove the unnecessary initializer because the list iterator is always initialized. Signed-off-by: Xiaomeng Tong <xiam0nd.tong@gmail.com> Link: https://lore.kernel.org/r/20220406015921.29267-1-xiam0nd.tong@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
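The resulting idiom, for illustration; note the iterator still has to start from NULL so pci_get_device() begins at the head of the list, it is the in-loop NULL check that is dead code.

    static void walk_all_pci_devices(void)
    {
        struct pci_dev *pdev = NULL;

        /* pci_get_device() advances the walk and terminates it by
         * returning NULL, so the body never observes a NULL 'pdev'.
         */
        for_each_pci_dev(pdev) {
            pci_info(pdev, "found device %04x:%04x\n",
                     pdev->vendor, pdev->device);
        }
    }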
-
Robin Murphy authored
Even if an IOMMU might be present for some PCI segment in the system, that doesn't necessarily mean it provides translation for the device we care about. It appears that what we care about here is specifically whether DMA mapping ops involve any IOMMU overhead or not, so check for translation actually being active for our device. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Acked-by: Edward Cree <ecree.xilinx@gmail.com> Link: https://lore.kernel.org/r/7350f957944ecfce6cce90f422e3992a1f428775.1649166055.git.robin.murphy@arm.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Ian Wienand authored
As noted in the original commit 685343fc ("net: add name_assign_type netdev attribute"), interfaces named "... when the kernel has given the interface a name using global device enumeration based on order of discovery (ethX, wlanY, etc) ..." are labelled NET_NAME_ENUM. That describes this case, so set the default for the devices here to NET_NAME_ENUM. Current popular network setup tools like systemd use this only to warn if you're setting static settings on interfaces that might change, so it is expected this only leads to better user information, not to changes in interface naming, etc. Signed-off-by: Ian Wienand <iwienand@redhat.com> Link: https://lore.kernel.org/r/20220406093635.1601506-1-iwienand@redhat.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
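For a driver whose interfaces get kernel-enumerated names, the assign type is passed at allocation time; a hedged sketch, where struct my_priv and the function name are placeholders.

    struct my_priv {
        int dummy;      /* stand-in for the driver's private data */
    };

    /* Hedged sketch: mark the name as coming from kernel enumeration
     * (ethX ordering) rather than predictable, hardware-derived naming.
     */
    static struct net_device *my_create_netdev(void)
    {
        return alloc_netdev(sizeof(struct my_priv), "eth%d",
                            NET_NAME_ENUM, ether_setup);
    }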
-
Ping Gan authored
The congestion status of a TCP flow may be updated since there is congestion between the TCP sender and receiver. It makes sense to add a tracepoint to the congestion-status set function so that tooling can measure how long each congestion-control state lasts and evaluate the performance of the network and the congestion algorithm. The background of this patch is below. Link: https://github.com/iovisor/bcc/pull/3899 Signed-off-by: Ping Gan <jacky_gam_2001@163.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Link: https://lore.kernel.org/r/20220406010956.19656-1-jacky_gam_2001@163.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
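Conceptually the hook point is the helper every congestion-control state transition funnels through; a hedged sketch in which the tracepoint name is an assumption about this patch, not a long-standing kernel API.

    /* Hedged sketch: emit a tracepoint whenever the CA state changes so
     * tooling (e.g. BCC/bpftrace) can time how long each state lasts.
     */
    static inline void tcp_set_ca_state(struct sock *sk, const u8 ca_state)
    {
        struct inet_connection_sock *icsk = inet_csk(sk);

        trace_tcp_cong_state_set(sk, ca_state);     /* assumed tracepoint */

        if (icsk->icsk_ca_ops->set_state)
            icsk->icsk_ca_ops->set_state(sk, ca_state);
        icsk->icsk_ca_state = ca_state;
    }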
-
Jeffrey Ji authored
Increment the rx_otherhost_dropped counter when a packet is dropped due to a mismatched destination MAC address. An example where this drop can occur is when manually crafting raw packets that will be consumed by a user space application via a tap device. For testing purposes, local traffic was generated using trafgen for the client and netcat to start a server. Tested: Created 2 netns, sent 1 packet using trafgen from one to the other with "{eth(daddr=$INCORRECT_MAC...}", verified that iproute2 showed the counter was incremented. (Also had to modify iproute2 to show the stat; additional patch for that coming next.) Signed-off-by: Jeffrey Ji <jeffreyji@google.com> Reviewed-by: Brian Vazquez <brianvv@google.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Link: https://lore.kernel.org/r/20220406172600.1141083-1-jeffreyjilinux@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
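The counting itself would sit in the input path where foreign-MAC frames are already discarded; a hedged sketch in which the helper name follows the dev_core_stats_*_inc() naming pattern and is an assumption.

    /* Hedged sketch: count frames whose destination MAC did not match
     * this interface (pkt_type was set to PACKET_OTHERHOST by the link
     * layer).  Returns true if the caller should drop the skb.
     */
    static bool rx_count_otherhost_drop(struct sk_buff *skb)
    {
        if (skb->pkt_type != PACKET_OTHERHOST)
            return false;

        dev_core_stats_rx_otherhost_dropped_inc(skb->dev);
        return true;
    }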
-
Jakub Kicinski authored
Jakub Kicinski says: ==================== net: create a net/core/ internal header We are adding stuff to netdevice.h which really should be local to net/core/. Create a net/core/dev.h header and use it. Minor cleanups precede. ==================== Link: https://lore.kernel.org/r/20220406213754.731066-1-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
There's a number of functions and static variables used under net/core/ but not from the outside. We currently dump most of them into netdevice.h. That's bad for many reasons:
- netdevice.h is very cluttered, making it hard to figure out what the APIs are;
- netdevice.h is very long;
- we have to touch netdevice.h more often, which causes expensive incremental builds.
Create a header under net/core/ and move some declarations. The new header is also a bit of a catch-all, but that's fine; if we create more specific headers people will likely over-think where their declarations fit best, and end up putting them in netdevice.h again. More work should be done on splitting netdevice.h into more targeted headers, but that would be more time-consuming, so small steps. Signed-off-by: Jakub Kicinski <kuba@kernel.org>
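The shape of such an internal header is straightforward; a hypothetical minimal sketch, where the declaration shown is only an example of something used solely within net/core/.

    /* net/core/dev.h -- hypothetical minimal form of the internal header */
    #ifndef _NET_CORE_DEV_H
    #define _NET_CORE_DEV_H

    struct net;
    struct net_device;

    /* shared between net/core/dev.c and its siblings, not exported */
    int __dev_change_net_namespace(struct net_device *dev, struct net *net,
                                   const char *pat, int new_ifindex);

    #endif /* _NET_CORE_DEV_H */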
-
Jakub Kicinski authored
We have a bunch of functions which are only used under net/core/ yet they get exported. Remove the exports. Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
The following patch will hide that typedef. There seems to be no strong reason for hyperv to use it, so let's not. Acked-by: Wei Liu <wei.liu@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 07 Apr, 2022 4 commits
-
-
Linus Torvalds authored
Merge tag 'hyperv-fixes-signed-20220407' of git://git.kernel.org/pub/scm/linux/kernel/git/hyperv/linux

Pull hyperv fixes from Wei Liu:
- Correctly propagate coherence information for VMbus devices (Michael Kelley)
- Disable balloon and memory hot-add on ARM64 temporarily (Boqun Feng)
- Use barrier to prevent reordering when reading ring buffer (Michael Kelley)
- Use virt_store_mb in favour of smp_store_mb (Andrea Parri)
- Fix VMbus device object initialization (Andrea Parri)
- Deactivate sysctl_record_panic_msg on isolated guest (Andrea Parri)
- Fix a crash when unloading VMbus module (Guilherme G. Piccoli)

* tag 'hyperv-fixes-signed-20220407' of git://git.kernel.org/pub/scm/linux/kernel/git/hyperv/linux:
Drivers: hv: vmbus: Replace smp_store_mb() with virt_store_mb()
Drivers: hv: balloon: Disable balloon and hot-add accordingly
Drivers: hv: balloon: Support status report for larger page sizes
Drivers: hv: vmbus: Prevent load re-ordering when reading ring buffer
PCI: hv: Propagate coherence from VMbus device to PCI device
Drivers: hv: vmbus: Propagate VMbus coherence to each VMbus device
Drivers: hv: vmbus: Fix potential crash on module unload
Drivers: hv: vmbus: Fix initialization of device object in vmbus_device_register()
Drivers: hv: vmbus: Deactivate sysctl_record_panic_msg by default in isolated guests
-
Linus Torvalds authored
Merge tag 'random-5.18-rc2-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random

Pull random number generator fixes from Jason Donenfeld:
- Another fixup to the fast_init/crng_init split, this time in how much entropy is being credited, from Jan Varho.
- As discussed, we now opportunistically call try_to_generate_entropy() in /dev/urandom reads, as a replacement for the reverted commit. I opted to not do the more invasive wait_for_random_bytes() change at least for now, preferring to do something smaller and more obvious for the time being, but maybe that can be revisited as things evolve later.
- Userspace can use FUSE or userfaultfd, or simply move a process to idle priority, in order to make a read from the random device never complete, which breaks forward secrecy; fixed by overwriting sensitive bytes early on in the function.
- Jann Horn noticed that /dev/urandom reads were only checking for pending signals if need_resched() was true, a bug going back to the genesis commit, now fixed by always checking for signal_pending() and calling cond_resched(). This explains various noticeable signal delivery delays I've seen in programs over the years that do long reads from /dev/urandom.
- In order to be more like other devices (e.g. /dev/zero) and to mitigate the impact of fixing the above bug, which has been around forever (users have never really needed to check the return value of read() for medium-sized reads and so perhaps many didn't), we now move signal checking to the bottom part of the loop, and do so every PAGE_SIZE bytes.

* tag 'random-5.18-rc2-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random:
random: check for signals every PAGE_SIZE chunk of /dev/[u]random
random: check for signal_pending() outside of need_resched() check
random: do not allow user to keep crng key around on stack
random: opportunistically initialize on /dev/urandom reads
random: do not split fast init input in add_hwgenerator_randomness()
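Per the last two items above, the resulting read loop looks roughly like the following; this is a hedged sketch that uses get_random_bytes() as a stand-in for the driver's internal ChaCha output, not the actual random.c code.

    /* Hedged sketch: copy out random bytes in small refills, checking for
     * pending signals and rescheduling once per PAGE_SIZE of output
     * instead of only when need_resched() happens to be true.
     */
    static ssize_t copy_random_chunked(char __user *ubuf, size_t len)
    {
        u8 block[256];
        size_t since_check = 0;
        ssize_t ret = 0;

        while (len) {
            size_t chunk = min_t(size_t, len, sizeof(block));

            get_random_bytes(block, chunk);     /* stand-in for chacha output */
            if (copy_to_user(ubuf + ret, block, chunk)) {
                ret = ret ? ret : -EFAULT;
                break;
            }
            ret += chunk;
            len -= chunk;
            since_check += chunk;

            if (since_check >= PAGE_SIZE) {     /* check once per page copied */
                since_check = 0;
                if (signal_pending(current))
                    break;
                cond_resched();
            }
        }
        memzero_explicit(block, sizeof(block)); /* no RNG output left on stack */
        return ret;
    }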
-
Linus Torvalds authored
Merge tag 'ata-5.18-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/dlemoal/libata

Pull ata fixes from Damien Le Moal:
- Fix a compilation warning due to an uninitialized variable in ata_sff_lost_interrupt(), from me.
- Fix invalid internal command tag handling in the sata_dwc_460ex driver, from Christian.
- Disable READ LOG DMA EXT with Samsung 840 EVO SSDs as this command causes the drives to hang, from Christian.
- Change the config option CONFIG_SATA_LPM_POLICY back to its original name CONFIG_SATA_LPM_MOBILE_POLICY to avoid potential problems with users losing their configuration (as discussed during the merge window), from Mario.

* tag 'ata-5.18-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/dlemoal/libata:
ata: ahci: Rename CONFIG_SATA_LPM_POLICY configuration item back
ata: libata-core: Disable READ LOG DMA EXT for Samsung 840 EVOs
ata: sata_dwc_460ex: Fix crash due to OOB write
ata: libata-sff: Fix compilation warning in ata_sff_lost_interrupt()
-
Duoming Zhou authored
When a slip driver is detaching, slip_close() will clean up the necessary resources and sl->tty is set to NULL in slip_close(). Meanwhile, if the packet we transmit is blocked, sl_tx_timeout() will be called. Although slip_close() and sl_tx_timeout() use sl->lock to synchronize, we don't check whether sl->tty is NULL in sl_tx_timeout(), so a null pointer dereference can happen.

(Thread 1)                      | (Thread 2)
                                | slip_close()
                                |   spin_lock_bh(&sl->lock)
                                |   ...
                                |   sl->tty = NULL //(1)
sl_tx_timeout()                 |   spin_unlock_bh(&sl->lock)
  spin_lock(&sl->lock)          |
  ...                           |   ...
  tty_chars_in_buffer(sl->tty)  |
    if (tty->ops->..) //(2)     |
  ...                           |   synchronize_rcu()

We set NULL to sl->tty in position (1) and dereference sl->tty in position (2). This patch adds a check in sl_tx_timeout(): if sl->tty equals NULL, sl_tx_timeout() will go to out. Signed-off-by: Duoming Zhou <duoming@zju.edu.cn> Reviewed-by: Jiri Slaby <jirislaby@kernel.org> Link: https://lore.kernel.org/r/20220405132206.55291-1-duoming@zju.edu.cn Signed-off-by: Jakub Kicinski <kuba@kernel.org>
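The fix boils down to bailing out under the lock when the tty has already been detached; a hedged sketch of the shape, not the exact driver code.

    /* Hedged sketch: sl->tty may have been cleared by slip_close(), so
     * re-check it under sl->lock before dereferencing it.
     */
    static void sl_tx_timeout(struct net_device *dev, unsigned int txqueue)
    {
        struct slip *sl = netdev_priv(dev);

        spin_lock(&sl->lock);
        if (!netif_running(dev) || !sl->tty)
            goto out;

        /* ... original timeout handling that touches sl->tty ... */
    out:
        spin_unlock(&sl->lock);
    }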
-