1. 15 Feb, 2024 11 commits
  2. 14 Feb, 2024 29 commits
    • net: phy: dp83826: support TX data voltage tuning · d1d77120
      Catalin Popescu authored
      DP83826 offers the possibility to tune the voltage of logical
      levels of the MLT-3 encoded TX data. This is useful when there
      is a voltage drop in between the PHY and the connector and we
      want to increase the voltage levels to compensate for that drop.
      
      Prior to PHY configuration, the driver soft-resets the PHY, which,
      according to the datasheet, has the same effect as the HW reset pin.
      Hence, there is no need to force-update the VOD_CFG registers to make
      sure they hold their reset values: the VOD_CFG registers need to be
      updated only if the DT has been configured with values other than
      the reset defaults (see the sketch below).
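      As an illustration of that last point, here is a minimal sketch of how a
      driver can read the new DT properties and only touch VOD_CFG when the
      requested value differs from the reset default. The register, mask and
      helper names (DP83826_VOD_CFG_REG, DP83826_VOD_MINUS_ONE_RESET_BP,
      dp83826_bp_to_field()) are illustrative placeholders, not the actual
      driver symbols; the "+1" level is handled the same way.

      #include <linux/phy.h>
      #include <linux/property.h>

      /* Placeholder register, mask and reset-default values. */
      #define DP83826_VOD_CFG_REG                 0x30b
      #define DP83826_VOD_CFG_MINUS_ONE_MASK      GENMASK(5, 0)
      #define DP83826_VOD_MINUS_ONE_RESET_BP      10000

      /* Map basis points from the DT to a hypothetical register field. */
      static u16 dp83826_bp_to_field(u32 bp)
      {
              return bp / 625;
      }

      static int dp83826_config_vod(struct phy_device *phydev)
      {
              struct device *dev = &phydev->mdio.dev;
              u32 bp;

              /* Only write VOD_CFG when the DT overrides the reset default. */
              if (device_property_read_u32(dev, "ti,cfg-dac-minus-one-bp", &bp) ||
                  bp == DP83826_VOD_MINUS_ONE_RESET_BP)
                      return 0;

              return phy_modify_mmd(phydev, MDIO_MMD_VEND2, DP83826_VOD_CFG_REG,
                                    DP83826_VOD_CFG_MINUS_ONE_MASK,
                                    dp83826_bp_to_field(bp));
      }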
      Reviewed-by: Andrew Lunn <andrew@lunn.ch>
      Signed-off-by: Catalin Popescu <catalin.popescu@leica-geosystems.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      d1d77120
    • dt-bindings: net: dp83826: support TX data voltage tuning · ed1d7dac
      Catalin Popescu authored
      Add properties ti,cfg-dac-minus-one-bp/ti,cfg-dac-plus-one-bp
      to support voltage tuning of logical levels -1/+1 of the MLT-3
      encoded TX data.
      Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
      Signed-off-by: Catalin Popescu <catalin.popescu@leica-geosystems.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ed1d7dac
    • Merge branch 'dev_base_lock-remove' · 7c754e6a
      David S. Miller authored
      Eric Dumazet says:
      
      ====================
      net: complete dev_base_lock removal
      
      Back in 2009 we started an effort to get rid of dev_base_lock
      in favor of RCU.
      
      It is time to finish this work.
      
      Say goodbye to dev_base_lock !
      
      v4: rebase, and move dev_addr_sem to net/core/dev.h in patch 06/13 (Jakub)
      
      v3: I misread the kbot reports; the issue was with dev->operstate (patch 10/13),
          so dev->reg_state is back to u8, and dev->operstate becomes a u32.
          Sorry for the noise.

      v2: dev->reg_state must be a standard enum; some arches
          do not support cmpxchg() on u8.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      7c754e6a
    • net: remove dev_base_lock · 1b3ef46c
      Eric Dumazet authored
      dev_base_lock is not needed anymore, all remaining users also hold RTNL.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1b3ef46c
    • net: remove dev_base_lock from register_netdevice() and friends. · e51b9624
      Eric Dumazet authored
      RTNL already protects writes to dev->reg_state; we no longer need to hold
      dev_base_lock to protect the readers.

      unlist_netdevice()'s second argument can be removed.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      e51b9624
    • net: remove dev_base_lock from do_setlink() · 2dd4d828
      Eric Dumazet authored
      We hold RTNL here, and dev->link_mode readers are already
      using READ_ONCE().
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      2dd4d828
    • net: add netdev_set_operstate() helper · 6a2968ee
      Eric Dumazet authored
      dev_base_lock is going away; add a netdev_set_operstate() helper
      so that hsr does not have to know core internals.

      Remove the dev_base_lock acquisition from rfc2863_policy().
      
      v3: use an "unsigned int" for dev->operstate,
          so that try_cmpxchg() can work on all arches.
              ( https://lore.kernel.org/oe-kbuild-all/202402081918.OLyGaea3-lkp@intel.com/ )
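      A minimal sketch of what such a helper can look like, following the
      description above (lockless update of an unsigned int operstate via
      try_cmpxchg()); illustrative, not necessarily the exact upstream code:

      void netdev_set_operstate(struct net_device *dev, int newstate)
      {
              unsigned int old = READ_ONCE(dev->operstate);

              do {
                      if (old == newstate)
                              return;
              } while (!try_cmpxchg(&dev->operstate, &old, newstate));

              /* Tell listeners (sysfs, rtnetlink) that the state changed. */
              netdev_state_change(dev);
      }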
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      6a2968ee
    • net: remove stale mentions of dev_base_lock in comments · 328771de
      Eric Dumazet authored
      Change comments incorrectly mentioning dev_base_lock.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      328771de
    • net-sysfs: convert netstat_show() to RCU · e154bb7a
      Eric Dumazet authored
      dev_get_stats() can be called under RCU protection; there is no need
      to acquire dev_base_lock.

      Change the dev_isalive() comment to reflect that we no longer use
      dev_base_lock from net/core/net-sysfs.c.
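      The resulting reader pattern looks roughly like the sketch below
      (simplified, not the verbatim net-sysfs code): rcu_read_lock() replaces
      dev_base_lock around dev_isalive() and dev_get_stats().

      static ssize_t netstat_show(const struct device *d,
                                  struct device_attribute *attr, char *buf,
                                  unsigned long offset)
      {
              struct net_device *dev = to_net_dev(d);
              ssize_t ret = -EINVAL;

              rcu_read_lock();
              if (dev_isalive(dev)) {
                      struct rtnl_link_stats64 temp;
                      const struct rtnl_link_stats64 *stats = dev_get_stats(dev, &temp);

                      ret = sysfs_emit(buf, "%llu\n",
                                       *(u64 *)((u8 *)stats + offset));
              }
              rcu_read_unlock();

              return ret;
      }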
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      e154bb7a
    • net-sysfs: convert dev->operstate reads to lockless ones · 004d1383
      Eric Dumazet authored
      operstate_show() does not need to acquire dev_base_lock just
      to read dev->operstate.
      
      Annotate accesses to dev->operstate.
      
      Writers still acquire dev_base_lock for mutual exclusion.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      004d1383
    • net-sysfs: use dev_addr_sem to remove races in address_show() · c7d52737
      Eric Dumazet authored
      Using dev_base_lock does not prevent readers from reading garbage.
      
      Use dev_addr_sem instead.
      
      v4: place dev_addr_sem extern in net/core/dev.h (Jakub Kicinski)
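      A sketch of the resulting reader (simplified; field formatting and error
      handling follow the usual net-sysfs patterns rather than the exact
      upstream diff): the MAC address is read under dev_addr_sem, so a
      concurrent address change can no longer yield a torn value.

      static ssize_t address_show(struct device *d, struct device_attribute *attr,
                                  char *buf)
      {
              struct net_device *ndev = to_net_dev(d);
              ssize_t ret = -EINVAL;

              down_read(&dev_addr_sem);
              if (dev_isalive(ndev))
                      ret = sysfs_emit(buf, "%pM\n", ndev->dev_addr);
              up_read(&dev_addr_sem);

              return ret;
      }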
      Link: https://lore.kernel.org/netdev/20240212175845.10f6680a@kernel.org/
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c7d52737
    • net-sysfs: convert netdev_show() to RCU · 12692e3d
      Eric Dumazet authored
      Make clear dev_isalive() can be called with RCU protection.
      
      Then convert netdev_show() to RCU, to remove dev_base_lock
      dependency.
      
      Also add RCU to broadcast_show().
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      12692e3d
    • net: convert dev->reg_state to u8 · 4d42b37d
      Eric Dumazet authored
      Prepare things so that dev->reg_state reads can be lockless,
      by adding WRITE_ONCE() on the write side.
      
      READ_ONCE()/WRITE_ONCE() do not support bitfields.
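      In practice the change boils down to the following pattern (illustrative
      snippet, not a complete diff): the writer, still under RTNL, publishes the
      new state with WRITE_ONCE(), and lockless readers pair it with READ_ONCE();
      this only works because reg_state is now a plain u8 instead of a bitfield.

      /* Writer side, e.g. register_netdevice(), runs under RTNL. */
      WRITE_ONCE(dev->reg_state, NETREG_REGISTERED);

      /* Lockless reader side. */
      if (READ_ONCE(dev->reg_state) == NETREG_REGISTERED)
              netdev_info(dev, "device is registered\n");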
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      4d42b37d
    • dev: annotate accesses to dev->link · a6473fe9
      Eric Dumazet authored
      The following patch will read dev->link locklessly;
      annotate the write in do_setlink().
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a6473fe9
    • ip_tunnel: annotate data-races around t->parms.link · f694eee9
      Eric Dumazet authored
      t->parms.link is read locklessly; annotate these reads
      and the corresponding writes accordingly.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f694eee9
    • net: annotate data-races around dev->name_assign_type · 1c07dbb0
      Eric Dumazet authored
      name_assign_type_show() runs locklessly, so we should annotate
      accesses to dev->name_assign_type.

      An alternative would be to grab the devnet_rename_sem semaphore
      in name_assign_type_show(), but this would not bring
      more accuracy.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1c07dbb0
    • Merge branch 'per-epoll-context-busy-poll' · b7f9ef72
      David S. Miller authored
      Joe Damato says:
      
      ====================
      Per epoll context busy poll support
      
      Greetings:
      
      Welcome to v8.
      
      TL;DR This builds on commit bf3b9f63 ("epoll: Add busy poll support to
      epoll with socket fds.") by allowing user applications to enable
      epoll-based busy polling, set a busy poll packet budget, and enable or
      disable prefer busy poll on a per epoll context basis.
      
      This makes epoll-based busy polling much more usable for user
      applications than the current system-wide sysctl and hardcoded budget.
      
      To allow for this, two ioctls have been added for epoll contexts for
      getting and setting a new struct, struct epoll_params.
      
      ioctl was chosen vs a new syscall after reviewing a suggestion by Willem
      de Bruijn [1]. I am open to using a new syscall instead of an ioctl, but it
      seemed that:
        - Busy poll affects all existing epoll_wait and epoll_pwait variants in
          the same way, so new versions of many syscalls might be needed. It
          seems much simpler for users to use the correct
          epoll_wait/epoll_pwait for their app and add a call to ioctl to enable
          or disable busy poll as needed. This also probably means less work to
          get an existing epoll app using busy poll.
      
        - previously added epoll_pwait2 helped to bring epoll closer to
          existing syscalls (like pselect and ppoll) and this busy poll change
          reflected as a new syscall would not have the same effect.
      
      Note: patch 1/4 as of v4 uses an or (||) instead of an xor. I thought about
      it some more and I realized that if the user enables both the per-epoll
      context setting and the system wide sysctl, then busy poll should be
      enabled and not disabled. Using xor doesn't seem to make much sense after
      thinking through this a bit.
      
      Longer explanation:
      
      Presently epoll has support for a very useful form of busy poll based on
      the incoming NAPI ID (see also: SO_INCOMING_NAPI_ID [2]).
      
      This form of busy poll allows epoll_wait to drive NAPI packet processing,
      which enables a few interesting user application designs that can
      reduce latency and also potentially improve L2/L3 cache hit rates by
      deferring NAPI until userland has finished its work.
      
      The documentation available on this is, IMHO, a bit confusing so please
      allow me to explain how one might use this:
      
      1. Ensure each application thread has its own epoll instance mapping
      1-to-1 with NIC RX queues. An n-tuple filter would likely be used to
      direct connections with specific dest ports to these queues.
      
      2. Optionally: Set up IRQ coalescing for the NIC RX queues where busy
      polling will occur. This can help keep the userland app from being
      preempted by a hard IRQ while it is running. Note this means that
      userland must take care to call epoll_wait and not spend too long in
      userland, since it now drives NAPI via epoll_wait.
      
      3. Optionally: Consider using napi_defer_hard_irqs and gro_flush_timeout to
      further restrict IRQ generation from the NIC. These settings are
      system-wide so their impact must be carefully weighed against the running
      applications.
      
      4. Ensure that all incoming connections added to an epoll instance
      have the same NAPI ID. This can be done with a BPF filter when
      SO_REUSEPORT is used, or with getsockopt + SO_INCOMING_NAPI_ID when a
      single accept thread is used which dispatches incoming connections to
      threads (a small getsockopt sketch follows after step 5).
      
      5. Lastly, busy poll must be enabled via a sysctl
      (/proc/sys/net/core/busy_poll).
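      For step 4, a minimal userspace sketch of the getsockopt() variant
      (assuming a Linux toolchain that exposes SO_INCOMING_NAPI_ID through
      <sys/socket.h>; error handling trimmed):

      #include <stdio.h>
      #include <sys/socket.h>

      /* Return the NAPI ID feeding this connection, so the accept thread can
       * dispatch the fd to the worker whose epoll instance owns that queue. */
      static int get_incoming_napi_id(int fd)
      {
              int napi_id = 0;
              socklen_t len = sizeof(napi_id);

              if (getsockopt(fd, SOL_SOCKET, SO_INCOMING_NAPI_ID, &napi_id, &len) < 0)
                      perror("getsockopt(SO_INCOMING_NAPI_ID)");

              return napi_id;
      }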
      
      Please see Eric Dumazet's paper about busy polling [3] and a recent
      academic paper about measured performance improvements of busy polling [4]
      (albeit with a modification that is not currently present in the kernel)
      for additional context.
      
      The unfortunate part about step 5 above is that it enables busy poll
      system-wide, which affects all user applications on the system,
      including epoll-based network applications that were not intended to
      be used this way, as well as applications where the increased CPU usage
      of lower-latency network processing is unnecessary or undesirable.
      
      If the user wants to run one low latency epoll-based server application
      with epoll-based busy poll, but would like to run the rest of the
      applications on the system (which may also use epoll) without busy poll,
      this system-wide sysctl presents a significant problem.
      
      This change preserves the system-wide sysctl, but adds a mechanism (via
      ioctl) to enable or disable busy poll for epoll contexts as needed by
      individual applications, making epoll-based busy poll more usable.
      
      Note that this change includes an or (as of v4) instead of an xor. If the
      user has enabled both the system-wide sysctl and also the per epoll-context
      busy poll settings, then epoll should probably busy poll (vs being
      disabled).
      
      Thanks,
      Joe
      
      v7 -> v8:
         - Reviewed-by tag from Eric Dumazet applied to commit message of patch
           1/4.
      
         - patch 4/4:
           - EPIOCSPARAMS and EPIOCGPARAMS updated to use WRITE_ONCE and
             READ_ONCE, as requested by Eric Dumazet
           - Wrapped a long line (via netdev/checkpatch)
      
      v6 -> v7:
         - Acked-by tags from Stanislav Fomichev applied to commit messages of
           all patches.
         - Reviewed-by tags from Jakub Kicinski, Eric Dumazet applied to commit
           messages of patches 2 and 3. Jiri Slaby's Reviewed-by applied to patch
           4.
      
         - patch 1/4:
           - busy_poll_usecs reduced from u64 to u32.
           - Unnecessary parens removed (via netdev/checkpatch)
           - Wrapped long line (via netdev/checkpatch)
           - Remove inline from busy_loop_ep_timeout as objdump suggests the
             function is already inlined
           - Moved struct eventpoll assignment to declaration
           - busy_loop_ep_timeout is moved within CONFIG_NET_RX_BUSY_POLL and the
             ifdefs internally have been removed as per Eric Dumazet's review
           - Removed ep_busy_loop_on from the !defined CONFIG_NET_RX_BUSY_POLL
             section as it is only called when CONFIG_NET_RX_BUSY_POLL is
             defined
      
         - patch 3/4:
           - Fix whitespace alignment issue (via netdev/checkpatch)
      
         - patch 4/4:
           - epoll_params.busy_poll_usecs has been reduced to u32
           - epoll_params.busy_poll_usecs is now checked to ensure it is <=
             S32_MAX
           - __pad has been reduced to a single u8
           - memchr_inv has been dropped and replaced with a simple check for the
             single __pad byte
           - Removed space after cast (via netdev/checkpatch)
           - Wrap long line (via netdev/checkpatch)
           - Move struct eventpoll *ep assignment to declaration as per Jiri
             Slaby's review
           - Remove unnecessary !! as per Jiri Slaby's review
           - Reorganized variables to be reverse christmas tree order
      
      v5 -> v6:
        - patch 1/3 no functional change, but commit message corrected to explain
          that an or (||) is being used instead of xor.
      
        - patch 3/4 is a new patch which adds support for per epoll context
          prefer busy poll setting.
      
        - patch 4/4 updated to allow getting/setting per epoll context prefer
          busy poll setting; this setting is limited to either 0 or 1.
      
      v4 -> v5:
        - patch 3/3 updated to use memchr_inv to ensure that __pad is zero for
          the EPIOCSPARAMS ioctl. Recommended by Greg K-H [5], Dave Chinner [6],
          and Jiri Slaby [7].
      
      v3 -> v4:
        - patch 1/3 was updated to include an important functional change:
          ep_busy_loop_on was updated to use or (||) instead of xor (^). After
          thinking about it a bit more, I thought xor didn't make much sense.
          Enabling both the per-epoll context and the system-wide sysctl should
          probably enable busy poll, not disable it. So, or (||) makes more
          sense, I think.
      
        - patch 3/3 was updated:
          - to change the epoll_params fields to be __u64, __u16, and __u8 and
            to pad the struct to a multiple of 64bits. Suggested by Greg K-H [8]
            and Arnd Bergmann [9].
          - remove an unused pr_fmt, left over from the previous revision.
          - ioctl now returns -EINVAL when epoll_params.busy_poll_usecs >
            U32_MAX.
      
      v2 -> v3:
        - cover letter updated to mention why ioctl seems (to me) like a better
          choice vs a new syscall.
      
        - patch 3/4 was modified in 3 ways:
          - when an unknown ioctl is received, -ENOIOCTLCMD is returned instead
            of -EINVAL as the ioctl documentation requires.
          - epoll_params.busy_poll_budget can only be set to a value larger than
            NAPI_POLL_WEIGHT if code is run by privileged (CAP_NET_ADMIN) users.
            Otherwise, -EPERM is returned.
          - busy poll specific ioctl code moved out to its own function. On
            kernels without busy poll support, -EOPNOTSUPP is returned. This also
            makes the kernel build robot happier without littering the code with
            more #ifdefs.
      
        - dropped patch 4/4 after Eric Dumazet's review of it when it was sent
          independently to the list [10].
      
      v1 -> v2:
        - cover letter updated to make a mention of napi_defer_hard_irqs and
          gro_flush_timeout as an added step 3 and to cite both Eric Dumazet's
          busy polling paper and a paper from University of Waterloo for
          additional context. Specifically calling out the xor in patch 1/4
          in case it is missed by reviewers.
      
        - Patch 2/4 has its commit message updated, but no functional changes.
          Commit message now describes that allowing for a settable budget helps
          to improve throughput and is more consistent with other busy poll
          mechanisms that allow a settable budget via SO_BUSY_POLL_BUDGET.
      
        - Patch 3/4 was modified to check if the epoll_params.busy_poll_budget
          exceeds NAPI_POLL_WEIGHT. The larger value is allowed, but an error is
          printed. This was done for consistency with netif_napi_add_weight,
          which does the same.
      
        - Patch 3/4 the struct epoll_params was updated to fix the type of the
          data field; it was uint8_t and was changed to u8.
      
        - Patch 4/4 added to check if SO_BUSY_POLL_BUDGET exceeds
          NAPI_POLL_WEIGHT. The larger value is allowed, but an error is
          printed. This was done for consistency with netif_napi_add_weight,
          which does the same.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      b7f9ef72
    • eventpoll: Add epoll ioctl for epoll_params · 18e2bf0e
      Joe Damato authored
      Add an ioctl for getting and setting epoll_params. User programs can use
      this ioctl to get and set the busy poll usec time, packet budget, and
      prefer busy poll params for a specific epoll context.
      
      Parameters are limited:
        - busy_poll_usecs is limited to <= S32_MAX
        - busy_poll_budget is limited to <= NAPI_POLL_WEIGHT by unprivileged
          users (!capable(CAP_NET_ADMIN))
        - prefer_busy_poll must be 0 or 1
        - __pad must be 0
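      A hedged userspace usage sketch, assuming the EPIOCSPARAMS/EPIOCGPARAMS
      definitions and struct epoll_params are visible through the kernel's uapi
      eventpoll header (or <sys/epoll.h>) on a kernel carrying this series:

      #include <stdio.h>
      #include <string.h>
      #include <sys/epoll.h>
      #include <sys/ioctl.h>

      int main(void)
      {
              struct epoll_params params;
              int epfd = epoll_create1(0);

              memset(&params, 0, sizeof(params));  /* __pad must be 0 */
              params.busy_poll_usecs = 64;         /* must be <= S32_MAX */
              params.busy_poll_budget = 16;        /* > NAPI_POLL_WEIGHT needs CAP_NET_ADMIN */
              params.prefer_busy_poll = 1;         /* 0 or 1 */

              if (ioctl(epfd, EPIOCSPARAMS, &params))
                      perror("EPIOCSPARAMS");

              if (ioctl(epfd, EPIOCGPARAMS, &params) == 0)
                      printf("usecs=%u budget=%u prefer=%u\n",
                             params.busy_poll_usecs, params.busy_poll_budget,
                             params.prefer_busy_poll);

              return 0;
      }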
      Signed-off-by: Joe Damato <jdamato@fastly.com>
      Acked-by: Stanislav Fomichev <sdf@google.com>
      Reviewed-by: Jiri Slaby <jirislaby@kernel.org>
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      18e2bf0e
    • eventpoll: Add per-epoll prefer busy poll option · de57a251
      Joe Damato authored
      When using epoll-based busy poll, the prefer_busy_poll option is hardcoded
      to false. Users may want to enable prefer_busy_poll to be used in
      conjunction with gro_flush_timeout and defer_hard_irqs_count to keep device
      IRQs masked.
      
      Other busy poll methods allow enabling or disabling prefer busy poll via
      SO_PREFER_BUSY_POLL, but epoll-based busy polling uses a hardcoded value.
      
      Fix this edge case by adding support for a per-epoll context
      prefer_busy_poll option. The default is false, as it was hardcoded before
      this change.
      Signed-off-by: Joe Damato <jdamato@fastly.com>
      Acked-by: Stanislav Fomichev <sdf@google.com>
      Reviewed-by: Jakub Kicinski <kuba@kernel.org>
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      de57a251
    • eventpoll: Add per-epoll busy poll packet budget · c6aa2a77
      Joe Damato authored
      When using epoll-based busy poll, the packet budget is hardcoded to
      BUSY_POLL_BUDGET (8). Users may desire larger busy poll budgets, which
      can potentially increase throughput when busy polling under high network
      load.
      
      Other busy poll methods allow setting the busy poll budget via
      SO_BUSY_POLL_BUDGET, but epoll-based busy polling uses a hardcoded
      value.
      
      Fix this edge case by adding support for a per-epoll context busy poll
      packet budget. If not specified, the default value (BUSY_POLL_BUDGET) is
      used.
      Signed-off-by: Joe Damato <jdamato@fastly.com>
      Acked-by: Stanislav Fomichev <sdf@google.com>
      Reviewed-by: Jakub Kicinski <kuba@kernel.org>
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c6aa2a77
    • eventpoll: support busy poll per epoll instance · 85455c79
      Joe Damato authored
      Allow busy polling on a per-epoll context basis. The per-epoll context
      usec timeout value is preferred, but the pre-existing system-wide sysctl
      value is still supported if it is specified.
      
      busy_poll_usecs is a u32, but in a follow up patch the ioctl provided to
      the user only allows setting a value from 0 to S32_MAX.
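      A sketch of the described selection logic (the field and helper names
      follow this cover letter, not necessarily the final fs/eventpoll.c code):
      busy polling is on when either the per-context value or the system-wide
      sysctl enables it.

      static bool ep_busy_loop_on(struct eventpoll *ep)
      {
              /* Per-epoll setting OR the global sysctl (or, not xor, as of v4). */
              return !!READ_ONCE(ep->busy_poll_usecs) || net_busy_loop_on();
      }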
      Signed-off-by: Joe Damato <jdamato@fastly.com>
      Acked-by: Stanislav Fomichev <sdf@google.com>
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      85455c79
    • net: ena: Remove redundant assignment · 723615a1
      Kamal Heib authored
      There is no point in initializing an ndo to NULL; the
      assignment is redundant and can be removed.
      Signed-off-by: Kamal Heib <kheib@redhat.com>
      Reviewed-by: Brett Creeley <brett.creeley@amd.com>
      Acked-by: Arthur Kiyanovski <akiyano@amazon.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      723615a1
    • Octeontx2-af: Fetch MAC channel info from firmware · 99781449
      Hariprasad Kelam authored
      Packet ingress and egress MAC/serdes channel numbers are configurable
      on the CN10K series of silicons. These channel numbers are in turn used
      while installing MCAM rules to match the ingress/egress port. Fetch these
      channel numbers from firmware at driver init time.
      Signed-off-by: Hariprasad Kelam <hkelam@marvell.com>
      Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      99781449
    • Merge branch '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue · b53e8464
      David S. Miller authored
      Tony Nguyen says:
      
      ====================
      Intel Wired LAN Driver Updates 2024-02-12 (ice)
      
      This series contains updates to ice driver only.
      
      Grzegorz adds support for E825-C devices.
      
      Wojciech reworks devlink reload to fulfill expected conditions (remove
      and readd).
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      b53e8464
    • Merge tag 'linux-can-next-for-6.9-20240213' of... · e1a00373
      David S. Miller authored
      Merge tag 'linux-can-next-for-6.9-20240213' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next
      
      Marc Kleine-Budde says:
      
      ====================
      linux-can-next-for-6.9-20240213
      
      this is a pull request of 23 patches for net-next/master.
      
      The first patch is by Nicolas Maier and targets the CAN Broadcast
      Manager (bcm); it adds message flags to distinguish between own, local,
      and remote traffic.
      
      Oliver Hartkopp contributes a patch for the CAN ISOTP protocol that
      adds dynamic flow control parameters.
      
      Stefan Mätje's patch series adds support for the esd PCIe/402 CAN
      interface family.
      
      Markus Schneider-Pargmann contributes 14 patches for the m_can driver to
      optimize it for the SPI-attached tcan4x5x controller.
      
      A patch by Vincent Mailhol replaces Wolfgang Grandegger with Vincent
      Mailhol as the CAN drivers co-maintainer.
      
      Jimmy Assarsson's patch adds support for the Kvaser M.2 PCIe 4xCAN
      adapter.
      
      A patch by Daniil Dulov removes a redundant NULL check in the softing
      driver.
      
      Oliver Hartkopp contributes a patch to add CANXL virtual CAN network
      identifier support.
      
      A patch by myself removes Naga Sureshkumar Relli as the maintainer of
      the xilinx_can driver, as their email bounces.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      e1a00373
    • Merge branch 'add-multi-buff-support-for-xdp-running-in-generic-mode' · f77581bf
      Jakub Kicinski authored
      Lorenzo Bianconi says:
      
      ====================
      add multi-buff support for xdp running in generic mode
      
      Introduce multi-buffer support for xdp running in generic mode by not
      always linearizing the skb in the netif_receive_generic_xdp routine.
      Introduce a generic percpu page_pools allocator.
      ====================
      
      Link: https://lore.kernel.org/r/cover.1707729884.git.lorenzo@kernel.org
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      f77581bf
    • xdp: add multi-buff support for xdp running in generic mode · e6d5dbdd
      Lorenzo Bianconi authored
      Similar to native xdp, do not always linearize the skb in the
      netif_receive_generic_xdp routine; instead, create a non-linear xdp_buff
      to be processed by the eBPF program. This allows adding multi-buffer
      support for xdp running in generic mode.
      Acked-by: Jesper Dangaard Brouer <hawk@kernel.org>
      Reviewed-by: Toke Hoiland-Jorgensen <toke@redhat.com>
      Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
      Link: https://lore.kernel.org/r/1044d6412b1c3e95b40d34993fd5f37cd2f319fd.1707729884.git.lorenzo@kernel.org
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      e6d5dbdd
    • xdp: rely on skb pointer reference in do_xdp_generic and netif_receive_generic_xdp · 4d2bb0bf
      Lorenzo Bianconi authored
      Rely on a reference to the skb pointer instead of the skb pointer itself
      in the do_xdp_generic and netif_receive_generic_xdp routine signatures.
      This is a preliminary patch to add multi-buff support for xdp running in
      generic mode, where we will need to reallocate the skb to avoid
      linearization and make the new skb visible to the do_xdp_generic()
      caller.
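      An illustrative sketch of the signature change (simplified, not the full
      patch): the callee receives a reference to the skb pointer, so an skb
      reallocated inside the XDP path becomes visible to the caller.

      int do_xdp_generic(struct bpf_prog *xdp_prog, struct sk_buff **pskb)
      {
              struct sk_buff *skb = *pskb;

              /* ... possibly reallocate 'skb' instead of linearizing it ... */

              *pskb = skb;    /* make any reallocation visible to the caller */
              return XDP_PASS;
      }

      /* Caller keeps using the (possibly updated) pointer afterwards: */
      ret = do_xdp_generic(prog, &skb);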
      Acked-by: Jesper Dangaard Brouer <hawk@kernel.org>
      Reviewed-by: Toke Hoiland-Jorgensen <toke@redhat.com>
      Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
      Link: https://lore.kernel.org/r/c09415b1f48c8620ef4d76deed35050a7bddf7c2.1707729884.git.lorenzo@kernel.org
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      4d2bb0bf