17 Mar, 2023 (40 commits)
    • net/packet: convert po->origdev to an atomic flag · ee5675ec
      Eric Dumazet authored
      syzbot/KCSAN reported that po->origdev can be read
      while another thread is changing its value.
      
      We can avoid this splat by converting this field
      to an actual bit.
      
      Following patches will convert remaining 1bit fields.
      
      Fixes: 80feaacb ("[AF_PACKET]: Add option to return orig_dev to userspace.")
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Reported-by: syzbot <syzkaller@googlegroups.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ee5675ec
    • net/packet: annotate accesses to po->xmit · b9d83ab8
      Eric Dumazet authored
      po->xmit can be set from setsockopt(PACKET_QDISC_BYPASS),
      while read locklessly.
      
      Use READ_ONCE()/WRITE_ONCE() to avoid potential load/store
      tearing issues.
      
      Fixes: d346a3fa ("packet: introduce PACKET_QDISC_BYPASS socket option")
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      b9d83ab8
    • Merge branch 'gve-xdp-support' · dc021e6c
      David S. Miller authored
      Praveen Kaligineedi says:
      
      ====================
      gve: Add XDP support for GQI-QPL format
      
      Adding support for XDP DROP, PASS, TX, REDIRECT for GQI QPL format.
      Add AF_XDP zero-copy support.
      
      When an XDP program is installed, dedicated TX queues are created to
      handle XDP traffic. The user needs to ensure that the number of
      configured TX queues is equal to the number of configured RX queues; and
      the number of TX/RX queues is less than or equal to half the maximum
      number of TX/RX queues.
      
      The XDP traffic from AF_XDP sockets and from other NICs (arriving via
      XDP_REDIRECT) will also egress through the dedicated XDP TX queues.
      
      Although these changes support AF_XDP socket in zero-copy mode, there is
      still a copy happening within the driver between XSK buffer pool and QPL
      bounce buffers in GQI-QPL format.
      
      The following example demonstrates how the XDP packets are mapped to
      TX queues:
      
      Example configuration:
      Max RX queues : 2N, Max TX queues : 2N
      Configured RX queues : N, Configured TX queues : N
      
      TX queue mapping:
      TX queues with queue id 0,...,N-1 will handle traffic from the stack.
      TX queues with queue id N,...,2N-1 will handle XDP traffic.
      
      For the XDP packets transmitted using XDP_TX action:
      <Egress TX queue id> = N + <Ingress RX queue id>
      
      For the XDP packets that arrive from other NICs via XDP_REDIRECT action:
      <Egress TX queue id> = N + ( smp_processor_id % N )
      
      For AF_XDP zero-copy mode:
      <Egress TX queue id> = N + <AF_XDP TX queue id>
      
      Changes in v2:
      - Removed gve_close/gve_open when adding XDP dedicated queues. Instead,
      we add and register additional TX queues when the XDP program is
      installed. If the allocation/registration fails, we return an error and
      do not install the XDP program. Added a new patch to enable adding TX
      queues without gve_close/gve_open
      - Removed xdp tx spin lock from this patch. It is needed for XDP_REDIRECT
      support as both XDP_REDIRECT and XDP_TX traffic share the dedicated XDP
      queues. Moved the code to add xdp tx spinlock to the subsequent patch
      that adds XDP_REDIRECT support.
      - Added netdev_err when the user tries to set rx/tx queues to values
      that are not supported when XDP is enabled.
      - Removed rcu annotation for xdp_prog. We disable the napi prior to
      adding/removing the xdp_prog and reenable it after the program has
      been installed for all the queues.
      - Ring the tx doorbell once for napi instead of every XDP TX packet.
      - Added a new helper function for freeing the FIFO buffer
      - Unregister xdp rxq for all the queues when the registration
      fails during XDP program installation
      - Register xsk rxq only when XSK buff pool is enabled
      - Removed code accessing internal xsk_buff_pool fields
      - Removed sleep driven code when disabling XSK buff pool. Disable
      napi and re-enable it after disabling XSK pool.
      - Make sure that we clean up dma mappings on XSK pool disable
      - Use napi_if_scheduled_mark_missed to avoid unnecessary napi move
      to the CPU calling ndo_xsk_wakeup()
      
      Changes in v3:
      - Padding bytes are used if the XDP TX packet headers do not
      fit at the tail of the TX FIFO. These padding bytes are now taken into
      account when checking whether enough space is available in the TX FIFO.
      
      Changes in v4:
      - Turn on the carrier based on the link status synchronously rather
      than asynchronously when XDP is installed/uninstalled
      - Set the supported flags in net_device.xdp_features
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      dc021e6c
    • gve: Add AF_XDP zero-copy support for GQI-QPL format · fd8e4032
      Praveen Kaligineedi authored
      Adding AF_XDP zero-copy support.
      
      Note: Although these changes support AF_XDP socket in zero-copy
      mode, there is still a copy happening within the driver between
      XSK buffer pool and QPL bounce buffers in GQI-QPL format.
      In GQI-QPL queue format, the driver needs to allocate a fixed size
      memory, the size specified by vNIC device, for RX/TX and register this
      memory as a bounce buffer with the vNIC device when a queue is
      created. The number of pages in the bounce buffer is limited and the
      pages need to be made available to the vNIC by copying the RX data out
      to prevent head-of-line blocking. Therefore, we cannot pass the XSK
      buffer pool to the vNIC.
      
      The number of copies on RX path from the bounce buffer to XSK buffer is 2
      for AF_XDP copy mode (bounce buffer -> allocated page frag -> XSK buffer)
      and 1 for AF_XDP zero-copy mode (bounce buffer -> XSK buffer).
      
      This patch contains the following changes:
      1) Enable and disable XSK buffer pool
      2) Copy XDP packets from QPL bounce buffers to XSK buffer on rx
      3) Copy XDP packets from XSK buffer to QPL bounce buffers and
         ring the doorbell as part of XDP TX napi poll
      4) ndo_xsk_wakeup callback support
      Signed-off-by: Praveen Kaligineedi <pkaligineedi@google.com>
      Reviewed-by: Jeroen de Borst <jeroendb@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      fd8e4032
    • gve: Add XDP REDIRECT support for GQI-QPL format · 39a7f4aa
      Praveen Kaligineedi authored
      This patch contains the following changes:
      1) Support for XDP REDIRECT action on rx
      2) ndo_xdp_xmit callback support
      
      In GQI-QPL queue format, the driver needs to allocate a fixed size
      memory, the size specified by vNIC device, for RX/TX and register this
      memory as a bounce buffer with the vNIC device when a queue is created.
      The number of pages in the bounce buffer is limited and the pages need to
      be made available to the vNIC by copying the RX data out to prevent
      head-of-line blocking. The XDP_REDIRECT packets are therefore immediately
      copied to a newly allocated page.
      Signed-off-by: Praveen Kaligineedi <pkaligineedi@google.com>
      Reviewed-by: Jeroen de Borst <jeroendb@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      39a7f4aa
    • gve: Add XDP DROP and TX support for GQI-QPL format · 75eaae15
      Praveen Kaligineedi authored
      Add support for XDP PASS, DROP and TX actions.
      
      This patch contains the following changes:
      1) Support installing/uninstalling XDP program
      2) Add dedicated XDP TX queues
      3) Add support for XDP DROP action
      4) Add support for XDP TX action
      Signed-off-by: Praveen Kaligineedi <pkaligineedi@google.com>
      Reviewed-by: Jeroen de Borst <jeroendb@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      75eaae15
    • gve: Changes to add new TX queues · 7fc2bf78
      Praveen Kaligineedi authored
      Changes to enable adding and removing TX queues without calling
      gve_close() and gve_open().
      
      Made the following changes:
      1) priv->tx, priv->rx and priv->qpls arrays are allocated based on
         max tx queues and max rx queues
      2) Changed gve_adminq_create_tx_queues(), gve_adminq_destroy_tx_queues(),
      gve_tx_alloc_rings() and gve_tx_free_rings() functions to add/remove a
      subset of TX queues rather than all the TX queues.
      Signed-off-by: Praveen Kaligineedi <pkaligineedi@google.com>
      Reviewed-by: Jeroen de Borst <jeroendb@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      7fc2bf78
    • gve: XDP support GQI-QPL: helper function changes · 2e80aeae
      Praveen Kaligineedi authored
      This patch adds/modifies helper functions needed to add XDP
      support.
      Signed-off-by: Praveen Kaligineedi <pkaligineedi@google.com>
      Reviewed-by: Jeroen de Borst <jeroendb@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      2e80aeae
    • Merge branch 'net-sk_err-lockless-annotate' · ec4040ae
      David S. Miller authored
      Eric Dumazet says:
      
      ====================
      net: annotate lockless accesses to sk_err[_soft]
      
      This patch series is inspired by yet another syzbot report.
      
      Most poll() handlers are lockless and read sk->sk_err
      while other cpus can change it.
      
      Add READ_ONCE/WRITE_ONCE() to major/usual offenders.
      
      More to come later.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ec4040ae
    • af_unix: annotate lockless accesses to sk->sk_err · cc04410a
      Eric Dumazet authored
      unix_poll() and unix_dgram_poll() read sk->sk_err
      without any lock held.
      
      Add relevant READ_ONCE()/WRITE_ONCE() annotations.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      cc04410a
    • mptcp: annotate lockless accesses to sk->sk_err · 9ae8e5ad
      Eric Dumazet authored
      mptcp_poll() reads sk->sk_err without socket lock held/owned.
      
      Add READ_ONCE() and WRITE_ONCE() to avoid load/store tearing.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      9ae8e5ad
    • tcp: annotate lockless access to sk->sk_err · e13ec3da
      Eric Dumazet authored
      tcp_poll() reads sk->sk_err without socket lock held/owned.
      
      We should use READ_ONCE() here, and update writers
      to use WRITE_ONCE().
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      e13ec3da
    • net: annotate lockless accesses to sk->sk_err_soft · 2f2d9972
      Eric Dumazet authored
      This field can be read/written without lock synchronization.
      
      tcp and dccp have been handled in different patches.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      2f2d9972
    • dccp: annotate lockless accesses to sk->sk_err_soft · 9a25f0cb
      Eric Dumazet authored
      This field can be read/written without lock synchronization.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      9a25f0cb
    • tcp: annotate lockless accesses to sk->sk_err_soft · cee1af82
      Eric Dumazet authored
      This field can be read/written without lock synchronization.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      cee1af82
    • vlan: partially enable SIOCSHWTSTAMP in container · 731b73db
      Vadim Fedorenko authored
      Setting the timestamp filter was explicitly disabled on vlan devices in
      containers because it might affect other processes on the host. But it
      is perfectly legitimate when the real device is in the same namespace.
      
      Fixes: 873017af ("vlan: disable SIOCSHWTSTAMP in container")
      Signed-off-by: Vadim Fedorenko <vadim.fedorenko@linux.dev>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      731b73db
    • Merge branch 'pcs_get_state-fixes' · e05c5181
      David S. Miller authored
      Russell King (Oracle) says:
      
      ====================
      Minor fixes for pcs_get_state() implementations
      
      This series contains a number of fixes for minor issues with some
      pcs_get_state() implementations, particularly for the phylink
      state->an_enabled member. As they are minor, I'm suggesting we
      queue them in net-next as there is follow-on work for these, and
      there is no urgency for them to be in -rc.
      
      Just like phylib, state->advertising's Autoneg bit is a copy of
      state->an_enabled, and thus it is my intention to remove
      state->an_enabled from phylink to simplify things.
      
      This series gets rid of state->an_enabled assignments or
      reporting that should never have been there.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      e05c5181
    • net: pcs: lynx: don't print an_enabled in pcs_get_state() · ecec0ebb
      Russell King (Oracle) authored
      an_enabled will be going away, and in any case, pcs_get_state() should
      not be updating this member. Remove the print.
      Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
      Reviewed-by: Steen Hegelund <Steen.Hegelund@microchip.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ecec0ebb
    • net: pcs: xpcs: remove double-read of link state when using AN · ef63461c
      Russell King (Oracle) authored
      Phylink does not want the current state of the link when reading the
      PCS link state - it wants the latched state. Don't double-read the
      MII status register. Phylink will re-read as necessary to capture
      transient link-down events as of dbae3388 ("net: phylink: Force
      retrigger in case of latched link-fail indicator").
      
      The above referenced commit is a dependency for this change, and thus
      this change should not be backported to any kernel that does not
      contain the above referenced commit.
      
      Fixes: fcb26bd2 ("net: phy: Add Synopsys DesignWare XPCS MDIO module")
      Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ef63461c
    • Merge branch 'vxlan-MDB-support' · abf36703
      David S. Miller authored
      Ido Schimmel says:
      
      ====================
      vxlan: Add MDB support
      
      tl;dr
      =====
      
      This patchset implements MDB support in the VXLAN driver, allowing it to
      selectively forward IP multicast traffic to VTEPs with interested
      receivers instead of flooding it to all the VTEPs as BUM. The motivating
      use case is intra and inter subnet multicast forwarding using EVPN
      [1][2], which means that MDB entries are only installed by the user
      space control plane and no snooping is implemented, thereby avoiding a
      lot of unnecessary complexity in the kernel.
      
      Background
      ==========
      
      Both the bridge and VXLAN drivers have an FDB that allows them to
      forward Ethernet frames based on their destination MAC addresses and
      VLAN/VNI. These FDBs are managed using the same PF_BRIDGE/RTM_*NEIGH
      netlink messages and bridge(8) utility.
      
      However, only the bridge driver has an MDB that allows it to selectively
      forward IP multicast packets to bridge ports with interested receivers
      behind them, based on (S, G) and (*, G) MDB entries. When these packets
      reach the VXLAN driver they are flooded using the "all-zeros" FDB entry
      (00:00:00:00:00:00). The entry either includes the list of all the VTEPs
      in the tenant domain (when ingress replication is used) or the multicast
      address of the BUM tunnel (when P2MP tunnels are used), to which all the
      VTEPs join.
      
      Networks that make heavy use of multicast in the overlay can benefit
      from a solution that allows them to selectively forward IP multicast
      traffic only to VTEPs with interested receivers. Such a solution is
      described in the next section.
      
      Motivation
      ==========
      
      RFC 7432 [3] defines a "MAC/IP Advertisement route" (type 2) [4] that
      allows VTEPs in the EVPN network to advertise and learn reachability
      information for unicast MAC addresses. Traffic destined to a unicast MAC
      address can therefore be selectively forwarded to a single VTEP behind
      which the MAC is located.
      
      The same is not true for IP multicast traffic. Such traffic is simply
      flooded as BUM to all VTEPs in the broadcast domain (BD) / subnet,
      regardless if a VTEP has interested receivers for the multicast stream
      or not. This is especially problematic for overlay networks that make
      heavy use of multicast.
      
      The issue is addressed by RFC 9251 [1] that defines a "Selective
      Multicast Ethernet Tag Route" (type 6) [5] which allows VTEPs in the
      EVPN network to advertise multicast streams that they are interested in.
      This is done by having each VTEP suppress IGMP/MLD packets from being
      transmitted to the NVE network and instead communicate the information
      over BGP to other VTEPs.
      
      The draft in [2] further extends RFC 9251 with procedures to allow
      efficient forwarding of IP multicast traffic not only in a given subnet,
      but also between different subnets in a tenant domain.
      
      The required changes in the bridge driver to support the above were
      already merged in merge commit 8150f0cf ("Merge branch
      'bridge-mcast-extensions-for-evpn'"). However, full support entails MDB
      support in the VXLAN driver so that it will be able to selectively
      forward IP multicast traffic only to VTEPs with interested receivers.
      The implementation of this MDB is described in the next section.
      
      Implementation
      ==============
      
      The user interface is extended to allow user space to specify the
      destination VTEP(s) and related parameters. Example usage:
      
       # bridge mdb add dev vxlan0 port vxlan0 grp 239.1.1.1 permanent dst 198.51.100.1
       # bridge mdb add dev vxlan0 port vxlan0 grp 239.1.1.1 permanent dst 192.0.2.1
      
       $ bridge -d -s mdb show
       dev vxlan0 port vxlan0 grp 239.1.1.1 permanent filter_mode exclude proto static dst 192.0.2.1    0.00
       dev vxlan0 port vxlan0 grp 239.1.1.1 permanent filter_mode exclude proto static dst 198.51.100.1    0.00
      
      Since the MDB is fully managed by user space and since snooping is not
      implemented, only permanent entries can be installed and temporary
      entries are rejected by the kernel.
      
      The netlink interface is extended with a few new attributes in the
      RTM_NEWMDB / RTM_DELMDB request messages:
      
      [ struct nlmsghdr ]
      [ struct br_port_msg ]
      [ MDBA_SET_ENTRY ]
      	struct br_mdb_entry
      [ MDBA_SET_ENTRY_ATTRS ]
      	[ MDBE_ATTR_SOURCE ]
      		struct in_addr / struct in6_addr
      	[ MDBE_ATTR_SRC_LIST ]
      		[ MDBE_SRC_LIST_ENTRY ]
      			[ MDBE_SRCATTR_ADDRESS ]
      				struct in_addr / struct in6_addr
      		[ ...]
      	[ MDBE_ATTR_GROUP_MODE ]
      		u8
      	[ MDBE_ATTR_RTPORT ]
      		u8
      	[ MDBE_ATTR_DST ]	// new
      		struct in_addr / struct in6_addr
      	[ MDBE_ATTR_DST_PORT ]	// new
      		u16
      	[ MDBE_ATTR_VNI ]	// new
      		u32
      	[ MDBE_ATTR_IFINDEX ]	// new
      		s32
      	[ MDBE_ATTR_SRC_VNI ]	// new
      		u32
      
      RTM_NEWMDB / RTM_DELMDB responses and notifications are extended with
      corresponding attributes.
      
      One MDB entry that can be installed in the VXLAN MDB, but not in the
      bridge MDB is the catchall entry (0.0.0.0 / ::). It is used to transmit
      unregistered multicast traffic that is not link-local and is especially
      useful when inter-subnet multicast forwarding is required. See patch #12
      for a detailed explanation and motivation. It is similar to the
      "all-zeros" FDB entry that can be installed in the VXLAN FDB, but not
      the bridge FDB.
      
      "added_by_star_ex" entries
      --------------------------
      
      The bridge driver automatically installs (S, G) MDB port group entries
      marked as "added_by_star_ex" whenever it detects that an (S, G) entry
      can prevent traffic from being forwarded via a port associated with an
      EXCLUDE (*, G) entry. The bridge will add the port to the port group of
      the (S, G) entry, thereby creating a new port group entry. The
      complexity associated with these entries is not trivial, but it needs to
      reside in the bridge driver because it automatically installs MDB
      entries in response to snooped IGMP / MLD packets.
      
      The same is not true for the VXLAN MDB, which is entirely managed by
      user space; user space is fully capable of forming the correct
      replication lists on its own. In addition, the complexity associated with the
      "added_by_star_ex" entries in the VXLAN driver is higher compared to the
      bridge: Whenever a remote VTEP is added to the catchall entry, it needs
      to be added to all the existing MDB entries, as such a remote requested
      all the multicast traffic to be forwarded to it. Similarly, whenever an
      (*, G) or (S, G) entry is added, all the remotes associated with the
      catchall entry need to be added to it.
      
      Given the above, this patchset does not implement support for such
      entries.  One argument against this decision can be that in the future
      someone might want to populate the VXLAN MDB in response to decapsulated
      IGMP / MLD packets and not according to EVPN routes. Regardless of my
      doubts regarding this possibility, it can be implemented using a new
      VXLAN device knob that will also enable the "added_by_star_ex"
      functionality.
      
      Testing
      =======
      
      Tested using existing VXLAN and MDB selftests under "net/" and
      "net/forwarding/". Added a dedicated selftest in the last patch.
      
      Patchset overview
      =================
      
      Patches #1-#3 are small preparations in the bridge driver. I plan to
      submit them separately together with an MDB dump test case.
      
      Patches #4-#6 are additional preparations centered around the extraction
      of the MDB netlink handlers from the bridge driver to the common
      rtnetlink code. This allows reusing the existing MDB netlink messages
      for the configuration of the VXLAN MDB.
      
      Patches #7-#9 include more small preparations in the common rtnetlink
      code and the VXLAN driver.
      
      Patch #10 implements the MDB control path in the VXLAN driver, which
      will allow user space to create, delete, replace and dump MDB entries.
      
      Patches #11-#12 implement the MDB data path in the VXLAN driver,
      allowing it to selectively forward IP multicast traffic according to the
      matched MDB entry.
      
      Patch #13 finally enables MDB support in the VXLAN driver.
      
      iproute2 patches can be found here [6].
      
      Note that in order to fully support the specifications in [1] and [2],
      additional functionality is required from the data path. However, it can
      be achieved using existing kernel interfaces which is why it is not
      described here.
      
      Changelog
      =========
      
      Since v1 [7]:
      
      Patch #9: Use htons() in 'case' instead of ntohs() in 'switch'.
      
      Since RFC [8]:
      
      Patch #3: Use NL_ASSERT_DUMP_CTX_FITS().
      Patch #3: memset the entire context when moving to the next device.
      Patch #3: Reset sequence counters when moving to the next device.
      Patch #3: Use NL_SET_ERR_MSG_ATTR() in rtnl_validate_mdb_entry().
      Patch #7: Remove restrictions regarding mixing of multicast and unicast
      remote destination IPs in an MDB entry. While such a configuration does
      not make sense to me, it is not forbidden by the VXLAN FDB code and does
      not crash the kernel.
      Patch #7: Fix check regarding all-zeros MDB entry and source.
      Patch #11: New patch.
      
      [1] https://datatracker.ietf.org/doc/html/rfc9251
      [2] https://datatracker.ietf.org/doc/html/draft-ietf-bess-evpn-irb-mcast
      [3] https://datatracker.ietf.org/doc/html/rfc7432
      [4] https://datatracker.ietf.org/doc/html/rfc7432#section-7.2
      [5] https://datatracker.ietf.org/doc/html/rfc9251#section-9.1
      [6] https://github.com/idosch/iproute2/commits/submit/mdb_vxlan_rfc_v1
      [7] https://lore.kernel.org/netdev/20230313145349.3557231-1-idosch@nvidia.com/
      [8] https://lore.kernel.org/netdev/20230204170801.3897900-1-idosch@nvidia.com/
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      abf36703
    • selftests: net: Add VXLAN MDB test · 62199e3f
      Ido Schimmel authored
      Add test cases for VXLAN MDB, testing the control and data paths. Two
      different sets of namespaces (i.e., ns{1,2}_v4 and ns{1,2}_v6) are used
      in order to test VXLAN MDB with both IPv4 and IPv6 underlays,
      respectively.
      
      Example truncated output:
      
       # ./test_vxlan_mdb.sh
       [...]
       Tests passed: 620
       Tests failed:   0
      Signed-off-by: Ido Schimmel <idosch@nvidia.com>
      Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      62199e3f
    • vxlan: Enable MDB support · 08f876a7
      Ido Schimmel authored
      Now that the VXLAN MDB control and data paths are in place we can expose
      the VXLAN MDB functionality to user space.
      
      Set the VXLAN MDB net device operations to the appropriate functions,
      thereby allowing the rtnetlink code to reach the VXLAN driver.
      Signed-off-by: Ido Schimmel <idosch@nvidia.com>
      Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      08f876a7
    • vxlan: Add MDB data path support · 0f83e69f
      Ido Schimmel authored
      Integrate MDB support into the Tx path of the VXLAN driver, allowing it
      to selectively forward IP multicast traffic according to the matched MDB
      entry.
      
      If MDB entries are configured (i.e., 'VXLAN_F_MDB' is set) and the
      packet is an IP multicast packet, perform up to three different lookups
      according to the following priority:
      
      1. For an (S, G) entry, using {Source VNI, Source IP, Destination IP}.
      2. For a (*, G) entry, using {Source VNI, Destination IP}.
      3. For the catchall MDB entry (0.0.0.0 or ::), using the source VNI.
      
      The catchall MDB entry is similar to the catchall FDB entry
      (00:00:00:00:00:00) that is currently used to transmit BUM (broadcast,
      unknown unicast and multicast) traffic. However, unlike the catchall FDB
      entry, this entry is only used to transmit unregistered IP multicast
      traffic that is not link-local. Therefore, when configured, the catchall
      FDB entry will only transmit BULL (broadcast, unknown unicast,
      link-local multicast) traffic.
      
      The catchall MDB entry is useful in deployments where inter-subnet
      multicast forwarding is used and not all the VTEPs in a tenant domain
      are members in all the broadcast domains. In such deployments it is
      advantageous to transmit BULL (broadcast, unknown unicast and link-local
      multicast) and unregistered IP multicast traffic on different tunnels.
      If the same tunnel was used, a VTEP only interested in IP multicast
      traffic would also pull all the BULL traffic and drop it as it is not a
      member in the originating broadcast domain [1].
      
      If the packet did not match an MDB entry (or if the packet is not an IP
      multicast packet), return it to the Tx path, allowing it to be forwarded
      according to the FDB.
      
      If the packet did match an MDB entry, forward it to the associated
      remote VTEPs. However, if the entry is a (*, G) entry and the associated
      remote is in INCLUDE mode, then skip over it as the source IP is not in
      its source list (otherwise the packet would have matched on an (S, G)
      entry). Similarly, if the associated remote is marked as BLOCKED (can
      only be set on (S, G) entries), then skip over it as well as the remote
      is in EXCLUDE mode and the source IP is in its source list.
      
      [1] https://datatracker.ietf.org/doc/html/draft-ietf-bess-evpn-irb-mcast#section-2.6
      Signed-off-by: Ido Schimmel <idosch@nvidia.com>
      Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      0f83e69f
    • vxlan: mdb: Add an internal flag to indicate MDB usage · bc6c6b01
      Ido Schimmel authored
      Add an internal flag to indicate whether MDB entries are configured or
      not. Set the flag after installing the first MDB entry and clear it
      before deleting the last one.
      
      The flag will be consulted by the data path which will only perform an
      MDB lookup if the flag is set, thereby keeping the MDB overhead to a
      minimum when the MDB is not used.
      
      Another option would have been to use a static key, but it is global and
      not per-device, unlike the current approach.
      Signed-off-by: Ido Schimmel <idosch@nvidia.com>
      Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      bc6c6b01
    • vxlan: mdb: Add MDB control path support · a3a48de5
      Ido Schimmel authored
      Implement MDB control path support, enabling the creation, deletion,
      replacement and dumping of MDB entries in a similar fashion to the
      bridge driver. Unlike the bridge driver, each entry stores a list of
      remote VTEPs to which matched packets need to be replicated, rather
      than a list of bridge ports.
      
      The motivating use case is the installation of MDB entries by a user
      space control plane in response to received EVPN routes. As such, only
      allow permanent MDB entries to be installed and do not implement
      snooping functionality, avoiding a lot of unnecessary complexity.
      
      Since entries can only be modified by user space under RTNL, use RTNL as
      the write lock. Use RCU to ensure that MDB entries and remotes are not
      freed while being accessed from the data path during transmission.
      
      In terms of uAPI, reuse the existing MDB netlink interface, but add a
      few new attributes to request and response messages:
      
      * IP address of the destination VXLAN tunnel endpoint where the
        multicast receivers reside.
      
      * UDP destination port number to use to connect to the remote VXLAN
        tunnel endpoint.
      
      * VXLAN Network Identifier (VNI) to use to connect to the remote
        VXLAN tunnel endpoint. Required when Ingress Replication (IR) is
        used and the remote VTEP is not a member of the originating
        broadcast domain (VLAN/VNI) [1].
      
      * Source VXLAN Network Identifier (VNI) the MDB entry belongs to.
        Used only when the VXLAN device is in external mode.
      
      * Interface index of the outgoing interface to reach the remote VXLAN
        tunnel endpoint. This is required when the underlay destination IP is
        multicast (P2MP), as the multicast routing tables are not consulted.
      
      All the new attributes are added under the 'MDBA_SET_ENTRY_ATTRS'
      nest, which the bridge driver strictly validates, so the bridge
      automatically rejects the new attributes.
      
      [1] https://datatracker.ietf.org/doc/html/draft-ietf-bess-evpn-irb-mcast#section-3.2.2
      Signed-off-by: Ido Schimmel <idosch@nvidia.com>
      Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a3a48de5
    • Ido Schimmel's avatar
      vxlan: Expose vxlan_xmit_one() · 6ab271aa
      Ido Schimmel authored
      Given a packet and a remote destination, the function will take care of
      encapsulating the packet and transmitting it to the destination.
      
      Expose it so that it can be used in subsequent patches by the MDB
      code to transmit a packet to the remote destination(s) stored in the
      MDB entry.
      
      It will allow us to keep the MDB code self-contained, not exposing its
      data structures to the rest of the VXLAN driver.
      Signed-off-by: Ido Schimmel <idosch@nvidia.com>
      Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      6ab271aa
    • Ido Schimmel's avatar
      vxlan: Move address helpers to private headers · f307c8bf
      Ido Schimmel authored
      Move the helpers out of the core C file to the private header so that
      they can be used by the upcoming MDB code.
      
      While at it, constify the second argument of vxlan_nla_get_addr().
      Signed-off-by: Ido Schimmel <idosch@nvidia.com>
      Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f307c8bf
    • Ido Schimmel's avatar
      rtnetlink: bridge: mcast: Relax group address validation in common code · da654c80
      Ido Schimmel authored
      In the upcoming VXLAN MDB implementation, the 0.0.0.0 and :: MDB entries
      will act as catchall entries for unregistered IP multicast traffic in a
      similar fashion to the 00:00:00:00:00:00 VXLAN FDB entry that is used to
      transmit BUM traffic.
      
      In deployments where inter-subnet multicast forwarding is used, not all
      the VTEPs in a tenant domain are members in all the broadcast domains.
      It is therefore advantageous to transmit BULL (broadcast, unknown
      unicast and link-local multicast) and unregistered IP multicast traffic
      on different tunnels. If the same tunnel were used, a VTEP interested
      only in IP multicast traffic would also pull all the BULL traffic
      and drop it, as it is not a member of the originating broadcast
      domain [1].
      
      Prepare for this change by allowing the 0.0.0.0 group address in the
      common rtnetlink MDB code and forbid it in the bridge driver. A similar
      change is not needed for IPv6 because the common code only validates
      that the group address is not the all-nodes address.
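      The split in validation described above can be sketched as two
      userspace predicates (a deliberately partial model of just the
      checks discussed here; function names are hypothetical):

```python
import ipaddress

ALL_NODES_V6 = ipaddress.ip_address("ff02::1")

def common_mdb_group_ok(addr):
    """Common rtnetlink check (modeled): 0.0.0.0 is now allowed;
    for IPv6, only the all-nodes address is rejected."""
    ip = ipaddress.ip_address(addr)
    if ip.version == 6:
        return ip != ALL_NODES_V6
    return True  # 0.0.0.0 passes the common code

def bridge_mdb_group_ok(addr):
    """Bridge-driver check (modeled): additionally forbid 0.0.0.0,
    which only makes sense as a VXLAN catchall entry."""
    ip = ipaddress.ip_address(addr)
    return common_mdb_group_ok(addr) and ip != ipaddress.ip_address("0.0.0.0")
```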
      
      [1] https://datatracker.ietf.org/doc/html/draft-ietf-bess-evpn-irb-mcast#section-2.6
      Signed-off-by: Ido Schimmel <idosch@nvidia.com>
      Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      da654c80
    • Ido Schimmel's avatar
      rtnetlink: bridge: mcast: Move MDB handlers out of bridge driver · cc7f5022
      Ido Schimmel authored
      Currently, the bridge driver registers handlers for MDB netlink
      messages, making it impossible for other drivers to implement MDB
      support.
      
      As a preparation for VXLAN MDB support, move the MDB handlers out of the
      bridge driver to the core rtnetlink code. The rtnetlink code will call
      into individual drivers by invoking their previously added MDB net
      device operations.
      
      Note that while the diffstat is large, the change is mechanical. It
      moves code out of the bridge driver to rtnetlink code. Also note that a
      similar change was made in 2012 with commit 77162022 ("net: add
      generic PF_BRIDGE:RTM_ FDB hooks") that moved FDB handlers out of the
      bridge driver to the core rtnetlink code.
      Signed-off-by: Ido Schimmel <idosch@nvidia.com>
      Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      cc7f5022
    • Ido Schimmel's avatar
      bridge: mcast: Implement MDB net device operations · c009de10
      Ido Schimmel authored
      Implement the previously added MDB net device operations in the bridge
      driver so that they could be invoked by core rtnetlink code in the next
      patch.
      
      The operations are identical to the existing br_mdb_{dump,add,del}
      functions. The '_new' suffix will be removed in the next patch. The
      functions are re-implemented in this patch to make the conversion in the
      next patch easier to review.
      
      Add dummy implementations when 'CONFIG_BRIDGE_IGMP_SNOOPING' is
      disabled, so that an error is returned to user space when it tries
      to add or delete an MDB entry. This is consistent with existing
      behavior where the bridge driver does not even register rtnetlink
      handlers for RTM_{NEW,DEL,GET}MDB messages when this Kconfig option is
      disabled.
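      The fallback arrangement can be modeled in userspace as follows.
      This is a sketch only: the config-gated definition mirrors the
      pattern described above, and the specific -EOPNOTSUPP error code is
      an assumption of this model, not quoted from the patch.

```python
import errno

# Assumption: model the Kconfig option as a plain boolean.
CONFIG_BRIDGE_IGMP_SNOOPING = False

if CONFIG_BRIDGE_IGMP_SNOOPING:
    def br_mdb_add(dev, entry):
        # Real implementation would install the entry here.
        return 0
else:
    def br_mdb_add(dev, entry):
        # Dummy implementation: report "operation not supported"
        # to user space instead of silently succeeding.
        return -errno.EOPNOTSUPP
```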
      Signed-off-by: Ido Schimmel <idosch@nvidia.com>
      Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c009de10
    • Ido Schimmel's avatar
      net: Add MDB net device operations · 8c44fa12
      Ido Schimmel authored
      Add MDB net device operations that will be invoked by rtnetlink code in
      response to received RTM_{NEW,DEL,GET}MDB messages. Subsequent patches
      will implement these operations in the bridge and VXLAN drivers.
      Signed-off-by: Ido Schimmel <idosch@nvidia.com>
      Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      8c44fa12
    • David S. Miller's avatar
      Merge branch 'J784S4-CPSW9G-bindings' · ec47dcb4
      David S. Miller authored
      Siddharth Vadapalli says:
      
      ====================
      Add J784S4 CPSW9G NET Bindings
      
      This series cleans up the bindings by reordering the compatibles, followed
      by adding the bindings for CPSW9G instance of CPSW Ethernet Switch on TI's
      J784S4 SoC.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ec47dcb4
    • Siddharth Vadapalli's avatar
      dt-bindings: net: ti: k3-am654-cpsw-nuss: Add J784S4 CPSW9G support · e0c9c2a7
      Siddharth Vadapalli authored
      Update the bindings for the TI K3 J784S4 SoC, which contains the
      9-port (8 external ports) CPSW9G module, and add a compatible for it.
      Signed-off-by: Siddharth Vadapalli <s-vadapalli@ti.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      e0c9c2a7
    • Siddharth Vadapalli's avatar
      dt-bindings: net: ti: k3-am654-cpsw-nuss: Fix compatible order · 40235ede
      Siddharth Vadapalli authored
      Reorder compatibles to follow alphanumeric order.
      Signed-off-by: Siddharth Vadapalli <s-vadapalli@ti.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      40235ede
    • Shradha Gupta's avatar
      net: mana: Add new MANA VF performance counters for easier troubleshooting · bd7fc6e1
      Shradha Gupta authored
      Extended performance counter stats in 'ethtool -S <interface>' output
      for MANA VF to facilitate troubleshooting.
      
      Tested-on: Ubuntu22
      Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      bd7fc6e1
    • Mengyuan Lou's avatar
      net: wangxun: Implement the ndo change mtu interface · 81dc0741
      Mengyuan Lou authored
      Add ngbe and txgbe ndo_change_mtu support.
      Signed-off-by: Mengyuan Lou <mengyuanlou@net-swift.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      81dc0741
    • Luiz Angelo Daros de Luca's avatar
      net: dsa: realtek: rtl8365mb: add change_mtu · c36a77c3
      Luiz Angelo Daros de Luca authored
      The rtl8365mb was using a fixed MTU size of 1536, which was probably
      inspired by the rtl8366rb's initial frame size. However, unlike that
      family, the rtl8365mb family can specify the max frame size in bytes,
      rather than in fixed steps.
      
      DSA calls change_mtu for the CPU port once the max MTU value among the
      ports changes. As the max frame size is defined globally, the switch
      is configured only when the call affects the CPU port.
      
      The available specifications do not directly define the max supported
      frame size, but they mention a 16k limit. This driver uses the 0x3FFF
      limit as that is the value used in the vendor API code. However, the
      switch sets the max frame size to 16368 bytes (0x3FF0) after it resets.
      
      change_mtu uses the MTU size, i.e. the Ethernet payload size, while
      the switch works with the frame size. The frame size is calculated
      considering the Ethernet header (14 bytes), a possible 802.1Q tag
      (4 bytes), the payload size (MTU), and the Ethernet FCS (4 bytes).
      The CPU tag (8 bytes) is consumed before the switch enforces the
      limit.
      
      During setup, the driver will use the default 1500-byte MTU of DSA to
      set the maximum frame size. The resulting sum is
      VLAN_ETH_HLEN+1500+ETH_FCS_LEN, i.e. 1522 bytes. Although it is
      lower than the previous initial value of 1536 bytes, the driver
      will increase the frame size for a larger MTU. However, if something
      requires more space without increasing the MTU, such as QinQ, we would
      need to add the extra length to the rtl8365mb_port_change_mtu() formula.
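      The conversion above is simple enough to state exactly (a userspace
      model using the constants named in the text; the function name is
      illustrative, not the driver's):

```python
# Model of the MTU -> max frame size conversion described above.
ETH_HLEN = 14                         # Ethernet header
VLAN_HLEN = 4                         # optional 802.1Q tag
ETH_FCS_LEN = 4                       # frame check sequence
VLAN_ETH_HLEN = ETH_HLEN + VLAN_HLEN  # 18 bytes

def mtu_to_frame_size(mtu):
    # The CPU tag (8 bytes) is consumed before the switch enforces
    # the limit, so it does not appear in the sum.
    return VLAN_ETH_HLEN + mtu + ETH_FCS_LEN
```

      With the default 1500-byte MTU this yields the 1522-byte frame size
      mentioned above.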
      
      MTU was tested up to 2018 (with 802.1Q) as that is as far as mt7620
      (where rtl8367s is stacked) can go. The register was manually
      manipulated byte-by-byte to ensure the MTU to frame size conversion was
      correct. For frames without 802.1Q tag, the frame size limit will be 4
      bytes over the required size.
      
      There is a jumbo register, enabled by default with a 6k frame size.
      However, the jumbo settings seem to neither limit nor expand the
      maximum tested MTU (2018), even when jumbo is disabled. More tests
      are needed with a device that can handle larger frames.
      Signed-off-by: Luiz Angelo Daros de Luca <luizluca@gmail.com>
      Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
      Reviewed-by: Alvin Šipraga <alsi@bang-olufsen.dk>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c36a77c3
    • Jakub Kicinski's avatar
      Merge branch 'add-ptp-support-for-sama7g5' · b883d1ee
      Jakub Kicinski authored
      Durai Manickam says:
      
      ====================
      Add PTP support for sama7g5
      
      This patch series is intended to add PTP capability to the GEM and
      EMAC for sama7g5.
      ====================
      
      Link: https://lore.kernel.org/r/20230315095053.53969-1-durai.manickamkr@microchip.com
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      b883d1ee
    • Durai Manickam KR's avatar
      9bae0dd0
    • Durai Manickam KR's avatar
      net: macb: Add PTP support to GEM for sama7g5 · abc783a7
      Durai Manickam KR authored
      Add PTP capability to the Gigabit Ethernet MAC.
      Signed-off-by: Durai Manickam KR <durai.manickamkr@microchip.com>
      Reviewed-by: Claudiu Beznea <claudiu.beznea@microchip.com>
      Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      abc783a7