1. 29 Mar, 2023 13 commits
    • netlink: specs: add partial specification for openvswitch · 643ef4a6
      Donald Hunter authored
      The openvswitch family has a fixed header, uses struct attrs and has array
      values. These partial specs demonstrate those features in the YNL CLI and are
      sufficient to create, delete and dump datapaths and to dump vports:
      
      $ ./tools/net/ynl/cli.py \
          --spec Documentation/netlink/specs/ovs_datapath.yaml \
          --do dp-new --json '{ "dp-ifindex": 0, "name": "demo", "upcall-pid": 0}'
      None
      
      $ ./tools/net/ynl/cli.py \
          --spec Documentation/netlink/specs/ovs_datapath.yaml \
          --dump dp-get --json '{ "dp-ifindex": 0 }'
      [{'dp-ifindex': 3,
        'masks-cache-size': 256,
        'megaflow-stats': {'cache-hits': 0,
                           'mask-hit': 0,
                           'masks': 0,
                           'pad1': 0,
                           'padding': 0},
        'name': 'test',
        'stats': {'flows': 0, 'hit': 0, 'lost': 0, 'missed': 0},
        'user-features': {'dispatch-upcall-per-cpu',
                          'tc-recirc-sharing',
                          'unaligned'}},
       {'dp-ifindex': 48,
        'masks-cache-size': 256,
        'megaflow-stats': {'cache-hits': 0,
                           'mask-hit': 0,
                           'masks': 0,
                           'pad1': 0,
                           'padding': 0},
        'name': 'demo',
        'stats': {'flows': 0, 'hit': 0, 'lost': 0, 'missed': 0},
        'user-features': set()}]
      
      $ ./tools/net/ynl/cli.py \
          --spec Documentation/netlink/specs/ovs_datapath.yaml \
          --do dp-del --json '{ "dp-ifindex": 0, "name": "demo"}'
      None
      
      $ ./tools/net/ynl/cli.py \
          --spec Documentation/netlink/specs/ovs_vport.yaml \
          --dump vport-get --json '{ "dp-ifindex": 3 }'
      [{'dp-ifindex': 3,
        'ifindex': 3,
        'name': 'test',
        'port-no': 0,
        'stats': {'rx-bytes': 0,
                  'rx-dropped': 0,
                  'rx-errors': 0,
                  'rx-packets': 0,
                  'tx-bytes': 0,
                  'tx-dropped': 0,
                  'tx-errors': 0,
                  'tx-packets': 0},
        'type': 'internal',
        'upcall-pid': [0],
        'upcall-stats': {'fail': 0, 'success': 0}}]
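      
      The same dumps can also be driven from Python via the library behind the
      CLI. A minimal sketch, assuming it is run from the top of the kernel tree
      and that tools/net/ynl is added to sys.path so the in-tree "lib" package
      (the one cli.py itself imports from) is available:
      
      import sys
      sys.path.append('tools/net/ynl')      # make the in-tree "lib" package importable
      from lib import YnlFamily
      
      ynl = YnlFamily('Documentation/netlink/specs/ovs_datapath.yaml')
      datapaths = ynl.dump('dp-get', {'dp-ifindex': 0})   # same as --dump dp-get above
      for dp in datapaths:
          print(dp['name'], dp['dp-ifindex'])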
      Signed-off-by: Donald Hunter <donald.hunter@gmail.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • tools: ynl: Add fixed-header support to ynl · f036d936
      Donald Hunter authored
      Add support for netlink families that place an optional fixed header structure
      after the genetlink header and before any attributes. The fixed-header can be
      specified per operation, or once for all operations, in which case it acts as a
      default that individual operations can override.
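      
      As a rough sketch of what this means on the wire (illustrative only: the
      ovs_header layout with a single dp_ifindex field is an assumption based on
      the openvswitch uapi, and the cmd value is just a placeholder):
      
      # The genetlink header is followed by the family's fixed header and only
      # then by the netlink attributes.
      import struct
      
      def ovs_genl_payload(cmd, version, dp_ifindex, attrs=b''):
          genlmsghdr = struct.pack('BBH', cmd, version, 0)   # cmd, version, reserved
          fixed_hdr = struct.pack('i', dp_ifindex)           # struct ovs_header
          return genlmsghdr + fixed_hdr + attrs
      
      payload = ovs_genl_payload(cmd=3, version=1, dp_ifindex=0)   # attrs omitted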
      Signed-off-by: Donald Hunter <donald.hunter@gmail.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • tools: ynl: Add struct attr decoding to ynl · 26071913
      Donald Hunter authored
      Add support for decoding attributes that contain C structs.
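      
      Conceptually it works along these lines (a toy sketch, not the actual ynl
      code; the member list mirrors the datapath stats struct shown in the dump
      above, and its exact layout is an assumption here):
      
      import struct
      
      DP_STATS_MEMBERS = [('hit', 'Q'), ('missed', 'Q'), ('lost', 'Q'), ('flows', 'Q')]
      
      def decode_struct(payload, members):
          # Walk the members defined for the struct and unpack each field in order.
          decoded, offset = {}, 0
          for name, fmt in members:
              decoded[name], = struct.unpack_from('<' + fmt, payload, offset)
              offset += struct.calcsize(fmt)
          return decoded
      
      raw = struct.pack('<4Q', 7, 1, 0, 2)              # example attribute payload
      print(decode_struct(raw, DP_STATS_MEMBERS))       # {'hit': 7, 'missed': 1, ...}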
      Signed-off-by: Donald Hunter <donald.hunter@gmail.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • tools: ynl: Add C array attribute decoding to ynl · b423c3c8
      Donald Hunter authored
      Add support for decoding C arrays from binary blobs in genetlink-legacy
      messages.
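      
      Roughly (a toy sketch rather than the ynl implementation; an array of u32
      upcall PIDs, as in the vport dump above, is the example):
      
      import struct
      
      def decode_c_array(payload, fmt='I'):
          # Split a binary blob into a list of fixed-size elements.
          size = struct.calcsize(fmt)
          return [struct.unpack_from('<' + fmt, payload, off)[0]
                  for off in range(0, len(payload), size)]
      
      print(decode_c_array(struct.pack('<3I', 0, 10, 20)))   # -> [0, 10, 20]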
      Signed-off-by: Donald Hunter <donald.hunter@gmail.com>
      Reviewed-by: Jakub Kicinski <kuba@kernel.org>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • tools: ynl: Add struct parsing to nlspec · bec0b7a2
      Donald Hunter authored
      Add Python classes for struct definitions to nlspec.
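      
      Roughly along these lines (an illustrative sketch only; the class and
      attribute names here are hypothetical, not necessarily the ones added to
      nlspec):
      
      class SpecStructMember:
          def __init__(self, yaml):
              self.name = yaml['name']
              self.type = yaml['type']              # e.g. 'u32', 'u64', 'string'
      
      class SpecStruct:
          def __init__(self, yaml):
              self.name = yaml['name']
              self.members = [SpecStructMember(m) for m in yaml.get('members', [])]
      
      spec = {'name': 'ovs-header',
              'members': [{'name': 'dp-ifindex', 'type': 'u32'}]}
      s = SpecStruct(spec)
      print(s.name, [m.name for m in s.members])    # ovs-header ['dp-ifindex']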
      Signed-off-by: Donald Hunter <donald.hunter@gmail.com>
      Reviewed-by: Jakub Kicinski <kuba@kernel.org>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • Merge tag 'mlx5-updates-2023-03-20' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux · de749452
      Jakub Kicinski authored
      Saeed Mahameed says:
      
      ====================
      mlx5-updates-2023-03-20
      
      mlx5 dynamic msix
      
      This patch series adds support for dynamic MSI-X vector allocation in mlx5.
      
      Eli Cohen Says:
      
      ================
      
      The following series of patches modifies mlx5_core to work with the
      dynamic MSI-X API. Currently, mlx5_core allocates all the interrupt
      vectors it needs and distributes them amongst the consumers. With the
      introduction of dynamic MSI-X support, which allows interrupt vectors to
      be allocated more than once, we now allocate vectors as we need them.
      This allows other drivers running on top of mlx5_core to allocate
      interrupt vectors for their own use. An example of this is mlx5_vdpa,
      which uses these vectors to propagate interrupts directly from the
      hardware to the vCPU [1].
      
      In preparation for this series, a use-after-free issue is fixed in
      lib/cpu_rmap.c and the allocator for rmap entries has been modified.
      A complementary API to irq_cpu_rmap_add() has also been introduced.
      
      [1] https://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux.git/patch/?id=0f2bf1fcae96a83b8c5581854713c9fc3407556e
      
      ================
      
      * tag 'mlx5-updates-2023-03-20' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux:
        net/mlx5: Provide external API for allocating vectors
        net/mlx5: Use one completion vector if eth is disabled
        net/mlx5: Refactor calculation of required completion vectors
        net/mlx5: Move devlink registration before mlx5_load
        net/mlx5: Use dynamic msix vectors allocation
        net/mlx5: Refactor completion irq request/release code
        net/mlx5: Improve naming of pci function vectors
        net/mlx5: Use newer affinity descriptor
        net/mlx5: Modify struct mlx5_irq to use struct msi_map
        net/mlx5: Fix wrong comment
        net/mlx5e: Coding style fix, add empty line
        lib: cpu_rmap: Add irq_cpu_rmap_remove to complement irq_cpu_rmap_add
        lib: cpu_rmap: Use allocator for rmap entries
        lib: cpu_rmap: Avoid use after free on rmap->obj array entries
      ====================
      
      Link: https://lore.kernel.org/r/20230324231341.29808-1-saeed@kernel.org
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • docs: netdev: clarify the need to sending reverts as patches · e70f94c6
      Jakub Kicinski authored
      We don't state explicitly that reverts need to be submitted
      as a patch. It occasionally comes up.
      Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
      Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
      Link: https://lore.kernel.org/r/20230327172646.2622943-1-kuba@kernel.org
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • net: ethernet: 8390: axnet_cs: remove unused xfer_count variable · e48cefb9
      Tom Rix authored
      clang with W=1 reports
      drivers/net/ethernet/8390/axnet_cs.c:653:9: error: variable
        'xfer_count' set but not used [-Werror,-Wunused-but-set-variable]
          int xfer_count = count;
              ^
      This variable is not used, so remove it.
      Signed-off-by: Tom Rix <trix@redhat.com>
      Reviewed-by: Simon Horman <simon.horman@corigine.com>
      Link: https://lore.kernel.org/r/20230327235423.1777590-1-trix@redhat.com
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • Revert "sh_eth: remove open coded netif_running()" · cdeccd13
      Wolfram Sang authored
      This reverts commit ce1fdb06. It turned
      out this actually introduces a race condition. netif_running() is not a
      suitable check for get_stats.
      Reported-by: Sergey Shtylyov <s.shtylyov@omp.ru>
      Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
      Reviewed-by: Sergey Shtylyov <s.shtylyov@omp.ru>
      Reviewed-by: Simon Horman <simon.horman@corigine.com>
      Link: https://lore.kernel.org/r/20230327152112.15635-1-wsa+renesas@sang-engineering.com
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • Merge branch 'net-refcount-address-dst_entry-reference-count-scalability-issues' · 2600badf
      Jakub Kicinski authored
      Thomas Gleixner says:
      
      ====================
      net, refcount: Address dst_entry reference count scalability issues
      
      This is version 3 of this series. Version 2 can be found here:
      
           https://lore.kernel.org/lkml/20230307125358.772287565@linutronix.de
      
      Wangyang and Arjan reported a bottleneck in the networking code related to
      struct dst_entry::__refcnt. Performance tanks massively when concurrency on
      a dst_entry increases.
      
      This happens when there is a large number of connections to or from the
      same IP address. The memtier benchmark, when run on the same host as
      memcached, amplifies this massively. But even over real network connections
      this issue can be observed, albeit at an obviously smaller scale (due to the
      network bandwidth limitations in my setup, i.e. 1Gb). How to reproduce:
      
        Run memcached with -t $N and memtier_benchmark with -t $M and --ratio=1:100
        on the same machine. Localhost connections amplify the problem.
      
        Start with the defaults for $N and $M and increase them. Depending on
        your machine this will tank at some point. But even in reasonably small
        $N, $M scenarios the refcount operations and the resulting false sharing
        fallout becomes visible in perf top. At some point it becomes the
        dominating issue.
      
      There are two factors which make this reference count a scalability issue:
      
         1) False sharing
      
            dst_entry::__refcnt is located at offset 64 of dst_entry, which puts
            it into a separate cacheline vs. the read mostly members located at
            the beginning of the struct.
      
            That prevents false sharing vs. the struct members in the first 64
            bytes of the structure, but there is also
      
            	    dst_entry::lwtstate
      
            which is located after the reference count and in the same cache
            line. This member is read after a reference count has been acquired.
      
            The other problem is struct rtable, which embeds a struct dst_entry
            at offset 0. struct dst_entry has a size of 112 bytes, which means
            that the struct members of rtable which follow the dst member share
            the same cache line as dst_entry::__refcnt. Especially
      
            	  rtable::rt_genid
      
            is also read by the contexts which have a reference count acquired
            already.
      
            When dst_entry::__refcnt is incremented or decremented via an atomic
            operation these read accesses stall and contribute to the performance
            problem.
      
         2) atomic_inc_not_zero()
      
            A reference on dst_entry::__refcnt is acquired via
            atomic_inc_not_zero() and released via atomic_dec_return().

            atomic_inc_not_zero() is implemented via an atomic_try_cmpxchg() loop,
            which exposes O(N^2) behaviour under contention with N concurrent
            operations. Contention scalability degrades with even a small
            number of contenders and gets worse from there.
      
            Lightweight instrumentation exposed an average of 8!! retry loops per
            atomic_inc_not_zero() invocation in an inc()/dec() loop running
            concurrently on 112 CPUs.
      
            There is nothing which can be done to make atomic_inc_not_zero() more
            scalable.
      
      The following series addresses these issues:
      
          1) Reorder and pad struct dst_entry to prevent the false sharing.
      
          2) Implement and use a reference count implementation which avoids the
             atomic_inc_not_zero() problem.
      
             It is slightly less performant in the case of the final 0 -> -1
             transition, but the deconstruction of these objects is a low
             frequency event. get()/put() pairs are in the hotpath and that's
             what this implementation optimizes for.
      
             The algorithm of this reference count is only suitable for RCU
             managed objects. Therefore it cannot replace the refcount_t
             algorithm, which is also based on atomic_inc_not_zero(), due to a
             subtle race condition related to the 0 -> -1 transition and the final
             verdict to mark the reference count dead. See details in patch 2/3.
      
             It might be just my lack of imagination which declares this to be
             impossible and I'd be happy to be proven wrong.
      
             As a bonus the new rcuref implementation provides underflow/overflow
             detection and mitigation while being performance wise on par with
             open coded atomic_inc_not_zero() / atomic_dec_return() pairs even in
             the non-contended case.
      
      The combination of these two changes results in performance gains in micro
      benchmarks and also localhost and networked memtier benchmarks talking to
      memcached. It's hard to quantify the benchmark results as they depend
      heavily on the micro-architecture and the number of concurrent operations.
      
      The overall gain of both changes for localhost memtier ranges from 1.2X to
      3.2X, and from +2% to +5% for networked operations on a 1Gb connection.
      
      A micro benchmark which enforces maximized concurrency shows a gain between
      1.2X and 4.7X!!!
      
      Obviously this is focused on a particular problem and therefore needs to
      be discussed in detail. It also requires wider testing outside of the cases
      on which this work is focused.
      
      The false sharing issue, though, is obvious and should be addressed
      independently of the more focused reference count changes.
      
      The series is also available from git:
      
        git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git rcuref
      
      Changes vs. V2:
      
        - Rename __refcnt to __rcuref (Linus)
      
        - Fix comments and changelogs (Mark, Qiuxu)
      
        - Fixup kernel doc of generated atomic_add_negative() variants
      
      I want to say thanks to Wangyang, who analyzed the issue and provided the
      initial fix for the false sharing problem. Further thanks go to Arjan,
      Peter, Marc, Will and Borislav for valuable input and for providing test
      results on machines which I do not have access to, and to Linus, Eric,
      Qiuxu and Mark for helpful feedback.
      ====================
      
      Link: https://lore.kernel.org/r/20230323102649.764958589@linutronix.de
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • net: dst: Switch to rcuref_t reference counting · bc9d3a9f
      Thomas Gleixner authored
      Under high contention dst_entry::__refcnt becomes a significant bottleneck.
      
      atomic_inc_not_zero() is implemented with a cmpxchg() loop, which goes into
      high retry rates on contention.
      
      Switch the reference count to rcuref_t which results in a significant
      performance gain. Rename the reference count member to __rcuref to reflect
      the change.
      
      The gain depends on the micro-architecture and the number of concurrent
      operations and has been measured in the range of +25% to +130% with a
      localhost memtier/memcached benchmark which amplifies the problem
      massively.
      
      Running the memtier/memcached benchmark over a real (1Gb) network
      connection, the conversion on top of the false sharing fix for struct
      dst_entry::__refcnt results in a total gain in the 2%-5% range over the
      upstream baseline.
      Reported-by: Wangyang Guo <wangyang.guo@intel.com>
      Reported-by: Arjan Van De Ven <arjan.van.de.ven@intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Link: https://lore.kernel.org/r/20230307125538.989175656@linutronix.de
      Link: https://lore.kernel.org/r/20230323102800.215027837@linutronix.de
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • net: dst: Prevent false sharing vs. dst_entry::__refcnt · d288a162
      Wangyang Guo authored
      dst_entry::__refcnt is highly contended in scenarios where many connections
      happen from and to the same IP. The reference count is an atomic_t, so the
      reference count operations have to take the cache-line exclusive.
      
      Aside from the unavoidable reference count contention there is another
      significant problem which is caused by that: False sharing.
      
      perf top identified two affected read accesses. dst_entry::lwtstate and
      rtable::rt_genid.
      
      dst_entry::__refcnt is located at offset 64 of dst_entry, which puts it into
      a separate cacheline vs. the read mostly members located at the beginning
      of the struct.
      
      That prevents false sharing vs. the struct members in the first 64
      bytes of the structure, but there is also
      
        dst_entry::lwtstate
      
      which is located after the reference count and in the same cache line. This
      member is read after a reference count has been acquired.
      
      struct rtable embeds a struct dst_entry at offset 0. struct dst_entry has a
      size of 112 bytes, which means that the struct members of rtable which
      follow the dst member share the same cache line as dst_entry::__refcnt.
      Especially
      
        rtable::rt_genid
      
      is also read by the contexts which have a reference count acquired
      already.
      
      When dst_entry::__refcnt is incremented or decremented via an atomic
      operation these read accesses stall. This was found when analysing the
      memtier benchmark in 1:100 mode, which amplifies the problem extremely.
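      
      The cache line arithmetic behind this can be sketched briefly (the 64-byte
      line size, the __refcnt offset and the 112-byte struct size are taken from
      the description above; the lwtstate offset is an assumed example):
      
      # With 64-byte cache lines, __refcnt at offset 64 and struct dst_entry
      # being 112 bytes, lwtstate and the first rtable member after dst all
      # land in the same line as __refcnt and stall on the atomic updates.
      LINE = 64
      offsets = {'__refcnt': 64, 'lwtstate (assumed)': 104, 'rtable member after dst': 112}
      for name, off in offsets.items():
          print(f'{name:24} offset {off:3} -> cache line {off // LINE}')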
      
      Move the rt[6i]_uncached[_list] members out of struct rtable and struct
      rt6_info into struct dst_entry to provide padding and move the lwtstate
      member after that so it ends up in the same cache line.
      
      The resulting improvement depends on the micro-architecture and the number
      of CPUs. It ranges from +20% to +120% with a localhost memtier/memcached
      benchmark.
      
      [ tglx: Rearrange struct ]
      Signed-off-by: Wangyang Guo <wangyang.guo@intel.com>
      Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      Reviewed-by: David Ahern <dsahern@kernel.org>
      Link: https://lore.kernel.org/r/20230323102800.042297517@linutronix.de
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
  2. 28 Mar, 2023 19 commits
  3. 27 Mar, 2023 8 commits