1. 12 Apr, 2019 1 commit
    • Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next · bb23581b
      David S. Miller authored
      Daniel Borkmann says:
      
      ====================
      pull-request: bpf-next 2019-04-12
      
      The following pull-request contains BPF updates for your *net-next* tree.
      
      The main changes are:
      
      1) Improve BPF verifier scalability for large programs through two
         optimizations: i) remove verifier states that are not useful in pruning,
         ii) stop walking the parentage chain once the first LIVE_READ is seen.
         Combined, these give an approx 20x speedup. Increase the limits for
         accepting large programs under root, and add various stress tests,
         from Alexei.
      
      2) Implement global data support in BPF. This enables static global
         variables in the .data, .rodata and .bss sections to be properly
         handled, which allows for more natural program development (see the
         first sketch after this list). It also opens up the possibility to
         optimize the program workflow by compiling ELFs only once and later
         rewriting only the section data before reload, from Daniel and with
         test cases and libbpf refactoring from Joe.
      
      3) Add a config option to generate BTF type info for vmlinux as part of
         the kernel build process. DWARF debug info is converted via pahole to
         BTF. The latter relies on libbpf and makes use of the BTF deduplication
         algorithm, which results in 100x savings compared to DWARF data. The
         resulting .BTF section is typically about 2MB in size, from Andrii.
      
      4) Add BPF verifier support for stack access with variable offset from
         helpers and add various test cases along with it, from Andrey.
      
      5) Extend bpf_skb_adjust_room() growth BPF helper to mark inner MAC header
         so that L2 encapsulation can be used for tc tunnels, from Alan.
      
      6) Add support for an input __sk_buff context in BPF_PROG_TEST_RUN so that
         users can define a subset of allowed __sk_buff fields that get fed into
         the test program (see the second sketch after this list), from Stanislav.
      
      7) Add bpf fs multi-dimensional array tests for BTF test suite and fix up
         various UBSAN warnings in bpftool, from Yonghong.
      
      8) Generate a pkg-config file for libbpf, from Luca.
      
      9) Dump program's BTF id in bpftool, from Prashant.
      
      10) libbpf fix to use smaller BPF log buffer size for AF_XDP's XDP
          program, from Magnus.
      
      11) kallsyms related fixes for the case when symbols are not present in
          BPF selftests and samples, from Daniel.
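
      As an illustration of what the global data support in 2) enables, here is
      a minimal sketch of a tc BPF program using static globals. The includes,
      the section names and the drop threshold are assumptions made for this
      example, not something taken from the patches:

      #include <linux/bpf.h>
      #include "bpf_helpers.h"

      /* After this series, libbpf creates backing maps for the .bss and
       * .rodata sections, so these globals behave like natural C variables. */
      static __u64 pkt_count;               /* lands in .bss    */
      static const __u64 pkt_limit = 1000;  /* lands in .rodata */

      SEC("classifier")
      int count_pkts(struct __sk_buff *skb)
      {
              pkt_count++;
              if (pkt_count > pkt_limit)
                      return 2;   /* TC_ACT_SHOT: drop once over the limit */
              return 0;           /* TC_ACT_OK */
      }

      char _license[] SEC("license") = "GPL";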
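
      Likewise, a hedged userspace sketch of what 6) allows, assuming libbpf's
      bpf_prog_test_run_xattr() interface with the new ctx_in/ctx_size_in
      fields; the exact field names are my reading of the series and worth
      double-checking against tools/lib/bpf/bpf.h:

      #include <linux/bpf.h>
      #include <bpf/bpf.h>

      /* Run prog_fd over a packet while also feeding selected __sk_buff
       * fields into the test program's context. */
      static int test_run_with_skb_ctx(int prog_fd, void *pkt, __u32 pkt_len)
      {
              struct __sk_buff ctx = {};
              struct bpf_prog_test_run_attr attr = {};
              int err;

              ctx.priority = 1;    /* only a subset of fields may be set */
              ctx.cb[0] = 42;

              attr.prog_fd = prog_fd;
              attr.data_in = pkt;
              attr.data_size_in = pkt_len;
              attr.ctx_in = &ctx;
              attr.ctx_size_in = sizeof(ctx);

              err = bpf_prog_test_run_xattr(&attr);
              return err ? err : (int)attr.retval;
      }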
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 11 Apr, 2019 34 commits
  3. 10 Apr, 2019 5 commits
    • net: strparser: fix comment · 93e21254
      Jakub Kicinski authored
      Fix comment.
      Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • ipv4: Handle RTA_GATEWAY set to 0 · d73f80f9
      David Ahern authored
      Govindarajulu reported a regression with NetworkManager, which sends an
      RTA_GATEWAY attribute with the address set to 0. Fix up the handling of
      RTA_GATEWAY to set fc_gw_family only if the gateway address is actually
      set (sketched below).
      
      Fixes: f35b794b ("ipv4: Prepare fib_config for IPv6 gateway")
      Reported-by: Govindarajulu Varadarajan <govind.varadar@gmail.com>
      Signed-off-by: David Ahern <dsahern@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
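
      A minimal sketch of the kind of guard described above; the helper name is
      made up for illustration, while fc_gw4/fc_gw_family come from the
      fib_config changes referenced in the Fixes tag:

      #include <net/netlink.h>
      #include <net/ip_fib.h>

      /* Only switch to an IPv4 gateway when RTA_GATEWAY carries a non-zero
       * address, so an attribute set to 0 keeps meaning "no gateway". */
      static void fib_cfg_set_gw4(struct fib_config *cfg,
                                  const struct nlattr *nla)
      {
              __be32 gw = nla_get_in_addr(nla);

              if (gw) {
                      cfg->fc_gw_family = AF_INET;
                      cfg->fc_gw4 = gw;
              }
      }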
    • Merge branch 'net-sched-move-back-qlen-to-per-CPU-accounting' · 44b9b6ca
      David S. Miller authored
      Paolo Abeni says:
      
      ====================
      net: sched: move back qlen to per CPU accounting
      
      Commit 46b1c18f ("net: sched: put back q.qlen into a single location")
      introduced a measurable regression in contended scenarios for locked
      qdiscs.
      
      As Eric suggested, we could replace the q.qlen accesses with calls to
      qdisc_is_empty() in the datapath and revert the above commit. The TC
      subsystem updates qdisc->is_empty in a somewhat loose way: notably,
      'is_empty' is set only when a qdisc dequeue() call returns a NULL ptr,
      that is, on the invocation after the last packet is dequeued.
      
      The above is good enough for the BYPASS implementation - the only downside
      is that we end up skipping the optimization for a very small time frame -
      but will break hard when the internal structure consistency of a classful
      qdisc relies on the child's qdisc_is_empty().
      
      A stricter 'is_empty' update adds relevant complexity to its life-cycle, so
      this series takes a different approach: we let a lockless qdisc switch from
      per-CPU accounting to global stats accounting when the NOLOCK bit is
      cleared (sketched after this message). Since most pieces of infrastructure
      are already in place, this requires very few changes to the pfifo_fast
      qdisc, and any later NOLOCK qdisc can hook in there with little effort -
      no need to maintain two different implementations.
      
      The first 2 patches remove direct qlen access from non-core TC code, the
      3rd and 4th patches add and use the infrastructure allowing the stats
      accounting switch, and the 5th patch is the actual revert.
      
       v1 -> v2:
        - fixed build issues
        - more descriptive commit message for patch 5/5
      ====================
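
      A minimal sketch of the accounting switch described above, in kernel
      context; the helper below is simplified for illustration and is not the
      exact code from the series:

      #include <net/sch_generic.h>

      /* qlen accounting: per-CPU counters while the qdisc is lockless and
       * keeps TCQ_F_CPUSTATS, the global q.qlen (under the qdisc lock)
       * otherwise, e.g. after being enslaved to a locked parent. */
      static inline void sketch_qstats_qlen_inc(struct Qdisc *q)
      {
              if (q->flags & TCQ_F_CPUSTATS)
                      this_cpu_inc(q->cpu_qstats->qlen);
              else
                      q->q.qlen++;
      }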
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Revert: "net: sched: put back q.qlen into a single location" · 73eb628d
      Paolo Abeni authored
      This reverts commit 46b1c18f ("net: sched: put back q.qlen into
      a single location").
      After the previous patch, when a NOLOCK qdisc is enslaved to a
      locking qdisc it switches to global stats accounting. As a consequence,
      when a classful qdisc directly accesses a child qdisc's qlen, such a
      qdisc is not doing per-CPU accounting and the qlen value is consistent.
      
      In the control path nobody uses qlen directly since commit
      e5f0e8f8 ("net: sched: introduce and use qdisc tree flush/purge
      helpers"), so we can remove the contended atomic ops from the
      datapath.
      
      v1 -> v2:
       - complete the qdisc_qstats_atomic_qlen_dec() ->
         qdisc_qstats_cpu_qlen_dec() replacement, fix build issue
       - more descriptive commit message
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: sched: when clearing NOLOCK, clear TCQ_F_CPUSTATS, too · 8a53e616
      Paolo Abeni authored
      Since stats updating is always consistent with the TCQ_F_CPUSTATS flag,
      we can disable it at qdisc creation time by flipping that bit.
      
      In my experiments, if the NOLOCK flag is cleared, per-CPU stats
      accounting does not give any measurable performance gain, but it
      wastes some memory.
      
      Let's clear TCQ_F_CPUSTATS together with NOLOCK when enslaving
      a NOLOCK qdisc to a locked one.
      
      Use the stats update helpers inside pfifo_fast to cope correctly with
      the TCQ_F_CPUSTATS flag change.
      
      As a side effect, the q.qlen value for any child qdisc is always
      consistent for all locked classful qdiscs.
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
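
      A sketch of the enslaving step described in this commit; the function
      name is illustrative rather than the exact one from the patch:

      #include <net/sch_generic.h>

      /* When a NOLOCK child ends up under a locked parent, drop both flags:
       * the stats helpers then fall back to the global counters, so the
       * parent can read a consistent child q.qlen directly. */
      static void sketch_qdisc_clear_nolock(struct Qdisc *sch)
      {
              sch->flags &= ~(TCQ_F_NOLOCK | TCQ_F_CPUSTATS);
      }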