1. 24 Oct, 2023 9 commits
    • xsk: Avoid starving the xsk further down the list · 99b29a49
      Albert Huang authored
      In the previous implementation, when multiple xsk sockets were
      associated with a single xsk_buff_pool, a situation could arise
      where the xsk_tx_list maintained data at the front for one xsk
      socket while starving the xsk sockets at the back of the list.
      This could result in issues such as the inability to transmit packets,
      increased latency, and jitter. To address this problem, we introduce
      a new variable called tx_budget_spent, which limits each xsk to
      transmitting at most MAX_PER_SOCKET_BUDGET tx descriptors. This
      allocation ensures equitable opportunities for subsequent xsk sockets
      to send tx descriptors.
      The value of MAX_PER_SOCKET_BUDGET is set to 32.
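
      A minimal sketch of the budgeting idea described above (simplified,
      self-contained types; not the exact kernel code):

        #include <stdbool.h>

        #define MAX_PER_SOCKET_BUDGET 32

        struct xsk_sock {
                struct xsk_sock *next;        /* linkage in the pool's tx list */
                unsigned int tx_budget_spent; /* descriptors sent in this pass */
        };

        /* Return true while this socket may still send in the current pass
         * over the pool's tx list; once the budget is exhausted the caller
         * moves on to the next socket instead of draining this one. */
        static bool xsk_tx_budget_ok(struct xsk_sock *xs)
        {
                if (xs->tx_budget_spent >= MAX_PER_SOCKET_BUDGET)
                        return false;
                xs->tx_budget_spent++;
                return true;
        }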
      Signed-off-by: Albert Huang <huangjie.albert@bytedance.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>
      Link: https://lore.kernel.org/bpf/20231023125732.82261-1-huangjie.albert@bytedance.com
    • Merge branch 'exact-states-comparison-for-iterator-convergence-checks' · dedd6c89
      Alexei Starovoitov authored
      Eduard Zingerman says:
      
      ====================
      exact states comparison for iterator convergence checks
      
      Iterator convergence logic in is_state_visited() uses states_equal()
      for states with a branches counter > 0 to check if an iterator-based
      loop converges. This is not fully correct, because states_equal()
      relies on the presence of read and precision marks on registers, and
      these marks are not guaranteed to be finalized while a state has
      branches. The commit message for patch #3 describes a program that
      exhibits such behavior.
      
      This patch-set aims to fix iterator convergence logic by adding a
      notion of exact states comparison. Exact comparison does not rely on
      the presence of read or precision marks and is thus more strict.
      As explained in the commit message for patch #3, exact comparisons
      require the addition of speculative register bounds widening. The end
      result for BPF verifier users could be summarized as follows:

      (!) After this update the verifier will reject programs that conjure
          an imprecise value on the first loop iteration and use it as
          precise on the second (for iterator-based loops).
      
      I urge people to at least skim over the commit message for patch #3.
      
      Patches are organized as follows:
      - patches #1,2: moving/extracting utility functions;
      - patch #3: introduces exact mode for states comparison and adds
        widening heuristic;
      - patch #4: adds test-cases that demonstrate why the series is
        necessary;
      - patch #5: extends patch #3 with a notion of state loop entries,
        these entries have to be tracked to correctly identify that
        different verifier states belong to the same states loop;
      - patch #6: adds a test-case that demonstrates a program
        which requires loop entry tracking for correct verification;
      - patch #7: just adds a few debug prints.
      
      The following actions are planned as a followup for this patch-set:
      - the implementation has to be adapted for the callbacks handling
        logic as part of a fix for [1];
      - it is necessary to explore ways to improve the widening heuristic
        to handle the iters_task_vma test without the need to insert
        barrier_var() calls;
      - the explored-states eviction logic on cache miss has to be extended
        to either:
        - allow eviction of checkpoint states, or
        - be sped up in case there are many active checkpoints associated
          with the same instruction.
      
      The patch-set is a followup for mailing list discussion [1].
      
      Changelog:
      - V2 [3] -> V3:
        - correct check for stack spills in widen_imprecise_scalars(),
          added test case progs/iters.c:widen_spill to check the behavior
          (suggested by Andrii);
        - allow eviction of checkpoint states in is_state_visited() to avoid
          pathological verifier performance when an iterator-based loop does
          not converge (discussion with Alexei).
      - V1 [2] -> V2, applied changes suggested by Alexei offlist:
        - __explored_state() function removed;
        - same_callsites() function is now used in clean_live_states();
        - patches #1,2 are added as preparatory code movement;
        - in process_iter_next_call() a safeguard is added to verify that
          cur_st->parent exists and has the expected insn index / call sites.
      
      [1] https://lore.kernel.org/bpf/97a90da09404c65c8e810cf83c94ac703705dc0e.camel@gmail.com/
      [2] https://lore.kernel.org/bpf/20231021005939.1041-1-eddyz87@gmail.com/
      [3] https://lore.kernel.org/bpf/20231022010812.9201-1-eddyz87@gmail.com/
      ====================
      
      Link: https://lore.kernel.org/r/20231024000917.12153-1-eddyz87@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: print full verifier states on infinite loop detection · b4d82395
      Eduard Zingerman authored
      Additional logging in is_state_visited(): if an infinite loop is
      detected, print the full verifier state for both the current and
      equivalent states.
      Acked-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
      Link: https://lore.kernel.org/r/20231024000917.12153-8-eddyz87@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • selftests/bpf: test if state loops are detected in a tricky case · 64870fee
      Eduard Zingerman authored
      A convoluted test case for the iterator convergence logic that
      demonstrates that states with a branch count equal to 0 might still
      be part of a loop that has not been completely explored.
      
      E.g. consider the following state diagram:
      
                     initial     Here state 'succ' was processed first,
                       |         it was eventually tracked to produce a
                       V         state identical to 'hdr'.
          .---------> hdr        All branches from 'succ' had been explored
          |            |         and thus 'succ' has its .branches == 0.
          |            V
          |    .------...        Suppose states 'cur' and 'succ' correspond
          |    |       |         to the same instruction + callsites.
          |    V       V         In such case it is necessary to check
          |   ...     ...        whether 'succ' and 'cur' are identical.
          |    |       |         If 'succ' and 'cur' are a part of the same loop
          |    V       V         they have to be compared exactly.
          |   succ <- cur
          |    |
          |    V
          |   ...
          |    |
          '----'
      Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
      Link: https://lore.kernel.org/r/20231024000917.12153-7-eddyz87@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: correct loop detection for iterators convergence · 2a099282
      Eduard Zingerman authored
      It turns out that .branches > 0 in is_state_visited() is not a
      sufficient condition to identify if two verifier states form a loop
      when iterators convergence is computed. This commit adds logic to
      distinguish situations like below:
      
       (I)            initial       (II)            initial
                        |                             |
                        V                             V
           .---------> hdr                           ..
           |            |                             |
           |            V                             V
           |    .------...                    .------..
           |    |       |                     |       |
           |    V       V                     V       V
           |   ...     ...               .-> hdr     ..
           |    |       |                |    |       |
           |    V       V                |    V       V
           |   succ <- cur               |   succ <- cur
           |    |                        |    |
           |    V                        |    V
           |   ...                       |   ...
           |    |                        |    |
           '----'                        '----'
      
      For both (I) and (II), successor 'succ' of the current state 'cur'
      was previously explored and has its branches count at 0. However, the
      loop entry 'hdr' corresponding to 'succ' might be a part of the
      current DFS path. If that is the case, 'succ' and 'cur' are members
      of the same loop and have to be compared exactly.
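
      A hedged sketch of the check this implies (stand-in types and names,
      not the actual verifier code):

        struct vstate {
                unsigned int branches;     /* outstanding unexplored branches */
                struct vstate *loop_entry; /* loop header this state maps to */
        };

        /* A state is still on the current DFS path while it has
         * unexplored branches. */
        static int on_dfs_path(const struct vstate *st)
        {
                return st->branches > 0;
        }

        /* 'succ' itself may have branches == 0, as in (I) and (II) above;
         * what matters is whether its loop entry is still on the path. */
        static int same_loop(const struct vstate *succ)
        {
                return succ->loop_entry && on_dfs_path(succ->loop_entry);
        }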
      Co-developed-by: Andrii Nakryiko <andrii.nakryiko@gmail.com>
      Co-developed-by: Alexei Starovoitov <alexei.starovoitov@gmail.com>
      Reviewed-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
      Link: https://lore.kernel.org/r/20231024000917.12153-6-eddyz87@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • selftests/bpf: tests with delayed read/precision marks in loop body · 389ede06
      Eduard Zingerman authored
      These test cases try to hide read and precision marks from the loop
      convergence logic: marks would only be assigned on subsequent loop
      iterations or after exploring states pushed to the env->head stack
      first. Without the verifier fix to use exact states comparison logic
      for iterator convergence, these tests (except 'triple_continue')
      would be erroneously marked as safe.
      Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
      Link: https://lore.kernel.org/r/20231024000917.12153-5-eddyz87@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: exact states comparison for iterator convergence checks · 2793a8b0
      Eduard Zingerman authored
      Convergence for open-coded iterators is computed in is_state_visited()
      by examining states with branches count > 1 and using states_equal().
      states_equal() computes the sub-state relation using read and
      precision marks. These marks are propagated from child states and are
      thus not guaranteed to be complete inside a loop when the branches
      count > 1. This can be demonstrated using the following unsafe
      program:
      
           1. r7 = -16
           2. r6 = bpf_get_prandom_u32()
           3. while (bpf_iter_num_next(&fp[-8])) {
           4.   if (r6 != 42) {
           5.     r7 = -32
           6.     r6 = bpf_get_prandom_u32()
           7.     continue
           8.   }
           9.   r0 = r10
          10.   r0 += r7
          11.   r8 = *(u64 *)(r0 + 0)
          12.   r6 = bpf_get_prandom_u32()
          13. }
      
      Here the verifier would first visit path 1-3, create a checkpoint at
      3 with r7=-16, and continue to 4-7,3 with r7=-32.

      Because instructions 9-12 had not been visited yet, the existing
      checkpoint at 3 does not have a read or precision mark for r7.
      Thus states_equal() would return true and the verifier would discard
      the current state, so the unsafe memory access at 11 would not be
      caught.
      
      This commit closes the loophole by introducing exact state
      comparisons for the iterator convergence logic:
      - registers are compared using regs_exact() regardless of read or
        precision marks;
      - stack slots must have identical types.
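
      A minimal, self-contained sketch of these rules (the struct and
      helpers below are simplified stand-ins for the verifier internals,
      not the actual kernel code):

        #include <stdbool.h>

        struct reg_state {
                long umin, umax; /* tracked value range */
                bool read;       /* read mark, propagated from children */
                bool precise;    /* precision mark */
        };

        /* Exact mode: ranges must match verbatim; read/precision marks are
         * ignored because they may not be finalized inside a loop. */
        static bool regs_exact(const struct reg_state *old,
                               const struct reg_state *cur)
        {
                return old->umin == cur->umin && old->umax == cur->umax;
        }

        /* Non-exact mode: an unread register cannot affect the verdict and
         * a wider old range may subsume a narrower current one. */
        static bool regs_safe(const struct reg_state *old,
                              const struct reg_state *cur, bool exact)
        {
                if (exact)
                        return regs_exact(old, cur);
                if (!old->read)
                        return true;
                return old->umin <= cur->umin && cur->umax <= old->umax;
        }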
      
      Unfortunately, this is too strict even for simple programs like below:
      
          i = 0;
          while(iter_next(&it))
            i++;
      
      At each iteration step i++ would produce a new distinct state and
      eventually the instruction processing limit would be reached.
      
      To avoid such behavior, speculatively forget (widen) the range for
      imprecise scalar registers if those registers were not precise at the
      end of the previous iteration and do not match exactly.

      This is a conservative heuristic that allows verification of a wide
      range of programs; however, it precludes verification of programs
      that conjure an imprecise value on the first loop iteration and use
      it as precise on the second.
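
      Reusing the reg_state sketch above, the widening heuristic amounts to
      roughly the following (hedged; the verifier actually marks the
      register as an unknown scalar):

        #include <limits.h>

        static void maybe_widen_reg(const struct reg_state *old,
                                    struct reg_state *cur)
        {
                if (old->precise || cur->precise)
                        return;               /* precise values must match */
                if (regs_exact(old, cur))
                        return;               /* identical, nothing to widen */
                cur->umin = LONG_MIN;         /* forget the range so that   */
                cur->umax = LONG_MAX;         /* i = 0,1,2,... converge     */
        }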
      
      Test case iter_task_vma_for_each() presents one such case:
      
              unsigned int seen = 0;
              ...
              bpf_for_each(task_vma, vma, task, 0) {
                      if (seen >= 1000)
                              break;
                      ...
                      seen++;
              }
      
      Here clang generates the following code:
      
      <LBB0_4>:
            24:       r8 = r6                          ; stash current value of
                      ... body ...                       'seen'
            29:       r1 = r10
            30:       r1 += -0x8
            31:       call bpf_iter_task_vma_next
            32:       r6 += 0x1                        ; seen++;
            33:       if r0 == 0x0 goto +0x2 <LBB0_6>  ; exit on next() == NULL
            34:       r7 += 0x10
            35:       if r8 < 0x3e7 goto -0xc <LBB0_4> ; loop on seen < 1000
      
      <LBB0_6>:
            ... exit ...
      
      Note that the counter in r6 is copied to r8 and then incremented, and
      the conditional jump is done using r8. Because of this, the precision
      mark for r6 lags one state behind the precision mark on r8 and the
      widening logic kicks in.
      
      Adding barrier_var(seen) after the conditional is sufficient to force
      clang to use the same register for both counting and the conditional
      jump.
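
      For illustration, the workaround looks roughly like this
      (barrier_var() is the selftests helper from bpf_misc.h that forces a
      variable through a register):

        unsigned int seen = 0;
        ...
        bpf_for_each(task_vma, vma, task, 0) {
                if (seen >= 1000)
                        break;
                /* pin 'seen' to one register so the precision mark lands
                 * on the same register the conditional jump uses */
                barrier_var(seen);
                ...
                seen++;
        }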
      
      This issue was discussed in the thread [1] which was started by
      Andrew Werner <awerner32@gmail.com> demonstrating a similar bug
      in callback functions handling. The callbacks would be addressed
      in a followup patch.
      
      [1] https://lore.kernel.org/bpf/97a90da09404c65c8e810cf83c94ac703705dc0e.camel@gmail.com/
      Co-developed-by: Andrii Nakryiko <andrii.nakryiko@gmail.com>
      Co-developed-by: Alexei Starovoitov <alexei.starovoitov@gmail.com>
      Link: https://lore.kernel.org/r/20231024000917.12153-4-eddyz87@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: extract same_callsites() as utility function · 4c97259a
      Eduard Zingerman authored
      Extract same_callsites() from clean_live_states() as a utility function.
      This function would be used by the next patch in the set.
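
      A hedged sketch of what the helper checks (two states share callsites
      if every active frame was entered from the same call instruction;
      simplified, not the verbatim kernel code):

        static bool same_callsites(struct bpf_verifier_state *a,
                                   struct bpf_verifier_state *b)
        {
                int fr;

                if (a->curframe != b->curframe)
                        return false;
                for (fr = a->curframe; fr >= 0; fr--)
                        if (a->frame[fr]->callsite != b->frame[fr]->callsite)
                                return false;
                return true;
        }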
      Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
      Link: https://lore.kernel.org/r/20231024000917.12153-3-eddyz87@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: move explored_state() closer to the beginning of verifier.c · 3c4e420c
      Eduard Zingerman authored
      Subsequent patches will make use of the explored_state() function.
      Move it up to avoid adding an unnecessary prototype.
      Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
      Link: https://lore.kernel.org/r/20231024000917.12153-2-eddyz87@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  2. 23 Oct, 2023 2 commits
  3. 20 Oct, 2023 18 commits
  4. 19 Oct, 2023 2 commits
  5. 18 Oct, 2023 2 commits
  6. 17 Oct, 2023 7 commits
    • selftests/bpf: Add additional mprog query test coverage · 24516309
      Daniel Borkmann authored
      Add several new test cases which assert corner cases of the mprog
      query mechanism, for example, passing in an array that is too small
      or larger than the current count.
      
        ./test_progs -t tc_opts
        #252     tc_opts_after:OK
        #253     tc_opts_append:OK
        #254     tc_opts_basic:OK
        #255     tc_opts_before:OK
        #256     tc_opts_chain_classic:OK
        #257     tc_opts_chain_mixed:OK
        #258     tc_opts_delete_empty:OK
        #259     tc_opts_demixed:OK
        #260     tc_opts_detach:OK
        #261     tc_opts_detach_after:OK
        #262     tc_opts_detach_before:OK
        #263     tc_opts_dev_cleanup:OK
        #264     tc_opts_invalid:OK
        #265     tc_opts_max:OK
        #266     tc_opts_mixed:OK
        #267     tc_opts_prepend:OK
        #268     tc_opts_query:OK
        #269     tc_opts_query_attach:OK
        #270     tc_opts_replace:OK
        #271     tc_opts_revision:OK
        Summary: 20/0 PASSED, 0 SKIPPED, 0 FAILED
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Reviewed-by: Alan Maguire <alan.maguire@oracle.com>
      Link: https://lore.kernel.org/bpf/20231017081728.24769-1-daniel@iogearbox.net
    • selftests/bpf: Add selftest for bpf_task_under_cgroup() in sleepable prog · 44cb03f1
      Yafang Shao authored
      The result is as follows:
      
        $ tools/testing/selftests/bpf/test_progs --name=task_under_cgroup
        #237     task_under_cgroup:OK
        Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED
      
      Without the previous patch, there will be RCU warnings in dmesg when
      CONFIG_PROVE_RCU is enabled; with the previous patch applied, there
      are no warnings.
      Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Stanislav Fomichev <sdf@google.com>
      Link: https://lore.kernel.org/bpf/20231007135945.4306-2-laoar.shao@gmail.com
    • bpf: Fix missed rcu read lock in bpf_task_under_cgroup() · 29a7e00f
      Yafang Shao authored
      When employed within a sleepable program not under RCU protection, the
      use of 'bpf_task_under_cgroup()' may trigger a warning in the kernel log,
      particularly when CONFIG_PROVE_RCU is enabled:
      
        [ 1259.662357] WARNING: suspicious RCU usage
        [ 1259.662358] 6.5.0+ #33 Not tainted
        [ 1259.662360] -----------------------------
        [ 1259.662361] include/linux/cgroup.h:423 suspicious rcu_dereference_check() usage!
      
      Other info that might help to debug this:
      
        [ 1259.662366] rcu_scheduler_active = 2, debug_locks = 1
        [ 1259.662368] 1 lock held by trace/72954:
        [ 1259.662369]  #0: ffffffffb5e3eda0 (rcu_read_lock_trace){....}-{0:0}, at: __bpf_prog_enter_sleepable+0x0/0xb0
      
      Stack backtrace:
      
        [ 1259.662385] CPU: 50 PID: 72954 Comm: trace Kdump: loaded Not tainted 6.5.0+ #33
        [ 1259.662391] Call Trace:
        [ 1259.662393]  <TASK>
        [ 1259.662395]  dump_stack_lvl+0x6e/0x90
        [ 1259.662401]  dump_stack+0x10/0x20
        [ 1259.662404]  lockdep_rcu_suspicious+0x163/0x1b0
        [ 1259.662412]  task_css_set.part.0+0x23/0x30
        [ 1259.662417]  bpf_task_under_cgroup+0xe7/0xf0
        [ 1259.662422]  bpf_prog_7fffba481a3bcf88_lsm_run+0x5c/0x93
        [ 1259.662431]  bpf_trampoline_6442505574+0x60/0x1000
        [ 1259.662439]  bpf_lsm_bpf+0x5/0x20
        [ 1259.662443]  ? security_bpf+0x32/0x50
        [ 1259.662452]  __sys_bpf+0xe6/0xdd0
        [ 1259.662463]  __x64_sys_bpf+0x1a/0x30
        [ 1259.662467]  do_syscall_64+0x38/0x90
        [ 1259.662472]  entry_SYSCALL_64_after_hwframe+0x6e/0xd8
        [ 1259.662479] RIP: 0033:0x7f487baf8e29
        [...]
        [ 1259.662504]  </TASK>
      
      This issue can be reproduced by executing a straightforward program, as
      demonstrated below:
      
      SEC("lsm.s/bpf")
      int BPF_PROG(lsm_run, int cmd, union bpf_attr *attr, unsigned int size)
      {
              struct cgroup *cgrp = NULL;
              struct task_struct *task;
              int ret = 0;
      
              if (cmd != BPF_LINK_CREATE)
                      return 0;
      
              // The cgroup2 should be mounted first
              cgrp = bpf_cgroup_from_id(1);
              if (!cgrp)
                      goto out;
              task = bpf_get_current_task_btf();
              if (bpf_task_under_cgroup(task, cgrp))
                      ret = -1;
              bpf_cgroup_release(cgrp);
      
      out:
              return ret;
      }
      
      After running the program, if you subsequently execute another BPF program,
      you will encounter the warning.
      
      It's worth noting that task_under_cgroup_hierarchy() is also utilized by
      bpf_current_task_under_cgroup(). However, bpf_current_task_under_cgroup()
      doesn't exhibit this issue because it cannot be used in sleepable BPF
      programs.
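
      The shape of the fix is to take an explicit RCU read lock inside the
      kfunc so that sleepable callers are covered as well (a hedged sketch
      of the idea, not the verbatim patch):

        __bpf_kfunc long bpf_task_under_cgroup(struct task_struct *task,
                                               struct cgroup *ancestor)
        {
                long ret;

                rcu_read_lock();
                ret = task_under_cgroup_hierarchy(task, ancestor);
                rcu_read_unlock();
                return ret;
        }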
      
      Fixes: b5ad4cdc ("bpf: Add bpf_task_under_cgroup() kfunc")
      Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Stanislav Fomichev <sdf@google.com>
      Cc: Feng Zhou <zhoufeng.zf@bytedance.com>
      Cc: KP Singh <kpsingh@kernel.org>
      Link: https://lore.kernel.org/bpf/20231007135945.4306-1-laoar.shao@gmail.com
    • net, bpf: Add a warning if NAPI cb missed xdp_do_flush(). · 9a675ba5
      Sebastian Andrzej Siewior authored
      A few drivers were missing an xdp_do_flush() invocation after
      XDP_REDIRECT.

      Add three helper functions, one for each of the per-CPU lists, which
      return true if the per-CPU list is non-empty and flush the list.

      Add xdp_do_check_flushed(), which invokes each helper function and
      emits a warning if one of them reported a non-empty list.
      
      Hide everything behind CONFIG_DEBUG_NET.
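
      Put together, the check has roughly this shape (hedged sketch; the
      helper names follow the pattern described above but are illustrative):

        #ifdef CONFIG_DEBUG_NET
        void xdp_do_check_flushed(struct napi_struct *napi)
        {
                bool missed;

                /* each helper returns true if its per-CPU list was
                 * non-empty, flushing the list as a side effect */
                missed = dev_check_flush();
                missed |= cpu_map_check_flush();
                missed |= xsk_map_check_flush();

                WARN_ONCE(missed,
                          "Missing xdp_do_flush() invocation after NAPI by %ps\n",
                          napi->poll);
        }
        #endif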
      Suggested-by: Jesper Dangaard Brouer <hawk@kernel.org>
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
      Acked-by: Jakub Kicinski <kuba@kernel.org>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Link: https://lore.kernel.org/bpf/20231016125738.Yt79p1uF@linutronix.de
    • libbpf: Don't assume SHT_GNU_verdef presence for SHT_GNU_versym section · 137df118
      Andrii Nakryiko authored
      Fix the too-eager assumption that the SHT_GNU_verdef ELF section is
      going to be present whenever a binary has a SHT_GNU_versym section.
      It seems that either SHT_GNU_verdef or SHT_GNU_verneed can be used,
      so failing on a missing SHT_GNU_verdef actually breaks use cases in
      production.
      
      One specific reported issue, which was used to manually test this fix,
      was trying to attach to `readline` function in BASH binary.
      
      Fixes: bb7fa093 ("libbpf: Support symbol versioning for uprobe")
      Reported-by: Liam Wisehart <liamwisehart@meta.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Tested-by: Manu Bretelle <chantr4@gmail.com>
      Reviewed-by: Fangrui Song <maskray@google.com>
      Acked-by: Hengqi Chen <hengqi.chen@gmail.com>
      Link: https://lore.kernel.org/bpf/20231016182840.4033346-1-andrii@kernel.org
    • Merge tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next · a3c2dd96
      Jakub Kicinski authored
      Daniel Borkmann says:
      
      ====================
      pull-request: bpf-next 2023-10-16
      
      We've added 90 non-merge commits during the last 25 day(s) which contain
      a total of 120 files changed, 3519 insertions(+), 895 deletions(-).
      
      The main changes are:
      
      1) Add missed stats for kprobes to retrieve the number of missed kprobe
         executions and subsequent executions of BPF programs, from Jiri Olsa.
      
      2) Add cgroup BPF sockaddr hooks for unix sockets. The use case is
         for systemd to reimplement the LogNamespace feature which allows
         running multiple instances of systemd-journald to process the logs
         of different services, from Daan De Meyer.
      
      3) Implement BPF CPUv4 support for s390x BPF JIT, from Ilya Leoshkevich.
      
      4) Improve BPF verifier log output for scalar registers to better
         disambiguate their internal state wrt defaults vs min/max values
         matching, from Andrii Nakryiko.
      
      5) Extend the BPF fib lookup helpers for IPv4/IPv6 to support retrieving
         the source IP address with a new BPF_FIB_LOOKUP_SRC flag,
         from Martynas Pumputis.
      
      6) Add support for open-coded task_vma iterator to help with symbolization
         for BPF-collected user stacks, from Dave Marchevsky.
      
      7) Add libbpf getters for accessing individual BPF ring buffers which
         is useful for polling them individually, for example, from Martin Kelly.
      
      8) Extend AF_XDP selftests to validate the SHARED_UMEM feature,
         from Tushar Vyavahare.
      
      9) Improve BPF selftests cross-building support for riscv arch,
         from Björn Töpel.
      
      10) Add the ability to pin a BPF timer to the same calling CPU,
         from David Vernet.
      
      11) Fix libbpf's bpf_tracing.h macros for riscv to use the generic
         implementation of PT_REGS_SYSCALL_REGS() to access syscall arguments,
         from Alexandre Ghiti.
      
      12) Extend libbpf to support symbol versioning for uprobes, from Hengqi Chen.
      
      13) Fix bpftool's skeleton code generation to guarantee that ELF data
          is 8 byte aligned, from Ian Rogers.
      
      14) Inherit system-wide cpu_mitigations_off() setting for Spectre v1/v4
          security mitigations in BPF verifier, from Yafang Shao.
      
      15) Annotate struct bpf_stack_map with __counted_by attribute to prepare
          BPF side for upcoming __counted_by compiler support, from Kees Cook.
      
      * tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (90 commits)
        bpf: Ensure proper register state printing for cond jumps
        bpf: Disambiguate SCALAR register state output in verifier logs
        selftests/bpf: Make align selftests more robust
        selftests/bpf: Improve missed_kprobe_recursion test robustness
        selftests/bpf: Improve percpu_alloc test robustness
        selftests/bpf: Add tests for open-coded task_vma iter
        bpf: Introduce task_vma open-coded iterator kfuncs
        selftests/bpf: Rename bpf_iter_task_vma.c to bpf_iter_task_vmas.c
        bpf: Don't explicitly emit BTF for struct btf_iter_num
        bpf: Change syscall_nr type to int in struct syscall_tp_t
        net/bpf: Avoid unused "sin_addr_len" warning when CONFIG_CGROUP_BPF is not set
        bpf: Avoid unnecessary audit log for CPU security mitigations
        selftests/bpf: Add tests for cgroup unix socket address hooks
        selftests/bpf: Make sure mount directory exists
        documentation/bpf: Document cgroup unix socket address hooks
        bpftool: Add support for cgroup unix socket address hooks
        libbpf: Add support for cgroup unix socket address hooks
        bpf: Implement cgroup sockaddr hooks for unix sockets
        bpf: Add bpf_sock_addr_set_sun_path() to allow writing unix sockaddr from bpf
        bpf: Propagate modified uaddrlen from cgroup sockaddr programs
        ...
      ====================
      
      Link: https://lore.kernel.org/r/20231016204803.30153-1-daniel@iogearbox.net
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • page_pool: fragment API support for 32-bit arch with 64-bit DMA · 90de47f0
      Yunsheng Lin authored
      Currently page_pool_alloc_frag() is not supported on 32-bit arches
      with 64-bit DMA because of the overlap between pp_frag_count and
      dma_addr_upper in 'struct page' on those arches. This combination
      seems to be quite common, see [1], which means a driver may need to
      handle it when using the fragment API.
      
      It is assumed that the combination of the above arches with an
      address space >16TB does not exist: all those arches have a 64-bit
      equivalent, so it seems logical to use the 64-bit version for a
      system with a large address space. It is also assumed that the DMA
      address is page aligned when we are DMA-mapping a page-aligned
      buffer, see [2].
      
      That means we are storing 12 zero bits at the lower end of a DMA
      address, so we can reuse those bits on the above arches to support
      32b+12b, which is 16TB of memory.

      If one of these assumptions turns out to be wrong, a warning is
      emitted so that the user can report it to us.
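
      A self-contained sketch of the packing trick (illustrative layout,
      not the actual 'struct page' code):

        #include <stdbool.h>
        #include <stdint.h>

        #define PAGE_SHIFT 12 /* page-aligned DMA addresses end in 12 zero bits */

        /* Store a 64-bit DMA address, shifted, in a 32-bit slot. Returns
         * false if the address needs more than 32 + 12 bits, i.e. the
         * >16TB assumption was wrong and a warning should be emitted. */
        static bool pp_set_dma_addr(uint32_t *slot, uint64_t dma_addr)
        {
                *slot = (uint32_t)(dma_addr >> PAGE_SHIFT);
                return ((uint64_t)*slot << PAGE_SHIFT) == dma_addr;
        }

        static uint64_t pp_get_dma_addr(uint32_t slot)
        {
                return (uint64_t)slot << PAGE_SHIFT;
        }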
      
      1. https://lore.kernel.org/all/20211117075652.58299-1-linyunsheng@huawei.com/
      2. https://lore.kernel.org/all/20230818145145.4b357c89@kernel.org/
      Tested-by: Alexander Lobakin <aleksander.lobakin@intel.com>
      Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
      CC: Lorenzo Bianconi <lorenzo@kernel.org>
      CC: Alexander Duyck <alexander.duyck@gmail.com>
      CC: Liang Chen <liangchen.linux@gmail.com>
      CC: Guillaume Tucker <guillaume.tucker@collabora.com>
      CC: Matthew Wilcox <willy@infradead.org>
      CC: Linux-MM <linux-mm@kvack.org>
      Link: https://lore.kernel.org/r/20231013064827.61135-2-linyunsheng@huawei.com
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>