1. 09 Nov, 2021 1 commit
    • bpf: sockmap, strparser, and tls are reusing qdisc_skb_cb and colliding · e0dc3b93
      John Fastabend authored
      Strparser reuses the qdisc_skb_cb struct to stash its skb message-handling
      progress, e.g. the offset and length of the skb. First, this is poorly named
      and inherits a struct from qdisc that does not reflect the actual usage of
      cb[] at this layer.
      
      But more importantly, strparser uses the following expression to access its metadata:
      
        (struct _strp_msg *)((void *)skb->cb + offsetof(struct qdisc_skb_cb, data))
      
      Where _strp_msg is defined as:
      
        struct _strp_msg {
              struct strp_msg            strp;                 /*     0     8 */
              int                        accum_len;            /*     8     4 */
      
              /* size: 12, cachelines: 1, members: 2 */
              /* last cacheline: 12 bytes */
        };
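
      The accessor wrapping that expression lives in include/net/strparser.h and
      looks roughly like this (reproduced from memory, so treat it as a sketch
      rather than the exact source):

        static inline struct strp_msg *strp_msg(struct sk_buff *skb)
        {
                return (struct strp_msg *)((void *)skb->cb +
                        offsetof(struct qdisc_skb_cb, data));
        }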
      
      So we use 12 bytes of ->data[] in the struct. However, BPF programs running
      the parser and verdict also have read access to the data[] array. It is not
      too problematic, but we should not be exposing internal state to BPF
      programs. If it is really needed, the probe_read() APIs can be used to read
      kernel memory. And I don't believe moving this around poses any API breakage
      at the cb[] layer, because programs cannot depend on cb[] contents across
      layers.
      
      In order to fix another issue with a ctx rewrite we need to stash a temporary
      variable somewhere. To make this work cleanly, this patch builds a dedicated
      cb struct for sk_skb types, called sk_skb_cb, which can then be used
      consistently across the strparser and sockmap code. Additionally, this opens
      the door to allowing ->cb[] write access afterwards.
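
      For illustration, a per-layer cb struct along these lines could look like the
      sketch below; the field names, sizes, and the scratch slot are assumptions
      for illustration, not necessarily the exact layout from the patch:

        struct sk_skb_cb {
                unsigned char data[20];   /* opaque room mirroring qdisc_skb_cb's data[] */
                struct _strp_msg strp;    /* strparser progress, out of BPF-visible cb space */
                u64 temp_reg;             /* scratch slot for the ctx rewrite mentioned above */
        };

      With such a struct, the strp_msg() accessor shown earlier would simply switch
      its offsetof() target from struct qdisc_skb_cb to struct sk_skb_cb.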
      
      Fixes: 604326b4 ("bpf, sockmap: convert to generic sk_msg interface")
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Tested-by: Jussi Maki <joamaki@gmail.com>
      Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com>
      Link: https://lore.kernel.org/bpf/20211103204736.248403-5-john.fastabend@gmail.com
  2. 08 Nov, 2021 3 commits
    • bpf, sockmap: Fix race in ingress receive verdict with redirect to self · c5d2177a
      John Fastabend authored
      A socket in a sockmap may have different combinations of programs attached
      depending on configuration. There can be no programs, in which case the socket
      acts as a sink only. There can be a TX program only, in which case a BPF
      program is attached to the sending side but no RX program is attached. There
      can be an RX program only, where sends have no BPF program attached but
      receives are hooked with BPF. And finally, both TX and RX programs may be
      attached. This gives us the permutations:
      
       None, Tx, Rx, and TxRx
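
      These permutations map naturally onto a per-socket configuration selector.
      A sketch following the existing TCP_BPF_BASE/TCP_BPF_TX naming might look
      like this, with the RX and TXRX entries being the new cases this patch has
      to handle (names here are illustrative):

        enum {
                TCP_BPF_BASE,     /* no programs: socket is only a sink */
                TCP_BPF_TX,       /* sendmsg hooked, receive path untouched */
                TCP_BPF_RX,       /* receive verdict hooked, sends untouched */
                TCP_BPF_TXRX,     /* both directions hooked */
                TCP_BPF_NUM_CFGS,
        };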
      
      To date most of our use cases have been either the TX case, used as a fast
      datapath to copy directly between a local application and a userspace proxy,
      or the RX and TXRX cases operating an in-kernel proxy. The traffic in the
      first case, where we hook applications into a userspace proxy, looks like
      this:
      
        AppA  redirect   AppB
         Tx <-----------> Rx
         |                |
         +                +
         TCP <--> lo <--> TCP
      
      In this case all traffic from AppA (after the 3-way handshake) is copied into
      the AppB ingress queue and no traffic ever lands on the TCP receive_queue.
      
      In the second case the application never receives traffic on the actual
      userspace socket, except in some rare error cases; instead the send happens
      in the kernel.
      
                 AppProxy       socket pool
             sk0 ------------->{sk1,sk2, skn}
              ^                      |
              |                      |
              |                      v
             ingress              lb egress
             TCP                  TCP
      
      Here, because traffic is never read off the socket with the userspace recv()
      APIs, there is only ever one reader on the sk receive_queue, namely the BPF
      programs.
      
      However, we have started to introduce a third configuration where the BPF
      program on receive should process the data, but the normal case is then to
      push the data into the receive queue of AppB.
      
             AppB
             recv()                (userspace)
           -----------------------
             tcp_bpf_recvmsg()     (kernel)
               |             |
               |             |
               |             |
             ingress_msgQ    |
               |             |
             RX_BPF          |
               |             |
               v             v
             sk->receive_queue
      
      This is different from the App{A,B} redirect because traffic is first received
      on the sk->receive_queue.
      
      Now for the issue. The tcp_bpf_recvmsg() handler first checks the ingress_msg
      queue for any data already handled by the RX BPF program and returned with
      the PASS code, i.e. data that was enqueued on the ingress msg queue. If no
      data exists on that queue, it then checks the socket receive queue.
      Unfortunately, this is the same receive_queue the BPF program is reading data
      off of, so we get a race: it is possible for the recvmsg() hook to pull data
      off the receive_queue before the BPF hook has had a chance to read it. This
      typically happens when an application is banging on recv() and getting
      EAGAINs, until it manages to race with the RX BPF program.
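
      The shape of the racy path is roughly the following (a simplified sketch,
      not the exact kernel code; helper names and signatures are abbreviated):

        static int tcp_bpf_recvmsg_sketch(struct sock *sk, struct msghdr *msg,
                                          size_t len, int flags)
        {
                struct sk_psock *psock = sk_psock_get(sk);
                int copied;

                if (unlikely(!psock))
                        return tcp_recvmsg(sk, msg, len, flags);  /* no psock: plain TCP */

                /* First drain anything the verdict program already PASSed. */
                copied = sk_msg_recvmsg(sk, psock, msg, len, flags);
                if (!copied)
                        /* Racy fallback: this reads sk->sk_receive_queue, the same
                         * queue the RX BPF program consumes from, so recv() can
                         * steal skbs the verdict program has not seen yet.
                         */
                        copied = tcp_recvmsg(sk, msg, len, flags);

                sk_psock_put(sk, psock);
                return copied;
        }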
      
      To fix this, note that before this patch, at attach time when the socket is
      loaded into the map, we only checked whether it needed a TX program or just
      the base set of proto BPF hooks, and then used the general RX hook above
      regardless of whether an RX BPF program was attached. This patch extends that
      check to handle all the cases enumerated above: TX, RX, TXRX, and none. And
      to fix the above race, when an RX program is attached we use a new hook that
      is nearly identical to the old one, except that we no longer let the recv()
      call skip the RX BPF program. Now only the BPF program pulls data from
      sk->receive_queue, and recv() only pulls data from the ingress msgQ after BPF
      program handling.
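
      Conceptually, the attach-time selection then becomes a check over which
      programs are present. The sketch below is illustrative only; the prot table
      name and the exact psock program fields are assumptions:

        static void tcp_bpf_pick_proto_sketch(struct sock *sk, struct sk_psock *psock)
        {
                bool has_tx = psock->progs.msg_parser;
                bool has_rx = psock->progs.stream_verdict || psock->progs.skb_verdict;
                int config;

                if (has_tx && has_rx)
                        config = TCP_BPF_TXRX;
                else if (has_tx)
                        config = TCP_BPF_TX;
                else if (has_rx)
                        /* RX variant: recvmsg() reads only the ingress msg queue and
                         * never falls back to sk->sk_receive_queue, closing the race.
                         */
                        config = TCP_BPF_RX;
                else
                        config = TCP_BPF_BASE;

                sk->sk_prot = &tcp_bpf_prots[config];
        }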
      
      With this resolved, our AppB from above has been up and running for many
      hours without detecting any errors. We verify this by correlating counters in
      the RX BPF events and in AppB to ensure data never skips the BPF program.
      Selftests were not able to detect this because we only run them for a short
      period of time on well-ordered send/recvs, so we don't get any of the noise
      we see in real application environments.
      
      Fixes: 51199405 ("bpf: skb_verdict, support SK_PASS on RX BPF path")
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Tested-by: Jussi Maki <joamaki@gmail.com>
      Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com>
      Link: https://lore.kernel.org/bpf/20211103204736.248403-4-john.fastabend@gmail.com
    • bpf, sockmap: Remove unhash handler for BPF sockmap usage · b8b8315e
      John Fastabend authored
      We do not need to handle unhash from the BPF side; we can simply wait for the
      close to happen. The original concern was that a socket could transition from
      the ESTABLISHED state to a new state while the BPF hook was still attached.
      But we convinced ourselves this is no longer possible, and we also improved
      BPF sockmap to handle listen sockets, so this is no longer a problem.
      
      More importantly, though, there are cases where unhash is called while data
      is still in the receive queue. The BPF unhash logic would flush this data,
      which is wrong: to be correct it should keep the data in the receive queue
      and allow the receiving application to continue reading it. This may happen
      when tcp_abort() is invoked, for example. Instead of complicating the unhash
      logic, simply moving all of this to the tcp_close() hook solves the problem.
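
      A sketch of the resulting proto setup (illustrative; only the relevant
      callbacks are shown, and the helper name here is an assumption):

        static void tcp_bpf_rebuild_protos_sketch(struct proto *prot,
                                                  const struct proto *base)
        {
                *prot = *base;
                prot->close = sock_map_close;   /* all sockmap teardown happens on close */
                /* prot->unhash is intentionally left as stock TCP: flushing the
                 * receive queue from unhash would drop data the application could
                 * still legitimately read.
                 */
        }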
      
      Fixes: 51199405 ("bpf: skb_verdict, support SK_PASS on RX BPF path")
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Tested-by: Jussi Maki <joamaki@gmail.com>
      Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com>
      Link: https://lore.kernel.org/bpf/20211103204736.248403-3-john.fastabend@gmail.com
    • bpf, sockmap: Use stricter sk state checks in sk_lookup_assign · 40a34121
      John Fastabend authored
      In order to fix an issue with sockets in TCP sockmap redirect cases, we plan
      to allow CLOSE state sockets to exist in the sockmap. However, the check in
      bpf_sk_lookup_assign() currently only invalidates sockets in the
      TCP_ESTABLISHED case, relying on the checks at sockmap insert time to ensure
      we never have CLOSE state sockets in the map.
      
      To prepare for this change we flip the logic in bpf_sk_lookup_assign() to
      explicitly test for the accepted cases, namely a TCP socket in TCP_LISTEN or
      a UDP socket in TCP_CLOSE state. This also makes the code more resilient to
      future changes.
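
      A sketch of the flipped check (simplified; the real helper also enforces the
      other sk_lookup constraints, and the function name here is illustrative):

        static bool sk_lookup_state_ok_sketch(const struct sock *sk)
        {
                /* Accept only the known-good states instead of rejecting known-bad ones. */
                if (sk->sk_protocol == IPPROTO_TCP)
                        return sk->sk_state == TCP_LISTEN;
                if (sk->sk_protocol == IPPROTO_UDP)
                        return sk->sk_state == TCP_CLOSE;
                return false;
        }
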
      Suggested-by: Jakub Sitnicki <jakub@cloudflare.com>
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com>
      Link: https://lore.kernel.org/bpf/20211103204736.248403-2-john.fastabend@gmail.com