1. 12 Dec, 2020 2 commits
  2. 09 Dec, 2020 9 commits
  3. 08 Dec, 2020 2 commits
  4. 07 Dec, 2020 12 commits
  5. 06 Dec, 2020 3 commits
  6. 05 Dec, 2020 9 commits
  7. 04 Dec, 2020 3 commits
    • nfc: s3fwrn5: skip the NFC bootloader mode · 4fb7b98c
      Bongsu Jeon authored
      If there isn't a proper NFC firmware image, bootloader mode will be
      skipped.
      Signed-off-by: Bongsu Jeon <bongsu.jeon@samsung.com>
      Reviewed-by: Krzysztof Kozlowski <krzk@kernel.org>
      Link: https://lore.kernel.org/r/20201203225257.2446-1-bongsu.jeon@samsung.com
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • Merge branch 'perf-optimizations-for-tcp-recv-zerocopy' · 43be3a3c
      Jakub Kicinski authored
      Arjun Roy says:
      
      ====================
      Perf. optimizations for TCP Recv. Zerocopy
      
      This patchset contains several optimizations for TCP Recv. Zerocopy.
      
      Summarized:
      1. It is possible that a read payload is not exactly page aligned -
      that there may exist "straggler" bytes that we cannot map into the
      caller's address space cleanly. For this, we allow the caller to
      provide as argument a "hybrid copy buffer", turning
      getsockopt(TCP_ZEROCOPY_RECEIVE) into a "hybrid" operation that allows
      the caller to avoid a subsequent recvmsg() call to read the
      stragglers.
      
      2. Similarly, for "small" read payloads that are either below the size
      of a page, or small enough that remapping pages is not a performance
      win - we allow the user to short-circuit the remapping operations
      entirely and simply copy into the buffer provided.
      
      Some of the patches in the middle of this set are refactors to support
      this "short-circuiting" optimization.
      
      3. We allow the user to provide a hint that performing a page zap
      operation (and the accompanying TLB shootdown) may not be necessary,
      for the provided region that the kernel will attempt to map pages
      into. This allows us to avoid that expensive operation while holding
      the socket lock, which provides a significant performance advantage.
      
      With all of these changes combined, "medium"-sized receive traffic
      (multiple tens to a few hundred KB) sees significant efficiency gains
      when using TCP receive zerocopy instead of regular recvmsg(). For
      example, with RPC-style traffic with 32KB messages, there is a roughly
      15% efficiency improvement when using zerocopy. Without these changes,
      there is a roughly 60-70% efficiency reduction with such messages when
      employing zerocopy.
      ====================
      
      Link: https://lore.kernel.org/r/20201202225349.935284-1-arjunroy.kdev@gmail.com
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
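      The three optimizations above meet at the getsockopt(TCP_ZEROCOPY_RECEIVE)
      call site. The following is a minimal userspace sketch, not the authors'
      code, assuming the UAPI additions merged by this series into <linux/tcp.h>
      (the copybuf_address/copybuf_len fields, the flags field, and
      TCP_RECEIVE_ZEROCOPY_FLAG_TLB_CLEAN_HINT); RCV_REGION_LEN, zc_receive(),
      and the buffer names are illustrative only.

      #include <string.h>
      #include <sys/types.h>
      #include <sys/socket.h>
      #include <netinet/in.h>
      #include <linux/tcp.h>  /* struct tcp_zerocopy_receive, TCP_ZEROCOPY_RECEIVE */

      #define RCV_REGION_LEN (1 << 20)  /* illustrative mapping size */

      static ssize_t zc_receive(int fd, void *rcv_region,
                                char *copybuf, int copybuf_len)
      {
              struct tcp_zerocopy_receive zc;
              socklen_t zc_len = sizeof(zc);

              memset(&zc, 0, sizeof(zc));
              /* Page-aligned region the kernel maps payload pages into. */
              zc.address = (__u64)(unsigned long)rcv_region;
              zc.length = RCV_REGION_LEN;
              /* Points 1 and 2: straggler bytes, and payloads too small to be
               * worth remapping, are copied into this buffer instead of
               * requiring a follow-up recvmsg().
               */
              zc.copybuf_address = (__u64)(unsigned long)copybuf;
              zc.copybuf_len = copybuf_len;
              /* Point 3: hint that nothing is currently mapped in the region,
               * so the kernel may skip the page zap and TLB shootdown under
               * the socket lock; the kernel still zaps if the hint is wrong.
               */
              zc.flags = TCP_RECEIVE_ZEROCOPY_FLAG_TLB_CLEAN_HINT;

              if (getsockopt(fd, IPPROTO_TCP, TCP_ZEROCOPY_RECEIVE, &zc, &zc_len))
                      return -1;

              /* On return, zc.length bytes were mapped into rcv_region and
               * zc.copybuf_len bytes (possibly zero) were copied into copybuf.
               */
              return (ssize_t)zc.length + zc.copybuf_len;
      }

      In practice rcv_region is a page-aligned mapping of the TCP socket itself
      (mmap() with PROT_READ/MAP_SHARED on the socket fd, as in the kernel's
      tcp_mmap selftest), and zc.recv_skip_hint still tells the caller how many
      bytes must be read with recvmsg() instead.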
    • net-zerocopy: Defer vm zap unless actually needed. · 94ab9eb9
      Arjun Roy authored
      Zapping pages is required only if we are calling vm_insert_page into a
      region where pages had previously been mapped. Receive zerocopy allows
      reusing such regions, and hitherto called zap_page_range() before
      calling vm_insert_page() in that range.
      
      zap_page_range() can also be triggered from userspace with
      madvise(MADV_DONTNEED). If userspace is configured to call this before
      reusing a segment, or if there was nothing mapped at this virtual
      address to begin with, we can avoid calling zap_page_range() under the
      socket lock. That said, if userspace does not do that, then we are
      still responsible for calling zap_page_range().
      
      This patch adds a flag that the user can use to hint to the kernel
      that a zap is not required. If the flag is not set, or if an older
      user application does not have a flags field at all, then the kernel
      calls zap_page_range() as before. Also, if the flag is set but a zap is
      still required, the kernel performs that zap as necessary, so
      incorrectly indicating that a zap can be avoided does not change the
      correctness of operation. The patch also increases the batch size for
      vm_insert_pages() and prefetches the page structs for the batch, since
      we're about to bump their refcounts.
      
      An alternative mechanism could be to not have a flag, assume by
      default a zap is not needed, and fall back to zapping if needed.
      However, this would harm performance for older applications for which
      a zap is necessary, and thus we implement it with an explicit flag
      so newer applications can opt in.
      
      When using RPC-style traffic with medium sized (tens of KB) RPCs, this
      change yields an efficiency improvement of about 30% for QPS/CPU usage.
      Signed-off-by: Arjun Roy <arjunroy@google.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
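      The hint above is only accurate if userspace zaps the region itself before
      reuse, e.g. with madvise(MADV_DONTNEED) as the commit message notes. A
      hedged sketch of that reuse step, continuing the hypothetical names from
      the earlier zc_receive() sketch:

      #include <sys/mman.h>

      /* Release the pages mapped by the previous TCP_ZEROCOPY_RECEIVE call,
       * off the socket-lock path. Afterwards nothing is mapped in the region,
       * so passing TCP_RECEIVE_ZEROCOPY_FLAG_TLB_CLEAN_HINT on the next call
       * is truthful; if the hint is ever wrong, the kernel performs the zap
       * itself, so correctness is unaffected.
       */
      static void release_consumed_pages(void *rcv_region, size_t mapped_len)
      {
              madvise(rcv_region, mapped_len, MADV_DONTNEED);
      }

      With this pattern the zap and TLB shootdown either move out from under the
      socket lock into the application's own consume path, or are skipped
      entirely when nothing was mapped to begin with, which is where the
      efficiency gain described above comes from.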