19 May, 2020 (34 commits)
  18 May, 2020 (4 commits)
  17 May, 2020 (2 commits)
    • rds: convert get_user_pages() --> pin_user_pages() · dbfe7d74
      John Hubbard authored
      This code was using get_user_pages_fast(), in a "Case 2" scenario
      (DMA/RDMA), using the categorization from [1]. That means that it's
      time to convert the get_user_pages_fast() + put_page() calls to
      pin_user_pages_fast() + unpin_user_pages() calls.
      
      There is some helpful background in [2]: basically, this is a small
      part of fixing a long-standing disconnect between pinning pages and
      file systems' use of those pages.
      
      [1] Documentation/core-api/pin_user_pages.rst
      
      [2] "Explicit pinning of user-space pages":
          https://lwn.net/Articles/807108/
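
      The pairing rule the patch applies can be modeled with a small
      user-space sketch. The struct page, pin_user_pages_fast(), and
      unpin_user_pages() below are simplified stand-ins (the real
      pin_user_pages_fast() also takes a start address and gup_flags, and
      pinning uses GUP_PIN_COUNTING_BIAS rather than a plain increment);
      only the acquire/release pairing is modeled here:

```c
#include <assert.h>
#include <stddef.h>

/* Stub standing in for the kernel's struct page; only a refcount is modeled. */
struct page { int refcount; };

/* Hypothetical stand-in: real signature is
 * pin_user_pages_fast(start, nr_pages, gup_flags, pages). */
static int pin_user_pages_fast(struct page **pages, int nr_pages)
{
    for (int i = 0; i < nr_pages; i++)
        pages[i]->refcount++;   /* the kernel uses GUP_PIN_COUNTING_BIAS */
    return nr_pages;
}

/* Stand-in for unpin_user_pages(pages, npages): releases each pin. */
static void unpin_user_pages(struct page **pages, unsigned long npages)
{
    for (unsigned long i = 0; i < npages; i++)
        pages[i]->refcount--;
}

/* "Case 2" (DMA/RDMA) discipline: every pinned page is unpinned exactly
 * once, replacing the old get_user_pages_fast() + put_page() pairing. */
int rds_pin_pages_demo(struct page **pages, int nr)
{
    int got = pin_user_pages_fast(pages, nr);
    if (got < nr) {             /* partial pin: release what we did get */
        unpin_user_pages(pages, got);
        return -1;
    }
    /* ... DMA to/from the pinned pages would happen here ... */
    unpin_user_pages(pages, nr);
    return 0;
}
```

      The point of the conversion is that pinned (FOLL_PIN) pages are
      released with unpin_user_pages(), never put_page(), so the two
      acquisition styles stay distinguishable to the rest of the kernel.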
      
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Jakub Kicinski <kuba@kernel.org>
      Cc: netdev@vger.kernel.org
      Cc: linux-rdma@vger.kernel.org
      Cc: rds-devel@oss.oracle.com
      Signed-off-by: John Hubbard <jhubbard@nvidia.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Merge branch 'mptcp-do-not-block-on-subflow-socket' · 9740a7ae
      David S. Miller authored
      Florian Westphal says:
      
      ====================
      mptcp: do not block on subflow socket
      
      This series reworks mptcp_sendmsg logic to avoid blocking on the subflow
      socket.
      
      It does so by removing the wait loop from mptcp_sendmsg_frag helper.
      
      In order to do that, it moves prerequisites that are currently
      handled in mptcp_sendmsg_frag (and cause it to wait until they are
      met, e.g. frag cache refill) into the callers.
      
      The worker can just reschedule in case no subflow socket is ready,
      since it can't wait: doing so would block other work items, and it
      makes little sense to (re)send data when resources are already low
      anyway.
      
      The sendmsg path can use the existing wait logic until memory
      becomes available.
      
      Because large send requests can result in multiple mptcp_sendmsg_frag
      calls from mptcp_sendmsg, we may need to restart the socket lookup in
      case the subflow can't accept more data or memory is low.
      
      Waiting then happens on the mptcp socket; the existing wait handling
      releases the msk lock while blocking.
      
      Lastly, there is no need to use GFP_ATOMIC for extension allocation:
      __skb_ext_alloc is extended with a gfp_t argument instead of
      hard-coding GFP_ATOMIC, and the allocation constraints are then
      relaxed for the mptcp case, since those requests occur in process
      context.
      ====================
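
      The gfp change can be illustrated with a small user-space model. The
      gfp_t typedef, the flag values, and the function body below are
      simplified stand-ins (after this series the kernel's __skb_ext_alloc
      really does take a gfp_t argument; everything else here is
      hypothetical):

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-ins for kernel gfp flags. */
typedef unsigned int gfp_t;
#define GFP_ATOMIC  0x1u  /* may not sleep; can fail under memory pressure */
#define GFP_KERNEL  0x2u  /* process context; allocator may sleep/reclaim */

struct skb_ext { int refcnt; };

/* Modeled after the series: the caller chooses the allocation constraint
 * instead of the helper hard-coding GFP_ATOMIC internally. */
struct skb_ext *__skb_ext_alloc(gfp_t flags)
{
    (void)flags;  /* in the kernel, flags select atomic vs sleeping paths */
    struct skb_ext *ext = malloc(sizeof(*ext));
    if (ext)
        ext->refcnt = 1;
    return ext;
}
```

      Callers that always run in process context, as the mptcp requests do,
      can then pass GFP_KERNEL and let the allocator sleep and reclaim,
      while atomic-context callers keep passing GFP_ATOMIC.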
      Signed-off-by: David S. Miller <davem@davemloft.net>