1. 02 Feb, 2021 1 commit
    • nvme-rdma: add clean action for failed reconnection · 958dc1d3
      Chao Leng authored
      
      A crash happens when a failed reconnection is injected.
      If the reconnect fails after the I/O queues have been started, the queues
      are unquiesced and new requests continue to be delivered. The reconnection
      error handling path then frees the queues directly without cancelling the
      suspended requests. A suspended request eventually times out, and the host
      crashes because it uses the queue after it has been freed.
      
      Add queue synchronization and cancel the suspended requests in the
      reconnection error handling path.
      Signed-off-by: Chao Leng <lengchao@huawei.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
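      
      A minimal sketch of the added cleanup, assuming era-appropriate nvme core
      helpers (nvme_stop_queues, nvme_sync_io_queues, nvme_cancel_request,
      blk_mq_tagset_busy_iter) and an illustrative driver-local helper name;
      this illustrates the described fix rather than reproducing the patch:
      
        /* drivers/nvme/host/rdma.c (illustrative) */
        static void nvme_rdma_cleanup_failed_reconnect(struct nvme_rdma_ctrl *ctrl)
        {
                nvme_stop_queues(&ctrl->ctrl);          /* quiesce the I/O queues again */
                nvme_sync_io_queues(&ctrl->ctrl);       /* wait for running timeout handlers */
                nvme_rdma_stop_io_queues(ctrl);         /* drain the rdma queues */
                blk_mq_tagset_busy_iter(ctrl->ctrl.tagset,
                                        nvme_cancel_request, &ctrl->ctrl);
        }
      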
  2. 25 Jan, 2021 1 commit
  3. 18 Jan, 2021 1 commit
    • nvme-rdma: avoid request double completion for concurrent nvme_rdma_timeout · 7674073b
      Chao Leng authored
      
      A crash happens when request completion is delayed by fault injection to
      nearly 30 seconds. Each namespace has its own request queue, so when
      completions are delayed this long, multiple request queues may have
      timed-out requests at the same time and nvme_rdma_timeout executes
      concurrently. Requests from different request queues may be mapped to the
      same rdma queue, so multiple nvme_rdma_timeout instances may call
      nvme_rdma_stop_queue at the same time.
      The first nvme_rdma_timeout clears NVME_RDMA_Q_LIVE and goes on to stop
      the rdma queue (drain the qp), while the others see that NVME_RDMA_Q_LIVE
      is already cleared and complete their requests directly. Completing a
      request before the qp is fully drained can lead to a use-after-free.
      
      Add a mutex to serialize nvme_rdma_stop_queue.
      Signed-off-by: Chao Leng <lengchao@huawei.com>
      Tested-by: Israel Rukshin <israelr@nvidia.com>
      Reviewed-by: Israel Rukshin <israelr@nvidia.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
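      
      A sketch of the serialization, assuming a per-queue mutex (called
      queue_lock here) initialized when the queue is allocated; names follow the
      style of drivers/nvme/host/rdma.c but are illustrative:
      
        static void nvme_rdma_stop_queue(struct nvme_rdma_queue *queue)
        {
                mutex_lock(&queue->queue_lock);
                /* Only the context that clears the bit drains the qp; any
                 * concurrent timeout handler waits here until the drain is done. */
                if (test_and_clear_bit(NVME_RDMA_Q_LIVE, &queue->flags))
                        __nvme_rdma_stop_queue(queue);
                mutex_unlock(&queue->queue_lock);
        }
      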
  4. 01 Dec, 2020 1 commit
  5. 12 Nov, 2020 1 commit
  6. 03 Nov, 2020 2 commits
  7. 28 Oct, 2020 1 commit
  8. 27 Oct, 2020 1 commit
  9. 22 Oct, 2020 2 commits
    • nvme-rdma: fix crash due to incorrect cqe · a87da50f
      Chao Leng authored
      
      A crash happened during error-injection testing.
      When a CQE carries an incorrect command id because of error injection, the
      host may look up a request that has already been freed.  Dereferencing
      req->mr->rkey then crashes in nvme_rdma_process_nvme_rsp because the mr
      has already been freed.
      
      Add a check for the mr to fix it.
      Signed-off-by: Chao Leng <lengchao@huawei.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
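      
      A sketch of the defensive check, written as a fragment in the shape of the
      response handler (wc, req and queue come from nvme_rdma_process_nvme_rsp);
      the exact error-handling details are an assumption:
      
        if (wc->wc_flags & IB_WC_WITH_INVALIDATE) {
                if (unlikely(!req->mr ||
                             wc->ex.invalidate_rkey != req->mr->rkey)) {
                        /* stale or bogus command id: the MR is gone or does not
                         * match, so do not touch it -- kick error recovery */
                        nvme_rdma_error_recovery(queue->ctrl);
                }
        } else if (req->mr) {
                /* only start local MR invalidation if the request still owns one */
                nvme_rdma_inv_rkey(queue, req);
        }
      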
    • nvme-rdma: fix crash when connect rejected · 43efdb8e
      Chao Leng authored
      
      A crash can happen when a connect is rejected.  The host establishes the
      connection after receiving the ConnectReply and then continues by sending
      the fabrics Connect command.  If the controller does not receive the
      ReadyToUse capsule, the host may receive a ConnectReject reply.
      
      In that case the host calls nvme_rdma_destroy_queue_ib while handling the
      RDMA_CM_EVENT_REJECTED event.  When the fabrics Connect command later
      times out, nvme_rdma_timeout calls nvme_rdma_complete_rq to fail the
      request, and the host crashes due to a use-after-free in
      nvme_rdma_complete_rq.
      
      Calling nvme_rdma_destroy_queue_ib while handling RDMA_CM_EVENT_REJECTED
      is redundant anyway, because nvme_rdma_destroy_queue_ib is already called
      in the connection failure handler, so drop the call from the rejected
      path.
      Signed-off-by: Chao Leng <lengchao@huawei.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
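      
      A sketch of the rejected-event handling after the change: record the
      error and let the connection failure path tear down the IB queue exactly
      once.  Helper and field names (nvme_rdma_conn_rejected, cm_error, cm_done)
      mirror the driver of that era and should be treated as assumptions:
      
        static int nvme_rdma_cm_handler(struct rdma_cm_id *cm_id,
                                        struct rdma_cm_event *ev)
        {
                struct nvme_rdma_queue *queue = cm_id->context;
                int cm_error = 0;
      
                switch (ev->event) {
                case RDMA_CM_EVENT_REJECTED:
                        /* no nvme_rdma_destroy_queue_ib() here any more */
                        cm_error = nvme_rdma_conn_rejected(queue, ev);
                        break;
                case RDMA_CM_EVENT_ROUTE_ERROR:
                case RDMA_CM_EVENT_CONNECT_ERROR:
                case RDMA_CM_EVENT_UNREACHABLE:
                        cm_error = -ECONNRESET;
                        break;
                default:
                        break;
                }
      
                if (cm_error) {
                        /* wake the connect path, which owns the teardown */
                        queue->cm_error = cm_error;
                        complete(&queue->cm_done);
                }
                return 0;
        }
      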
  10. 08 Sep, 2020 1 commit
  11. 28 Aug, 2020 3 commits
    • nvme-rdma: fix reset hang if controller died in the middle of a reset · 2362acb6
      Sagi Grimberg authored
      
      If the controller becomes unresponsive in the middle of a reset, we hang
      because we are waiting for the freeze to complete, but it never can:
      inflight commands are still holding the q_usage_counter, and we cannot
      blindly fail requests that time out.
      
      So bound the wait with a timeout: if the queue freeze does not complete in
      time, fail the reset and let the error handling decide how to proceed
      (either schedule a reconnect or remove the controller).
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
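      
      A sketch of the bounded wait, as an abbreviated fragment from the I/O
      queue (re)configuration path, assuming nvme_wait_freeze_timeout() reports
      whether the freeze completed; the surrounding flow is illustrative:
      
        if (!new) {
                nvme_start_queues(&ctrl->ctrl);
                if (!nvme_wait_freeze_timeout(&ctrl->ctrl, NVME_IO_TIMEOUT)) {
                        /*
                         * The controller stopped responding mid-reset and the
                         * freeze cannot complete; fail the reset and let the
                         * error handling reconnect or remove the controller.
                         */
                        ret = -ENODEV;
                        goto destroy_io;
                }
                blk_mq_update_nr_hw_queues(ctrl->ctrl.tagset,
                                           ctrl->ctrl.queue_count - 1);
                nvme_unfreeze(&ctrl->ctrl);
        }
      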
    • nvme-rdma: fix timeout handler · 0475a8dc
      Sagi Grimberg authored
      
      When a request times out in a LIVE state, we simply trigger error recovery
      and let it handle the request cancellation.  However, when a request times
      out in a non-LIVE state, we make sure to complete it immediately, as it
      might be blocking controller setup or teardown and preventing forward
      progress.
      
      Tearing down the entire set of I/O and admin queues for that, though,
      causes a freeze/unfreeze imbalance (q->mq_freeze_depth) and is overkill
      for what we actually need: fence any controller teardown that may be
      running, stop the queue, and cancel the request if it has not already
      completed.
      
      Now that we have the controller teardown_lock, we can safely serialize
      request cancellation.  This addresses a hang caused by taking an extra
      queue freeze on the controller namespaces, which prevented unfreeze from
      completing correctly.
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: James Smart <james.smart@broadcom.com>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
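      
      A sketch of the narrower timeout handling described above, assuming the
      per-controller teardown_lock introduced by the teardown-serialization
      patch below and the block layer helpers blk_mq_request_completed() and
      blk_mq_complete_request(); an illustration, not the literal patch:
      
        static void nvme_rdma_complete_timed_out(struct request *rq)
        {
                struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
                struct nvme_rdma_queue *queue = req->queue;
                struct nvme_rdma_ctrl *ctrl = queue->ctrl;
      
                /* fence controller teardown that may be running concurrently */
                mutex_lock(&ctrl->teardown_lock);
                nvme_rdma_stop_queue(queue);
                if (!blk_mq_request_completed(rq)) {
                        nvme_req(rq)->status = NVME_SC_HOST_ABORTED_CMD;
                        blk_mq_complete_request(rq);
                }
                mutex_unlock(&ctrl->teardown_lock);
        }
      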
    • nvme-rdma: serialize controller teardown sequences · 5110f402
      Sagi Grimberg authored
      
      In the timeout handler we may need to complete a request because the
      request that timed out may be an I/O that is a part of a serial sequence
      of controller teardown or initialization. In order to complete the
      request, we need to fence any other context that may compete with us
      and complete the request that is timing out.
      
      Without such fencing, we could get a double completion if a hard-irq or a
      different competing context triggers error recovery and runs inflight
      request cancellation concurrently with the timeout handler.
      
      Protect using a ctrl teardown_lock to serialize contexts that may
      complete a cancelled request due to error recovery or a reset.
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: James Smart <james.smart@broadcom.com>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
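      
      A sketch of where the lock sits, assuming a mutex member named
      teardown_lock in the controller structure, initialized at controller
      allocation; helper names mirror the driver but are illustrative:
      
        static void nvme_rdma_teardown_io_queues(struct nvme_rdma_ctrl *ctrl,
                                                 bool remove)
        {
                mutex_lock(&ctrl->teardown_lock);
                if (ctrl->ctrl.queue_count > 1) {
                        nvme_stop_queues(&ctrl->ctrl);
                        nvme_rdma_stop_io_queues(ctrl);
                        blk_mq_tagset_busy_iter(ctrl->ctrl.tagset,
                                                nvme_cancel_request, &ctrl->ctrl);
                        if (remove)
                                nvme_start_queues(&ctrl->ctrl);
                        nvme_rdma_destroy_io_queues(ctrl, remove);
                }
                mutex_unlock(&ctrl->teardown_lock);
        }
      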
  12. 23 Aug, 2020 1 commit
  13. 21 Aug, 2020 1 commit
  14. 29 Jul, 2020 3 commits
    • nvme-rdma: fix controller reset hang during traffic · 9f98772b
      Sagi Grimberg authored
      
      commit fe35ec58 ("block: update hctx map when use multiple maps")
      exposed an issue where we may hang waiting for a queue freeze during I/O.
      We call blk_mq_update_nr_hw_queues, which in the case of multiple queue
      maps (which we now have for default/read/poll) attempts to freeze the
      queue. However, we never started a queue freeze when starting the reset,
      which means there are inflight requests that entered the queue that we
      will never complete once the queue is quiesced.
      
      So start a freeze before we quiesce the queue, and unfreeze the queue
      after we have successfully connected the I/O queues (and make sure to call
      blk_mq_update_nr_hw_queues only after we are sure that the queue was
      already frozen).
      
      This follows how the pci driver handles resets.
      
      Fixes: fe35ec58 ("block: update hctx map when use multiple maps")
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
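      
      A sketch of the ordering described above, using the nvme core helpers
      nvme_start_freeze, nvme_wait_freeze, nvme_unfreeze and the block layer's
      blk_mq_update_nr_hw_queues; the function bodies are abbreviated and
      illustrative:
      
        /* teardown side of a reset: freeze first, then quiesce */
        static void nvme_rdma_teardown_for_reset(struct nvme_rdma_ctrl *ctrl)
        {
                nvme_start_freeze(&ctrl->ctrl);   /* block new requests from entering */
                nvme_stop_queues(&ctrl->ctrl);    /* quiesce */
                /* ... stop rdma queues, cancel inflight requests ... */
        }
      
        /* setup side: only resize the hctx maps once the freeze has completed */
        static void nvme_rdma_finish_reset(struct nvme_rdma_ctrl *ctrl)
        {
                /* ... connect the I/O queues ... */
                nvme_start_queues(&ctrl->ctrl);
                nvme_wait_freeze(&ctrl->ctrl);
                blk_mq_update_nr_hw_queues(ctrl->ctrl.tagset,
                                           ctrl->ctrl.queue_count - 1);
                nvme_unfreeze(&ctrl->ctrl);
        }
      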
    • nvme: fix deadlock in disconnect during scan_work and/or ana_work · ecca390e
      Sagi Grimberg authored
      
      A deadlock happens in the following scenario with multipath:
      1) scan_work(nvme0) detects a new nsid while nvme0
          is an optimized path to it; path nvme1 happens to be
          inaccessible.
      
      2) Before scan_work completes, an nvme0 disconnect is initiated;
          nvme_delete_ctrl_sync() sets the nvme0 state to NVME_CTRL_DELETING.
      
      3) scan_work attempts to submit IO,
          but nvme_path_is_optimized() observes that nvme0 is not LIVE.
          Since nvme1 is still a possible path, the IO is requeued and
          scan_work hangs.
      
      --
      Workqueue: nvme-wq nvme_scan_work [nvme_core]
      kernel: Call Trace:
      kernel:  __schedule+0x2b9/0x6c0
      kernel:  schedule+0x42/0xb0
      kernel:  io_schedule+0x16/0x40
      kernel:  do_read_cache_page+0x438/0x830
      kernel:  read_cache_page+0x12/0x20
      kernel:  read_dev_sector+0x27/0xc0
      kernel:  read_lba+0xc1/0x220
      kernel:  efi_partition+0x1e6/0x708
      kernel:  check_partition+0x154/0x244
      kernel:  rescan_partitions+0xae/0x280
      kernel:  __blkdev_get+0x40f/0x560
      kernel:  blkdev_get+0...
    • nvme-rdma: use new shared CQ mechanism · 287f329e
      Yamin Friedman authored
      
      Have the driver use shared CQs, providing the ~10-20% improvement seen in
      the patch that introduced shared CQs. Instead of opening a dedicated CQ
      for each QP of each connected controller, the RDMA core provides each QP
      with a CQ that is shared between the QPs on that core, reducing interrupt
      overhead.
      Signed-off-by: Yamin Friedman <yaminf@mellanox.com>
      Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
      Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
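      
      A sketch of the shared-CQ usage, built on the RDMA core API
      ib_cq_pool_get()/ib_cq_pool_put(); the queue fields and the polling-queue
      special case are illustrative of the driver of that era:
      
        static int nvme_rdma_create_cq(struct ib_device *ibdev,
                                       struct nvme_rdma_queue *queue)
        {
                int idx = nvme_rdma_queue_idx(queue);
                int comp_vector = idx == 0 ? idx : idx - 1;  /* hint only */
      
                if (nvme_rdma_poll_queue(queue)) {
                        /* polling queues still need their own IB_POLL_DIRECT CQ */
                        queue->ib_cq = ib_alloc_cq(ibdev, queue, queue->cq_size,
                                                   comp_vector, IB_POLL_DIRECT);
                } else {
                        /* take a per-core shared CQ from the device's CQ pool */
                        queue->ib_cq = ib_cq_pool_get(ibdev, queue->cq_size,
                                                      comp_vector, IB_POLL_SOFTIRQ);
                }
                if (IS_ERR(queue->ib_cq))
                        return PTR_ERR(queue->ib_cq);
                return 0;
        }
      
        static void nvme_rdma_free_cq(struct nvme_rdma_queue *queue)
        {
                if (nvme_rdma_poll_queue(queue))
                        ib_free_cq(queue->ib_cq);
                else
                        ib_cq_pool_put(queue->ib_cq, queue->cq_size);
        }
      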
  15. 24 Jun, 2020 4 commits
  16. 27 May, 2020 2 commits
  17. 31 Mar, 2020 1 commit
  18. 25 Mar, 2020 3 commits
  19. 17 Mar, 2020 1 commit
  20. 10 Mar, 2020 1 commit
  21. 14 Feb, 2020 1 commit
    • nvme: prevent warning triggered by nvme_stop_keep_alive · 97b2512a
      Nigel Kirkland authored
      
      Delayed keep alive work is queued on system workqueue and may be cancelled
      via nvme_stop_keep_alive from nvme_reset_wq, nvme_fc_wq or nvme_wq.
      
      check_flush_dependency() detects mismatched attributes between the
      workqueue context used to cancel the keep alive work and system-wq.
      Specifically, system-wq does not have the WQ_MEM_RECLAIM flag, whereas
      the contexts used to cancel the keep alive work do have it.
      
      Example warning:
      
        workqueue: WQ_MEM_RECLAIM nvme-reset-wq:nvme_fc_reset_ctrl_work [nvme_fc]
      	is flushing !WQ_MEM_RECLAIM events:nvme_keep_alive_work [nvme_core]
      
      To avoid the flags mismatch, delayed keep alive work is queued on nvme_wq.
      
      However this creates a secondary concern where work and a request to cancel
      that work may be in the same work queue - namely err_work in the rdma and
      tcp transports, which will want to flush/cancel the keep alive work which
      will now be on nvme_wq.
      
      After reviewing the transports, it looks like err_work can be moved to
      nvme_reset_wq. In fact, that aligns it better with the transition into
      RESETTING and with performing the related reset work on nvme_reset_wq.
      
      Change nvme-rdma and nvme-tcp to perform err_work in nvme_reset_wq.
      Signed-off-by: Nigel Kirkland <nigel.kirkland@broadcom.com>
      Signed-off-by: James Smart <jsmart2021@gmail.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Keith Busch <kbusch@kernel.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
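      
      A sketch of the two queueing changes described above; the workqueue
      pointers nvme_wq and nvme_reset_wq exist in the nvme core, while the
      surrounding function bodies are illustrative:
      
        /* core: keep alive now runs on nvme_wq (WQ_MEM_RECLAIM) instead of system-wq */
        static void nvme_queue_keep_alive_work(struct nvme_ctrl *ctrl)
        {
                queue_delayed_work(nvme_wq, &ctrl->ka_work, ctrl->kato * HZ);
        }
      
        /* rdma/tcp transports: error handling moves to nvme_reset_wq, which may
         * flush/cancel the keep alive work without a WQ_MEM_RECLAIM mismatch */
        static void nvme_rdma_error_recovery(struct nvme_rdma_ctrl *ctrl)
        {
                if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_RESETTING))
                        return;
                queue_work(nvme_reset_wq, &ctrl->err_work);
        }
      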
  22. 26 Nov, 2019 1 commit
    • nvme-rdma: Avoid preallocating big SGL for data · 38e18002
      Israel Rukshin authored
      
      nvme_rdma_alloc_tagset() preallocates a big buffer for the IO SGL based
      on SG_CHUNK_SIZE.
      
      Modern DMA engines are often capable of dealing with very big segments,
      so SG_CHUNK_SIZE is often larger than necessary; it results in a static
      4KB SGL allocation per command.
      
      If a controller has lots of deep queues, preallocation for the sg list can
      consume substantial amounts of memory. For nvme-rdma, nr_hw_queues can be
      128 and each queue's depth 128. This means the resulting preallocation
      for the data SGL is 128*128*4K = 64MB per controller.
      
      Switch to runtime allocation for SGL for lists longer than 2 entries. This
      is the approach used by NVMe PCI so it should be reasonable for NVMeOF as
      well. Runtime SGL allocation has always been the case for the legacy I/O
      path so this is nothing new.
      
      The preallocated small SGL depends on SG_CHAIN so if the ARCH doesn't
      support SG_CHAIN, use only runtime allocation for the SGL.
      
      We did not notice a performance degradation, since for small IOs we use
      the inline SG and for bigger IOs the allocation of a larger SGL from the
      slab is fast enough.
      Suggested-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Israel Rukshin <israelr@mellanox.com>
      Signed-off-by: Keith Busch <kbusch@kernel.org>
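      
      A sketch of the runtime SGL allocation, assuming a small preallocated
      first chunk (two entries here) embedded right after the request pdu; the
      sg_alloc_table_chained()/sg_free_table_chained() calls are the scatterlist
      API, the surrounding names are illustrative:
      
        #define NVME_INLINE_SG_CNT  2   /* small inline SGL kept in the pdu */
      
        static int nvme_rdma_alloc_data_sgl(struct nvme_rdma_request *req,
                                            struct request *rq)
        {
                /* the inline SGL sits right behind the rdma request pdu */
                req->sg_table.sgl = req->first_sgl;
                return sg_alloc_table_chained(&req->sg_table,
                                blk_rq_nr_phys_segments(rq),
                                req->sg_table.sgl, NVME_INLINE_SG_CNT);
        }
      
        static void nvme_rdma_free_data_sgl(struct nvme_rdma_request *req)
        {
                sg_free_table_chained(&req->sg_table, NVME_INLINE_SG_CNT);
        }
      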
  23. 05 Nov, 2019 1 commit
  24. 04 Nov, 2019 2 commits
  25. 14 Oct, 2019 1 commit
  26. 27 Sep, 2019 1 commit
  27. 25 Sep, 2019 1 commit
    • nvme-rdma: Fix max_hw_sectors calculation · ff13c1b8
      Max Gurtovoy authored
      
      By default, the NVMe/RDMA driver should support a max io_size of 1MiB (or
      up to the maximum size supported by the HCA). Currently, one will see that
      /sys/class/block/<bdev>/queue/max_hw_sectors_kb is 1020 instead of 1024.
      
      A non-power-of-2 value can cause performance degradation due to
      unnecessary splitting of IO requests and unoptimized allocation units.
      
      The number of pages per MR has been fixed here, so there is no longer any
      need to reduce max_sectors by 1.
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
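      
      The arithmetic behind the change, shown as an illustrative fragment with
      the page count per MR already capped correctly (assuming a 4KiB page and
      512-byte sectors); names mirror drivers/nvme/host/rdma.c:
      
        ctrl->ctrl.max_segments = ctrl->max_fr_pages;
        ctrl->ctrl.max_hw_sectors = ctrl->max_fr_pages << (ilog2(SZ_4K) - 9);
        /* e.g. 256 pages: 256 << (12 - 9) = 2048 sectors = 1024 KiB,
         * instead of (256 - 1) * 8 = 2040 sectors = 1020 KiB */
      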