1. 09 Aug, 2023 2 commits
  2. 08 Aug, 2023 1 commit
  3. 05 Aug, 2023 1 commit
  4. 04 Aug, 2023 1 commit
  5. 01 Aug, 2023 1 commit
  6. 27 Jul, 2023 3 commits
  7. 25 Jul, 2023 1 commit
  8. 24 Jul, 2023 4 commits
  9. 21 Jul, 2023 6 commits
    • loop: do not enforce max_loop hard limit by (new) default · bb5faa99
      Mauricio Faria de Oliveira authored
      Problem:
      
      The max_loop parameter is used for 2 different purposes:
      
      1) initial number of loop devices to pre-create on init
      2) maximum number of loop devices to add on access/open()
      
      Historically, its default value (zero) caused 1) to create a non-zero
      number of devices (CONFIG_BLK_DEV_LOOP_MIN_COUNT), and put no hard limit
      on 2), i.e. devices could still be added with autoloading.
      
      However, the default value changed in commit 85c50197 ("loop: Fix
      the max_loop commandline argument treatment when it is set to 0") to
      CONFIG_BLK_DEV_LOOP_MIN_COUNT, for max_loop=0 not to pre-create devices.
      
      That does improve 1), but unfortunately it breaks 2), as the default
      behavior changed from no-limit to hard-limit.
      
      Example:
      
      This userspace code broke for N >= CONFIG if the user relied on the
      default value (0) for max_loop:
      
          mknod("/dev/loopN");
          open("/dev/loopN");  // now fails with ENXIO
      
      Though affected users may "fix" it with (loop.)max_loop=0, this would
      require a kernel parameter change on a stable kernel update (that commit
      Fixes: an old commit present in stable).
      
      Solution:
      
      The original semantics for the default value in 2) can be applied if the
      parameter is not set (i.e., default behavior).
      
      This still keeps the intended function in 1) and 2) if set, and that
      commit's intended improvement in 1) if max_loop=0.
      
      Before 85c50197:
        - default:     1) CONFIG devices   2) no limit
        - max_loop=0:  1) CONFIG devices   2) no limit
        - max_loop=X:  1) X devices        2) X limit
      
      After 85c50197:
        - default:     1) CONFIG devices   2) CONFIG limit (*)
        - max_loop=0:  1) 0 devices (*)    2) no limit
        - max_loop=X:  1) X devices        2) X limit
      
      This commit:
        - default:     1) CONFIG devices   2) no limit (*)
        - max_loop=0:  1) 0 devices        2) no limit
        - max_loop=X:  1) X devices        2) X limit
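
      A minimal sketch of the mechanism (illustrative; close to, but not
      necessarily, the exact upstream code):
      
      	/* remember whether max_loop was set on the command line at all;
      	 * the hard limit in 2) is enforced only when it was */
      	static bool max_loop_specified;
      
      	static int max_loop_param_set_int(const char *val,
      					  const struct kernel_param *kp)
      	{
      		int ret = param_set_int(val, kp);
      
      		if (!ret)
      			max_loop_specified = true;
      		return ret;
      	}
      
      The autoload path can then skip the hard-limit check whenever
      max_loop_specified is false.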
      
      Future:
      
      The issue/regression from that commit only affects code under the
      CONFIG_BLOCK_LEGACY_AUTOLOAD deprecation guard, thus the fix too is
      contained under it.
      
      Once that deprecated functionality/code is removed, the purpose 2) of
      max_loop (hard limit) is no longer in use, so the module parameter
      description can be changed then.
      
      Tests:
      
      Linux 6.4-rc7
      CONFIG_BLK_DEV_LOOP_MIN_COUNT=8
      CONFIG_BLOCK_LEGACY_AUTOLOAD=y
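
      The ./test-loop helper below is not part of the commit message; a
      hypothetical minimal version, following the mknod()/open() pseudocode
      from the Example above, might look like this:
      
      	/* test-loop.c -- hypothetical reproducer sketch, using N = 8 */
      	#include <errno.h>
      	#include <fcntl.h>
      	#include <stdio.h>
      	#include <sys/stat.h>
      	#include <sys/sysmacros.h>
      	#include <unistd.h>
      
      	int main(void)
      	{
      		const char *path = "/dev/loop8";
      		int fd;
      
      		/* loop devices use block major 7; the minor is the index */
      		if (mknod(path, S_IFBLK | 0660, makedev(7, 8)) < 0 &&
      		    errno != EEXIST) {
      			perror("mknod");
      			return 1;
      		}
      
      		fd = open(path, O_RDWR);
      		if (fd < 0) {
      			/* e.g. "open: /dev/loop8: No such device or address" */
      			printf("open: %s: %m\n", path);
      			return 1;
      		}
      		close(fd);
      		return 0;
      	}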
      
      - default (original)
      
      	# ls -1 /dev/loop*
      	/dev/loop-control
      	/dev/loop0
      	...
      	/dev/loop7
      
      	# ./test-loop
      	open: /dev/loop8: No such device or address
      
      - default (patched)
      
      	# ls -1 /dev/loop*
      	/dev/loop-control
      	/dev/loop0
      	...
      	/dev/loop7
      
      	# ./test-loop
      	#
      
      - max_loop=0 (original & patched):
      
      	# ls -1 /dev/loop*
      	/dev/loop-control
      
      	# ./test-loop
      	#
      
      - max_loop=8 (original & patched):
      
      	# ls -1 /dev/loop*
      	/dev/loop-control
      	/dev/loop0
      	...
      	/dev/loop7
      
      	# ./test-loop
      	open: /dev/loop8: No such device or address
      
      - max_loop=0 (patched; CONFIG_BLOCK_LEGACY_AUTOLOAD is not set)
      
      	# ls -1 /dev/loop*
      	/dev/loop-control
      
      	# ./test-loop
      	open: /dev/loop8: No such device or address
      
      Fixes: 85c50197 ("loop: Fix the max_loop commandline argument treatment when it is set to 0")
      Signed-off-by: Mauricio Faria de Oliveira <mfo@canonical.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Link: https://lore.kernel.org/r/20230720143033.841001-3-mfo@canonical.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • loop: deprecate autoloading callback loop_probe() · 23881aec
      Mauricio Faria de Oliveira authored
      The 'probe' callback in __register_blkdev() is only used under the
      CONFIG_BLOCK_LEGACY_AUTOLOAD deprecation guard.
      
      The loop_probe() function is only used for that callback, so guard it
      too, accordingly.
      
      See commit fbdee71b ("block: deprecate autoloading based on dev_t").
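
      A minimal sketch of the resulting guard (illustrative, not the exact
      upstream diff):
      
      	#ifdef CONFIG_BLOCK_LEGACY_AUTOLOAD
      	static void loop_probe(dev_t dev)
      	{
      		/* add a loop device for the accessed minor, if allowed */
      	}
      	#endif
      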
      Signed-off-by: Mauricio Faria de Oliveira <mfo@canonical.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Link: https://lore.kernel.org/r/20230720143033.841001-2-mfo@canonical.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • sbitmap: fix batching wakeup · 10639737
      David Jeffery authored
      Current code assumes that it is enough to provide forward progress by
      just waking up one wait queue after one completion batch is done.
      
      Unfortunately this isn't enough, because a waiter can be added to the
      wait queue just after it is woken up.
      
      Here is one example (depth of 64 tags, wake_batch of 8):
      
      1) all 64 tags are active
      
      2) in each wait queue, there is only one single waiter
      
      3) each time one completion batch (8 completions) wakes up just one
         waiter in each wait queue, then immediately one new sleeper is added
         to this wait queue
      
      4) after 64 completions, 8 waiters are woken up, and there are still 8
         waiters in each wait queue
      
      5) after another 8 active tags are completed, only one waiter can be
         woken up, and the other 7 can't be woken up anymore
      
      It turns out this isn't easy to fix properly, so simply wake up enough
      waiters for a single batch.
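
      A simplified sketch of the idea (illustrative only, not the exact
      upstream diff; it relies on the kernel's wake_up_nr() helper and the
      sbitmap_queue wake_batch field):
      
      	/* wake a whole batch worth of waiters instead of a single one, so
      	 * a sleeper that races onto the wait queue right after the wakeup
      	 * cannot absorb the only available wakeup */
      	static void sbq_wake_one_batch(struct sbitmap_queue *sbq,
      				       struct sbq_wait_state *ws)
      	{
      		wake_up_nr(&ws->wait, READ_ONCE(sbq->wake_batch));
      	}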
      
      Cc: Kemeng Shi <shikemeng@huaweicloud.com>
      Cc: Chengming Zhou <zhouchengming@bytedance.com>
      Cc: Jan Kara <jack@suse.cz>
      Signed-off-by: David Jeffery <djeffery@redhat.com>
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Reviewed-by: Gabriel Krisman Bertazi <krisman@suse.de>
      Reviewed-by: Keith Busch <kbusch@kernel.org>
      Link: https://lore.kernel.org/r/20230721095715.232728-1-ming.lei@redhat.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • nvme-rdma: fix potential unbalanced freeze & unfreeze · 29b434d1
      Ming Lei authored
      Move start_freeze into nvme_rdma_configure_io_queues(); there are at
      least two benefits:
      
      1) it fixes the unbalanced freeze and unfreeze, since re-connection work
      may fail or be broken by removal
      
      2) IO during error recovery can be failed fast because nvme fabrics
      unquiesces queues after teardown
      
      One side effect is that a !mpath request may time out during connecting
      because of the queue topology change, but that does not look like a big
      deal:
      
      1) the same problem exists with the current code base
      
      2) compared with !mpath, the mpath use case is dominant
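
      A simplified sketch of the resulting pairing (illustrative only, not the
      exact upstream diff):
      
      	/* in nvme_rdma_configure_io_queues(): start the freeze only once
      	 * I/O queue (re)configuration actually runs, so every
      	 * nvme_start_freeze() is reached together with its matching
      	 * nvme_unfreeze(), even if a reconnect fails or removal races in */
      	if (!new) {
      		nvme_start_freeze(&ctrl->ctrl);	/* moved here from teardown */
      		...
      		nvme_unfreeze(&ctrl->ctrl);
      	}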
      
      Fixes: 9f98772b ("nvme-rdma: fix controller reset hang during traffic")
      Cc: stable@vger.kernel.org
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Tested-by: Yi Zhang <yi.zhang@redhat.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Keith Busch <kbusch@kernel.org>
    • nvme-tcp: fix potential unbalanced freeze & unfreeze · 99dc2640
      Ming Lei authored
      Move start_freeze into nvme_tcp_configure_io_queues(); there are at
      least two benefits:
      
      1) it fixes the unbalanced freeze and unfreeze, since re-connection work
      may fail or be broken by removal
      
      2) IO during error recovery can be failed fast because nvme fabrics
      unquiesces queues after teardown
      
      One side effect is that a !mpath request may time out during connecting
      because of the queue topology change, but that does not look like a big
      deal:
      
      1) the same problem exists with the current code base
      
      2) compared with !mpath, the mpath use case is dominant
      
      Fixes: 2875b0ae ("nvme-tcp: fix controller reset hang during traffic")
      Cc: stable@vger.kernel.org
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Tested-by: Yi Zhang <yi.zhang@redhat.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Keith Busch <kbusch@kernel.org>
    • nvme: fix possible hang when removing a controller during error recovery · 1b95e817
      Ming Lei authored
      Error recovery can be interrupted by controller removal, leaving the
      controller quiesced and causing IO to hang.
      
      Fix the issue by unquiescing the controller unconditionally when removing
      namespaces.
      
      This is reasonable and safe given that forward progress can be made when
      removing namespaces.
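
      A simplified sketch of the intent (illustrative only, not the exact
      upstream diff):
      
      	/* in nvme_remove_namespaces(): even if error recovery was
      	 * interrupted by removal and left the queues quiesced, unquiesce
      	 * them unconditionally so outstanding I/O can complete or fail and
      	 * namespace removal can make forward progress */
      	nvme_unquiesce_io_queues(ctrl);
      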
      Reviewed-by: Keith Busch <kbusch@kernel.org>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Reported-by: Chunguang Xu <brookxu.cn@gmail.com>
      Closes: https://lore.kernel.org/linux-nvme/cover.1685350577.git.chunguang.xu@shopee.com/
      Cc: stable@vger.kernel.org
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Keith Busch <kbusch@kernel.org>
  10. 20 Jul, 2023 2 commits
  11. 14 Jul, 2023 2 commits
  12. 13 Jul, 2023 4 commits
    • Merge tag 'nvme-6.5-2023-07-13' of git://git.infradead.org/nvme into block-6.5 · 90b46229
      Jens Axboe authored
      Pull NVMe fixes from Keith:
      
      "nvme fixes for Linux 6.5
      
       - Don't require quirk to use duplicate namespace identifiers
         (Christoph, Sagi)
       - One more BOGUS_NID quirk (Pankaj)
       - IO timeout and error handling fixes for PCI (Keith)
       - Enhanced metadata format mask fix (Ankit)
       - Association race condition fix for fibre channel (Michael)
       - Correct debugfs error checks (Minjie)
       - Use PAGE_SECTORS_SHIFT where needed (Damien)
       - Reduce kernel logs for legacy nguid attribute (Keith)
       - Use correct dma direction when unmapping metadata (Ming)"
      
      * tag 'nvme-6.5-2023-07-13' of git://git.infradead.org/nvme:
        nvme-pci: fix DMA direction of unmapping integrity data
        nvme: don't reject probe due to duplicate IDs for single-ported PCIe devices
        nvme: ensure disabling pairs with unquiesce
        nvme-fc: fix race between error recovery and creating association
        nvme-fc: return non-zero status code when fails to create association
        nvme: fix parameter check in nvme_fault_inject_init()
        nvme: warn only once for legacy uuid attribute
        nvme: fix the NVME_ID_NS_NVM_STS_MASK definition
        nvmet: use PAGE_SECTORS_SHIFT
        nvme: add BOGUS_NID quirk for Samsung SM953
    • blk-mq: fix start_time_ns and alloc_time_ns for pre-allocated rq · 5c17f45e
      Chengming Zhou authored
      The iocost controller relies on rq start_time_ns and alloc_time_ns to
      tell the saturation state of the block device. Most of the time the
      request is allocated after rq_qos_throttle() and its alloc_time_ns or
      start_time_ns won't be affected.
      
      But for the plug batched allocation introduced by commit 47c122e3
      ("block: pre-allocate requests if plug is started and is a batch"), we
      can call rq_qos_throttle() after the allocation of the request. This is
      what blk_mq_get_cached_request() does.
      
      In this case, the cached request's alloc_time_ns or start_time_ns is far
      in the past if it blocked in any qos ->throttle().
      
      Fix it by setting alloc_time_ns and start_time_ns to now when the
      pre-allocated request is actually used.
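
      A simplified sketch of the idea (illustrative only; the helper name is
      hypothetical and the exact upstream code may differ):
      
      	/* refresh the timestamps when a plug-cached request is handed out,
      	 * so iocost measures from the time of actual use rather than from
      	 * batch pre-allocation */
      	static void blk_mq_rq_set_time(struct request *rq)
      	{
      		rq->start_time_ns = ktime_get_ns();
      	#ifdef CONFIG_BLK_RQ_ALLOC_TIME
      		rq->alloc_time_ns = rq->start_time_ns;
      	#endif
      	}
      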
      Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Link: https://lore.kernel.org/r/20230710105516.2053478-1-chengming.zhou@linux.dev
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • nvme-pci: fix DMA direction of unmapping integrity data · b8f6446b
      Ming Lei authored
      The correct DMA direction should be passed to dma_unmap_page() when
      unmapping integrity data.
      
      Fix the DMA direction; the issue was reported in Guangwu's testing.
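
      A simplified sketch of the fix (illustrative only, not the exact upstream
      diff):
      
      	/* unmap the integrity buffer with a proper dma_data_direction;
      	 * rq_dma_dir() translates the request direction correctly, while
      	 * the raw rq_data_dir() READ/WRITE value does not */
      	dma_unmap_page(dev->dev, iod->meta_dma,
      		       rq_integrity_vec(req)->bv_len, rq_dma_dir(req));
      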
      Reported-by: Guangwu Zhang <guazhang@redhat.com>
      Fixes: 4aedb705 ("nvme-pci: split metadata handling from nvme_map_data / nvme_unmap_data")
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Keith Busch <kbusch@kernel.org>
    • nvme: don't reject probe due to duplicate IDs for single-ported PCIe devices · ac522fc6
      Christoph Hellwig authored
      While duplicate IDs are still very harmful, including the potential to
      easily see changing devices in /dev/disk/by-id, it turns out they are
      extremely common for cheap end-user NVMe devices.
      
      Relax our check for them so that it doesn't reject the probe on
      single-ported PCIe devices, but prints a big warning instead.  In doubt
      we'd still like to see quirk entries to disable the potential for
      changing supposedly stable device identifier links, but this will at
      least allow users who have two (or more) of these devices to use them
      without having to manually add a new PCI ID entry with the quirk through
      sysfs or by patching the kernel.
      
      Fixes: 2079f41e ("nvme: check that EUI/GUID/UUID are globally unique")
      Cc: stable@vger.kernel.org # 6.0+
      Co-developed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Keith Busch <kbusch@kernel.org>
  13. 12 Jul, 2023 6 commits
  14. 10 Jul, 2023 4 commits
  15. 05 Jul, 2023 2 commits