1. 11 Aug, 2022 1 commit
    • dm bufio: fix some cases where the code sleeps with spinlock held · e3a7c294
      Mikulas Patocka authored
      Commit b32d4582 ("dm bufio: Add DM_BUFIO_CLIENT_NO_SLEEP flag")
      added a "NO_SLEEP" mode, it replaces a mutex with a spinlock, and it
      is only usable when the device is in read-only mode (because the write
      path may be sleeping while holding the dm_bufio_client lock).
      
      However, there are still two points where the code could sleep even in
      read-only mode. One is in __get_unclaimed_buffer -> __make_buffer_clean.
      The other is in __try_evict_buffer -> __make_buffer_clean. These functions
      will call __make_buffer_clean which sleeps if the buffer is being read.
      
      Fix these cases so that if c->no_sleep is set __make_buffer_clean
      will not be called and the buffer will be skipped instead.
      
      Fixes: b32d4582 ("dm bufio: Add DM_BUFIO_CLIENT_NO_SLEEP flag")
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@kernel.org>
  2. 09 Aug, 2022 4 commits
    • dm writecache: fix smatch warning about invalid return from writecache_map · b7f362d6
      Mikulas Patocka authored
      There's a smatch warning "inconsistent returns '&wc->lock'" in
      dm-writecache. The reason for the warning is that writecache_map()
      doesn't drop the lock on the impossible path.
      
      Fix this warning by adding wc_unlock() after the BUG statement; BUG()
      never returns, so the unlock is compiled away anyway, but it
      satisfies the static analyzer.
      
      Fixes: df699cc1 ("dm writecache: report invalid return from writecache_map helpers")
      Reported-by: kernel test robot <lkp@intel.com>
      Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@kernel.org>
    • dm verity: fix verity_parse_opt_args parsing · f876df9f
      Mike Snitzer authored
      Commit df326e7a ("dm verity: allow optional args to alter primary
      args handling") introduced a bug where verity_parse_opt_args()
      wouldn't properly shift past an optional argument's additional
      params (it simply ignored them).
      
      Fix this by not returning an error when an unknown argument is
      encountered while @only_modifier_opts=true is passed to
      verity_parse_opt_args().
      
      In practice this regressed the cryptsetup testsuite's FEC testing
      because unknown optional arguments were encountered, thereby
      short-circuiting FEC mode from ever being tested. With this fix all
      of the cryptsetup testsuite's verity FEC tests pass.
      
      Fixes: df326e7a ("dm verity: allow optional args to alter primary args handling")
      Reported-by: Milan Broz <gmazyland@gmail.com>
      Signed-off-by: Mike Snitzer <snitzer@kernel.org>
    • dm verity: fix DM_VERITY_OPTS_MAX value yet again · 8c22816d
      Mike Snitzer authored
      Must account for the possibility that "try_verify_in_tasklet" is used.
      
      This is the same issue that was fixed with commit 160f99db -- it
      is far too easy to miss that adding new argument(s) requires
      bumping DM_VERITY_OPTS_MAX accordingly.
      
      Fixes: 5721d4e5 ("dm verity: Add optional "try_verify_in_tasklet" feature")
      Signed-off-by: Mike Snitzer <snitzer@kernel.org>
    • dm bufio: simplify DM_BUFIO_CLIENT_NO_SLEEP locking · b33b6fdc
      Mike Snitzer authored
      Historically none of the bufio code ran in interrupt context, but
      with the use of DM_BUFIO_CLIENT_NO_SLEEP a bufio client now can; see
      commit 5721d4e5 ("dm verity: Add optional "try_verify_in_tasklet"
      feature"). That said, the new tasklet usecase still doesn't require
      that bufio disable interrupts (let alone conditionally restore them).
      
      Yet with PREEMPT_RT, and falling back from tasklet to workqueue, care
      must be taken to properly synchronize between softirq and process
      context, otherwise ABBA deadlock may occur. While it is unnecessary to
      disable bottom-half preemption within a tasklet, we must consistently do
      so in process context to ensure locking is in the proper order.
      
      Fix these issues by switching from spin_lock_irq{save,restore} to using
      spin_{lock,unlock}_bh instead. Also remove the 'spinlock_flags' member
      in dm_bufio_client struct (that can be used unsafely if bufio must
      recurse on behalf of some caller, e.g. block layer's submit_bio).
      
      Fixes: 5721d4e5 ("dm verity: Add optional "try_verify_in_tasklet" feature")
      Reported-by: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
  3. 04 Aug, 2022 8 commits
  4. 28 Jul, 2022 10 commits
    • dm bufio: Add DM_BUFIO_CLIENT_NO_SLEEP flag · b32d4582
      Nathan Huckleberry authored
      Add an optional flag that ensures dm_bufio_client does not sleep
      (primary focus is to service dm_bufio_get without sleeping). This
      allows the dm-bufio cache to be queried from interrupt context.
      
      To ensure that dm-bufio does not sleep, dm-bufio must use a spinlock
      instead of a mutex. Additionally, to avoid deadlocks, special care
      must be taken so that dm-bufio does not sleep while holding the
      spinlock.
      
      But again: the scope of this no_sleep is initially confined to
      dm_bufio_get, so __alloc_buffer_wait_no_callback is _not_ changed to
      avoid sleeping because __bufio_new avoids allocation for NF_GET.
      Signed-off-by: Nathan Huckleberry <nhuck@google.com>
      Signed-off-by: Mike Snitzer <snitzer@kernel.org>
    • dm bufio: Add flags argument to dm_bufio_client_create · 0fcb100d
      Nathan Huckleberry authored
      Add a flags argument to dm_bufio_client_create and update all the
      callers. This is in preparation to add the DM_BUFIO_NO_SLEEP flag.
      Signed-off-by: Nathan Huckleberry <nhuck@google.com>
      Signed-off-by: Mike Snitzer <snitzer@kernel.org>
    • dm: fix dm-raid crash if md_handle_request() splits bio · 9dd1cd32
      Mike Snitzer authored
      Commit ca522482 ("dm: pass NULL bdev to bio_alloc_clone")
      introduced the optimization to _not_ perform bio_associate_blkg()'s
      relatively costly work when DM core clones its bio. But in doing so it
      exposed the possibility for DM's cloned bio to alter DM target
      behavior (e.g. crash) if a target were to issue IO without first
      calling bio_set_dev().
      
      The DM raid target can trigger an MD crash due to its need to split
      the DM bio that is passed to md_handle_request(). The split will
      recurse to submit_bio_noacct() using a bio with an uninitialized
      ->bi_blkg. This NULL bio->bi_blkg causes blk_throtl_bio() to
      dereference a NULL blkg_to_tg(bio->bi_blkg).
      
      Fix this in DM core by adding a new 'needs_bio_set_dev' target flag that
      will make alloc_tio() call bio_set_dev() on behalf of the target.
      dm-raid is the only target that requires this flag. bio_set_dev()
      initializes the DM cloned bio's ->bi_blkg, using bio_associate_blkg,
      before passing the bio to md_handle_request().
      
      The long-term fix would be to audit and refactor MD code to rely on
      DM to split its bio, using dm_accept_partial_bio(), but there are MD
      raid personalities (e.g. raid1 and raid10) whose implementations are
      tightly coupled to handling the bio splitting inline.
      
      Fixes: ca522482 ("dm: pass NULL bdev to bio_alloc_clone")
      Cc: stable@vger.kernel.org
      Signed-off-by: Mike Snitzer <snitzer@kernel.org>
    • dm raid: fix address sanitizer warning in raid_resume · 7dad24db
      Mikulas Patocka authored
      There is a KASAN warning in raid_resume when running the lvm test
      lvconvert-raid.sh. The reason for the warning is that mddev->raid_disks
      is greater than rs->raid_disks, so the loop touches one entry beyond
      the allocated length.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@kernel.org>
    • dm raid: fix address sanitizer warning in raid_status · 1fbeea21
      Mikulas Patocka authored
      There is this warning when using a kernel with the address sanitizer
      and running this testsuite:
      https://gitlab.com/cki-project/kernel-tests/-/tree/main/storage/swraid/scsi_raid
      
      ==================================================================
      BUG: KASAN: slab-out-of-bounds in raid_status+0x1747/0x2820 [dm_raid]
      Read of size 4 at addr ffff888079d2c7e8 by task lvcreate/13319
      CPU: 0 PID: 13319 Comm: lvcreate Not tainted 5.18.0-0.rc3.<snip> #1
      Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
      Call Trace:
       <TASK>
       dump_stack_lvl+0x6a/0x9c
       print_address_description.constprop.0+0x1f/0x1e0
       print_report.cold+0x55/0x244
       kasan_report+0xc9/0x100
       raid_status+0x1747/0x2820 [dm_raid]
       dm_ima_measure_on_table_load+0x4b8/0xca0 [dm_mod]
       table_load+0x35c/0x630 [dm_mod]
       ctl_ioctl+0x411/0x630 [dm_mod]
       dm_ctl_ioctl+0xa/0x10 [dm_mod]
       __x64_sys_ioctl+0x12a/0x1a0
       do_syscall_64+0x5b/0x80
      
      The warning is caused by reading conf->max_nr_stripes in raid_status. The
      code in raid_status reads mddev->private, casts it to struct r5conf and
      reads the entry max_nr_stripes.
      
      However, if we have different raid type than 4/5/6, mddev->private
      doesn't point to struct r5conf; it may point to struct r0conf, struct
      r1conf, struct r10conf or struct mpconf. If we cast a pointer to one
      of these structs to struct r5conf, we will be reading invalid memory
      and KASAN warns about it.
      
      Fix this bug by reading struct r5conf only if raid type is 4, 5 or 6.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@kernel.org>
    • dm: Start pr_preempt from the same starting path · c6adada5
      Mike Christie authored
      pr_preempt has a similar issue as reserve: for all reservation types
      except the All Registrants ones, the preempt can create a
      reservation, and a follow-up reservation or release needs to go down
      the same path the preempt did. This commit makes pr_preempt work
      like reserve and release, always starting from the first path in the
      first group.
      
      This commit has been tested with windows failover clustering's
      validation test and libiscsi's PGR tests to check for regressions.
      They both don't have tests to verify this case, so I tested it
      manually.
      Signed-off-by: Mike Christie <michael.christie@oracle.com>
      Signed-off-by: Mike Snitzer <snitzer@kernel.org>
    • dm: Fix PR release handling for non All Registrants · 08a3c338
      Mike Christie authored
      This commit fixes a bug where we are leaving the reservation in place
      even though pr_release has run and returned success.
      
      If we have a Write Exclusive, Exclusive Access, or Write/Exclusive
      Registrants only reservation, the release must be sent down the path
      that is the reservation holder. The problem is that
      multipath_prepare_ioctl most likely selected path N for the
      reservation, but later, when we do the release, it may select a
      completely different path. The device will then return success
      because the NVMe and SCSI specs say to return success if there is
      no reservation or if the release is sent down a path that is not
      the holder. We then think we have released the reservation.
      
      This commit has us loop over each path and send a release so we can
      make sure the release is executed on the correct path. It has been
      tested with windows failover clustering's validation test which checks
      this case, and it has been tested manually (the libiscsi PGR tests
      don't have a test case for this yet, but I will be adding one).
      Signed-off-by: Mike Christie <michael.christie@oracle.com>
      Signed-off-by: Mike Snitzer <snitzer@kernel.org>
    • dm: Start pr_reserve from the same starting path · 70151087
      Mike Christie authored
      When an app does a pr_reserve it will go to whatever path we happen to
      be using at the time. This can result in errors when the app does a
      second pr_reserve call and expects success but gets a failure because
      the reserve is not done on the holder's path. This commit has us
      always start trying to do reserves from the first path in the first
      group.
      
      Windows failover clustering will produce the type of pattern above.
      With this commit, we will now pass its validation test for this case.
      Signed-off-by: Mike Christie <michael.christie@oracle.com>
      Signed-off-by: Mike Snitzer <snitzer@kernel.org>
    • dm: Allow dm_call_pr to be used for path searches · 8dd87f3c
      Mike Christie authored
      The specs state that if you send a reserve down a path that is already
      the holder success must be returned and if it goes down a path that
      is not the holder reservation conflict must be returned. Windows
      failover clustering will send a second reservation and expects that a
      device returns success. The problem for multipathing is that for an
      All Registrants reservation, we can send the reserve down any path but
      for all other reservation types there is one path that is the holder.
      
      To handle this we could add PR state to dm but that can get nasty.
      Look at target_core_pr.c for an example of the type of things we'd
      have to track. It will also get more complicated because other
      initiators can change the state so we will have to add in async
      event/sense handling.
      
      This commit, and the 3 commits that follow, try to keep dm simple
      and keep just doing passthrough. This commit modifies dm_call_pr to
      find the first usable path that can execute our pr_op and then
      return. When dm_pr_reserve is converted to dm_call_pr in the next
      commit, the normal case will use the same path for every reserve.
      Signed-off-by: Mike Christie <michael.christie@oracle.com>
      Signed-off-by: Mike Snitzer <snitzer@kernel.org>
    • dm: return early from dm_pr_call() if DM device is suspended · e120a5f1
      Mike Snitzer authored
      Otherwise PR ops may be issued while the broader DM device is being
      reconfigured, etc.
      
      Fixes: 9c72bad1 ("dm: call PR reserve/unreserve on each underlying device")
      Signed-off-by: Mike Snitzer <snitzer@kernel.org>
  5. 15 Jul, 2022 1 commit
    • dm thin: fix use-after-free crash in dm_sm_register_threshold_callback · 3534e5a5
      Luo Meng authored
      Fault inject on pool metadata device reports:
        BUG: KASAN: use-after-free in dm_pool_register_metadata_threshold+0x40/0x80
        Read of size 8 at addr ffff8881b9d50068 by task dmsetup/950
      
        CPU: 7 PID: 950 Comm: dmsetup Tainted: G        W         5.19.0-rc6 #1
        Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-1.fc33 04/01/2014
        Call Trace:
         <TASK>
         dump_stack_lvl+0x34/0x44
         print_address_description.constprop.0.cold+0xeb/0x3f4
         kasan_report.cold+0xe6/0x147
         dm_pool_register_metadata_threshold+0x40/0x80
         pool_ctr+0xa0a/0x1150
         dm_table_add_target+0x2c8/0x640
         table_load+0x1fd/0x430
         ctl_ioctl+0x2c4/0x5a0
         dm_ctl_ioctl+0xa/0x10
         __x64_sys_ioctl+0xb3/0xd0
         do_syscall_64+0x35/0x80
         entry_SYSCALL_64_after_hwframe+0x46/0xb0
      
      This can be easily reproduced using:
        echo offline > /sys/block/sda/device/state
        dd if=/dev/zero of=/dev/mapper/thin bs=4k count=10
        dmsetup load pool --table "0 20971520 thin-pool /dev/sda /dev/sdb 128 0 0"
      
      If a metadata commit fails, the transaction will be aborted and the
      metadata space maps will be destroyed. If a DM table reload then
      happens for this failed thin-pool, a use-after-free will occur in
      dm_sm_register_threshold_callback (called from
      dm_pool_register_metadata_threshold).
      
      Fix this in dm_pool_register_metadata_threshold() by returning the
      -EINVAL error if the thin-pool is in fail mode. Also fail pool_ctr()
      with a new error message: "Error registering metadata threshold".
      
      Fixes: ac8c3f3d ("dm thin: generate event when metadata threshold passed")
      Cc: stable@vger.kernel.org
      Reported-by: Hulk Robot <hulkci@huawei.com>
      Signed-off-by: Luo Meng <luomeng12@huawei.com>
      Signed-off-by: Mike Snitzer <snitzer@kernel.org>
  6. 14 Jul, 2022 6 commits
  7. 07 Jul, 2022 10 commits