1. 13 Nov, 2018 7 commits
  2. 24 Oct, 2018 2 commits
  3. 23 Oct, 2018 5 commits
  4. 22 Oct, 2018 2 commits
    • UBUNTU: SAUCE: fscache: Fix race in decrementing refcount of op->npages · 3bf972a3
      Kiran Kumar Modukuri authored
      BugLink: https://bugs.launchpad.net/bugs/1797314
      
      [Trace]
      Seen in 4.4.x kernels; the same bug affects fscache in the latest upstream kernels.
      Jun 25 11:32:08  kernel: [4740718.880898] FS-Cache:
      Jun 25 11:32:08  kernel: [4740718.880920] FS-Cache: Assertion failed
      Jun 25 11:32:08  kernel: [4740718.880934] FS-Cache: 0 > 0 is false
      Jun 25 11:32:08  kernel: [4740718.881001] ------------[ cut here ]------------
      Jun 25 11:32:08  kernel: [4740718.881017] kernel BUG at /usr/src/linux-4.4.0/fs/fscache/operation.c:449!
      Jun 25 11:32:08  kernel: [4740718.881040] invalid opcode: 0000 [#1] SMP
      ...
      Jun 25 11:32:08  kernel: [4740718.892659] Call Trace:
      Jun 25 11:32:08  kernel: [4740718.893506]  [<ffffffffc1464cf9>] cachefiles_read_copier+0x3a9/0x410 [cachefiles]
      Jun 25 11:32:08  kernel: [4740718.894374]  [<ffffffffc037e272>] fscache_op_work_func+0x22/0x50 [fscache]
      Jun 25 11:32:08  kernel: [4740718.895180]  [<ffffffff81096da0>] process_one_work+0x150/0x3f0
      Jun 25 11:32:08  kernel: [4740718.895966]  [<ffffffff8109751a>] worker_thread+0x11a/0x470
      Jun 25 11:32:08  kernel: [4740718.896753]  [<ffffffff81808e59>] ? __schedule+0x359/0x980
      Jun 25 11:32:08  kernel: [4740718.897783]  [<ffffffff81097400>] ? rescuer_thread+0x310/0x310
      Jun 25 11:32:08  kernel: [4740718.898581]  [<ffffffff8109cdd6>] kthread+0xd6/0xf0
      Jun 25 11:32:08  kernel: [4740718.899469]  [<ffffffff8109cd00>] ? kthread_park+0x60/0x60
      Jun 25 11:32:08  kernel: [4740718.900477]  [<ffffffff8180d0cf>] ret_from_fork+0x3f/0x70
      Jun 25 11:32:08  kernel: [4740718.901514]  [<ffffffff8109cd00>] ? kthread_park+0x60/0x60
      
      [Problem]
              atomic_sub(n_pages, &op->n_pages);
              if (atomic_read(&op->n_pages) <= 0)
                      fscache_op_complete(&op->op, true);
      
      The code in fscache_retrieval_complete() uses atomic_sub() followed by a
      separate atomic_read(). Two threads decrementing the page count can race,
      each see op->n_pages at zero at the same time, and both end up calling
      fscache_op_complete(), leading to the oops above.
      
      [Fix]
      
      The fix is trivial: use atomic_sub_return() so the decrement and the test
      happen in a single atomic operation.
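
      A minimal sketch of the corrected helper, assuming the counter and the
      completion call shown in the [Problem] snippet above (the helper name and
      exact form are illustrative, not necessarily the final patch):

              static inline void fscache_retrieval_complete(struct fscache_retrieval *op,
                                                            int n_pages)
              {
                      /* Subtract and test in a single atomic operation, so
                       * only the thread that takes the counter to (or below)
                       * zero completes the operation. */
                      if (atomic_sub_return(n_pages, &op->n_pages) <= 0)
                              fscache_op_complete(&op->op, true);
              }
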
      Signed-off-by: Kiran Kumar Modukuri <kiran.modukuri@gmail.com>
      (backported from
       https://www.redhat.com/archives/linux-cachefs/2018-September/msg00001.html
       The message has been on-list since 21 September 2018 and has
       received no feedback whatsoever.
      
       I have cleaned up the commit message a little bit and dropped a
       whitespace change.)
      Signed-off-by: Daniel Axtens <daniel.axtens@canonical.com>
      Acked-by: Khalid Elmously <khalid.elmously@canonical.com>
      Acked-by: Stefan Bader <stefan.bader@canonical.com>
      Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
    • UBUNTU: SAUCE: (no-up) virtio-scsi: Decrement reqs counter before SCSI command requeue · 526467a6
      Mauricio Faria de Oliveira authored
      BugLink: https://bugs.launchpad.net/bugs/1798110
      
      The 'reqs' counter is incremented in the SCSI .queuecommand() path,
      right before virtscsi_queuecommand() is called, in either
       - virtscsi_queuecommand_single(), or
       - virtscsi_queuecommand_multi(), via virtscsi_pick_vq{_mq}().
      
      And it's decremented only in the SCSI command completion callback
      (after the command is successfully queued and completed by the adapter):
       - virtscsi_complete_cmd().
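
      For reference, a minimal sketch of that completion-side decrement,
      assuming the 4.4-era driver where the per-target state is looked up via
      scsi_target(sc->device)->hostdata (illustrative only):

          static void virtscsi_complete_cmd(struct virtio_scsi *vscsi, void *buf)
          {
                  struct virtio_scsi_cmd *cmd = buf;
                  struct scsi_cmnd *sc = cmd->sc;
                  struct virtio_scsi_target_state *tgt =
                          scsi_target(sc->device)->hostdata;
                  ...
                  /* drop the in-flight count taken on the .queuecommand() path */
                  atomic_dec(&tgt->reqs);
          }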
      
      This allows the counter to be incremented but _not_ decremented if
      virtscsi_queuecommand() gets an error adding/kicking the command to
      the virtio ring (i.e., virtscsi_kick_cmd() returns a value that is
      neither 0 nor -EIO).
      
          static int virtscsi_queuecommand(...)
          {
                  ...
                  ret = virtscsi_kick_cmd(...)
                  if (ret == -EIO) {
                          ...
                          virtscsi_complete_cmd(vscsi, cmd);
                          ...
                  } else if (ret != 0) {
                          return SCSI_MLQUEUE_HOST_BUSY;
                  }
                  return 0;
          }
      
      In that case, the return code SCSI_MLQUEUE_HOST_BUSY causes the SCSI
      command to be requeued by the SCSI layer, which sends it again later
      in the .queuecommand() path -- incrementing the reqs counter _again_.
      
      This may happen many times for the same SCSI command, depending on
      the virtio ring condition/implementation, so the reqs counter will
      never return to zero (it's decremented only once in the completion
      callback). And it may happen for (m)any SCSI commands in this path.
      
      Unfortunately, that causes a problem with a downstream/SAUCE patch
      for Xenial, which uses the reqs counter to synchronize with the
      completion callback: commit f1f609d8 ("UBUNTU: SAUCE: (no-up)
      virtio-scsi: Fix race in target free") waits for the value to reach
      zero.
      
      Together, this problem and that patch prevent the SCSI target removal
      from finishing, eventually causing a CPU soft lockup on another CPU
      that is waiting for a kernel resource which remains locked in the
      call chain of the stuck CPU.
      
      This has been verified 1) with a synthetic QEMU+GDB test case that
      fakes the number of available elements in the virtio ring once
      (harmless), so as to force the SCSI command to be requeued, and then
      uses the QEMU monitor to remove the virtio-scsi target.
      
      _AND_ 2) with the test case reported by the customer (a for-loop on
      a cloud instance that repeatedly mounts the virtio-scsi drive, copies
      data out of it, unmounts it, then detaches the virtio-scsi drive).
      (Here, the problem usually happens in the 1st or 2nd iteration, but
      with the patch it has run for 35 iterations without any problems.)
      
      Upstream has done away with the reqs counter (it was originally used
      only to check whether any requests were still active, for queue
      steering; not for our synchronization purposes). Instead of trying
      to find an alternative synchronization mechanism for now, let's just
      fix the behavior that we know is incorrect.
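
      As a minimal sketch of the fix (assuming the 4.4-era driver layout,
      where the per-target state holding 'reqs' is reachable via
      scsi_target(sc->device)->hostdata; illustrative, not necessarily the
      exact patch), the requeue branch shown above gains a matching
      decrement:

                  } else if (ret != 0) {
                          struct virtio_scsi_target_state *tgt =
                                  scsi_target(sc->device)->hostdata;

                          /* Undo the increment taken on the .queuecommand()
                           * path; the requeued command will re-enter
                           * .queuecommand() and increment 'reqs' again. */
                          atomic_dec(&tgt->reqs);
                          return SCSI_MLQUEUE_HOST_BUSY;
                  }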
      Signed-off-by: Mauricio Faria de Oliveira <mfo@canonical.com>
      Tested-by: Mauricio Faria de Oliveira <mfo@canonical.com>
      Tested-by: David Coronel <david.coronel@canonical.com>
      Acked-by: Khalid Elmously <khalid.elmously@canonical.com>
      Acked-by: Stefan Bader <stefan.bader@canonical.com>
      Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
  5. 19 Oct, 2018 24 commits