1. 02 Jan, 2018 2 commits
    • crypto: mcryptd - protect the per-CPU queue with a lock · 2e234e70
      Sebastian Andrzej Siewior authored
      commit 9abffc6f upstream.
      
      mcryptd_enqueue_request() grabs the per-CPU queue struct and protects
      access to it with disabled preemption. Then it schedules a worker on the
      same CPU. The worker in mcryptd_queue_worker() guards access to the same
      per-CPU variable with disabled preemption.
      
      If we take CPU hotplug into account, it is possible that between
      queue_work_on() and the actual invocation of the worker the CPU goes
      down, and the worker is then scheduled on _another_ CPU. At that
      point the preempt_disable() protection no longer works. The easiest
      fix is to add a spin_lock() to guard access to the list, as in the
      sketch below.
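      
      For reference, a condensed sketch of the locked enqueue path. This is
      not the literal upstream diff: the structures are simplified and the
      field name q_lock is assumed here; the real patch also touches the
      worker and flush paths.
      
              #include <linux/spinlock.h>
              #include <linux/workqueue.h>
              #include <linux/percpu.h>
              #include <crypto/algapi.h>      /* struct crypto_queue */
              #include <crypto/crypto_wq.h>   /* kcrypto_wq */
      
              struct mcryptd_cpu_queue {
                      struct crypto_queue queue;
                      spinlock_t q_lock;      /* protects 'queue' */
                      struct work_struct work;
              };
      
              struct mcryptd_queue {
                      struct mcryptd_cpu_queue __percpu *cpu_queue;
              };
      
              static int mcryptd_enqueue_request(struct mcryptd_queue *mq,
                                                 struct crypto_async_request *request)
              {
                      struct mcryptd_cpu_queue *cpu_queue;
                      int cpu, err;
      
                      /* Disabled preemption alone no longer protects the list once
                       * the worker may run on another CPU; take the lock instead.
                       * spin_lock() also disables preemption, so smp_processor_id()
                       * is safe inside the critical section. */
                      cpu_queue = raw_cpu_ptr(mq->cpu_queue);
                      spin_lock(&cpu_queue->q_lock);
                      cpu = smp_processor_id();
      
                      err = crypto_enqueue_request(&cpu_queue->queue, request);
                      spin_unlock(&cpu_queue->q_lock);
      
                      /* run the worker on the CPU whose queue was just filled */
                      queue_work_on(cpu, kcrypto_wq, &cpu_queue->work);
                      return err;
              }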
      
      Another detail: mcryptd_queue_worker() does not process more than
      MCRYPTD_BATCH invocations in a row. If there are still items left, it
      invokes queue_work() to continue later. I would suggest simply
      dropping that check, because the code does not use a system workqueue
      and its workqueue is already marked as "CPU_INTENSIVE"; if preemption
      is required then the scheduler should take care of it.
      However, if queue_work() is used then the work item is marked as CPU
      unbound. That means it will try to run on the local CPU but it may
      run on another CPU as well, especially with
      CONFIG_DEBUG_WQ_FORCE_RR_CPU=y. Again, preempt_disable() won't help
      here, but the newly introduced lock will.
      In order to keep the work item on the local CPU (and avoid the
      round-robin placement) I changed it to queue_work_on(), as sketched
      below.
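      
      The corresponding change at the end of mcryptd_queue_worker(), again
      as a sketch rather than the literal diff:
      
              /* end of mcryptd_queue_worker(): MCRYPTD_BATCH requests were
               * handled but the queue is not empty yet */
              if (cpu_queue->queue.qlen)
                      /* re-queue on the local CPU; a plain queue_work() would
                       * leave the item CPU-unbound and it could run elsewhere */
                      queue_work_on(smp_processor_id(), kcrypto_wq,
                                    &cpu_queue->work);
      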
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • ACPI: APEI / ERST: Fix missing error handling in erst_reader() · db09203e
      Takashi Iwai authored
      commit bb82e0b4 upstream.
      
      The commit f6f82851 ("pstore: pass allocated memory region back to
      caller") changed the check of the return value from erst_read() in
      erst_reader() in the following way:
      
              if (len == -ENOENT)
                      goto skip;
      -       else if (len < 0) {
      -               rc = -1;
      +       else if (len < sizeof(*rcd)) {
      +               rc = -EIO;
                      goto out;
      
      This introduced another bug: sizeof() has an unsigned type, so len is
      promoted to unsigned for the comparison and a negative value no
      longer takes the error branch. As a result, when an error is returned
      from erst_read(), the code falls through and may eventually lead to
      something like memory corruption.
      
      This patch adds an explicit check for a negative error value to
      address the issue, as illustrated below.
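      
      A small stand-alone user-space illustration of the signed/unsigned
      issue (rcd_stub is a made-up stand-in for the record structure, not
      kernel code); the second half shows the shape of the explicit check
      that the fix introduces:
      
              #include <errno.h>
              #include <stdio.h>
              #include <sys/types.h>
      
              struct rcd_stub { char buf[64]; };
      
              int main(void)
              {
                      ssize_t len = -EIO;     /* an error as returned by erst_read() */
      
                      /* sizeof() has type size_t, so 'len' is converted to unsigned
                       * here and the negative value compares as a huge positive
                       * number: the branch is NOT taken. */
                      if (len < sizeof(struct rcd_stub))
                              puts("short read caught");
                      else
                              printf("error slipped through, len=%zd\n", len);
      
                      /* the fix: test for a negative return value explicitly first */
                      if (len < 0)
                              printf("error caught: %zd\n", len);
                      else if ((size_t)len < sizeof(struct rcd_stub))
                              puts("short read caught");
      
                      return 0;
              }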
      
      Fixes: f6f82851 (pstore: pass allocated memory region back to caller)
      Tested-by: Jerry Tang <jtang@suse.com>
      Signed-off-by: Takashi Iwai <tiwai@suse.de>
      Acked-by: Kees Cook <keescook@chromium.org>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  2. 25 Dec, 2017 38 commits