Commit 31956166 authored by Alistair Popple, committed by Linus Torvalds

mm/mmu_notifier.c: fix race in mmu_interval_notifier_remove()

In some cases it is possible for mmu_interval_notifier_remove() to race
with mn_tree_inv_end() allowing it to return while the notifier data
structure is still in use.  Consider the following sequence:

  CPU0 - mn_tree_inv_end()            CPU1 - mmu_interval_notifier_remove()
  ----------------------------------- ------------------------------------
                                      spin_lock(subscriptions->lock);
                                      seq = subscriptions->invalidate_seq;
  spin_lock(subscriptions->lock);     spin_unlock(subscriptions->lock);
  subscriptions->invalidate_seq++;
                                      wait_event(invalidate_seq != seq);
                                      return;
  interval_tree_remove(interval_sub); kfree(interval_sub);
  spin_unlock(subscriptions->lock);
  wake_up_all();

Because the wait_event() condition is already true, it returns immediately.  This
can lead to use-after-free errors if the caller frees the data structure
containing the interval notifier subscription while it is still on a
deferred list.  Fix this by taking the subscriptions lock when reading
invalidate_seq, so the wait cannot complete while mn_tree_inv_end() is
still processing the deferred list.
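For illustration only, here is a minimal sketch of the locked-predicate
pattern the fix relies on (the names ending in _sketch are hypothetical
and do not exist in the kernel tree; the real change is in the diff
below): the wait_event() condition samples the sequence counter under the
same spinlock that the waker holds while it finishes the deferred
removal, so the waiter cannot observe the new sequence number until the
waker has dropped the lock.

  #include <linux/spinlock.h>
  #include <linux/wait.h>

  /* Illustrative only: a stand-in for struct mmu_notifier_subscriptions. */
  struct subscriptions_sketch {
          spinlock_t lock;
          unsigned long invalidate_seq;
          wait_queue_head_t wq;
  };

  /* Waker side: bump the sequence and finish removals under the lock. */
  static void inv_end_sketch(struct subscriptions_sketch *s)
  {
          spin_lock(&s->lock);
          s->invalidate_seq++;
          /* ... process the deferred list, e.g. interval_tree_remove() ... */
          spin_unlock(&s->lock);
          wake_up_all(&s->wq);
  }

  /*
   * Waiter side: the predicate takes the same lock, so it cannot see the
   * incremented sequence until the waker has released the lock and thus
   * finished touching the subscription.
   */
  static bool seq_released_sketch(struct subscriptions_sketch *s,
                                  unsigned long seq)
  {
          bool ret;

          spin_lock(&s->lock);
          ret = s->invalidate_seq != seq;
          spin_unlock(&s->lock);
          return ret;
  }

  static void remove_wait_sketch(struct subscriptions_sketch *s,
                                 unsigned long seq)
  {
          wait_event(s->wq, seq_released_sketch(s, seq));
          /* Only after this returns is it safe to free the subscription. */
  }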

I observed this whilst running stress testing during some development.
You do have to be pretty unlucky, but it leads to the usual problems of
use-after-free (memory corruption, kernel crash, difficult to diagnose
WARN_ON, etc).

Link: https://lkml.kernel.org/r/20220420043734.476348-1-apopple@nvidia.com
Fixes: 99cb252f ("mm/mmu_notifier: add an interval tree notifier")
Signed-off-by: Alistair Popple <apopple@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent ecc04463
mm/mmu_notifier.c

@@ -1036,6 +1036,18 @@ int mmu_interval_notifier_insert_locked(
 }
 EXPORT_SYMBOL_GPL(mmu_interval_notifier_insert_locked);
 
+static bool
+mmu_interval_seq_released(struct mmu_notifier_subscriptions *subscriptions,
+			  unsigned long seq)
+{
+	bool ret;
+
+	spin_lock(&subscriptions->lock);
+	ret = subscriptions->invalidate_seq != seq;
+	spin_unlock(&subscriptions->lock);
+	return ret;
+}
+
 /**
  * mmu_interval_notifier_remove - Remove a interval notifier
  * @interval_sub: Interval subscription to unregister
@@ -1083,7 +1095,7 @@ void mmu_interval_notifier_remove(struct mmu_interval_notifier *interval_sub)
 	lock_map_release(&__mmu_notifier_invalidate_range_start_map);
 	if (seq)
 		wait_event(subscriptions->wq,
-			   READ_ONCE(subscriptions->invalidate_seq) != seq);
+			   mmu_interval_seq_released(subscriptions, seq));
 
 	/* pairs with mmgrab in mmu_interval_notifier_insert() */
 	mmdrop(mm);