Commit b95a4012 authored by Uros Bizjak, committed by Joerg Roedel

iommufd: Use atomic_long_try_cmpxchg() in incr_user_locked_vm()

Use atomic_long_try_cmpxchg() instead of
atomic_long_cmpxchg(*ptr, old, new) != old in incr_user_locked_vm().
The x86 CMPXCHG instruction returns success in the ZF flag, so this change
saves a compare after cmpxchg (and the related move instruction in front of cmpxchg).

Also, atomic_long_try_cmpxchg() implicitly assigns the old *ptr value
to "old" when cmpxchg fails, so there is no need to re-read the value
in the loop.
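
For illustration, this is the general shape of the conversion (a minimal
sketch of the try_cmpxchg idiom using a hypothetical atomic_long_t counter,
not the iommufd code itself, which is in the diff below):

	atomic_long_t counter;
	long old, new;

	/* Read the current value once, before entering the loop. */
	old = atomic_long_read(&counter);
	do {
		new = old + 1;
		/*
		 * On failure, atomic_long_try_cmpxchg() updates "old" with
		 * the current *counter value, so no explicit re-read is
		 * needed before retrying.
		 */
	} while (!atomic_long_try_cmpxchg(&counter, &old, new));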
Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Will Deacon <will@kernel.org>
Cc: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20240522082729.971123-3-ubizjak@gmail.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
parent c94ad1d5
@@ -809,13 +809,14 @@ static int incr_user_locked_vm(struct iopt_pages *pages, unsigned long npages)
 	lock_limit = task_rlimit(pages->source_task, RLIMIT_MEMLOCK) >>
 		     PAGE_SHIFT;
+	cur_pages = atomic_long_read(&pages->source_user->locked_vm);
 	do {
-		cur_pages = atomic_long_read(&pages->source_user->locked_vm);
 		new_pages = cur_pages + npages;
 		if (new_pages > lock_limit)
 			return -ENOMEM;
-	} while (atomic_long_cmpxchg(&pages->source_user->locked_vm, cur_pages,
-				     new_pages) != cur_pages);
+	} while (!atomic_long_try_cmpxchg(&pages->source_user->locked_vm,
+					  &cur_pages, new_pages));
 	return 0;
 }