Commit a6b714ab authored by Dmitry Vyukov, committed by Khalid Elmously

locking/x86: Remove the unused atomic_inc_short() method

BugLink: https://bugs.launchpad.net/bugs/1859640

commit 31b35f6b upstream.

It is completely unused and implemented only on x86.
Remove it.
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20170526172900.91058-1-dvyukov@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Connor Kuehl <connor.kuehl@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
parent 63b23fdf
@@ -24,8 +24,7 @@
  * has an opportunity to return -EFAULT to the user if needed.
  * The 64-bit routines just return a "long long" with the value,
  * since they are only used from kernel space and don't expect to fault.
- * Support for 16-bit ops is included in the framework but we don't provide
- * any (x86_64 has an atomic_inc_short(), so we might want to some day).
+ * Support for 16-bit ops is included in the framework but we don't provide any.
  *
  * Note that the caller is advised to issue a suitable L1 or L2
  * prefetch on the address being manipulated to avoid extra stalls.
...
@@ -220,19 +220,6 @@ static __always_inline int __atomic_add_unless(atomic_t *v, int a, int u)
 	return c;
 }
 
-/**
- * atomic_inc_short - increment of a short integer
- * @v: pointer to type int
- *
- * Atomically adds 1 to @v
- * Returns the new value of @u
- */
-static __always_inline short int atomic_inc_short(short int *v)
-{
-	asm(LOCK_PREFIX "addw $1, %0" : "+m" (*v));
-	return *v;
-}
-
 #ifdef CONFIG_X86_32
 # include <asm/atomic64_32.h>
 #else
...
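For reference, the removed helper only performed a LOCK-prefixed 16-bit add and returned the incremented value. A minimal, hypothetical user-space sketch of an equivalent (not part of this commit; it assumes the GCC/Clang __atomic builtins and a plain C translation unit, and the names are illustrative only) could look like this:

/* Hypothetical stand-alone sketch: a portable equivalent of the removed
 * atomic_inc_short(), using the GCC/Clang __atomic builtins instead of
 * x86-specific LOCK-prefixed inline assembly. Not kernel code. */
#include <stdio.h>

static inline short atomic_inc_short_equiv(short *v)
{
	/* Atomically add 1 and return the new value, fully ordered. */
	return __atomic_add_fetch(v, 1, __ATOMIC_SEQ_CST);
}

int main(void)
{
	short counter = 0;
	printf("%d\n", atomic_inc_short_equiv(&counter)); /* prints 1 */
	return 0;
}

Unlike the removed inline-assembly version, which re-read *v non-atomically after the locked add, the builtin returns the post-increment value from the atomic operation itself.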