Commit ca766f04 authored by Vineet Gupta

ARC: atomic: !LLSC: use int data type consistently

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
parent b1040148
@@ -42,7 +42,7 @@ static inline void arch_atomic_##op(int i, atomic_t *v)        \
 static inline int arch_atomic_##op##_return(int i, atomic_t *v)        \
 {                                                                      \
        unsigned long flags;                                            \
-       unsigned long temp;                                             \
+       unsigned int temp;                                              \
                                                                        \
        /*                                                              \
         * spin lock/unlock provides the needed smp_mb() before/after  \
@@ -60,7 +60,7 @@ static inline int arch_atomic_##op##_return(int i, atomic_t *v) \
 static inline int arch_atomic_fetch_##op(int i, atomic_t *v)           \
 {                                                                      \
        unsigned long flags;                                            \
-       unsigned long orig;                                             \
+       unsigned int orig;                                              \
                                                                        \
        /*                                                              \
         * spin lock/unlock provides the needed smp_mb() before/after  \
...
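For context, a minimal, compilable sketch (not the kernel source) of how the spinlock-based !LLSC variants touched by this diff expand for the "add" case. The atomic_t definition and the atomic_ops_lock_sketch()/atomic_ops_unlock_sketch() helpers below are simplified stand-ins invented for illustration, not the kernel's primitives; the point being shown is that temp and orig are int-sized to match atomic_t's int counter.

    #include <stdio.h>

    typedef struct { int counter; } atomic_t;

    /* Hypothetical stand-ins for the kernel's lock + irq-flags handling. */
    static int dummy_lock;
    static inline unsigned long atomic_ops_lock_sketch(void)
    {
            (void)dummy_lock;       /* a real implementation takes a spinlock here */
            return 0;
    }
    static inline void atomic_ops_unlock_sketch(unsigned long flags)
    {
            (void)flags;            /* ... and releases it here */
    }

    /* Expanded form of arch_atomic_##op##_return for "add". */
    static inline int arch_atomic_add_return_sketch(int i, atomic_t *v)
    {
            unsigned long flags;
            unsigned int temp;      /* int-sized, matching atomic_t's counter */

            flags = atomic_ops_lock_sketch();
            temp = v->counter;
            temp += i;
            v->counter = temp;
            atomic_ops_unlock_sketch(flags);

            return temp;            /* returns the post-op value */
    }

    /* Expanded form of arch_atomic_fetch_##op for "add". */
    static inline int arch_atomic_fetch_add_sketch(int i, atomic_t *v)
    {
            unsigned long flags;
            unsigned int orig;      /* holds the pre-op value, also int-sized */

            flags = atomic_ops_lock_sketch();
            orig = v->counter;
            v->counter += i;
            atomic_ops_unlock_sketch(flags);

            return orig;            /* returns the pre-op value */
    }

    int main(void)
    {
            atomic_t v = { .counter = 40 };

            printf("add_return(2): %d\n", arch_atomic_add_return_sketch(2, &v)); /* 42 */
            printf("fetch_add(1):  %d\n", arch_atomic_fetch_add_sketch(1, &v));  /* 42 */
            printf("final:         %d\n", v.counter);                            /* 43 */
            return 0;
    }

On 32-bit ARC, unsigned long and unsigned int have the same width, so the diff reads as a type-consistency cleanup (the temporaries now match atomic_t's int counter, per the commit title) rather than a behavioral change.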