Commit 6d5f0ebf authored by Linus Torvalds

Merge branch 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull core locking updates from Ingo Molnar:
 "The main updates in this cycle were:

   - mutex MCS refactoring finishing touches: improve comments, refactor
     and clean up code, reduce debug data structure footprint, etc.

   - qrwlock finishing touches: remove old code, self-test updates.

   - small rwsem optimization

   - various smaller fixes/cleanups"

* 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  locking/lockdep: Revert qrwlock recursive stuff
  locking/rwsem: Avoid double checking before try acquiring write lock
  locking/rwsem: Move EXPORT_SYMBOL() lines to follow function definition
  locking/rwlock, x86: Delete unused asm/rwlock.h and rwlock.S
  locking/rwlock, x86: Clean up asm/spinlock*.h to remove old rwlock code
  locking/semaphore: Resolve some shadow warnings
  locking/selftest: Support queued rwlock
  locking/lockdep: Restrict the use of recursive read_lock() with qrwlock
  locking/spinlocks: Always evaluate the second argument of spin_lock_nested()
  locking/Documentation: Update locking/mutex-design.txt disadvantages
  locking/Documentation: Move locking related docs into Documentation/locking/
  locking/mutexes: Use MUTEX_SPIN_ON_OWNER when appropriate
  locking/mutexes: Refactor optimistic spinning code
  locking/mcs: Remove obsolete comment
  locking/mutexes: Document quick lock release when unlocking
  locking/mutexes: Standardize arguments in lock/unlock slowpaths
  locking: Remove deprecated smp_mb__() barriers
parents dbb885fe 8acd91e8
@@ -287,6 +287,8 @@ local_ops.txt
 	- semantics and behavior of local atomic operations.
 lockdep-design.txt
 	- documentation on the runtime locking correctness validator.
+locking/
+	- directory with info about kernel locking primitives
 lockstat.txt
 	- info on collecting statistics on locks (and contention).
 lockup-watchdogs.txt
...
@@ -1972,7 +1972,7 @@ machines due to caching.
     <itemizedlist>
       <listitem>
	  <para>
-	    <filename>Documentation/spinlocks.txt</filename>:
+	    <filename>Documentation/locking/spinlocks.txt</filename>:
	    Linus Torvalds' spinlocking tutorial in the kernel sources.
	  </para>
       </listitem>
...
@@ -12,7 +12,7 @@ Because things like lock contention can severely impact performance.
 - HOW
 
 Lockdep already has hooks in the lock functions and maps lock instances to
-lock classes. We build on that (see Documentation/lockdep-design.txt).
+lock classes. We build on that (see Documentation/locking/lockdep-design.txt).
 The graph below shows the relation between the lock functions and the various
 hooks therein.
...
@@ -145,9 +145,9 @@ Disadvantages
 
 Unlike its original design and purpose, 'struct mutex' is larger than
 most locks in the kernel. E.g: on x86-64 it is 40 bytes, almost twice
-as large as 'struct semaphore' (24 bytes) and 8 bytes shy of the
-'struct rw_semaphore' variant. Larger structure sizes mean more CPU
-cache and memory footprint.
+as large as 'struct semaphore' (24 bytes) and tied, along with rwsems,
+for the largest lock in the kernel. Larger structure sizes mean more
+CPU cache and memory footprint.
 
 When to use mutexes
 -------------------
...
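Worth noting for reviewers: the size claim above is easy to verify at build time. A minimal sketch of a throwaway module that prints the three sizes; the module name and messages are illustrative, not part of this series:

/* size_check.c - hypothetical module printing lock structure sizes */
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/semaphore.h>
#include <linux/rwsem.h>

static int __init size_check_init(void)
{
	/* on x86-64 with this series, mutex and rw_semaphore come out equal */
	pr_info("struct mutex:        %zu bytes\n", sizeof(struct mutex));
	pr_info("struct semaphore:    %zu bytes\n", sizeof(struct semaphore));
	pr_info("struct rw_semaphore: %zu bytes\n", sizeof(struct rw_semaphore));
	return 0;
}

static void __exit size_check_exit(void) { }

module_init(size_check_init);
module_exit(size_check_exit);
MODULE_LICENSE("GPL");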
@@ -5680,8 +5680,8 @@ M:	Ingo Molnar <mingo@redhat.com>
 L:	linux-kernel@vger.kernel.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git core/locking
 S:	Maintained
-F:	Documentation/lockdep*.txt
-F:	Documentation/lockstat.txt
+F:	Documentation/locking/lockdep*.txt
+F:	Documentation/locking/lockstat.txt
 F:	include/linux/lockdep.h
 F:	kernel/locking/
...
#ifndef _ASM_X86_RWLOCK_H
#define _ASM_X86_RWLOCK_H

#include <asm/asm.h>

#if CONFIG_NR_CPUS <= 2048

#ifndef __ASSEMBLY__
typedef union {
	s32 lock;
	s32 write;
} arch_rwlock_t;
#endif

#define RW_LOCK_BIAS		0x00100000
#define READ_LOCK_SIZE(insn)	__ASM_FORM(insn##l)
#define READ_LOCK_ATOMIC(n)	atomic_##n
#define WRITE_LOCK_ADD(n)	__ASM_FORM_COMMA(addl n)
#define WRITE_LOCK_SUB(n)	__ASM_FORM_COMMA(subl n)
#define WRITE_LOCK_CMP		RW_LOCK_BIAS

#else /* CONFIG_NR_CPUS > 2048 */

#include <linux/const.h>

#ifndef __ASSEMBLY__
typedef union {
	s64 lock;
	struct {
		u32 read;
		s32 write;
	};
} arch_rwlock_t;
#endif

#define RW_LOCK_BIAS		(_AC(1,L) << 32)
#define READ_LOCK_SIZE(insn)	__ASM_FORM(insn##q)
#define READ_LOCK_ATOMIC(n)	atomic64_##n
#define WRITE_LOCK_ADD(n)	__ASM_FORM(incl)
#define WRITE_LOCK_SUB(n)	__ASM_FORM(decl)
#define WRITE_LOCK_CMP		1

#endif /* CONFIG_NR_CPUS */

#define __ARCH_RW_LOCK_UNLOCKED		{ RW_LOCK_BIAS }

/* Actual code is in asm/spinlock.h or in arch/x86/lib/rwlock.S */

#endif /* _ASM_X86_RWLOCK_H */
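For readers unfamiliar with the scheme being deleted: the biased counter can be sketched with user-space C11 atomics. This illustrates the idea only, not the kernel code; the trylock functions are invented names:

#include <stdatomic.h>
#include <stdbool.h>

#define RW_LOCK_BIAS 0x00100000		/* a writer claims the whole bias */

static atomic_int lock = RW_LOCK_BIAS;	/* unlocked: full bias available */

static bool read_trylock(void)
{
	/* each reader takes 1; a negative result means a writer holds it */
	if (atomic_fetch_sub(&lock, 1) - 1 >= 0)
		return true;
	atomic_fetch_add(&lock, 1);	/* undo, like READ_LOCK_ATOMIC(inc) */
	return false;
}

static bool write_trylock(void)
{
	/* the writer takes the entire bias; any reader makes this nonzero */
	if (atomic_fetch_sub(&lock, RW_LOCK_BIAS) - RW_LOCK_BIAS == 0)
		return true;
	atomic_fetch_add(&lock, RW_LOCK_BIAS);
	return false;
}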
@@ -187,7 +187,6 @@ static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
 		cpu_relax();
 }
 
-#ifndef CONFIG_QUEUE_RWLOCK
 /*
  * Read-write spinlocks, allowing multiple readers
  * but only one writer.
@@ -198,91 +197,15 @@ static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
  * irq-safe write-lock, but readers can get non-irqsafe
  * read-locks.
  *
- * On x86, we implement read-write locks as a 32-bit counter
- * with the high bit (sign) being the "contended" bit.
+ * On x86, we implement read-write locks using the generic qrwlock with
+ * x86 specific optimization.
  */
-
-/**
- * read_can_lock - would read_trylock() succeed?
- * @lock: the rwlock in question.
- */
-static inline int arch_read_can_lock(arch_rwlock_t *lock)
-{
-	return lock->lock > 0;
-}
-
-/**
- * write_can_lock - would write_trylock() succeed?
- * @lock: the rwlock in question.
- */
-static inline int arch_write_can_lock(arch_rwlock_t *lock)
-{
-	return lock->write == WRITE_LOCK_CMP;
-}
-
-static inline void arch_read_lock(arch_rwlock_t *rw)
-{
-	asm volatile(LOCK_PREFIX READ_LOCK_SIZE(dec) " (%0)\n\t"
-		     "jns 1f\n"
-		     "call __read_lock_failed\n\t"
-		     "1:\n"
-		     ::LOCK_PTR_REG (rw) : "memory");
-}
-
-static inline void arch_write_lock(arch_rwlock_t *rw)
-{
-	asm volatile(LOCK_PREFIX WRITE_LOCK_SUB(%1) "(%0)\n\t"
-		     "jz 1f\n"
-		     "call __write_lock_failed\n\t"
-		     "1:\n"
-		     ::LOCK_PTR_REG (&rw->write), "i" (RW_LOCK_BIAS)
-		     : "memory");
-}
-
-static inline int arch_read_trylock(arch_rwlock_t *lock)
-{
-	READ_LOCK_ATOMIC(t) *count = (READ_LOCK_ATOMIC(t) *)lock;
-
-	if (READ_LOCK_ATOMIC(dec_return)(count) >= 0)
-		return 1;
-	READ_LOCK_ATOMIC(inc)(count);
-	return 0;
-}
-
-static inline int arch_write_trylock(arch_rwlock_t *lock)
-{
-	atomic_t *count = (atomic_t *)&lock->write;
-
-	if (atomic_sub_and_test(WRITE_LOCK_CMP, count))
-		return 1;
-	atomic_add(WRITE_LOCK_CMP, count);
-	return 0;
-}
-
-static inline void arch_read_unlock(arch_rwlock_t *rw)
-{
-	asm volatile(LOCK_PREFIX READ_LOCK_SIZE(inc) " %0"
-		     :"+m" (rw->lock) : : "memory");
-}
-
-static inline void arch_write_unlock(arch_rwlock_t *rw)
-{
-	asm volatile(LOCK_PREFIX WRITE_LOCK_ADD(%1) "%0"
-		     : "+m" (rw->write) : "i" (RW_LOCK_BIAS) : "memory");
-}
-
-#else
 #include <asm/qrwlock.h>
-#endif /* CONFIG_QUEUE_RWLOCK */
 
 #define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
 #define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
 
-#undef READ_LOCK_SIZE
-#undef READ_LOCK_ATOMIC
-#undef WRITE_LOCK_ADD
-#undef WRITE_LOCK_SUB
-#undef WRITE_LOCK_CMP
-
 #define arch_spin_relax(lock)	cpu_relax()
 #define arch_read_relax(lock)	cpu_relax()
 #define arch_write_relax(lock)	cpu_relax()
...
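The switch is transparent to rwlock_t users; a minimal sketch of the generic API that now maps onto qrwlock on x86 (the lock name and counters are hypothetical):

#include <linux/spinlock.h>
#include <linux/types.h>

static DEFINE_RWLOCK(stats_lock);	/* hypothetical lock */
static unsigned long hits, misses;	/* hypothetical shared data */

/* many concurrent readers are allowed */
static unsigned long stats_sum(void)
{
	unsigned long sum;

	read_lock(&stats_lock);
	sum = hits + misses;
	read_unlock(&stats_lock);
	return sum;
}

/* writers get exclusive access */
static void stats_record(bool hit)
{
	write_lock(&stats_lock);
	if (hit)
		hits++;
	else
		misses++;
	write_unlock(&stats_lock);
}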
@@ -34,10 +34,6 @@ typedef struct arch_spinlock {
 
 #define __ARCH_SPIN_LOCK_UNLOCKED	{ { 0 } }
 
-#ifdef CONFIG_QUEUE_RWLOCK
 #include <asm-generic/qrwlock_types.h>
-#else
-#include <asm/rwlock.h>
-#endif
 
 #endif /* _ASM_X86_SPINLOCK_TYPES_H */
@@ -20,7 +20,6 @@ lib-y := delay.o misc.o cmdline.o
 lib-y += thunk_$(BITS).o
 lib-y += usercopy_$(BITS).o usercopy.o getuser.o putuser.o
 lib-y += memcpy_$(BITS).o
-lib-$(CONFIG_SMP) += rwlock.o
 lib-$(CONFIG_RWSEM_XCHGADD_ALGORITHM) += rwsem.o
 lib-$(CONFIG_INSTRUCTION_DECODER) += insn.o inat.o
...
/* Slow paths of read/write spinlocks. */

#include <linux/linkage.h>
#include <asm/alternative-asm.h>
#include <asm/frame.h>
#include <asm/rwlock.h>

#ifdef CONFIG_X86_32
# define __lock_ptr eax
#else
# define __lock_ptr rdi
#endif

ENTRY(__write_lock_failed)
	CFI_STARTPROC
	FRAME
0:	LOCK_PREFIX
	WRITE_LOCK_ADD($RW_LOCK_BIAS) (%__lock_ptr)
1:	rep; nop
	cmpl	$WRITE_LOCK_CMP, (%__lock_ptr)
	jne	1b
	LOCK_PREFIX
	WRITE_LOCK_SUB($RW_LOCK_BIAS) (%__lock_ptr)
	jnz	0b
	ENDFRAME
	ret
	CFI_ENDPROC
END(__write_lock_failed)

ENTRY(__read_lock_failed)
	CFI_STARTPROC
	FRAME
0:	LOCK_PREFIX
	READ_LOCK_SIZE(inc) (%__lock_ptr)
1:	rep; nop
	READ_LOCK_SIZE(cmp) $1, (%__lock_ptr)
	js	1b
	LOCK_PREFIX
	READ_LOCK_SIZE(dec) (%__lock_ptr)
	js	0b
	ENDFRAME
	ret
	CFI_ENDPROC
END(__read_lock_failed)
@@ -35,7 +35,7 @@
  * of extra utility/tracking out of our acquire-ctx. This is provided
  * by drm_modeset_lock / drm_modeset_acquire_ctx.
  *
- * For basic principles of ww_mutex, see: Documentation/ww-mutex-design.txt
+ * For basic principles of ww_mutex, see: Documentation/locking/ww-mutex-design.txt
  *
  * The basic usage pattern is to:
  *
...
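Since ww-mutex-design.txt is cited here, a compact sketch of the acquire-context pattern it describes may help. The buffer structure and function are hypothetical, and the backoff handling is simplified to the two-lock case:

#include <linux/ww_mutex.h>
#include <linux/kernel.h>	/* swap() */

static DEFINE_WW_CLASS(buf_ww_class);	/* hypothetical class */

struct buf {
	struct ww_mutex lock;	/* init elsewhere with ww_mutex_init() */
};

/* Acquire both buffer locks deadlock-free, in either order. */
static void update_pair(struct buf *a, struct buf *b)
{
	struct ww_acquire_ctx ctx;

	ww_acquire_init(&ctx, &buf_ww_class);

	if (ww_mutex_lock(&a->lock, &ctx) == -EDEADLK)
		ww_mutex_lock_slow(&a->lock, &ctx);	/* nothing held: just wait */

	while (ww_mutex_lock(&b->lock, &ctx) == -EDEADLK) {
		/* wounded: back off fully, sleep-wait on the contended lock */
		ww_mutex_unlock(&a->lock);
		ww_mutex_lock_slow(&b->lock, &ctx);
		swap(a, b);		/* the lock just taken is 'a' on retry */
	}
	ww_acquire_done(&ctx);		/* no further locks in this context */

	/* ... modify both buffers ... */

	ww_mutex_unlock(&a->lock);
	ww_mutex_unlock(&b->lock);
	ww_acquire_fini(&ctx);
}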
@@ -3,42 +3,6 @@
 #define _LINUX_ATOMIC_H
 #include <asm/atomic.h>
 
-/*
- * Provide __deprecated wrappers for the new interface, avoid flag day changes.
- * We need the ugly external functions to break header recursion hell.
- */
-#ifndef smp_mb__before_atomic_inc
-static inline void __deprecated smp_mb__before_atomic_inc(void)
-{
-	extern void __smp_mb__before_atomic(void);
-	__smp_mb__before_atomic();
-}
-#endif
-
-#ifndef smp_mb__after_atomic_inc
-static inline void __deprecated smp_mb__after_atomic_inc(void)
-{
-	extern void __smp_mb__after_atomic(void);
-	__smp_mb__after_atomic();
-}
-#endif
-
-#ifndef smp_mb__before_atomic_dec
-static inline void __deprecated smp_mb__before_atomic_dec(void)
-{
-	extern void __smp_mb__before_atomic(void);
-	__smp_mb__before_atomic();
-}
-#endif
-
-#ifndef smp_mb__after_atomic_dec
-static inline void __deprecated smp_mb__after_atomic_dec(void)
-{
-	extern void __smp_mb__after_atomic(void);
-	__smp_mb__after_atomic();
-}
-#endif
-
 /**
  * atomic_add_unless - add unless the number is already a given value
  * @v: pointer of type atomic_t
...
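After the removal, callers pair the operation-neutral barriers with any non-value-returning atomic. A minimal sketch, with a hypothetical counter and flag word:

#include <linux/atomic.h>
#include <linux/bitops.h>

static atomic_t pending;	/* hypothetical work counter */
static unsigned long state;	/* hypothetical flag word */

static void publish_work(void)
{
	/* order prior stores against the RMW that makes work visible */
	smp_mb__before_atomic();
	atomic_inc(&pending);
}

static void complete_work(void)
{
	clear_bit(0, &state);
	/* order the bit clear against later loads and stores */
	smp_mb__after_atomic();
}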
@@ -32,26 +32,6 @@ extern unsigned long __sw_hweight64(__u64 w);
  */
 #include <asm/bitops.h>
 
-/*
- * Provide __deprecated wrappers for the new interface, avoid flag day changes.
- * We need the ugly external functions to break header recursion hell.
- */
-#ifndef smp_mb__before_clear_bit
-static inline void __deprecated smp_mb__before_clear_bit(void)
-{
-	extern void __smp_mb__before_atomic(void);
-	__smp_mb__before_atomic();
-}
-#endif
-
-#ifndef smp_mb__after_clear_bit
-static inline void __deprecated smp_mb__after_clear_bit(void)
-{
-	extern void __smp_mb__after_atomic(void);
-	__smp_mb__after_atomic();
-}
-#endif
-
 #define for_each_set_bit(bit, addr, size) \
	for ((bit) = find_first_bit((addr), (size));		\
	     (bit) < (size);					\
...
@@ -4,7 +4,7 @@
  * Copyright (C) 2006,2007 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
  * Copyright (C) 2007 Red Hat, Inc., Peter Zijlstra <pzijlstr@redhat.com>
  *
- * see Documentation/lockdep-design.txt for more details.
+ * see Documentation/locking/lockdep-design.txt for more details.
  */
 #ifndef __LINUX_LOCKDEP_H
 #define __LINUX_LOCKDEP_H
...
@@ -52,7 +52,7 @@ struct mutex {
	atomic_t		count;
	spinlock_t		wait_lock;
	struct list_head	wait_list;
-#if defined(CONFIG_DEBUG_MUTEXES) || defined(CONFIG_SMP)
+#if defined(CONFIG_DEBUG_MUTEXES) || defined(CONFIG_MUTEX_SPIN_ON_OWNER)
	struct task_struct	*owner;
 #endif
 #ifdef CONFIG_MUTEX_SPIN_ON_OWNER
@@ -133,7 +133,7 @@ static inline int mutex_is_locked(struct mutex *lock)
 /*
  * See kernel/locking/mutex.c for detailed documentation of these APIs.
- * Also see Documentation/mutex-design.txt.
+ * Also see Documentation/locking/mutex-design.txt.
  */
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 extern void mutex_lock_nested(struct mutex *lock, unsigned int subclass);
...
@@ -149,7 +149,7 @@ extern void downgrade_write(struct rw_semaphore *sem);
  * static then another method for expressing nested locking is
  * the explicit definition of lock class keys and the use of
  * lockdep_set_class() at lock initialization time.
- * See Documentation/lockdep-design.txt for more details.)
+ * See Documentation/locking/lockdep-design.txt for more details.)
  */
 extern void down_read_nested(struct rw_semaphore *sem, int subclass);
 extern void down_write_nested(struct rw_semaphore *sem, int subclass);
...
@@ -197,7 +197,13 @@ static inline void do_raw_spin_unlock(raw_spinlock_t *lock) __releases(lock)
	 _raw_spin_lock_nest_lock(lock, &(nest_lock)->dep_map);	\
 } while (0)
 #else
-# define raw_spin_lock_nested(lock, subclass)		_raw_spin_lock(lock)
+/*
+ * Always evaluate the 'subclass' argument to avoid that the compiler
+ * warns about set-but-not-used variables when building with
+ * CONFIG_DEBUG_LOCK_ALLOC=n and with W=1.
+ */
+# define raw_spin_lock_nested(lock, subclass)		\
+	_raw_spin_lock(((void)(subclass), (lock)))
 # define raw_spin_lock_nest_lock(lock, nest_lock)	_raw_spin_lock(lock)
 #endif
...
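The trick is the C comma operator: '(void)(subclass)' evaluates and discards the argument, so a variable passed only for lockdep's benefit still counts as used. A user-space sketch of the same pattern, with invented names:

#include <stdio.h>

#ifdef DEBUG_LOCKING
static void my_lock_debug(int *lock, int subclass)
{
	printf("lock %p subclass %d\n", (void *)lock, subclass);
	*lock = 1;
}
# define my_lock_nested(lock, subclass) my_lock_debug((lock), (subclass))
#else
/* still *evaluate* subclass so -Wunused-but-set-variable stays quiet */
# define my_lock_nested(lock, subclass) my_lock(((void)(subclass), (lock)))
#endif

static void my_lock(int *lock) { *lock = 1; }

int main(void)
{
	int lock = 0;
	int subclass = 1;	/* set but otherwise unused in release builds */

	my_lock_nested(&lock, subclass);
	printf("locked: %d\n", lock);
	return 0;
}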
@@ -56,9 +56,6 @@ do {									\
  * If the lock has already been acquired, then this will proceed to spin
  * on this node->locked until the previous lock holder sets the node->locked
  * in mcs_spin_unlock().
- *
- * We don't inline mcs_spin_lock() so that perf can correctly account for the
- * time spent in this lock function.
  */
 static inline
 void mcs_spin_lock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
...
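For context on what mcs_spin_lock()/mcs_spin_unlock() do, here is a minimal user-space MCS sketch in C11 atomics; it mirrors the queueing and node->locked handoff described above but is not the kernel implementation:

#include <stdatomic.h>
#include <stddef.h>

struct mcs_node {
	_Atomic(struct mcs_node *) next;
	atomic_int locked;	/* set to 1 by the previous lock holder */
};

static void mcs_lock(_Atomic(struct mcs_node *) *lock, struct mcs_node *node)
{
	struct mcs_node *prev;

	atomic_store_explicit(&node->next, NULL, memory_order_relaxed);
	atomic_store_explicit(&node->locked, 0, memory_order_relaxed);

	/* enqueue ourselves at the tail */
	prev = atomic_exchange_explicit(lock, node, memory_order_acq_rel);
	if (!prev)
		return;		/* lock was free: we own it */

	atomic_store_explicit(&prev->next, node, memory_order_release);
	/* spin on our own node only: no cache-line ping-pong */
	while (!atomic_load_explicit(&node->locked, memory_order_acquire))
		;
}

static void mcs_unlock(_Atomic(struct mcs_node *) *lock, struct mcs_node *node)
{
	struct mcs_node *next = atomic_load_explicit(&node->next, memory_order_acquire);

	if (!next) {
		struct mcs_node *expected = node;

		/* no successor visible: try to reset the tail to empty */
		if (atomic_compare_exchange_strong(lock, &expected, NULL))
			return;
		/* a successor is enqueueing; wait for its next pointer */
		while (!(next = atomic_load_explicit(&node->next, memory_order_acquire)))
			;
	}
	atomic_store_explicit(&next->locked, 1, memory_order_release);
}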
@@ -16,7 +16,7 @@
 #define mutex_remove_waiter(lock, waiter, ti) \
		__list_del((waiter)->list.prev, (waiter)->list.next)
 
-#ifdef CONFIG_SMP
+#ifdef CONFIG_MUTEX_SPIN_ON_OWNER
 static inline void mutex_set_owner(struct mutex *lock)
 {
	lock->owner = current;
...
@@ -8,7 +8,7 @@
  * Copyright (C) 2005 Kihon Technologies Inc., Steven Rostedt
  * Copyright (C) 2006 Esben Nielsen
  *
- * See Documentation/rt-mutex-design.txt for details.
+ * See Documentation/locking/rt-mutex-design.txt for details.
  */
 #include <linux/spinlock.h>
 #include <linux/export.h>
...
@@ -246,19 +246,22 @@ struct rw_semaphore __sched *rwsem_down_read_failed(struct rw_semaphore *sem)
 	return sem;
 }
+EXPORT_SYMBOL(rwsem_down_read_failed);
 
 static inline bool rwsem_try_write_lock(long count, struct rw_semaphore *sem)
 {
-	if (!(count & RWSEM_ACTIVE_MASK)) {
-		/* try acquiring the write lock */
-		if (sem->count == RWSEM_WAITING_BIAS &&
-		    cmpxchg(&sem->count, RWSEM_WAITING_BIAS,
-			    RWSEM_ACTIVE_WRITE_BIAS) == RWSEM_WAITING_BIAS) {
-			if (!list_is_singular(&sem->wait_list))
-				rwsem_atomic_update(RWSEM_WAITING_BIAS, sem);
-			return true;
-		}
-	}
-	return false;
+	/*
+	 * Try acquiring the write lock. Check count first in order
+	 * to reduce unnecessary expensive cmpxchg() operations.
+	 */
+	if (count == RWSEM_WAITING_BIAS &&
+	    cmpxchg(&sem->count, RWSEM_WAITING_BIAS,
+		    RWSEM_ACTIVE_WRITE_BIAS) == RWSEM_WAITING_BIAS) {
+		if (!list_is_singular(&sem->wait_list))
+			rwsem_atomic_update(RWSEM_WAITING_BIAS, sem);
+		return true;
+	}
+
+	return false;
 }
@@ -465,6 +468,7 @@ struct rw_semaphore __sched *rwsem_down_write_failed(struct rw_semaphore *sem)
 	return sem;
 }
+EXPORT_SYMBOL(rwsem_down_write_failed);
 
 /*
  * handle waking up a waiter on the semaphore
@@ -485,6 +489,7 @@ struct rw_semaphore *rwsem_wake(struct rw_semaphore *sem)
 	return sem;
 }
+EXPORT_SYMBOL(rwsem_wake);
 
 /*
  * downgrade a write lock into a read lock
@@ -506,8 +511,4 @@ struct rw_semaphore *rwsem_downgrade_wake(struct rw_semaphore *sem)
 	return sem;
 }
-
-EXPORT_SYMBOL(rwsem_down_read_failed);
-EXPORT_SYMBOL(rwsem_down_write_failed);
-EXPORT_SYMBOL(rwsem_wake);
 EXPORT_SYMBOL(rwsem_downgrade_wake);
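The optimization generalizes: do a plain load first so contended attempts stay read-only and do not pull the cache line exclusive. A user-space sketch with invented bias values:

#include <stdatomic.h>
#include <stdbool.h>

#define WAITING_BIAS		(-1L)	/* illustrative, not the kernel values */
#define ACTIVE_WRITE_BIAS	(1L)

static bool try_write_lock(atomic_long *count)
{
	long expected = WAITING_BIAS;

	/*
	 * Cheap read first: if the value cannot match, skip the
	 * expensive compare-and-swap entirely.
	 */
	if (atomic_load_explicit(count, memory_order_relaxed) != WAITING_BIAS)
		return false;

	return atomic_compare_exchange_strong(count, &expected,
					      ACTIVE_WRITE_BIAS);
}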
@@ -36,7 +36,7 @@
 static noinline void __down(struct semaphore *sem);
 static noinline int __down_interruptible(struct semaphore *sem);
 static noinline int __down_killable(struct semaphore *sem);
-static noinline int __down_timeout(struct semaphore *sem, long jiffies);
+static noinline int __down_timeout(struct semaphore *sem, long timeout);
 static noinline void __up(struct semaphore *sem);
 
 /**
@@ -145,14 +145,14 @@ EXPORT_SYMBOL(down_trylock);
 /**
  * down_timeout - acquire the semaphore within a specified time
  * @sem: the semaphore to be acquired
- * @jiffies: how long to wait before failing
+ * @timeout: how long to wait before failing
  *
  * Attempts to acquire the semaphore.  If no more tasks are allowed to
  * acquire the semaphore, calling this function will put the task to sleep.
  * If the semaphore is not released within the specified number of jiffies,
  * this function returns -ETIME.  It returns 0 if the semaphore was acquired.
  */
-int down_timeout(struct semaphore *sem, long jiffies)
+int down_timeout(struct semaphore *sem, long timeout)
 {
	unsigned long flags;
	int result = 0;
@@ -161,7 +161,7 @@ int down_timeout(struct semaphore *sem, long jiffies)
	if (likely(sem->count > 0))
		sem->count--;
	else
-		result = __down_timeout(sem, jiffies);
+		result = __down_timeout(sem, timeout);
	raw_spin_unlock_irqrestore(&sem->lock, flags);
 
	return result;
@@ -248,9 +248,9 @@ static noinline int __sched __down_killable(struct semaphore *sem)
	return __down_common(sem, TASK_KILLABLE, MAX_SCHEDULE_TIMEOUT);
 }
 
-static noinline int __sched __down_timeout(struct semaphore *sem, long jiffies)
+static noinline int __sched __down_timeout(struct semaphore *sem, long timeout)
 {
-	return __down_common(sem, TASK_UNINTERRUPTIBLE, jiffies);
+	return __down_common(sem, TASK_UNINTERRUPTIBLE, timeout);
 }
 
 static noinline void __sched __up(struct semaphore *sem)
...
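The rename is cosmetic (it resolves shadow warnings against the global jiffies); callers still pass a jiffies count. A minimal, hypothetical caller:

#include <linux/semaphore.h>
#include <linux/jiffies.h>
#include <linux/errno.h>

static DEFINE_SEMAPHORE(io_sem);	/* hypothetical resource, count 1 */

static int claim_resource(void)
{
	/* wait at most 500 ms; returns -ETIME on timeout, 0 on success */
	if (down_timeout(&io_sem, msecs_to_jiffies(500)))
		return -ETIME;

	/* ... use the resource ... */

	up(&io_sem);
	return 0;
}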
@@ -90,22 +90,6 @@
 #define CREATE_TRACE_POINTS
 #include <trace/events/sched.h>
 
-#ifdef smp_mb__before_atomic
-void __smp_mb__before_atomic(void)
-{
-	smp_mb__before_atomic();
-}
-EXPORT_SYMBOL(__smp_mb__before_atomic);
-#endif
-
-#ifdef smp_mb__after_atomic
-void __smp_mb__after_atomic(void)
-{
-	smp_mb__after_atomic();
-}
-EXPORT_SYMBOL(__smp_mb__after_atomic);
-#endif
-
 void start_bandwidth_timer(struct hrtimer *period_timer, ktime_t period)
 {
	unsigned long delta;
...
@@ -952,7 +952,7 @@ config PROVE_LOCKING
	 the proof of observed correctness is also maintained for an
	 arbitrary combination of these separate locking variants.
 
-	 For more details, see Documentation/lockdep-design.txt.
+	 For more details, see Documentation/locking/lockdep-design.txt.
 
 config LOCKDEP
	bool
@@ -973,7 +973,7 @@ config LOCK_STAT
	help
	 This feature enables tracking lock contention points
 
-	 For more details, see Documentation/lockstat.txt
+	 For more details, see Documentation/locking/lockstat.txt
 
	 This also enables lock events required by "perf lock",
	 subcommand of perf.
...