Commit b247be3f authored by Will Deacon, committed by Ingo Molnar

locking/qspinlock/x86: Increase _Q_PENDING_LOOPS upper bound

On x86, atomic_cond_read_relaxed() will busy-wait with a cpu_relax() loop,
so it is desirable to increase the number of times we spin on the qspinlock
lockword when it is found to be transitioning from pending to locked.

According to Waiman Long:

 | Ideally, the spinning times should be at least a few times the typical
 | cacheline load time from memory which I think can be down to 100ns or
 | so for each cacheline load with the newest systems or up to several
 | hundreds ns for older systems.

which in his benchmarking corresponded to 512 iterations.
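
For illustration, a sketch of how the generic slowpath consumes this bound
(based on the queued_spin_lock_slowpath() change elsewhere in this series;
VAL is the name atomic_cond_read_relaxed()/smp_cond_load_relaxed() give to
the most recently loaded value):

	/*
	 * Wait for an in-progress pending->locked hand-over, but only
	 * for a bounded number of spins so that forward progress is
	 * guaranteed even if the hand-over is delayed.
	 */
	if (val == _Q_PENDING_VAL) {
		int cnt = _Q_PENDING_LOOPS;
		val = atomic_cond_read_relaxed(&lock->val,
					       (VAL != _Q_PENDING_VAL) || !cnt--);
	}

Each failed test of the condition costs one cpu_relax() on x86, so the
macro directly bounds how long a locker waits for the hand-over before
giving up and taking the queueing path.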
Suggested-by: Waiman Long <longman@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Waiman Long <longman@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: boqun.feng@gmail.com
Cc: linux-arm-kernel@lists.infradead.org
Cc: paulmck@linux.vnet.ibm.com
Link: http://lkml.kernel.org/r/1524738868-31318-5-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 6512276d
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -7,6 +7,8 @@
 #include <asm-generic/qspinlock_types.h>
 #include <asm/paravirt.h>
 
+#define _Q_PENDING_LOOPS	(1 << 9)
+
 #define queued_spin_unlock queued_spin_unlock
 /**
  * queued_spin_unlock - release a queued spinlock
...
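
Note on the value: (1 << 9) is 512 iterations, matching the figure from
Waiman's benchmarking quoted above. Architectures that do not override the
macro get the generic fallback in kernel/locking/qspinlock.c, which makes
the pending wait effectively a single test:

	#ifndef _Q_PENDING_LOOPS
	#define _Q_PENDING_LOOPS	1
	#endif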