Commit aaee8c3c authored by Andy Lutomirski, committed by Ingo Molnar

x86/entry/traps: Don't force in_interrupt() to return true in IST handlers

Forcing in_interrupt() to return true if we're not in a bona fide
interrupt confuses the softirq code.  This fixes warnings like:

  NOHZ: local_softirq_pending 282

... which can happen when running things like selftests/x86.

This will change perf's static percpu buffer usage in IST context.
I think this is okay, and it restores the historical (pre-4.0)
behavior.

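[Editorial note, not part of the original commit message: in_interrupt()
is derived from the hardirq/softirq/NMI fields of preempt_count, so the
old preempt_count_add(HARDIRQ_OFFSET) in ist_enter() made every IST entry
look like a hardirq, while a plain preempt_disable() only bumps the low
preemption byte. The user-space model below is a minimal sketch using the
bit layout from include/linux/preempt.h as of the v4.x era; it models the
kernel macros, it is not kernel code.]

    #include <stdio.h>

    /* Bit layout of preempt_count, as in include/linux/preempt.h (v4.x) */
    #define PREEMPT_BITS    8
    #define SOFTIRQ_BITS    8
    #define HARDIRQ_BITS    4

    #define PREEMPT_SHIFT   0
    #define SOFTIRQ_SHIFT   (PREEMPT_SHIFT + PREEMPT_BITS)
    #define HARDIRQ_SHIFT   (SOFTIRQ_SHIFT + SOFTIRQ_BITS)
    #define NMI_SHIFT       (HARDIRQ_SHIFT + HARDIRQ_BITS)

    #define PREEMPT_OFFSET  (1UL << PREEMPT_SHIFT)
    #define HARDIRQ_OFFSET  (1UL << HARDIRQ_SHIFT)

    #define SOFTIRQ_MASK    (((1UL << SOFTIRQ_BITS) - 1) << SOFTIRQ_SHIFT)
    #define HARDIRQ_MASK    (((1UL << HARDIRQ_BITS) - 1) << HARDIRQ_SHIFT)
    #define NMI_MASK        (1UL << NMI_SHIFT)

    static unsigned long preempt_count;  /* stand-in for the per-task count */

    /* in_interrupt(): true if any hardirq/softirq/NMI bits are set */
    #define in_interrupt() \
            (preempt_count & (HARDIRQ_MASK | SOFTIRQ_MASK | NMI_MASK))

    int main(void)
    {
            /* Old ist_enter(): pretend we are in a hardirq */
            preempt_count += HARDIRQ_OFFSET;
            printf("HARDIRQ_OFFSET: in_interrupt() is %s\n",
                   in_interrupt() ? "true" : "false");  /* true */
            preempt_count -= HARDIRQ_OFFSET;

            /*
             * New ist_enter(): preempt_disable() bumps only the low byte,
             * so preemption is off but in_interrupt() stays false and the
             * softirq code is no longer misled.
             */
            preempt_count += PREEMPT_OFFSET;
            printf("PREEMPT_OFFSET: in_interrupt() is %s\n",
                   in_interrupt() ? "true" : "false");  /* false */
            preempt_count -= PREEMPT_OFFSET;

            return 0;
    }
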
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Fixes: 95927475 ("x86, traps: Track entry into and exit from IST context")
Link: http://lkml.kernel.org/r/cdc215f94d118d691d73df35275022331156fb45.1464130360.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 96685a55
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -96,6 +96,12 @@ static inline void cond_local_irq_disable(struct pt_regs *regs)
 		local_irq_disable();
 }
 
+/*
+ * In IST context, we explicitly disable preemption. This serves two
+ * purposes: it makes it much less likely that we would accidentally
+ * schedule in IST context and it will force a warning if we somehow
+ * manage to schedule by accident.
+ */
 void ist_enter(struct pt_regs *regs)
 {
 	if (user_mode(regs)) {
@@ -110,13 +116,7 @@ void ist_enter(struct pt_regs *regs)
 		rcu_nmi_enter();
 	}
 
-	/*
-	 * We are atomic because we're on the IST stack; or we're on
-	 * x86_32, in which case we still shouldn't schedule; or we're
-	 * on x86_64 and entered from user mode, in which case we're
-	 * still atomic unless ist_begin_non_atomic is called.
-	 */
-	preempt_count_add(HARDIRQ_OFFSET);
+	preempt_disable();
 
 	/* This code is a bit fragile. Test it. */
 	RCU_LOCKDEP_WARN(!rcu_is_watching(), "ist_enter didn't work");
@@ -124,7 +124,7 @@ void ist_enter(struct pt_regs *regs)
  */
 void ist_exit(struct pt_regs *regs)
 {
-	preempt_count_sub(HARDIRQ_OFFSET);
+	preempt_enable_no_resched();
 
 	if (!user_mode(regs))
 		rcu_nmi_exit();
@@ -155,7 +155,7 @@ void ist_begin_non_atomic(struct pt_regs *regs)
 	BUG_ON((unsigned long)(current_top_of_stack() -
 			       current_stack_pointer()) >= THREAD_SIZE);
 
-	preempt_count_sub(HARDIRQ_OFFSET);
+	preempt_enable_no_resched();
 }
 
 /**
@@ -165,7 +165,7 @@ void ist_begin_non_atomic(struct pt_regs *regs)
  */
 void ist_end_non_atomic(void)
 {
-	preempt_count_add(HARDIRQ_OFFSET);
+	preempt_disable();
 }
 
 static nokprobe_inline int
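
[Editorial sketch, not part of the commit: for readers unfamiliar with
these helpers, the pattern an x86 IST exception handler follows after
this change looks roughly like the code below. The handler name and body
are hypothetical; in the real tree, do_machine_check() pairs these calls
the same way.]

    #include <asm/traps.h>          /* ist_enter() and friends */
    #include <linux/ptrace.h>       /* struct pt_regs, user_mode() */

    /* Hypothetical IST handler, for illustration only */
    void do_example_ist_exception(struct pt_regs *regs, long error_code)
    {
            ist_enter(regs);        /* preemption off; in_interrupt() unchanged */

            /* ... work that must stay atomic ... */

            if (user_mode(regs)) {
                    /*
                     * We interrupted user code, so it is legal to drop
                     * to a schedulable context for work that may sleep.
                     */
                    ist_begin_non_atomic(regs);     /* preempt_enable_no_resched() */
                    /* ... work that may sleep ... */
                    ist_end_non_atomic();           /* preempt_disable() again */
            }

            ist_exit(regs);         /* drops preempt count, exits RCU if needed */
    }

[Note that the replacements use preempt_enable_no_resched() rather than
preempt_enable(): the count is dropped without opening a preemption
point, so the kernel does not try to reschedule from the middle of the
IST exit path.]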