Commit df068247 authored by Mark Rutland, committed by Catalin Marinas

arm64: entry: remove redundant IRQ flag tracing

All EL0 returns go via ret_to_user(), which masks IRQs and notifies
lockdep and tracing before calling into do_notify_resume(). Therefore,
there's no need for do_notify_resume() to call trace_hardirqs_off(), and
the comment is stale. The call is simply redundant.

In ret_to_user() we call exit_to_user_mode(), which notifies lockdep and
tracing that IRQs will be enabled in userspace, so there's no need for
el0_svc_common() to call trace_hardirqs_on() before returning. Further,
at the start of ret_to_user() we call trace_hardirqs_off(), so not only
is this redundant, but it is immediately undone.

In addition to being redundant, the trace_hardirqs_on() in
el0_svc_common() leaves lockdep inconsistent with the hardware state,
and is liable to cause issues for any C code or instrumentation
between this and the call to trace_hardirqs_off() which undoes it in
ret_to_user().
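
Similarly, a rough userspace mock of the syscall-return sequence described in
the two paragraphs above (again not kernel code; hw_irqs_masked,
lockdep_irqs_on, the printf() and main() are invented for the sketch, while
the function names and the calls named in the comments mirror those discussed
here):

  /* Mock of the el0_svc_common() -> ret_to_user() ordering. */
  #include <stdbool.h>
  #include <stdio.h>

  static bool hw_irqs_masked;		/* hardware state (DAIF/PSTATE.I) */
  static bool lockdep_irqs_on;		/* what lockdep/tracing believe */

  static void el0_svc_common(void)
  {
  	hw_irqs_masked = true;		/* local_daif_mask() */
  	lockdep_irqs_on = true;		/* trace_hardirqs_on() -- removed by this patch */
  	if (hw_irqs_masked && lockdep_irqs_on)
  		printf("window: lockdep thinks IRQs are on, hardware has them masked\n");
  }

  static void ret_to_user(void)
  {
  	lockdep_irqs_on = false;	/* trace_hardirqs_off(): immediately undoes the above */
  	/* ... */
  	lockdep_irqs_on = true;		/* exit_to_user_mode(): IRQs will be on in userspace */
  	hw_irqs_masked = false;		/* eret restores the SPSR, unmasking IRQs */
  }

  int main(void)
  {
  	el0_svc_common();
  	ret_to_user();
  	return 0;
  }

Any instrumented C code running inside the printed "window" would observe the
inconsistent state described above.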

This patch removes the redundant tracing calls and associated stale
comments.

Fixes: 23529049 ("arm64: entry: fix non-NMI user<->kernel transitions")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210107145310.44616-1-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
parent d78050ee
arch/arm64/kernel/signal.c
@@ -914,13 +914,6 @@ static void do_signal(struct pt_regs *regs)
 asmlinkage void do_notify_resume(struct pt_regs *regs,
				 unsigned long thread_flags)
 {
-	/*
-	 * The assembly code enters us with IRQs off, but it hasn't
-	 * informed the tracing code of that for efficiency reasons.
-	 * Update the trace code with the current status.
-	 */
-	trace_hardirqs_off();
-
	do {
		if (thread_flags & _TIF_NEED_RESCHED) {
			/* Unmask Debug and SError for the next task */
arch/arm64/kernel/syscall.c
@@ -165,15 +165,8 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
	if (!has_syscall_work(flags) && !IS_ENABLED(CONFIG_DEBUG_RSEQ)) {
		local_daif_mask();
		flags = current_thread_info()->flags;
-		if (!has_syscall_work(flags) && !(flags & _TIF_SINGLESTEP)) {
-			/*
-			 * We're off to userspace, where interrupts are
-			 * always enabled after we restore the flags from
-			 * the SPSR.
-			 */
-			trace_hardirqs_on();
+		if (!has_syscall_work(flags) && !(flags & _TIF_SINGLESTEP))
			return;
-		}
		local_daif_restore(DAIF_PROCCTX);
	}