Commit cce188bd authored by Masami Hiramatsu, committed by Ingo Molnar

bpf/error-inject/kprobes: Clear current_kprobe and enable preempt in kprobe

Clear current_kprobe and enable preemption in kprobe
even if pre_handler returns !0.

This simplifies function override using kprobes.

Jprobes used to require that preemption stay disabled and that
current_kprobe stay set until execution returned to the original
function entry. For this reason, kprobe_int3_handler() and similar
arch-dependent kprobe handlers checked the pre_handler result and
exited without enabling preemption if the result was !0.

Now that jprobes have been removed, kprobes no longer needs to keep
preemption disabled when the user handler returns !0.

However, the function override handlers in error-inject and bpf also
return !0 when they override a function, so to balance the preempt
count they enable preemption and reset current_kprobe themselves.

That is a fragile and bug-prone design. This patch fixes the
unbalanced preempt count and current_kprobe handling in kprobes,
bpf and error-inject.

Note: for powerpc and x86, this removes all preempt_disable() calls
from kprobe_ftrace_handler() because ftrace callbacks are already
invoked with preemption disabled.
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: James Hogan <jhogan@kernel.org>
Cc: Josef Bacik <jbacik@fb.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: linux-arch@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-ia64@vger.kernel.org
Cc: linux-mips@linux-mips.org
Cc: linux-s390@vger.kernel.org
Cc: linux-sh@vger.kernel.org
Cc: linux-snps-arc@lists.infradead.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: sparclinux@vger.kernel.org
Link: https://lore.kernel.org/lkml/152942494574.15209.12323837825873032258.stgit@devbox
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent d5ad85b6
@@ -231,6 +231,9 @@ int __kprobes arc_kprobe_handler(unsigned long addr, struct pt_regs *regs)
                 if (!p->pre_handler || !p->pre_handler(p, regs)) {
                         setup_singlestep(p, regs);
                         kcb->kprobe_status = KPROBE_HIT_SS;
+                } else {
+                        reset_current_kprobe();
+                        preempt_enable_no_resched();
                 }
                 return 1;
@@ -442,9 +445,7 @@ static int __kprobes trampoline_probe_handler(struct kprobe *p,
         kretprobe_assert(ri, orig_ret_address, trampoline_address);
         regs->ret = orig_ret_address;
-        reset_current_kprobe();
         kretprobe_hash_unlock(current, &flags);
-        preempt_enable_no_resched();
         hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) {
                 hlist_del(&ri->hlist);
@@ -301,9 +301,9 @@ void __kprobes kprobe_handler(struct pt_regs *regs)
                         /*
                          * If we have no pre-handler or it returned 0, we
                          * continue with normal processing. If we have a
-                         * pre-handler and it returned non-zero, it prepped
-                         * for calling the break_handler below on re-entry,
-                         * so get out doing nothing more here.
+                         * pre-handler and it returned non-zero, it will
+                         * modify the execution path and no need to single
+                         * stepping. Let's just reset current kprobe and exit.
                          */
                         if (!p->pre_handler || !p->pre_handler(p, regs)) {
                                 kcb->kprobe_status = KPROBE_HIT_SS;
@@ -312,8 +312,8 @@ void __kprobes kprobe_handler(struct pt_regs *regs)
                                         kcb->kprobe_status = KPROBE_HIT_SSDONE;
                                         p->post_handler(p, regs, 0);
                                 }
-                                reset_current_kprobe();
                         }
+                        reset_current_kprobe();
                 }
         } else {
                 /*
@@ -395,9 +395,9 @@ static void __kprobes kprobe_handler(struct pt_regs *regs)
                         /*
                          * If we have no pre-handler or it returned 0, we
                          * continue with normal processing. If we have a
-                         * pre-handler and it returned non-zero, it prepped
-                         * for calling the break_handler below on re-entry,
-                         * so get out doing nothing more here.
+                         * pre-handler and it returned non-zero, it will
+                         * modify the execution path and no need to single
+                         * stepping. Let's just reset current kprobe and exit.
                          *
                          * pre_handler can hit a breakpoint and can step thru
                          * before return, keep PSTATE D-flag enabled until
@@ -405,8 +405,8 @@ static void __kprobes kprobe_handler(struct pt_regs *regs)
                          */
                         if (!p->pre_handler || !p->pre_handler(p, regs)) {
                                 setup_singlestep(p, regs, kcb, 0);
-                                return;
-                        }
+                        } else
+                                reset_current_kprobe();
                 }
         }
         /*
@@ -478,12 +478,9 @@ int __kprobes trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
          */
                 break;
         }
         kretprobe_assert(ri, orig_ret_address, trampoline_address);
-        reset_current_kprobe();
         kretprobe_hash_unlock(current, &flags);
-        preempt_enable_no_resched();
         hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) {
                 hlist_del(&ri->hlist);
@@ -851,13 +848,11 @@ static int __kprobes pre_kprobes_handler(struct die_args *args)
         set_current_kprobe(p, kcb);
         kcb->kprobe_status = KPROBE_HIT_ACTIVE;
-        if (p->pre_handler && p->pre_handler(p, regs))
-                /*
-                 * Our pre-handler is specifically requesting that we just
-                 * do a return. This is used for both the jprobe pre-handler
-                 * and the kretprobe trampoline
-                 */
+        if (p->pre_handler && p->pre_handler(p, regs)) {
+                reset_current_kprobe();
+                preempt_enable_no_resched();
                 return 1;
+        }
 #if !defined(CONFIG_PREEMPT)
         if (p->ainsn.inst_flag == INST_FLAG_BOOSTABLE && !p->post_handler) {
@@ -358,6 +358,8 @@ static int __kprobes kprobe_handler(struct pt_regs *regs)
         if (p->pre_handler && p->pre_handler(p, regs)) {
                 /* handler has already set things up, so skip ss setup */
+                reset_current_kprobe();
+                preempt_enable_no_resched();
                 return 1;
         }
@@ -543,9 +545,7 @@ static int __kprobes trampoline_probe_handler(struct kprobe *p,
         kretprobe_assert(ri, orig_ret_address, trampoline_address);
         instruction_pointer(regs) = orig_ret_address;
-        reset_current_kprobe();
         kretprobe_hash_unlock(current, &flags);
-        preempt_enable_no_resched();
         hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) {
                 hlist_del(&ri->hlist);
@@ -32,11 +32,9 @@ void kprobe_ftrace_handler(unsigned long nip, unsigned long parent_nip,
         struct kprobe *p;
         struct kprobe_ctlblk *kcb;
-        preempt_disable();
         p = get_kprobe((kprobe_opcode_t *)nip);
         if (unlikely(!p) || kprobe_disabled(p))
-                goto end;
+                return;
         kcb = get_kprobe_ctlblk();
         if (kprobe_running()) {
@@ -60,18 +58,13 @@ void kprobe_ftrace_handler(unsigned long nip, unsigned long parent_nip,
                                 kcb->kprobe_status = KPROBE_HIT_SSDONE;
                                 p->post_handler(p, regs, 0);
                         }
-                        __this_cpu_write(current_kprobe, NULL);
-                } else {
-                        /*
-                         * If pre_handler returns !0, it sets regs->nip and
-                         * resets current kprobe. In this case, we should not
-                         * re-enable preemption.
-                         */
-                        return;
                 }
+                /*
+                 * If pre_handler returns !0, it changes regs->nip. We have to
+                 * skip emulating post_handler.
+                 */
+                __this_cpu_write(current_kprobe, NULL);
         }
-end:
-        preempt_enable_no_resched();
 }
 NOKPROBE_SYMBOL(kprobe_ftrace_handler);
@@ -358,9 +358,12 @@ int kprobe_handler(struct pt_regs *regs)
         kcb->kprobe_status = KPROBE_HIT_ACTIVE;
         set_current_kprobe(p, regs, kcb);
-        if (p->pre_handler && p->pre_handler(p, regs))
-                /* handler has already set things up, so skip ss setup */
+        if (p->pre_handler && p->pre_handler(p, regs)) {
+                /* handler changed execution path, so skip ss setup */
+                reset_current_kprobe();
+                preempt_enable_no_resched();
                 return 1;
+        }
         if (p->ainsn.boostable >= 0) {
                 ret = try_to_emulate(p, regs);
@@ -326,8 +326,11 @@ static int kprobe_handler(struct pt_regs *regs)
                          */
                         push_kprobe(kcb, p);
                         kcb->kprobe_status = KPROBE_HIT_ACTIVE;
-                        if (p->pre_handler && p->pre_handler(p, regs))
+                        if (p->pre_handler && p->pre_handler(p, regs)) {
+                                pop_kprobe(kcb);
+                                preempt_enable_no_resched();
                                 return 1;
+                        }
                         kcb->kprobe_status = KPROBE_HIT_SS;
                 }
                 enable_singlestep(kcb, regs, (unsigned long) p->ainsn.insn);
@@ -431,9 +434,7 @@ static int trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
         regs->psw.addr = orig_ret_address;
-        pop_kprobe(get_kprobe_ctlblk());
         kretprobe_hash_unlock(current, &flags);
-        preempt_enable_no_resched();
         hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) {
                 hlist_del(&ri->hlist);
@@ -272,9 +272,12 @@ static int __kprobes kprobe_handler(struct pt_regs *regs)
         set_current_kprobe(p, regs, kcb);
         kcb->kprobe_status = KPROBE_HIT_ACTIVE;
-        if (p->pre_handler && p->pre_handler(p, regs))
+        if (p->pre_handler && p->pre_handler(p, regs)) {
                 /* handler has already set things up, so skip ss setup */
+                reset_current_kprobe();
+                preempt_enable_no_resched();
                 return 1;
+        }
         prepare_singlestep(p, regs);
         kcb->kprobe_status = KPROBE_HIT_SS;
@@ -352,8 +355,6 @@ int __kprobes trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
         regs->pc = orig_ret_address;
         kretprobe_hash_unlock(current, &flags);
-        preempt_enable_no_resched();
         hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) {
                 hlist_del(&ri->hlist);
                 kfree(ri);
@@ -175,8 +175,11 @@ static int __kprobes kprobe_handler(struct pt_regs *regs)
         set_current_kprobe(p, regs, kcb);
         kcb->kprobe_status = KPROBE_HIT_ACTIVE;
-        if (p->pre_handler && p->pre_handler(p, regs))
+        if (p->pre_handler && p->pre_handler(p, regs)) {
+                reset_current_kprobe();
+                preempt_enable_no_resched();
                 return 1;
+        }
         prepare_singlestep(p, regs, kcb);
         kcb->kprobe_status = KPROBE_HIT_SS;
@@ -508,9 +511,7 @@ static int __kprobes trampoline_probe_handler(struct kprobe *p,
         regs->tpc = orig_ret_address;
         regs->tnpc = orig_ret_address + 4;
-        reset_current_kprobe();
         kretprobe_hash_unlock(current, &flags);
-        preempt_enable_no_resched();
         hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) {
                 hlist_del(&ri->hlist);
@@ -694,6 +694,10 @@ int kprobe_int3_handler(struct pt_regs *regs)
                          */
                         if (!p->pre_handler || !p->pre_handler(p, regs))
                                 setup_singlestep(p, regs, kcb, 0);
+                        else {
+                                reset_current_kprobe();
+                                preempt_enable_no_resched();
+                        }
                         return 1;
                 }
         } else if (*addr != BREAKPOINT_INSTRUCTION) {
@@ -45,8 +45,6 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
                 /* Kprobe handler expects regs->ip = ip + 1 as breakpoint hit */
                 regs->ip = ip + sizeof(kprobe_opcode_t);
-                /* To emulate trap based kprobes, preempt_disable here */
-                preempt_disable();
                 __this_cpu_write(current_kprobe, p);
                 kcb->kprobe_status = KPROBE_HIT_ACTIVE;
                 if (!p->pre_handler || !p->pre_handler(p, regs)) {
@@ -60,13 +58,12 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
                                 p->post_handler(p, regs, 0);
                         }
                         regs->ip = orig_ip;
-                        __this_cpu_write(current_kprobe, NULL);
-                        preempt_enable_no_resched();
                 }
                 /*
-                 * If pre_handler returns !0, it sets regs->ip and
-                 * resets current kprobe, and keep preempt count +1.
+                 * If pre_handler returns !0, it changes regs->ip. We have to
+                 * skip emulating post_handler.
                  */
+                __this_cpu_write(current_kprobe, NULL);
         }
 }
 NOKPROBE_SYMBOL(kprobe_ftrace_handler);
@@ -184,9 +184,6 @@ static int fei_kprobe_handler(struct kprobe *kp, struct pt_regs *regs)
         if (should_fail(&fei_fault_attr, 1)) {
                 regs_set_return_value(regs, attr->retval);
                 override_function_with_return(regs);
-                /* Kprobe specific fixup */
-                reset_current_kprobe();
-                preempt_enable_no_resched();
                 return 1;
         }
@@ -1217,16 +1217,11 @@ kprobe_perf_func(struct trace_kprobe *tk, struct pt_regs *regs)
                 /*
                  * We need to check and see if we modified the pc of the
-                 * pt_regs, and if so clear the kprobe and return 1 so that we
-                 * don't do the single stepping.
-                 * The ftrace kprobe handler leaves it up to us to re-enable
-                 * preemption here before returning if we've modified the ip.
+                 * pt_regs, and if so return 1 so that we don't do the
+                 * single stepping.
                  */
-                if (orig_ip != instruction_pointer(regs)) {
-                        reset_current_kprobe();
-                        preempt_enable_no_resched();
+                if (orig_ip != instruction_pointer(regs))
                         return 1;
-                }
                 if (!ret)
                         return 0;
         }