Commit 28bc55f6 authored by Peter Zijlstra

sched: Constrain locks in sched_submit_work()

Even though sched_submit_work() is run from preemptible context, it
should not use blocking locks: a contended blocking lock would sleep
and re-enter the scheduler, recursing into this very path.

Enforce this.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20230908162254.999499-2-bigeasy@linutronix.de
parent a432b7c0
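
For context, sched_submit_work() runs on the entry path of schedule(),
before the actual context switch. The sketch below paraphrases that caller
(kernel/sched/core.c of this era; details may differ slightly between
releases) to show why anything blocking inside it would recurse straight
back into schedule():

asmlinkage __visible void __sched schedule(void)
{
        struct task_struct *tsk = current;

        /*
         * Runs before the switch; must not take a blocking lock, or the
         * contended case would call schedule() again from right here.
         */
        sched_submit_work(tsk);
        do {
                preempt_disable();
                __schedule(SM_NONE);                    /* the actual context switch */
                sched_preempt_enable_no_resched();
        } while (need_resched());
        sched_update_worker(tsk);
}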
kernel/sched/core.c
@@ -6720,11 +6720,18 @@ void __noreturn do_task_dead(void)
 
 static inline void sched_submit_work(struct task_struct *tsk)
 {
+        static DEFINE_WAIT_OVERRIDE_MAP(sched_map, LD_WAIT_CONFIG);
         unsigned int task_flags;
 
         if (task_is_running(tsk))
                 return;
 
+        /*
+         * Establish LD_WAIT_CONFIG context to ensure none of the code called
+         * will use a blocking primitive -- which would lead to recursion.
+         */
+        lock_map_acquire_try(&sched_map);
+
         task_flags = tsk->flags;
         /*
          * If a worker goes to sleep, notify and ask workqueue whether it
@@ -6749,6 +6756,8 @@ static inline void sched_submit_work(struct task_struct *tsk)
          * make sure to submit it to avoid deadlocks.
          */
         blk_flush_plug(tsk->plug, true);
+
+        lock_map_release(&sched_map);
 }
 
 static void sched_update_worker(struct task_struct *tsk)
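
The annotation added above is a generic lockdep facility rather than
anything scheduler-specific. As a rough illustration (the example_*
names are invented here; assumes a lockdep-enabled kernel with
CONFIG_PROVE_LOCKING, and CONFIG_PROVE_RAW_LOCK_NESTING for the full
wait-type checks), the same pattern marks any region that must stay
free of blocking primitives:

#include <linux/lockdep.h>
#include <linux/mutex.h>

/* Invented names, for illustration only. */
static DEFINE_WAIT_OVERRIDE_MAP(example_map, LD_WAIT_CONFIG);
static DEFINE_MUTEX(example_mutex);

static void example_constrained_section(void)
{
        /*
         * While the override map is "held", lockdep treats the context as
         * if an LD_WAIT_CONFIG lock were taken, so acquiring any lock with
         * a heavier wait type (mutexes, rwsems, ...) is reported.
         */
        lock_map_acquire_try(&example_map);

        mutex_lock(&example_mutex);     /* LD_WAIT_SLEEP inside LD_WAIT_CONFIG: lockdep complains */
        mutex_unlock(&example_mutex);

        lock_map_release(&example_map); /* end of the constrained region */
}

In sched_submit_work() the same trio brackets the workqueue notification
and block-plug flush callouts, so a blocking lock anywhere in those paths
now produces a lockdep report instead of a silent recursion hazard.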