Commit 7d148be6 authored by Vincent Guittot, committed by Peter Zijlstra

sched/fair: Optimize enqueue_task_fair()

enqueue_task_fair() jumps to the enqueue_throttle label when cfs_rq_of(se) is
throttled, which means that se cannot be NULL in that case, so the label can
be moved after the if (!se) statement. Furthermore, that statement can be
removed entirely, because se is always NULL when this point is reached.
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Phil Auld <pauld@redhat.com>
Link: https://lkml.kernel.org/r/20200513135502.4672-1-vincent.guittot@linaro.org
parent 9013196a
@@ -5512,9 +5512,9 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 		list_add_leaf_cfs_rq(cfs_rq);
 	}
 
-enqueue_throttle:
-	if (!se) {
-		add_nr_running(rq, 1);
+	/* At this point se is NULL and we are at root level*/
+	add_nr_running(rq, 1);
+
 	/*
 	 * Since new tasks are assigned an initial util_avg equal to
 	 * half of the spare capacity of their CPU, tiny tasks have the
@@ -5532,8 +5532,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	if (flags & ENQUEUE_WAKEUP)
 		update_overutilized_status(rq);
 
-	}
-
+enqueue_throttle:
 	if (cfs_bandwidth_used()) {
 		/*
 		 * When bandwidth control is enabled; the cfs_rq_throttled()
...