Commit fe1f0714 authored by Peter Newman, committed by Borislav Petkov (AMD)

x86/resctrl: Fix task CLOSID/RMID update race

When the user moves a running task to a new rdtgroup using the task's
file interface or by deleting its rdtgroup, the resulting change in
CLOSID/RMID must be immediately propagated to the PQR_ASSOC MSR on the
CPUs of the affected task(s).

x86 allows reordering loads with prior stores, so the CPU can hoist the
task_curr() check above the stores in the CLOSID/RMID update. If the
task starts running on another CPU in that window, the stale check
misses it and no IPI is sent, so the task keeps running with the old
CLOSID/RMID until it is switched in again, because
__rdtgroup_move_task() failed to determine that it needs to be
interrupted to obtain the new CLOSID/RMID.

Refer to the diagram below:

CPU 0                                   CPU 1
-----                                   -----
__rdtgroup_move_task():
  curr <- t1->cpu->rq->curr
                                        __schedule():
                                          rq->curr <- t1
                                        resctrl_sched_in():
                                          t1->{closid,rmid} -> {1,1}
  t1->{closid,rmid} <- {2,2}
  if (curr == t1) // false
   IPI(t1->cpu)

A similar race impacts rdt_move_group_tasks(), which updates tasks in a
deleted rdtgroup.
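
To illustrate the required pairing, the two sides of the race can be
sketched as below. This is a simplified sketch rather than the actual
kernel code: the scheduler's ordering is shown as a bare smp_mb()
standing in for the full barrier __schedule() provides between the
rq->curr update and resctrl_sched_in(), and sched_in_side(),
move_task_side(), new_closid and new_rmid are hypothetical names.

/* CPU 1: context switch into t1 (heavily simplified) */
static void sched_in_side(struct rq *rq, struct task_struct *t1)
{
        rq->curr = t1;          /* t1 becomes current */
        smp_mb();               /* stand-in for the scheduler's full barrier */
        resctrl_sched_in();     /* loads t1->closid and t1->rmid for PQR_ASSOC */
}

/* CPU 0: __rdtgroup_move_task() before this fix */
static void move_task_side(struct task_struct *t1, u32 new_closid, u32 new_rmid)
{
        WRITE_ONCE(t1->closid, new_closid);
        WRITE_ONCE(t1->rmid, new_rmid);
        barrier();                      /* compiler-only: the CPU may still
                                         * order the task_curr() load below
                                         * before the two stores above */
        if (task_curr(t1))              /* can read the old rq->curr ...   */
                update_task_closid_rmid(t1);    /* ... so no IPI is sent   */
}

If both loads are satisfied before the other CPU's stores become
visible (the store-buffering outcome x86 permits), neither side
observes the move and the task keeps running with the old CLOSID/RMID.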

In both cases, use smp_mb() to order the task_struct::{closid,rmid}
stores before the loads in task_curr().  In particular, in the
rdt_move_group_tasks() case, simply execute an smp_mb() on every
iteration with a matching task.

It is possible to use a single smp_mb() in rdt_move_group_tasks(), but
this would require two passes and a means of remembering which
task_structs were updated in the first loop. However, the benchmark
results below show that the performance impact of the simple approach
is too small to justify implementing the two-pass approach.
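
As a rough sketch, the per-task flow in rdt_move_group_tasks() then
becomes the following (condensed from the function; tasklist locking
and the CONFIG_SMP check are omitted):

for_each_process_thread(p, t) {
        if (!from || is_closid_match(t, from) || is_rmid_match(t, from)) {
                WRITE_ONCE(t->closid, to->closid);
                WRITE_ONCE(t->rmid, to->mon.rmid);

                /*
                 * Order the closid/rmid stores above before the
                 * task_curr() load below; pairs with the scheduler's
                 * full barrier.
                 */
                smp_mb();

                /* Record the task's CPU so it is IPI'd to reload PQR_ASSOC. */
                if (mask && task_curr(t))
                        cpumask_set_cpu(task_cpu(t), mask);
        }
}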

Times below were collected using `perf stat` to measure the time to
remove a group containing a 1600-task, parallel workload.

CPU: Intel(R) Xeon(R) Platinum P-8136 CPU @ 2.00GHz (112 threads)

  # mkdir /sys/fs/resctrl/test
  # echo $$ > /sys/fs/resctrl/test/tasks
  # perf bench sched messaging -g 40 -l 100000

task-clock time ranges collected using:

  # perf stat rmdir /sys/fs/resctrl/test

Baseline:                     1.54 - 1.60 ms
smp_mb() every matching task: 1.57 - 1.67 ms

  [ bp: Massage commit message. ]

Fixes: ae28d1aa ("x86/resctrl: Use an IPI instead of task_work_add() to update PQR_ASSOC MSR")
Fixes: 0efc89be ("x86/intel_rdt: Update task closid immediately on CPU in rmdir and unmount")
Signed-off-by: Peter Newman <peternewman@google.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Reinette Chatre <reinette.chatre@intel.com>
Reviewed-by: Babu Moger <babu.moger@amd.com>
Cc: <stable@kernel.org>
Link: https://lore.kernel.org/r/20221220161123.432120-1-peternewman@google.com
arch/x86/kernel/cpu/resctrl/rdtgroup.c:

@@ -580,8 +580,10 @@ static int __rdtgroup_move_task(struct task_struct *tsk,
 	/*
 	 * Ensure the task's closid and rmid are written before determining if
 	 * the task is current that will decide if it will be interrupted.
+	 * This pairs with the full barrier between the rq->curr update and
+	 * resctrl_sched_in() during context switch.
 	 */
-	barrier();
+	smp_mb();
 
 	/*
 	 * By now, the task's closid and rmid are set. If the task is current
@@ -2401,6 +2403,14 @@ static void rdt_move_group_tasks(struct rdtgroup *from, struct rdtgroup *to,
 			WRITE_ONCE(t->closid, to->closid);
 			WRITE_ONCE(t->rmid, to->mon.rmid);
 
+			/*
+			 * Order the closid/rmid stores above before the loads
+			 * in task_curr(). This pairs with the full barrier
+			 * between the rq->curr update and resctrl_sched_in()
+			 * during context switch.
+			 */
+			smp_mb();
+
 			/*
 			 * If the task is on a CPU, set the CPU in the mask.
 			 * The detection is inaccurate as tasks might move or