Commit af9acbfc authored by Marc Zyngier, committed by Thomas Gleixner

irqchip/gic-v3-its: Fix GICv4.1 VPE affinity update

When updating the affinity of a VPE, the VMOVP command is currently skipped
if the two CPUs are part of the same VPE affinity.

But this is wrong, as the doorbell corresponding to this VPE is still
delivered on the 'old' CPU, which screws up the balancing.  Furthermore,
offlining that 'old' CPU results in doorbell interrupts generated for this
VPE being discarded.

The harsh reality is that VMOVP cannot be elided when a set_affinity()
request occurs. It needs to be obeyed, and if an optimisation is to be
made, it is at the point where the affinity change request is made (such as
in KVM).

Drop the VMOVP elision altogether, and only use the vpe_table_mask
to try and stay within the same ITS affinity group if at all possible.
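The selection logic described above can be sketched in plain C. This is a simplified illustration, not the kernel code: `pick_target_cpu`, the `uint32_t` bitmask standing in for `struct cpumask`, and `first_cpu` are all hypothetical stand-ins for the kernel's cpumask machinery (`cpumask_and`, `cpumask_test_cpu`, `cpumask_first`).

```c
#include <stdint.h>

/* Hypothetical stand-in for struct cpumask: one bit per CPU. */
typedef uint32_t cpumask_t;

/* Analogue of cpumask_first(): lowest set bit, or -1 if empty. */
static int first_cpu(cpumask_t m)
{
	for (int cpu = 0; cpu < 32; cpu++)
		if (m & (1u << cpu))
			return cpu;
	return -1;
}

/*
 * Pick the target CPU for a VPE currently resident on 'from':
 * prefer a CPU from the requested mask that shares the same ITS
 * vpe_table_mask (staying on 'from' if it qualifies), otherwise
 * take any requested CPU. VMOVP itself is never elided here;
 * the caller only skips it when from == cpu.
 */
static int pick_target_cpu(int from, cpumask_t mask_val, cpumask_t table_mask)
{
	cpumask_t common = mask_val & table_mask;	/* cpumask_and() */

	if (common)
		return (common & (1u << from)) ? from : first_cpu(common);
	return first_cpu(mask_val);
}
```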

Fixes: dd3f050a ("irqchip/gic-v4.1: Implement the v4.1 flavour of VMOVP")
Reported-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20240213101206.2137483-4-maz@kernel.org
parent 8b02da04
@@ -3826,8 +3826,9 @@ static int its_vpe_set_affinity(struct irq_data *d,
 				bool force)
 {
 	struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
-	int from, cpu = cpumask_first(mask_val);
+	struct cpumask common, *table_mask;
 	unsigned long flags;
+	int from, cpu;
 
 	/*
 	 * Changing affinity is mega expensive, so let's be as lazy as
@@ -3843,19 +3844,22 @@ static int its_vpe_set_affinity(struct irq_data *d,
 	 * taken on any vLPI handling path that evaluates vpe->col_idx.
 	 */
 	from = vpe_to_cpuid_lock(vpe, &flags);
-	if (from == cpu)
-		goto out;
-
-	vpe->col_idx = cpu;
+	table_mask = gic_data_rdist_cpu(from)->vpe_table_mask;
 
 	/*
-	 * GICv4.1 allows us to skip VMOVP if moving to a cpu whose RD
-	 * is sharing its VPE table with the current one.
+	 * If we are offered another CPU in the same GICv4.1 ITS
+	 * affinity, pick this one. Otherwise, any CPU will do.
 	 */
-	if (gic_data_rdist_cpu(cpu)->vpe_table_mask &&
-	    cpumask_test_cpu(from, gic_data_rdist_cpu(cpu)->vpe_table_mask))
-		goto out;
+	if (table_mask && cpumask_and(&common, mask_val, table_mask))
+		cpu = cpumask_test_cpu(from, &common) ? from : cpumask_first(&common);
+	else
+		cpu = cpumask_first(mask_val);
+
+	if (from == cpu)
+		goto out;
+
+	vpe->col_idx = cpu;
 
 	its_send_vmovp(vpe);
 	its_vpe_db_proxy_move(vpe, from, cpu);