Commit 19a469a5 authored by Marc Zyngier, committed by Catalin Marinas

drivers/perf: arm-pmu: Handle per-interrupt affinity mask

On a big-little system, PMUs can be wired to CPUs using per-CPU
interrupts (PPIs). In this case, it is important to make sure that
enabling and disabling the interrupt happens on the right set of CPUs.

So instead of relying on the interrupt-affinity property, we can
use the per-CPU affinity that the DT exposes as part of the
interrupt specifier. The DT binding is also updated to reflect
the fact that the interrupt-affinity property shouldn't be used
in that case.
Acked-by: Rob Herring <robh@kernel.org>
Tested-by: Caesar Wang <wxt@rock-chips.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
parent 90f777be
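
For context, a minimal sketch of the enable-side pattern the diff below adopts (not part of the patch; the helper names are made up for illustration, while on_each_cpu_mask() and enable_percpu_irq() are real kernel APIs): instead of broadcasting the enable to every online CPU with on_each_cpu(), the cross-call is restricted to the CPUs that actually share the PPI.

#include <linux/cpumask.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/smp.h>

/* Runs on each targeted CPU: enable the per-CPU IRQ locally. */
static void pmu_enable_percpu_irq(void *data)
{
	int irq = *(int *)data;

	enable_percpu_irq(irq, IRQ_TYPE_NONE);
}

/*
 * Illustrative helper: on a big-little system each cluster has its own
 * PMU and PPI, so the enable must run only on the CPUs in @cpus rather
 * than on every online CPU.
 */
static void pmu_enable_irq_on_cpus(int irq, const struct cpumask *cpus)
{
	on_each_cpu_mask(cpus, pmu_enable_percpu_irq, &irq, 1);
}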
@@ -39,7 +39,9 @@ Optional properties:
                        When using a PPI, specifies a list of phandles to CPU
                        nodes corresponding to the set of CPUs which have
                        a PMU of this type signalling the PPI listed in the
-                       interrupts property.
+                       interrupts property, unless this is already specified
+                       by the PPI interrupt specifier itself (in which case
+                       the interrupt-affinity property shouldn't be present).
                        This property should be present when there is more than
                        a single SPI.
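
When the affinity is carried by the PPI interrupt specifier itself (a partitioned PPI), the driver can recover the CPU set straight from the interrupt, as in this rough sketch (the helper name is hypothetical; platform_get_irq() and irq_get_percpu_devid_partition() are the kernel APIs the patch relies on):

#include <linux/cpumask.h>
#include <linux/irq.h>
#include <linux/platform_device.h>

/*
 * Illustrative helper: fill @supported_cpus from the affinity encoded
 * in the first interrupt of @pdev. Returns a negative errno if the IRQ
 * is missing or is not a per-CPU (devid) interrupt.
 */
static int pmu_cpus_from_ppi(struct platform_device *pdev,
			     struct cpumask *supported_cpus)
{
	int irq = platform_get_irq(pdev, 0);

	if (irq < 0)
		return irq;

	return irq_get_percpu_devid_partition(irq, supported_cpus);
}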
@@ -603,7 +603,8 @@ static void cpu_pmu_free_irq(struct arm_pmu *cpu_pmu)

 	irq = platform_get_irq(pmu_device, 0);
 	if (irq >= 0 && irq_is_percpu(irq)) {
-		on_each_cpu(cpu_pmu_disable_percpu_irq, &irq, 1);
+		on_each_cpu_mask(&cpu_pmu->supported_cpus,
+				 cpu_pmu_disable_percpu_irq, &irq, 1);
 		free_percpu_irq(irq, &hw_events->percpu_pmu);
 	} else {
 		for (i = 0; i < irqs; ++i) {
@@ -645,7 +646,9 @@ static int cpu_pmu_request_irq(struct arm_pmu *cpu_pmu, irq_handler_t handler)
 				irq);
 			return err;
 		}
-		on_each_cpu(cpu_pmu_enable_percpu_irq, &irq, 1);
+		on_each_cpu_mask(&cpu_pmu->supported_cpus,
+				 cpu_pmu_enable_percpu_irq, &irq, 1);
 	} else {
 		for (i = 0; i < irqs; ++i) {
 			int cpu = i;
@@ -961,9 +964,23 @@ static int of_pmu_irq_cfg(struct arm_pmu *pmu)
 		i++;
 	} while (1);

-	/* If we didn't manage to parse anything, claim to support all CPUs */
-	if (cpumask_weight(&pmu->supported_cpus) == 0)
-		cpumask_setall(&pmu->supported_cpus);
+	/* If we didn't manage to parse anything, try the interrupt affinity */
+	if (cpumask_weight(&pmu->supported_cpus) == 0) {
+		if (!using_spi) {
+			/* If using PPIs, check the affinity of the partition */
+			int ret, irq;
+
+			irq = platform_get_irq(pdev, 0);
+			ret = irq_get_percpu_devid_partition(irq, &pmu->supported_cpus);
+			if (ret) {
+				kfree(irqs);
+				return ret;
+			}
+		} else {
+			/* Otherwise default to all CPUs */
+			cpumask_setall(&pmu->supported_cpus);
+		}
+	}

 	/* If we matched up the IRQ affinities, use them to route the SPIs */
 	if (using_spi && i == pdev->num_resources)