Commit c5379ba8 authored by Thomas Petazzoni, committed by Gregory CLEMENT

ARM: mvebu: fix HW I/O coherency related deadlocks

Until now, our understanding for HW I/O coherency to work on the
Cortex-A9 based Marvell SoC was that only the PCIe regions should be
mapped strongly-ordered. However, we were still encountering some
deadlocks, especially when testing the CESA crypto engine. After
checking with the HW designers, it was concluded that all the MMIO
registers should be mapped as strongly ordered for the HW I/O coherency
mechanism to work properly.

This fixes some easy to reproduce deadlocks with the CESA crypto engine
driver (dmcrypt on a sufficiently large disk partition).
Tested-by: Terry Stockert <stockert@inkblotadmirer.me>
Tested-by: Romain Perier <romain.perier@free-electrons.com>
Cc: Terry Stockert <stockert@inkblotadmirer.me>
Cc: Romain Perier <romain.perier@free-electrons.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
parent 1a695a90
@@ -162,22 +162,16 @@ static void __init armada_370_coherency_init(struct device_node *np)
 }
 
 /*
- * This ioremap hook is used on Armada 375/38x to ensure that PCIe
- * memory areas are mapped as MT_UNCACHED instead of MT_DEVICE. This
- * is needed as a workaround for a deadlock issue between the PCIe
- * interface and the cache controller.
+ * This ioremap hook is used on Armada 375/38x to ensure that all MMIO
+ * areas are mapped as MT_UNCACHED instead of MT_DEVICE. This is
+ * needed for the HW I/O coherency mechanism to work properly without
+ * deadlock.
  */
 static void __iomem *
-armada_pcie_wa_ioremap_caller(phys_addr_t phys_addr, size_t size,
-			      unsigned int mtype, void *caller)
+armada_wa_ioremap_caller(phys_addr_t phys_addr, size_t size,
+			 unsigned int mtype, void *caller)
 {
-	struct resource pcie_mem;
-
-	mvebu_mbus_get_pcie_mem_aperture(&pcie_mem);
-
-	if (pcie_mem.start <= phys_addr && (phys_addr + size) <= pcie_mem.end)
-		mtype = MT_UNCACHED;
-
+	mtype = MT_UNCACHED;
 	return __arm_ioremap_caller(phys_addr, size, mtype, caller);
 }
 
@@ -186,7 +180,7 @@ static void __init armada_375_380_coherency_init(struct device_node *np)
 	struct device_node *cache_dn;
 
 	coherency_cpu_base = of_iomap(np, 0);
-	arch_ioremap_caller = armada_pcie_wa_ioremap_caller;
+	arch_ioremap_caller = armada_wa_ioremap_caller;
 
 	/*
 	 * We should switch the PL310 to I/O coherency mode only if
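In effect, the hook now unconditionally downgrades every MMIO mapping on Armada 375/38x to the strongly-ordered MT_UNCACHED type, instead of doing so only for addresses inside the PCIe aperture. Below is a condensed sketch of the code as it stands after this commit (not a verbatim copy of the source file; the registration line is pulled out of its surrounding init function purely for illustration):

/*
 * Sketch: force every ioremap()'d MMIO region to MT_UNCACHED
 * (strongly-ordered), as required by the HW I/O coherency unit.
 */
static void __iomem *
armada_wa_ioremap_caller(phys_addr_t phys_addr, size_t size,
			 unsigned int mtype, void *caller)
{
	/* Override whatever memory type the caller requested. */
	mtype = MT_UNCACHED;
	return __arm_ioremap_caller(phys_addr, size, mtype, caller);
}

/*
 * Installed during coherency init so that all subsequent ioremap()
 * calls are routed through the hook:
 *
 *	arch_ioremap_caller = armada_wa_ioremap_caller;
 */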