Commit 52a55252 authored by Linus Torvalds

Merge tag 'iommu-updates-v5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu

Pull iommu updates from Joerg Roedel:

 - batched unmap support for the IOMMU-API (see the sketch after this list)

 - support for unlocked command queueing in the ARM-SMMU driver

 - rework the ATS support in the ARM-SMMU driver

 - more refactoring in the ARM-SMMU driver to support hardware
   implementation-specific quirks and errata

 - bounce buffering DMA-API implementation in the Intel VT-d driver
   for untrusted devices (like Thunderbolt devices)

 - fixes for runtime PM support in the OMAP iommu driver

 - MT8183 IOMMU support in the Mediatek IOMMU driver

 - rework of the way the IOMMU core sets the default domain type for
   groups. Changing the default domain type on x86 does not require two
   kernel parameters anymore.

 - more small fixes and cleanups
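
The batched unmap support deserves a quick illustration. Instead of
flushing the IOTLB on every unmap, a caller now gathers invalidations and
issues one sync at the end. A minimal sketch of the calling pattern, given
a domain, an IOVA and a size, using only the helpers visible in the
dma-iommu changes further down (illustrative, not the full API surface):

	/* Batch several page-table tear-downs behind a single IOTLB flush. */
	struct iommu_iotlb_gather iotlb_gather;
	size_t unmapped;

	iommu_iotlb_gather_init(&iotlb_gather);

	/* Gathers the ranges to invalidate instead of flushing right away. */
	unmapped = iommu_unmap_fast(domain, dma_addr, size, &iotlb_gather);
	WARN_ON(unmapped != size);

	/* One sync covers everything gathered above. */
	iommu_tlb_sync(domain, &iotlb_gather);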

* tag 'iommu-updates-v5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (113 commits)
  iommu/vt-d: Declare Broadwell igfx dmar support snafu
  iommu/vt-d: Add Scalable Mode fault information
  iommu/vt-d: Use bounce buffer for untrusted devices
  iommu/vt-d: Add trace events for device dma map/unmap
  iommu/vt-d: Don't switch off swiotlb if bounce page is used
  iommu/vt-d: Check whether device requires bounce buffer
  swiotlb: Split size parameter to map/unmap APIs
  iommu/omap: Mark pm functions __maybe_unused
  iommu/ipmmu-vmsa: Disable cache snoop transactions on R-Car Gen3
  iommu/ipmmu-vmsa: Move IMTTBCR_SL0_TWOBIT_* to restore sort order
  iommu: Don't use sme_active() in generic code
  iommu/arm-smmu-v3: Fix build error without CONFIG_PCI_ATS
  iommu/qcom: Use struct_size() helper
  iommu: Remove wrong default domain comments
  iommu/dma: Fix for dereferencing before null checking
  iommu/mediatek: Clean up struct mtk_smi_iommu
  memory: mtk-smi: Get rid of need_larbid
  iommu/mediatek: Fix VLD_PA_RNG register backup when suspend
  memory: mtk-smi: Add bus_sel for mt8183
  memory: mtk-smi: Invoke pm runtime_callback to enable clocks
  ...
parents bbfe0d6b e95adb9a
@@ -1732,6 +1732,11 @@
 			Note that using this option lowers the security
 			provided by tboot because it makes the system
 			vulnerable to DMA attacks.
+		nobounce [Default off]
+			Disable bounce buffers for untrusted devices such as
+			Thunderbolt devices. This treats untrusted devices
+			as trusted ones, and hence may expose security
+			risks of DMA attacks.

 	intel_idle.max_cstate=	[KNL,HW,ACPI,X86]
 			0	disables intel_idle and fall back on acpi_idle.
@@ -1811,7 +1816,7 @@
 			synchronously.

 	iommu.passthrough=
-			[ARM64] Configure DMA to bypass the IOMMU by default.
+			[ARM64, X86] Configure DMA to bypass the IOMMU by default.
 			Format: { "0" | "1" }
 			0 - Use IOMMU translation for DMA.
 			1 - Bypass the IOMMU for DMA.
...
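To make the new knobs concrete: assuming the spellings documented above, a
user who wants the IOMMU enabled but the bounce buffering switched off
would boot with something like

	intel_iommu=on,nobounce

and passthrough-by-default is requested with

	iommu.passthrough=1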
@@ -11,10 +11,23 @@ ARM Short-Descriptor translation table format for address translation.
               |
              m4u (Multimedia Memory Management Unit)
               |
+         +--------+
+         |        |
+     gals0-rx   gals1-rx   (Global Async Local Sync rx)
+         |        |
+         |        |
+     gals0-tx   gals1-tx   (Global Async Local Sync tx)
+         |        |           Some SoCs may have GALS.
+         +--------+
+              |
          SMI Common(Smart Multimedia Interface Common)
               |
   +----------------+-------
   |                |
+  |             gals-rx        There may be GALS in some larbs.
+  |                |
+  |                |
+  |             gals-tx
   |                |
  SMI larb0     SMI larb1   ...   SoCs have several SMI local arbiter(larb).
  (display)      (vdec)
@@ -36,6 +49,10 @@ each local arbiter.
 like display, video decode, and camera. And there are different ports
 in each larb. Take a example, There are many ports like MC, PP, VLD in the
 video decode local arbiter, all these ports are according to the video HW.
+In some SoCs, there may be a GALS (Global Async Local Sync) module between
+smi-common and m4u, and an additional GALS module between smi-larb and
+smi-common. A GALS can be seen as an "asynchronous FIFO" which helps
+synchronize modules running at different clock frequencies.

 Required properties:
 - compatible : must be one of the following string:
@@ -44,18 +61,25 @@ Required properties:
 	"mediatek,mt7623-m4u", "mediatek,mt2701-m4u" for mt7623 which uses
 	generation one m4u HW.
 	"mediatek,mt8173-m4u" for mt8173 which uses generation two m4u HW.
+	"mediatek,mt8183-m4u" for mt8183 which uses generation two m4u HW.
 - reg : m4u register base and size.
 - interrupts : the interrupt of m4u.
 - clocks : must contain one entry for each clock-names.
-- clock-names : must be "bclk", It is the block clock of m4u.
+- clock-names : only one optional clock:
+	- "bclk": the block clock of m4u.
+	Here is the list of SoCs which require this "bclk":
+	- mt2701, mt2712, mt7623 and mt8173.
+	Note that if there is no "bclk", the m4u uses the EMI clock, which
+	is always enabled before the kernel starts.
 - mediatek,larbs : List of phandle to the local arbiters in the current Socs.
 	Refer to bindings/memory-controllers/mediatek,smi-larb.txt. It must sort
 	according to the local arbiter index, like larb0, larb1, larb2...
 - iommu-cells : must be 1. This is the mtk_m4u_id according to the HW.
 	Specifies the mtk_m4u_id as defined in
 	dt-binding/memory/mt2701-larb-port.h for mt2701, mt7623
-	dt-binding/memory/mt2712-larb-port.h for mt2712, and
-	dt-binding/memory/mt8173-larb-port.h for mt8173.
+	dt-binding/memory/mt2712-larb-port.h for mt2712,
+	dt-binding/memory/mt8173-larb-port.h for mt8173, and
+	dt-binding/memory/mt8183-larb-port.h for mt8183.

 Example:
 	iommu: iommu@10205000 {
...
@@ -2,9 +2,10 @@ SMI (Smart Multimedia Interface) Common
 The hardware block diagram please check bindings/iommu/mediatek,iommu.txt

-Mediatek SMI have two generations of HW architecture, mt2712 and mt8173 use
-the second generation of SMI HW while mt2701 uses the first generation HW of
-SMI.
+Mediatek SMI has two generations of HW architecture; here is the list
+of which generation each SoC uses:
+	generation 1: mt2701 and mt7623.
+	generation 2: mt2712, mt8173 and mt8183.

 There's slight differences between the two SMI, for generation 2, the
 register which control the iommu port is at each larb's register base. But
@@ -19,6 +20,7 @@ Required properties:
 		"mediatek,mt2712-smi-common"
 		"mediatek,mt7623-smi-common", "mediatek,mt2701-smi-common"
 		"mediatek,mt8173-smi-common"
+		"mediatek,mt8183-smi-common"
 - reg : the register and size of the SMI block.
 - power-domains : a phandle to the power domain of this local arbiter.
 - clocks : Must contain an entry for each entry in clock-names.
@@ -30,6 +32,10 @@ Required properties:
 	They may be the same if both source clocks are the same.
 - "async" : asynchronous clock, it help transform the smi clock into the emi
 	clock domain, this clock is only needed by generation 1 smi HW.
+  and these two optional clocks for generation 2 SMI HW:
+- "gals0": the path0 clock of GALS (Global Async Local Sync).
+- "gals1": the path1 clock of GALS (Global Async Local Sync).
+  Here is the list of SoCs which have this GALS: mt8183.

 Example:
 	smi_common: smi@14022000 {
...
@@ -8,6 +8,7 @@ Required properties:
 		"mediatek,mt2712-smi-larb"
 		"mediatek,mt7623-smi-larb", "mediatek,mt2701-smi-larb"
 		"mediatek,mt8173-smi-larb"
+		"mediatek,mt8183-smi-larb"
 - reg : the register and size of this local arbiter.
 - mediatek,smi : a phandle to the smi_common node.
 - power-domains : a phandle to the power domain of this local arbiter.
@@ -16,6 +17,9 @@ Required properties:
 	- "apb" : Advanced Peripheral Bus clock, It's the clock for setting
 		the register.
 	- "smi" : It's the clock for transfer data and command.
+  and this optional clock name:
+	- "gals": the clock for GALS (Global Async Local Sync).
+	Here is the list of SoCs which have this GALS: mt8183.

 Required property for mt2701, mt2712 and mt7623:
 - mediatek,larb-id :the hardware id of this larb.
...
@@ -1342,8 +1342,7 @@ M:	Will Deacon <will@kernel.org>
 R:	Robin Murphy <robin.murphy@arm.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
-F:	drivers/iommu/arm-smmu.c
-F:	drivers/iommu/arm-smmu-v3.c
+F:	drivers/iommu/arm-smmu*
 F:	drivers/iommu/io-pgtable-arm.c
 F:	drivers/iommu/io-pgtable-arm-v7s.c
...
@@ -229,3 +229,5 @@ include/generated/ti-pm-asm-offsets.h: arch/arm/mach-omap2/pm-asm-offsets.s FORCE
 $(obj)/sleep33xx.o $(obj)/sleep43xx.o: include/generated/ti-pm-asm-offsets.h

 targets += pm-asm-offsets.s
+
+obj-$(CONFIG_OMAP_IOMMU) += omap-iommu.o
// SPDX-License-Identifier: GPL-2.0-only
/*
* OMAP IOMMU quirks for various TI SoCs
*
* Copyright (C) 2015-2019 Texas Instruments Incorporated - http://www.ti.com/
* Suman Anna <s-anna@ti.com>
*/
#include <linux/platform_device.h>
#include <linux/err.h>
#include "omap_hwmod.h"
#include "omap_device.h"
#include "powerdomain.h"
int omap_iommu_set_pwrdm_constraint(struct platform_device *pdev, bool request,
u8 *pwrst)
{
struct powerdomain *pwrdm;
struct omap_device *od;
u8 next_pwrst;
od = to_omap_device(pdev);
if (!od)
return -ENODEV;
if (od->hwmods_cnt != 1)
return -EINVAL;
pwrdm = omap_hwmod_get_pwrdm(od->hwmods[0]);
if (!pwrdm)
return -EINVAL;
if (request)
*pwrst = pwrdm_read_next_pwrst(pwrdm);
if (*pwrst > PWRDM_POWER_RET)
return 0;
next_pwrst = request ? PWRDM_POWER_ON : *pwrst;
return pwrdm_set_next_pwrst(pwrdm, next_pwrst);
}
@@ -8,10 +8,8 @@
 extern void no_iommu_init(void);
 #ifdef CONFIG_INTEL_IOMMU
 extern int force_iommu, no_iommu;
-extern int iommu_pass_through;
 extern int iommu_detected;
 #else
-#define iommu_pass_through	(0)
 #define no_iommu		(1)
 #define iommu_detected		(0)
 #endif
...
@@ -22,8 +22,6 @@ int force_iommu __read_mostly = 1;
 int force_iommu __read_mostly;
 #endif

-int iommu_pass_through;
-
 static int __init pci_iommu_init(void)
 {
 	if (iommu_detected)
...
@@ -4,7 +4,6 @@
 extern int force_iommu, no_iommu;
 extern int iommu_detected;
-extern int iommu_pass_through;

 /* 10 seconds */
 #define DMAR_OPERATION_TIMEOUT ((cycles_t) tsc_khz*10*1000)
...
 // SPDX-License-Identifier: GPL-2.0
 #include <linux/dma-direct.h>
 #include <linux/dma-debug.h>
+#include <linux/iommu.h>
 #include <linux/dmar.h>
 #include <linux/export.h>
 #include <linux/memblock.h>
@@ -34,21 +35,6 @@ int no_iommu __read_mostly;
 /* Set this to 1 if there is a HW IOMMU in the system */
 int iommu_detected __read_mostly = 0;

-/*
- * This variable becomes 1 if iommu=pt is passed on the kernel command line.
- * If this variable is 1, IOMMU implementations do no DMA translation for
- * devices and allow every device to access to whole physical memory. This is
- * useful if a user wants to use an IOMMU only for KVM device assignment to
- * guests and not for driver dma translation.
- * It is also possible to disable by default in kernel config, and enable with
- * iommu=nopt at boot time.
- */
-#ifdef CONFIG_IOMMU_DEFAULT_PASSTHROUGH
-int iommu_pass_through __read_mostly = 1;
-#else
-int iommu_pass_through __read_mostly;
-#endif
-
 extern struct iommu_table_entry __iommu_table[], __iommu_table_end[];

 void __init pci_iommu_alloc(void)
@@ -120,9 +106,9 @@ static __init int iommu_setup(char *p)
 		swiotlb = 1;
 #endif
 		if (!strncmp(p, "pt", 2))
-			iommu_pass_through = 1;
+			iommu_set_default_passthrough(true);
 		if (!strncmp(p, "nopt", 4))
-			iommu_pass_through = 0;
+			iommu_set_default_translated(true);

 		gart_parse_options(p);
...
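The removed x86 global is replaced by state kept in the IOMMU core. A rough
sketch of the new division of labour, using only the helpers that appear in
this series (boot code records the request, drivers query the result):

	/* Boot/arch code: record what the user asked for on the command line. */
	if (!strncmp(p, "pt", 2))
		iommu_set_default_passthrough(true);	/* true = came from cmdline */
	if (!strncmp(p, "nopt", 4))
		iommu_set_default_translated(true);

	/* Driver code: ask the core instead of reading iommu_pass_through. */
	if (iommu_default_passthrough())
		swiotlb = 1;	/* e.g. keep swiotlb around for direct DMA */

The point of the indirection is that the core can now combine the command
line, the kernel config default and per-group overrides in one place.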
@@ -222,7 +222,7 @@ void panfrost_mmu_unmap(struct panfrost_gem_object *bo)
 		size_t unmapped_page;
 		size_t pgsize = get_pgsize(iova, len - unmapped_len);

-		unmapped_page = ops->unmap(ops, iova, pgsize);
+		unmapped_page = ops->unmap(ops, iova, pgsize, NULL);
 		if (!unmapped_page)
 			break;
@@ -247,20 +247,28 @@ static void mmu_tlb_inv_context_s1(void *cookie)
 	mmu_hw_do_operation(pfdev, 0, 0, ~0UL, AS_COMMAND_FLUSH_MEM);
 }

-static void mmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
-				     size_t granule, bool leaf, void *cookie)
-{}
-
 static void mmu_tlb_sync_context(void *cookie)
 {
 	//struct panfrost_device *pfdev = cookie;
 	// TODO: Wait 1000 GPU cycles for HW_ISSUE_6367/T60X
 }

-static const struct iommu_gather_ops mmu_tlb_ops = {
+static void mmu_tlb_flush_walk(unsigned long iova, size_t size, size_t granule,
+			       void *cookie)
+{
+	mmu_tlb_sync_context(cookie);
+}
+
+static void mmu_tlb_flush_leaf(unsigned long iova, size_t size, size_t granule,
+			       void *cookie)
+{
+	mmu_tlb_sync_context(cookie);
+}
+
+static const struct iommu_flush_ops mmu_tlb_ops = {
 	.tlb_flush_all	= mmu_tlb_inv_context_s1,
-	.tlb_add_flush	= mmu_tlb_inv_range_nosync,
-	.tlb_sync	= mmu_tlb_sync_context,
+	.tlb_flush_walk = mmu_tlb_flush_walk,
+	.tlb_flush_leaf = mmu_tlb_flush_leaf,
 };

 static const char *access_type_name(struct panfrost_device *pfdev,
...
@@ -182,6 +182,7 @@ config INTEL_IOMMU
 	select IOMMU_IOVA
 	select NEED_DMA_MAP_STATE
 	select DMAR_TABLE
+	select SWIOTLB
 	help
 	  DMA remapping (DMAR) devices support enables independent address
 	  translations for Direct Memory Access (DMA) from devices.
...
@@ -10,13 +10,14 @@ obj-$(CONFIG_IOMMU_IO_PGTABLE_LPAE) += io-pgtable-arm.o
 obj-$(CONFIG_IOMMU_IOVA) += iova.o
 obj-$(CONFIG_OF_IOMMU)	+= of_iommu.o
 obj-$(CONFIG_MSM_IOMMU) += msm_iommu.o
-obj-$(CONFIG_AMD_IOMMU) += amd_iommu.o amd_iommu_init.o
+obj-$(CONFIG_AMD_IOMMU) += amd_iommu.o amd_iommu_init.o amd_iommu_quirks.o
 obj-$(CONFIG_AMD_IOMMU_DEBUGFS) += amd_iommu_debugfs.o
 obj-$(CONFIG_AMD_IOMMU_V2) += amd_iommu_v2.o
-obj-$(CONFIG_ARM_SMMU) += arm-smmu.o
+obj-$(CONFIG_ARM_SMMU) += arm-smmu.o arm-smmu-impl.o
 obj-$(CONFIG_ARM_SMMU_V3) += arm-smmu-v3.o
 obj-$(CONFIG_DMAR_TABLE) += dmar.o
 obj-$(CONFIG_INTEL_IOMMU) += intel-iommu.o intel-pasid.o
+obj-$(CONFIG_INTEL_IOMMU) += intel-trace.o
 obj-$(CONFIG_INTEL_IOMMU_DEBUGFS) += intel-iommu-debugfs.o
 obj-$(CONFIG_INTEL_IOMMU_SVM) += intel-svm.o
 obj-$(CONFIG_IPMMU_VMSA) += ipmmu-vmsa.o
...
@@ -436,7 +436,7 @@ static int iommu_init_device(struct device *dev)
 	 * invalid address), we ignore the capability for the device so
 	 * it'll be forced to go into translation mode.
 	 */
-	if ((iommu_pass_through || !amd_iommu_force_isolation) &&
+	if ((iommu_default_passthrough() || !amd_iommu_force_isolation) &&
 	    dev_is_pci(dev) && pci_iommuv2_capable(to_pci_dev(dev))) {
 		struct amd_iommu *iommu;
@@ -2256,7 +2256,7 @@ static int amd_iommu_add_device(struct device *dev)
 	BUG_ON(!dev_data);

-	if (iommu_pass_through || dev_data->iommu_v2)
+	if (dev_data->iommu_v2)
 		iommu_request_dm_for_dev(dev);

 	/* Domains are initialized for this device - have a look what we ended up with */
@@ -2577,7 +2577,9 @@ static int map_sg(struct device *dev, struct scatterlist *sglist,
 			bus_addr  = address + s->dma_address + (j << PAGE_SHIFT);
 			phys_addr = (sg_phys(s) & PAGE_MASK) + (j << PAGE_SHIFT);
-			ret = iommu_map_page(domain, bus_addr, phys_addr, PAGE_SIZE, prot, GFP_ATOMIC);
+			ret = iommu_map_page(domain, bus_addr, phys_addr,
+					     PAGE_SIZE, prot,
+					     GFP_ATOMIC | __GFP_NOWARN);
 			if (ret)
 				goto out_unmap;
@@ -2835,7 +2837,7 @@ int __init amd_iommu_init_api(void)

 int __init amd_iommu_init_dma_ops(void)
 {
-	swiotlb        = (iommu_pass_through || sme_me_mask) ? 1 : 0;
+	swiotlb        = (iommu_default_passthrough() || sme_me_mask) ? 1 : 0;
 	iommu_detected = 1;

 	if (amd_iommu_unmap_flush)
@@ -3085,7 +3087,8 @@ static int amd_iommu_map(struct iommu_domain *dom, unsigned long iova,
 }

 static size_t amd_iommu_unmap(struct iommu_domain *dom, unsigned long iova,
-			      size_t page_size)
+			      size_t page_size,
+			      struct iommu_iotlb_gather *gather)
 {
 	struct protection_domain *domain = to_pdomain(dom);
 	size_t unmap_size;
@@ -3226,9 +3229,10 @@ static void amd_iommu_flush_iotlb_all(struct iommu_domain *domain)
 	domain_flush_complete(dom);
 }

-static void amd_iommu_iotlb_range_add(struct iommu_domain *domain,
-				      unsigned long iova, size_t size)
+static void amd_iommu_iotlb_sync(struct iommu_domain *domain,
+				 struct iommu_iotlb_gather *gather)
 {
+	amd_iommu_flush_iotlb_all(domain);
 }

 const struct iommu_ops amd_iommu_ops = {
@@ -3249,8 +3253,7 @@ const struct iommu_ops amd_iommu_ops = {
 	.is_attach_deferred = amd_iommu_is_attach_deferred,
 	.pgsize_bitmap	= AMD_IOMMU_PGSIZES,
 	.flush_iotlb_all = amd_iommu_flush_iotlb_all,
-	.iotlb_range_add = amd_iommu_iotlb_range_add,
-	.iotlb_sync = amd_iommu_flush_iotlb_all,
+	.iotlb_sync = amd_iommu_iotlb_sync,
 };

 /*****************************************************************************
@@ -4343,13 +4346,62 @@ static const struct irq_domain_ops amd_ir_domain_ops = {
 	.deactivate = irq_remapping_deactivate,
 };

+int amd_iommu_activate_guest_mode(void *data)
+{
+	struct amd_ir_data *ir_data = (struct amd_ir_data *)data;
+	struct irte_ga *entry = (struct irte_ga *) ir_data->entry;
+
+	if (!AMD_IOMMU_GUEST_IR_VAPIC(amd_iommu_guest_ir) ||
+	    !entry || entry->lo.fields_vapic.guest_mode)
+		return 0;
+
+	entry->lo.val = 0;
+	entry->hi.val = 0;
+
+	entry->lo.fields_vapic.guest_mode  = 1;
+	entry->lo.fields_vapic.ga_log_intr = 1;
+	entry->hi.fields.ga_root_ptr       = ir_data->ga_root_ptr;
+	entry->hi.fields.vector            = ir_data->ga_vector;
+	entry->lo.fields_vapic.ga_tag      = ir_data->ga_tag;
+
+	return modify_irte_ga(ir_data->irq_2_irte.devid,
+			      ir_data->irq_2_irte.index, entry, NULL);
+}
+EXPORT_SYMBOL(amd_iommu_activate_guest_mode);
+
+int amd_iommu_deactivate_guest_mode(void *data)
+{
+	struct amd_ir_data *ir_data = (struct amd_ir_data *)data;
+	struct irte_ga *entry = (struct irte_ga *) ir_data->entry;
+	struct irq_cfg *cfg = ir_data->cfg;
+
+	if (!AMD_IOMMU_GUEST_IR_VAPIC(amd_iommu_guest_ir) ||
+	    !entry || !entry->lo.fields_vapic.guest_mode)
+		return 0;
+
+	entry->lo.val = 0;
+	entry->hi.val = 0;
+
+	entry->lo.fields_remap.dm       = apic->irq_dest_mode;
+	entry->lo.fields_remap.int_type = apic->irq_delivery_mode;
+	entry->hi.fields.vector         = cfg->vector;
+	entry->lo.fields_remap.destination =
+				APICID_TO_IRTE_DEST_LO(cfg->dest_apicid);
+	entry->hi.fields.destination =
+				APICID_TO_IRTE_DEST_HI(cfg->dest_apicid);
+
+	return modify_irte_ga(ir_data->irq_2_irte.devid,
+			      ir_data->irq_2_irte.index, entry, NULL);
+}
+EXPORT_SYMBOL(amd_iommu_deactivate_guest_mode);
+
 static int amd_ir_set_vcpu_affinity(struct irq_data *data, void *vcpu_info)
 {
+	int ret;
 	struct amd_iommu *iommu;
 	struct amd_iommu_pi_data *pi_data = vcpu_info;
 	struct vcpu_data *vcpu_pi_info = pi_data->vcpu_data;
 	struct amd_ir_data *ir_data = data->chip_data;
-	struct irte_ga *irte = (struct irte_ga *) ir_data->entry;
 	struct irq_2_irte *irte_info = &ir_data->irq_2_irte;
 	struct iommu_dev_data *dev_data = search_dev_data(irte_info->devid);
@@ -4360,6 +4412,7 @@ static int amd_ir_set_vcpu_affinity(struct irq_data *data, void *vcpu_info)
 	if (!dev_data || !dev_data->use_vapic)
 		return 0;

+	ir_data->cfg = irqd_cfg(data);
 	pi_data->ir_data = ir_data;

 	/* Note:
@@ -4378,37 +4431,24 @@ static int amd_ir_set_vcpu_affinity(struct irq_data *data, void *vcpu_info)
 	pi_data->prev_ga_tag = ir_data->cached_ga_tag;
 	if (pi_data->is_guest_mode) {
-		/* Setting */
-		irte->hi.fields.ga_root_ptr = (pi_data->base >> 12);
-		irte->hi.fields.vector = vcpu_pi_info->vector;
-		irte->lo.fields_vapic.ga_log_intr = 1;
-		irte->lo.fields_vapic.guest_mode = 1;
-		irte->lo.fields_vapic.ga_tag = pi_data->ga_tag;
-
-		ir_data->cached_ga_tag = pi_data->ga_tag;
+		ir_data->ga_root_ptr = (pi_data->base >> 12);
+		ir_data->ga_vector = vcpu_pi_info->vector;
+		ir_data->ga_tag = pi_data->ga_tag;
+		ret = amd_iommu_activate_guest_mode(ir_data);
+		if (!ret)
+			ir_data->cached_ga_tag = pi_data->ga_tag;
 	} else {
-		/* Un-Setting */
-		struct irq_cfg *cfg = irqd_cfg(data);
-
-		irte->hi.val = 0;
-		irte->lo.val = 0;
-		irte->hi.fields.vector = cfg->vector;
-		irte->lo.fields_remap.guest_mode = 0;
-		irte->lo.fields_remap.destination =
-				APICID_TO_IRTE_DEST_LO(cfg->dest_apicid);
-		irte->hi.fields.destination =
-				APICID_TO_IRTE_DEST_HI(cfg->dest_apicid);
-		irte->lo.fields_remap.int_type = apic->irq_delivery_mode;
-		irte->lo.fields_remap.dm = apic->irq_dest_mode;
+		ret = amd_iommu_deactivate_guest_mode(ir_data);

 		/*
 		 * This communicates the ga_tag back to the caller
 		 * so that it can do all the necessary clean up.
 		 */
-		ir_data->cached_ga_tag = 0;
+		if (!ret)
+			ir_data->cached_ga_tag = 0;
 	}

-	return modify_irte_ga(irte_info->devid, irte_info->index, irte, ir_data);
+	return ret;
 }
...
/* SPDX-License-Identifier: GPL-2.0-only */

#ifndef AMD_IOMMU_H
#define AMD_IOMMU_H

int __init add_special_device(u8 type, u8 id, u16 *devid, bool cmd_line);

#ifdef CONFIG_DMI
void amd_iommu_apply_ivrs_quirks(void);
#else
static inline void amd_iommu_apply_ivrs_quirks(void) { }
#endif

#endif
@@ -32,6 +32,7 @@
 #include <asm/irq_remapping.h>
 #include <linux/crash_dump.h>

+#include "amd_iommu.h"
 #include "amd_iommu_proto.h"
 #include "amd_iommu_types.h"
 #include "irq_remapping.h"
@@ -1002,7 +1003,7 @@ static void __init set_dev_entry_from_acpi(struct amd_iommu *iommu,
 	set_iommu_for_device(iommu, devid);
 }

-static int __init add_special_device(u8 type, u8 id, u16 *devid, bool cmd_line)
+int __init add_special_device(u8 type, u8 id, u16 *devid, bool cmd_line)
 {
 	struct devid_map *entry;
 	struct list_head *list;
@@ -1153,6 +1154,8 @@ static int __init init_iommu_from_acpi(struct amd_iommu *iommu,
 	if (ret)
 		return ret;

+	amd_iommu_apply_ivrs_quirks();
+
 	/*
 	 * First save the recommended feature enable bits from ACPI
 	 */
...
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Quirks for AMD IOMMU
*
* Copyright (C) 2019 Kai-Heng Feng <kai.heng.feng@canonical.com>
*/
#ifdef CONFIG_DMI
#include <linux/dmi.h>
#include "amd_iommu.h"
#define IVHD_SPECIAL_IOAPIC 1
struct ivrs_quirk_entry {
u8 id;
u16 devid;
};
enum {
DELL_INSPIRON_7375 = 0,
DELL_LATITUDE_5495,
LENOVO_IDEAPAD_330S_15ARR,
};
static const struct ivrs_quirk_entry ivrs_ioapic_quirks[][3] __initconst = {
/* ivrs_ioapic[4]=00:14.0 ivrs_ioapic[5]=00:00.2 */
[DELL_INSPIRON_7375] = {
{ .id = 4, .devid = 0xa0 },
{ .id = 5, .devid = 0x2 },
{}
},
/* ivrs_ioapic[4]=00:14.0 */
[DELL_LATITUDE_5495] = {
{ .id = 4, .devid = 0xa0 },
{}
},
/* ivrs_ioapic[32]=00:14.0 */
[LENOVO_IDEAPAD_330S_15ARR] = {
{ .id = 32, .devid = 0xa0 },
{}
},
{}
};
static int __init ivrs_ioapic_quirk_cb(const struct dmi_system_id *d)
{
const struct ivrs_quirk_entry *i;
for (i = d->driver_data; i->id != 0 && i->devid != 0; i++)
add_special_device(IVHD_SPECIAL_IOAPIC, i->id, (u16 *)&i->devid, 0);
return 0;
}
static const struct dmi_system_id ivrs_quirks[] __initconst = {
{
.callback = ivrs_ioapic_quirk_cb,
.ident = "Dell Inspiron 7375",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 7375"),
},
.driver_data = (void *)&ivrs_ioapic_quirks[DELL_INSPIRON_7375],
},
{
.callback = ivrs_ioapic_quirk_cb,
.ident = "Dell Latitude 5495",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
DMI_MATCH(DMI_PRODUCT_NAME, "Latitude 5495"),
},
.driver_data = (void *)&ivrs_ioapic_quirks[DELL_LATITUDE_5495],
},
{
.callback = ivrs_ioapic_quirk_cb,
.ident = "Lenovo ideapad 330S-15ARR",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
DMI_MATCH(DMI_PRODUCT_NAME, "81FB"),
},
.driver_data = (void *)&ivrs_ioapic_quirks[LENOVO_IDEAPAD_330S_15ARR],
},
{}
};
void __init amd_iommu_apply_ivrs_quirks(void)
{
dmi_check_system(ivrs_quirks);
}
#endif
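
For reference, the quirk table above encodes exactly what an affected user
would otherwise have had to type by hand; per the comments in the table,
the Dell Inspiron 7375 entry is equivalent to booting with

	ivrs_ioapic[4]=00:14.0 ivrs_ioapic[5]=00:00.2

so the DMI match simply removes the need for the manual override.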
@@ -873,6 +873,15 @@ struct amd_ir_data {
 	struct msi_msg msi_entry;
 	void *entry;    /* Pointer to union irte or struct irte_ga */
 	void *ref;      /* Pointer to the actual irte */
+
+	/**
+	 * Store information for activate/de-activate
+	 * Guest virtual APIC mode during runtime.
+	 */
+	struct irq_cfg *cfg;
+	int ga_vector;
+	int ga_root_ptr;
+	int ga_tag;
 };

 struct amd_irte_ops {
...
// SPDX-License-Identifier: GPL-2.0-only
// Miscellaneous Arm SMMU implementation and integration quirks
// Copyright (C) 2019 Arm Limited
#define pr_fmt(fmt) "arm-smmu: " fmt
#include <linux/bitfield.h>
#include <linux/of.h>
#include "arm-smmu.h"
static int arm_smmu_gr0_ns(int offset)
{
	switch (offset) {
case ARM_SMMU_GR0_sCR0:
case ARM_SMMU_GR0_sACR:
case ARM_SMMU_GR0_sGFSR:
case ARM_SMMU_GR0_sGFSYNR0:
case ARM_SMMU_GR0_sGFSYNR1:
case ARM_SMMU_GR0_sGFSYNR2:
return offset + 0x400;
default:
return offset;
}
}
static u32 arm_smmu_read_ns(struct arm_smmu_device *smmu, int page,
int offset)
{
if (page == ARM_SMMU_GR0)
offset = arm_smmu_gr0_ns(offset);
return readl_relaxed(arm_smmu_page(smmu, page) + offset);
}
static void arm_smmu_write_ns(struct arm_smmu_device *smmu, int page,
int offset, u32 val)
{
if (page == ARM_SMMU_GR0)
offset = arm_smmu_gr0_ns(offset);
writel_relaxed(val, arm_smmu_page(smmu, page) + offset);
}
/* Since we don't care for sGFAR, we can do without 64-bit accessors */
static const struct arm_smmu_impl calxeda_impl = {
.read_reg = arm_smmu_read_ns,
.write_reg = arm_smmu_write_ns,
};
struct cavium_smmu {
struct arm_smmu_device smmu;
u32 id_base;
};
static int cavium_cfg_probe(struct arm_smmu_device *smmu)
{
static atomic_t context_count = ATOMIC_INIT(0);
struct cavium_smmu *cs = container_of(smmu, struct cavium_smmu, smmu);
/*
* Cavium CN88xx erratum #27704.
* Ensure ASID and VMID allocation is unique across all SMMUs in
* the system.
*/
cs->id_base = atomic_fetch_add(smmu->num_context_banks, &context_count);
dev_notice(smmu->dev, "\tenabling workaround for Cavium erratum 27704\n");
return 0;
}
static int cavium_init_context(struct arm_smmu_domain *smmu_domain)
{
struct cavium_smmu *cs = container_of(smmu_domain->smmu,
struct cavium_smmu, smmu);
if (smmu_domain->stage == ARM_SMMU_DOMAIN_S2)
smmu_domain->cfg.vmid += cs->id_base;
else
smmu_domain->cfg.asid += cs->id_base;
return 0;
}
static const struct arm_smmu_impl cavium_impl = {
.cfg_probe = cavium_cfg_probe,
.init_context = cavium_init_context,
};
static struct arm_smmu_device *cavium_smmu_impl_init(struct arm_smmu_device *smmu)
{
struct cavium_smmu *cs;
cs = devm_kzalloc(smmu->dev, sizeof(*cs), GFP_KERNEL);
if (!cs)
return ERR_PTR(-ENOMEM);
cs->smmu = *smmu;
cs->smmu.impl = &cavium_impl;
devm_kfree(smmu->dev, smmu);
return &cs->smmu;
}
#define ARM_MMU500_ACTLR_CPRE (1 << 1)
#define ARM_MMU500_ACR_CACHE_LOCK (1 << 26)
#define ARM_MMU500_ACR_S2CRB_TLBEN (1 << 10)
#define ARM_MMU500_ACR_SMTNMB_TLBEN (1 << 8)
static int arm_mmu500_reset(struct arm_smmu_device *smmu)
{
u32 reg, major;
int i;
/*
* On MMU-500 r2p0 onwards we need to clear ACR.CACHE_LOCK before
* writes to the context bank ACTLRs will stick. And we just hope that
* Secure has also cleared SACR.CACHE_LOCK for this to take effect...
*/
reg = arm_smmu_gr0_read(smmu, ARM_SMMU_GR0_ID7);
major = FIELD_GET(ID7_MAJOR, reg);
reg = arm_smmu_gr0_read(smmu, ARM_SMMU_GR0_sACR);
if (major >= 2)
reg &= ~ARM_MMU500_ACR_CACHE_LOCK;
/*
* Allow unmatched Stream IDs to allocate bypass
* TLB entries for reduced latency.
*/
reg |= ARM_MMU500_ACR_SMTNMB_TLBEN | ARM_MMU500_ACR_S2CRB_TLBEN;
arm_smmu_gr0_write(smmu, ARM_SMMU_GR0_sACR, reg);
/*
* Disable MMU-500's not-particularly-beneficial next-page
* prefetcher for the sake of errata #841119 and #826419.
*/
for (i = 0; i < smmu->num_context_banks; ++i) {
reg = arm_smmu_cb_read(smmu, i, ARM_SMMU_CB_ACTLR);
reg &= ~ARM_MMU500_ACTLR_CPRE;
arm_smmu_cb_write(smmu, i, ARM_SMMU_CB_ACTLR, reg);
}
return 0;
}
static const struct arm_smmu_impl arm_mmu500_impl = {
.reset = arm_mmu500_reset,
};
struct arm_smmu_device *arm_smmu_impl_init(struct arm_smmu_device *smmu)
{
/*
* We will inevitably have to combine model-specific implementation
* quirks with platform-specific integration quirks, but everything
* we currently support happens to work out as straightforward
* mutually-exclusive assignments.
*/
switch (smmu->model) {
case ARM_MMU500:
smmu->impl = &arm_mmu500_impl;
break;
case CAVIUM_SMMUV2:
return cavium_smmu_impl_init(smmu);
default:
break;
}
if (of_property_read_bool(smmu->dev->of_node,
"calxeda,smmu-secure-config-access"))
smmu->impl = &calxeda_impl;
return smmu;
}
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* IOMMU API for ARM architected SMMU implementations.
*
* Copyright (C) 2013 ARM Limited
*
* Author: Will Deacon <will.deacon@arm.com>
*/
#ifndef _ARM_SMMU_REGS_H
#define _ARM_SMMU_REGS_H
/* Configuration registers */
#define ARM_SMMU_GR0_sCR0 0x0
#define sCR0_CLIENTPD (1 << 0)
#define sCR0_GFRE (1 << 1)
#define sCR0_GFIE (1 << 2)
#define sCR0_EXIDENABLE (1 << 3)
#define sCR0_GCFGFRE (1 << 4)
#define sCR0_GCFGFIE (1 << 5)
#define sCR0_USFCFG (1 << 10)
#define sCR0_VMIDPNE (1 << 11)
#define sCR0_PTM (1 << 12)
#define sCR0_FB (1 << 13)
#define sCR0_VMID16EN (1 << 31)
#define sCR0_BSU_SHIFT 14
#define sCR0_BSU_MASK 0x3
/* Auxiliary Configuration register */
#define ARM_SMMU_GR0_sACR 0x10
/* Identification registers */
#define ARM_SMMU_GR0_ID0 0x20
#define ARM_SMMU_GR0_ID1 0x24
#define ARM_SMMU_GR0_ID2 0x28
#define ARM_SMMU_GR0_ID3 0x2c
#define ARM_SMMU_GR0_ID4 0x30
#define ARM_SMMU_GR0_ID5 0x34
#define ARM_SMMU_GR0_ID6 0x38
#define ARM_SMMU_GR0_ID7 0x3c
#define ARM_SMMU_GR0_sGFSR 0x48
#define ARM_SMMU_GR0_sGFSYNR0 0x50
#define ARM_SMMU_GR0_sGFSYNR1 0x54
#define ARM_SMMU_GR0_sGFSYNR2 0x58
#define ID0_S1TS (1 << 30)
#define ID0_S2TS (1 << 29)
#define ID0_NTS (1 << 28)
#define ID0_SMS (1 << 27)
#define ID0_ATOSNS (1 << 26)
#define ID0_PTFS_NO_AARCH32 (1 << 25)
#define ID0_PTFS_NO_AARCH32S (1 << 24)
#define ID0_CTTW (1 << 14)
#define ID0_NUMIRPT_SHIFT 16
#define ID0_NUMIRPT_MASK 0xff
#define ID0_NUMSIDB_SHIFT 9
#define ID0_NUMSIDB_MASK 0xf
#define ID0_EXIDS (1 << 8)
#define ID0_NUMSMRG_SHIFT 0
#define ID0_NUMSMRG_MASK 0xff
#define ID1_PAGESIZE (1 << 31)
#define ID1_NUMPAGENDXB_SHIFT 28
#define ID1_NUMPAGENDXB_MASK 7
#define ID1_NUMS2CB_SHIFT 16
#define ID1_NUMS2CB_MASK 0xff
#define ID1_NUMCB_SHIFT 0
#define ID1_NUMCB_MASK 0xff
#define ID2_OAS_SHIFT 4
#define ID2_OAS_MASK 0xf
#define ID2_IAS_SHIFT 0
#define ID2_IAS_MASK 0xf
#define ID2_UBS_SHIFT 8
#define ID2_UBS_MASK 0xf
#define ID2_PTFS_4K (1 << 12)
#define ID2_PTFS_16K (1 << 13)
#define ID2_PTFS_64K (1 << 14)
#define ID2_VMID16 (1 << 15)
#define ID7_MAJOR_SHIFT 4
#define ID7_MAJOR_MASK 0xf
/* Global TLB invalidation */
#define ARM_SMMU_GR0_TLBIVMID 0x64
#define ARM_SMMU_GR0_TLBIALLNSNH 0x68
#define ARM_SMMU_GR0_TLBIALLH 0x6c
#define ARM_SMMU_GR0_sTLBGSYNC 0x70
#define ARM_SMMU_GR0_sTLBGSTATUS 0x74
#define sTLBGSTATUS_GSACTIVE (1 << 0)
/* Stream mapping registers */
#define ARM_SMMU_GR0_SMR(n) (0x800 + ((n) << 2))
#define SMR_VALID (1 << 31)
#define SMR_MASK_SHIFT 16
#define SMR_ID_SHIFT 0
#define ARM_SMMU_GR0_S2CR(n) (0xc00 + ((n) << 2))
#define S2CR_CBNDX_SHIFT 0
#define S2CR_CBNDX_MASK 0xff
#define S2CR_EXIDVALID (1 << 10)
#define S2CR_TYPE_SHIFT 16
#define S2CR_TYPE_MASK 0x3
enum arm_smmu_s2cr_type {
S2CR_TYPE_TRANS,
S2CR_TYPE_BYPASS,
S2CR_TYPE_FAULT,
};
#define S2CR_PRIVCFG_SHIFT 24
#define S2CR_PRIVCFG_MASK 0x3
enum arm_smmu_s2cr_privcfg {
S2CR_PRIVCFG_DEFAULT,
S2CR_PRIVCFG_DIPAN,
S2CR_PRIVCFG_UNPRIV,
S2CR_PRIVCFG_PRIV,
};
/* Context bank attribute registers */
#define ARM_SMMU_GR1_CBAR(n) (0x0 + ((n) << 2))
#define CBAR_VMID_SHIFT 0
#define CBAR_VMID_MASK 0xff
#define CBAR_S1_BPSHCFG_SHIFT 8
#define CBAR_S1_BPSHCFG_MASK 3
#define CBAR_S1_BPSHCFG_NSH 3
#define CBAR_S1_MEMATTR_SHIFT 12
#define CBAR_S1_MEMATTR_MASK 0xf
#define CBAR_S1_MEMATTR_WB 0xf
#define CBAR_TYPE_SHIFT 16
#define CBAR_TYPE_MASK 0x3
#define CBAR_TYPE_S2_TRANS (0 << CBAR_TYPE_SHIFT)
#define CBAR_TYPE_S1_TRANS_S2_BYPASS (1 << CBAR_TYPE_SHIFT)
#define CBAR_TYPE_S1_TRANS_S2_FAULT (2 << CBAR_TYPE_SHIFT)
#define CBAR_TYPE_S1_TRANS_S2_TRANS (3 << CBAR_TYPE_SHIFT)
#define CBAR_IRPTNDX_SHIFT 24
#define CBAR_IRPTNDX_MASK 0xff
#define ARM_SMMU_GR1_CBFRSYNRA(n) (0x400 + ((n) << 2))
#define ARM_SMMU_GR1_CBA2R(n) (0x800 + ((n) << 2))
#define CBA2R_RW64_32BIT (0 << 0)
#define CBA2R_RW64_64BIT (1 << 0)
#define CBA2R_VMID_SHIFT 16
#define CBA2R_VMID_MASK 0xffff
#define ARM_SMMU_CB_SCTLR 0x0
#define ARM_SMMU_CB_ACTLR 0x4
#define ARM_SMMU_CB_RESUME 0x8
#define ARM_SMMU_CB_TTBCR2 0x10
#define ARM_SMMU_CB_TTBR0 0x20
#define ARM_SMMU_CB_TTBR1 0x28
#define ARM_SMMU_CB_TTBCR 0x30
#define ARM_SMMU_CB_CONTEXTIDR 0x34
#define ARM_SMMU_CB_S1_MAIR0 0x38
#define ARM_SMMU_CB_S1_MAIR1 0x3c
#define ARM_SMMU_CB_PAR 0x50
#define ARM_SMMU_CB_FSR 0x58
#define ARM_SMMU_CB_FAR 0x60
#define ARM_SMMU_CB_FSYNR0 0x68
#define ARM_SMMU_CB_S1_TLBIVA 0x600
#define ARM_SMMU_CB_S1_TLBIASID 0x610
#define ARM_SMMU_CB_S1_TLBIVAL 0x620
#define ARM_SMMU_CB_S2_TLBIIPAS2 0x630
#define ARM_SMMU_CB_S2_TLBIIPAS2L 0x638
#define ARM_SMMU_CB_TLBSYNC 0x7f0
#define ARM_SMMU_CB_TLBSTATUS 0x7f4
#define ARM_SMMU_CB_ATS1PR 0x800
#define ARM_SMMU_CB_ATSR 0x8f0
#define SCTLR_S1_ASIDPNE (1 << 12)
#define SCTLR_CFCFG (1 << 7)
#define SCTLR_CFIE (1 << 6)
#define SCTLR_CFRE (1 << 5)
#define SCTLR_E (1 << 4)
#define SCTLR_AFE (1 << 2)
#define SCTLR_TRE (1 << 1)
#define SCTLR_M (1 << 0)
#define CB_PAR_F (1 << 0)
#define ATSR_ACTIVE (1 << 0)
#define RESUME_RETRY (0 << 0)
#define RESUME_TERMINATE (1 << 0)
#define TTBCR2_SEP_SHIFT 15
#define TTBCR2_SEP_UPSTREAM (0x7 << TTBCR2_SEP_SHIFT)
#define TTBCR2_AS (1 << 4)
#define TTBRn_ASID_SHIFT 48
#define FSR_MULTI (1 << 31)
#define FSR_SS (1 << 30)
#define FSR_UUT (1 << 8)
#define FSR_ASF (1 << 7)
#define FSR_TLBLKF (1 << 6)
#define FSR_TLBMCF (1 << 5)
#define FSR_EF (1 << 4)
#define FSR_PF (1 << 3)
#define FSR_AFF (1 << 2)
#define FSR_TF (1 << 1)
#define FSR_IGN (FSR_AFF | FSR_ASF | \
FSR_TLBMCF | FSR_TLBLKF)
#define FSR_FAULT (FSR_MULTI | FSR_SS | FSR_UUT | \
FSR_EF | FSR_PF | FSR_TF | FSR_IGN)
#define FSYNR0_WNR (1 << 4)
#endif /* _ARM_SMMU_REGS_H */
(3 file diffs collapsed)
@@ -303,13 +303,15 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
 		u64 size, struct device *dev)
 {
 	struct iommu_dma_cookie *cookie = domain->iova_cookie;
-	struct iova_domain *iovad = &cookie->iovad;
 	unsigned long order, base_pfn;
+	struct iova_domain *iovad;
 	int attr;

 	if (!cookie || cookie->type != IOMMU_DMA_IOVA_COOKIE)
 		return -EINVAL;

+	iovad = &cookie->iovad;
+
 	/* Use the smallest supported page size for IOVA granularity */
 	order = __ffs(domain->pgsize_bitmap);
 	base_pfn = max_t(unsigned long, 1, base >> order);
@@ -444,13 +446,18 @@ static void __iommu_dma_unmap(struct device *dev, dma_addr_t dma_addr,
 	struct iommu_dma_cookie *cookie = domain->iova_cookie;
 	struct iova_domain *iovad = &cookie->iovad;
 	size_t iova_off = iova_offset(iovad, dma_addr);
+	struct iommu_iotlb_gather iotlb_gather;
+	size_t unmapped;

 	dma_addr -= iova_off;
 	size = iova_align(iovad, size + iova_off);
+	iommu_iotlb_gather_init(&iotlb_gather);
+
+	unmapped = iommu_unmap_fast(domain, dma_addr, size, &iotlb_gather);
+	WARN_ON(unmapped != size);

-	WARN_ON(iommu_unmap_fast(domain, dma_addr, size) != size);
 	if (!cookie->fq_domain)
-		iommu_tlb_sync(domain);
+		iommu_tlb_sync(domain, &iotlb_gather);
+
 	iommu_dma_free_iova(cookie, dma_addr, size);
 }
...
@@ -1519,6 +1519,64 @@ static const char *dma_remap_fault_reasons[] =
 	"PCE for translation request specifies blocking",
 };

+static const char * const dma_remap_sm_fault_reasons[] = {
+	"SM: Invalid Root Table Address",
+	"SM: TTM 0 for request with PASID",
+	"SM: TTM 0 for page group request",
+	"Unknown", "Unknown", "Unknown", "Unknown", "Unknown", /* 0x33-0x37 */
+	"SM: Error attempting to access Root Entry",
+	"SM: Present bit in Root Entry is clear",
+	"SM: Non-zero reserved field set in Root Entry",
+	"Unknown", "Unknown", "Unknown", "Unknown", "Unknown", /* 0x3B-0x3F */
+	"SM: Error attempting to access Context Entry",
+	"SM: Present bit in Context Entry is clear",
+	"SM: Non-zero reserved field set in the Context Entry",
+	"SM: Invalid Context Entry",
+	"SM: DTE field in Context Entry is clear",
+	"SM: PASID Enable field in Context Entry is clear",
+	"SM: PASID is larger than the max in Context Entry",
+	"SM: PRE field in Context-Entry is clear",
+	"SM: RID_PASID field error in Context-Entry",
+	"Unknown", "Unknown", "Unknown", "Unknown", "Unknown", "Unknown", "Unknown", /* 0x49-0x4F */
+	"SM: Error attempting to access the PASID Directory Entry",
+	"SM: Present bit in Directory Entry is clear",
+	"SM: Non-zero reserved field set in PASID Directory Entry",
+	"Unknown", "Unknown", "Unknown", "Unknown", "Unknown", /* 0x53-0x57 */
+	"SM: Error attempting to access PASID Table Entry",
+	"SM: Present bit in PASID Table Entry is clear",
+	"SM: Non-zero reserved field set in PASID Table Entry",
+	"SM: Invalid Scalable-Mode PASID Table Entry",
+	"SM: ERE field is clear in PASID Table Entry",
+	"SM: SRE field is clear in PASID Table Entry",
+	"Unknown", "Unknown",/* 0x5E-0x5F */
+	"Unknown", "Unknown", "Unknown", "Unknown", "Unknown", "Unknown", "Unknown", "Unknown", /* 0x60-0x67 */
+	"Unknown", "Unknown", "Unknown", "Unknown", "Unknown", "Unknown", "Unknown", "Unknown", /* 0x68-0x6F */
+	"SM: Error attempting to access first-level paging entry",
+	"SM: Present bit in first-level paging entry is clear",
+	"SM: Non-zero reserved field set in first-level paging entry",
+	"SM: Error attempting to access FL-PML4 entry",
+	"SM: First-level entry address beyond MGAW in Nested translation",
+	"SM: Read permission error in FL-PML4 entry in Nested translation",
+	"SM: Read permission error in first-level paging entry in Nested translation",
+	"SM: Write permission error in first-level paging entry in Nested translation",
+	"SM: Error attempting to access second-level paging entry",
+	"SM: Read/Write permission error in second-level paging entry",
+	"SM: Non-zero reserved field set in second-level paging entry",
+	"SM: Invalid second-level page table pointer",
+	"SM: A/D bit update needed in second-level entry when set up in no snoop",
+	"Unknown", "Unknown", "Unknown", /* 0x7D-0x7F */
+	"SM: Address in first-level translation is not canonical",
+	"SM: U/S set 0 for first-level translation with user privilege",
+	"SM: No execute permission for request with PASID and ER=1",
+	"SM: Address beyond the DMA hardware max",
+	"SM: Second-level entry address beyond the max",
+	"SM: No write permission for Write/AtomicOp request",
+	"SM: No read permission for Read/AtomicOp request",
+	"SM: Invalid address-interrupt address",
+	"Unknown", "Unknown", "Unknown", "Unknown", "Unknown", "Unknown", "Unknown", "Unknown", /* 0x88-0x8F */
+	"SM: A/D bit update needed in first-level entry when set up in no snoop",
+};
+
 static const char *irq_remap_fault_reasons[] =
 {
 	"Detected reserved fields in the decoded interrupt-remapped request",
@@ -1536,6 +1594,10 @@ static const char *dmar_get_fault_reason(u8 fault_reason, int *fault_type)
 		     ARRAY_SIZE(irq_remap_fault_reasons))) {
 		*fault_type = INTR_REMAP;
 		return irq_remap_fault_reasons[fault_reason - 0x20];
+	} else if (fault_reason >= 0x30 && (fault_reason - 0x30 <
+			ARRAY_SIZE(dma_remap_sm_fault_reasons))) {
+		*fault_type = DMA_REMAP;
+		return dma_remap_sm_fault_reasons[fault_reason - 0x30];
 	} else if (fault_reason < ARRAY_SIZE(dma_remap_fault_reasons)) {
 		*fault_type = DMA_REMAP;
 		return dma_remap_fault_reasons[fault_reason];
@@ -1611,7 +1673,8 @@ void dmar_msi_read(int irq, struct msi_msg *msg)
 }

 static int dmar_fault_do_one(struct intel_iommu *iommu, int type,
-		u8 fault_reason, u16 source_id, unsigned long long addr)
+		u8 fault_reason, int pasid, u16 source_id,
+		unsigned long long addr)
 {
 	const char *reason;
 	int fault_type;
@@ -1624,10 +1687,11 @@ static int dmar_fault_do_one(struct intel_iommu *iommu, int type,
 			PCI_FUNC(source_id & 0xFF), addr >> 48,
 			fault_reason, reason);
 	else
-		pr_err("[%s] Request device [%02x:%02x.%d] fault addr %llx [fault reason %02d] %s\n",
+		pr_err("[%s] Request device [%02x:%02x.%d] PASID %x fault addr %llx [fault reason %02d] %s\n",
 		       type ? "DMA Read" : "DMA Write",
 		       source_id >> 8, PCI_SLOT(source_id & 0xFF),
-		       PCI_FUNC(source_id & 0xFF), addr, fault_reason, reason);
+		       PCI_FUNC(source_id & 0xFF), pasid, addr,
+		       fault_reason, reason);
 	return 0;
 }
@@ -1659,8 +1723,9 @@ irqreturn_t dmar_fault(int irq, void *dev_id)
 		u8 fault_reason;
 		u16 source_id;
 		u64 guest_addr;
-		int type;
+		int type, pasid;
 		u32 data;
+		bool pasid_present;

 		/* highest 32 bits */
 		data = readl(iommu->reg + reg +
@@ -1672,10 +1737,12 @@ irqreturn_t dmar_fault(int irq, void *dev_id)

 		fault_reason = dma_frcd_fault_reason(data);
 		type = dma_frcd_type(data);
+		pasid = dma_frcd_pasid_value(data);

 		data = readl(iommu->reg + reg +
 			     fault_index * PRIMARY_FAULT_REG_LEN + 8);
 		source_id = dma_frcd_source_id(data);
+		pasid_present = dma_frcd_pasid_present(data);

 		guest_addr = dmar_readq(iommu->reg + reg +
 				fault_index * PRIMARY_FAULT_REG_LEN);
 		guest_addr = dma_frcd_page_addr(guest_addr);
@@ -1688,7 +1755,9 @@ irqreturn_t dmar_fault(int irq, void *dev_id)
 		raw_spin_unlock_irqrestore(&iommu->register_lock, flag);

 		if (!ratelimited)
+			/* Using pasid -1 if pasid is not present */
 			dmar_fault_do_one(iommu, type, fault_reason,
+					  pasid_present ? pasid : -1,
 					  source_id, guest_addr);

 		fault_index++;
...
@@ -566,7 +566,7 @@ static void sysmmu_tlb_invalidate_entry(struct sysmmu_drvdata *data,

 static const struct iommu_ops exynos_iommu_ops;

-static int __init exynos_sysmmu_probe(struct platform_device *pdev)
+static int exynos_sysmmu_probe(struct platform_device *pdev)
 {
 	int irq, ret;
 	struct device *dev = &pdev->dev;
@@ -583,10 +583,8 @@ static int exynos_sysmmu_probe(struct platform_device *pdev)
 		return PTR_ERR(data->sfrbase);

 	irq = platform_get_irq(pdev, 0);
-	if (irq <= 0) {
-		dev_err(dev, "Unable to find IRQ resource\n");
+	if (irq <= 0)
 		return irq;
-	}

 	ret = devm_request_irq(dev, irq, exynos_sysmmu_irq, 0,
 			       dev_name(dev), data);
@@ -1130,7 +1128,8 @@ static void exynos_iommu_tlb_invalidate_entry(struct exynos_iommu_domain *domain
 }

 static size_t exynos_iommu_unmap(struct iommu_domain *iommu_domain,
-				 unsigned long l_iova, size_t size)
+				 unsigned long l_iova, size_t size,
+				 struct iommu_iotlb_gather *gather)
 {
 	struct exynos_iommu_domain *domain = to_exynos_domain(iommu_domain);
 	sysmmu_iova_t iova = (sysmmu_iova_t)l_iova;
...
(1 file diff collapsed)
// SPDX-License-Identifier: GPL-2.0
/*
* Intel IOMMU trace support
*
* Copyright (C) 2019 Intel Corporation
*
* Author: Lu Baolu <baolu.lu@linux.intel.com>
*/
#include <linux/string.h>
#include <linux/types.h>
#define CREATE_TRACE_POINTS
#include <trace/events/intel_iommu.h>
@@ -376,13 +376,13 @@ static int set_msi_sid_cb(struct pci_dev *pdev, u16 alias, void *opaque)
 {
 	struct set_msi_sid_data *data = opaque;

+	if (data->count == 0 || PCI_BUS_NUM(alias) == PCI_BUS_NUM(data->alias))
+		data->busmatch_count++;
+
 	data->pdev = pdev;
 	data->alias = alias;
 	data->count++;

-	if (PCI_BUS_NUM(alias) == pdev->bus->number)
-		data->busmatch_count++;
-
 	return 0;
 }
...
(3 file diffs collapsed)
@@ -577,7 +577,9 @@ void queue_iova(struct iova_domain *iovad,

 	spin_unlock_irqrestore(&fq->lock, flags);

-	if (atomic_cmpxchg(&iovad->fq_timer_on, 0, 1) == 0)
+	/* Avoid false sharing as much as possible. */
+	if (!atomic_read(&iovad->fq_timer_on) &&
+	    !atomic_cmpxchg(&iovad->fq_timer_on, 0, 1))
 		mod_timer(&iovad->fq_timer,
 			  jiffies + msecs_to_jiffies(IOVA_FQ_TIMEOUT));
 }
...
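The hunk is small but the reasoning is worth spelling out: atomic_cmpxchg()
dirties the cacheline even when the compare fails, so with many CPUs
queueing IOVAs the timer flag would bounce between caches constantly.
Reading the flag first keeps the common already-armed case read-only and
shareable. The same shape in isolation (variable and callee names are
illustrative, not kernel symbols):

	/* Arm a one-shot worker exactly once, cheaply under contention. */
	if (!atomic_read(&timer_armed) &&		/* cheap shared read */
	    !atomic_cmpxchg(&timer_armed, 0, 1))	/* claim only if still 0 */
		arm_timer();				/* hypothetical callee */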
(4 file diffs collapsed)
@@ -206,7 +206,7 @@ static void mtk_iommu_config(struct mtk_iommu_data *data,
 	for (i = 0; i < fwspec->num_ids; ++i) {
 		larbid = mt2701_m4u_to_larb(fwspec->ids[i]);
 		portid = mt2701_m4u_to_port(fwspec->ids[i]);
-		larb_mmu = &data->smi_imu.larb_imu[larbid];
+		larb_mmu = &data->larb_imu[larbid];

 		dev_dbg(dev, "%s iommu port: %d\n",
 			enable ? "enable" : "disable", portid);
@@ -324,7 +324,8 @@ static int mtk_iommu_map(struct iommu_domain *domain, unsigned long iova,
 }

 static size_t mtk_iommu_unmap(struct iommu_domain *domain,
-			      unsigned long iova, size_t size)
+			      unsigned long iova, size_t size,
+			      struct iommu_iotlb_gather *gather)
 {
 	struct mtk_iommu_domain *dom = to_mtk_domain(domain);
 	unsigned long flags;
@@ -610,14 +611,12 @@ static int mtk_iommu_probe(struct platform_device *pdev)
 			}
 		}

-		data->smi_imu.larb_imu[larb_nr].dev = &plarbdev->dev;
+		data->larb_imu[larb_nr].dev = &plarbdev->dev;

 		component_match_add_release(dev, &match, release_of,
 					    compare_of, larb_spec.np);
 		larb_nr++;
 	}
-	data->smi_imu.larb_nr = larb_nr;

 	platform_set_drvdata(pdev, data);

 	ret = mtk_iommu_hw_init(data);
...
(24 file diffs collapsed)