Commit 4e3a16ee authored by Linus Torvalds

Merge tag 'iommu-updates-v5.8' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu

Pull iommu updates from Joerg Roedel:
 "A big part of this is a change in how devices get connected to IOMMUs
  in the core code. It contains the change from the old add_device() /
  remove_device() to the new probe_device() / release_device()
  call-backs.

  As a result, functionality that was previously in the IOMMU drivers
  has been moved to the IOMMU core code, including the IOMMU group
  allocation for each device. The reason for this change was to get
  more robust allocation of default domains for the IOMMU groups.
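
  For orientation, the new per-device hooks in struct iommu_ops have
  the following shape, and a driver conversion follows the same pattern
  (an illustrative sketch only; the "foo" driver and its helpers are
  hypothetical, and the authoritative definitions live in
  include/linux/iommu.h):

    /* relevant struct iommu_ops members (simplified) */
    struct iommu_device *(*probe_device)(struct device *dev);
    void (*release_device)(struct device *dev);

    /* hypothetical driver "foo" converted to the new callbacks */
    struct foo_iommu {
        struct iommu_device iommu;  /* registered with iommu_device_register() */
        /* ... hardware state ... */
    };

    static struct iommu_device *foo_probe_device(struct device *dev)
    {
        struct foo_iommu *foo = foo_iommu_lookup(dev);  /* hypothetical lookup */

        if (!foo)
            return ERR_PTR(-ENODEV);

        /*
         * Note: no iommu_device_link() or iommu_group_get_for_dev()
         * here any more - the IOMMU core links the device and
         * allocates its group around this callback.
         */
        return &foo->iommu;
    }

    static void foo_release_device(struct device *dev)
    {
        /* undo whatever foo_probe_device() set up for @dev */
    }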

  A couple of fixes were necessary after this was merged into the IOMMU
  tree, but there are no known bugs left. The last fix is applied on top
  of the merge commit for the topic branches.

  Other than that change, we have:

   - Removal of the driver-private domain handling in the Intel VT-d
     driver. This was fragile code, and I am glad it is gone now.

   - More Intel VT-d updates from Lu Baolu:
      - Nested Shared Virtual Addressing (SVA) support in the Intel
        VT-d driver
      - Replacement of the Intel SVM interfaces with the common IOMMU
        SVA API (see the SVA sketch after this list)
      - SVA Page Request draining support

   - ARM-SMMU Updates from Will:
      - Avoid mapping reserved MMIO space on SMMUv3, so that it can be
        claimed by the PMU driver
      - Use xarray to manage ASIDs on SMMUv3
      - Reword confusing shutdown message
      - DT compatible string updates
      - Allow implementations to override the default domain type

   - A new IOMMU driver for the Allwinner Sun50i platform

   - Support for ATS is now disabled for untrusted devices (such as
     Thunderbolt-attached devices). This includes a PCI patch, acked by
     Bjorn (see the ATS sketch after this list).

   - Some cleanups to the AMD IOMMU driver to make more use of IOMMU
     core features.

   - Unification of some printk formats in the Intel and AMD IOMMU
     drivers and in the IOVA code.

   - Updates for DT bindings

   - A number of smaller fixes and cleanups"
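
  As a quick reference for the "common IOMMU SVA API" item above, the
  interface the Intel SVM code was converted to is used along these
  lines (an illustrative sketch only; error handling is trimmed and the
  authoritative prototypes are in include/linux/iommu.h):

    static int bind_mm_to_device(struct device *dev)
    {
        struct iommu_sva *handle;
        int pasid;

        /* Bind the current process' address space to @dev */
        handle = iommu_sva_bind_device(dev, current->mm, NULL);
        if (IS_ERR(handle))
            return PTR_ERR(handle);

        /* PASID the device must use to tag its DMA for this mm */
        pasid = iommu_sva_get_pasid(handle);
        dev_info(dev, "bound to PASID %d\n", pasid);

        /* ... program the PASID into the device, do work ... */

        iommu_sva_unbind_device(handle);
        return 0;
    }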

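  The ATS change is built around a new PCI helper, pci_ats_supported().
  Conceptually it behaves roughly like the sketch below (see
  drivers/pci/ats.c in this series for the real implementation):

    /* Sketch: a device may use ATS only if it has the ATS capability
     * (never set when ATS is disabled globally) and it is not marked
     * untrusted, e.g. an external Thunderbolt-attached device. */
    bool pci_ats_supported(struct pci_dev *dev)
    {
        if (!dev->ats_cap)
            return false;

        return !dev->untrusted;
    }

  IOMMU drivers then reduce their per-device checks to something like
  "dev_is_pci(dev) && pci_ats_supported(to_pci_dev(dev))", as can be
  seen in the SMMUv3 and VT-d hunks below.
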
* tag 'iommu-updates-v5.8' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (109 commits)
  iommu: Check for deferred attach in iommu_group_do_dma_attach()
  iommu/amd: Remove redundant devid checks
  iommu/amd: Store dev_data as device iommu private data
  iommu/amd: Merge private header files
  iommu/amd: Remove PD_DMA_OPS_MASK
  iommu/amd: Consolidate domain allocation/freeing
  iommu/amd: Free page-table in protection_domain_free()
  iommu/amd: Allocate page-table in protection_domain_init()
  iommu/amd: Let free_pagetable() not rely on domain->pt_root
  iommu/amd: Unexport get_dev_data()
  iommu/vt-d: Fix compile warning
  iommu/vt-d: Remove real DMA lookup in find_domain
  iommu/vt-d: Allocate domain info for real DMA sub-devices
  iommu/vt-d: Only clear real DMA device's context entries
  iommu: Remove iommu_sva_ops::mm_exit()
  uacce: Remove mm_exit() op
  iommu/sun50i: Constify sun50i_iommu_ops
  iommu/hyper-v: Constify hyperv_ir_domain_ops
  iommu/vt-d: Use pci_ats_supported()
  iommu/arm-smmu-v3: Use pci_ats_supported()
  ...
parents 9413b9a6 431275af
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/iommu/allwinner,sun50i-h6-iommu.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Allwinner H6 IOMMU Device Tree Bindings

maintainers:
  - Chen-Yu Tsai <wens@csie.org>
  - Maxime Ripard <mripard@kernel.org>

properties:
  "#iommu-cells":
    const: 1
    description:
      The content of the cell is the master ID.

  compatible:
    const: allwinner,sun50i-h6-iommu

  reg:
    maxItems: 1

  interrupts:
    maxItems: 1

  clocks:
    maxItems: 1

  resets:
    maxItems: 1

required:
  - "#iommu-cells"
  - compatible
  - reg
  - interrupts
  - clocks
  - resets

additionalProperties: false

examples:
  - |
      #include <dt-bindings/interrupt-controller/arm-gic.h>
      #include <dt-bindings/interrupt-controller/irq.h>
      #include <dt-bindings/clock/sun50i-h6-ccu.h>
      #include <dt-bindings/reset/sun50i-h6-ccu.h>

      iommu: iommu@30f0000 {
          compatible = "allwinner,sun50i-h6-iommu";
          reg = <0x030f0000 0x10000>;
          interrupts = <GIC_SPI 57 IRQ_TYPE_LEVEL_HIGH>;
          clocks = <&ccu CLK_BUS_IOMMU>;
          resets = <&ccu RST_BUS_IOMMU>;
          #iommu-cells = <1>;
      };
...
...@@ -42,7 +42,9 @@ properties: ...@@ -42,7 +42,9 @@ properties:
- const: arm,mmu-500 - const: arm,mmu-500
- const: arm,smmu-v2 - const: arm,smmu-v2
- items: - items:
- const: arm,mmu-401 - enum:
- arm,mmu-400
- arm,mmu-401
- const: arm,smmu-v1 - const: arm,smmu-v1
- enum: - enum:
- arm,smmu-v1 - arm,smmu-v1
......
...@@ -184,6 +184,9 @@ For the compatible strings below the following phandle references are required: ...@@ -184,6 +184,9 @@ For the compatible strings below the following phandle references are required:
followed by the offset within syscon for conn_box_spare0 followed by the offset within syscon for conn_box_spare0
register. register.
The Hexagon node must contain iommus property as described in ../iommu/iommu.txt
on platforms which do not have TrustZone.
= SUBNODES: = SUBNODES:
The Hexagon node must contain two subnodes, named "mba" and "mpss" representing The Hexagon node must contain two subnodes, named "mba" and "mpss" representing
the memory regions used by the Hexagon firmware. Each sub-node must contain: the memory regions used by the Hexagon firmware. Each sub-node must contain:
......
...@@ -633,6 +633,11 @@ &mdss_mdp { ...@@ -633,6 +633,11 @@ &mdss_mdp {
status = "okay"; status = "okay";
}; };
&mss_pil {
	iommus = <&apps_smmu 0x780 0x1>,
		 <&apps_smmu 0x724 0x3>;
};
&pm8998_pwrkey { &pm8998_pwrkey {
status = "disabled"; status = "disabled";
}; };
......
...@@ -303,6 +303,15 @@ config ROCKCHIP_IOMMU ...@@ -303,6 +303,15 @@ config ROCKCHIP_IOMMU
Say Y here if you are using a Rockchip SoC that includes an IOMMU Say Y here if you are using a Rockchip SoC that includes an IOMMU
device. device.
config SUN50I_IOMMU
	bool "Allwinner H6 IOMMU Support"
	depends on ARCH_SUNXI || COMPILE_TEST
	select ARM_DMA_USE_IOMMU
	select IOMMU_API
	select IOMMU_DMA
	help
	  Support for the IOMMU introduced in the Allwinner H6 SoCs.
config TEGRA_IOMMU_GART config TEGRA_IOMMU_GART
bool "Tegra GART IOMMU Support" bool "Tegra GART IOMMU Support"
depends on ARCH_TEGRA_2x_SOC depends on ARCH_TEGRA_2x_SOC
......
...@@ -29,6 +29,7 @@ obj-$(CONFIG_MTK_IOMMU_V1) += mtk_iommu_v1.o ...@@ -29,6 +29,7 @@ obj-$(CONFIG_MTK_IOMMU_V1) += mtk_iommu_v1.o
obj-$(CONFIG_OMAP_IOMMU) += omap-iommu.o obj-$(CONFIG_OMAP_IOMMU) += omap-iommu.o
obj-$(CONFIG_OMAP_IOMMU_DEBUG) += omap-iommu-debug.o obj-$(CONFIG_OMAP_IOMMU_DEBUG) += omap-iommu-debug.o
obj-$(CONFIG_ROCKCHIP_IOMMU) += rockchip-iommu.o obj-$(CONFIG_ROCKCHIP_IOMMU) += rockchip-iommu.o
obj-$(CONFIG_SUN50I_IOMMU) += sun50i-iommu.o
obj-$(CONFIG_TEGRA_IOMMU_GART) += tegra-gart.o obj-$(CONFIG_TEGRA_IOMMU_GART) += tegra-gart.o
obj-$(CONFIG_TEGRA_IOMMU_SMMU) += tegra-smmu.o obj-$(CONFIG_TEGRA_IOMMU_SMMU) += tegra-smmu.o
obj-$(CONFIG_EXYNOS_IOMMU) += exynos-iommu.o obj-$(CONFIG_EXYNOS_IOMMU) += exynos-iommu.o
......
/* SPDX-License-Identifier: GPL-2.0-only */ /* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (C) 2009-2010 Advanced Micro Devices, Inc.
* Author: Joerg Roedel <jroedel@suse.de>
*/
#ifndef AMD_IOMMU_H #ifndef AMD_IOMMU_H
#define AMD_IOMMU_H #define AMD_IOMMU_H
int __init add_special_device(u8 type, u8 id, u16 *devid, bool cmd_line); #include <linux/iommu.h>
#include "amd_iommu_types.h"
extern int amd_iommu_get_num_iommus(void);
extern int amd_iommu_init_dma_ops(void);
extern int amd_iommu_init_passthrough(void);
extern irqreturn_t amd_iommu_int_thread(int irq, void *data);
extern irqreturn_t amd_iommu_int_handler(int irq, void *data);
extern void amd_iommu_apply_erratum_63(u16 devid);
extern void amd_iommu_reset_cmd_buffer(struct amd_iommu *iommu);
extern int amd_iommu_init_devices(void);
extern void amd_iommu_uninit_devices(void);
extern void amd_iommu_init_notifier(void);
extern int amd_iommu_init_api(void);
#ifdef CONFIG_AMD_IOMMU_DEBUGFS
void amd_iommu_debugfs_setup(struct amd_iommu *iommu);
#else
static inline void amd_iommu_debugfs_setup(struct amd_iommu *iommu) {}
#endif
/* Needed for interrupt remapping */
extern int amd_iommu_prepare(void);
extern int amd_iommu_enable(void);
extern void amd_iommu_disable(void);
extern int amd_iommu_reenable(int);
extern int amd_iommu_enable_faulting(void);
extern int amd_iommu_guest_ir;
/* IOMMUv2 specific functions */
struct iommu_domain;
extern bool amd_iommu_v2_supported(void);
extern int amd_iommu_register_ppr_notifier(struct notifier_block *nb);
extern int amd_iommu_unregister_ppr_notifier(struct notifier_block *nb);
extern void amd_iommu_domain_direct_map(struct iommu_domain *dom);
extern int amd_iommu_domain_enable_v2(struct iommu_domain *dom, int pasids);
extern int amd_iommu_flush_page(struct iommu_domain *dom, int pasid,
u64 address);
extern int amd_iommu_flush_tlb(struct iommu_domain *dom, int pasid);
extern int amd_iommu_domain_set_gcr3(struct iommu_domain *dom, int pasid,
unsigned long cr3);
extern int amd_iommu_domain_clear_gcr3(struct iommu_domain *dom, int pasid);
extern struct iommu_domain *amd_iommu_get_v2_domain(struct pci_dev *pdev);
#ifdef CONFIG_IRQ_REMAP
extern int amd_iommu_create_irq_domain(struct amd_iommu *iommu);
#else
static inline int amd_iommu_create_irq_domain(struct amd_iommu *iommu)
{
return 0;
}
#endif
#define PPR_SUCCESS 0x0
#define PPR_INVALID 0x1
#define PPR_FAILURE 0xf
extern int amd_iommu_complete_ppr(struct pci_dev *pdev, int pasid,
int status, int tag);
static inline bool is_rd890_iommu(struct pci_dev *pdev)
{
return (pdev->vendor == PCI_VENDOR_ID_ATI) &&
(pdev->device == PCI_DEVICE_ID_RD890_IOMMU);
}
static inline bool iommu_feature(struct amd_iommu *iommu, u64 f)
{
if (!(iommu->cap & (1 << IOMMU_CAP_EFR)))
return false;
return !!(iommu->features & f);
}
static inline u64 iommu_virt_to_phys(void *vaddr)
{
return (u64)__sme_set(virt_to_phys(vaddr));
}
static inline void *iommu_phys_to_virt(unsigned long paddr)
{
return phys_to_virt(__sme_clr(paddr));
}
extern bool translation_pre_enabled(struct amd_iommu *iommu);
extern bool amd_iommu_is_attach_deferred(struct iommu_domain *domain,
struct device *dev);
extern int __init add_special_device(u8 type, u8 id, u16 *devid,
bool cmd_line);
#ifdef CONFIG_DMI #ifdef CONFIG_DMI
void amd_iommu_apply_ivrs_quirks(void); void amd_iommu_apply_ivrs_quirks(void);
......
...@@ -8,10 +8,9 @@ ...@@ -8,10 +8,9 @@
*/ */
#include <linux/debugfs.h> #include <linux/debugfs.h>
#include <linux/iommu.h>
#include <linux/pci.h> #include <linux/pci.h>
#include "amd_iommu_proto.h"
#include "amd_iommu_types.h" #include "amd_iommu.h"
static struct dentry *amd_iommu_debugfs; static struct dentry *amd_iommu_debugfs;
static DEFINE_MUTEX(amd_iommu_debugfs_lock); static DEFINE_MUTEX(amd_iommu_debugfs_lock);
......
...@@ -18,7 +18,6 @@ ...@@ -18,7 +18,6 @@
#include <linux/msi.h> #include <linux/msi.h>
#include <linux/amd-iommu.h> #include <linux/amd-iommu.h>
#include <linux/export.h> #include <linux/export.h>
#include <linux/iommu.h>
#include <linux/kmemleak.h> #include <linux/kmemleak.h>
#include <linux/mem_encrypt.h> #include <linux/mem_encrypt.h>
#include <asm/pci-direct.h> #include <asm/pci-direct.h>
...@@ -32,9 +31,8 @@ ...@@ -32,9 +31,8 @@
#include <asm/irq_remapping.h> #include <asm/irq_remapping.h>
#include <linux/crash_dump.h> #include <linux/crash_dump.h>
#include "amd_iommu.h" #include "amd_iommu.h"
#include "amd_iommu_proto.h"
#include "amd_iommu_types.h"
#include "irq_remapping.h" #include "irq_remapping.h"
/* /*
......
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (C) 2009-2010 Advanced Micro Devices, Inc.
* Author: Joerg Roedel <jroedel@suse.de>
*/
#ifndef _ASM_X86_AMD_IOMMU_PROTO_H
#define _ASM_X86_AMD_IOMMU_PROTO_H
#include "amd_iommu_types.h"
extern int amd_iommu_get_num_iommus(void);
extern int amd_iommu_init_dma_ops(void);
extern int amd_iommu_init_passthrough(void);
extern irqreturn_t amd_iommu_int_thread(int irq, void *data);
extern irqreturn_t amd_iommu_int_handler(int irq, void *data);
extern void amd_iommu_apply_erratum_63(u16 devid);
extern void amd_iommu_reset_cmd_buffer(struct amd_iommu *iommu);
extern int amd_iommu_init_devices(void);
extern void amd_iommu_uninit_devices(void);
extern void amd_iommu_init_notifier(void);
extern int amd_iommu_init_api(void);
#ifdef CONFIG_AMD_IOMMU_DEBUGFS
void amd_iommu_debugfs_setup(struct amd_iommu *iommu);
#else
static inline void amd_iommu_debugfs_setup(struct amd_iommu *iommu) {}
#endif
/* Needed for interrupt remapping */
extern int amd_iommu_prepare(void);
extern int amd_iommu_enable(void);
extern void amd_iommu_disable(void);
extern int amd_iommu_reenable(int);
extern int amd_iommu_enable_faulting(void);
extern int amd_iommu_guest_ir;
/* IOMMUv2 specific functions */
struct iommu_domain;
extern bool amd_iommu_v2_supported(void);
extern int amd_iommu_register_ppr_notifier(struct notifier_block *nb);
extern int amd_iommu_unregister_ppr_notifier(struct notifier_block *nb);
extern void amd_iommu_domain_direct_map(struct iommu_domain *dom);
extern int amd_iommu_domain_enable_v2(struct iommu_domain *dom, int pasids);
extern int amd_iommu_flush_page(struct iommu_domain *dom, int pasid,
u64 address);
extern int amd_iommu_flush_tlb(struct iommu_domain *dom, int pasid);
extern int amd_iommu_domain_set_gcr3(struct iommu_domain *dom, int pasid,
unsigned long cr3);
extern int amd_iommu_domain_clear_gcr3(struct iommu_domain *dom, int pasid);
extern struct iommu_domain *amd_iommu_get_v2_domain(struct pci_dev *pdev);
#ifdef CONFIG_IRQ_REMAP
extern int amd_iommu_create_irq_domain(struct amd_iommu *iommu);
#else
static inline int amd_iommu_create_irq_domain(struct amd_iommu *iommu)
{
return 0;
}
#endif
#define PPR_SUCCESS 0x0
#define PPR_INVALID 0x1
#define PPR_FAILURE 0xf
extern int amd_iommu_complete_ppr(struct pci_dev *pdev, int pasid,
int status, int tag);
static inline bool is_rd890_iommu(struct pci_dev *pdev)
{
return (pdev->vendor == PCI_VENDOR_ID_ATI) &&
(pdev->device == PCI_DEVICE_ID_RD890_IOMMU);
}
static inline bool iommu_feature(struct amd_iommu *iommu, u64 f)
{
if (!(iommu->cap & (1 << IOMMU_CAP_EFR)))
return false;
return !!(iommu->features & f);
}
static inline u64 iommu_virt_to_phys(void *vaddr)
{
return (u64)__sme_set(virt_to_phys(vaddr));
}
static inline void *iommu_phys_to_virt(unsigned long paddr)
{
return phys_to_virt(__sme_clr(paddr));
}
extern bool translation_pre_enabled(struct amd_iommu *iommu);
extern struct iommu_dev_data *get_dev_data(struct device *dev);
#endif /* _ASM_X86_AMD_IOMMU_PROTO_H */
...@@ -398,7 +398,7 @@ extern bool amd_iommu_dump; ...@@ -398,7 +398,7 @@ extern bool amd_iommu_dump;
#define DUMP_printk(format, arg...) \ #define DUMP_printk(format, arg...) \
do { \ do { \
if (amd_iommu_dump) \ if (amd_iommu_dump) \
printk(KERN_INFO "AMD-Vi: " format, ## arg); \ pr_info("AMD-Vi: " format, ## arg); \
} while(0); } while(0);
/* global flag if IOMMUs cache non-present entries */ /* global flag if IOMMUs cache non-present entries */
...@@ -645,7 +645,6 @@ struct iommu_dev_data { ...@@ -645,7 +645,6 @@ struct iommu_dev_data {
struct pci_dev *pdev; struct pci_dev *pdev;
u16 devid; /* PCI Device ID */ u16 devid; /* PCI Device ID */
bool iommu_v2; /* Device can make use of IOMMUv2 */ bool iommu_v2; /* Device can make use of IOMMUv2 */
bool passthrough; /* Device is identity mapped */
struct { struct {
bool enabled; bool enabled;
int qdep; int qdep;
......
...@@ -13,13 +13,11 @@ ...@@ -13,13 +13,11 @@
#include <linux/module.h> #include <linux/module.h>
#include <linux/sched.h> #include <linux/sched.h>
#include <linux/sched/mm.h> #include <linux/sched/mm.h>
#include <linux/iommu.h>
#include <linux/wait.h> #include <linux/wait.h>
#include <linux/pci.h> #include <linux/pci.h>
#include <linux/gfp.h> #include <linux/gfp.h>
#include "amd_iommu_types.h" #include "amd_iommu.h"
#include "amd_iommu_proto.h"
MODULE_LICENSE("GPL v2"); MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("Joerg Roedel <jroedel@suse.de>"); MODULE_AUTHOR("Joerg Roedel <jroedel@suse.de>");
...@@ -517,13 +515,12 @@ static int ppr_notifier(struct notifier_block *nb, unsigned long e, void *data) ...@@ -517,13 +515,12 @@ static int ppr_notifier(struct notifier_block *nb, unsigned long e, void *data)
struct amd_iommu_fault *iommu_fault; struct amd_iommu_fault *iommu_fault;
struct pasid_state *pasid_state; struct pasid_state *pasid_state;
struct device_state *dev_state; struct device_state *dev_state;
struct pci_dev *pdev = NULL;
unsigned long flags; unsigned long flags;
struct fault *fault; struct fault *fault;
bool finish; bool finish;
u16 tag, devid; u16 tag, devid;
int ret; int ret;
struct iommu_dev_data *dev_data;
struct pci_dev *pdev = NULL;
iommu_fault = data; iommu_fault = data;
tag = iommu_fault->tag & 0x1ff; tag = iommu_fault->tag & 0x1ff;
...@@ -534,12 +531,11 @@ static int ppr_notifier(struct notifier_block *nb, unsigned long e, void *data) ...@@ -534,12 +531,11 @@ static int ppr_notifier(struct notifier_block *nb, unsigned long e, void *data)
devid & 0xff); devid & 0xff);
if (!pdev) if (!pdev)
return -ENODEV; return -ENODEV;
dev_data = get_dev_data(&pdev->dev);
/* In kdump kernel pci dev is not initialized yet -> send INVALID */
ret = NOTIFY_DONE; ret = NOTIFY_DONE;
if (translation_pre_enabled(amd_iommu_rlookup_table[devid])
&& dev_data->defer_attach) { /* In kdump kernel pci dev is not initialized yet -> send INVALID */
if (amd_iommu_is_attach_deferred(NULL, &pdev->dev)) {
amd_iommu_complete_ppr(pdev, iommu_fault->pasid, amd_iommu_complete_ppr(pdev, iommu_fault->pasid,
PPR_INVALID, tag); PPR_INVALID, tag);
goto out; goto out;
......
...@@ -150,6 +150,8 @@ static const struct arm_smmu_impl arm_mmu500_impl = { ...@@ -150,6 +150,8 @@ static const struct arm_smmu_impl arm_mmu500_impl = {
struct arm_smmu_device *arm_smmu_impl_init(struct arm_smmu_device *smmu) struct arm_smmu_device *arm_smmu_impl_init(struct arm_smmu_device *smmu)
{ {
const struct device_node *np = smmu->dev->of_node;
/* /*
* We will inevitably have to combine model-specific implementation * We will inevitably have to combine model-specific implementation
* quirks with platform-specific integration quirks, but everything * quirks with platform-specific integration quirks, but everything
...@@ -166,11 +168,11 @@ struct arm_smmu_device *arm_smmu_impl_init(struct arm_smmu_device *smmu) ...@@ -166,11 +168,11 @@ struct arm_smmu_device *arm_smmu_impl_init(struct arm_smmu_device *smmu)
break; break;
} }
if (of_property_read_bool(smmu->dev->of_node, if (of_property_read_bool(np, "calxeda,smmu-secure-config-access"))
"calxeda,smmu-secure-config-access"))
smmu->impl = &calxeda_impl; smmu->impl = &calxeda_impl;
if (of_device_is_compatible(smmu->dev->of_node, "qcom,sdm845-smmu-500")) if (of_device_is_compatible(np, "qcom,sdm845-smmu-500") ||
of_device_is_compatible(np, "qcom,sc7180-smmu-500"))
return qcom_smmu_impl_init(smmu); return qcom_smmu_impl_init(smmu);
return smmu; return smmu;
......
...@@ -3,6 +3,7 @@ ...@@ -3,6 +3,7 @@
* Copyright (c) 2019, The Linux Foundation. All rights reserved. * Copyright (c) 2019, The Linux Foundation. All rights reserved.
*/ */
#include <linux/of_device.h>
#include <linux/qcom_scm.h> #include <linux/qcom_scm.h>
#include "arm-smmu.h" #include "arm-smmu.h"
...@@ -11,12 +12,29 @@ struct qcom_smmu { ...@@ -11,12 +12,29 @@ struct qcom_smmu {
struct arm_smmu_device smmu; struct arm_smmu_device smmu;
}; };
static const struct of_device_id qcom_smmu_client_of_match[] = {
	{ .compatible = "qcom,adreno" },
	{ .compatible = "qcom,mdp4" },
	{ .compatible = "qcom,mdss" },
	{ .compatible = "qcom,sc7180-mdss" },
	{ .compatible = "qcom,sc7180-mss-pil" },
	{ .compatible = "qcom,sdm845-mdss" },
	{ .compatible = "qcom,sdm845-mss-pil" },
	{ }
};

static int qcom_smmu_def_domain_type(struct device *dev)
{
	const struct of_device_id *match =
			of_match_device(qcom_smmu_client_of_match, dev);

	return match ? IOMMU_DOMAIN_IDENTITY : 0;
}
static int qcom_sdm845_smmu500_reset(struct arm_smmu_device *smmu) static int qcom_sdm845_smmu500_reset(struct arm_smmu_device *smmu)
{ {
int ret; int ret;
arm_mmu500_reset(smmu);
/* /*
* To address performance degradation in non-real time clients, * To address performance degradation in non-real time clients,
* such as USB and UFS, turn off wait-for-safe on sdm845 based boards, * such as USB and UFS, turn off wait-for-safe on sdm845 based boards,
...@@ -30,8 +48,21 @@ static int qcom_sdm845_smmu500_reset(struct arm_smmu_device *smmu) ...@@ -30,8 +48,21 @@ static int qcom_sdm845_smmu500_reset(struct arm_smmu_device *smmu)
return ret; return ret;
} }
static int qcom_smmu500_reset(struct arm_smmu_device *smmu)
{
	const struct device_node *np = smmu->dev->of_node;

	arm_mmu500_reset(smmu);

	if (of_device_is_compatible(np, "qcom,sdm845-smmu-500"))
		return qcom_sdm845_smmu500_reset(smmu);

	return 0;
}
static const struct arm_smmu_impl qcom_smmu_impl = { static const struct arm_smmu_impl qcom_smmu_impl = {
.reset = qcom_sdm845_smmu500_reset, .def_domain_type = qcom_smmu_def_domain_type,
.reset = qcom_smmu500_reset,
}; };
struct arm_smmu_device *qcom_smmu_impl_init(struct arm_smmu_device *smmu) struct arm_smmu_device *qcom_smmu_impl_init(struct arm_smmu_device *smmu)
......
...@@ -171,6 +171,8 @@ ...@@ -171,6 +171,8 @@
#define ARM_SMMU_PRIQ_IRQ_CFG1 0xd8 #define ARM_SMMU_PRIQ_IRQ_CFG1 0xd8
#define ARM_SMMU_PRIQ_IRQ_CFG2 0xdc #define ARM_SMMU_PRIQ_IRQ_CFG2 0xdc
#define ARM_SMMU_REG_SZ 0xe00
/* Common MSI config fields */ /* Common MSI config fields */
#define MSI_CFG0_ADDR_MASK GENMASK_ULL(51, 2) #define MSI_CFG0_ADDR_MASK GENMASK_ULL(51, 2)
#define MSI_CFG2_SH GENMASK(5, 4) #define MSI_CFG2_SH GENMASK(5, 4)
...@@ -628,6 +630,7 @@ struct arm_smmu_strtab_cfg { ...@@ -628,6 +630,7 @@ struct arm_smmu_strtab_cfg {
struct arm_smmu_device { struct arm_smmu_device {
struct device *dev; struct device *dev;
void __iomem *base; void __iomem *base;
void __iomem *page1;
#define ARM_SMMU_FEAT_2_LVL_STRTAB (1 << 0) #define ARM_SMMU_FEAT_2_LVL_STRTAB (1 << 0)
#define ARM_SMMU_FEAT_2_LVL_CDTAB (1 << 1) #define ARM_SMMU_FEAT_2_LVL_CDTAB (1 << 1)
...@@ -664,7 +667,6 @@ struct arm_smmu_device { ...@@ -664,7 +667,6 @@ struct arm_smmu_device {
#define ARM_SMMU_MAX_ASIDS (1 << 16) #define ARM_SMMU_MAX_ASIDS (1 << 16)
unsigned int asid_bits; unsigned int asid_bits;
DECLARE_BITMAP(asid_map, ARM_SMMU_MAX_ASIDS);
#define ARM_SMMU_MAX_VMIDS (1 << 16) #define ARM_SMMU_MAX_VMIDS (1 << 16)
unsigned int vmid_bits; unsigned int vmid_bits;
...@@ -724,6 +726,8 @@ struct arm_smmu_option_prop { ...@@ -724,6 +726,8 @@ struct arm_smmu_option_prop {
const char *prop; const char *prop;
}; };
static DEFINE_XARRAY_ALLOC1(asid_xa);
static struct arm_smmu_option_prop arm_smmu_options[] = { static struct arm_smmu_option_prop arm_smmu_options[] = {
{ ARM_SMMU_OPT_SKIP_PREFETCH, "hisilicon,broken-prefetch-cmd" }, { ARM_SMMU_OPT_SKIP_PREFETCH, "hisilicon,broken-prefetch-cmd" },
{ ARM_SMMU_OPT_PAGE0_REGS_ONLY, "cavium,cn9900-broken-page1-regspace"}, { ARM_SMMU_OPT_PAGE0_REGS_ONLY, "cavium,cn9900-broken-page1-regspace"},
...@@ -733,9 +737,8 @@ static struct arm_smmu_option_prop arm_smmu_options[] = { ...@@ -733,9 +737,8 @@ static struct arm_smmu_option_prop arm_smmu_options[] = {
static inline void __iomem *arm_smmu_page1_fixup(unsigned long offset, static inline void __iomem *arm_smmu_page1_fixup(unsigned long offset,
struct arm_smmu_device *smmu) struct arm_smmu_device *smmu)
{ {
if ((offset > SZ_64K) && if (offset > SZ_64K)
(smmu->options & ARM_SMMU_OPT_PAGE0_REGS_ONLY)) return smmu->page1 + offset - SZ_64K;
offset -= SZ_64K;
return smmu->base + offset; return smmu->base + offset;
} }
...@@ -1763,6 +1766,14 @@ static void arm_smmu_free_cd_tables(struct arm_smmu_domain *smmu_domain) ...@@ -1763,6 +1766,14 @@ static void arm_smmu_free_cd_tables(struct arm_smmu_domain *smmu_domain)
cdcfg->cdtab = NULL; cdcfg->cdtab = NULL;
} }
static void arm_smmu_free_asid(struct arm_smmu_ctx_desc *cd)
{
	if (!cd->asid)
		return;

	xa_erase(&asid_xa, cd->asid);
}
/* Stream table manipulation functions */ /* Stream table manipulation functions */
static void static void
arm_smmu_write_strtab_l1_desc(__le64 *dst, struct arm_smmu_strtab_l1_desc *desc) arm_smmu_write_strtab_l1_desc(__le64 *dst, struct arm_smmu_strtab_l1_desc *desc)
...@@ -2448,10 +2459,9 @@ static void arm_smmu_domain_free(struct iommu_domain *domain) ...@@ -2448,10 +2459,9 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) { if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg; struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
if (cfg->cdcfg.cdtab) { if (cfg->cdcfg.cdtab)
arm_smmu_free_cd_tables(smmu_domain); arm_smmu_free_cd_tables(smmu_domain);
arm_smmu_bitmap_free(smmu->asid_map, cfg->cd.asid); arm_smmu_free_asid(&cfg->cd);
}
} else { } else {
struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg; struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
if (cfg->vmid) if (cfg->vmid)
...@@ -2466,14 +2476,15 @@ static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain, ...@@ -2466,14 +2476,15 @@ static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
struct io_pgtable_cfg *pgtbl_cfg) struct io_pgtable_cfg *pgtbl_cfg)
{ {
int ret; int ret;
int asid; u32 asid;
struct arm_smmu_device *smmu = smmu_domain->smmu; struct arm_smmu_device *smmu = smmu_domain->smmu;
struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg; struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
typeof(&pgtbl_cfg->arm_lpae_s1_cfg.tcr) tcr = &pgtbl_cfg->arm_lpae_s1_cfg.tcr; typeof(&pgtbl_cfg->arm_lpae_s1_cfg.tcr) tcr = &pgtbl_cfg->arm_lpae_s1_cfg.tcr;
asid = arm_smmu_bitmap_alloc(smmu->asid_map, smmu->asid_bits); ret = xa_alloc(&asid_xa, &asid, &cfg->cd,
if (asid < 0) XA_LIMIT(1, (1 << smmu->asid_bits) - 1), GFP_KERNEL);
return asid; if (ret)
return ret;
cfg->s1cdmax = master->ssid_bits; cfg->s1cdmax = master->ssid_bits;
...@@ -2506,7 +2517,7 @@ static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain, ...@@ -2506,7 +2517,7 @@ static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
out_free_cd_tables: out_free_cd_tables:
arm_smmu_free_cd_tables(smmu_domain); arm_smmu_free_cd_tables(smmu_domain);
out_free_asid: out_free_asid:
arm_smmu_bitmap_free(smmu->asid_map, asid); arm_smmu_free_asid(&cfg->cd);
return ret; return ret;
} }
...@@ -2652,26 +2663,20 @@ static void arm_smmu_install_ste_for_dev(struct arm_smmu_master *master) ...@@ -2652,26 +2663,20 @@ static void arm_smmu_install_ste_for_dev(struct arm_smmu_master *master)
} }
} }
#ifdef CONFIG_PCI_ATS
static bool arm_smmu_ats_supported(struct arm_smmu_master *master) static bool arm_smmu_ats_supported(struct arm_smmu_master *master)
{ {
struct pci_dev *pdev; struct device *dev = master->dev;
struct arm_smmu_device *smmu = master->smmu; struct arm_smmu_device *smmu = master->smmu;
struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(master->dev); struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
if (!(smmu->features & ARM_SMMU_FEAT_ATS) || !dev_is_pci(master->dev) || if (!(smmu->features & ARM_SMMU_FEAT_ATS))
!(fwspec->flags & IOMMU_FWSPEC_PCI_RC_ATS) || pci_ats_disabled())
return false; return false;
pdev = to_pci_dev(master->dev); if (!(fwspec->flags & IOMMU_FWSPEC_PCI_RC_ATS))
return !pdev->untrusted && pdev->ats_cap;
}
#else
static bool arm_smmu_ats_supported(struct arm_smmu_master *master)
{
return false; return false;
return dev_is_pci(dev) && pci_ats_supported(to_pci_dev(dev));
} }
#endif
static void arm_smmu_enable_ats(struct arm_smmu_master *master) static void arm_smmu_enable_ats(struct arm_smmu_master *master)
{ {
...@@ -2914,27 +2919,26 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid) ...@@ -2914,27 +2919,26 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
static struct iommu_ops arm_smmu_ops; static struct iommu_ops arm_smmu_ops;
static int arm_smmu_add_device(struct device *dev) static struct iommu_device *arm_smmu_probe_device(struct device *dev)
{ {
int i, ret; int i, ret;
struct arm_smmu_device *smmu; struct arm_smmu_device *smmu;
struct arm_smmu_master *master; struct arm_smmu_master *master;
struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev); struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
struct iommu_group *group;
if (!fwspec || fwspec->ops != &arm_smmu_ops) if (!fwspec || fwspec->ops != &arm_smmu_ops)
return -ENODEV; return ERR_PTR(-ENODEV);
if (WARN_ON_ONCE(dev_iommu_priv_get(dev))) if (WARN_ON_ONCE(dev_iommu_priv_get(dev)))
return -EBUSY; return ERR_PTR(-EBUSY);
smmu = arm_smmu_get_by_fwnode(fwspec->iommu_fwnode); smmu = arm_smmu_get_by_fwnode(fwspec->iommu_fwnode);
if (!smmu) if (!smmu)
return -ENODEV; return ERR_PTR(-ENODEV);
master = kzalloc(sizeof(*master), GFP_KERNEL); master = kzalloc(sizeof(*master), GFP_KERNEL);
if (!master) if (!master)
return -ENOMEM; return ERR_PTR(-ENOMEM);
master->dev = dev; master->dev = dev;
master->smmu = smmu; master->smmu = smmu;
...@@ -2975,43 +2979,24 @@ static int arm_smmu_add_device(struct device *dev) ...@@ -2975,43 +2979,24 @@ static int arm_smmu_add_device(struct device *dev)
master->ssid_bits = min_t(u8, master->ssid_bits, master->ssid_bits = min_t(u8, master->ssid_bits,
CTXDESC_LINEAR_CDMAX); CTXDESC_LINEAR_CDMAX);
ret = iommu_device_link(&smmu->iommu, dev); return &smmu->iommu;
if (ret)
goto err_disable_pasid;
group = iommu_group_get_for_dev(dev);
if (IS_ERR(group)) {
ret = PTR_ERR(group);
goto err_unlink;
}
iommu_group_put(group);
return 0;
err_unlink:
iommu_device_unlink(&smmu->iommu, dev);
err_disable_pasid:
arm_smmu_disable_pasid(master);
err_free_master: err_free_master:
kfree(master); kfree(master);
dev_iommu_priv_set(dev, NULL); dev_iommu_priv_set(dev, NULL);
return ret; return ERR_PTR(ret);
} }
static void arm_smmu_remove_device(struct device *dev) static void arm_smmu_release_device(struct device *dev)
{ {
struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev); struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
struct arm_smmu_master *master; struct arm_smmu_master *master;
struct arm_smmu_device *smmu;
if (!fwspec || fwspec->ops != &arm_smmu_ops) if (!fwspec || fwspec->ops != &arm_smmu_ops)
return; return;
master = dev_iommu_priv_get(dev); master = dev_iommu_priv_get(dev);
smmu = master->smmu;
arm_smmu_detach_dev(master); arm_smmu_detach_dev(master);
iommu_group_remove_device(dev);
iommu_device_unlink(&smmu->iommu, dev);
arm_smmu_disable_pasid(master); arm_smmu_disable_pasid(master);
kfree(master); kfree(master);
iommu_fwspec_free(dev); iommu_fwspec_free(dev);
...@@ -3138,8 +3123,8 @@ static struct iommu_ops arm_smmu_ops = { ...@@ -3138,8 +3123,8 @@ static struct iommu_ops arm_smmu_ops = {
.flush_iotlb_all = arm_smmu_flush_iotlb_all, .flush_iotlb_all = arm_smmu_flush_iotlb_all,
.iotlb_sync = arm_smmu_iotlb_sync, .iotlb_sync = arm_smmu_iotlb_sync,
.iova_to_phys = arm_smmu_iova_to_phys, .iova_to_phys = arm_smmu_iova_to_phys,
.add_device = arm_smmu_add_device, .probe_device = arm_smmu_probe_device,
.remove_device = arm_smmu_remove_device, .release_device = arm_smmu_release_device,
.device_group = arm_smmu_device_group, .device_group = arm_smmu_device_group,
.domain_get_attr = arm_smmu_domain_get_attr, .domain_get_attr = arm_smmu_domain_get_attr,
.domain_set_attr = arm_smmu_domain_set_attr, .domain_set_attr = arm_smmu_domain_set_attr,
...@@ -4021,6 +4006,18 @@ err_reset_pci_ops: __maybe_unused; ...@@ -4021,6 +4006,18 @@ err_reset_pci_ops: __maybe_unused;
return err; return err;
} }
static void __iomem *arm_smmu_ioremap(struct device *dev, resource_size_t start,
				      resource_size_t size)
{
	struct resource res = {
		.flags = IORESOURCE_MEM,
		.start = start,
		.end = start + size - 1,
	};

	return devm_ioremap_resource(dev, &res);
}
static int arm_smmu_device_probe(struct platform_device *pdev) static int arm_smmu_device_probe(struct platform_device *pdev)
{ {
int irq, ret; int irq, ret;
...@@ -4056,10 +4053,23 @@ static int arm_smmu_device_probe(struct platform_device *pdev) ...@@ -4056,10 +4053,23 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
} }
ioaddr = res->start; ioaddr = res->start;
smmu->base = devm_ioremap_resource(dev, res); /*
* Don't map the IMPLEMENTATION DEFINED regions, since they may contain
* the PMCG registers which are reserved by the PMU driver.
*/
smmu->base = arm_smmu_ioremap(dev, ioaddr, ARM_SMMU_REG_SZ);
if (IS_ERR(smmu->base)) if (IS_ERR(smmu->base))
return PTR_ERR(smmu->base); return PTR_ERR(smmu->base);
	if (arm_smmu_resource_size(smmu) > SZ_64K) {
		smmu->page1 = arm_smmu_ioremap(dev, ioaddr + SZ_64K,
					       ARM_SMMU_REG_SZ);
		if (IS_ERR(smmu->page1))
			return PTR_ERR(smmu->page1);
	} else {
		smmu->page1 = smmu->base;
	}
/* Interrupt lines */ /* Interrupt lines */
irq = platform_get_irq_byname_optional(pdev, "combined"); irq = platform_get_irq_byname_optional(pdev, "combined");
......
...@@ -220,7 +220,7 @@ static int arm_smmu_register_legacy_master(struct device *dev, ...@@ -220,7 +220,7 @@ static int arm_smmu_register_legacy_master(struct device *dev,
* With the legacy DT binding in play, we have no guarantees about * With the legacy DT binding in play, we have no guarantees about
* probe order, but then we're also not doing default domains, so we can * probe order, but then we're also not doing default domains, so we can
* delay setting bus ops until we're sure every possible SMMU is ready, * delay setting bus ops until we're sure every possible SMMU is ready,
* and that way ensure that no add_device() calls get missed. * and that way ensure that no probe_device() calls get missed.
*/ */
static int arm_smmu_legacy_bus_init(void) static int arm_smmu_legacy_bus_init(void)
{ {
...@@ -1062,7 +1062,6 @@ static int arm_smmu_master_alloc_smes(struct device *dev) ...@@ -1062,7 +1062,6 @@ static int arm_smmu_master_alloc_smes(struct device *dev)
struct arm_smmu_master_cfg *cfg = dev_iommu_priv_get(dev); struct arm_smmu_master_cfg *cfg = dev_iommu_priv_get(dev);
struct arm_smmu_device *smmu = cfg->smmu; struct arm_smmu_device *smmu = cfg->smmu;
struct arm_smmu_smr *smrs = smmu->smrs; struct arm_smmu_smr *smrs = smmu->smrs;
struct iommu_group *group;
int i, idx, ret; int i, idx, ret;
mutex_lock(&smmu->stream_map_mutex); mutex_lock(&smmu->stream_map_mutex);
...@@ -1090,18 +1089,9 @@ static int arm_smmu_master_alloc_smes(struct device *dev) ...@@ -1090,18 +1089,9 @@ static int arm_smmu_master_alloc_smes(struct device *dev)
cfg->smendx[i] = (s16)idx; cfg->smendx[i] = (s16)idx;
} }
group = iommu_group_get_for_dev(dev);
if (IS_ERR(group)) {
ret = PTR_ERR(group);
goto out_err;
}
iommu_group_put(group);
/* It worked! Now, poke the actual hardware */ /* It worked! Now, poke the actual hardware */
for_each_cfg_sme(cfg, fwspec, i, idx) { for_each_cfg_sme(cfg, fwspec, i, idx)
arm_smmu_write_sme(smmu, idx); arm_smmu_write_sme(smmu, idx);
smmu->s2crs[idx].group = group;
}
mutex_unlock(&smmu->stream_map_mutex); mutex_unlock(&smmu->stream_map_mutex);
return 0; return 0;
...@@ -1172,7 +1162,7 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev) ...@@ -1172,7 +1162,7 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
/* /*
* FIXME: The arch/arm DMA API code tries to attach devices to its own * FIXME: The arch/arm DMA API code tries to attach devices to its own
* domains between of_xlate() and add_device() - we have no way to cope * domains between of_xlate() and probe_device() - we have no way to cope
* with that, so until ARM gets converted to rely on groups and default * with that, so until ARM gets converted to rely on groups and default
* domains, just say no (but more politely than by dereferencing NULL). * domains, just say no (but more politely than by dereferencing NULL).
* This should be at least a WARN_ON once that's sorted. * This should be at least a WARN_ON once that's sorted.
...@@ -1382,7 +1372,7 @@ struct arm_smmu_device *arm_smmu_get_by_fwnode(struct fwnode_handle *fwnode) ...@@ -1382,7 +1372,7 @@ struct arm_smmu_device *arm_smmu_get_by_fwnode(struct fwnode_handle *fwnode)
return dev ? dev_get_drvdata(dev) : NULL; return dev ? dev_get_drvdata(dev) : NULL;
} }
static int arm_smmu_add_device(struct device *dev) static struct iommu_device *arm_smmu_probe_device(struct device *dev)
{ {
struct arm_smmu_device *smmu = NULL; struct arm_smmu_device *smmu = NULL;
struct arm_smmu_master_cfg *cfg; struct arm_smmu_master_cfg *cfg;
...@@ -1403,7 +1393,7 @@ static int arm_smmu_add_device(struct device *dev) ...@@ -1403,7 +1393,7 @@ static int arm_smmu_add_device(struct device *dev)
} else if (fwspec && fwspec->ops == &arm_smmu_ops) { } else if (fwspec && fwspec->ops == &arm_smmu_ops) {
smmu = arm_smmu_get_by_fwnode(fwspec->iommu_fwnode); smmu = arm_smmu_get_by_fwnode(fwspec->iommu_fwnode);
} else { } else {
return -ENODEV; return ERR_PTR(-ENODEV);
} }
ret = -EINVAL; ret = -EINVAL;
...@@ -1444,21 +1434,19 @@ static int arm_smmu_add_device(struct device *dev) ...@@ -1444,21 +1434,19 @@ static int arm_smmu_add_device(struct device *dev)
if (ret) if (ret)
goto out_cfg_free; goto out_cfg_free;
iommu_device_link(&smmu->iommu, dev);
device_link_add(dev, smmu->dev, device_link_add(dev, smmu->dev,
DL_FLAG_PM_RUNTIME | DL_FLAG_AUTOREMOVE_SUPPLIER); DL_FLAG_PM_RUNTIME | DL_FLAG_AUTOREMOVE_SUPPLIER);
return 0; return &smmu->iommu;
out_cfg_free: out_cfg_free:
kfree(cfg); kfree(cfg);
out_free: out_free:
iommu_fwspec_free(dev); iommu_fwspec_free(dev);
return ret; return ERR_PTR(ret);
} }
static void arm_smmu_remove_device(struct device *dev) static void arm_smmu_release_device(struct device *dev)
{ {
struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev); struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
struct arm_smmu_master_cfg *cfg; struct arm_smmu_master_cfg *cfg;
...@@ -1475,13 +1463,11 @@ static void arm_smmu_remove_device(struct device *dev) ...@@ -1475,13 +1463,11 @@ static void arm_smmu_remove_device(struct device *dev)
if (ret < 0) if (ret < 0)
return; return;
iommu_device_unlink(&smmu->iommu, dev);
arm_smmu_master_free_smes(cfg, fwspec); arm_smmu_master_free_smes(cfg, fwspec);
arm_smmu_rpm_put(smmu); arm_smmu_rpm_put(smmu);
dev_iommu_priv_set(dev, NULL); dev_iommu_priv_set(dev, NULL);
iommu_group_remove_device(dev);
kfree(cfg); kfree(cfg);
iommu_fwspec_free(dev); iommu_fwspec_free(dev);
} }
...@@ -1512,6 +1498,11 @@ static struct iommu_group *arm_smmu_device_group(struct device *dev) ...@@ -1512,6 +1498,11 @@ static struct iommu_group *arm_smmu_device_group(struct device *dev)
else else
group = generic_device_group(dev); group = generic_device_group(dev);
	/* Remember group for faster lookups */
	if (!IS_ERR(group))
		for_each_cfg_sme(cfg, fwspec, i, idx)
			smmu->s2crs[idx].group = group;
return group; return group;
} }
...@@ -1618,6 +1609,17 @@ static void arm_smmu_get_resv_regions(struct device *dev, ...@@ -1618,6 +1609,17 @@ static void arm_smmu_get_resv_regions(struct device *dev,
iommu_dma_get_resv_regions(dev, head); iommu_dma_get_resv_regions(dev, head);
} }
static int arm_smmu_def_domain_type(struct device *dev)
{
	struct arm_smmu_master_cfg *cfg = dev_iommu_priv_get(dev);
	const struct arm_smmu_impl *impl = cfg->smmu->impl;

	if (impl && impl->def_domain_type)
		return impl->def_domain_type(dev);

	return 0;
}
static struct iommu_ops arm_smmu_ops = { static struct iommu_ops arm_smmu_ops = {
.capable = arm_smmu_capable, .capable = arm_smmu_capable,
.domain_alloc = arm_smmu_domain_alloc, .domain_alloc = arm_smmu_domain_alloc,
...@@ -1628,14 +1630,15 @@ static struct iommu_ops arm_smmu_ops = { ...@@ -1628,14 +1630,15 @@ static struct iommu_ops arm_smmu_ops = {
.flush_iotlb_all = arm_smmu_flush_iotlb_all, .flush_iotlb_all = arm_smmu_flush_iotlb_all,
.iotlb_sync = arm_smmu_iotlb_sync, .iotlb_sync = arm_smmu_iotlb_sync,
.iova_to_phys = arm_smmu_iova_to_phys, .iova_to_phys = arm_smmu_iova_to_phys,
.add_device = arm_smmu_add_device, .probe_device = arm_smmu_probe_device,
.remove_device = arm_smmu_remove_device, .release_device = arm_smmu_release_device,
.device_group = arm_smmu_device_group, .device_group = arm_smmu_device_group,
.domain_get_attr = arm_smmu_domain_get_attr, .domain_get_attr = arm_smmu_domain_get_attr,
.domain_set_attr = arm_smmu_domain_set_attr, .domain_set_attr = arm_smmu_domain_set_attr,
.of_xlate = arm_smmu_of_xlate, .of_xlate = arm_smmu_of_xlate,
.get_resv_regions = arm_smmu_get_resv_regions, .get_resv_regions = arm_smmu_get_resv_regions,
.put_resv_regions = generic_iommu_put_resv_regions, .put_resv_regions = generic_iommu_put_resv_regions,
.def_domain_type = arm_smmu_def_domain_type,
.pgsize_bitmap = -1UL, /* Restricted during device attach */ .pgsize_bitmap = -1UL, /* Restricted during device attach */
}; };
...@@ -2253,7 +2256,7 @@ static int arm_smmu_device_remove(struct platform_device *pdev) ...@@ -2253,7 +2256,7 @@ static int arm_smmu_device_remove(struct platform_device *pdev)
return -ENODEV; return -ENODEV;
if (!bitmap_empty(smmu->context_map, ARM_SMMU_MAX_CBS)) if (!bitmap_empty(smmu->context_map, ARM_SMMU_MAX_CBS))
dev_err(&pdev->dev, "removing device with active domains!\n"); dev_notice(&pdev->dev, "disabling translation\n");
arm_smmu_bus_init(NULL); arm_smmu_bus_init(NULL);
iommu_device_unregister(&smmu->iommu); iommu_device_unregister(&smmu->iommu);
......
...@@ -386,6 +386,7 @@ struct arm_smmu_impl { ...@@ -386,6 +386,7 @@ struct arm_smmu_impl {
int (*init_context)(struct arm_smmu_domain *smmu_domain); int (*init_context)(struct arm_smmu_domain *smmu_domain);
void (*tlb_sync)(struct arm_smmu_device *smmu, int page, int sync, void (*tlb_sync)(struct arm_smmu_device *smmu, int page, int sync,
int status); int status);
int (*def_domain_type)(struct device *dev);
}; };
static inline void __iomem *arm_smmu_page(struct arm_smmu_device *smmu, int n) static inline void __iomem *arm_smmu_page(struct arm_smmu_device *smmu, int n)
......
...@@ -963,6 +963,7 @@ static int map_iommu(struct intel_iommu *iommu, u64 phys_addr) ...@@ -963,6 +963,7 @@ static int map_iommu(struct intel_iommu *iommu, u64 phys_addr)
warn_invalid_dmar(phys_addr, " returns all ones"); warn_invalid_dmar(phys_addr, " returns all ones");
goto unmap; goto unmap;
} }
iommu->vccap = dmar_readq(iommu->reg + DMAR_VCCAP_REG);
/* the registers might be more than one page */ /* the registers might be more than one page */
map_size = max_t(int, ecap_max_iotlb_offset(iommu->ecap), map_size = max_t(int, ecap_max_iotlb_offset(iommu->ecap),
...@@ -1156,12 +1157,11 @@ static inline void reclaim_free_desc(struct q_inval *qi) ...@@ -1156,12 +1157,11 @@ static inline void reclaim_free_desc(struct q_inval *qi)
} }
} }
static int qi_check_fault(struct intel_iommu *iommu, int index) static int qi_check_fault(struct intel_iommu *iommu, int index, int wait_index)
{ {
u32 fault; u32 fault;
int head, tail; int head, tail;
struct q_inval *qi = iommu->qi; struct q_inval *qi = iommu->qi;
int wait_index = (index + 1) % QI_LENGTH;
int shift = qi_shift(iommu); int shift = qi_shift(iommu);
if (qi->desc_status[wait_index] == QI_ABORT) if (qi->desc_status[wait_index] == QI_ABORT)
...@@ -1224,17 +1224,21 @@ static int qi_check_fault(struct intel_iommu *iommu, int index) ...@@ -1224,17 +1224,21 @@ static int qi_check_fault(struct intel_iommu *iommu, int index)
} }
/* /*
* Submit the queued invalidation descriptor to the remapping * Function to submit invalidation descriptors of all types to the queued
* hardware unit and wait for its completion. * invalidation interface(QI). Multiple descriptors can be submitted at a
* time, a wait descriptor will be appended to each submission to ensure
* hardware has completed the invalidation before return. Wait descriptors
* can be part of the submission but it will not be polled for completion.
*/ */
int qi_submit_sync(struct qi_desc *desc, struct intel_iommu *iommu) int qi_submit_sync(struct intel_iommu *iommu, struct qi_desc *desc,
unsigned int count, unsigned long options)
{ {
int rc;
struct q_inval *qi = iommu->qi; struct q_inval *qi = iommu->qi;
int offset, shift, length;
struct qi_desc wait_desc; struct qi_desc wait_desc;
int wait_index, index; int wait_index, index;
unsigned long flags; unsigned long flags;
int offset, shift;
int rc, i;
if (!qi) if (!qi)
return 0; return 0;
...@@ -1243,32 +1247,41 @@ int qi_submit_sync(struct qi_desc *desc, struct intel_iommu *iommu) ...@@ -1243,32 +1247,41 @@ int qi_submit_sync(struct qi_desc *desc, struct intel_iommu *iommu)
rc = 0; rc = 0;
raw_spin_lock_irqsave(&qi->q_lock, flags); raw_spin_lock_irqsave(&qi->q_lock, flags);
while (qi->free_cnt < 3) { /*
* Check if we have enough empty slots in the queue to submit,
* the calculation is based on:
* # of desc + 1 wait desc + 1 space between head and tail
*/
while (qi->free_cnt < count + 2) {
raw_spin_unlock_irqrestore(&qi->q_lock, flags); raw_spin_unlock_irqrestore(&qi->q_lock, flags);
cpu_relax(); cpu_relax();
raw_spin_lock_irqsave(&qi->q_lock, flags); raw_spin_lock_irqsave(&qi->q_lock, flags);
} }
index = qi->free_head; index = qi->free_head;
wait_index = (index + 1) % QI_LENGTH; wait_index = (index + count) % QI_LENGTH;
shift = qi_shift(iommu); shift = qi_shift(iommu);
length = 1 << shift;
qi->desc_status[index] = qi->desc_status[wait_index] = QI_IN_USE; for (i = 0; i < count; i++) {
offset = ((index + i) % QI_LENGTH) << shift;
memcpy(qi->desc + offset, &desc[i], 1 << shift);
qi->desc_status[(index + i) % QI_LENGTH] = QI_IN_USE;
}
qi->desc_status[wait_index] = QI_IN_USE;
offset = index << shift;
memcpy(qi->desc + offset, desc, length);
wait_desc.qw0 = QI_IWD_STATUS_DATA(QI_DONE) | wait_desc.qw0 = QI_IWD_STATUS_DATA(QI_DONE) |
QI_IWD_STATUS_WRITE | QI_IWD_TYPE; QI_IWD_STATUS_WRITE | QI_IWD_TYPE;
if (options & QI_OPT_WAIT_DRAIN)
wait_desc.qw0 |= QI_IWD_PRQ_DRAIN;
wait_desc.qw1 = virt_to_phys(&qi->desc_status[wait_index]); wait_desc.qw1 = virt_to_phys(&qi->desc_status[wait_index]);
wait_desc.qw2 = 0; wait_desc.qw2 = 0;
wait_desc.qw3 = 0; wait_desc.qw3 = 0;
offset = wait_index << shift; offset = wait_index << shift;
memcpy(qi->desc + offset, &wait_desc, length); memcpy(qi->desc + offset, &wait_desc, 1 << shift);
qi->free_head = (qi->free_head + 2) % QI_LENGTH; qi->free_head = (qi->free_head + count + 1) % QI_LENGTH;
qi->free_cnt -= 2; qi->free_cnt -= count + 1;
/* /*
* update the HW tail register indicating the presence of * update the HW tail register indicating the presence of
...@@ -1284,7 +1297,7 @@ int qi_submit_sync(struct qi_desc *desc, struct intel_iommu *iommu) ...@@ -1284,7 +1297,7 @@ int qi_submit_sync(struct qi_desc *desc, struct intel_iommu *iommu)
* a deadlock where the interrupt context can wait indefinitely * a deadlock where the interrupt context can wait indefinitely
* for free slots in the queue. * for free slots in the queue.
*/ */
rc = qi_check_fault(iommu, index); rc = qi_check_fault(iommu, index, wait_index);
if (rc) if (rc)
break; break;
...@@ -1293,7 +1306,8 @@ int qi_submit_sync(struct qi_desc *desc, struct intel_iommu *iommu) ...@@ -1293,7 +1306,8 @@ int qi_submit_sync(struct qi_desc *desc, struct intel_iommu *iommu)
raw_spin_lock(&qi->q_lock); raw_spin_lock(&qi->q_lock);
} }
qi->desc_status[index] = QI_DONE; for (i = 0; i < count; i++)
qi->desc_status[(index + i) % QI_LENGTH] = QI_DONE;
reclaim_free_desc(qi); reclaim_free_desc(qi);
raw_spin_unlock_irqrestore(&qi->q_lock, flags); raw_spin_unlock_irqrestore(&qi->q_lock, flags);
...@@ -1317,7 +1331,7 @@ void qi_global_iec(struct intel_iommu *iommu) ...@@ -1317,7 +1331,7 @@ void qi_global_iec(struct intel_iommu *iommu)
desc.qw3 = 0; desc.qw3 = 0;
/* should never fail */ /* should never fail */
qi_submit_sync(&desc, iommu); qi_submit_sync(iommu, &desc, 1, 0);
} }
void qi_flush_context(struct intel_iommu *iommu, u16 did, u16 sid, u8 fm, void qi_flush_context(struct intel_iommu *iommu, u16 did, u16 sid, u8 fm,
...@@ -1331,7 +1345,7 @@ void qi_flush_context(struct intel_iommu *iommu, u16 did, u16 sid, u8 fm, ...@@ -1331,7 +1345,7 @@ void qi_flush_context(struct intel_iommu *iommu, u16 did, u16 sid, u8 fm,
desc.qw2 = 0; desc.qw2 = 0;
desc.qw3 = 0; desc.qw3 = 0;
qi_submit_sync(&desc, iommu); qi_submit_sync(iommu, &desc, 1, 0);
} }
void qi_flush_iotlb(struct intel_iommu *iommu, u16 did, u64 addr, void qi_flush_iotlb(struct intel_iommu *iommu, u16 did, u64 addr,
...@@ -1355,7 +1369,7 @@ void qi_flush_iotlb(struct intel_iommu *iommu, u16 did, u64 addr, ...@@ -1355,7 +1369,7 @@ void qi_flush_iotlb(struct intel_iommu *iommu, u16 did, u64 addr,
desc.qw2 = 0; desc.qw2 = 0;
desc.qw3 = 0; desc.qw3 = 0;
qi_submit_sync(&desc, iommu); qi_submit_sync(iommu, &desc, 1, 0);
} }
void qi_flush_dev_iotlb(struct intel_iommu *iommu, u16 sid, u16 pfsid, void qi_flush_dev_iotlb(struct intel_iommu *iommu, u16 sid, u16 pfsid,
...@@ -1377,7 +1391,7 @@ void qi_flush_dev_iotlb(struct intel_iommu *iommu, u16 sid, u16 pfsid, ...@@ -1377,7 +1391,7 @@ void qi_flush_dev_iotlb(struct intel_iommu *iommu, u16 sid, u16 pfsid,
desc.qw2 = 0; desc.qw2 = 0;
desc.qw3 = 0; desc.qw3 = 0;
qi_submit_sync(&desc, iommu); qi_submit_sync(iommu, &desc, 1, 0);
} }
/* PASID-based IOTLB invalidation */ /* PASID-based IOTLB invalidation */
...@@ -1418,7 +1432,46 @@ void qi_flush_piotlb(struct intel_iommu *iommu, u16 did, u32 pasid, u64 addr, ...@@ -1418,7 +1432,46 @@ void qi_flush_piotlb(struct intel_iommu *iommu, u16 did, u32 pasid, u64 addr,
QI_EIOTLB_AM(mask); QI_EIOTLB_AM(mask);
} }
qi_submit_sync(&desc, iommu); qi_submit_sync(iommu, &desc, 1, 0);
}

/* PASID-based device IOTLB Invalidate */
void qi_flush_dev_iotlb_pasid(struct intel_iommu *iommu, u16 sid, u16 pfsid,
			      u32 pasid, u16 qdep, u64 addr,
			      unsigned int size_order, u64 granu)
{
	unsigned long mask = 1UL << (VTD_PAGE_SHIFT + size_order - 1);
	struct qi_desc desc = {.qw1 = 0, .qw2 = 0, .qw3 = 0};

	desc.qw0 = QI_DEV_EIOTLB_PASID(pasid) | QI_DEV_EIOTLB_SID(sid) |
		QI_DEV_EIOTLB_QDEP(qdep) | QI_DEIOTLB_TYPE |
		QI_DEV_IOTLB_PFSID(pfsid);
	desc.qw1 = QI_DEV_EIOTLB_GLOB(granu);

	/*
	 * If S bit is 0, we only flush a single page. If S bit is set,
	 * the least significant zero bit indicates the invalidation address
	 * range. VT-d spec 6.5.2.6.
	 * e.g. address bit 12[0] indicates 8KB, 13[0] indicates 16KB.
	 * size order = 0 is PAGE_SIZE 4KB
	 * Max Invs Pending (MIP) is set to 0 for now until we have DIT in
	 * ECAP.
	 */
	desc.qw1 |= addr & ~mask;
	if (size_order)
		desc.qw1 |= QI_DEV_EIOTLB_SIZE;

	qi_submit_sync(iommu, &desc, 1, 0);
}

void qi_flush_pasid_cache(struct intel_iommu *iommu, u16 did,
			  u64 granu, int pasid)
{
	struct qi_desc desc = {.qw1 = 0, .qw2 = 0, .qw3 = 0};

	desc.qw0 = QI_PC_PASID(pasid) | QI_PC_DID(did) |
		QI_PC_GRAN(granu) | QI_PC_TYPE;

	qi_submit_sync(iommu, &desc, 1, 0);
} }
/* /*
......
...@@ -1235,19 +1235,13 @@ static phys_addr_t exynos_iommu_iova_to_phys(struct iommu_domain *iommu_domain, ...@@ -1235,19 +1235,13 @@ static phys_addr_t exynos_iommu_iova_to_phys(struct iommu_domain *iommu_domain,
return phys; return phys;
} }
static int exynos_iommu_add_device(struct device *dev) static struct iommu_device *exynos_iommu_probe_device(struct device *dev)
{ {
struct exynos_iommu_owner *owner = dev->archdata.iommu; struct exynos_iommu_owner *owner = dev->archdata.iommu;
struct sysmmu_drvdata *data; struct sysmmu_drvdata *data;
struct iommu_group *group;
if (!has_sysmmu(dev)) if (!has_sysmmu(dev))
return -ENODEV; return ERR_PTR(-ENODEV);
group = iommu_group_get_for_dev(dev);
if (IS_ERR(group))
return PTR_ERR(group);
list_for_each_entry(data, &owner->controllers, owner_node) { list_for_each_entry(data, &owner->controllers, owner_node) {
/* /*
...@@ -1259,12 +1253,15 @@ static int exynos_iommu_add_device(struct device *dev) ...@@ -1259,12 +1253,15 @@ static int exynos_iommu_add_device(struct device *dev)
DL_FLAG_STATELESS | DL_FLAG_STATELESS |
DL_FLAG_PM_RUNTIME); DL_FLAG_PM_RUNTIME);
} }
iommu_group_put(group);
return 0; /* There is always at least one entry, see exynos_iommu_of_xlate() */
data = list_first_entry(&owner->controllers,
struct sysmmu_drvdata, owner_node);
return &data->iommu;
} }
static void exynos_iommu_remove_device(struct device *dev) static void exynos_iommu_release_device(struct device *dev)
{ {
struct exynos_iommu_owner *owner = dev->archdata.iommu; struct exynos_iommu_owner *owner = dev->archdata.iommu;
struct sysmmu_drvdata *data; struct sysmmu_drvdata *data;
...@@ -1282,7 +1279,6 @@ static void exynos_iommu_remove_device(struct device *dev) ...@@ -1282,7 +1279,6 @@ static void exynos_iommu_remove_device(struct device *dev)
iommu_group_put(group); iommu_group_put(group);
} }
} }
iommu_group_remove_device(dev);
list_for_each_entry(data, &owner->controllers, owner_node) list_for_each_entry(data, &owner->controllers, owner_node)
device_link_del(data->link); device_link_del(data->link);
...@@ -1331,8 +1327,8 @@ static const struct iommu_ops exynos_iommu_ops = { ...@@ -1331,8 +1327,8 @@ static const struct iommu_ops exynos_iommu_ops = {
.unmap = exynos_iommu_unmap, .unmap = exynos_iommu_unmap,
.iova_to_phys = exynos_iommu_iova_to_phys, .iova_to_phys = exynos_iommu_iova_to_phys,
.device_group = generic_device_group, .device_group = generic_device_group,
.add_device = exynos_iommu_add_device, .probe_device = exynos_iommu_probe_device,
.remove_device = exynos_iommu_remove_device, .release_device = exynos_iommu_release_device,
.pgsize_bitmap = SECT_SIZE | LPAGE_SIZE | SPAGE_SIZE, .pgsize_bitmap = SECT_SIZE | LPAGE_SIZE | SPAGE_SIZE,
.of_xlate = exynos_iommu_of_xlate, .of_xlate = exynos_iommu_of_xlate,
}; };
......
...@@ -1016,25 +1016,13 @@ static struct iommu_group *fsl_pamu_device_group(struct device *dev) ...@@ -1016,25 +1016,13 @@ static struct iommu_group *fsl_pamu_device_group(struct device *dev)
return group; return group;
} }
static int fsl_pamu_add_device(struct device *dev) static struct iommu_device *fsl_pamu_probe_device(struct device *dev)
{ {
struct iommu_group *group; return &pamu_iommu;
group = iommu_group_get_for_dev(dev);
if (IS_ERR(group))
return PTR_ERR(group);
iommu_group_put(group);
iommu_device_link(&pamu_iommu, dev);
return 0;
} }
static void fsl_pamu_remove_device(struct device *dev) static void fsl_pamu_release_device(struct device *dev)
{ {
iommu_device_unlink(&pamu_iommu, dev);
iommu_group_remove_device(dev);
} }
static const struct iommu_ops fsl_pamu_ops = { static const struct iommu_ops fsl_pamu_ops = {
...@@ -1048,8 +1036,8 @@ static const struct iommu_ops fsl_pamu_ops = { ...@@ -1048,8 +1036,8 @@ static const struct iommu_ops fsl_pamu_ops = {
.iova_to_phys = fsl_pamu_iova_to_phys, .iova_to_phys = fsl_pamu_iova_to_phys,
.domain_set_attr = fsl_pamu_set_domain_attr, .domain_set_attr = fsl_pamu_set_domain_attr,
.domain_get_attr = fsl_pamu_get_domain_attr, .domain_get_attr = fsl_pamu_get_domain_attr,
.add_device = fsl_pamu_add_device, .probe_device = fsl_pamu_probe_device,
.remove_device = fsl_pamu_remove_device, .release_device = fsl_pamu_release_device,
.device_group = fsl_pamu_device_group, .device_group = fsl_pamu_device_group,
}; };
......
...@@ -131,7 +131,7 @@ static int hyperv_irq_remapping_activate(struct irq_domain *domain, ...@@ -131,7 +131,7 @@ static int hyperv_irq_remapping_activate(struct irq_domain *domain,
return 0; return 0;
} }
static struct irq_domain_ops hyperv_ir_domain_ops = { static const struct irq_domain_ops hyperv_ir_domain_ops = {
.alloc = hyperv_irq_remapping_alloc, .alloc = hyperv_irq_remapping_alloc,
.free = hyperv_irq_remapping_free, .free = hyperv_irq_remapping_free,
.activate = hyperv_irq_remapping_activate, .activate = hyperv_irq_remapping_activate,
......
...@@ -372,6 +372,66 @@ static int domain_translation_struct_show(struct seq_file *m, void *unused) ...@@ -372,6 +372,66 @@ static int domain_translation_struct_show(struct seq_file *m, void *unused)
} }
DEFINE_SHOW_ATTRIBUTE(domain_translation_struct); DEFINE_SHOW_ATTRIBUTE(domain_translation_struct);
static void invalidation_queue_entry_show(struct seq_file *m,
					  struct intel_iommu *iommu)
{
	int index, shift = qi_shift(iommu);
	struct qi_desc *desc;
	int offset;

	if (ecap_smts(iommu->ecap))
		seq_puts(m, "Index\t\tqw0\t\t\tqw1\t\t\tqw2\t\t\tqw3\t\t\tstatus\n");
	else
		seq_puts(m, "Index\t\tqw0\t\t\tqw1\t\t\tstatus\n");

	for (index = 0; index < QI_LENGTH; index++) {
		offset = index << shift;
		desc = iommu->qi->desc + offset;
		if (ecap_smts(iommu->ecap))
			seq_printf(m, "%5d\t%016llx\t%016llx\t%016llx\t%016llx\t%016x\n",
				   index, desc->qw0, desc->qw1,
				   desc->qw2, desc->qw3,
				   iommu->qi->desc_status[index]);
		else
			seq_printf(m, "%5d\t%016llx\t%016llx\t%016x\n",
				   index, desc->qw0, desc->qw1,
				   iommu->qi->desc_status[index]);
	}
}

static int invalidation_queue_show(struct seq_file *m, void *unused)
{
	struct dmar_drhd_unit *drhd;
	struct intel_iommu *iommu;
	unsigned long flags;
	struct q_inval *qi;
	int shift;

	rcu_read_lock();
	for_each_active_iommu(iommu, drhd) {
		qi = iommu->qi;
		shift = qi_shift(iommu);

		if (!qi || !ecap_qis(iommu->ecap))
			continue;

		seq_printf(m, "Invalidation queue on IOMMU: %s\n", iommu->name);

		raw_spin_lock_irqsave(&qi->q_lock, flags);
		seq_printf(m, " Base: 0x%llx\tHead: %lld\tTail: %lld\n",
			   (u64)virt_to_phys(qi->desc),
			   dmar_readq(iommu->reg + DMAR_IQH_REG) >> shift,
			   dmar_readq(iommu->reg + DMAR_IQT_REG) >> shift);
		invalidation_queue_entry_show(m, iommu);
		raw_spin_unlock_irqrestore(&qi->q_lock, flags);
		seq_putc(m, '\n');
	}
	rcu_read_unlock();

	return 0;
}

DEFINE_SHOW_ATTRIBUTE(invalidation_queue);
#ifdef CONFIG_IRQ_REMAP #ifdef CONFIG_IRQ_REMAP
static void ir_tbl_remap_entry_show(struct seq_file *m, static void ir_tbl_remap_entry_show(struct seq_file *m,
struct intel_iommu *iommu) struct intel_iommu *iommu)
...@@ -490,6 +550,8 @@ void __init intel_iommu_debugfs_init(void) ...@@ -490,6 +550,8 @@ void __init intel_iommu_debugfs_init(void)
debugfs_create_file("domain_translation_struct", 0444, debugfs_create_file("domain_translation_struct", 0444,
intel_iommu_debug, NULL, intel_iommu_debug, NULL,
&domain_translation_struct_fops); &domain_translation_struct_fops);
debugfs_create_file("invalidation_queue", 0444, intel_iommu_debug,
NULL, &invalidation_queue_fops);
#ifdef CONFIG_IRQ_REMAP #ifdef CONFIG_IRQ_REMAP
debugfs_create_file("ir_translation_struct", 0444, intel_iommu_debug, debugfs_create_file("ir_translation_struct", 0444, intel_iommu_debug,
NULL, &ir_translation_struct_fops); NULL, &ir_translation_struct_fops);
......
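The two hunks above add an invalidation_queue dump to the Intel IOMMU debugfs directory. Assuming the usual location of that directory (its creation is not part of this hunk), the file would typically be read at /sys/kernel/debug/iommu/intel/invalidation_queue and, per the seq_printf() formats above, print one block per active IOMMU roughly like this (non-scalable-mode layout, values invented for illustration):

	Invalidation queue on IOMMU: dmar0
	 Base: 0x41d000	Head: 4	Tail: 4
	Index		qw0			qw1			status
	    0	0000000000000013	0000000000000000	00000000
	    1	0000000000000000	0000000000000000	00000000
	    ...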
@@ -15,6 +15,7 @@
 #define PASID_MAX			0x100000
 #define PASID_PTE_MASK			0x3F
 #define PASID_PTE_PRESENT		1
+#define PASID_PTE_FPD			2
 #define PDE_PFN_MASK			PAGE_MASK
 #define PASID_PDE_SHIFT			6
 #define MAX_NR_PASID_BITS		20
@@ -23,6 +24,16 @@
 #define is_pasid_enabled(entry)		(((entry)->lo >> 3) & 0x1)
 #define get_pasid_dir_size(entry)	(1 << ((((entry)->lo >> 9) & 0x7) + 7))
 
+/* Virtual command interface for enlightened pasid management. */
+#define VCMD_CMD_ALLOC			0x1
+#define VCMD_CMD_FREE			0x2
+#define VCMD_VRSP_IP			0x1
+#define VCMD_VRSP_SC(e)			(((e) >> 1) & 0x3)
+#define VCMD_VRSP_SC_SUCCESS		0
+#define VCMD_VRSP_SC_NO_PASID_AVAIL	1
+#define VCMD_VRSP_SC_INVALID_PASID	1
+#define VCMD_VRSP_RESULT_PASID(e)	(((e) >> 8) & 0xfffff)
+#define VCMD_CMD_OPERAND(e)		((e) << 8)
+
 /*
  * Domain ID reserved for pasid entries programmed for first-level
  * only and pass-through transfer modes.
@@ -36,6 +47,7 @@
  * to vmalloc or even module mappings.
  */
 #define PASID_FLAG_SUPERVISOR_MODE	BIT(0)
+#define PASID_FLAG_NESTED		BIT(1)
 
 /*
  * The PASID_FLAG_FL5LP flag Indicates using 5-level paging for first-
@@ -51,6 +63,11 @@ struct pasid_entry {
 	u64 val[8];
 };
 
+#define PASID_ENTRY_PGTT_FL_ONLY	(1)
+#define PASID_ENTRY_PGTT_SL_ONLY	(2)
+#define PASID_ENTRY_PGTT_NESTED		(3)
+#define PASID_ENTRY_PGTT_PT		(4)
+
 /* The representative of a PASID table */
 struct pasid_table {
 	void *table;		/* pasid table pointer */
@@ -99,7 +116,13 @@ int intel_pasid_setup_second_level(struct intel_iommu *iommu,
 int intel_pasid_setup_pass_through(struct intel_iommu *iommu,
 				   struct dmar_domain *domain,
 				   struct device *dev, int pasid);
+int intel_pasid_setup_nested(struct intel_iommu *iommu,
+			     struct device *dev, pgd_t *pgd, int pasid,
+			     struct iommu_gpasid_bind_data_vtd *pasid_data,
+			     struct dmar_domain *domain, int addr_width);
 void intel_pasid_tear_down_entry(struct intel_iommu *iommu,
-				 struct device *dev, int pasid);
+				 struct device *dev, int pasid,
+				 bool fault_ignore);
+int vcmd_alloc_pasid(struct intel_iommu *iommu, unsigned int *pasid);
+void vcmd_free_pasid(struct intel_iommu *iommu, unsigned int pasid);
 #endif /* __INTEL_PASID_H */
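The new VCMD_* macros describe a virtual-command response word with an in-progress bit (VCMD_VRSP_IP), a two-bit status code and the allocated PASID. As a rough illustration of how such a response could be decoded once the in-progress bit has cleared (the helper name and the errno mapping below are assumptions, not the driver's actual code):

/*
 * Illustrative decode of a virtual-command response word using the
 * macros added above; name and errno mapping are hypothetical.
 */
static int example_vcmd_response_to_errno(u64 resp, unsigned int *pasid)
{
	if (VCMD_VRSP_SC(resp) == VCMD_VRSP_SC_SUCCESS) {
		*pasid = VCMD_VRSP_RESULT_PASID(resp);
		return 0;
	}
	if (VCMD_VRSP_SC(resp) == VCMD_VRSP_SC_NO_PASID_AVAIL)
		return -ENOSPC;

	return -EINVAL;
}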
@@ -151,7 +151,7 @@ static int qi_flush_iec(struct intel_iommu *iommu, int index, int mask)
 	desc.qw2 = 0;
 	desc.qw3 = 0;
 
-	return qi_submit_sync(&desc, iommu);
+	return qi_submit_sync(iommu, &desc, 1, 0);
 }
 
 static int modify_irte(struct irq_2_iommu *irq_iommu,
...
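The call-site change above reflects the reworked queued-invalidation submission interface: descriptors are now passed as an array with an explicit count plus an options word, so a caller can submit several descriptors in one synchronous operation. Inferring the prototype from this call site (the parameter names are guesses, not taken from the header):

/*
 * Prototype inferred from the call site above; parameter names are
 * assumptions.
 */
int qi_submit_sync(struct intel_iommu *iommu, struct qi_desc *desc,
		   int count, unsigned int options);

/* A caller could then batch descriptors, e.g. two at once: */
static void example_batched_flush(struct intel_iommu *iommu)
{
	struct qi_desc desc[2] = {};

	/* ...fill desc[0] and desc[1] with two invalidation requests... */
	qi_submit_sync(iommu, desc, 2, 0);
}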
@@ -253,7 +253,7 @@ int iova_cache_get(void)
 					      SLAB_HWCACHE_ALIGN, NULL);
 		if (!iova_cache) {
 			mutex_unlock(&iova_cache_mutex);
-			printk(KERN_ERR "Couldn't create iova cache\n");
+			pr_err("Couldn't create iova cache\n");
 			return -ENOMEM;
 		}
 	}
@@ -718,7 +718,7 @@ copy_reserved_iova(struct iova_domain *from, struct iova_domain *to)
 		new_iova = reserve_iova(to, iova->pfn_lo, iova->pfn_hi);
 		if (!new_iova)
-			printk(KERN_ERR "Reserve iova range %lx@%lx failed\n",
-			       iova->pfn_lo, iova->pfn_lo);
+			pr_err("Reserve iova range %lx@%lx failed\n",
+			       iova->pfn_lo, iova->pfn_lo);
 	}
 	spin_unlock_irqrestore(&from->iova_rbtree_lock, flags);
...
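These hunks are part of the printk-format unification in this pull: raw printk(KERN_ERR ...) calls become pr_err(). Whether iova.c also gains a pr_fmt() prefix is not visible in this excerpt, but the usual pattern, shown only for illustration with an assumed prefix string, is:

/*
 * Illustration only: a pr_fmt definition placed before the includes
 * prefixes every pr_*() message in the file. The "iommu: " prefix is
 * an assumption, not necessarily what the IOVA code uses.
 */
#define pr_fmt(fmt)	"iommu: " fmt

#include <linux/kernel.h>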
@@ -441,38 +441,26 @@ static phys_addr_t mtk_iommu_iova_to_phys(struct iommu_domain *domain,
 	return pa;
 }
 
-static int mtk_iommu_add_device(struct device *dev)
+static struct iommu_device *mtk_iommu_probe_device(struct device *dev)
 {
 	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
 	struct mtk_iommu_data *data;
-	struct iommu_group *group;
 
 	if (!fwspec || fwspec->ops != &mtk_iommu_ops)
-		return -ENODEV; /* Not a iommu client device */
+		return ERR_PTR(-ENODEV); /* Not a iommu client device */
 
 	data = dev_iommu_priv_get(dev);
-	iommu_device_link(&data->iommu, dev);
 
-	group = iommu_group_get_for_dev(dev);
-	if (IS_ERR(group))
-		return PTR_ERR(group);
-
-	iommu_group_put(group);
-	return 0;
+	return &data->iommu;
 }
 
-static void mtk_iommu_remove_device(struct device *dev)
+static void mtk_iommu_release_device(struct device *dev)
 {
 	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
-	struct mtk_iommu_data *data;
 
 	if (!fwspec || fwspec->ops != &mtk_iommu_ops)
 		return;
 
-	data = dev_iommu_priv_get(dev);
-
-	iommu_device_unlink(&data->iommu, dev);
-	iommu_group_remove_device(dev);
 	iommu_fwspec_free(dev);
 }
 
@@ -526,8 +514,8 @@ static const struct iommu_ops mtk_iommu_ops = {
 	.flush_iotlb_all = mtk_iommu_flush_iotlb_all,
 	.iotlb_sync = mtk_iommu_iotlb_sync,
 	.iova_to_phys = mtk_iommu_iova_to_phys,
-	.add_device = mtk_iommu_add_device,
-	.remove_device = mtk_iommu_remove_device,
+	.probe_device = mtk_iommu_probe_device,
+	.release_device = mtk_iommu_release_device,
 	.device_group = mtk_iommu_device_group,
 	.of_xlate = mtk_iommu_of_xlate,
 	.pgsize_bitmap = SZ_4K | SZ_64K | SZ_1M | SZ_16M,
...