Commit bb97be23 authored by Linus Torvalds

Merge tag 'iommu-updates-v5.1' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu

Pull IOMMU updates from Joerg Roedel:

 - A big cleanup and optimization patch-set for the Tegra GART driver

 - Documentation updates and fixes for the IOMMU-API

 - Support for page request in Intel VT-d scalable mode

 - Intel VT-d dma_[un]map_resource() support

 - Updates to the ATS enabling code for PCI (acked by Bjorn) and Intel
   VT-d to align with the latest version of the ATS spec

 - Relaxed IRQ source checking in the Intel VT-d driver for some aliased
   devices, needed for future devices which send IRQ messages from more
   than one request-ID

 - IRQ remapping driver for Hyper-V

 - Patches to make generic IOVA and IO-Page-Table code usable outside of
   the IOMMU code

 - Various other small fixes and cleanups

* tag 'iommu-updates-v5.1' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (60 commits)
  iommu/vt-d: Get domain ID before clear pasid entry
  iommu/vt-d: Fix NULL pointer reference in intel_svm_bind_mm()
  iommu/vt-d: Set context field after value initialized
  iommu/vt-d: Disable ATS support on untrusted devices
  iommu/mediatek: Fix semicolon code style issue
  MAINTAINERS: Add Hyper-V IOMMU driver into Hyper-V CORE AND DRIVERS scope
  iommu/hyper-v: Add Hyper-V stub IOMMU driver
  x86/Hyper-V: Set x2apic destination mode to physical when x2apic is available
  PCI/ATS: Add inline to pci_prg_resp_pasid_required()
  iommu/vt-d: Check identity map for hot-added devices
  iommu: Fix IOMMU debugfs fallout
  iommu: Document iommu_ops.is_attach_deferred()
  iommu: Document iommu_ops.iotlb_sync_map()
  iommu/vt-d: Enable ATS only if the device uses page aligned address.
  PCI/ATS: Add pci_ats_page_aligned() interface
  iommu/vt-d: Fix PRI/PASID dependency issue.
  PCI/ATS: Add pci_prg_resp_pasid_required() interface.
  iommu/vt-d: Allow interrupts from the entire bus for aliased devices
  iommu/vt-d: Add helper to set an IRTE to verify only the bus number
  iommu: Fix flush_tlb_all typo
  ...
parents b7a7d1c1 d05e4c86
NVIDIA Tegra 20 GART

Required properties:
- compatible: "nvidia,tegra20-gart"
- reg: Two pairs of cells specifying the physical address and size of
  the memory controller registers and the GART aperture respectively.

Example:
	gart {
		compatible = "nvidia,tegra20-gart";
		reg = <0x7000f024 0x00000018	/* controller registers */
		       0x58000000 0x02000000>;	/* GART aperture */
	};
 NVIDIA Tegra20 MC(Memory Controller)

 Required properties:
-- compatible : "nvidia,tegra20-mc"
-- reg : Should contain 2 register ranges(address and length); see the
-  example below. Note that the MC registers are interleaved with the
-  GART registers, and hence must be represented as multiple ranges.
+- compatible : "nvidia,tegra20-mc-gart"
+- reg : Should contain 2 register ranges: physical base address and length of
+  the controller's registers and the GART aperture respectively.
+- clocks: Must contain an entry for each entry in clock-names.
+  See ../clocks/clock-bindings.txt for details.
+- clock-names: Must include the following entries:
+  - mc: the module's clock input
 - interrupts : Should contain MC General interrupt.
 - #reset-cells : Should be 1. This cell represents memory client module ID.
   The assignments may be found in header file <dt-bindings/memory/tegra20-mc.h>
   or in the TRM documentation.
+- #iommu-cells: Should be 0. This cell represents the number of cells in an
+  IOMMU specifier needed to encode an address. GART supports only a single
+  address space that is shared by all devices, therefore no additional
+  information needed for the address encoding.

 Example:
 	mc: memory-controller@7000f000 {
-		compatible = "nvidia,tegra20-mc";
-		reg = <0x7000f000 0x024
-		       0x7000f03c 0x3c4>;
-		interrupts = <0 77 0x04>;
+		compatible = "nvidia,tegra20-mc-gart";
+		reg = <0x7000f000 0x400		/* controller registers */
+		       0x58000000 0x02000000>;	/* GART aperture */
+		clocks = <&tegra_car TEGRA20_CLK_MC>;
+		clock-names = "mc";
+		interrupts = <GIC_SPI 77 0x04>;
 		#reset-cells = <1>;
+		#iommu-cells = <0>;
 	};

 	video-codec@6001a000 {
 		compatible = "nvidia,tegra20-vde";
 		...
 		resets = <&mc TEGRA20_MC_RESET_VDE>;
+		iommus = <&mc>;
 	};
@@ -7170,6 +7170,7 @@ F:	drivers/net/hyperv/
 F:	drivers/scsi/storvsc_drv.c
 F:	drivers/uio/uio_hv_generic.c
 F:	drivers/video/fbdev/hyperv_fb.c
+F:	drivers/iommu/hyperv_iommu.c
 F:	net/vmw_vsock/hyperv_transport.c
 F:	include/linux/hyperv.h
 F:	include/uapi/linux/hyperv.h
...
@@ -616,17 +616,14 @@ pmc@7000e400 {
 	};

 	mc: memory-controller@7000f000 {
-		compatible = "nvidia,tegra20-mc";
-		reg = <0x7000f000 0x024
-		       0x7000f03c 0x3c4>;
+		compatible = "nvidia,tegra20-mc-gart";
+		reg = <0x7000f000 0x400		/* controller registers */
+		       0x58000000 0x02000000>;	/* GART aperture */
+		clocks = <&tegra_car TEGRA20_CLK_MC>;
+		clock-names = "mc";
 		interrupts = <GIC_SPI 77 IRQ_TYPE_LEVEL_HIGH>;
 		#reset-cells = <1>;
-	};
-
-	iommu@7000f024 {
-		compatible = "nvidia,tegra20-gart";
-		reg = <0x7000f024 0x00000018	/* controller registers */
-		       0x58000000 0x02000000>;	/* GART aperture */
+		#iommu-cells = <0>;
 	};

 	memory-controller@7000f400 {
...
@@ -328,6 +328,18 @@ static void __init ms_hyperv_init_platform(void)
 # ifdef CONFIG_SMP
 	smp_ops.smp_prepare_boot_cpu = hv_smp_prepare_boot_cpu;
 # endif
+
+	/*
+	 * Hyper-V doesn't provide irq remapping for IO-APIC. To enable x2apic,
+	 * set x2apic destination mode to physcial mode when x2apic is available
+	 * and Hyper-V IOMMU driver makes sure cpus assigned with IO-APIC irqs
+	 * have 8-bit APIC id.
+	 */
+# ifdef CONFIG_X86_X2APIC
+	if (x2apic_supported())
+		x2apic_phys = 1;
+# endif
 #endif
 }
...
@@ -56,7 +56,7 @@ obj-y				+= tty/
 obj-y				+= char/

 # iommu/ comes before gpu as gpu are using iommu controllers
-obj-$(CONFIG_IOMMU_SUPPORT)	+= iommu/
+obj-y				+= iommu/

 # gpu/ comes after char for AGP vs DRM startup and after iommu
 obj-y				+= gpu/
...
+# The IOVA library may also be used by non-IOMMU_API users
+config IOMMU_IOVA
+	tristate
+
 # IOMMU_API always gets selected by whoever wants it.
 config IOMMU_API
 	bool
@@ -81,9 +85,6 @@ config IOMMU_DEFAULT_PASSTHROUGH

 	  If unsure, say N here.

-config IOMMU_IOVA
-	tristate
-
 config OF_IOMMU
 	def_bool y
 	depends on OF && IOMMU_API
@@ -282,6 +283,7 @@ config ROCKCHIP_IOMMU
 config TEGRA_IOMMU_GART
 	bool "Tegra GART IOMMU Support"
 	depends on ARCH_TEGRA_2x_SOC
+	depends on TEGRA_MC
 	select IOMMU_API
 	help
 	  Enables support for remapping discontiguous physical memory
@@ -435,4 +437,13 @@ config QCOM_IOMMU
 	help
 	  Support for IOMMU on certain Qualcomm SoCs.

+config HYPERV_IOMMU
+	bool "Hyper-V x2APIC IRQ Handling"
+	depends on HYPERV
+	select IOMMU_API
+	default HYPERV
+	help
+	  Stub IOMMU driver to handle IRQs as to allow Hyper-V Linux
+	  guests to run with x2APIC mode enabled.
+
 endif # IOMMU_SUPPORT
@@ -32,3 +32,4 @@ obj-$(CONFIG_EXYNOS_IOMMU) += exynos-iommu.o
 obj-$(CONFIG_FSL_PAMU) += fsl_pamu.o fsl_pamu_domain.o
 obj-$(CONFIG_S390_IOMMU) += s390-iommu.o
 obj-$(CONFIG_QCOM_IOMMU) += qcom_iommu.o
+obj-$(CONFIG_HYPERV_IOMMU) += hyperv-iommu.o
@@ -18,6 +18,7 @@
  */

 #define pr_fmt(fmt)     "AMD-Vi: " fmt
+#define dev_fmt(fmt)    pr_fmt(fmt)

 #include <linux/ratelimit.h>
 #include <linux/pci.h>
@@ -279,10 +280,10 @@ static u16 get_alias(struct device *dev)
 		return pci_alias;
 	}

-	pr_info("Using IVRS reported alias %02x:%02x.%d "
-		"for device %s[%04x:%04x], kernel reported alias "
+	pci_info(pdev, "Using IVRS reported alias %02x:%02x.%d "
+		"for device [%04x:%04x], kernel reported alias "
 		"%02x:%02x.%d\n", PCI_BUS_NUM(ivrs_alias), PCI_SLOT(ivrs_alias),
-		PCI_FUNC(ivrs_alias), dev_name(dev), pdev->vendor, pdev->device,
+		PCI_FUNC(ivrs_alias), pdev->vendor, pdev->device,
 		PCI_BUS_NUM(pci_alias), PCI_SLOT(pci_alias),
 		PCI_FUNC(pci_alias));

@@ -293,9 +294,8 @@ static u16 get_alias(struct device *dev)
 	if (pci_alias == devid &&
 	    PCI_BUS_NUM(ivrs_alias) == pdev->bus->number) {
 		pci_add_dma_alias(pdev, ivrs_alias & 0xff);
-		pr_info("Added PCI DMA alias %02x.%d for %s\n",
-			PCI_SLOT(ivrs_alias), PCI_FUNC(ivrs_alias),
-			dev_name(dev));
+		pci_info(pdev, "Added PCI DMA alias %02x.%d\n",
+			PCI_SLOT(ivrs_alias), PCI_FUNC(ivrs_alias));
 	}

 	return ivrs_alias;
@@ -545,7 +545,7 @@ static void amd_iommu_report_page_fault(u16 devid, u16 domain_id,
 		dev_data = get_dev_data(&pdev->dev);

 	if (dev_data && __ratelimit(&dev_data->rs)) {
-		dev_err(&pdev->dev, "Event logged [IO_PAGE_FAULT domain=0x%04x address=0x%llx flags=0x%04x]\n",
+		pci_err(pdev, "Event logged [IO_PAGE_FAULT domain=0x%04x address=0x%llx flags=0x%04x]\n",
 			domain_id, address, flags);
 	} else if (printk_ratelimit()) {
 		pr_err("Event logged [IO_PAGE_FAULT device=%02x:%02x.%x domain=0x%04x address=0x%llx flags=0x%04x]\n",
@@ -2258,8 +2258,7 @@ static int amd_iommu_add_device(struct device *dev)
 	ret = iommu_init_device(dev);
 	if (ret) {
 		if (ret != -ENOTSUPP)
-			pr_err("Failed to initialize device %s - trying to proceed anyway\n",
-				dev_name(dev));
+			dev_err(dev, "Failed to initialize - trying to proceed anyway\n");

 		iommu_ignore_device(dev);
 		dev->dma_ops = NULL;
@@ -2569,6 +2568,7 @@ static int map_sg(struct device *dev, struct scatterlist *sglist,
 	struct scatterlist *s;
 	unsigned long address;
 	u64 dma_mask;
+	int ret;

 	domain = get_domain(dev);
 	if (IS_ERR(domain))
@@ -2591,7 +2591,6 @@ static int map_sg(struct device *dev, struct scatterlist *sglist,

 		for (j = 0; j < pages; ++j) {
 			unsigned long bus_addr, phys_addr;
-			int ret;

 			bus_addr  = address + s->dma_address + (j << PAGE_SHIFT);
 			phys_addr = (sg_phys(s) & PAGE_MASK) + (j << PAGE_SHIFT);
@@ -2612,8 +2611,8 @@ static int map_sg(struct device *dev, struct scatterlist *sglist,
 	return nelems;

 out_unmap:
-	pr_err("%s: IOMMU mapping error in map_sg (io-pages: %d)\n",
-	       dev_name(dev), npages);
+	dev_err(dev, "IOMMU mapping error in map_sg (io-pages: %d reason: %d)\n",
+		npages, ret);

 	for_each_sg(sglist, s, nelems, i) {
 		int j, pages = iommu_num_pages(sg_phys(s), s->length, PAGE_SIZE);
@@ -2807,7 +2806,7 @@ static int init_reserved_iova_ranges(void)
 					     IOVA_PFN(r->start),
 					     IOVA_PFN(r->end));
 			if (!val) {
-				pr_err("Reserve pci-resource range failed\n");
+				pci_err(pdev, "Reserve pci-resource range %pR failed\n", r);
 				return -ENOMEM;
 			}
 		}
@@ -3177,8 +3176,7 @@ static void amd_iommu_get_resv_regions(struct device *dev,
 					 length, prot,
 					 IOMMU_RESV_DIRECT);
 		if (!region) {
-			pr_err("Out of memory allocating dm-regions for %s\n",
-				dev_name(dev));
+			dev_err(dev, "Out of memory allocating dm-regions\n");
 			return;
 		}
 		list_add_tail(&region->list, head);
...
@@ -18,6 +18,7 @@
  */

 #define pr_fmt(fmt)     "AMD-Vi: " fmt
+#define dev_fmt(fmt)    pr_fmt(fmt)

 #include <linux/pci.h>
 #include <linux/acpi.h>
@@ -1457,8 +1458,7 @@ static void amd_iommu_erratum_746_workaround(struct amd_iommu *iommu)
 	pci_write_config_dword(iommu->dev, 0xf0, 0x90 | (1 << 8));
 	pci_write_config_dword(iommu->dev, 0xf4, value | 0x4);

-	pr_info("Applying erratum 746 workaround for IOMMU at %s\n",
-		dev_name(&iommu->dev->dev));
+	pci_info(iommu->dev, "Applying erratum 746 workaround\n");

 	/* Clear the enable writing bit */
 	pci_write_config_dword(iommu->dev, 0xf0, 0x90);
@@ -1488,8 +1488,7 @@ static void amd_iommu_ats_write_check_workaround(struct amd_iommu *iommu)
 	/* Set L2_DEBUG_3[AtsIgnoreIWDis] = 1 */
 	iommu_write_l2(iommu, 0x47, value | BIT(0));

-	pr_info("Applying ATS write check workaround for IOMMU at %s\n",
-		dev_name(&iommu->dev->dev));
+	pci_info(iommu->dev, "Applying ATS write check workaround\n");
 }

 /*
@@ -1665,6 +1664,7 @@ static int iommu_pc_get_set_reg(struct amd_iommu *iommu, u8 bank, u8 cntr,

 static void init_iommu_perf_ctr(struct amd_iommu *iommu)
 {
+	struct pci_dev *pdev = iommu->dev;
 	u64 val = 0xabcd, val2 = 0;

 	if (!iommu_feature(iommu, FEATURE_PC))
@@ -1676,12 +1676,12 @@ static void init_iommu_perf_ctr(struct amd_iommu *iommu)
 	if ((iommu_pc_get_set_reg(iommu, 0, 0, 0, &val, true)) ||
 	    (iommu_pc_get_set_reg(iommu, 0, 0, 0, &val2, false)) ||
 	    (val != val2)) {
-		pr_err("Unable to write to IOMMU perf counter.\n");
+		pci_err(pdev, "Unable to write to IOMMU perf counter.\n");
 		amd_iommu_pc_present = false;
 		return;
 	}

-	pr_info("IOMMU performance counters supported\n");
+	pci_info(pdev, "IOMMU performance counters supported\n");

 	val = readl(iommu->mmio_base + MMIO_CNTR_CONF_OFFSET);
 	iommu->max_banks = (u8) ((val >> 12) & 0x3f);
@@ -1840,14 +1840,14 @@ static void print_iommu_info(void)
 	struct amd_iommu *iommu;

 	for_each_iommu(iommu) {
+		struct pci_dev *pdev = iommu->dev;
 		int i;

-		pr_info("Found IOMMU at %s cap 0x%hx\n",
-			dev_name(&iommu->dev->dev), iommu->cap_ptr);
+		pci_info(pdev, "Found IOMMU cap 0x%hx\n", iommu->cap_ptr);

 		if (iommu->cap & (1 << IOMMU_CAP_EFR)) {
-			pr_info("Extended features (%#llx):\n",
+			pci_info(pdev, "Extended features (%#llx):\n",
 				iommu->features);

 			for (i = 0; i < ARRAY_SIZE(feat_str); ++i) {
 				if (iommu_feature(iommu, (1ULL << i)))
 					pr_cont(" %s", feat_str[i]);
...
@@ -370,29 +370,6 @@ static struct pasid_state *mn_to_state(struct mmu_notifier *mn)
 	return container_of(mn, struct pasid_state, mn);
 }

-static void __mn_flush_page(struct mmu_notifier *mn,
-			    unsigned long address)
-{
-	struct pasid_state *pasid_state;
-	struct device_state *dev_state;
-
-	pasid_state = mn_to_state(mn);
-	dev_state   = pasid_state->device_state;
-
-	amd_iommu_flush_page(dev_state->domain, pasid_state->pasid, address);
-}
-
-static int mn_clear_flush_young(struct mmu_notifier *mn,
-				struct mm_struct *mm,
-				unsigned long start,
-				unsigned long end)
-{
-	for (; start < end; start += PAGE_SIZE)
-		__mn_flush_page(mn, start);
-
-	return 0;
-}
-
 static void mn_invalidate_range(struct mmu_notifier *mn,
 				struct mm_struct *mm,
 				unsigned long start, unsigned long end)
@@ -430,7 +407,6 @@ static void mn_release(struct mmu_notifier *mn, struct mm_struct *mm)

 static const struct mmu_notifier_ops iommu_mn = {
 	.release		= mn_release,
-	.clear_flush_young      = mn_clear_flush_young,
 	.invalidate_range       = mn_invalidate_range,
 };
...
@@ -18,6 +18,7 @@
 #include <linux/dma-iommu.h>
 #include <linux/err.h>
 #include <linux/interrupt.h>
+#include <linux/io-pgtable.h>
 #include <linux/iommu.h>
 #include <linux/iopoll.h>
 #include <linux/init.h>
@@ -32,8 +33,6 @@
 #include <linux/amba/bus.h>

-#include "io-pgtable.h"
-
 /* MMIO registers */
 #define ARM_SMMU_IDR0			0x0
 #define IDR0_ST_LVL			GENMASK(28, 27)
...
@@ -39,6 +39,7 @@
 #include <linux/interrupt.h>
 #include <linux/io.h>
 #include <linux/io-64-nonatomic-hi-lo.h>
+#include <linux/io-pgtable.h>
 #include <linux/iommu.h>
 #include <linux/iopoll.h>
 #include <linux/init.h>
@@ -56,7 +57,6 @@
 #include <linux/amba/bus.h>
 #include <linux/fsl/mc.h>

-#include "io-pgtable.h"
 #include "arm-smmu-regs.h"

 #define ARM_MMU500_ACTLR_CPRE		(1 << 1)
...
@@ -289,7 +289,7 @@ int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
 {
 	struct iommu_dma_cookie *cookie = domain->iova_cookie;
 	struct iova_domain *iovad = &cookie->iovad;
-	unsigned long order, base_pfn, end_pfn;
+	unsigned long order, base_pfn;
 	int attr;

 	if (!cookie || cookie->type != IOMMU_DMA_IOVA_COOKIE)
@@ -298,7 +298,6 @@ int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
 	/* Use the smallest supported page size for IOVA granularity */
 	order = __ffs(domain->pgsize_bitmap);
 	base_pfn = max_t(unsigned long, 1, base >> order);
-	end_pfn = (base + size - 1) >> order;

 	/* Check the domain allows at least some access to the device... */
 	if (domain->geometry.force_aperture) {
...
// SPDX-License-Identifier: GPL-2.0
/*
 * Hyper-V stub IOMMU driver.
 *
 * Copyright (C) 2019, Microsoft, Inc.
 *
 * Author : Lan Tianyu <Tianyu.Lan@microsoft.com>
 */

#include <linux/types.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/iommu.h>
#include <linux/module.h>

#include <asm/apic.h>
#include <asm/cpu.h>
#include <asm/hw_irq.h>
#include <asm/io_apic.h>
#include <asm/irq_remapping.h>
#include <asm/hypervisor.h>

#include "irq_remapping.h"

#ifdef CONFIG_IRQ_REMAP

/*
 * According 82093AA IO-APIC spec , IO APIC has a 24-entry Interrupt
 * Redirection Table. Hyper-V exposes one single IO-APIC and so define
 * 24 IO APIC remmapping entries.
 */
#define IOAPIC_REMAPPING_ENTRY 24

static cpumask_t ioapic_max_cpumask = { CPU_BITS_NONE };
static struct irq_domain *ioapic_ir_domain;

static int hyperv_ir_set_affinity(struct irq_data *data,
		const struct cpumask *mask, bool force)
{
	struct irq_data *parent = data->parent_data;
	struct irq_cfg *cfg = irqd_cfg(data);
	struct IO_APIC_route_entry *entry;
	int ret;

	/* Return error If new irq affinity is out of ioapic_max_cpumask. */
	if (!cpumask_subset(mask, &ioapic_max_cpumask))
		return -EINVAL;

	ret = parent->chip->irq_set_affinity(parent, mask, force);
	if (ret < 0 || ret == IRQ_SET_MASK_OK_DONE)
		return ret;

	entry = data->chip_data;
	entry->dest = cfg->dest_apicid;
	entry->vector = cfg->vector;
	send_cleanup_vector(cfg);

	return 0;
}

static struct irq_chip hyperv_ir_chip = {
	.name			= "HYPERV-IR",
	.irq_ack		= apic_ack_irq,
	.irq_set_affinity	= hyperv_ir_set_affinity,
};

static int hyperv_irq_remapping_alloc(struct irq_domain *domain,
				     unsigned int virq, unsigned int nr_irqs,
				     void *arg)
{
	struct irq_alloc_info *info = arg;
	struct irq_data *irq_data;
	struct irq_desc *desc;
	int ret = 0;

	if (!info || info->type != X86_IRQ_ALLOC_TYPE_IOAPIC || nr_irqs > 1)
		return -EINVAL;

	ret = irq_domain_alloc_irqs_parent(domain, virq, nr_irqs, arg);
	if (ret < 0)
		return ret;

	irq_data = irq_domain_get_irq_data(domain, virq);
	if (!irq_data) {
		irq_domain_free_irqs_common(domain, virq, nr_irqs);
		return -EINVAL;
	}

	irq_data->chip = &hyperv_ir_chip;

	/*
	 * If there is interrupt remapping function of IOMMU, setting irq
	 * affinity only needs to change IRTE of IOMMU. But Hyper-V doesn't
	 * support interrupt remapping function, setting irq affinity of IO-APIC
	 * interrupts still needs to change IO-APIC registers. But ioapic_
	 * configure_entry() will ignore value of cfg->vector and cfg->
	 * dest_apicid when IO-APIC's parent irq domain is not the vector
	 * domain.(See ioapic_configure_entry()) In order to setting vector
	 * and dest_apicid to IO-APIC register, IO-APIC entry pointer is saved
	 * in the chip_data and hyperv_irq_remapping_activate()/hyperv_ir_set_
	 * affinity() set vector and dest_apicid directly into IO-APIC entry.
	 */
	irq_data->chip_data = info->ioapic_entry;

	/*
	 * Hypver-V IO APIC irq affinity should be in the scope of
	 * ioapic_max_cpumask because no irq remapping support.
	 */
	desc = irq_data_to_desc(irq_data);
	cpumask_copy(desc->irq_common_data.affinity, &ioapic_max_cpumask);

	return 0;
}

static void hyperv_irq_remapping_free(struct irq_domain *domain,
				 unsigned int virq, unsigned int nr_irqs)
{
	irq_domain_free_irqs_common(domain, virq, nr_irqs);
}

static int hyperv_irq_remapping_activate(struct irq_domain *domain,
			  struct irq_data *irq_data, bool reserve)
{
	struct irq_cfg *cfg = irqd_cfg(irq_data);
	struct IO_APIC_route_entry *entry = irq_data->chip_data;

	entry->dest = cfg->dest_apicid;
	entry->vector = cfg->vector;

	return 0;
}

static struct irq_domain_ops hyperv_ir_domain_ops = {
	.alloc = hyperv_irq_remapping_alloc,
	.free = hyperv_irq_remapping_free,
	.activate = hyperv_irq_remapping_activate,
};

static int __init hyperv_prepare_irq_remapping(void)
{
	struct fwnode_handle *fn;
	int i;

	if (!hypervisor_is_type(X86_HYPER_MS_HYPERV) ||
	    !x2apic_supported())
		return -ENODEV;

	fn = irq_domain_alloc_named_id_fwnode("HYPERV-IR", 0);
	if (!fn)
		return -ENOMEM;

	ioapic_ir_domain =
		irq_domain_create_hierarchy(arch_get_ir_parent_domain(),
				0, IOAPIC_REMAPPING_ENTRY, fn,
				&hyperv_ir_domain_ops, NULL);

	irq_domain_free_fwnode(fn);

	/*
	 * Hyper-V doesn't provide irq remapping function for
	 * IO-APIC and so IO-APIC only accepts 8-bit APIC ID.
	 * Cpu's APIC ID is read from ACPI MADT table and APIC IDs
	 * in the MADT table on Hyper-v are sorted monotonic increasingly.
	 * APIC ID reflects cpu topology. There maybe some APIC ID
	 * gaps when cpu number in a socket is not power of two. Prepare
	 * max cpu affinity for IOAPIC irqs. Scan cpu 0-255 and set cpu
	 * into ioapic_max_cpumask if its APIC ID is less than 256.
	 */
	for (i = min_t(unsigned int, num_possible_cpus() - 1, 255); i >= 0; i--)
		if (cpu_physical_id(i) < 256)
			cpumask_set_cpu(i, &ioapic_max_cpumask);

	return 0;
}

static int __init hyperv_enable_irq_remapping(void)
{
	return IRQ_REMAP_X2APIC_MODE;
}

static struct irq_domain *hyperv_get_ir_irq_domain(struct irq_alloc_info *info)
{
	if (info->type == X86_IRQ_ALLOC_TYPE_IOAPIC)
		return ioapic_ir_domain;
	else
		return NULL;
}

struct irq_remap_ops hyperv_irq_remap_ops = {
	.prepare		= hyperv_prepare_irq_remapping,
	.enable			= hyperv_enable_irq_remapping,
	.get_ir_irq_domain	= hyperv_get_ir_irq_domain,
};

#endif
@@ -466,8 +466,8 @@ void intel_pasid_tear_down_entry(struct intel_iommu *iommu,
 	if (WARN_ON(!pte))
 		return;

-	intel_pasid_clear_entry(dev, pasid);
 	did = pasid_get_domain_id(pte);
+	intel_pasid_clear_entry(dev, pasid);

 	if (!ecap_coherent(iommu->ecap))
 		clflush_cache_range(pte, sizeof(*pte));
...
@@ -180,14 +180,6 @@ static void intel_flush_svm_range(struct intel_svm *svm, unsigned long address,
 	rcu_read_unlock();
 }

-static void intel_change_pte(struct mmu_notifier *mn, struct mm_struct *mm,
-			     unsigned long address, pte_t pte)
-{
-	struct intel_svm *svm = container_of(mn, struct intel_svm, notifier);
-
-	intel_flush_svm_range(svm, address, 1, 1, 0);
-}
-
 /* Pages have been freed at this point */
 static void intel_invalidate_range(struct mmu_notifier *mn,
 				   struct mm_struct *mm,
@@ -227,7 +219,6 @@ static void intel_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)

 static const struct mmu_notifier_ops intel_mmuops = {
 	.release = intel_mm_release,
-	.change_pte = intel_change_pte,
 	.invalidate_range = intel_invalidate_range,
 };

@@ -243,7 +234,7 @@ int intel_svm_bind_mm(struct device *dev, int *pasid, int flags, struct svm_dev_
 	int pasid_max;
 	int ret;

-	if (!iommu)
+	if (!iommu || dmar_disabled)
 		return -EINVAL;

 	if (dev_is_pci(dev)) {
@@ -470,20 +461,31 @@ EXPORT_SYMBOL_GPL(intel_svm_is_pasid_valid);

 /* Page request queue descriptor */
 struct page_req_dsc {
-	u64 srr:1;
-	u64 bof:1;
-	u64 pasid_present:1;
-	u64 lpig:1;
-	u64 pasid:20;
-	u64 bus:8;
-	u64 private:23;
-	u64 prg_index:9;
-	u64 rd_req:1;
-	u64 wr_req:1;
-	u64 exe_req:1;
-	u64 priv_req:1;
-	u64 devfn:8;
-	u64 addr:52;
+	union {
+		struct {
+			u64 type:8;
+			u64 pasid_present:1;
+			u64 priv_data_present:1;
+			u64 rsvd:6;
+			u64 rid:16;
+			u64 pasid:20;
+			u64 exe_req:1;
+			u64 pm_req:1;
+			u64 rsvd2:10;
+		};
+		u64 qw_0;
+	};
+	union {
+		struct {
+			u64 rd_req:1;
+			u64 wr_req:1;
+			u64 lpig:1;
+			u64 prg_index:9;
+			u64 addr:52;
+		};
+		u64 qw_1;
+	};
+	u64 priv_data[2];
 };

 #define PRQ_RING_MASK ((0x1000 << PRQ_ORDER) - 0x10)
@@ -596,7 +598,7 @@ static irqreturn_t prq_event_thread(int irq, void *d)
 		/* Accounting for major/minor faults? */
 		rcu_read_lock();
 		list_for_each_entry_rcu(sdev, &svm->devs, list) {
-			if (sdev->sid == PCI_DEVID(req->bus, req->devfn))
+			if (sdev->sid == req->rid)
 				break;
 		}
 		/* Other devices can go away, but the drivers are not permitted
@@ -609,33 +611,35 @@ static irqreturn_t prq_event_thread(int irq, void *d)

 		if (sdev && sdev->ops && sdev->ops->fault_cb) {
 			int rwxp = (req->rd_req << 3) | (req->wr_req << 2) |
-				(req->exe_req << 1) | (req->priv_req);
-			sdev->ops->fault_cb(sdev->dev, req->pasid, req->addr, req->private, rwxp, result);
+				(req->exe_req << 1) | (req->pm_req);
+			sdev->ops->fault_cb(sdev->dev, req->pasid, req->addr,
+					    req->priv_data, rwxp, result);
 		}
 		/* We get here in the error case where the PASID lookup failed,
 		   and these can be NULL. Do not use them below this point! */
 		sdev = NULL;
 		svm = NULL;
 	no_pasid:
-		if (req->lpig) {
-			/* Page Group Response */
+		if (req->lpig || req->priv_data_present) {
+			/*
+			 * Per VT-d spec. v3.0 ch7.7, system software must
+			 * respond with page group response if private data
+			 * is present (PDP) or last page in group (LPIG) bit
+			 * is set. This is an additional VT-d feature beyond
+			 * PCI ATS spec.
+			 */
 			resp.qw0 = QI_PGRP_PASID(req->pasid) |
-				QI_PGRP_DID((req->bus << 8) | req->devfn) |
+				QI_PGRP_DID(req->rid) |
 				QI_PGRP_PASID_P(req->pasid_present) |
+				QI_PGRP_PDP(req->pasid_present) |
+				QI_PGRP_RESP_CODE(result) |
 				QI_PGRP_RESP_TYPE;
 			resp.qw1 = QI_PGRP_IDX(req->prg_index) |
-				QI_PGRP_PRIV(req->private) |
-				QI_PGRP_RESP_CODE(result);
-		} else if (req->srr) {
-			/* Page Stream Response */
-			resp.qw0 = QI_PSTRM_IDX(req->prg_index) |
-				QI_PSTRM_PRIV(req->private) |
-				QI_PSTRM_BUS(req->bus) |
-				QI_PSTRM_PASID(req->pasid) |
-				QI_PSTRM_RESP_TYPE;
-			resp.qw1 = QI_PSTRM_ADDR(address) |
-				QI_PSTRM_DEVFN(req->devfn) |
-				QI_PSTRM_RESP_CODE(result);
+				QI_PGRP_LPIG(req->lpig);
+
+			if (req->priv_data_present)
+				memcpy(&resp.qw2, req->priv_data,
+				       sizeof(req->priv_data));
 		}
 		resp.qw2 = 0;
 		resp.qw3 = 0;
...
@@ -294,6 +294,18 @@ static void set_irte_sid(struct irte *irte, unsigned int svt,
	irte->sid = sid;
}
/*
* Set an IRTE to match only the bus number. Interrupt requests that reference
* this IRTE must have a requester-id whose bus number is between or equal
* to the start_bus and end_bus arguments.
*/
static void set_irte_verify_bus(struct irte *irte, unsigned int start_bus,
unsigned int end_bus)
{
set_irte_sid(irte, SVT_VERIFY_BUS, SQ_ALL_16,
(start_bus << 8) | end_bus);
}
static int set_ioapic_sid(struct irte *irte, int apic)
{
	int i;
@@ -356,6 +368,8 @@ static int set_hpet_sid(struct irte *irte, u8 id)
struct set_msi_sid_data {
	struct pci_dev *pdev;
	u16 alias;
	int count;
	int busmatch_count;
};

static int set_msi_sid_cb(struct pci_dev *pdev, u16 alias, void *opaque)

@@ -364,6 +378,10 @@ static int set_msi_sid_cb(struct pci_dev *pdev, u16 alias, void *opaque)
	data->pdev = pdev;
	data->alias = alias;
	data->count++;
	if (PCI_BUS_NUM(alias) == pdev->bus->number)
		data->busmatch_count++;

	return 0;
}

@@ -375,6 +393,8 @@ static int set_msi_sid(struct irte *irte, struct pci_dev *dev)
	if (!irte || !dev)
		return -1;

	data.count = 0;
	data.busmatch_count = 0;
	pci_for_each_dma_alias(dev, set_msi_sid_cb, &data);
	/*

@@ -383,6 +403,11 @@ static int set_msi_sid(struct irte *irte, struct pci_dev *dev)
	 * device is the case of a PCIe-to-PCI bridge, where the alias is for
	 * the subordinate bus.  In this case we can only verify the bus.
	 *
	 * If there are multiple aliases, all with the same bus number,
	 * then all we can do is verify the bus. This is typical in NTB
	 * hardware which use proxy IDs where the device will generate traffic
	 * from multiple devfn numbers on the same bus.
	 *
	 * If the alias device is on a different bus than our source device
	 * then we have a topology based alias, use it.
	 *

@@ -391,9 +416,10 @@ static int set_msi_sid(struct irte *irte, struct pci_dev *dev)
	 * original device.
	 */
	if (PCI_BUS_NUM(data.alias) != data.pdev->bus->number)
-		set_irte_sid(irte, SVT_VERIFY_BUS, SQ_ALL_16,
-			     PCI_DEVID(PCI_BUS_NUM(data.alias),
-				       dev->bus->number));
+		set_irte_verify_bus(irte, PCI_BUS_NUM(data.alias),
+				    dev->bus->number);
+	else if (data.count >= 2 && data.busmatch_count == data.count)
+		set_irte_verify_bus(irte, dev->bus->number, dev->bus->number);
	else if (data.pdev->bus->number != dev->bus->number)
		set_irte_sid(irte, SVT_VERIFY_SID_SQ, SQ_ALL_16, data.alias);
	else
......
@@ -35,6 +35,7 @@
#include <linux/atomic.h>
#include <linux/dma-mapping.h>
#include <linux/gfp.h>
#include <linux/io-pgtable.h>
#include <linux/iommu.h>
#include <linux/kernel.h>
#include <linux/kmemleak.h>

@@ -45,8 +46,6 @@
#include <asm/barrier.h>

-#include "io-pgtable.h"

/* Struct accessors */
#define io_pgtable_to_data(x)						\
	container_of((x), struct arm_v7s_io_pgtable, iop)

@@ -217,7 +216,8 @@ static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
		if (dma != phys)
			goto out_unmap;
	}
-	kmemleak_ignore(table);
+	if (lvl == 2)
+		kmemleak_ignore(table);
	return table;

out_unmap:
......
@@ -22,6 +22,7 @@
#include <linux/atomic.h>
#include <linux/bitops.h>
#include <linux/io-pgtable.h>
#include <linux/iommu.h>
#include <linux/kernel.h>
#include <linux/sizes.h>

@@ -31,8 +32,6 @@
#include <asm/barrier.h>

-#include "io-pgtable.h"

#define ARM_LPAE_MAX_ADDR_BITS		52
#define ARM_LPAE_S2_MAX_CONCAT_PAGES	16
#define ARM_LPAE_MAX_LEVELS		4
......
@@ -19,11 +19,10 @@
 */

#include <linux/bug.h>
#include <linux/io-pgtable.h>
#include <linux/kernel.h>
#include <linux/types.h>

-#include "io-pgtable.h"

static const struct io_pgtable_init_fns *
io_pgtable_init_table[IO_PGTABLE_NUM_FMTS] = {
#ifdef CONFIG_IOMMU_IO_PGTABLE_LPAE

@@ -61,6 +60,7 @@ struct io_pgtable_ops *alloc_io_pgtable_ops(enum io_pgtable_fmt fmt,
	return &iop->ops;
}
EXPORT_SYMBOL_GPL(alloc_io_pgtable_ops);

/*
 * It is the IOMMU driver's responsibility to ensure that the page table

@@ -77,3 +77,4 @@ void free_io_pgtable_ops(struct io_pgtable_ops *ops)
	io_pgtable_tlb_flush_all(iop);
	io_pgtable_init_table[iop->fmt]->free(iop);
}
EXPORT_SYMBOL_GPL(free_io_pgtable_ops);
@@ -12,6 +12,7 @@
#include <linux/debugfs.h>

struct dentry *iommu_debugfs_dir;
EXPORT_SYMBOL_GPL(iommu_debugfs_dir);

/**
 * iommu_debugfs_setup - create the top-level iommu directory in debugfs

@@ -23,9 +24,9 @@ struct dentry *iommu_debugfs_dir;
 * Emit a strong warning at boot time to indicate that this feature is
 * enabled.
 *
- * This function is called from iommu_init; drivers may then call
- * iommu_debugfs_new_driver_dir() to instantiate a vendor-specific
- * directory to be used to expose internal data.
+ * This function is called from iommu_init; drivers may then use
+ * iommu_debugfs_dir to instantiate a vendor-specific directory to be used
+ * to expose internal data.
 */
void iommu_debugfs_setup(void)
{

@@ -48,19 +49,3 @@ void iommu_debugfs_setup(void)
		pr_warn("*************************************************************\n");
	}
}
-/**
- * iommu_debugfs_new_driver_dir - create a vendor directory under debugfs/iommu
- * @vendor: name of the vendor-specific subdirectory to create
- *
- * This function is called by an IOMMU driver to create the top-level debugfs
- * directory for that driver.
- *
- * Return: upon success, a pointer to the dentry for the new directory.
- *         NULL in case of failure.
- */
-struct dentry *iommu_debugfs_new_driver_dir(const char *vendor)
-{
-	return debugfs_create_dir(vendor, iommu_debugfs_dir);
-}
-EXPORT_SYMBOL_GPL(iommu_debugfs_new_driver_dir);
@@ -668,7 +668,7 @@ int iommu_group_add_device(struct iommu_group *group, struct device *dev)
	trace_add_device_to_group(group->id, dev);

-	pr_info("Adding device %s to group %d\n", dev_name(dev), group->id);
+	dev_info(dev, "Adding to iommu group %d\n", group->id);

	return 0;

@@ -684,7 +684,7 @@ int iommu_group_add_device(struct iommu_group *group, struct device *dev)
	sysfs_remove_link(&dev->kobj, "iommu_group");
err_free_device:
	kfree(device);
-	pr_err("Failed to add device %s to group %d: %d\n", dev_name(dev), group->id, ret);
+	dev_err(dev, "Failed to add to iommu group %d: %d\n", group->id, ret);
	return ret;
}
EXPORT_SYMBOL_GPL(iommu_group_add_device);

@@ -701,7 +701,7 @@ void iommu_group_remove_device(struct device *dev)
	struct iommu_group *group = dev->iommu_group;
	struct group_device *tmp_device, *device = NULL;

-	pr_info("Removing device %s from group %d\n", dev_name(dev), group->id);
+	dev_info(dev, "Removing from iommu group %d\n", group->id);

	/* Pre-notify listeners that a device is being removed. */
	blocking_notifier_call_chain(&group->notifier,
@@ -1585,13 +1585,14 @@ static size_t iommu_pgsize(struct iommu_domain *domain,
int iommu_map(struct iommu_domain *domain, unsigned long iova,
	      phys_addr_t paddr, size_t size, int prot)
{
	const struct iommu_ops *ops = domain->ops;
	unsigned long orig_iova = iova;
	unsigned int min_pagesz;
	size_t orig_size = size;
	phys_addr_t orig_paddr = paddr;
	int ret = 0;

-	if (unlikely(domain->ops->map == NULL ||
+	if (unlikely(ops->map == NULL ||
		     domain->pgsize_bitmap == 0UL))
		return -ENODEV;

@@ -1620,7 +1621,7 @@ int iommu_map(struct iommu_domain *domain, unsigned long iova,
		pr_debug("mapping: iova 0x%lx pa %pa pgsize 0x%zx\n",
			 iova, &paddr, pgsize);

-		ret = domain->ops->map(domain, iova, paddr, pgsize, prot);
+		ret = ops->map(domain, iova, paddr, pgsize, prot);
		if (ret)
			break;

@@ -1629,6 +1630,9 @@ int iommu_map(struct iommu_domain *domain, unsigned long iova,
		size -= pgsize;
	}

	if (ops->iotlb_sync_map)
		ops->iotlb_sync_map(domain);

	/* unroll mapping in case something went wrong */
	if (ret)
		iommu_unmap(domain, orig_iova, orig_size - size);
@@ -1951,7 +1955,7 @@ int iommu_request_dm_for_dev(struct device *dev)
	iommu_domain_free(group->default_domain);
	group->default_domain = dm_domain;

-	pr_info("Using direct mapping for device %s\n", dev_name(dev));
+	dev_info(dev, "Using iommu direct mapping\n");

	ret = 0;
out:
......
@@ -15,6 +15,7 @@
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/io-pgtable.h>
#include <linux/iommu.h>
#include <linux/of.h>
#include <linux/of_device.h>

@@ -35,8 +36,6 @@
#define arm_iommu_detach_device(...)	do {} while (0)
#endif

-#include "io-pgtable.h"

#define IPMMU_CTX_MAX 8

struct ipmmu_features {
......
@@ -103,6 +103,9 @@ int __init irq_remapping_prepare(void)
	else if (IS_ENABLED(CONFIG_AMD_IOMMU) &&
		 amd_iommu_irq_ops.prepare() == 0)
		remap_ops = &amd_iommu_irq_ops;
	else if (IS_ENABLED(CONFIG_HYPERV_IOMMU) &&
		 hyperv_irq_remap_ops.prepare() == 0)
		remap_ops = &hyperv_irq_remap_ops;
	else
		return -ENOSYS;
......
@@ -64,6 +64,7 @@ struct irq_remap_ops {
extern struct irq_remap_ops intel_irq_remap_ops;
extern struct irq_remap_ops amd_iommu_irq_ops;
extern struct irq_remap_ops hyperv_irq_remap_ops;

#else  /* CONFIG_IRQ_REMAP */
......
@@ -23,6 +23,7 @@
#include <linux/platform_device.h>
#include <linux/errno.h>
#include <linux/io.h>
#include <linux/io-pgtable.h>
#include <linux/interrupt.h>
#include <linux/list.h>
#include <linux/spinlock.h>

@@ -37,7 +38,6 @@
#include "msm_iommu_hw-8xxx.h"
#include "msm_iommu.h"

-#include "io-pgtable.h"

#define MRC(reg, processor, op1, crn, crm, op2)			\
__asm__ __volatile__ (						\

@@ -461,10 +461,10 @@ static int msm_iommu_attach_dev(struct iommu_domain *domain, struct device *dev)
			master->num =
				msm_iommu_alloc_ctx(iommu->context_map,
						    0, iommu->ncb);
			if (IS_ERR_VALUE(master->num)) {
				ret = -ENODEV;
				goto fail;
			}
			config_mids(iommu, master);
			__program_context(iommu->base, master->num,
					  priv);
......
@@ -19,13 +19,12 @@
#include <linux/component.h>
#include <linux/device.h>
#include <linux/io.h>
#include <linux/io-pgtable.h>
#include <linux/iommu.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <soc/mediatek/smi.h>

-#include "io-pgtable.h"

struct mtk_iommu_suspend_reg {
	u32 standard_axi_mode;
	u32 dcm_dis;
......
@@ -474,7 +474,7 @@ static int mtk_iommu_add_device(struct device *dev)
		return err;
	}

-	return iommu_device_link(&data->iommu, dev);;
+	return iommu_device_link(&data->iommu, dev);
}

static void mtk_iommu_remove_device(struct device *dev)
......
@@ -26,6 +26,7 @@
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/io-64-nonatomic-hi-lo.h>
#include <linux/io-pgtable.h>
#include <linux/iommu.h>
#include <linux/iopoll.h>
#include <linux/kconfig.h>

@@ -42,7 +43,6 @@
#include <linux/slab.h>
#include <linux/spinlock.h>

-#include "io-pgtable.h"
#include "arm-smmu-regs.h"

#define SMMU_INTR_SEL_NS		0x2000
......
@@ -982,10 +982,6 @@ struct tegra_smmu *tegra_smmu_probe(struct device *dev,
	u32 value;
	int err;

-	/* This can happen on Tegra20 which doesn't have an SMMU */
-	if (!soc)
-		return NULL;

	smmu = devm_kzalloc(dev, sizeof(*smmu), GFP_KERNEL);
	if (!smmu)
		return ERR_PTR(-ENOMEM);
......
@@ -12,6 +12,7 @@
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/sort.h>

@@ -38,6 +39,7 @@
#define MC_ERR_ADR 0x0c

#define MC_GART_ERROR_REQ		0x30
#define MC_DECERR_EMEM_OTHERS_STATUS	0x58
#define MC_SECURITY_VIOLATION_STATUS	0x74

@@ -51,7 +53,7 @@
static const struct of_device_id tegra_mc_of_match[] = {
#ifdef CONFIG_ARCH_TEGRA_2x_SOC
-	{ .compatible = "nvidia,tegra20-mc", .data = &tegra20_mc_soc },
+	{ .compatible = "nvidia,tegra20-mc-gart", .data = &tegra20_mc_soc },
#endif
#ifdef CONFIG_ARCH_TEGRA_3x_SOC
	{ .compatible = "nvidia,tegra30-mc", .data = &tegra30_mc_soc },
@@ -161,7 +163,7 @@ static int tegra_mc_hotreset_assert(struct reset_controller_dev *rcdev,
	/* block clients DMA requests */
	err = rst_ops->block_dma(mc, rst);
	if (err) {
-		dev_err(mc->dev, "Failed to block %s DMA: %d\n",
+		dev_err(mc->dev, "failed to block %s DMA: %d\n",
			rst->name, err);
		return err;
	}

@@ -171,7 +173,7 @@ static int tegra_mc_hotreset_assert(struct reset_controller_dev *rcdev,
	/* wait for completion of the outstanding DMA requests */
	while (!rst_ops->dma_idling(mc, rst)) {
		if (!retries--) {
-			dev_err(mc->dev, "Failed to flush %s DMA\n",
+			dev_err(mc->dev, "failed to flush %s DMA\n",
				rst->name);
			return -EBUSY;
		}

@@ -184,7 +186,7 @@ static int tegra_mc_hotreset_assert(struct reset_controller_dev *rcdev,
	/* clear clients DMA requests sitting before arbitration */
	err = rst_ops->hotreset_assert(mc, rst);
	if (err) {
-		dev_err(mc->dev, "Failed to hot reset %s: %d\n",
+		dev_err(mc->dev, "failed to hot reset %s: %d\n",
			rst->name, err);
		return err;
	}

@@ -213,7 +215,7 @@ static int tegra_mc_hotreset_deassert(struct reset_controller_dev *rcdev,
	/* take out client from hot reset */
	err = rst_ops->hotreset_deassert(mc, rst);
	if (err) {
-		dev_err(mc->dev, "Failed to deassert hot reset %s: %d\n",
+		dev_err(mc->dev, "failed to deassert hot reset %s: %d\n",
			rst->name, err);
		return err;
	}

@@ -223,7 +225,7 @@ static int tegra_mc_hotreset_deassert(struct reset_controller_dev *rcdev,
	/* allow new DMA requests to proceed to arbitration */
	err = rst_ops->unblock_dma(mc, rst);
	if (err) {
-		dev_err(mc->dev, "Failed to unblock %s DMA : %d\n",
+		dev_err(mc->dev, "failed to unblock %s DMA : %d\n",
			rst->name, err);
		return err;
	}
@@ -575,8 +577,15 @@ static __maybe_unused irqreturn_t tegra20_mc_irq(int irq, void *data)
			break;

		case MC_INT_INVALID_GART_PAGE:
-			dev_err_ratelimited(mc->dev, "%s\n", error);
-			continue;
+			reg = MC_GART_ERROR_REQ;
+			value = mc_readl(mc, reg);
+
+			id = (value >> 1) & mc->soc->client_id_mask;
+			desc = error_names[2];
+
+			if (value & BIT(0))
+				direction = "write";
+			break;

		case MC_INT_SECURITY_VIOLATION:
			reg = MC_SECURITY_VIOLATION_STATUS;
@@ -611,23 +620,18 @@ static __maybe_unused irqreturn_t tegra20_mc_irq(int irq, void *data)
static int tegra_mc_probe(struct platform_device *pdev)
{
-	const struct of_device_id *match;
	struct resource *res;
	struct tegra_mc *mc;
	void *isr;
	int err;

-	match = of_match_node(tegra_mc_of_match, pdev->dev.of_node);
-	if (!match)
-		return -ENODEV;

	mc = devm_kzalloc(&pdev->dev, sizeof(*mc), GFP_KERNEL);
	if (!mc)
		return -ENOMEM;

	platform_set_drvdata(pdev, mc);
	spin_lock_init(&mc->lock);
-	mc->soc = match->data;
+	mc->soc = of_device_get_match_data(&pdev->dev);
	mc->dev = &pdev->dev;

	/* length of MC tick in nanoseconds */
@@ -638,38 +642,35 @@ static int tegra_mc_probe(struct platform_device *pdev)
	if (IS_ERR(mc->regs))
		return PTR_ERR(mc->regs);

-	mc->clk = devm_clk_get(&pdev->dev, "mc");
-	if (IS_ERR(mc->clk)) {
-		dev_err(&pdev->dev, "failed to get MC clock: %ld\n",
-			PTR_ERR(mc->clk));
-		return PTR_ERR(mc->clk);
-	}

#ifdef CONFIG_ARCH_TEGRA_2x_SOC
	if (mc->soc == &tegra20_mc_soc) {
-		res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
-		mc->regs2 = devm_ioremap_resource(&pdev->dev, res);
-		if (IS_ERR(mc->regs2))
-			return PTR_ERR(mc->regs2);
-
		isr = tegra20_mc_irq;
	} else
#endif
	{
+		mc->clk = devm_clk_get(&pdev->dev, "mc");
+		if (IS_ERR(mc->clk)) {
+			dev_err(&pdev->dev, "failed to get MC clock: %ld\n",
+				PTR_ERR(mc->clk));
+			return PTR_ERR(mc->clk);
+		}
+
		err = tegra_mc_setup_latency_allowance(mc);
		if (err < 0) {
-			dev_err(&pdev->dev, "failed to setup latency allowance: %d\n",
+			dev_err(&pdev->dev,
+				"failed to setup latency allowance: %d\n",
				err);
			return err;
		}

		isr = tegra_mc_irq;
-	}

		err = tegra_mc_setup_timings(mc);
		if (err < 0) {
-			dev_err(&pdev->dev, "failed to setup timings: %d\n", err);
-			return err;
+			dev_err(&pdev->dev, "failed to setup timings: %d\n",
+				err);
+			return err;
+		}
	}
	mc->irq = platform_get_irq(pdev, 0);

@@ -678,11 +679,11 @@ static int tegra_mc_probe(struct platform_device *pdev)
		return mc->irq;
	}

-	WARN(!mc->soc->client_id_mask, "Missing client ID mask for this SoC\n");
+	WARN(!mc->soc->client_id_mask, "missing client ID mask for this SoC\n");

	mc_writel(mc, mc->soc->intmask, MC_INTMASK);

-	err = devm_request_irq(&pdev->dev, mc->irq, isr, IRQF_SHARED,
+	err = devm_request_irq(&pdev->dev, mc->irq, isr, 0,
			       dev_name(&pdev->dev), mc);
	if (err < 0) {
		dev_err(&pdev->dev, "failed to request IRQ#%u: %d\n", mc->irq,
...@@ -695,20 +696,65 @@ static int tegra_mc_probe(struct platform_device *pdev) ...@@ -695,20 +696,65 @@ static int tegra_mc_probe(struct platform_device *pdev)
dev_err(&pdev->dev, "failed to register reset controller: %d\n", dev_err(&pdev->dev, "failed to register reset controller: %d\n",
err); err);
if (IS_ENABLED(CONFIG_TEGRA_IOMMU_SMMU)) { if (IS_ENABLED(CONFIG_TEGRA_IOMMU_SMMU) && mc->soc->smmu) {
mc->smmu = tegra_smmu_probe(&pdev->dev, mc->soc->smmu, mc); mc->smmu = tegra_smmu_probe(&pdev->dev, mc->soc->smmu, mc);
if (IS_ERR(mc->smmu)) if (IS_ERR(mc->smmu)) {
dev_err(&pdev->dev, "failed to probe SMMU: %ld\n", dev_err(&pdev->dev, "failed to probe SMMU: %ld\n",
PTR_ERR(mc->smmu)); PTR_ERR(mc->smmu));
mc->smmu = NULL;
}
}
if (IS_ENABLED(CONFIG_TEGRA_IOMMU_GART) && !mc->soc->smmu) {
mc->gart = tegra_gart_probe(&pdev->dev, mc);
if (IS_ERR(mc->gart)) {
dev_err(&pdev->dev, "failed to probe GART: %ld\n",
PTR_ERR(mc->gart));
mc->gart = NULL;
}
}
return 0;
}
static int tegra_mc_suspend(struct device *dev)
{
struct tegra_mc *mc = dev_get_drvdata(dev);
int err;
if (IS_ENABLED(CONFIG_TEGRA_IOMMU_GART) && mc->gart) {
err = tegra_gart_suspend(mc->gart);
if (err)
return err;
} }
return 0; return 0;
} }
static int tegra_mc_resume(struct device *dev)
{
struct tegra_mc *mc = dev_get_drvdata(dev);
int err;
if (IS_ENABLED(CONFIG_TEGRA_IOMMU_GART) && mc->gart) {
err = tegra_gart_resume(mc->gart);
if (err)
return err;
}
return 0;
}
static const struct dev_pm_ops tegra_mc_pm_ops = {
.suspend = tegra_mc_suspend,
.resume = tegra_mc_resume,
};
static struct platform_driver tegra_mc_driver = { static struct platform_driver tegra_mc_driver = {
.driver = { .driver = {
.name = "tegra-mc", .name = "tegra-mc",
.of_match_table = tegra_mc_of_match, .of_match_table = tegra_mc_of_match,
.pm = &tegra_mc_pm_ops,
.suppress_bind_attrs = true, .suppress_bind_attrs = true,
}, },
.prevent_deferred_probe = true, .prevent_deferred_probe = true,
......
@@ -26,19 +26,13 @@
static inline u32 mc_readl(struct tegra_mc *mc, unsigned long offset)
{
-	if (mc->regs2 && offset >= 0x24)
-		return readl(mc->regs2 + offset - 0x3c);
-
-	return readl(mc->regs + offset);
+	return readl_relaxed(mc->regs + offset);
}

static inline void mc_writel(struct tegra_mc *mc, u32 value,
			     unsigned long offset)
{
-	if (mc->regs2 && offset >= 0x24)
-		return writel(value, mc->regs2 + offset - 0x3c);
-
-	writel(value, mc->regs + offset);
+	writel_relaxed(value, mc->regs + offset);
}

extern const struct tegra_mc_reset_ops terga_mc_reset_ops_common;
......
@@ -142,6 +142,33 @@ int pci_ats_queue_depth(struct pci_dev *dev)
}
EXPORT_SYMBOL_GPL(pci_ats_queue_depth);
/**
* pci_ats_page_aligned - Return Page Aligned Request bit status.
* @pdev: the PCI device
*
* Returns 1, if the Untranslated Addresses generated by the device
* are always aligned or 0 otherwise.
*
* Per PCIe spec r4.0, sec 10.5.1.2, if the Page Aligned Request bit
* is set, it indicates the Untranslated Addresses generated by the
* device are always aligned to a 4096 byte boundary.
*/
int pci_ats_page_aligned(struct pci_dev *pdev)
{
u16 cap;
if (!pdev->ats_cap)
return 0;
pci_read_config_word(pdev, pdev->ats_cap + PCI_ATS_CAP, &cap);
if (cap & PCI_ATS_CAP_PAGE_ALIGNED)
return 1;
return 0;
}
EXPORT_SYMBOL_GPL(pci_ats_page_aligned);
#ifdef CONFIG_PCI_PRI
/**
 * pci_enable_pri - Enable PRI capability

@@ -368,6 +395,36 @@ int pci_pasid_features(struct pci_dev *pdev)
}
EXPORT_SYMBOL_GPL(pci_pasid_features);
/**
* pci_prg_resp_pasid_required - Return PRG Response PASID Required bit
* status.
* @pdev: PCI device structure
*
* Returns 1 if PASID is required in PRG Response Message, 0 otherwise.
*
* Even though the PRG response PASID status is read from PRI Status
* Register, since this API will mainly be used by PASID users, this
* function is defined within #ifdef CONFIG_PCI_PASID instead of
* CONFIG_PCI_PRI.
*/
int pci_prg_resp_pasid_required(struct pci_dev *pdev)
{
u16 status;
int pos;
pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PRI);
if (!pos)
return 0;
pci_read_config_word(pdev, pos + PCI_PRI_STATUS, &status);
if (status & PCI_PRI_STATUS_PASID)
return 1;
return 0;
}
EXPORT_SYMBOL_GPL(pci_prg_resp_pasid_required);
#define PASID_NUMBER_SHIFT	8
#define PASID_NUMBER_MASK	(0x1f << PASID_NUMBER_SHIFT)

/**
......
@@ -374,20 +374,17 @@ enum {
#define QI_DEV_EIOTLB_PFSID(pfsid)	(((u64)(pfsid & 0xf) << 12) | ((u64)(pfsid & 0xfff) << 52))
#define QI_DEV_EIOTLB_MAX_INVS	32

-#define QI_PGRP_IDX(idx)	(((u64)(idx)) << 55)
-#define QI_PGRP_PRIV(priv)	(((u64)(priv)) << 32)
-#define QI_PGRP_RESP_CODE(res)	((u64)(res))
-#define QI_PGRP_PASID(pasid)	(((u64)(pasid)) << 32)
-#define QI_PGRP_DID(did)	(((u64)(did)) << 16)
+/* Page group response descriptor QW0 */
#define QI_PGRP_PASID_P(p)	(((u64)(p)) << 4)
+#define QI_PGRP_PDP(p)		(((u64)(p)) << 5)
+#define QI_PGRP_RESP_CODE(res)	(((u64)(res)) << 12)
+#define QI_PGRP_DID(rid)	(((u64)(rid)) << 16)
+#define QI_PGRP_PASID(pasid)	(((u64)(pasid)) << 32)

+/* Page group response descriptor QW1 */
+#define QI_PGRP_LPIG(x)		(((u64)(x)) << 2)
+#define QI_PGRP_IDX(idx)	(((u64)(idx)) << 3)

-#define QI_PSTRM_ADDR(addr)	(((u64)(addr)) & VTD_PAGE_MASK)
-#define QI_PSTRM_DEVFN(devfn)	(((u64)(devfn)) << 4)
-#define QI_PSTRM_RESP_CODE(res)	((u64)(res))
-#define QI_PSTRM_IDX(idx)	(((u64)(idx)) << 55)
-#define QI_PSTRM_PRIV(priv)	(((u64)(priv)) << 32)
-#define QI_PSTRM_BUS(bus)	(((u64)(bus)) << 24)
-#define QI_PSTRM_PASID(pasid)	(((u64)(pasid)) << 4)

#define QI_RESP_SUCCESS		0x0
#define QI_RESP_INVALID		0x1
......
...@@ -20,7 +20,7 @@ struct device;

struct svm_dev_ops {
	void (*fault_cb)(struct device *dev, int pasid, u64 address,
-			 u32 private, int rwxp, int response);
+			 void *private, int rwxp, int response);
};

/* Values for rwxp in fault_cb callback */
......
...@@ -167,8 +167,9 @@ struct iommu_resv_region { ...@@ -167,8 +167,9 @@ struct iommu_resv_region {
* @detach_dev: detach device from an iommu domain * @detach_dev: detach device from an iommu domain
* @map: map a physically contiguous memory region to an iommu domain * @map: map a physically contiguous memory region to an iommu domain
* @unmap: unmap a physically contiguous memory region from an iommu domain * @unmap: unmap a physically contiguous memory region from an iommu domain
* @flush_tlb_all: Synchronously flush all hardware TLBs for this domain * @flush_iotlb_all: Synchronously flush all hardware TLBs for this domain
* @iotlb_range_add: Add a given iova range to the flush queue for this domain * @iotlb_range_add: Add a given iova range to the flush queue for this domain
* @iotlb_sync_map: Sync mappings created recently using @map to the hardware
* @iotlb_sync: Flush all queued ranges from the hardware TLBs and empty flush * @iotlb_sync: Flush all queued ranges from the hardware TLBs and empty flush
* queue * queue
* @iova_to_phys: translate iova to physical address * @iova_to_phys: translate iova to physical address
...@@ -183,6 +184,8 @@ struct iommu_resv_region { ...@@ -183,6 +184,8 @@ struct iommu_resv_region {
* @domain_window_enable: Configure and enable a particular window for a domain * @domain_window_enable: Configure and enable a particular window for a domain
* @domain_window_disable: Disable a particular window for a domain * @domain_window_disable: Disable a particular window for a domain
* @of_xlate: add OF master IDs to iommu grouping * @of_xlate: add OF master IDs to iommu grouping
* @is_attach_deferred: Check if domain attach should be deferred from iommu
* driver init to device driver init (default no)
* @pgsize_bitmap: bitmap of all possible supported page sizes * @pgsize_bitmap: bitmap of all possible supported page sizes
*/ */
struct iommu_ops { struct iommu_ops {
...@@ -201,6 +204,7 @@ struct iommu_ops { ...@@ -201,6 +204,7 @@ struct iommu_ops {
void (*flush_iotlb_all)(struct iommu_domain *domain); void (*flush_iotlb_all)(struct iommu_domain *domain);
void (*iotlb_range_add)(struct iommu_domain *domain, void (*iotlb_range_add)(struct iommu_domain *domain,
unsigned long iova, size_t size); unsigned long iova, size_t size);
void (*iotlb_sync_map)(struct iommu_domain *domain);
void (*iotlb_sync)(struct iommu_domain *domain); void (*iotlb_sync)(struct iommu_domain *domain);
phys_addr_t (*iova_to_phys)(struct iommu_domain *domain, dma_addr_t iova); phys_addr_t (*iova_to_phys)(struct iommu_domain *domain, dma_addr_t iova);
int (*add_device)(struct device *dev); int (*add_device)(struct device *dev);
......
...@@ -40,6 +40,7 @@ void pci_disable_pasid(struct pci_dev *pdev);
void pci_restore_pasid_state(struct pci_dev *pdev);
int pci_pasid_features(struct pci_dev *pdev);
int pci_max_pasids(struct pci_dev *pdev);
+int pci_prg_resp_pasid_required(struct pci_dev *pdev);

#else /* CONFIG_PCI_PASID */

...@@ -66,6 +67,10 @@ static inline int pci_max_pasids(struct pci_dev *pdev)
	return -EINVAL;
}

+static inline int pci_prg_resp_pasid_required(struct pci_dev *pdev)
+{
+	return 0;
+}
#endif /* CONFIG_PCI_PASID */
......
...@@ -1527,11 +1527,13 @@ void pci_ats_init(struct pci_dev *dev);
int pci_enable_ats(struct pci_dev *dev, int ps);
void pci_disable_ats(struct pci_dev *dev);
int pci_ats_queue_depth(struct pci_dev *dev);
+int pci_ats_page_aligned(struct pci_dev *dev);
#else
static inline void pci_ats_init(struct pci_dev *d) { }
static inline int pci_enable_ats(struct pci_dev *d, int ps) { return -ENODEV; }
static inline void pci_disable_ats(struct pci_dev *d) { }
static inline int pci_ats_queue_depth(struct pci_dev *d) { return -ENODEV; }
+static inline int pci_ats_page_aligned(struct pci_dev *dev) { return 0; }
#endif

#ifdef CONFIG_PCIE_PTM
......
...@@ -9,6 +9,7 @@
#ifndef __SOC_TEGRA_MC_H__
#define __SOC_TEGRA_MC_H__

+#include <linux/err.h>
#include <linux/reset-controller.h>
#include <linux/types.h>
...@@ -77,6 +78,7 @@ struct tegra_smmu_soc {
struct tegra_mc;
struct tegra_smmu;
+struct gart_device;

#ifdef CONFIG_TEGRA_IOMMU_SMMU
struct tegra_smmu *tegra_smmu_probe(struct device *dev,
...@@ -96,6 +98,28 @@ static inline void tegra_smmu_remove(struct tegra_smmu *smmu)
}
#endif
#ifdef CONFIG_TEGRA_IOMMU_GART
struct gart_device *tegra_gart_probe(struct device *dev, struct tegra_mc *mc);
int tegra_gart_suspend(struct gart_device *gart);
int tegra_gart_resume(struct gart_device *gart);
#else
static inline struct gart_device *
tegra_gart_probe(struct device *dev, struct tegra_mc *mc)
{
	return ERR_PTR(-ENODEV);
}

static inline int tegra_gart_suspend(struct gart_device *gart)
{
	return -ENODEV;
}

static inline int tegra_gart_resume(struct gart_device *gart)
{
	return -ENODEV;
}
#endif
struct tegra_mc_reset {
	const char *name;
	unsigned long id;
...@@ -144,7 +168,8 @@ struct tegra_mc_soc {
struct tegra_mc {
	struct device *dev;
	struct tegra_smmu *smmu;
-	void __iomem *regs, *regs2;
+	struct gart_device *gart;
+	void __iomem *regs;
	struct clk *clk;
	int irq;
...@@ -866,6 +866,7 @@
#define PCI_ATS_CAP		0x04	/* ATS Capability Register */
#define PCI_ATS_CAP_QDEP(x)	((x) & 0x1f)	/* Invalidate Queue Depth */
#define PCI_ATS_MAX_QDEP	32	/* Max Invalidate Queue Depth */
+#define PCI_ATS_CAP_PAGE_ALIGNED	0x0020	/* Page Aligned Request */
#define PCI_ATS_CTRL		0x06	/* ATS Control Register */
#define PCI_ATS_CTRL_ENABLE	0x8000	/* ATS Enable */
#define PCI_ATS_CTRL_STU(x)	((x) & 0x1f)	/* Smallest Translation Unit */
...@@ -880,6 +881,7 @@
#define PCI_PRI_STATUS_RF	0x001	/* Response Failure */
#define PCI_PRI_STATUS_UPRGI	0x002	/* Unexpected PRG index */
#define PCI_PRI_STATUS_STOPPED	0x100	/* PRI Stopped */
+#define PCI_PRI_STATUS_PASID	0x8000	/* PRG Response PASID Required */
#define PCI_PRI_MAX_REQ		0x08	/* PRI max reqs supported */
#define PCI_PRI_ALLOC_REQ	0x0c	/* PRI max reqs allowed */
#define PCI_EXT_CAP_PRI_SIZEOF	16
......