Commit e5a59464 authored by Linus Torvalds

Merge tag 'dma-mapping-4.18' of git://git.infradead.org/users/hch/dma-mapping

Pull dma-mapping updates from Christoph Hellwig:

 - replace the force_dma flag with a dma_configure bus method. (Nipun
   Gupta, although one patch is incorrectly attributed to me due to a
   git rebase bug)

 - use GFP_DMA32 more aggressively in dma-direct. (Takashi Iwai)

 - remove PCI_DMA_BUS_IS_PHYS and rely on the dma-mapping API to do the
   right thing for bounce buffering.

 - move dma-debug initialization to common code, and apply a few
   cleanups to the dma-debug code.

 - cleanup the Kconfig mess around swiotlb selection

 - swiotlb comment fixup (Yisheng Xie)

 - a trivial swiotlb fix. (Dan Carpenter)

 - support swiotlb on RISC-V. (based on a patch from Palmer Dabbelt)

 - add a new generic dma-noncoherent dma_map_ops implementation and use
   it for arc, c6x and nds32 (a sketch of the arch-side hooks follows
   this list).

 - improve scatterlist validity checking in dma-debug. (Robin Murphy)

 - add a struct device quirk to limit the dma-mask to 32-bit due to
   bridge/system issues, and switch x86 to use it instead of a local
   hack for VIA bridges.

 - handle devices without a dma_mask more gracefully in the dma-direct
   code.
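
As a rough sketch of the new interface (assembled for this summary, not
taken verbatim from any single patch; my_cache_wback()/my_cache_inv() are
hypothetical stand-ins for an architecture's real cache primitives), an
arch converted to the generic code selects DMA_NONCOHERENT_OPS together
with ARCH_HAS_SYNC_DMA_FOR_DEVICE/ARCH_HAS_SYNC_DMA_FOR_CPU, and then only
has to provide:

	#include <linux/dma-noncoherent.h>

	/* device is about to read: write dirty CPU cache lines back */
	void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
			size_t size, enum dma_data_direction dir)
	{
		my_cache_wback(paddr, size);	/* hypothetical primitive */
	}

	/* CPU is about to read: invalidate so it sees the device's data */
	void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr,
			size_t size, enum dma_data_direction dir)
	{
		my_cache_inv(paddr, size);	/* hypothetical primitive */
	}

The common dma-noncoherent code builds map_page/map_sg and the various
sync_* dma_map_ops entries on top of these two hooks, as the arc, c6x and
nds32 conversions below illustrate.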

* tag 'dma-mapping-4.18' of git://git.infradead.org/users/hch/dma-mapping: (48 commits)
  dma-direct: don't crash on device without dma_mask
  nds32: use generic dma_noncoherent_ops
  nds32: implement the unmap_sg DMA operation
  nds32: consolidate DMA cache maintainance routines
  x86/pci-dma: switch the VIA 32-bit DMA quirk to use the struct device flag
  x86/pci-dma: remove the explicit nodac and allowdac option
  x86/pci-dma: remove the experimental forcesac boot option
  Documentation/x86: remove a stray reference to pci-nommu.c
  core, dma-direct: add a flag 32-bit dma limits
  dma-mapping: remove unused gfp_t parameter to arch_dma_alloc_attrs
  dma-debug: check scatterlist segments
  c6x: use generic dma_noncoherent_ops
  arc: use generic dma_noncoherent_ops
  arc: fix arc_dma_{map,unmap}_page
  arc: fix arc_dma_sync_sg_for_{cpu,device}
  arc: simplify arc_dma_sync_single_for_{cpu,device}
  dma-mapping: provide a generic dma-noncoherent implementation
  dma-mapping: simplify Kconfig dependencies
  riscv: add swiotlb support
  riscv: only enable ZONE_DMA32 for 64-bit
  ...
parents f956d08a 2550bbfd
@@ -1705,7 +1705,6 @@
 		nopanic
 		merge
 		nomerge
-		forcesac
 		soft
 		pt		[x86, IA-64]
 		nobypass	[PPC/POWERNV]
......
#
# Feature name: dma-api-debug
# Kconfig: HAVE_DMA_API_DEBUG
# description: arch supports DMA debug facilities
#
-----------------------
| arch |status|
-----------------------
| alpha: | TODO |
| arc: | TODO |
| arm: | ok |
| arm64: | ok |
| c6x: | ok |
| h8300: | TODO |
| hexagon: | TODO |
| ia64: | ok |
| m68k: | TODO |
| microblaze: | ok |
| mips: | ok |
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | TODO |
| powerpc: | ok |
| s390: | ok |
| sh: | ok |
| sparc: | ok |
| um: | TODO |
| unicore32: | TODO |
| x86: | ok |
| xtensa: | ok |
-----------------------
@@ -187,9 +187,9 @@ PCI
 IOMMU (input/output memory management unit)

-Currently four x86-64 PCI-DMA mapping implementations exist:
+Multiple x86-64 PCI-DMA mapping implementations exist, for example:

-1. <arch/x86_64/kernel/pci-nommu.c>: use no hardware/software IOMMU at all
+1. <lib/dma-direct.c>: use no hardware/software IOMMU at all
    (e.g. because you have < 3 GB memory).
    Kernel boot message: "PCI-DMA: Disabling IOMMU"

@@ -208,7 +208,7 @@ IOMMU (input/output memory management unit)
    Kernel boot message: "PCI-DMA: Using Calgary IOMMU"

 iommu=[<size>][,noagp][,off][,force][,noforce][,leak[=<nr_of_leak_pages>]
-	[,memaper[=<order>]][,merge][,forcesac][,fullflush][,nomerge]
+	[,memaper[=<order>]][,merge][,fullflush][,nomerge]
 	[,noaperture][,calgary]

 General iommu options:

@@ -235,14 +235,7 @@ IOMMU (input/output memory management unit)
 		(experimental).
 nomerge		Don't do scatter-gather (SG) merging.
 noaperture	Ask the IOMMU not to touch the aperture for AGP.
-forcesac	Force single-address cycle (SAC) mode for masks <40bits
-		(experimental).
 noagp		Don't initialize the AGP driver and use full aperture.
-allowdac	Allow double-address cycle (DAC) mode, i.e. DMA >4GB.
-		DAC is used with 32-bit PCI to push a 64-bit address in
-		two cycles. When off all DMA over >4GB is forced through
-		an IOMMU or software bounce buffering.
-nodac		Forbid DAC mode, i.e. DMA >4GB.
 panic		Always panic when IOMMU overflows.
 calgary		Use the Calgary IOMMU if it is available
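
For illustration (an example, not part of the patch): with forcesac,
allowdac and nodac removed, software bounce buffering can still be forced
with the surviving option, e.g. booting with:

	iommu=soft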
......
@@ -4330,12 +4330,14 @@ W:	http://git.infradead.org/users/hch/dma-mapping.git
 S:	Supported
 F:	lib/dma-debug.c
 F:	lib/dma-direct.c
+F:	lib/dma-noncoherent.c
 F:	lib/dma-virt.c
 F:	drivers/base/dma-mapping.c
 F:	drivers/base/dma-coherent.c
 F:	include/asm-generic/dma-mapping.h
 F:	include/linux/dma-direct.h
 F:	include/linux/dma-mapping.h
+F:	include/linux/dma-noncoherent.h

 DME1737 HARDWARE MONITOR DRIVER
 M:	Juerg Haefliger <juergh@gmail.com>
......
@@ -278,9 +278,6 @@ config HAVE_CLK
 	  The <linux/clk.h> calls support software clock gating and
 	  thus are a key power management tool on many systems.

-config HAVE_DMA_API_DEBUG
-	bool
-
 config HAVE_HW_BREAKPOINT
 	bool
 	depends on PERF_EVENTS
......
@@ -10,6 +10,8 @@ config ALPHA
 	select HAVE_OPROFILE
 	select HAVE_PCSPKR_PLATFORM
 	select HAVE_PERF_EVENTS
+	select NEED_DMA_MAP_STATE
+	select NEED_SG_DMA_LENGTH
 	select VIRT_TO_BUS
 	select GENERIC_IRQ_PROBE
 	select AUTO_IRQ_AFFINITY if SMP
@@ -64,15 +66,6 @@ config ZONE_DMA
 	bool
 	default y

-config ARCH_DMA_ADDR_T_64BIT
-	def_bool y
-
-config NEED_DMA_MAP_STATE
-	def_bool y
-
-config NEED_SG_DMA_LENGTH
-	def_bool y
-
 config GENERIC_ISA_DMA
 	bool
 	default y
@@ -346,9 +339,6 @@ config PCI_DOMAINS
 config PCI_SYSCALL
 	def_bool PCI

-config IOMMU_HELPER
-	def_bool PCI
-
 config ALPHA_NONAME
 	bool
 	depends on ALPHA_BOOK1 || ALPHA_NONAME_CH
......
@@ -56,11 +56,6 @@ struct pci_controller {
 /* IOMMU controls. */

-/* The PCI address space does not equal the physical memory address space.
-   The networking and block device layers use this boolean for bounce buffer
-   decisions. */
-#define PCI_DMA_BUS_IS_PHYS  0
-
 /* TODO: integrate with include/asm-generic/pci.h ? */
 static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
 {
......
@@ -9,11 +9,15 @@
 config ARC
 	def_bool y
 	select ARC_TIMERS
+	select ARCH_HAS_SYNC_DMA_FOR_CPU
+	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
 	select ARCH_HAS_SG_CHAIN
 	select ARCH_SUPPORTS_ATOMIC_RMW if ARC_HAS_LLSC
 	select BUILDTIME_EXTABLE_SORT
 	select CLONE_BACKWARDS
 	select COMMON_CLK
+	select DMA_NONCOHERENT_OPS
+	select DMA_NONCOHERENT_MMAP
 	select GENERIC_ATOMIC64 if !ISA_ARCV2 || !(ARC_HAS_LL64 && ARC_HAS_LLSC)
 	select GENERIC_CLOCKEVENTS
 	select GENERIC_FIND_FIRST_BIT
@@ -453,16 +457,11 @@ config ARC_HAS_PAE40
 	default n
 	depends on ISA_ARCV2
 	select HIGHMEM
+	select PHYS_ADDR_T_64BIT
 	help
 	  Enable access to physical memory beyond 4G, only supported on
 	  ARC cores with 40 bit Physical Addressing support

-config ARCH_PHYS_ADDR_T_64BIT
-	def_bool ARC_HAS_PAE40
-
-config ARCH_DMA_ADDR_T_64BIT
-	bool
-
 config ARC_KVADDR_SIZE
 	int "Kernel Virtual Address Space size (MB)"
 	range 0 512
......
@@ -2,6 +2,7 @@
 generic-y += bugs.h
 generic-y += device.h
 generic-y += div64.h
+generic-y += dma-mapping.h
 generic-y += emergency-restart.h
 generic-y += extable.h
 generic-y += fb.h
......
/*
* DMA Mapping glue for ARC
*
* Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef ASM_ARC_DMA_MAPPING_H
#define ASM_ARC_DMA_MAPPING_H
extern const struct dma_map_ops arc_dma_ops;
static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
{
return &arc_dma_ops;
}
#endif
@@ -16,12 +16,6 @@
 #define PCIBIOS_MIN_MEM 0x100000
 #define pcibios_assign_all_busses()	1

-/*
- * The PCI address space does equal the physical memory address space.
- * The networking and block device layers use this boolean for bounce
- * buffer decisions.
- */
-#define PCI_DMA_BUS_IS_PHYS	1
-
 #endif /* __KERNEL__ */
......
@@ -16,13 +16,12 @@
  * The default DMA address == Phy address which is 0x8000_0000 based.
  */

-#include <linux/dma-mapping.h>
+#include <linux/dma-noncoherent.h>
 #include <asm/cache.h>
 #include <asm/cacheflush.h>

-static void *arc_dma_alloc(struct device *dev, size_t size,
-		dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
+void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
+		gfp_t gfp, unsigned long attrs)
 {
 	unsigned long order = get_order(size);
 	struct page *page;
@@ -89,7 +88,7 @@
 	return kvaddr;
 }

-static void arc_dma_free(struct device *dev, size_t size, void *vaddr,
+void arch_dma_free(struct device *dev, size_t size, void *vaddr,
 		dma_addr_t dma_handle, unsigned long attrs)
 {
 	phys_addr_t paddr = dma_handle;
@@ -105,9 +104,9 @@
 	__free_pages(page, get_order(size));
 }

-static int arc_dma_mmap(struct device *dev, struct vm_area_struct *vma,
+int arch_dma_mmap(struct device *dev, struct vm_area_struct *vma,
 		void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		unsigned long attrs)
 {
 	unsigned long user_count = vma_pages(vma);
 	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
@@ -130,149 +129,14 @@
 	return ret;
 }

-/*
- * streaming DMA Mapping API...
- * CPU accesses page via normal paddr, thus needs to explicitly made
- * consistent before each use
- */
-static void _dma_cache_sync(phys_addr_t paddr, size_t size,
-		enum dma_data_direction dir)
+void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
+		size_t size, enum dma_data_direction dir)
 {
-	switch (dir) {
-	case DMA_FROM_DEVICE:
-		dma_cache_inv(paddr, size);
-		break;
-	case DMA_TO_DEVICE:
-		dma_cache_wback(paddr, size);
-		break;
-	case DMA_BIDIRECTIONAL:
-		dma_cache_wback_inv(paddr, size);
-		break;
-	default:
-		pr_err("Invalid DMA dir [%d] for OP @ %pa[p]\n", dir, &paddr);
-	}
+	dma_cache_wback(paddr, size);
 }

-/*
- * arc_dma_map_page - map a portion of a page for streaming DMA
- *
- * Ensure that any data held in the cache is appropriately discarded
- * or written back.
- *
- * The device owns this memory once this call has completed.  The CPU
- * can regain ownership by calling dma_unmap_page().
- *
- * Note: while it takes struct page as arg, caller can "abuse" it to pass
- * a region larger than PAGE_SIZE, provided it is physically contiguous
- * and this still works correctly
- */
-static dma_addr_t arc_dma_map_page(struct device *dev, struct page *page,
-		unsigned long offset, size_t size, enum dma_data_direction dir,
-		unsigned long attrs)
+void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr,
+		size_t size, enum dma_data_direction dir)
 {
-	phys_addr_t paddr = page_to_phys(page) + offset;
-
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-		_dma_cache_sync(paddr, size, dir);
-
-	return paddr;
-}
-
-/*
- * arc_dma_unmap_page - unmap a buffer previously mapped through dma_map_page()
- *
- * After this call, reads by the CPU to the buffer are guaranteed to see
- * whatever the device wrote there.
- *
- * Note: historically this routine was not implemented for ARC
- */
-static void arc_dma_unmap_page(struct device *dev, dma_addr_t handle,
-		size_t size, enum dma_data_direction dir,
-		unsigned long attrs)
-{
-	phys_addr_t paddr = handle;
-
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-		_dma_cache_sync(paddr, size, dir);
+	dma_cache_inv(paddr, size);
 }
-
-static int arc_dma_map_sg(struct device *dev, struct scatterlist *sg,
-		int nents, enum dma_data_direction dir, unsigned long attrs)
-{
-	struct scatterlist *s;
-	int i;
-
-	for_each_sg(sg, s, nents, i)
-		s->dma_address = dma_map_page(dev, sg_page(s), s->offset,
-					      s->length, dir);
-
-	return nents;
-}
-
-static void arc_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
-		int nents, enum dma_data_direction dir,
-		unsigned long attrs)
-{
-	struct scatterlist *s;
-	int i;
-
-	for_each_sg(sg, s, nents, i)
-		arc_dma_unmap_page(dev, sg_dma_address(s), sg_dma_len(s), dir,
-				   attrs);
-}
-
-static void arc_dma_sync_single_for_cpu(struct device *dev,
-		dma_addr_t dma_handle, size_t size, enum dma_data_direction dir)
-{
-	_dma_cache_sync(dma_handle, size, DMA_FROM_DEVICE);
-}
-
-static void arc_dma_sync_single_for_device(struct device *dev,
-		dma_addr_t dma_handle, size_t size, enum dma_data_direction dir)
-{
-	_dma_cache_sync(dma_handle, size, DMA_TO_DEVICE);
-}
-
-static void arc_dma_sync_sg_for_cpu(struct device *dev,
-		struct scatterlist *sglist, int nelems,
-		enum dma_data_direction dir)
-{
-	int i;
-	struct scatterlist *sg;
-
-	for_each_sg(sglist, sg, nelems, i)
-		_dma_cache_sync(sg_phys(sg), sg->length, dir);
-}
-
-static void arc_dma_sync_sg_for_device(struct device *dev,
-		struct scatterlist *sglist, int nelems,
-		enum dma_data_direction dir)
-{
-	int i;
-	struct scatterlist *sg;
-
-	for_each_sg(sglist, sg, nelems, i)
-		_dma_cache_sync(sg_phys(sg), sg->length, dir);
-}
-
-static int arc_dma_supported(struct device *dev, u64 dma_mask)
-{
-	/* Support 32 bit DMA mask exclusively */
-	return dma_mask == DMA_BIT_MASK(32);
-}
-
-const struct dma_map_ops arc_dma_ops = {
-	.alloc			= arc_dma_alloc,
-	.free			= arc_dma_free,
-	.mmap			= arc_dma_mmap,
-	.map_page		= arc_dma_map_page,
-	.unmap_page		= arc_dma_unmap_page,
-	.map_sg			= arc_dma_map_sg,
-	.unmap_sg		= arc_dma_unmap_sg,
-	.sync_single_for_device	= arc_dma_sync_single_for_device,
-	.sync_single_for_cpu	= arc_dma_sync_single_for_cpu,
-	.sync_sg_for_cpu	= arc_dma_sync_sg_for_cpu,
-	.sync_sg_for_device	= arc_dma_sync_sg_for_device,
-	.dma_supported		= arc_dma_supported,
-};
-EXPORT_SYMBOL(arc_dma_ops);
@@ -60,7 +60,6 @@ config ARM
 	select HAVE_CONTEXT_TRACKING
 	select HAVE_C_RECORDMCOUNT
 	select HAVE_DEBUG_KMEMLEAK
-	select HAVE_DMA_API_DEBUG
 	select HAVE_DMA_CONTIGUOUS if MMU
 	select HAVE_DYNAMIC_FTRACE if (!XIP_KERNEL) && !CPU_ENDIAN_BE32 && MMU
 	select HAVE_DYNAMIC_FTRACE_WITH_REGS if HAVE_DYNAMIC_FTRACE
@@ -96,6 +95,7 @@ config ARM
 	select HAVE_VIRT_CPU_ACCOUNTING_GEN
 	select IRQ_FORCED_THREADING
 	select MODULES_USE_ELF_REL
+	select NEED_DMA_MAP_STATE
 	select NO_BOOTMEM
 	select OF_EARLY_FLATTREE if OF
 	select OF_RESERVED_MEM if OF
@@ -119,9 +119,6 @@ config ARM_HAS_SG_CHAIN
 	select ARCH_HAS_SG_CHAIN
 	bool

-config NEED_SG_DMA_LENGTH
-	bool
-
 config ARM_DMA_USE_IOMMU
 	bool
 	select ARM_HAS_SG_CHAIN
@@ -224,9 +221,6 @@ config ARCH_MAY_HAVE_PC_FDC
 config ZONE_DMA
 	bool

-config NEED_DMA_MAP_STATE
-	def_bool y
-
 config ARCH_SUPPORTS_UPROBES
 	def_bool y
@@ -1778,12 +1772,6 @@ config SECCOMP
 	  and the task is only allowed to execute a few safe syscalls
 	  defined by each seccomp mode.

-config SWIOTLB
-	def_bool y
-
-config IOMMU_HELPER
-	def_bool SWIOTLB
-
 config PARAVIRT
 	bool "Enable paravirtualization code"
 	help
@@ -1815,6 +1803,7 @@ config XEN
 	depends on MMU
 	select ARCH_DMA_ADDR_T_64BIT
 	select ARM_PSCI
+	select SWIOTLB
 	select SWIOTLB_XEN
 	select PARAVIRT
 	help
......
@@ -19,13 +19,6 @@ static inline int pci_proc_domain(struct pci_bus *bus)
 }
 #endif /* CONFIG_PCI_DOMAINS */

-/*
- * The PCI address space does equal the physical memory address space.
- * The networking and block device layers use this boolean for bounce
- * buffer decisions.
- */
-#define PCI_DMA_BUS_IS_PHYS     (1)
-
 #define HAVE_PCI_MMAP
 #define ARCH_GENERIC_PCI_MMAP_RESOURCE
......
@@ -754,7 +754,7 @@ int __init arm_add_memory(u64 start, u64 size)
 	else
 		size -= aligned_start - start;

-#ifndef CONFIG_ARCH_PHYS_ADDR_T_64BIT
+#ifndef CONFIG_PHYS_ADDR_T_64BIT
 	if (aligned_start > ULONG_MAX) {
 		pr_crit("Ignoring memory at 0x%08llx outside 32-bit physical address space\n",
 			(long long)start);
......
@@ -2,7 +2,6 @@
 config ARCH_AXXIA
 	bool "LSI Axxia platforms"
 	depends on ARCH_MULTI_V7 && ARM_LPAE
-	select ARCH_DMA_ADDR_T_64BIT
 	select ARM_AMBA
 	select ARM_GIC
 	select ARM_TIMER_SP804
......
@@ -211,7 +211,6 @@ config ARCH_BRCMSTB
 	select BRCMSTB_L2_IRQ
 	select BCM7120_L2_IRQ
 	select ARCH_HAS_HOLES_MEMORYMODEL
-	select ARCH_DMA_ADDR_T_64BIT if ARM_LPAE
 	select ZONE_DMA if ARM_LPAE
 	select SOC_BRCMSTB
 	select SOC_BUS
......
@@ -112,7 +112,6 @@ config SOC_EXYNOS5440
 	bool "SAMSUNG EXYNOS5440"
 	default y
 	depends on ARCH_EXYNOS5
-	select ARCH_DMA_ADDR_T_64BIT if ARM_LPAE
 	select HAVE_ARM_ARCH_TIMER
 	select AUTO_ZRELADDR
 	select PINCTRL_EXYNOS5440
......
 config ARCH_HIGHBANK
 	bool "Calxeda ECX-1000/2000 (Highbank/Midway)"
 	depends on ARCH_MULTI_V7
-	select ARCH_DMA_ADDR_T_64BIT if ARM_LPAE
 	select ARCH_HAS_HOLES_MEMORYMODEL
 	select ARCH_SUPPORTS_BIG_ENDIAN
 	select ARM_AMBA
......
@@ -3,7 +3,6 @@ config ARCH_ROCKCHIP
 	depends on ARCH_MULTI_V7
 	select PINCTRL
 	select PINCTRL_ROCKCHIP
-	select ARCH_DMA_ADDR_T_64BIT if ARM_LPAE
 	select ARCH_HAS_RESET_CONTROLLER
 	select ARM_AMBA
 	select ARM_GIC
......
@@ -29,7 +29,6 @@ config ARCH_RMOBILE
 menuconfig ARCH_RENESAS
 	bool "Renesas ARM SoCs"
 	depends on ARCH_MULTI_V7 && MMU
-	select ARCH_DMA_ADDR_T_64BIT if ARM_LPAE
 	select ARCH_SHMOBILE
 	select ARM_GIC
 	select GPIOLIB
......
@@ -15,6 +15,5 @@ menuconfig ARCH_TEGRA
 	select RESET_CONTROLLER
 	select SOC_BUS
 	select ZONE_DMA if ARM_LPAE
-	select ARCH_DMA_ADDR_T_64BIT if ARM_LPAE
 	help
 	  This enables support for NVIDIA Tegra based systems.
@@ -661,6 +661,7 @@ config ARM_LPAE
 	bool "Support for the Large Physical Address Extension"
 	depends on MMU && CPU_32v7 && !CPU_32v6 && !CPU_32v5 && \
 		!CPU_32v4 && !CPU_32v3
+	select PHYS_ADDR_T_64BIT
 	help
 	  Say Y if you have an ARMv7 processor supporting the LPAE page
 	  table format and you would like to access memory beyond the
@@ -673,12 +674,6 @@ config ARM_PV_FIXUP
 	def_bool y
 	depends on ARM_LPAE && ARM_PATCH_PHYS_VIRT && ARCH_KEYSTONE

-config ARCH_PHYS_ADDR_T_64BIT
-	def_bool ARM_LPAE
-
-config ARCH_DMA_ADDR_T_64BIT
-	bool
-
 config ARM_THUMB
 	bool "Support Thumb user binaries" if !CPU_THUMBONLY && EXPERT
 	depends on CPU_THUMB_CAPABLE
......
@@ -241,12 +241,3 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
 void arch_teardown_dma_ops(struct device *dev)
 {
 }
-
-#define PREALLOC_DMA_DEBUG_ENTRIES	4096
-
-static int __init dma_debug_do_init(void)
-{
-	dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
-	return 0;
-}
-core_initcall(dma_debug_do_init);
@@ -1151,15 +1151,6 @@ int arm_dma_supported(struct device *dev, u64 mask)
 	return __dma_supported(dev, mask, false);
 }

-#define PREALLOC_DMA_DEBUG_ENTRIES	4096
-
-static int __init dma_debug_do_init(void)
-{
-	dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
-	return 0;
-}
-core_initcall(dma_debug_do_init);
-
 #ifdef CONFIG_ARM_DMA_USE_IOMMU

 static int __dma_info_to_prot(enum dma_data_direction dir, unsigned long attrs)
......
@@ -105,7 +105,6 @@ config ARM64
 	select HAVE_CONTEXT_TRACKING
 	select HAVE_DEBUG_BUGVERBOSE
 	select HAVE_DEBUG_KMEMLEAK
-	select HAVE_DMA_API_DEBUG
 	select HAVE_DMA_CONTIGUOUS
 	select HAVE_DYNAMIC_FTRACE
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS
@@ -133,6 +132,8 @@ config ARM64
 	select IRQ_FORCED_THREADING
 	select MODULES_USE_ELF_RELA
 	select MULTI_IRQ_HANDLER
+	select NEED_DMA_MAP_STATE
+	select NEED_SG_DMA_LENGTH
 	select NO_BOOTMEM
 	select OF
 	select OF_EARLY_FLATTREE
@@ -142,6 +143,7 @@ config ARM64
 	select POWER_SUPPLY
 	select REFCOUNT_FULL
 	select SPARSE_IRQ
+	select SWIOTLB
 	select SYSCTL_EXCEPTION_TRACE
 	select THREAD_INFO_IN_TASK
 	help
@@ -150,9 +152,6 @@ config ARM64
 config 64BIT
 	def_bool y

-config ARCH_PHYS_ADDR_T_64BIT
-	def_bool y
-
 config MMU
 	def_bool y
@@ -237,24 +236,9 @@ config ZONE_DMA32
 config HAVE_GENERIC_GUP
 	def_bool y

-config ARCH_DMA_ADDR_T_64BIT
-	def_bool y
-
-config NEED_DMA_MAP_STATE
-	def_bool y
-
-config NEED_SG_DMA_LENGTH
-	def_bool y
-
 config SMP
 	def_bool y

-config SWIOTLB
-	def_bool y
-
-config IOMMU_HELPER
-	def_bool SWIOTLB
-
 config KERNEL_MODE_NEON
 	def_bool y
......
@@ -18,11 +18,6 @@
 #define pcibios_assign_all_busses() \
 	(pci_has_flag(PCI_REASSIGN_ALL_BUS))

-/*
- * PCI address space differs from physical memory address space
- */
-#define PCI_DMA_BUS_IS_PHYS	(0)
-
 #define ARCH_GENERIC_PCI_MMAP_RESOURCE	1

 extern int isa_dma_bridge_buggy;
......
@@ -508,16 +508,6 @@ static int __init arm64_dma_init(void)
 }
 arch_initcall(arm64_dma_init);

-#define PREALLOC_DMA_DEBUG_ENTRIES	4096
-
-static int __init dma_debug_do_init(void)
-{
-	dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
-	return 0;
-}
-fs_initcall(dma_debug_do_init);
-
 #ifdef CONFIG_IOMMU_DMA
 #include <linux/dma-iommu.h>
 #include <linux/platform_device.h>
......
@@ -6,11 +6,13 @@
 config C6X
 	def_bool y
+	select ARCH_HAS_SYNC_DMA_FOR_CPU
+	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
 	select CLKDEV_LOOKUP
+	select DMA_NONCOHERENT_OPS
 	select GENERIC_ATOMIC64
 	select GENERIC_IRQ_SHOW
 	select HAVE_ARCH_TRACEHOOK
-	select HAVE_DMA_API_DEBUG
 	select HAVE_MEMBLOCK
 	select SPARSE_IRQ
 	select IRQ_DOMAIN
......
@@ -5,6 +5,7 @@ generic-y += current.h
 generic-y += device.h
 generic-y += div64.h
 generic-y += dma.h
+generic-y += dma-mapping.h
 generic-y += emergency-restart.h
 generic-y += exec.h
 generic-y += extable.h
......
/*
* Port on Texas Instruments TMS320C6x architecture
*
* Copyright (C) 2004, 2009, 2010, 2011 Texas Instruments Incorporated
* Author: Aurelien Jacquiot <aurelien.jacquiot@ti.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
*/
#ifndef _ASM_C6X_DMA_MAPPING_H
#define _ASM_C6X_DMA_MAPPING_H
extern const struct dma_map_ops c6x_dma_ops;
static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
{
return &c6x_dma_ops;
}
extern void coherent_mem_init(u32 start, u32 size);
void *c6x_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
gfp_t gfp, unsigned long attrs);
void c6x_dma_free(struct device *dev, size_t size, void *vaddr,
dma_addr_t dma_handle, unsigned long attrs);
#endif /* _ASM_C6X_DMA_MAPPING_H */
@@ -28,5 +28,7 @@ extern unsigned char c6x_fuse_mac[6];
 extern void machine_init(unsigned long dt_ptr);
 extern void time_init(void);

+extern void coherent_mem_init(u32 start, u32 size);
+
 #endif /* !__ASSEMBLY__ */
 #endif /* _ASM_C6X_SETUP_H */
@@ -8,6 +8,6 @@ extra-y := head.o vmlinux.lds
 obj-y := process.o traps.o irq.o signal.o ptrace.o
 obj-y += setup.o sys_c6x.o time.o devicetree.o
 obj-y += switch_to.o entry.o vectors.o c6x_ksyms.o
-obj-y += soc.o dma.o
+obj-y += soc.o

 obj-$(CONFIG_MODULES) += module.o
/*
* Copyright (C) 2011 Texas Instruments Incorporated
* Author: Mark Salter <msalter@redhat.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/module.h>
#include <linux/dma-mapping.h>
#include <linux/mm.h>
#include <linux/mm_types.h>
#include <linux/scatterlist.h>
#include <asm/cacheflush.h>
static void c6x_dma_sync(dma_addr_t handle, size_t size,
enum dma_data_direction dir)
{
unsigned long paddr = handle;
BUG_ON(!valid_dma_direction(dir));
switch (dir) {
case DMA_FROM_DEVICE:
L2_cache_block_invalidate(paddr, paddr + size);
break;
case DMA_TO_DEVICE:
L2_cache_block_writeback(paddr, paddr + size);
break;
case DMA_BIDIRECTIONAL:
L2_cache_block_writeback_invalidate(paddr, paddr + size);
break;
default:
break;
}
}
static dma_addr_t c6x_dma_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size, enum dma_data_direction dir,
unsigned long attrs)
{
dma_addr_t handle = virt_to_phys(page_address(page) + offset);
if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
c6x_dma_sync(handle, size, dir);
return handle;
}
static void c6x_dma_unmap_page(struct device *dev, dma_addr_t handle,
size_t size, enum dma_data_direction dir, unsigned long attrs)
{
if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
c6x_dma_sync(handle, size, dir);
}
static int c6x_dma_map_sg(struct device *dev, struct scatterlist *sglist,
int nents, enum dma_data_direction dir, unsigned long attrs)
{
struct scatterlist *sg;
int i;
for_each_sg(sglist, sg, nents, i) {
sg->dma_address = sg_phys(sg);
if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
c6x_dma_sync(sg->dma_address, sg->length, dir);
}
return nents;
}
static void c6x_dma_unmap_sg(struct device *dev, struct scatterlist *sglist,
int nents, enum dma_data_direction dir, unsigned long attrs)
{
struct scatterlist *sg;
int i;
if (attrs & DMA_ATTR_SKIP_CPU_SYNC)
return;
for_each_sg(sglist, sg, nents, i)
c6x_dma_sync(sg_dma_address(sg), sg->length, dir);
}
static void c6x_dma_sync_single_for_cpu(struct device *dev, dma_addr_t handle,
size_t size, enum dma_data_direction dir)
{
c6x_dma_sync(handle, size, dir);
}
static void c6x_dma_sync_single_for_device(struct device *dev,
dma_addr_t handle, size_t size, enum dma_data_direction dir)
{
c6x_dma_sync(handle, size, dir);
}
static void c6x_dma_sync_sg_for_cpu(struct device *dev,
struct scatterlist *sglist, int nents,
enum dma_data_direction dir)
{
struct scatterlist *sg;
int i;
for_each_sg(sglist, sg, nents, i)
c6x_dma_sync_single_for_cpu(dev, sg_dma_address(sg),
sg->length, dir);
}
static void c6x_dma_sync_sg_for_device(struct device *dev,
struct scatterlist *sglist, int nents,
enum dma_data_direction dir)
{
struct scatterlist *sg;
int i;
for_each_sg(sglist, sg, nents, i)
c6x_dma_sync_single_for_device(dev, sg_dma_address(sg),
sg->length, dir);
}
const struct dma_map_ops c6x_dma_ops = {
.alloc = c6x_dma_alloc,
.free = c6x_dma_free,
.map_page = c6x_dma_map_page,
.unmap_page = c6x_dma_unmap_page,
.map_sg = c6x_dma_map_sg,
.unmap_sg = c6x_dma_unmap_sg,
.sync_single_for_device = c6x_dma_sync_single_for_device,
.sync_single_for_cpu = c6x_dma_sync_single_for_cpu,
.sync_sg_for_device = c6x_dma_sync_sg_for_device,
.sync_sg_for_cpu = c6x_dma_sync_sg_for_cpu,
};
EXPORT_SYMBOL(c6x_dma_ops);
/* Number of entries preallocated for DMA-API debugging */
#define PREALLOC_DMA_DEBUG_ENTRIES (1 << 16)
static int __init dma_init(void)
{
dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
return 0;
}
fs_initcall(dma_init);
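
Per-arch initcalls like the dma_init() in the file deleted above are what
the "move dma-debug initialization to common code" change replaces;
conceptually the common code now does the equivalent once for every arch
(a sketch under that assumption, not the literal lib/dma-debug.c source):

	/* one shared initcall instead of a copy in every architecture */
	static int __init dma_debug_init_common(void)
	{
		dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
		return 0;
	}
	core_initcall(dma_debug_init_common);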
@@ -19,10 +19,12 @@
 #include <linux/bitops.h>
 #include <linux/module.h>
 #include <linux/interrupt.h>
-#include <linux/dma-mapping.h>
+#include <linux/dma-noncoherent.h>
 #include <linux/memblock.h>

+#include <asm/cacheflush.h>
 #include <asm/page.h>
+#include <asm/setup.h>

 /*
  * DMA coherent memory management, can be redefined using the memdma=
@@ -73,7 +75,7 @@ static void __free_dma_pages(u32 addr, int order)
  * Allocate DMA coherent memory space and return both the kernel
  * virtual and DMA address for that space.
  */
-void *c6x_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
+void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
 		gfp_t gfp, unsigned long attrs)
 {
 	u32 paddr;
@@ -98,7 +100,7 @@ void *c6x_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
 /*
  * Free DMA coherent memory as defined by the above mapping.
  */
-void c6x_dma_free(struct device *dev, size_t size, void *vaddr,
+void arch_dma_free(struct device *dev, size_t size, void *vaddr,
 		dma_addr_t dma_handle, unsigned long attrs)
 {
 	int order;
@@ -139,3 +141,35 @@ void __init coherent_mem_init(phys_addr_t start, u32 size)
 	dma_bitmap = phys_to_virt(bitmap_phys);
 	memset(dma_bitmap, 0, dma_pages * PAGE_SIZE);
 }
+
+static void c6x_dma_sync(struct device *dev, phys_addr_t paddr, size_t size,
+		enum dma_data_direction dir)
+{
+	BUG_ON(!valid_dma_direction(dir));
+
+	switch (dir) {
+	case DMA_FROM_DEVICE:
+		L2_cache_block_invalidate(paddr, paddr + size);
+		break;
+	case DMA_TO_DEVICE:
+		L2_cache_block_writeback(paddr, paddr + size);
+		break;
+	case DMA_BIDIRECTIONAL:
+		L2_cache_block_writeback_invalidate(paddr, paddr + size);
+		break;
+	default:
+		break;
+	}
+}
+
+void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
+		size_t size, enum dma_data_direction dir)
+{
+	return c6x_dma_sync(dev, paddr, size, dir);
+}
+
+void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr,
+		size_t size, enum dma_data_direction dir)
+{
+	return c6x_dma_sync(dev, paddr, size, dir);
+}
@@ -15,6 +15,4 @@ static inline void pcibios_penalize_isa_irq(int irq, int active)
 	/* We don't do dynamic PCI IRQ allocation */
 }

-#define PCI_DMA_BUS_IS_PHYS	(1)
-
 #endif /* _ASM_H8300_PCI_H */
@@ -19,6 +19,7 @@ config HEXAGON
 	select GENERIC_IRQ_SHOW
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_TRACEHOOK
+	select NEED_SG_DMA_LENGTH
 	select NO_IOPORT_MAP
 	select GENERIC_IOMAP
 	select GENERIC_SMP_IDLE_THREAD
@@ -63,9 +64,6 @@ config GENERIC_CSUM
 config GENERIC_IRQ_PROBE
 	def_bool y

-config NEED_SG_DMA_LENGTH
-	def_bool y
-
 config RWSEM_GENERIC_SPINLOCK
 	def_bool n
......
@@ -208,7 +208,6 @@ const struct dma_map_ops hexagon_dma_ops = {
 	.sync_single_for_cpu = hexagon_sync_single_for_cpu,
 	.sync_single_for_device = hexagon_sync_single_for_device,
 	.mapping_error	= hexagon_mapping_error,
-	.is_phys	= 1,
 };

 void __init hexagon_dma_init(void)
......
@@ -29,7 +29,6 @@ config IA64
 	select HAVE_FUNCTION_TRACER
 	select TTY
 	select HAVE_ARCH_TRACEHOOK
-	select HAVE_DMA_API_DEBUG
 	select HAVE_MEMBLOCK
 	select HAVE_MEMBLOCK_NODE_MAP
 	select HAVE_VIRT_CPU_ACCOUNTING
@@ -54,6 +53,8 @@ config IA64
 	select MODULES_USE_ELF_RELA
 	select ARCH_USE_CMPXCHG_LOCKREF
 	select HAVE_ARCH_AUDITSYSCALL
+	select NEED_DMA_MAP_STATE
+	select NEED_SG_DMA_LENGTH
 	default y
 	help
 	  The Itanium Processor Family is Intel's 64-bit successor to
@@ -78,18 +79,6 @@ config MMU
 	bool
 	default y

-config ARCH_DMA_ADDR_T_64BIT
-	def_bool y
-
-config NEED_DMA_MAP_STATE
-	def_bool y
-
-config NEED_SG_DMA_LENGTH
-	def_bool y
-
-config SWIOTLB
-	bool
-
 config STACKTRACE_SUPPORT
 	def_bool y
@@ -146,7 +135,6 @@ config IA64_GENERIC
 	bool "generic"
 	select NUMA
 	select ACPI_NUMA
-	select DMA_DIRECT_OPS
 	select SWIOTLB
 	select PCI_MSI
 	help
@@ -167,7 +155,6 @@ config IA64_GENERIC
 config IA64_DIG
 	bool "DIG-compliant"
-	select DMA_DIRECT_OPS
 	select SWIOTLB

 config IA64_DIG_VTD
@@ -183,7 +170,6 @@ config IA64_HP_ZX1
 config IA64_HP_ZX1_SWIOTLB
 	bool "HP-zx1/sx1000 with software I/O TLB"
-	select DMA_DIRECT_OPS
 	select SWIOTLB
 	help
 	  Build a kernel that runs on HP zx1 and sx1000 systems even when they
@@ -207,7 +193,6 @@ config IA64_SGI_UV
 	bool "SGI-UV"
 	select NUMA
 	select ACPI_NUMA
-	select DMA_DIRECT_OPS
 	select SWIOTLB
 	help
 	  Selecting this option will optimize the kernel for use on UV based
@@ -218,7 +203,6 @@ config IA64_SGI_UV
 config IA64_HP_SIM
 	bool "Ski-simulator"
-	select DMA_DIRECT_OPS
 	select SWIOTLB
 	depends on !PM
@@ -613,6 +597,3 @@ source "security/Kconfig"
 source "crypto/Kconfig"

 source "lib/Kconfig"
-
-config IOMMU_HELPER
-	def_bool (IA64_HP_ZX1 || IA64_HP_ZX1_SWIOTLB || IA64_GENERIC || SWIOTLB)
@@ -1845,9 +1845,6 @@ static void ioc_init(unsigned long hpa, struct ioc *ioc)
 	ioc_resource_init(ioc);
 	ioc_sac_init(ioc);

-	if ((long) ~iovp_mask > (long) ia64_max_iommu_merge_mask)
-		ia64_max_iommu_merge_mask = ~iovp_mask;
-
 	printk(KERN_INFO PFX
 		"%s %d.%d HPA 0x%lx IOVA space %dMb at 0x%lx\n",
 		ioc->name, (ioc->rev >> 4) & 0xF, ioc->rev & 0xF,
......
@@ -30,23 +30,6 @@ struct pci_vector_struct {
 #define PCIBIOS_MIN_IO		0x1000
 #define PCIBIOS_MIN_MEM		0x10000000

-/*
- * PCI_DMA_BUS_IS_PHYS should be set to 1 if there is _necessarily_ a direct
- * correspondence between device bus addresses and CPU physical addresses.
- * Platforms with a hardware I/O MMU _must_ turn this off to suppress the
- * bounce buffer handling code in the block and network device layers.
- * Platforms with separate bus address spaces _must_ turn this off and provide
- * a device DMA mapping implementation that takes care of the necessary
- * address translation.
- *
- * For now, the ia64 platforms which may have separate/multiple bus address
- * spaces all have I/O MMUs which support the merging of physically
- * discontiguous buffers, so we can use that as the sole factor to determine
- * the setting of PCI_DMA_BUS_IS_PHYS.
- */
-extern unsigned long ia64_max_iommu_merge_mask;
-#define PCI_DMA_BUS_IS_PHYS	(ia64_max_iommu_merge_mask == ~0UL)
-
 #define HAVE_PCI_MMAP
 #define ARCH_GENERIC_PCI_MMAP_RESOURCE
 #define arch_can_pci_mmap_wc()	1
......
@@ -9,16 +9,6 @@ int iommu_detected __read_mostly;
 const struct dma_map_ops *dma_ops;
 EXPORT_SYMBOL(dma_ops);

-#define PREALLOC_DMA_DEBUG_ENTRIES (1 << 16)
-
-static int __init dma_init(void)
-{
-	dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
-	return 0;
-}
-fs_initcall(dma_init);
-
 const struct dma_map_ops *dma_get_ops(struct device *dev)
 {
 	return dma_ops;
......
@@ -123,18 +123,6 @@ unsigned long ia64_i_cache_stride_shift = ~0;
 #define	CACHE_STRIDE_SHIFT	5
 unsigned long ia64_cache_stride_shift = ~0;

-/*
- * The merge_mask variable needs to be set to (max(iommu_page_size(iommu)) - 1).  This
- * mask specifies a mask of address bits that must be 0 in order for two buffers to be
- * mergeable by the I/O MMU (i.e., the end address of the first buffer and the start
- * address of the second buffer must be aligned to (merge_mask+1) in order to be
- * mergeable).  By default, we assume there is no I/O MMU which can merge physically
- * discontiguous buffers, so we set the merge_mask to ~0UL, which corresponds to a iommu
- * page-size of 2^64.
- */
-unsigned long ia64_max_iommu_merge_mask = ~0UL;
-EXPORT_SYMBOL(ia64_max_iommu_merge_mask);
-
 /*
  * We use a special marker for the end of memory and it uses the extra (+1) slot
  */
......
@@ -480,11 +480,6 @@ sn_io_early_init(void)
 	tioca_init_provider();
 	tioce_init_provider();

-	/*
-	 * This is needed to avoid bounce limit checks in the blk layer
-	 */
-	ia64_max_iommu_merge_mask = ~PAGE_MASK;
-
 	sn_irq_lh_init();
 	INIT_LIST_HEAD(&sn_sysdata_list);
 	sn_init_cpei_timer();
......
@@ -4,12 +4,6 @@

 #include <asm-generic/pci.h>

-/* The PCI address space does equal the physical memory
- * address space.  The networking and block device layers use
- * this boolean for bounce buffer decisions.
- */
-#define PCI_DMA_BUS_IS_PHYS	(1)
-
 #define	pcibios_assign_all_busses()	1

 #define	PCIBIOS_MIN_IO		0x00000100
......
@@ -19,7 +19,6 @@ config MICROBLAZE
 	select HAVE_ARCH_HASH
 	select HAVE_ARCH_KGDB
 	select HAVE_DEBUG_KMEMLEAK
-	select HAVE_DMA_API_DEBUG
 	select HAVE_DYNAMIC_FTRACE
 	select HAVE_FTRACE_MCOUNT_RECORD
 	select HAVE_FUNCTION_GRAPH_TRACER
......
@@ -62,12 +62,6 @@ extern int pci_mmap_legacy_page_range(struct pci_bus *bus,

 #define HAVE_PCI_LEGACY	1

-/* The PCI address space does equal the physical memory
- * address space (no IOMMU).  The IDE and SCSI device layers use
- * this boolean for bounce buffer decisions.
- */
-#define PCI_DMA_BUS_IS_PHYS     (1)
-
 extern void pcibios_claim_one_bus(struct pci_bus *b);

 extern void pcibios_finish_adding_to_bus(struct pci_bus *bus);
......
@@ -184,14 +184,3 @@ const struct dma_map_ops dma_nommu_ops = {
 	.sync_sg_for_device	= dma_nommu_sync_sg_for_device,
 };
 EXPORT_SYMBOL(dma_nommu_ops);
-
-/* Number of entries preallocated for DMA-API debugging */
-#define PREALLOC_DMA_DEBUG_ENTRIES (1 << 16)
-
-static int __init dma_init(void)
-{
-	dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
-	return 0;
-}
-fs_initcall(dma_init);
@@ -42,7 +42,6 @@ config MIPS
 	select HAVE_C_RECORDMCOUNT
 	select HAVE_DEBUG_KMEMLEAK
 	select HAVE_DEBUG_STACKOVERFLOW
-	select HAVE_DMA_API_DEBUG
 	select HAVE_DMA_CONTIGUOUS
 	select HAVE_DYNAMIC_FTRACE
 	select HAVE_EXIT_THREAD
@@ -132,7 +131,7 @@ config MIPS_GENERIC

 config MIPS_ALCHEMY
 	bool "Alchemy processor based machines"
-	select ARCH_PHYS_ADDR_T_64BIT
+	select PHYS_ADDR_T_64BIT
 	select CEVT_R4K
 	select CSRC_R4K
 	select IRQ_MIPS_CPU
@@ -890,7 +889,7 @@ config CAVIUM_OCTEON_SOC
 	bool "Cavium Networks Octeon SoC based boards"
 	select CEVT_R4K
 	select ARCH_HAS_PHYS_TO_DMA
-	select ARCH_PHYS_ADDR_T_64BIT
+	select PHYS_ADDR_T_64BIT
 	select DMA_COHERENT
 	select SYS_SUPPORTS_64BIT_KERNEL
 	select SYS_SUPPORTS_BIG_ENDIAN
@@ -912,6 +911,7 @@ config CAVIUM_OCTEON_SOC
 	select MIPS_NR_CPU_NR_MAP_1024
 	select BUILTIN_DTB
 	select MTD_COMPLEX_MAPPINGS
+	select SWIOTLB
 	select SYS_SUPPORTS_RELOCATABLE
 	help
 	  This option supports all of the Octeon reference boards from Cavium
@@ -936,7 +936,7 @@ config NLM_XLR_BOARD
 	select SWAP_IO_SPACE
 	select SYS_SUPPORTS_32BIT_KERNEL
 	select SYS_SUPPORTS_64BIT_KERNEL
-	select ARCH_PHYS_ADDR_T_64BIT
+	select PHYS_ADDR_T_64BIT
 	select SYS_SUPPORTS_BIG_ENDIAN
 	select SYS_SUPPORTS_HIGHMEM
 	select DMA_COHERENT
@@ -962,7 +962,7 @@ config NLM_XLP_BOARD
 	select HW_HAS_PCI
 	select SYS_SUPPORTS_32BIT_KERNEL
 	select SYS_SUPPORTS_64BIT_KERNEL
-	select ARCH_PHYS_ADDR_T_64BIT
+	select PHYS_ADDR_T_64BIT
 	select GPIOLIB
 	select SYS_SUPPORTS_BIG_ENDIAN
 	select SYS_SUPPORTS_LITTLE_ENDIAN
@@ -1101,9 +1101,6 @@ config GPIO_TXX9
 config FW_CFE
 	bool

-config ARCH_DMA_ADDR_T_64BIT
-	def_bool (HIGHMEM && ARCH_PHYS_ADDR_T_64BIT) || 64BIT
-
 config ARCH_SUPPORTS_UPROBES
 	bool
@@ -1122,9 +1119,6 @@ config DMA_NONCOHERENT
 	bool
 	select NEED_DMA_MAP_STATE

-config NEED_DMA_MAP_STATE
-	bool
-
 config SYS_HAS_EARLY_PRINTK
 	bool
@@ -1373,6 +1367,7 @@ config CPU_LOONGSON3
 	select MIPS_PGD_C0_CONTEXT
 	select MIPS_L1_CACHE_SHIFT_6
 	select GPIOLIB
+	select SWIOTLB
 	help
 	  The Loongson 3 processor implements the MIPS64R2 instruction
 	  set with many extensions.
@@ -1770,7 +1765,7 @@ config CPU_MIPS32_R5_XPA
 	depends on SYS_SUPPORTS_HIGHMEM
 	select XPA
 	select HIGHMEM
-	select ARCH_PHYS_ADDR_T_64BIT
+	select PHYS_ADDR_T_64BIT
 	default n
 	help
 	  Choose this option if you want to enable the Extended Physical
@@ -2402,9 +2397,6 @@ config SB1_PASS_2_1_WORKAROUNDS
 	default y

-config ARCH_PHYS_ADDR_T_64BIT
-	bool
-
 choice
 	prompt "SmartMIPS or microMIPS ASE support"
......
@@ -67,18 +67,6 @@ config CAVIUM_OCTEON_LOCK_L2_MEMCPY
 	help
 	  Lock the kernel's implementation of memcpy() into L2.

-config IOMMU_HELPER
-	bool
-
-config NEED_SG_DMA_LENGTH
-	bool
-
-config SWIOTLB
-	def_bool y
-	select DMA_DIRECT_OPS
-	select IOMMU_HELPER
-	select NEED_SG_DMA_LENGTH
-
 config OCTEON_ILM
 	tristate "Module to measure interrupt latency using Octeon CIU Timer"
 	help
......
@@ -121,13 +121,6 @@ extern unsigned long PCIBIOS_MIN_MEM;
 #include <linux/string.h>
 #include <asm/io.h>

-/*
- * The PCI address space does equal the physical memory address space.
- * The networking and block device layers use this boolean for bounce
- * buffer decisions.
- */
-#define PCI_DMA_BUS_IS_PHYS     (1)
-
 #ifdef CONFIG_PCI_DOMAINS_GENERIC
 static inline int pci_proc_domain(struct pci_bus *bus)
 {
......
@@ -130,21 +130,6 @@ config LOONGSON_UART_BASE
 	default y
 	depends on EARLY_PRINTK || SERIAL_8250

-config IOMMU_HELPER
-	bool
-
-config NEED_SG_DMA_LENGTH
-	bool
-
-config SWIOTLB
-	bool "Soft IOMMU Support for All-Memory DMA"
-	default y
-	depends on CPU_LOONGSON3
-	select DMA_DIRECT_OPS
-	select IOMMU_HELPER
-	select NEED_SG_DMA_LENGTH
-	select NEED_DMA_MAP_STATE
-
 config PHYS48_TO_HT40
 	bool
 	default y if CPU_LOONGSON3
......
@@ -402,13 +402,3 @@ static const struct dma_map_ops mips_default_dma_map_ops = {

 const struct dma_map_ops *mips_dma_map_ops = &mips_default_dma_map_ops;
 EXPORT_SYMBOL(mips_dma_map_ops);
-
-#define PREALLOC_DMA_DEBUG_ENTRIES (1 << 16)
-
-static int __init mips_dma_init(void)
-{
-	dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
-	return 0;
-}
-fs_initcall(mips_dma_init);
@@ -83,10 +83,4 @@ endif
 config NLM_COMMON
 	bool

-config IOMMU_HELPER
-	bool
-
-config NEED_SG_DMA_LENGTH
-	bool
-
 endif
@@ -5,10 +5,13 @@
 config NDS32
 	def_bool y
+	select ARCH_HAS_SYNC_DMA_FOR_CPU
+	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
 	select ARCH_WANT_FRAME_POINTERS if FTRACE
 	select CLKSRC_MMIO
 	select CLONE_BACKWARDS
 	select COMMON_CLK
+	select DMA_NONCOHERENT_OPS
 	select GENERIC_ASHLDI3
 	select GENERIC_ASHRDI3
 	select GENERIC_LSHRDI3
......
@@ -13,6 +13,7 @@ generic-y += cputime.h
 generic-y += device.h
 generic-y += div64.h
 generic-y += dma.h
+generic-y += dma-mapping.h
 generic-y += emergency-restart.h
 generic-y += errno.h
 generic-y += exec.h
......
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef ASMNDS32_DMA_MAPPING_H
#define ASMNDS32_DMA_MAPPING_H
extern struct dma_map_ops nds32_dma_ops;
static inline struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
{
return &nds32_dma_ops;
}
#endif
@@ -3,17 +3,14 @@
 #include <linux/types.h>
 #include <linux/mm.h>
-#include <linux/export.h>
 #include <linux/string.h>
-#include <linux/scatterlist.h>
-#include <linux/dma-mapping.h>
+#include <linux/dma-noncoherent.h>
 #include <linux/io.h>
 #include <linux/cache.h>
 #include <linux/highmem.h>
 #include <linux/slab.h>
 #include <asm/cacheflush.h>
 #include <asm/tlbflush.h>
-#include <asm/dma-mapping.h>
 #include <asm/proc-fns.h>

 /*
@@ -22,11 +19,6 @@
 static pte_t *consistent_pte;
 static DEFINE_RAW_SPINLOCK(consistent_lock);

-enum master_type {
-	FOR_CPU = 0,
-	FOR_DEVICE = 1,
-};
-
 /*
  * VM region handling support.
  *
@@ -124,10 +116,8 @@ static struct arch_vm_region *vm_region_find(struct arch_vm_region *head,
 	return c;
 }

-/* FIXME: attrs is not used. */
-static void *nds32_dma_alloc_coherent(struct device *dev, size_t size,
-				      dma_addr_t * handle, gfp_t gfp,
-				      unsigned long attrs)
+void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
+		     gfp_t gfp, unsigned long attrs)
 {
 	struct page *page;
 	struct arch_vm_region *c;
@@ -232,8 +222,8 @@ static void *nds32_dma_alloc_coherent(struct device *dev, size_t size,
 	return NULL;
 }

-static void nds32_dma_free(struct device *dev, size_t size, void *cpu_addr,
-			   dma_addr_t handle, unsigned long attrs)
+void arch_dma_free(struct device *dev, size_t size, void *cpu_addr,
+		   dma_addr_t handle, unsigned long attrs)
 {
 	struct arch_vm_region *c;
 	unsigned long flags, addr;
@@ -333,145 +323,69 @@ static int __init consistent_init(void)
 }
 core_initcall(consistent_init);

-static void consistent_sync(void *vaddr, size_t size, int direction, int master_type);
-
-static dma_addr_t nds32_dma_map_page(struct device *dev, struct page *page,
-				     unsigned long offset, size_t size,
-				     enum dma_data_direction dir,
-				     unsigned long attrs)
-{
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-		consistent_sync((void *)(page_address(page) + offset), size, dir, FOR_DEVICE);
-	return page_to_phys(page) + offset;
-}
-
-static void nds32_dma_unmap_page(struct device *dev, dma_addr_t handle,
-				 size_t size, enum dma_data_direction dir,
-				 unsigned long attrs)
-{
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-		consistent_sync(phys_to_virt(handle), size, dir, FOR_CPU);
-}
-
-/*
- * Make an area consistent for devices.
- */
-static void consistent_sync(void *vaddr, size_t size, int direction, int master_type)
-{
-	unsigned long start = (unsigned long)vaddr;
-	unsigned long end = start + size;
-
-	if (master_type == FOR_CPU) {
-		switch (direction) {
-		case DMA_TO_DEVICE:
-			break;
-		case DMA_FROM_DEVICE:
-		case DMA_BIDIRECTIONAL:
-			cpu_dma_inval_range(start, end);
-			break;
-		default:
-			BUG();
-		}
-	} else {
-		/* FOR_DEVICE */
-		switch (direction) {
-		case DMA_FROM_DEVICE:
-			break;
-		case DMA_TO_DEVICE:
-		case DMA_BIDIRECTIONAL:
-			cpu_dma_wb_range(start, end);
-			break;
-		default:
-			BUG();
-		}
-	}
-}
-
-static int nds32_dma_map_sg(struct device *dev, struct scatterlist *sg,
-			    int nents, enum dma_data_direction dir,
-			    unsigned long attrs)
-{
-	int i;
-
-	for (i = 0; i < nents; i++, sg++) {
-		void *virt;
-		unsigned long pfn;
-		struct page *page = sg_page(sg);
-
-		sg->dma_address = sg_phys(sg);
-		pfn = page_to_pfn(page) + sg->offset / PAGE_SIZE;
-		page = pfn_to_page(pfn);
-		if (PageHighMem(page)) {
-			virt = kmap_atomic(page);
-			consistent_sync(virt, sg->length, dir, FOR_CPU);
-			kunmap_atomic(virt);
-		} else {
-			if (sg->offset > PAGE_SIZE)
-				panic("sg->offset:%08x > PAGE_SIZE\n",
-				      sg->offset);
-			virt = page_address(page) + sg->offset;
-			consistent_sync(virt, sg->length, dir, FOR_CPU);
-		}
-	}
-	return nents;
-}
-
-static void nds32_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
-			       int nhwentries, enum dma_data_direction dir,
-			       unsigned long attrs)
-{
-}
-
-static void
-nds32_dma_sync_single_for_cpu(struct device *dev, dma_addr_t handle,
-			      size_t size, enum dma_data_direction dir)
-{
-	consistent_sync((void *)phys_to_virt(handle), size, dir, FOR_CPU);
-}
-
-static void
-nds32_dma_sync_single_for_device(struct device *dev, dma_addr_t handle,
-				 size_t size, enum dma_data_direction dir)
-{
-	consistent_sync((void *)phys_to_virt(handle), size, dir, FOR_DEVICE);
-}
-
-static void
-nds32_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nents,
-			  enum dma_data_direction dir)
-{
-	int i;
-
-	for (i = 0; i < nents; i++, sg++) {
-		char *virt =
-		    page_address((struct page *)sg->page_link) + sg->offset;
-		consistent_sync(virt, sg->length, dir, FOR_CPU);
-	}
-}
-
-static void
-nds32_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
-			     int nents, enum dma_data_direction dir)
-{
-	int i;
-
-	for (i = 0; i < nents; i++, sg++) {
-		char *virt =
-		    page_address((struct page *)sg->page_link) + sg->offset;
-		consistent_sync(virt, sg->length, dir, FOR_DEVICE);
-	}
-}
-
-struct dma_map_ops nds32_dma_ops = {
-	.alloc = nds32_dma_alloc_coherent,
-	.free = nds32_dma_free,
-	.map_page = nds32_dma_map_page,
-	.unmap_page = nds32_dma_unmap_page,
-	.map_sg = nds32_dma_map_sg,
-	.unmap_sg = nds32_dma_unmap_sg,
-	.sync_single_for_device = nds32_dma_sync_single_for_device,
-	.sync_single_for_cpu = nds32_dma_sync_single_for_cpu,
-	.sync_sg_for_cpu = nds32_dma_sync_sg_for_cpu,
-	.sync_sg_for_device = nds32_dma_sync_sg_for_device,
-};
-
-EXPORT_SYMBOL(nds32_dma_ops);
+static inline void cache_op(phys_addr_t paddr, size_t size,
+		void (*fn)(unsigned long start, unsigned long end))
+{
+	struct page *page = pfn_to_page(paddr >> PAGE_SHIFT);
+	unsigned offset = paddr & ~PAGE_MASK;
+	size_t left = size;
+	unsigned long start;
+
+	do {
+		size_t len = left;
+
+		if (PageHighMem(page)) {
+			void *addr;
+
+			if (offset + len > PAGE_SIZE) {
+				if (offset >= PAGE_SIZE) {
+					page += offset >> PAGE_SHIFT;
+					offset &= ~PAGE_MASK;
+				}
+				len = PAGE_SIZE - offset;
+			}
+
+			addr = kmap_atomic(page);
+			start = (unsigned long)(addr + offset);
+			fn(start, start + len);
+			kunmap_atomic(addr);
+		} else {
+			start = (unsigned long)phys_to_virt(paddr);
+			fn(start, start + size);
+		}
+		offset = 0;
+		page++;
+		left -= len;
+	} while (left);
+}
+
+void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
+		size_t size, enum dma_data_direction dir)
+{
+	switch (dir) {
+	case DMA_FROM_DEVICE:
+		break;
+	case DMA_TO_DEVICE:
+	case DMA_BIDIRECTIONAL:
+		cache_op(paddr, size, cpu_dma_wb_range);
+		break;
+	default:
+		BUG();
+	}
+}
+
+void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr,
+		size_t size, enum dma_data_direction dir)
+{
+	switch (dir) {
+	case DMA_TO_DEVICE:
+		break;
+	case DMA_FROM_DEVICE:
+	case DMA_BIDIRECTIONAL:
+		cache_op(paddr, size, cpu_dma_inval_range);
+		break;
+	default:
+		BUG();
+	}
+}
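With DMA_NONCOHERENT_OPS selected, the dma_map_ops instance comes from common code and only the two arch_sync_dma_* hooks above remain per-architecture. A rough sketch of how the generic map-page path invokes them (simplified from the new lib/dma-noncoherent.c, not nds32 code):

static dma_addr_t dma_noncoherent_map_page(struct device *dev,
		struct page *page, unsigned long offset, size_t size,
		enum dma_data_direction dir, unsigned long attrs)
{
	dma_addr_t addr;

	/* dma-direct does the actual translation (phys == dma here) */
	addr = dma_direct_map_page(dev, page, offset, size, dir, attrs);
	if (!dma_mapping_error(dev, addr) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
		/* cache writeback/invalidate lands in the arch hook above */
		arch_sync_dma_for_device(dev, page_to_phys(page) + offset,
					 size, dir);
	return addr;
}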
@@ -247,14 +247,3 @@ const struct dma_map_ops or1k_dma_map_ops = {
 	.sync_single_for_device = or1k_sync_single_for_device,
 };
 EXPORT_SYMBOL(or1k_dma_map_ops);
-
-/* Number of entries preallocated for DMA-API debugging */
-#define PREALLOC_DMA_DEBUG_ENTRIES (1 << 16)
-
-static int __init dma_init(void)
-{
-	dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
-	return 0;
-}
-
-fs_initcall(dma_init);
@@ -51,6 +51,8 @@ config PARISC
 	select GENERIC_CLOCKEVENTS
 	select ARCH_NO_COHERENT_DMA_MMAP
 	select CPU_NO_EFFICIENT_FFS
+	select NEED_DMA_MAP_STATE
+	select NEED_SG_DMA_LENGTH
 	help
 	  The PA-RISC microprocessor is designed by Hewlett-Packard and used
@@ -111,12 +113,6 @@ config PM
 config STACKTRACE_SUPPORT
 	def_bool y

-config NEED_DMA_MAP_STATE
-	def_bool y
-
-config NEED_SG_DMA_LENGTH
-	def_bool y
-
 config ISA_DMA_API
 	bool
...
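Moving these options into selects does not change driver-visible behaviour: when NEED_DMA_MAP_STATE is set, the dma_unmap_addr()/dma_unmap_len() helpers expand to real struct fields, and compile away otherwise. A hypothetical driver descriptor illustrating the API (my_desc and my_unmap are illustrative names, not kernel symbols):

struct my_desc {
	DEFINE_DMA_UNMAP_ADDR(mapping);	/* real field only with NEED_DMA_MAP_STATE */
	DEFINE_DMA_UNMAP_LEN(len);
};

static void my_unmap(struct device *dev, struct my_desc *d)
{
	dma_unmap_single(dev, dma_unmap_addr(d, mapping),
			 dma_unmap_len(d, len), DMA_TO_DEVICE);
}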
@@ -87,29 +87,6 @@ struct pci_hba_data {
 #define PCI_F_EXTEND	0UL
 #endif /* !CONFIG_64BIT */

-/*
- * If the PCI device's view of memory is the same as the CPU's view of memory,
- * PCI_DMA_BUS_IS_PHYS is true. The networking and block device layers use
- * this boolean for bounce buffer decisions.
- */
-#ifdef CONFIG_PA20
-/* All PA-2.0 machines have an IOMMU. */
-#define PCI_DMA_BUS_IS_PHYS	0
-#define parisc_has_iommu()	do { } while (0)
-#else
-
-#if defined(CONFIG_IOMMU_CCIO) || defined(CONFIG_IOMMU_SBA)
-extern int parisc_bus_is_phys;	/* in arch/parisc/kernel/setup.c */
-#define PCI_DMA_BUS_IS_PHYS	parisc_bus_is_phys
-#define parisc_has_iommu()	do { parisc_bus_is_phys = 0; } while (0)
-#else
-#define PCI_DMA_BUS_IS_PHYS	1
-#define parisc_has_iommu()	do { } while (0)
-#endif
-
-#endif	/* !CONFIG_PA20 */
-
 /*
 ** Most PCI devices (eg Tulip, NCR720) also export the same registers
 ** to both MMIO and I/O port space. Due to poor performance of I/O Port
...
@@ -58,11 +58,6 @@ struct proc_dir_entry * proc_runway_root __read_mostly = NULL;
 struct proc_dir_entry * proc_gsc_root __read_mostly = NULL;
 struct proc_dir_entry * proc_mckinley_root __read_mostly = NULL;

-#if !defined(CONFIG_PA20) && (defined(CONFIG_IOMMU_CCIO) || defined(CONFIG_IOMMU_SBA))
-int parisc_bus_is_phys __read_mostly = 1;	/* Assume no IOMMU is present */
-EXPORT_SYMBOL(parisc_bus_is_phys);
-#endif
-
 void __init setup_cmdline(char **cmdline_p)
 {
 	extern unsigned int boot_args[];
...
@@ -13,12 +13,6 @@ config 64BIT
 	bool
 	default y if PPC64

-config ARCH_PHYS_ADDR_T_64BIT
-	def_bool PPC64 || PHYS_64BIT
-
-config ARCH_DMA_ADDR_T_64BIT
-	def_bool ARCH_PHYS_ADDR_T_64BIT
-
 config MMU
 	bool
 	default y
@@ -187,7 +181,6 @@ config PPC
 	select HAVE_CONTEXT_TRACKING if PPC64
 	select HAVE_DEBUG_KMEMLEAK
 	select HAVE_DEBUG_STACKOVERFLOW
-	select HAVE_DMA_API_DEBUG
 	select HAVE_DYNAMIC_FTRACE
 	select HAVE_DYNAMIC_FTRACE_WITH_REGS if MPROFILE_KERNEL
 	select HAVE_EBPF_JIT if PPC64
@@ -223,9 +216,11 @@ config PPC
 	select HAVE_SYSCALL_TRACEPOINTS
 	select HAVE_VIRT_CPU_ACCOUNTING
 	select HAVE_IRQ_TIME_ACCOUNTING
+	select IOMMU_HELPER if PPC64
 	select IRQ_DOMAIN
 	select IRQ_FORCED_THREADING
 	select MODULES_USE_ELF_RELA
+	select NEED_SG_DMA_LENGTH
 	select NO_BOOTMEM
 	select OF
 	select OF_EARLY_FLATTREE
@@ -478,19 +473,6 @@ config MPROFILE_KERNEL
 	depends on PPC64 && CPU_LITTLE_ENDIAN
 	def_bool !DISABLE_MPROFILE_KERNEL

-config IOMMU_HELPER
-	def_bool PPC64
-
-config SWIOTLB
-	bool "SWIOTLB support"
-	default n
-	select IOMMU_HELPER
-	---help---
-	  Support for IO bounce buffering for systems without an IOMMU.
-	  This allows us to DMA to the full physical address space on
-	  platforms where the size of a physical address is larger
-	  than the bus address. Not all platforms support this.
-
 config HOTPLUG_CPU
 	bool "Support for enabling/disabling CPUs"
 	depends on SMP && (PPC_PSERIES || \
@@ -913,9 +895,6 @@ config ZONE_DMA
 config NEED_DMA_MAP_STATE
 	def_bool (PPC64 || NOT_COHERENT_CACHE)

-config NEED_SG_DMA_LENGTH
-	def_bool y
-
 config GENERIC_ISA_DMA
 	bool
 	depends on ISA_DMA_API
...
@@ -92,24 +92,6 @@ extern int pci_mmap_legacy_page_range(struct pci_bus *bus,

 #define HAVE_PCI_LEGACY	1

-#ifdef CONFIG_PPC64
-
-/* The PCI address space does not equal the physical memory address
- * space (we have an IOMMU). The IDE and SCSI device layers use
- * this boolean for bounce buffer decisions.
- */
-#define PCI_DMA_BUS_IS_PHYS	(0)
-
-#else /* 32-bit */
-
-/* The PCI address space does equal the physical memory
- * address space (no IOMMU). The IDE and SCSI device layers use
- * this boolean for bounce buffer decisions.
- */
-#define PCI_DMA_BUS_IS_PHYS	(1)
-
-#endif /* CONFIG_PPC64 */
-
 extern void pcibios_claim_one_bus(struct pci_bus *b);
 extern void pcibios_finish_adding_to_bus(struct pci_bus *bus);
...
@@ -309,8 +309,6 @@ int dma_set_coherent_mask(struct device *dev, u64 mask)
 }
 EXPORT_SYMBOL(dma_set_coherent_mask);

-#define PREALLOC_DMA_DEBUG_ENTRIES (1 << 16)
-
 int dma_set_mask(struct device *dev, u64 dma_mask)
 {
 	if (ppc_md.dma_set_mask)
@@ -361,7 +359,6 @@ EXPORT_SYMBOL_GPL(dma_get_required_mask);

 static int __init dma_init(void)
 {
-	dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
 #ifdef CONFIG_PCI
 	dma_debug_add_bus(&pci_bus_type);
 #endif
...
@@ -222,6 +222,7 @@ config PTE_64BIT
 config PHYS_64BIT
 	bool 'Large physical address support' if E500 || PPC_86xx
 	depends on (44x || E500 || PPC_86xx) && !PPC_83xx && !PPC_82xx
+	select PHYS_ADDR_T_64BIT
 	---help---
 	  This option enables kernel support for larger than 32-bit physical
 	  addresses. This feature may not be available on all cores.
...
@@ -3,8 +3,16 @@
 # see Documentation/kbuild/kconfig-language.txt.
 #

+config 64BIT
+	bool
+
+config 32BIT
+	bool
+
 config RISCV
 	def_bool y
+	# even on 32-bit, physical (and DMA) addresses are > 32-bits
+	select PHYS_ADDR_T_64BIT
 	select OF
 	select OF_EARLY_FLATTREE
 	select OF_IRQ
@@ -22,7 +30,6 @@ config RISCV
 	select GENERIC_ATOMIC64 if !64BIT || !RISCV_ISA_A
 	select HAVE_MEMBLOCK
 	select HAVE_MEMBLOCK_NODE_MAP
-	select HAVE_DMA_API_DEBUG
 	select HAVE_DMA_CONTIGUOUS
 	select HAVE_GENERIC_DMA_COHERENT
 	select IRQ_DOMAIN
@@ -39,16 +46,9 @@ config RISCV
 config MMU
 	def_bool y

-# even on 32-bit, physical (and DMA) addresses are > 32-bits
-config ARCH_PHYS_ADDR_T_64BIT
-	def_bool y
-
 config ZONE_DMA32
 	bool
-	default y
+	default y if 64BIT

-config ARCH_DMA_ADDR_T_64BIT
-	def_bool y
-
 config PAGE_OFFSET
 	hex
@@ -101,7 +101,6 @@ choice

 config ARCH_RV32I
 	bool "RV32I"
-	select CPU_SUPPORTS_32BIT_KERNEL
 	select 32BIT
 	select GENERIC_ASHLDI3
 	select GENERIC_ASHRDI3
@@ -109,13 +108,13 @@ config ARCH_RV32I

 config ARCH_RV64I
 	bool "RV64I"
-	select CPU_SUPPORTS_64BIT_KERNEL
 	select 64BIT
 	select HAVE_FUNCTION_TRACER
 	select HAVE_FUNCTION_GRAPH_TRACER
 	select HAVE_FTRACE_MCOUNT_RECORD
 	select HAVE_DYNAMIC_FTRACE
 	select HAVE_DYNAMIC_FTRACE_WITH_REGS
+	select SWIOTLB

 endchoice
@@ -171,11 +170,6 @@ config NR_CPUS
 	depends on SMP
 	default "8"

-config CPU_SUPPORTS_32BIT_KERNEL
-	bool
-config CPU_SUPPORTS_64BIT_KERNEL
-	bool
-
 choice
 	prompt "CPU Tuning"
 	default TUNE_GENERIC
@@ -202,24 +196,6 @@ endmenu

 menu "Kernel type"

-choice
-	prompt "Kernel code model"
-	default 64BIT
-
-config 32BIT
-	bool "32-bit kernel"
-	depends on CPU_SUPPORTS_32BIT_KERNEL
-	help
-	  Select this option to build a 32-bit kernel.
-
-config 64BIT
-	bool "64-bit kernel"
-	depends on CPU_SUPPORTS_64BIT_KERNEL
-	help
-	  Select this option to build a 64-bit kernel.
-
-endchoice
-
 source "mm/Kconfig"

 source "kernel/Kconfig.preempt"
...
+// SPDX-License-Identifier: GPL-2.0
+#ifndef _RISCV_ASM_DMA_MAPPING_H
+#define _RISCV_ASM_DMA_MAPPING_H 1
+
+#ifdef CONFIG_SWIOTLB
+#include <linux/swiotlb.h>
+
+static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
+{
+	return &swiotlb_dma_ops;
+}
+#else
+#include <asm-generic/dma-mapping.h>
+#endif /* CONFIG_SWIOTLB */
+
+#endif /* _RISCV_ASM_DMA_MAPPING_H */
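With this header in place a RISC-V driver just uses the ordinary streaming DMA API; swiotlb_dma_ops transparently bounces any buffer the device's mask cannot reach. A hypothetical caller, sketched under that assumption (my_map_buffer is an illustrative name):

static int my_map_buffer(struct device *dev, void *buf, size_t len,
			 dma_addr_t *out)
{
	dma_addr_t addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

	if (dma_mapping_error(dev, addr))
		return -ENOMEM;	/* e.g. the swiotlb bounce pool is exhausted */
	*out = addr;
	return 0;
}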
@@ -26,9 +26,6 @@
 /* RISC-V shim does not initialize PCI bus */
 #define pcibios_assign_all_busses() 1

-/* We do not have an IOMMU */
-#define PCI_DMA_BUS_IS_PHYS 1
-
 extern int isa_dma_bridge_buggy;

 #ifdef CONFIG_PCI
...
@@ -29,6 +29,7 @@
 #include <linux/of_fdt.h>
 #include <linux/of_platform.h>
 #include <linux/sched/task.h>
+#include <linux/swiotlb.h>

 #include <asm/setup.h>
 #include <asm/sections.h>
@@ -206,6 +207,7 @@ void __init setup_arch(char **cmdline_p)
 	setup_bootmem();
 	paging_init();
 	unflatten_device_tree();
+	swiotlb_init(1);

 #ifdef CONFIG_SMP
 	setup_smp();
...
@@ -35,9 +35,6 @@ config GENERIC_BUG
 config GENERIC_BUG_RELATIVE_POINTERS
 	def_bool y

-config ARCH_DMA_ADDR_T_64BIT
-	def_bool y
-
 config GENERIC_LOCKBREAK
 	def_bool y if SMP && PREEMPT
@@ -133,7 +130,6 @@ config S390
 	select HAVE_CMPXCHG_LOCAL
 	select HAVE_COPY_THREAD_TLS
 	select HAVE_DEBUG_KMEMLEAK
-	select HAVE_DMA_API_DEBUG
 	select HAVE_DMA_CONTIGUOUS
 	select DMA_DIRECT_OPS
 	select HAVE_DYNAMIC_FTRACE
@@ -709,7 +705,11 @@ config QDIO

 menuconfig PCI
 	bool "PCI support"
 	select PCI_MSI
+	select IOMMU_HELPER
 	select IOMMU_SUPPORT
+	select NEED_DMA_MAP_STATE
+	select NEED_SG_DMA_LENGTH
 	help
 	  Enable PCI support.
@@ -733,15 +733,6 @@ config PCI_DOMAINS
 config HAS_IOMEM
 	def_bool PCI

-config IOMMU_HELPER
-	def_bool PCI
-
-config NEED_SG_DMA_LENGTH
-	def_bool PCI
-
-config NEED_DMA_MAP_STATE
-	def_bool PCI
-
 config CHSC_SCH
 	def_tristate m
 	prompt "Support for CHSC subchannels"
...
@@ -2,8 +2,6 @@
 #ifndef __ASM_S390_PCI_H
 #define __ASM_S390_PCI_H

-/* must be set before including asm-generic/pci.h */
-#define PCI_DMA_BUS_IS_PHYS	(0)
 /* must be set before including pci_clp.h */
 #define PCI_BAR_COUNT	6
...
@@ -668,15 +668,6 @@ void zpci_dma_exit(void)
 	kmem_cache_destroy(dma_region_table_cache);
 }

-#define PREALLOC_DMA_DEBUG_ENTRIES (1 << 16)
-
-static int __init dma_debug_do_init(void)
-{
-	dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
-	return 0;
-}
-fs_initcall(dma_debug_do_init);
-
 const struct dma_map_ops s390_pci_dma_ops = {
 	.alloc		= s390_dma_alloc,
 	.free		= s390_dma_free,
@@ -685,8 +676,6 @@ const struct dma_map_ops s390_pci_dma_ops = {
 	.map_page	= s390_dma_map_pages,
 	.unmap_page	= s390_dma_unmap_pages,
 	.mapping_error	= s390_mapping_error,
-	/* if we support direct DMA this must be conditional */
-	.is_phys	= 0,
 	/* dma_supported is unconditionally true without a callback */
 };
 EXPORT_SYMBOL_GPL(s390_pci_dma_ops);
...
@@ -14,7 +14,6 @@ config SUPERH
 	select HAVE_OPROFILE
 	select HAVE_GENERIC_DMA_COHERENT
 	select HAVE_ARCH_TRACEHOOK
-	select HAVE_DMA_API_DEBUG
 	select HAVE_PERF_EVENTS
 	select HAVE_DEBUG_BUGVERBOSE
 	select ARCH_HAVE_CUSTOM_GPIO_H
@@ -51,6 +50,9 @@ config SUPERH
 	select HAVE_ARCH_AUDITSYSCALL
 	select HAVE_FUTEX_CMPXCHG if FUTEX
 	select HAVE_NMI
+	select NEED_DMA_MAP_STATE
+	select NEED_SG_DMA_LENGTH
+
 	help
 	  The SuperH is a RISC processor targeted for use in embedded systems
 	  and consumer electronics; it was also used in the Sega Dreamcast
@@ -161,12 +163,6 @@ config DMA_COHERENT
 config DMA_NONCOHERENT
 	def_bool !DMA_COHERENT

-config NEED_DMA_MAP_STATE
-	def_bool DMA_NONCOHERENT
-
-config NEED_SG_DMA_LENGTH
-	def_bool y
-
 config PGTABLE_LEVELS
 	default 3 if X2TLB
 	default 2
...
@@ -71,12 +71,6 @@ extern unsigned long PCIBIOS_MIN_IO, PCIBIOS_MIN_MEM;
  * SuperH has everything mapped statically like x86.
  */

-/* The PCI address space does equal the physical memory
- * address space. The networking and block device layers use
- * this boolean for bounce buffer decisions.
- */
-#define PCI_DMA_BUS_IS_PHYS	(dma_ops->is_phys)
-
 #ifdef CONFIG_PCI
 /*
  * None of the SH PCI controllers support MWI, it is always treated as a
...
@@ -78,7 +78,6 @@ const struct dma_map_ops nommu_dma_ops = {
 	.sync_single_for_device	= nommu_sync_single_for_device,
 	.sync_sg_for_device	= nommu_sync_sg_for_device,
 #endif
-	.is_phys		= 1,
 };

 void __init no_iommu_init(void)
...
@@ -20,18 +20,9 @@
 #include <asm/cacheflush.h>
 #include <asm/addrspace.h>

-#define PREALLOC_DMA_DEBUG_ENTRIES	4096
-
 const struct dma_map_ops *dma_ops;
 EXPORT_SYMBOL(dma_ops);

-static int __init dma_init(void)
-{
-	dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
-	return 0;
-}
-fs_initcall(dma_init);
-
 void *dma_generic_alloc_coherent(struct device *dev, size_t size,
 				 dma_addr_t *dma_handle, gfp_t gfp,
 				 unsigned long attrs)
...
@@ -25,7 +25,6 @@ config SPARC
 	select RTC_CLASS
 	select RTC_DRV_M48T59
 	select RTC_SYSTOHC
-	select HAVE_DMA_API_DEBUG
 	select HAVE_ARCH_JUMP_LABEL if SPARC64
 	select GENERIC_IRQ_SHOW
 	select ARCH_WANT_IPC_PARSE_VERSION
@@ -44,6 +43,8 @@ config SPARC
 	select ARCH_HAS_SG_CHAIN
 	select CPU_NO_EFFICIENT_FFS
 	select LOCKDEP_SMALL if LOCKDEP
+	select NEED_DMA_MAP_STATE
+	select NEED_SG_DMA_LENGTH

 config SPARC32
 	def_bool !64BIT
@@ -67,6 +68,7 @@ config SPARC64
 	select HAVE_SYSCALL_TRACEPOINTS
 	select HAVE_CONTEXT_TRACKING
 	select HAVE_DEBUG_KMEMLEAK
+	select IOMMU_HELPER
 	select SPARSE_IRQ
 	select RTC_DRV_CMOS
 	select RTC_DRV_BQ4802
@@ -102,14 +104,6 @@ config ARCH_ATU
 	bool
 	default y if SPARC64

-config ARCH_DMA_ADDR_T_64BIT
-	bool
-	default y if ARCH_ATU
-
-config IOMMU_HELPER
-	bool
-	default y if SPARC64
-
 config STACKTRACE_SUPPORT
 	bool
 	default y if SPARC64
@@ -146,12 +140,6 @@ config ZONE_DMA
 	bool
 	default y if SPARC32

-config NEED_DMA_MAP_STATE
-	def_bool y
-
-config NEED_SG_DMA_LENGTH
-	def_bool y
-
 config GENERIC_ISA_DMA
 	bool
 	default y if SPARC32
...
@@ -17,7 +17,7 @@
 #define IOPTE_WRITE   0x0000000000000002UL

 #define IOMMU_NUM_CTXS	4096
-#include <linux/iommu-common.h>
+#include <asm/iommu-common.h>

 struct iommu_arena {
 	unsigned long	*map;
...
@@ -17,10 +17,6 @@

 #define PCI_IRQ_NONE		0xffffffff

-/* Dynamic DMA mapping stuff.
- */
-#define PCI_DMA_BUS_IS_PHYS	(0)
-
 #endif /* __KERNEL__ */

 #ifndef CONFIG_LEON_PCI
...
@@ -17,12 +17,6 @@

 #define PCI_IRQ_NONE		0xffffffff

-/* The PCI address space does not equal the physical memory
- * address space. The networking and block device layers use
- * this boolean for bounce buffer decisions.
- */
-#define PCI_DMA_BUS_IS_PHYS	(0)
-
 /* PCI IOMMU mapping bypass support. */

 /* PCI 64-bit addressing works for all slots on all controller
...
@@ -59,7 +59,7 @@ obj-$(CONFIG_SPARC32) += leon_pmc.o

 obj-$(CONFIG_SPARC64) += reboot.o
 obj-$(CONFIG_SPARC64) += sysfs.o
-obj-$(CONFIG_SPARC64) += iommu.o
+obj-$(CONFIG_SPARC64) += iommu.o iommu-common.o
 obj-$(CONFIG_SPARC64) += central.o
 obj-$(CONFIG_SPARC64) += starfire.o
 obj-$(CONFIG_SPARC64) += power.o
@@ -74,8 +74,6 @@ obj-$(CONFIG_SPARC64) += pcr.o
 obj-$(CONFIG_SPARC64) += nmi.o
 obj-$(CONFIG_SPARC64_SMP) += cpumap.o

-obj-y += dma.o
-
 obj-$(CONFIG_PCIC_PCI) += pcic.o
 obj-$(CONFIG_LEON_PCI) += leon_pci.o
 obj-$(CONFIG_SPARC_GRPCI2)+= leon_pci_grpci2.o
...
-// SPDX-License-Identifier: GPL-2.0
-#include <linux/kernel.h>
-#include <linux/dma-mapping.h>
-#include <linux/dma-debug.h>
-
-#define PREALLOC_DMA_DEBUG_ENTRIES (1 << 15)
-
-static int __init dma_init(void)
-{
-	dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
-	return 0;
-}
-fs_initcall(dma_init);
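Per-arch initcalls like the deleted sparc file above are replaced by a single initialization in the common dma-debug code, so every arch that enables DMA_API_DEBUG gets the same preallocation. Approximately what the common bring-up looks like after this series (a simplified sketch of lib/dma-debug.c, not the verbatim source):

#define PREALLOC_DMA_DEBUG_ENTRIES	(1 << 16)

static int __init dma_debug_do_init(void)
{
	/* one shared init instead of a copy in every architecture */
	dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
	return 0;
}
core_initcall(dma_debug_do_init);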
@@ -8,9 +8,9 @@
 #include <linux/bitmap.h>
 #include <linux/bug.h>
 #include <linux/iommu-helper.h>
-#include <linux/iommu-common.h>
 #include <linux/dma-mapping.h>
 #include <linux/hash.h>
+#include <asm/iommu-common.h>

 static unsigned long iommu_large_alloc = 15;
@@ -93,7 +93,6 @@ void iommu_tbl_pool_init(struct iommu_map_table *iommu,
 		p->hint = p->start;
 		p->end = num_entries;
 }
-EXPORT_SYMBOL(iommu_tbl_pool_init);

 unsigned long iommu_tbl_range_alloc(struct device *dev,
 				struct iommu_map_table *iommu,
@@ -224,7 +223,6 @@ unsigned long iommu_tbl_range_alloc(struct device *dev,

 	return n;
 }
-EXPORT_SYMBOL(iommu_tbl_range_alloc);

 static struct iommu_pool *get_pool(struct iommu_map_table *tbl,
 				   unsigned long entry)
@@ -264,4 +262,3 @@ void iommu_tbl_range_free(struct iommu_map_table *iommu, u64 dma_addr,
 	bitmap_clear(iommu->map, entry, npages);
 	spin_unlock_irqrestore(&(pool->lock), flags);
 }
-EXPORT_SYMBOL(iommu_tbl_range_free);
@@ -14,7 +14,7 @@
 #include <linux/errno.h>
 #include <linux/iommu-helper.h>
 #include <linux/bitmap.h>
-#include <linux/iommu-common.h>
+#include <asm/iommu-common.h>

 #ifdef CONFIG_PCI
 #include <linux/pci.h>
...
@@ -16,7 +16,7 @@
 #include <linux/list.h>
 #include <linux/init.h>
 #include <linux/bitmap.h>
-#include <linux/iommu-common.h>
+#include <asm/iommu-common.h>

 #include <asm/hypervisor.h>
 #include <asm/iommu.h>
...
@@ -16,7 +16,7 @@
 #include <linux/export.h>
 #include <linux/log2.h>
 #include <linux/of_device.h>
-#include <linux/iommu-common.h>
+#include <asm/iommu-common.h>

 #include <asm/iommu.h>
 #include <asm/irq.h>
...
@@ -19,6 +19,8 @@ config UNICORE32
 	select ARCH_WANT_FRAME_POINTERS
 	select GENERIC_IOMAP
 	select MODULES_USE_ELF_REL
+	select NEED_DMA_MAP_STATE
+	select SWIOTLB
 	help
 	  UniCore-32 is 32-bit Instruction Set Architecture,
 	  including a series of low-power-consumption RISC chip
@@ -61,9 +63,6 @@ config ARCH_MAY_HAVE_PC_FDC
 config ZONE_DMA
 	def_bool y

-config NEED_DMA_MAP_STATE
-	def_bool y
-
 source "init/Kconfig"

 source "kernel/Kconfig.freezer"
...
@@ -39,14 +39,3 @@ config CPU_TLB_SINGLE_ENTRY_DISABLE
 	default y
 	help
 	  Say Y here to disable the TLB single entry operations.
-
-config SWIOTLB
-	def_bool y
-	select DMA_DIRECT_OPS
-
-config IOMMU_HELPER
-	def_bool SWIOTLB
-
-config NEED_SG_DMA_LENGTH
-	def_bool SWIOTLB
@@ -28,6 +28,8 @@ config X86_64
 	select ARCH_USE_CMPXCHG_LOCKREF
 	select HAVE_ARCH_SOFT_DIRTY
 	select MODULES_USE_ELF_RELA
+	select NEED_DMA_MAP_STATE
+	select SWIOTLB
 	select X86_DEV_DMA_OPS
 	select ARCH_HAS_SYSCALL_WRAPPER
@@ -134,7 +136,6 @@ config X86
 	select HAVE_C_RECORDMCOUNT
 	select HAVE_DEBUG_KMEMLEAK
 	select HAVE_DEBUG_STACKOVERFLOW
-	select HAVE_DMA_API_DEBUG
 	select HAVE_DMA_CONTIGUOUS
 	select HAVE_DYNAMIC_FTRACE
 	select HAVE_DYNAMIC_FTRACE_WITH_REGS
@@ -184,6 +185,7 @@ config X86
 	select HAVE_UNSTABLE_SCHED_CLOCK
 	select HAVE_USER_RETURN_NOTIFIER
 	select IRQ_FORCED_THREADING
+	select NEED_SG_DMA_LENGTH
 	select PCI_LOCKLESS_CONFIG
 	select PERF_EVENTS
 	select RTC_LIB
@@ -236,13 +238,6 @@ config ARCH_MMAP_RND_COMPAT_BITS_MAX
 config SBUS
 	bool

-config NEED_DMA_MAP_STATE
-	def_bool y
-	depends on X86_64 || INTEL_IOMMU || DMA_API_DEBUG || SWIOTLB
-
-config NEED_SG_DMA_LENGTH
-	def_bool y
-
 config GENERIC_ISA_DMA
 	def_bool y
 	depends on ISA_DMA_API
@@ -875,6 +870,7 @@ config DMI

 config GART_IOMMU
 	bool "Old AMD GART IOMMU support"
+	select IOMMU_HELPER
 	select SWIOTLB
 	depends on X86_64 && PCI && AMD_NB
 	---help---
@@ -896,6 +892,7 @@ config GART_IOMMU

 config CALGARY_IOMMU
 	bool "IBM Calgary IOMMU support"
+	select IOMMU_HELPER
 	select SWIOTLB
 	depends on X86_64 && PCI
 	---help---
@@ -923,20 +920,6 @@ config CALGARY_IOMMU_ENABLED_BY_DEFAULT
 	  Calgary anyway, pass 'iommu=calgary' on the kernel command line.
 	  If unsure, say Y.

-# need this always selected by IOMMU for the VIA workaround
-config SWIOTLB
-	def_bool y if X86_64
-	---help---
-	  Support for software bounce buffers used on x86-64 systems
-	  which don't have a hardware IOMMU. Using this PCI devices
-	  which can only access 32-bits of memory can be used on systems
-	  with more than 3 GB of memory.
-	  If unsure, say Y.
-
-config IOMMU_HELPER
-	def_bool y
-	depends on CALGARY_IOMMU || GART_IOMMU || SWIOTLB || AMD_IOMMU
-
 config MAXSMP
 	bool "Enable Maximum number of SMP Processors and NUMA Nodes"
 	depends on X86_64 && SMP && DEBUG_KERNEL
@@ -1458,6 +1441,7 @@ config HIGHMEM
 config X86_PAE
 	bool "PAE (Physical Address Extension) Support"
 	depends on X86_32 && !HIGHMEM4G
+	select PHYS_ADDR_T_64BIT
 	select SWIOTLB
 	---help---
 	  PAE is required for NX support, and furthermore enables
@@ -1485,14 +1469,6 @@ config X86_5LEVEL
 	  Say N if unsure.

-config ARCH_PHYS_ADDR_T_64BIT
-	def_bool y
-	depends on X86_64 || X86_PAE
-
-config ARCH_DMA_ADDR_T_64BIT
-	def_bool y
-	depends on X86_64 || HIGHMEM64G
-
 config X86_DIRECT_GBPAGES
 	def_bool y
 	depends on X86_64 && !DEBUG_PAGEALLOC
...
@@ -30,10 +30,7 @@ static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 	return dma_ops;
 }

-int arch_dma_supported(struct device *dev, u64 mask);
-#define arch_dma_supported arch_dma_supported
-
-bool arch_dma_alloc_attrs(struct device **dev, gfp_t *gfp);
+bool arch_dma_alloc_attrs(struct device **dev);
 #define arch_dma_alloc_attrs arch_dma_alloc_attrs

 #endif
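With the gfp_t argument gone, the arch hook can no longer filter the allocation flags; it only gets a chance to substitute the device pointer before the generic allocator runs. A compressed sketch of the generic caller in <linux/dma-mapping.h> (dma_alloc_attrs_sketch is an illustrative condensation with error handling elided, not the verbatim kernel source):

#ifndef arch_dma_alloc_attrs
#define arch_dma_alloc_attrs(dev)	(true)
#endif

static inline void *dma_alloc_attrs_sketch(struct device *dev, size_t size,
		dma_addr_t *dma_handle, gfp_t flag, unsigned long attrs)
{
	const struct dma_map_ops *ops = get_dma_ops(dev);

	if (!arch_dma_alloc_attrs(&dev))	/* may redirect to a fallback dev */
		return NULL;
	return ops->alloc(dev, size, dma_handle, flag, attrs);
}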
@@ -117,9 +117,6 @@ void native_restore_msi_irqs(struct pci_dev *dev);
 #define native_setup_msi_irqs		NULL
 #define native_teardown_msi_irq		NULL
 #endif
-
-#define PCI_DMA_BUS_IS_PHYS	(dma_ops->is_phys)
-
 #endif  /* __KERNEL__ */

 #ifdef CONFIG_X86_64
...
@@ -15,13 +15,11 @@
 #include <asm/x86_init.h>
 #include <asm/iommu_table.h>

-static int forbid_dac __read_mostly;
+static bool disable_dac_quirk __read_mostly;

 const struct dma_map_ops *dma_ops = &dma_direct_ops;
 EXPORT_SYMBOL(dma_ops);

-static int iommu_sac_force __read_mostly;
-
 #ifdef CONFIG_IOMMU_DEBUG
 int panic_on_overflow __read_mostly = 1;
 int force_iommu __read_mostly = 1;
@@ -55,9 +53,6 @@
 };
 EXPORT_SYMBOL(x86_dma_fallback_dev);

-/* Number of entries preallocated for DMA-API debugging */
-#define PREALLOC_DMA_DEBUG_ENTRIES       65536
-
 void __init pci_iommu_alloc(void)
 {
 	struct iommu_table_entry *p;
@@ -76,7 +71,7 @@ void __init pci_iommu_alloc(void)
 	}
 }

-bool arch_dma_alloc_attrs(struct device **dev, gfp_t *gfp)
+bool arch_dma_alloc_attrs(struct device **dev)
 {
 	if (!*dev)
 		*dev = &x86_dma_fallback_dev;
@@ -125,13 +120,13 @@ static __init int iommu_setup(char *p)
 		if (!strncmp(p, "nomerge", 7))
 			iommu_merge = 0;
 		if (!strncmp(p, "forcesac", 8))
-			iommu_sac_force = 1;
+			pr_warn("forcesac option ignored.\n");
 		if (!strncmp(p, "allowdac", 8))
-			forbid_dac = 0;
+			pr_warn("allowdac option ignored.\n");
 		if (!strncmp(p, "nodac", 5))
-			forbid_dac = 1;
+			pr_warn("nodac option ignored.\n");
 		if (!strncmp(p, "usedac", 6)) {
-			forbid_dac = -1;
+			disable_dac_quirk = true;
 			return 1;
 		}
 #ifdef CONFIG_SWIOTLB
@@ -156,40 +151,9 @@ static __init int iommu_setup(char *p)
 }
 early_param("iommu", iommu_setup);

-int arch_dma_supported(struct device *dev, u64 mask)
-{
-#ifdef CONFIG_PCI
-	if (mask > 0xffffffff && forbid_dac > 0) {
-		dev_info(dev, "PCI: Disallowing DAC for device\n");
-		return 0;
-	}
-#endif
-
-	/* Tell the device to use SAC when IOMMU force is on.  This
-	   allows the driver to use cheaper accesses in some cases.
-
-	   Problem with this is that if we overflow the IOMMU area and
-	   return DAC as fallback address the device may not handle it
-	   correctly.
-
-	   As a special case some controllers have a 39bit address
-	   mode that is as efficient as 32bit (aic79xx). Don't force
-	   SAC for these.  Assume all masks <= 40 bits are of this
-	   type. Normally this doesn't make any difference, but gives
-	   more gentle handling of IOMMU overflow. */
-	if (iommu_sac_force && (mask >= DMA_BIT_MASK(40))) {
-		dev_info(dev, "Force SAC with mask %Lx\n", mask);
-		return 0;
-	}
-
-	return 1;
-}
-EXPORT_SYMBOL(arch_dma_supported);
-
 static int __init pci_iommu_init(void)
 {
 	struct iommu_table_entry *p;
-
-	dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
-
 #ifdef CONFIG_PCI
 	dma_debug_add_bus(&pci_bus_type);
@@ -209,11 +173,17 @@ rootfs_initcall(pci_iommu_init);
 #ifdef CONFIG_PCI
 /* Many VIA bridges seem to corrupt data for DAC. Disable it here */

+static int via_no_dac_cb(struct pci_dev *pdev, void *data)
+{
+	pdev->dev.dma_32bit_limit = true;
+	return 0;
+}
+
 static void via_no_dac(struct pci_dev *dev)
 {
-	if (forbid_dac == 0) {
+	if (!disable_dac_quirk) {
 		dev_info(&dev->dev, "disabling DAC on VIA PCI bridge\n");
-		forbid_dac = 1;
+		pci_walk_bus(dev->subordinate, via_no_dac_cb, NULL);
 	}
 }
 DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_VIA, PCI_ANY_ID,
...
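Instead of globally forbidding DAC, the quirk now tags only the devices below a VIA bridge, and the dma-direct core rejects masks above 32 bits for tagged devices. Roughly what the corresponding check looks like (a simplified sketch of the dma_direct_supported() change in lib/dma-direct.c from this series, not the verbatim source):

int dma_direct_supported(struct device *dev, u64 mask)
{
	/* quirk: the bridge corrupts DAC cycles, so cap this device at 32 bits */
	if (dev->dma_32bit_limit && mask > phys_to_dma(dev, DMA_BIT_MASK(32)))
		return 0;
	return 1;
}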
@@ -19,7 +19,6 @@ config XTENSA
 	select HAVE_ARCH_KASAN if MMU
 	select HAVE_CC_STACKPROTECTOR
 	select HAVE_DEBUG_KMEMLEAK
-	select HAVE_DMA_API_DEBUG
 	select HAVE_DMA_CONTIGUOUS
 	select HAVE_EXIT_THREAD
 	select HAVE_FUNCTION_TRACER
...
@@ -42,8 +42,6 @@ extern struct pci_controller* pcibios_alloc_controller(void);
  * decisions.
  */

-#define PCI_DMA_BUS_IS_PHYS	(1)
-
 /* Tell PCI code what kind of PCI resource mappings we support */
 #define HAVE_PCI_MMAP			1
 #define ARCH_GENERIC_PCI_MMAP_RESOURCE	1
...
@@ -261,12 +261,3 @@ const struct dma_map_ops xtensa_dma_map_ops = {
 	.mapping_error = xtensa_dma_mapping_error,
 };
 EXPORT_SYMBOL(xtensa_dma_map_ops);
-
-#define PREALLOC_DMA_DEBUG_ENTRIES (1 << 16)
-
-static int __init xtensa_dma_init(void)
-{
-	dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
-	return 0;
-}
-fs_initcall(xtensa_dma_init);
@@ -20,6 +20,7 @@
 #include <linux/sizes.h>
 #include <linux/limits.h>
 #include <linux/clk/clk-conf.h>
+#include <linux/platform_device.h>

 #include <asm/irq.h>
@@ -193,14 +194,16 @@ static const struct dev_pm_ops amba_pm = {
 /*
  * Primecells are part of the Advanced Microcontroller Bus Architecture,
  * so we call the bus "amba".
+ * DMA configuration for platform and AMBA bus is same. So here we reuse
+ * platform's DMA config routine.
  */
 struct bus_type amba_bustype = {
 	.name		= "amba",
 	.dev_groups	= amba_dev_groups,
 	.match		= amba_match,
 	.uevent		= amba_uevent,
+	.dma_configure	= platform_dma_configure,
 	.pm		= &amba_pm,
-	.force_dma	= true,
 };

 static int __init amba_init(void)
...
@@ -329,36 +329,13 @@ void dma_common_free_remap(void *cpu_addr, size_t size, unsigned long vm_flags)
 #endif

 /*
- * Common configuration to enable DMA API use for a device
+ * enables DMA API use for a device
  */
-
-#include <linux/pci.h>
-
 int dma_configure(struct device *dev)
 {
-	struct device *bridge = NULL, *dma_dev = dev;
-	enum dev_dma_attr attr;
-	int ret = 0;
-
-	if (dev_is_pci(dev)) {
-		bridge = pci_get_host_bridge_device(to_pci_dev(dev));
-		dma_dev = bridge;
-		if (IS_ENABLED(CONFIG_OF) && dma_dev->parent &&
-		    dma_dev->parent->of_node)
-			dma_dev = dma_dev->parent;
-	}
-
-	if (dma_dev->of_node) {
-		ret = of_dma_configure(dev, dma_dev->of_node);
-	} else if (has_acpi_companion(dma_dev)) {
-		attr = acpi_get_dma_attr(to_acpi_device_node(dma_dev->fwnode));
-		if (attr != DEV_DMA_NOT_SUPPORTED)
-			ret = acpi_dma_configure(dev, attr);
-	}
-
-	if (bridge)
-		pci_put_host_bridge_device(bridge);
-
-	return ret;
+	if (dev->bus->dma_configure)
+		return dev->bus->dma_configure(dev);
+	return 0;
 }

 void dma_deconfigure(struct device *dev)
...
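dma_configure() is now a thin dispatcher; the bus-specific knowledge moves into the bus drivers themselves via the new bus_type method. A minimal hypothetical bus implementing the hook (the mybus_* names are illustrative; real buses such as platform, amba, and pci supply their own versions in this series):

static int mybus_dma_configure(struct device *dev)
{
	/* force_dma == true: set up DMA even without a dma-ranges property */
	if (dev->of_node)
		return of_dma_configure(dev, dev->of_node, true);
	return 0;
}

struct bus_type mybus_bus_type = {
	.name		= "mybus",
	.dma_configure	= mybus_dma_configure,
};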
@@ -1130,6 +1130,22 @@ int platform_pm_restore(struct device *dev)

 #endif /* CONFIG_HIBERNATE_CALLBACKS */

+int platform_dma_configure(struct device *dev)
+{
+	enum dev_dma_attr attr;
+	int ret = 0;
+
+	if (dev->of_node) {
+		ret = of_dma_configure(dev, dev->of_node, true);
+	} else if (has_acpi_companion(dev)) {
+		attr = acpi_get_dma_attr(to_acpi_device_node(dev->fwnode));
+		if (attr != DEV_DMA_NOT_SUPPORTED)
+			ret = acpi_dma_configure(dev, attr);
+	}
+
+	return ret;
+}
+
 static const struct dev_pm_ops platform_dev_pm_ops = {
 	.runtime_suspend = pm_generic_runtime_suspend,
 	.runtime_resume = pm_generic_runtime_resume,
@@ -1141,8 +1157,8 @@ struct bus_type platform_bus_type = {
 	.dev_groups	= platform_dev_groups,
 	.match		= platform_match,
 	.uevent		= platform_uevent,
+	.dma_configure	= platform_dma_configure,
 	.pm		= &platform_dev_pm_ops,
-	.force_dma	= true,
 };
 EXPORT_SYMBOL_GPL(platform_bus_type);
...
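The old PCI bridge walk removed from dma_configure() above moves into a pci_bus_type.dma_configure method in one of the hunks not shown here. Roughly, it is the same logic with the new force_dma argument passed explicitly (a sketch of the drivers/pci/pci-driver.c change in this series, not the verbatim source):

static int pci_dma_configure(struct device *dev)
{
	struct device *bridge = pci_get_host_bridge_device(to_pci_dev(dev));
	int ret = 0;

	if (IS_ENABLED(CONFIG_OF) && bridge->parent &&
	    bridge->parent->of_node) {
		/* inherit DMA setup from the host bridge's DT node */
		ret = of_dma_configure(dev, bridge->parent->of_node, true);
	} else if (has_acpi_companion(bridge)) {
		enum dev_dma_attr attr =
			acpi_get_dma_attr(to_acpi_device_node(bridge->fwnode));

		if (attr != DEV_DMA_NOT_SUPPORTED)
			ret = acpi_dma_configure(dev, attr);
	}

	pci_put_host_bridge_device(bridge);
	return ret;
}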
[The remaining 41 file diffs in this merge are collapsed and not shown.]