Commit 1e74f300 authored by FUJITA Tomonori, committed by Ingo Molnar

swiotlb: use coherent_dma_mask in alloc_coherent

Impact: fix DMA buffer allocation coherency bug in certain configs

This patch fixes swiotlb to use dev->coherent_dma_mask in
swiotlb_alloc_coherent().

coherent_dma_mask is a subset of dma_mask (equal to it most of
the time), enumerating the address range that a given device
is able to DMA to/from in a cache-coherent way.
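
For illustration only (this sketch is not part of the patch; the probe function and error handling are hypothetical), this is how a PCI driver typically ends up with the two masks differing: 64-bit streaming DMA, but coherent allocations restricted to 32 bits.

	#include <linux/pci.h>
	#include <linux/dma-mapping.h>

	static int example_probe(struct pci_dev *pdev, const struct pci_device_id *id)
	{
		/* Streaming DMA may target the full 64-bit address space. */
		if (pci_set_dma_mask(pdev, DMA_64BIT_MASK))
			return -EIO;

		/*
		 * Coherent buffers must stay below 4GB, so after this call
		 * dev->coherent_dma_mask is smaller than dev->dma_mask.
		 */
		if (pci_set_consistent_dma_mask(pdev, DMA_32BIT_MASK))
			return -EIO;

		return 0;
	}

A device set up like this is exactly the case the paragraphs below are about.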

Currently, however, swiotlb_alloc_coherent() implicitly uses dev->dma_mask
via address_needs_mapping(), when alloc_coherent is really supposed to use
coherent_dma_mask.

This bug could break drivers that use a smaller coherent_dma_mask than
dma_mask (the current code happens to work for the majority of drivers,
which use the same value for both masks).
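
To make the failure concrete, here is a small, self-contained userspace illustration (hypothetical bus address and size, not kernel code) of the capability check swiotlb effectively performs: a buffer just above 4GB is reachable under a 64-bit dma_mask but not under a 32-bit coherent_dma_mask, so only the fixed check falls back to the swiotlb bounce buffer.

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		uint64_t dma_mask      = 0xffffffffffffffffULL; /* dev->dma_mask (64-bit) */
		uint64_t coherent_mask = 0x00000000ffffffffULL; /* dev->coherent_dma_mask (32-bit) */
		uint64_t buf  = 0x120000000ULL;                 /* bus address above 4GB */
		uint64_t size = 4096;

		/* Old behaviour: checked against dma_mask, so the buffer is accepted. */
		printf("reachable via dma_mask:          %d\n", buf + size <= dma_mask);

		/*
		 * Fixed behaviour: checked against coherent_dma_mask, so the
		 * buffer is rejected and swiotlb_alloc_coherent() falls back
		 * to its bounce buffer below 4GB instead.
		 */
		printf("reachable via coherent_dma_mask: %d\n", buf + size <= coherent_mask);
		return 0;
	}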
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: tony.luck@intel.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
parent e47411b1
@@ -467,9 +467,13 @@ swiotlb_alloc_coherent(struct device *hwdev, size_t size,
 	dma_addr_t dev_addr;
 	void *ret;
 	int order = get_order(size);
+	u64 dma_mask = DMA_32BIT_MASK;
+
+	if (hwdev && hwdev->coherent_dma_mask)
+		dma_mask = hwdev->coherent_dma_mask;
 
 	ret = (void *)__get_free_pages(flags, order);
-	if (ret && address_needs_mapping(hwdev, virt_to_bus(ret), size)) {
+	if (ret && !is_buffer_dma_capable(dma_mask, virt_to_bus(ret), size)) {
 		/*
 		 * The allocated memory isn't reachable by the device.
 		 * Fall back on swiotlb_map_single().
@@ -493,9 +497,9 @@ swiotlb_alloc_coherent(struct device *hwdev, size_t size,
 	dev_addr = virt_to_bus(ret);
 
 	/* Confirm address can be DMA'd by device */
-	if (address_needs_mapping(hwdev, dev_addr, size)) {
+	if (!is_buffer_dma_capable(dma_mask, dev_addr, size)) {
 		printk("hwdev DMA mask = 0x%016Lx, dev_addr = 0x%016Lx\n",
-		       (unsigned long long)*hwdev->dma_mask,
+		       (unsigned long long)dma_mask,
 		       (unsigned long long)dev_addr);
 		/* DMA_TO_DEVICE to avoid memcpy in unmap_single */
...
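
For reference, is_buffer_dma_capable(), the helper this patch switches to, is at the time of this series roughly the following (declared in include/linux/dma-mapping.h); it simply checks that the whole buffer fits below the given mask:

	static inline int is_buffer_dma_capable(u64 mask, dma_addr_t addr, size_t size)
	{
		return addr + size <= mask;
	}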