Commit 061a785a authored by Dave Jiang, committed by Jon Mason

ntb: Force physically contiguous allocation of rx ring buffers

Before the patch that removed CONFIG_DMA_REMAP, physical addresses
under IOVA on the x86 platform were mapped contiguously as a side
effect. The NTB rx buffer ring is a single-chunk DMA buffer that is
allocated against the NTB PCI device. If the receive side is using a
DMA device, the buffers are remapped against the DMA device before
being submitted via the dmaengine API. This scheme becomes a problem
when the physical memory is discontiguous: when dma_map_page() is
called on the kernel virtual address returned by dma_alloc_coherent(),
the new IOVA mapping no longer covers all of the allocated physical
memory, because that memory is discontiguous. Change
dma_alloc_coherent() to dma_alloc_attrs() in order to pass the
DMA_ATTR_FORCE_CONTIGUOUS attribute. This is the most practical fix
for the circumstance. A potential future solution would be for the DMA
mapping API to provide a way to alias an existing IOVA mapping to a
new device.
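
The receive-side remap described above looks roughly like the sketch
below. This is a minimal illustration rather than the exact
ntb_transport code: map_rx_chunk and its parameter names are
hypothetical, and the rx ring buffer is assumed to be the copy source
for the DMA engine (hence DMA_TO_DEVICE).

#include <linux/dma-mapping.h>
#include <linux/mm.h>

/*
 * Map a chunk of the rx ring buffer against the DMA engine's device.
 * "offset" is a kernel virtual address inside the buffer that
 * dma_alloc_attrs() returned for the NTB device.
 */
static dma_addr_t map_rx_chunk(struct device *dma_dev, void *offset,
                               size_t len)
{
        dma_addr_t src;

        /*
         * dma_map_page() starts from the struct page backing the kvaddr
         * and assumes the pages that follow are physically contiguous
         * for the whole of "len". Without DMA_ATTR_FORCE_CONTIGUOUS at
         * allocation time that assumption can be false, and the mapping
         * would cover the wrong physical memory.
         */
        src = dma_map_page(dma_dev, virt_to_page(offset),
                           offset_in_page(offset), len, DMA_TO_DEVICE);
        if (dma_mapping_error(dma_dev, src))
                return DMA_MAPPING_ERROR;

        return src;
}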

This fix does not address a bug in the patch named by the Fixes tag;
it fixes the issue that arose in the ntb_transport driver on x86
platforms after that patch was applied.
Reported-by: Jerry Dai <jerry.dai@intel.com>
Fixes: f5ff79fd ("dma-mapping: remove CONFIG_DMA_REMAP")
Tested-by: Jerry Dai <jerry.dai@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
parent e51aded9
@@ -809,16 +809,29 @@ static void ntb_free_mw(struct ntb_transport_ctx *nt, int num_mw)
 }
 
 static int ntb_alloc_mw_buffer(struct ntb_transport_mw *mw,
-                               struct device *dma_dev, size_t align)
+                               struct device *ntb_dev, size_t align)
 {
         dma_addr_t dma_addr;
         void *alloc_addr, *virt_addr;
         int rc;
 
-        alloc_addr = dma_alloc_coherent(dma_dev, mw->alloc_size,
-                                        &dma_addr, GFP_KERNEL);
+        /*
+         * The buffer here is allocated against the NTB device. The reason to
+         * use dma_alloc_*() call is to allocate a large IOVA contiguous buffer
+         * backing the NTB BAR for the remote host to write to. During receive
+         * processing, the data is being copied out of the receive buffer to
+         * the kernel skbuff. When a DMA device is being used, dma_map_page()
+         * is called on the kvaddr of the receive buffer (from dma_alloc_*())
+         * and remapped against the DMA device. It appears to be a double
+         * DMA mapping of buffers, but first is mapped to the NTB device and
+         * second is to the DMA device. DMA_ATTR_FORCE_CONTIGUOUS is necessary
+         * in order for the later dma_map_page() to not fail.
+         */
+        alloc_addr = dma_alloc_attrs(ntb_dev, mw->alloc_size,
+                                     &dma_addr, GFP_KERNEL,
+                                     DMA_ATTR_FORCE_CONTIGUOUS);
         if (!alloc_addr) {
-                dev_err(dma_dev, "Unable to alloc MW buff of size %zu\n",
+                dev_err(ntb_dev, "Unable to alloc MW buff of size %zu\n",
                         mw->alloc_size);
                 return -ENOMEM;
         }
@@ -847,7 +860,7 @@ static int ntb_alloc_mw_buffer(struct ntb_transport_mw *mw,
         return 0;
 
 err:
-        dma_free_coherent(dma_dev, mw->alloc_size, alloc_addr, dma_addr);
+        dma_free_coherent(ntb_dev, mw->alloc_size, alloc_addr, dma_addr);
         return rc;
 }
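
One note on the alloc/free pairing in the hunks above: in
<linux/dma-mapping.h>, dma_alloc_coherent() and dma_free_coherent() are
thin inline wrappers around the *_attrs() variants, so the unchanged
dma_free_coherent() on the error path still releases the buffer
obtained from dma_alloc_attrs(). A minimal sketch of the pairing, with
hypothetical helper names:

#include <linux/dma-mapping.h>
#include <linux/gfp.h>

/* Hypothetical helpers mirroring the alloc/free pair in the diff. */
static void *ntb_rx_buf_alloc(struct device *ntb_dev, size_t size,
                              dma_addr_t *dma_addr)
{
        /*
         * Force a physically contiguous backing so that a later
         * dma_map_page() against a DMA engine covers the whole buffer.
         */
        return dma_alloc_attrs(ntb_dev, size, dma_addr, GFP_KERNEL,
                               DMA_ATTR_FORCE_CONTIGUOUS);
}

static void ntb_rx_buf_free(struct device *ntb_dev, size_t size,
                            void *vaddr, dma_addr_t dma_addr)
{
        /* Equivalent to dma_free_attrs(ntb_dev, size, vaddr, dma_addr, 0). */
        dma_free_coherent(ntb_dev, size, vaddr, dma_addr);
}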