Commit 4573027b authored by Robin Murphy, committed by Mauro Carvalho Chehab

media: videobuf-dma-sg: Fix dma_{sync,unmap}_sg() calls

This reverts commit fc7f8fd4.

Whilst the rationale for the above commit was in general correct, i.e.
that users *consuming* the DMA addresses should rely on sglen rather
than num_pages, it has always been the case that the DMA API itself
still requires that dma_{sync,unmap}_sg() are called with the original
number of entries as passed to dma_map_sg(), not the number of mapped
entries it returned. Thus the particular changes made in that patch
were erroneous.
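
To illustrate the rule (a minimal sketch, not code from this driver; struct
example_buf, its fields and the example_* functions are hypothetical):

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

struct example_buf {
	struct scatterlist *sglist;	/* built over the buffer's pages */
	int nr_pages;			/* entries passed to dma_map_sg() */
	int sglen;			/* mapped entries it returned */
};

static int example_map(struct device *dev, struct example_buf *buf)
{
	/* dma_map_sg() may coalesce entries; it returns the mapped count */
	buf->sglen = dma_map_sg(dev, buf->sglist, buf->nr_pages,
				DMA_FROM_DEVICE);
	if (!buf->sglen)
		return -EIO;
	/*
	 * Hardware descriptors are built from the first sglen entries,
	 * via sg_dma_address()/sg_dma_len().
	 */
	return 0;
}

static void example_unmap(struct device *dev, struct example_buf *buf)
{
	/*
	 * dma_unmap_sg() (and likewise dma_sync_sg_*()) must be given the
	 * original entry count passed to dma_map_sg(), i.e. nr_pages,
	 * not the sglen it returned.
	 */
	dma_unmap_sg(dev, buf->sglist, buf->nr_pages, DMA_FROM_DEVICE);
}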

At worst this might lead to data loss at the tail end of mapped buffers
on non-coherent hardware, while at best it's an example of incorrect
DMA API usage which has proven to mislead readers.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
parent c5913584
@@ -334,7 +334,7 @@ int videobuf_dma_unmap(struct device *dev, struct videobuf_dmabuf *dma)
 	if (!dma->sglen)
 		return 0;
-	dma_unmap_sg(dev, dma->sglist, dma->sglen, dma->direction);
+	dma_unmap_sg(dev, dma->sglist, dma->nr_pages, dma->direction);
 	vfree(dma->sglist);
 	dma->sglist = NULL;
@@ -581,7 +581,7 @@ static int __videobuf_sync(struct videobuf_queue *q,
 	MAGIC_CHECK(mem->dma.magic, MAGIC_DMABUF);
 	dma_sync_sg_for_cpu(q->dev, mem->dma.sglist,
-			    mem->dma.sglen, mem->dma.direction);
+			    mem->dma.nr_pages, mem->dma.direction);
 	return 0;
 }
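
For context, the part of the reverted change that remains correct is the
consumer side: code that programs hardware with the mapped addresses should
walk the sglen entries returned by dma_map_sg(). A hypothetical consumer loop
(not from this driver), using the videobuf_dmabuf fields seen in the hunks
above, might look like:

	struct scatterlist *sg;
	int i;

	for_each_sg(dma->sglist, sg, dma->sglen, i) {
		/* program the device with sg_dma_address(sg) / sg_dma_len(sg) */
	}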