Commit 80b4f81a authored by Hong Xu, committed by Artem Bityutskiy

mtd: atmel_nand: use CPU I/O when buffer is in vmalloc(ed) region

The previous approach of handling a vmalloc(ed) region by walking
through its pages does not actually work well. Just fall back to
CPU I/O when the buffer address is at or above `high_memory'.
Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Hong Xu <hong.xu@atmel.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
parent 9d51567e
@@ -209,22 +209,8 @@ static int atmel_nand_dma_op(struct mtd_info *mtd, void *buf, int len,
 	int err = -EIO;
 	enum dma_data_direction dir = is_read ? DMA_FROM_DEVICE : DMA_TO_DEVICE;
-	if (buf >= high_memory) {
-		struct page *pg;
-
-		if (((size_t)buf & PAGE_MASK) !=
-		    ((size_t)(buf + len - 1) & PAGE_MASK)) {
-			dev_warn(host->dev, "Buffer not fit in one page\n");
-			goto err_buf;
-		}
-
-		pg = vmalloc_to_page(buf);
-		if (pg == 0) {
-			dev_err(host->dev, "Failed to vmalloc_to_page\n");
-			goto err_buf;
-		}
-		p = page_address(pg) + ((size_t)buf & ~PAGE_MASK);
-	}
+	if (buf >= high_memory)
+		goto err_buf;
 	dma_dev = host->dma_chan->device;
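
For context, here is a minimal, hedged sketch (not the driver's actual code) of how such a check combines with a caller-side fallback: the DMA helper refuses any buffer at or above high_memory (e.g. a vmalloc'ed buffer, which is only virtually contiguous), and the read path then falls back to CPU (programmed) I/O. The example_* names and the struct layout are illustrative assumptions, not part of atmel_nand.c.

	#include <linux/mm.h>		/* high_memory */
	#include <linux/io.h>		/* memcpy_fromio() */
	#include <linux/types.h>

	struct example_host {
		void __iomem *io_base;	/* NAND data register window (assumed) */
	};

	/* Refuse DMA for buffers outside the direct-mapped (lowmem) region. */
	static int example_dma_read(struct example_host *host, void *buf, int len)
	{
		if (buf >= high_memory)
			return -EIO;	/* vmalloc'ed buffer: let the caller use CPU I/O */

		/* ... dma_map_single() and dmaengine transfer would go here ... */
		return 0;
	}

	/* Read path: try DMA first, fall back to CPU I/O on any failure. */
	static void example_read_buf(struct example_host *host, u8 *buf, int len)
	{
		if (example_dma_read(host, buf, len) == 0)
			return;

		memcpy_fromio(buf, host->io_base, len);
	}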