Commit 2cbea1d3 authored by Wu Fengguang, committed by Linus Torvalds

readahead: trigger mmap sequential readahead on PG_readahead

Previously, mmap sequential readahead was triggered by updating
ra->prev_pos on each page fault and comparing it with the current page
offset.

That costs dirtying a cache line on each _minor_ page fault.  So remove
the ra->prev_pos recording, and instead tag the page with PG_readahead to
trigger possible sequential readahead.  This is not only simpler, but also
works more reliably and reduces cache line bouncing on concurrent page
faults on a shared struct file.

In the mosbench exim benchmark, which does multi-threaded page faults on a
shared struct file, the ra->mmap_miss and ra->prev_pos updates were found
to cause excessive cache line bouncing on tmpfs, even though readahead is
disabled entirely there (shmem_backing_dev_info.ra_pages == 0).
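
For reference, the PG_readahead tag is consumed on a later fault roughly as
sketched below.  This is illustrative only, modeled on the 2.6-era
do_async_mmap_readahead() in mm/filemap.c; the mmap_miss bookkeeping and
locking are omitted.

/*
 * Sketch, not part of this patch: how the PG_readahead tag is consumed.
 */
static void do_async_mmap_readahead(struct vm_area_struct *vma,
				    struct file_ra_state *ra,
				    struct file *file,
				    struct page *page,
				    pgoff_t offset)
{
	struct address_space *mapping = file->f_mapping;

	/* Random-read hint or disabled readahead: nothing to pipeline. */
	if (VM_RandomReadHint(vma) || !ra->ra_pages)
		return;

	/*
	 * Faulting on the marker page means the access pattern is still
	 * sequential, so kick off the next window asynchronously.
	 */
	if (PageReadahead(page))
		page_cache_async_readahead(mapping, ra, file,
					   page, offset, ra->ra_pages);
}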

Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Tested-by: Tim Chen <tim.c.chen@intel.com>
Reported-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 207d04ba
@@ -1559,8 +1559,7 @@ static void do_sync_mmap_readahead(struct vm_area_struct *vma,
 	if (!ra->ra_pages)
 		return;
 
-	if (VM_SequentialReadHint(vma) ||
-			offset - 1 == (ra->prev_pos >> PAGE_CACHE_SHIFT)) {
+	if (VM_SequentialReadHint(vma)) {
 		page_cache_sync_readahead(mapping, ra, file, offset,
 					  ra->ra_pages);
 		return;
@@ -1583,7 +1582,7 @@ static void do_sync_mmap_readahead(struct vm_area_struct *vma,
 	ra_pages = max_sane_readahead(ra->ra_pages);
 	ra->start = max_t(long, 0, offset - ra_pages / 2);
 	ra->size = ra_pages;
-	ra->async_size = 0;
+	ra->async_size = ra_pages / 4;
 	ra_submit(ra, mapping, file);
 }
 
@@ -1689,7 +1688,6 @@ int filemap_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 		return VM_FAULT_SIGBUS;
 	}
 
-	ra->prev_pos = (loff_t)offset << PAGE_CACHE_SHIFT;
 	vmf->page = page;
 	return ret | VM_FAULT_LOCKED;
 
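
The ra->async_size change in the second hunk is what arms the new trigger:
ra_submit() passes ra->async_size as the lookahead size, and the low-level
readahead loop tags PG_readahead on the page that many pages before the end
of the window.  The sketch below follows the shape of the 2.6-era
__do_page_cache_readahead() in mm/readahead.c; the function name is made up
for the sketch, and allocation failure and EOF handling are elided.

/*
 * Sketch, not the actual __do_page_cache_readahead(): shows only where
 * the PG_readahead tag is set while building the readahead window.
 */
static void readahead_mark_sketch(struct address_space *mapping,
				  pgoff_t offset, unsigned long nr_to_read,
				  unsigned long lookahead_size)
{
	LIST_HEAD(page_pool);
	struct page *page;
	unsigned long page_idx;

	for (page_idx = 0; page_idx < nr_to_read; page_idx++) {
		page = page_cache_alloc_cold(mapping);
		page->index = offset + page_idx;
		list_add(&page->lru, &page_pool);
		/*
		 * Tag the first page of the async tail.  With
		 * ra->async_size = ra_pages / 4 this sits one quarter of
		 * the window before its end; faulting on it later fires
		 * page_cache_async_readahead() for the next window.
		 */
		if (page_idx == nr_to_read - lookahead_size)
			SetPageReadahead(page);
	}
	/* The pool is then read in via mapping->a_ops->readpages(). */
}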