Commit ba1f08f1 authored by Andrew Morton, committed by Linus Torvalds

[PATCH] readpage-vs-invalidate fix

A while ago we merged a patch which tried to solve a problem wherein a
concurrent read() and invalidate_inode_pages() would cause the read() to
return -EIO because invalidate cleared PageUptodate() at the wrong time.

That patch tests for (page_count(page) != 2) in invalidate_complete_page() and
bails out when the count is anything other than 2.

Problem is, the page may be sitting in the per-cpu LRU front-ends over in
lru_cache_add().  This elevates the refcount until the page is spilled onto
the LRU for real.  That causes a false positive in invalidate_complete_page(),
so the page does not get invalidated, which screws up the logic in my new
O_DIRECT-vs-buffered coherency fix.
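
For reference, the per-cpu front-end looks roughly like this (a sketch of the
2.6-era lru_cache_add(); the names match the kernel of the time, but treat the
details as illustrative rather than exact).  The page_cache_get() here is the
extra reference that can make page_count() read higher than the 2 which
invalidate_complete_page() expects:

        /* One pagevec per CPU buffers pages on their way onto the LRU. */
        static DEFINE_PER_CPU(struct pagevec, lru_add_pvecs);

        void lru_cache_add(struct page *page)
        {
                struct pagevec *pvec = &get_cpu_var(lru_add_pvecs);

                /* This reference is only dropped when the pagevec is drained. */
                page_cache_get(page);
                if (!pagevec_add(pvec, page))
                        __pagevec_lru_add(pvec);  /* pagevec full: spill onto the LRU */
                put_cpu_var(lru_add_pvecs);
        }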

So let's solve the invalidate-vs-read race in a different manner.  Over on the
read() side, add an explicit check to see whether the page was invalidated
(first hunk below).  If so, just drop it on the floor and redo the read from
scratch.

Note that only do_generic_mapping_read() needs treatment.  filemap_nopage(),
filemap_getpage() and read_cache_page() are already doing the
oh-it-was-invalidated-so-try-again thing.
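
The shape of that try-again path is roughly the following (a hypothetical
helper written purely for illustration; find_page_retry_invalidate() is not a
real kernel function, and the real callers open-code this logic):

        /*
         * Illustrative only: look a page up and, if it was invalidated while
         * we were not holding its lock, drop it and start over instead of
         * treating the stale !PageUptodate state as an I/O error.
         */
        static struct page *find_page_retry_invalidate(struct address_space *mapping,
                                                       unsigned long index)
        {
                struct page *page;

        retry:
                page = find_get_page(mapping, index);
                if (!page || PageUptodate(page))
                        return page;

                lock_page(page);
                if (page->mapping == NULL) {
                        /* invalidate_inode_pages() got it: release and retry */
                        unlock_page(page);
                        page_cache_release(page);
                        goto retry;
                }
                unlock_page(page);
                return page;  /* caller then issues ->readpage() or gives up */
        }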
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent 7d2b8702
mm/filemap.c
@@ -802,11 +802,21 @@ void do_generic_mapping_read(struct address_space *mapping,
                         goto readpage_error;
                 if (!PageUptodate(page)) {
-                        wait_on_page_locked(page);
+                        lock_page(page);
                         if (!PageUptodate(page)) {
+                                if (page->mapping == NULL) {
+                                        /*
+                                         * invalidate_inode_pages got it
+                                         */
+                                        unlock_page(page);
+                                        page_cache_release(page);
+                                        goto find_page;
+                                }
+                                unlock_page(page);
                                 error = -EIO;
                                 goto readpage_error;
                         }
+                        unlock_page(page);
                 }
                 /*
...
mm/truncate.c
@@ -82,10 +82,6 @@ invalidate_complete_page(struct address_space *mapping, struct page *page)
         }
         BUG_ON(PagePrivate(page));
-        if (page_count(page) != 2) {
-                spin_unlock_irq(&mapping->tree_lock);
-                return 0;
-        }
         __remove_from_page_cache(page);
         spin_unlock_irq(&mapping->tree_lock);
         ClearPageUptodate(page);
...