Commit c484d410 authored by Hugh Dickins, committed by Linus Torvalds

[PATCH] mm: free_pages_and_swap_cache opt

Minor optimization (though it doesn't help in the PREEMPT case, severely
constrained by small ZAP_BLOCK_SIZE).  free_pages_and_swap_cache works in
chunks of 16, calling release_pages which works in chunks of PAGEVEC_SIZE.
But PAGEVEC_SIZE was dropped from 16 to 14 in 2.6.10, so we're now doing more
spin_lock_irq'ing than necessary: use PAGEVEC_SIZE throughout.
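
To see why 16-page chunks now cost extra spin_lock_irq cycles, note that release_pages() takes the LRU lock roughly once per PAGEVEC_SIZE-sized batch it is handed, so a chunk of 16 splits into a batch of 14 plus a batch of 2. The userspace sketch below is an illustration only, not kernel code: the one-lock-per-batch accounting is a simplified model of release_pages(), and the page count of 224 is an arbitrary example.

/* Illustration only: count how many lock-protected batches a given
 * outer chunk size produces when the inner batching unit is PAGEVEC_SIZE.
 * Simplified model of release_pages(); not kernel code. */
#include <stdio.h>

#define PAGEVEC_SIZE 14		/* value since 2.6.10 (was 16 before) */

static int lock_rounds(int nr, int chunk)
{
	int rounds = 0;

	while (nr) {
		int todo = nr < chunk ? nr : chunk;

		/* each outer chunk splits into ceil(todo / PAGEVEC_SIZE)
		 * inner batches, assumed one lock/unlock cycle per batch */
		rounds += (todo + PAGEVEC_SIZE - 1) / PAGEVEC_SIZE;
		nr -= todo;
	}
	return rounds;
}

int main(void)
{
	int nr = 224;	/* arbitrary example: 16 * 14 pages */

	printf("chunk=16:           %d lock rounds\n", lock_rounds(nr, 16));
	printf("chunk=PAGEVEC_SIZE: %d lock rounds\n", lock_rounds(nr, PAGEVEC_SIZE));
	return 0;
}

With 224 pages this prints 28 lock rounds for chunks of 16 against 16 rounds for chunks of PAGEVEC_SIZE, which is the saving the patch is after.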
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent 161599ff
@@ -14,6 +14,7 @@
 #include <linux/pagemap.h>
 #include <linux/buffer_head.h>
 #include <linux/backing-dev.h>
+#include <linux/pagevec.h>
 
 #include <asm/pgtable.h>
@@ -272,12 +273,11 @@ void free_page_and_swap_cache(struct page *page)
  */
 void free_pages_and_swap_cache(struct page **pages, int nr)
 {
-	int chunk = 16;
 	struct page **pagep = pages;
 
 	lru_add_drain();
 	while (nr) {
-		int todo = min(chunk, nr);
+		int todo = min(nr, PAGEVEC_SIZE);
 		int i;
 
 		for (i = 0; i < todo; i++)