Commit fb5b4abe authored by Andrew Morton, committed by Linus Torvalds

[PATCH] vm: balance inactive zone refill rates

The current refill logic in refill_inactive_zone() takes an arbitrarily large
number of pages and chops it down to SWAP_CLUSTER_MAX*4, regardless of the
size of the zone.

This has the effect of reducing the amount of refilling of large zones
proportionately much more than of small zones.

We made this change in May 2003 and I'm damned if I remember why.  Let's put
it back so we don't truncate the refill count and see what happens.
parent 07a25779
...
@@ -756,17 +756,10 @@ shrink_zone(struct zone *zone, int max_scan, unsigned int gfp_mask,
 	 */
 	ratio = (unsigned long)SWAP_CLUSTER_MAX * zone->nr_active /
 				((zone->nr_inactive | 1) * 2);
 	atomic_add(ratio+1, &zone->nr_scan_active);
 	count = atomic_read(&zone->nr_scan_active);
 	if (count >= SWAP_CLUSTER_MAX) {
-		/*
-		 * Don't try to bring down too many pages in one attempt.
-		 * If this fails, the caller will increase `priority' and
-		 * we'll try again, with an increased chance of reclaiming
-		 * mapped memory.
-		 */
-		if (count > SWAP_CLUSTER_MAX * 4)
-			count = SWAP_CLUSTER_MAX * 4;
 		atomic_set(&zone->nr_scan_active, 0);
 		refill_inactive_zone(zone, count, ps);
 	}
...