Commit 26eecbf3 authored by Marcelo Tosatti, committed by Linus Torvalds

[PATCH] vm: pageout throttling

With silly pageout testcases it is possible to place huge amounts of memory
under I/O.  With a large request queue (CFQ uses 8192 requests) it is
possible to place _all_ memory under I/O at the same time.

This means that all memory is pinned and unreclaimable and the VM gets
upset and goes oom.

The patch limits the amount of memory which is under pageout writeout to be
a little more than the amount of memory at which balance_dirty_pages()
callers will synchronously throttle.

This means that heavy pageout activity can starve heavy writeback activity
completely, but heavy writeback activity will not cause starvation of
pageout: we don't want a simple `dd' to cause excessive latencies in page
reclaim.
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent cf904163
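
For a rough sense of the numbers the message above describes, here is a minimal
userspace sketch (not part of the patch). It assumes that get_dirty_limits()
derives dirty_thresh as vm.dirty_ratio percent of reclaimable memory, as
2.6-era kernels did; the memory size and ratio below are purely illustrative.

#include <stdio.h>

int main(void)
{
        long total_pages = 262144;      /* illustrative: 1 GiB of 4 KiB pages */
        int dirty_ratio = 40;           /* illustrative vm.dirty_ratio value */

        /* point at which balance_dirty_pages() callers start to block */
        long dirty_thresh = total_pages * dirty_ratio / 100;

        /* the patch throttles pageout only ~10% above that point */
        long pageout_thresh = dirty_thresh + dirty_thresh / 10;

        printf("writers throttle at       %ld pages under I/O\n", dirty_thresh);
        printf("page reclaim throttles at %ld pages under I/O\n", pageout_thresh);
        return 0;
}

With these example values a heavy writer blocks in balance_dirty_pages() once
about 104857 pages are dirty or under I/O, while page reclaim is only held up
by the new throttle_vm_writeout() once nr_unstable + nr_writeback exceeds
about 115342 pages, so ordinary writeback cannot by itself push reclaim into
its throttle.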
@@ -86,6 +86,7 @@ static inline void wait_on_inode(struct inode *inode)
 int wakeup_bdflush(long nr_pages);
 void laptop_io_completion(void);
 void laptop_sync_completion(void);
+void throttle_vm_writeout(void);
 
 /* These are exported to sysctl. */
 extern int dirty_background_ratio;
@@ -289,6 +289,28 @@ void balance_dirty_pages_ratelimited(struct address_space *mapping)
 }
 EXPORT_SYMBOL(balance_dirty_pages_ratelimited);
 
+void throttle_vm_writeout(void)
+{
+        struct writeback_state wbs;
+        long background_thresh;
+        long dirty_thresh;
+
+        for ( ; ; ) {
+                get_dirty_limits(&wbs, &background_thresh, &dirty_thresh, NULL);
+
+                /*
+                 * Boost the allowable dirty threshold a bit for page
+                 * allocators so they don't get DoS'ed by heavy writers
+                 */
+                dirty_thresh += dirty_thresh / 10;      /* wheeee... */
+
+                if (wbs.nr_unstable + wbs.nr_writeback <= dirty_thresh)
+                        break;
+                blk_congestion_wait(WRITE, HZ/10);
+        }
+}
+
+
 /*
  * writeback at least _min_pages, and keep writing until the amount of dirty
  * memory is less than the background threshold, or until we're all clean.
@@ -828,6 +828,8 @@ shrink_zone(struct zone *zone, struct scan_control *sc)
                         break;
                 }
         }
+
+        throttle_vm_writeout();
 }
 
 /*