Commit 22a7f12b authored by Cody P Schafer, committed by Linus Torvalds

mm/page_alloc: when handling percpu_pagelist_fraction, don't needlessly recalculate high

Simply moves the calculation of the new 'high' value outside the
for_each_possible_cpu() loop, as it does not depend on the cpu.
Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
Cc: Gilad Ben-Yossef <gilad@benyossef.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@gmail.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 0a647f38
@@ -5575,7 +5575,6 @@ int lowmem_reserve_ratio_sysctl_handler(ctl_table *table, int write,
  * cpu. It is the fraction of total pages in each zone that a hot per cpu pagelist
  * can have before it gets flushed back to buddy allocator.
  */
 int percpu_pagelist_fraction_sysctl_handler(ctl_table *table, int write,
 	void __user *buffer, size_t *length, loff_t *ppos)
 {
@@ -5589,13 +5588,12 @@ int percpu_pagelist_fraction_sysctl_handler(ctl_table *table, int write,
 	mutex_lock(&pcp_batch_high_lock);
 	for_each_populated_zone(zone) {
-		for_each_possible_cpu(cpu) {
-			unsigned long high;
-			high = zone->managed_pages / percpu_pagelist_fraction;
+		unsigned long high;
+		high = zone->managed_pages / percpu_pagelist_fraction;
+
+		for_each_possible_cpu(cpu)
 			setup_pagelist_highmark(
 				per_cpu_ptr(zone->pageset, cpu), high);
-		}
 	}
 	mutex_unlock(&pcp_batch_high_lock);
 	return 0;
 }
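For reference, a minimal standalone sketch of the same refactor follows: the loop-invariant 'high' value is computed once per zone and then applied to every per-CPU pageset, instead of being recalculated on each iteration. The struct layouts, the set_high() helper, and NR_CPUS_DEMO are illustrative stand-ins for this sketch, not the kernel's actual types or APIs.

/*
 * Minimal userspace sketch (not kernel code) of the refactor above:
 * a value that does not depend on the loop variable is computed once,
 * before the per-CPU loop, rather than on every iteration.
 */
#include <stdio.h>

#define NR_CPUS_DEMO 4

struct pcp  { unsigned long high; };
struct zone { unsigned long managed_pages; struct pcp pageset[NR_CPUS_DEMO]; };

static void set_high(struct pcp *p, unsigned long high)
{
	p->high = high;
}

int main(void)
{
	struct zone zone = { .managed_pages = 1UL << 20 };
	unsigned long fraction = 8;

	/* 'high' is loop-invariant: compute it once per zone... */
	unsigned long high = zone.managed_pages / fraction;

	/* ...then apply it to every per-CPU pageset. */
	for (int cpu = 0; cpu < NR_CPUS_DEMO; cpu++)
		set_high(&zone.pageset[cpu], high);

	printf("pcp high = %lu pages\n", zone.pageset[0].high);
	return 0;
}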