Commit c2bdee59 authored by Yinghai Lu, committed by H. Peter Anvin

x86, 64bit, mm: Make pgd next calculation consistent with pud/pmd

Calculate 'next' for the pgd level the same way we do for pud and pmd: round the
address down to the boundary, then add the region size.

Also, stop clamping 'next' to 'end'; pass 'end' straight down to phys_pud_init()
instead. The loop in phys_pud_init() stops at PTRS_PER_PUD anyway, so it handles
a possibly larger 'end' correctly.
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1359058816-7615-6-git-send-email-yinghai@kernel.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
parent b422a309
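For illustration, a minimal user-space sketch (not kernel code) of the two 'next'
expressions follows. The PGDIR_* constants are redefined locally with the usual
x86-64 4-level paging values purely for demonstration; in the kernel they come from
the paging headers. Both forms land on the same pgd boundary for any 'start', so the
functional change in this patch is dropping the clamp against 'end' and letting the
PTRS_PER_PUD bound in phys_pud_init() limit the walk.

/*
 * Standalone sketch, not kernel code: compares the old and new ways of
 * computing 'next'.  PGDIR_SHIFT/PGDIR_SIZE/PGDIR_MASK are reproduced
 * here with the usual x86-64 4-level paging values for illustration only.
 */
#include <stdio.h>

#define PGDIR_SHIFT	39
#define PGDIR_SIZE	(1UL << PGDIR_SHIFT)
#define PGDIR_MASK	(~(PGDIR_SIZE - 1))

int main(void)
{
	unsigned long start = 0x123456789000UL;	/* arbitrary, not pgd-aligned */

	/* old form: add the size first, then round down to a pgd boundary */
	unsigned long old_next = (start + PGDIR_SIZE) & PGDIR_MASK;

	/* new form: round down first, then add the size (pud/pmd style) */
	unsigned long new_next = (start & PGDIR_MASK) + PGDIR_SIZE;

	printf("old next: %#lx\nnew next: %#lx\n", old_next, new_next);
	return 0;
}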
@@ -530,9 +530,7 @@ kernel_physical_mapping_init(unsigned long start,
 		pgd_t *pgd = pgd_offset_k(start);
 		pud_t *pud;
 
-		next = (start + PGDIR_SIZE) & PGDIR_MASK;
-		if (next > end)
-			next = end;
+		next = (start & PGDIR_MASK) + PGDIR_SIZE;
 
 		if (pgd_val(*pgd)) {
 			pud = (pud_t *)pgd_page_vaddr(*pgd);
@@ -542,7 +540,7 @@ kernel_physical_mapping_init(unsigned long start,
 		}
 
 		pud = alloc_low_page();
-		last_map_addr = phys_pud_init(pud, __pa(start), __pa(next),
+		last_map_addr = phys_pud_init(pud, __pa(start), __pa(end),
 						 page_size_mask);
 
 		spin_lock(&init_mm.page_table_lock);