Commit 6b392717 authored by Rusty Russell

lguest: map Switcher below fixmap.

Now we've adjusted all the code, we can simply set switcher_addr to
wherever it needs to go below the fixmaps, rather than asserting that
it should be so.

With large NR_CPUS and PAE, people were hitting the "mapping switcher
would thwack fixmap" message.
Reported-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
parent 6d0cda93
@@ -14,12 +14,6 @@
 /* Page for Switcher text itself, then two pages per cpu */
 #define TOTAL_SWITCHER_PAGES (1 + 2 * nr_cpu_ids)
 
-/* We map at -4M (-2M for PAE) for ease of mapping (one PTE page). */
-#ifdef CONFIG_X86_PAE
-#define SWITCHER_ADDR 0xFFE00000
-#else
-#define SWITCHER_ADDR 0xFFC00000
-#endif
 
 /* Where we map the Switcher, in both Host and Guest. */
 extern unsigned long switcher_addr;
@@ -83,18 +83,13 @@ static __init int map_switcher(void)
 		}
 	}
 
-	switcher_addr = SWITCHER_ADDR;
-
 	/*
-	 * First we check that the Switcher won't overlap the fixmap area at
-	 * the top of memory.  It's currently nowhere near, but it could have
-	 * very strange effects if it ever happened.
+	 * We place the Switcher underneath the fixmap area, which is the
+	 * highest virtual address we can get.  This is important, since we
+	 * tell the Guest it can't access this memory, so we want its ceiling
+	 * as high as possible.
 	 */
-	if (switcher_addr + (TOTAL_SWITCHER_PAGES+1)*PAGE_SIZE > FIXADDR_START){
-		err = -ENOMEM;
-		printk("lguest: mapping switcher would thwack fixmap\n");
-		goto free_pages;
-	}
+	switcher_addr = FIXADDR_START - (TOTAL_SWITCHER_PAGES+1)*PAGE_SIZE;
 
 	/*
 	 * Now we reserve the "virtual memory area" we want.  We might