Commit 91ed140d authored by Borislav Petkov, committed by Ingo Molnar

x86/asm: Make sure verify_cpu() has a good stack

04633df0 ("x86/cpu: Call verify_cpu() after having entered long mode too")
added the call to verify_cpu() for sanitizing CPU configuration.

The latter uses the stack minimally and it can happen that we land in
startup_64() directly from a 64-bit bootloader. Then we want to use our
own, known good stack.

Do that.

APs don't need this as the trampoline sets up a stack for them.
Reported-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mika Penttilä <mika.penttila@nextfour.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1459434062-31055-1-git-send-email-bp@alien8.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 60a0e203
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -65,6 +65,14 @@ startup_64:
 	 * tables and then reload them.
 	 */
 
+	/*
+	 * Setup stack for verify_cpu(). "-8" because stack_start is defined
+	 * this way, see below. Our best guess is a NULL ptr for stack
+	 * termination heuristics and we don't want to break anything which
+	 * might depend on it (kgdb, ...).
+	 */
+	leaq	(__end_init_task - 8)(%rip), %rsp
+
 	/* Sanitize CPU configuration */
 	call verify_cpu
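The effect of the "-8" can be illustrated outside the kernel. The following is a minimal user-space C sketch, not kernel code; the array name and FAKE_THREAD_SIZE are invented stand-ins. It parks the stack pointer one quadword below the end of a zero-initialized stack area, so the topmost slot keeps holding 0 and a frame walker that runs past the last frame reads a NULL and stops, which is the termination heuristic the comment alludes to.

#include <stdint.h>
#include <stdio.h>

#define FAKE_THREAD_SIZE 4096	/* stand-in for THREAD_SIZE */

/* Zero-initialized stand-in for the init task stack in .data..init_task. */
static uint64_t fake_init_stack[FAKE_THREAD_SIZE / sizeof(uint64_t)];

int main(void)
{
	uint64_t *end = fake_init_stack + FAKE_THREAD_SIZE / sizeof(uint64_t);
	uint64_t *sp  = end - 1;	/* mirrors the "- 8" in the leaq above */

	/*
	 * In the real boot path an x86 push writes below %rsp, so this
	 * topmost slot would never be overwritten; here it simply stays 0
	 * because the array is zero-initialized.
	 */
	printf("terminator slot %p holds %#llx\n",
	       (void *)sp, (unsigned long long)*sp);
	return 0;
}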
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -245,7 +245,9 @@
 #define INIT_TASK_DATA(align)					\
 	. = ALIGN(align);					\
-	*(.data..init_task)
+	VMLINUX_SYMBOL(__start_init_task) = .;			\
+	*(.data..init_task)					\
+	VMLINUX_SYMBOL(__end_init_task) = .;
 
 /*
  * Read only Data
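The hunk above brackets .data..init_task with start and end symbols so that head_64.S can reference the end of the init task area. The same bracketing idea can be demonstrated in user space, where GNU ld automatically provides __start_<section> and __stop_<section> symbols for orphan sections whose names are valid C identifiers. This is only an analogous sketch; the section name init_task_demo and the 8192-byte size are invented, and the kernel itself uses the explicit VMLINUX_SYMBOL() assignments shown in the hunk.

#include <stdio.h>

/* Stand-in for the init-task stack area that .data..init_task holds. */
static char fake_init_task[8192]
	__attribute__((section("init_task_demo"), aligned(16), used));

/*
 * Emitted automatically by GNU ld for the orphan section above; they play
 * the role of __start_init_task/__end_init_task from the hunk.
 */
extern char __start_init_task_demo[];
extern char __stop_init_task_demo[];

int main(void)
{
	printf("section spans %td bytes\n",
	       __stop_init_task_demo - __start_init_task_demo);
	/* head_64.S points %rsp at (end - 8) of the real area. */
	printf("stack-top candidate: %p\n",
	       (void *)(__stop_init_task_demo - 8));
	return 0;
}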