Commit 520b9617 authored by Ingo Molnar

Merge branch 'x86/core' into x86/generalize-visws

parents f57e9168 f87f38ec
What: /sys/firmware/memmap/
Date: June 2008
Contact: Bernhard Walle <bwalle@suse.de>
Description:
On all platforms, the firmware provides a memory map which the
kernel reads. The resources from that memory map are registered
in the kernel resource tree and exposed to userspace via
/proc/iomem (together with other resources).
However, on most architectures that firmware-provided memory
map is modified afterwards by the kernel itself, either because
the kernel merges it with other information or because the
user overrides parts of it via the kernel command line.
kexec needs the raw firmware-provided memory map to set up the
parameter segment of the kernel that is to be booted with
kexec. The raw memory map is also useful for debugging. For
that reason, /sys/firmware/memmap is an interface that provides
the raw memory map to userspace.
The structure is as follows: Under /sys/firmware/memmap there
are subdirectories with the number of the entry as their name:
/sys/firmware/memmap/0
/sys/firmware/memmap/1
/sys/firmware/memmap/2
/sys/firmware/memmap/3
...
The number of subdirectories depends on the number of memory map
entries provided by the firmware, and the entries appear in the
order in which the firmware provides them.
Each directory contains three files:
start : The start address (a hexadecimal number with the
'0x' prefix).
end : The end address, inclusive (regardless of whether the
firmware provides inclusive or exclusive ranges).
type : The type of the entry as a string. See below for a list
of valid types.
So, for example:
/sys/firmware/memmap/0/start
/sys/firmware/memmap/0/end
/sys/firmware/memmap/0/type
/sys/firmware/memmap/1/start
...
Currently, the following types exist:
- System RAM
- ACPI Tables
- ACPI Non-volatile Storage
- reserved
The following shell snippet can be used to display the memory
map in a human-readable format:
-------------------- 8< ----------------------------------------
#!/bin/bash
cd /sys/firmware/memmap
for dir in * ; do
    start=$(cat "$dir/start")
    end=$(cat "$dir/end")
    type=$(cat "$dir/type")
    printf "%016x-%016x (%s)\n" "$start" $(( end + 1 )) "$type"
done
-------------------- >8 ----------------------------------------
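
For completeness, the same information can also be read programmatically.
The following C program is an illustrative sketch only (minimal error
handling, plain stdio); it walks the numbered entries and prints them in
the same format as the shell snippet above:

/* Sketch: dump /sys/firmware/memmap in "start-end (type)" format. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Read one attribute of entry 'idx' into buf; returns 0 if it is missing. */
static int read_attr(int idx, const char *name, char *buf, size_t len)
{
	char path[128];
	FILE *f;

	snprintf(path, sizeof(path), "/sys/firmware/memmap/%d/%s", idx, name);
	f = fopen(path, "r");
	if (!f)
		return 0;
	if (!fgets(buf, len, f)) {
		fclose(f);
		return 0;
	}
	fclose(f);
	buf[strcspn(buf, "\n")] = '\0';
	return 1;
}

int main(void)
{
	char start[64], end[64], type[64];
	int i;

	/* Entries are numbered 0, 1, 2, ... until one is missing. */
	for (i = 0; read_attr(i, "start", start, sizeof(start)) &&
		    read_attr(i, "end", end, sizeof(end)) &&
		    read_attr(i, "type", type, sizeof(type)); i++)
		printf("%016llx-%016llx (%s)\n",
		       strtoull(start, NULL, 0),
		       strtoull(end, NULL, 0) + 1, type);
	return 0;
}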
......@@ -109,7 +109,7 @@ There are two possible methods of using Kdump.
2) Or use the system kernel binary itself as dump-capture kernel and there is
no need to build a separate dump-capture kernel. This is possible
only with the architectures which support a relocatable kernel. As
of today i386 and ia64 architectures support relocatable kernel.
of today, i386, x86_64 and ia64 architectures support relocatable kernel.
Building a relocatable kernel is advantageous from the point of view that
one does not have to build a second kernel for capturing the dump. But
......
......@@ -271,6 +271,17 @@ and is between 256 and 4096 characters. It is defined in the file
aic79xx= [HW,SCSI]
See Documentation/scsi/aic79xx.txt.
amd_iommu= [HW,X86-64]
Pass parameters to the AMD IOMMU driver in the system.
Possible values are:
isolate - enable device isolation (each device, as far
as possible, will get its own protection
domain)
amd_iommu_size= [HW,X86-64]
Define the size of the aperture for the AMD IOMMU
driver. Possible values are:
'32M', '64M' (default), '128M', '256M', '512M', '1G'
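As an illustration only (the values shown are hypothetical, not
recommendations), the two AMD IOMMU options above could be combined
on the kernel command line as:
	amd_iommu=isolate amd_iommu_size=128M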
amijoy.map= [HW,JOY] Amiga joystick support
Map of devices attached to JOY0DAT and JOY1DAT
Format: <a>,<b>
......@@ -599,6 +610,29 @@ and is between 256 and 4096 characters. It is defined in the file
See drivers/char/README.epca and
Documentation/digiepca.txt.
disable_mtrr_cleanup [X86]
enable_mtrr_cleanup [X86]
The kernel tries to adjust the MTRR layout from continuous
to discrete, so that the X server driver is able to add a WB
entry later. These parameters enable/disable that cleanup.
mtrr_chunk_size=nn[KMG] [X86]
Used for MTRR cleanup. It is the largest continuous chunk
that could hold holes, i.e. UC entries.
mtrr_gran_size=nn[KMG] [X86]
Used for MTRR cleanup. It is the granularity of an MTRR block.
Default is 1.
A large value can prevent a small alignment from
using up MTRRs.
mtrr_spare_reg_nr=n [X86]
Format: <integer>
Range: 0,7 : spare reg number
Default : 1
Used for MTRR cleanup. It is the number of spare MTRR entries.
Set it to 2 or more if your graphics card needs more.
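For illustration only (the sizes are hypothetical and depend on the
machine's memory layout), the MTRR cleanup knobs above might be passed
together as:
	enable_mtrr_cleanup mtrr_gran_size=64M mtrr_chunk_size=128M mtrr_spare_reg_nr=2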
disable_mtrr_trim [X86, Intel and AMD only]
By default the kernel will trim any uncacheable
memory out of your available memory pool based on
......@@ -2116,6 +2150,9 @@ and is between 256 and 4096 characters. It is defined in the file
usbhid.mousepoll=
[USBHID] The interval at which mice are to be polled.
add_efi_memmap [EFI; x86-32,X86-64] Include EFI memory map in
kernel's map of available physical RAM.
vdso= [X86-32,SH,x86-64]
vdso=2: enable compat VDSO (default with COMPAT_VDSO)
vdso=1: enable VDSO (default)
......
......@@ -22,8 +22,7 @@ CONFIG_X86_UP_IOAPIC is for uniprocessor with an IO-APIC. [Note: certain
kernel debugging options, such as Kernel Stack Meter or Kernel Tracer,
may implicitly disable the NMI watchdog.]
For x86-64, the needed APIC is always compiled in, and the NMI watchdog is
always enabled with I/O-APIC mode (nmi_watchdog=1).
For x86-64, the needed APIC is always compiled in.
Using local APIC (nmi_watchdog=2) needs the first performance register, so
you can't use it for other purposes (such as high precision performance
......@@ -67,12 +66,11 @@ time. The I/O APIC watchdog is driven externally and has no such shortcoming.
But its NMI frequency is much higher, resulting in a more significant hit
to the overall system performance.
NOTE: starting with 2.4.2-ac18 the NMI-oopser is disabled by default,
you have to enable it with a boot time parameter. Prior to 2.4.2-ac18
the NMI-oopser is enabled unconditionally on x86 SMP boxes.
On x86 nmi_watchdog is disabled by default so you have to enable it with
a boot time parameter.
On x86-64 the NMI oopser is on by default. On 64-bit Intel CPUs
it uses the IO-APIC by default and on AMD it uses the local APIC.
NOTE: In kernels prior to 2.4.2-ac18 the NMI-oopser is enabled unconditionally
on x86 SMP boxes.
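As noted above, the watchdog is enabled via a boot-time parameter; for
example (illustrative only), append one of the following to the kernel
command line:
	nmi_watchdog=1   (I/O-APIC driven)
	nmi_watchdog=2   (local-APIC driven)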
[ feel free to send bug reports, suggestions and patches to
Ingo Molnar <mingo@redhat.com> or the Linux SMP mailing
......
THE LINUX/I386 BOOT PROTOCOL
----------------------------
THE LINUX/x86 BOOT PROTOCOL
---------------------------
H. Peter Anvin <hpa@zytor.com>
Last update 2007-05-23
On the i386 platform, the Linux kernel uses a rather complicated boot
On the x86 platform, the Linux kernel uses a rather complicated boot
convention. This has evolved partially due to historical aspects, the
desire in the early days to have the kernel itself be a bootable
image, the complicated PC memory model, and changed expectations in
the PC industry caused by the effective demise of real-mode DOS as a
mainstream operating system.
Currently, the following versions of the Linux/i386 boot protocol exist.
Currently, the following versions of the Linux/x86 boot protocol exist.
Old kernels: zImage/Image support only. Some very early kernels
may not even support a command line.
......@@ -372,10 +369,17 @@ Protocol: 2.00+
- If 0, the protected-mode code is loaded at 0x10000.
- If 1, the protected-mode code is loaded at 0x100000.
Bit 5 (write): QUIET_FLAG
- If 0, print early messages.
- If 1, suppress early messages.
This requests that the kernel (decompressor and early
kernel) not write early messages that require
accessing the display hardware directly.
Bit 6 (write): KEEP_SEGMENTS
Protocol: 2.07+
- if 0, reload the segment registers in the 32bit entry point.
- if 1, do not reload the segment registers in the 32bit entry point.
- If 0, reload the segment registers in the 32bit entry point.
- If 1, do not reload the segment registers in the 32bit entry point.
Assume that %cs %ds %ss %es are all set to flat segments with
a base of 0 (or the equivalent for their environment).
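For orientation (this is not part of the protocol text), a boot loader
might apply these two bits roughly as follows; the function name is
hypothetical and 'loadflags' is assumed to point at the loadflags byte
of the setup header the loader has read into memory:

#define QUIET_FLAG	(1 << 5)	/* bit 5: suppress early messages */
#define KEEP_SEGMENTS	(1 << 6)	/* bit 6: do not reload segments (2.07+) */

static void loader_set_flags(unsigned char *loadflags, int quiet)
{
	if (quiet)
		*loadflags |= QUIET_FLAG;
	/* Leave KEEP_SEGMENTS clear so the kernel reloads the segment
	 * registers itself at the 32-bit entry point. */
	*loadflags &= ~KEEP_SEGMENTS;
}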
......@@ -504,7 +508,7 @@ Protocol: 2.06+
maximum size was 255.
Field name: hardware_subarch
Type: write
Type: write (optional, defaults to x86/PC)
Offset/size: 0x23c/4
Protocol: 2.07+
......@@ -520,11 +524,13 @@ Protocol: 2.07+
0x00000002 Xen
Field name: hardware_subarch_data
Type: write
Type: write (subarch-dependent)
Offset/size: 0x240/8
Protocol: 2.07+
A pointer to data that is specific to the hardware subarch.
This field is currently unused for the default x86/PC environment;
do not modify it.
Field name: payload_offset
Type: read
......@@ -545,6 +551,34 @@ Protocol: 2.08+
The length of the payload.
Field name: setup_data
Type: write (special)
Offset/size: 0x250/8
Protocol: 2.09+
The 64-bit physical pointer to a NULL-terminated singly linked list
of struct setup_data. This is used to define a more extensible boot
parameter passing mechanism. The definition of struct setup_data is
as follows:
struct setup_data {
u64 next;
u32 type;
u32 len;
u8 data[0];
};
Here, next is a 64-bit physical pointer to the next node of the
linked list (the next field of the last node is 0); type is used
to identify the contents of data; len is the length of the data
field; and data holds the actual payload.
This list may be modified at a number of points during the bootup
process. Therefore, when modifying this list one should always make
sure to consider the case where the linked list already contains
entries.
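As an aside (not part of the protocol text itself), appending an entry
from a boot loader is a short list walk. The sketch below is
illustrative only: the helper name is made up, it relies on struct
setup_data as defined above (with __u64 etc. from <linux/types.h>),
and it assumes the loader can dereference the physical addresses
directly (e.g. identity-mapped memory), which is not true in every
environment:

/* Illustrative only: append 'node' to the list rooted at
 * boot_params.hdr.setup_data ('head' points at that 64-bit field).
 * Assumes physical addresses can be dereferenced directly. */
static void setup_data_append(__u64 *head, struct setup_data *node)
{
	__u64 *slot = head;

	node->next = 0;
	/* The list may already contain entries, so walk to the tail first. */
	while (*slot)
		slot = &((struct setup_data *)(unsigned long)*slot)->next;
	*slot = (__u64)(unsigned long)node;
}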
**** THE IMAGE CHECKSUM
From boot protocol version 2.08 onwards the CRC-32 is calculated over
......@@ -553,6 +587,7 @@ initial remainder of 0xffffffff. The checksum is appended to the
file; therefore the CRC of the file up to the limit specified in the
syssize field of the header is always 0.
**** THE KERNEL COMMAND LINE
The kernel command line has become an important way for the boot
......@@ -584,28 +619,6 @@ command line is entered using the following protocol:
covered by setup_move_size, so you may need to adjust this
field.
**** MEMORY LAYOUT OF THE REAL-MODE CODE
......
......@@ -11,9 +11,8 @@ ffffc10000000000 - ffffc1ffffffffff (=40 bits) hole
ffffc20000000000 - ffffe1ffffffffff (=45 bits) vmalloc/ioremap space
ffffe20000000000 - ffffe2ffffffffff (=40 bits) virtual memory map (1TB)
... unused hole ...
ffffffff80000000 - ffffffff82800000 (=40 MB) kernel text mapping, from phys 0
... unused hole ...
ffffffff88000000 - fffffffffff00000 (=1919 MB) module mapping space
ffffffff80000000 - ffffffffa0000000 (=512 MB) kernel text mapping, from phys 0
ffffffffa0000000 - fffffffffff00000 (=1536 MB) module mapping space
The direct mapping covers all memory in the system up to the highest
memory address (this means in some cases it can also include PCI memory
......
......@@ -36,3 +36,7 @@ Mechanics:
services.
noefi turn off all EFI runtime services
reboot_type=k turn off EFI reboot runtime service
- If the EFI memory map has additional entries not in the E820 map,
you can include those entries in the kernel's memory map of available
physical RAM by using the following kernel command line parameter.
add_efi_memmap include EFI memory map of available physical RAM
......@@ -376,6 +376,12 @@ L: linux-geode@lists.infradead.org (moderated for non-subscribers)
W: http://www.amd.com/us-en/ConnectivitySolutions/TechnicalResources/0,,50_2334_2452_11363,00.html
S: Supported
AMD IOMMU (AMD-VI)
P: Joerg Roedel
M: joerg.roedel@amd.com
L: iommu@lists.linux-foundation.org
S: Supported
AMS (Apple Motion Sensor) DRIVER
P: Stelian Pop
M: stelian@popies.net
......
......@@ -344,7 +344,7 @@ config X86_F00F_BUG
config X86_WP_WORKS_OK
def_bool y
depends on X86_32 && !M386
depends on !M386
config X86_INVLPG
def_bool y
......@@ -399,6 +399,10 @@ config X86_TSC
def_bool y
depends on ((MWINCHIP3D || MWINCHIP2 || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2) && !X86_NUMAQ) || X86_64
config X86_CMPXCHG64
def_bool y
depends on X86_PAE || X86_64
# this should be set for all -march=.. options where the compiler
# generates cmov.
config X86_CMOV
......
......@@ -20,6 +20,14 @@ config NONPROMISC_DEVMEM
If in doubt, say Y.
config X86_VERBOSE_BOOTUP
bool "Enable verbose x86 bootup info messages"
default y
help
Enables the informational output from the decompression stage
(e.g. bzImage) of the boot. If you disable this you will still
see errors. Disable this if you want silent bootup.
config EARLY_PRINTK
bool "Early printk" if EMBEDDED
default y
......@@ -60,7 +68,7 @@ config DEBUG_PAGEALLOC
config DEBUG_PER_CPU_MAPS
bool "Debug access to per_cpu maps"
depends on DEBUG_KERNEL
depends on X86_64_SMP
depends on X86_SMP
default n
help
Say Y to verify that the per_cpu map being accessed has
......@@ -129,15 +137,6 @@ config 4KSTACKS
on the VM subsystem for higher order allocations. This option
will also use IRQ stacks to compensate for the reduced stackspace.
config X86_FIND_SMP_CONFIG
def_bool y
depends on X86_LOCAL_APIC || X86_VOYAGER
depends on X86_32
config X86_MPPARSE
def_bool y
depends on (X86_32 && (X86_LOCAL_APIC && !X86_VISWS)) || X86_64
config DOUBLEFAULT
default y
bool "Enable doublefault exception handler" if EMBEDDED
......
......@@ -117,29 +117,11 @@ mcore-$(CONFIG_X86_VOYAGER) := arch/x86/mach-voyager/
mflags-$(CONFIG_X86_VISWS) := -Iinclude/asm-x86/mach-visws
mcore-$(CONFIG_X86_VISWS) := arch/x86/mach-visws/
# NUMAQ subarch support
mflags-$(CONFIG_X86_NUMAQ) := -Iinclude/asm-x86/mach-numaq
mcore-$(CONFIG_X86_NUMAQ) := arch/x86/mach-default/
# BIGSMP subarch support
mflags-$(CONFIG_X86_BIGSMP) := -Iinclude/asm-x86/mach-bigsmp
mcore-$(CONFIG_X86_BIGSMP) := arch/x86/mach-default/
#Summit subarch support
mflags-$(CONFIG_X86_SUMMIT) := -Iinclude/asm-x86/mach-summit
mcore-$(CONFIG_X86_SUMMIT) := arch/x86/mach-default/
# generic subarchitecture
mflags-$(CONFIG_X86_GENERICARCH):= -Iinclude/asm-x86/mach-generic
fcore-$(CONFIG_X86_GENERICARCH) += arch/x86/mach-generic/
mcore-$(CONFIG_X86_GENERICARCH) := arch/x86/mach-default/
# ES7000 subarch support
mflags-$(CONFIG_X86_ES7000) := -Iinclude/asm-x86/mach-es7000
fcore-$(CONFIG_X86_ES7000) := arch/x86/mach-es7000/
mcore-$(CONFIG_X86_ES7000) := arch/x86/mach-default/
# RDC R-321x subarch support
mflags-$(CONFIG_X86_RDC321X) := -Iinclude/asm-x86/mach-rdc321x
mcore-$(CONFIG_X86_RDC321X) := arch/x86/mach-default/
......@@ -160,6 +142,7 @@ KBUILD_AFLAGS += $(mflags-y)
head-y := arch/x86/kernel/head_$(BITS).o
head-y += arch/x86/kernel/head$(BITS).o
head-y += arch/x86/kernel/head.o
head-y += arch/x86/kernel/init_task.o
libs-y += arch/x86/lib/
......@@ -210,12 +193,12 @@ all: bzImage
# KBUILD_IMAGE specify target image being built
KBUILD_IMAGE := $(boot)/bzImage
zImage zlilo zdisk: KBUILD_IMAGE := arch/x86/boot/zImage
zImage zlilo zdisk: KBUILD_IMAGE := $(boot)/zImage
zImage bzImage: vmlinux
$(Q)$(MAKE) $(build)=$(boot) $(KBUILD_IMAGE)
$(Q)mkdir -p $(objtree)/arch/$(UTS_MACHINE)/boot
$(Q)ln -fsn ../../x86/boot/bzImage $(objtree)/arch/$(UTS_MACHINE)/boot/bzImage
$(Q)ln -fsn ../../x86/boot/bzImage $(objtree)/arch/$(UTS_MACHINE)/boot/$@
compressed: zImage
......
/* -*- linux-c -*- ------------------------------------------------------- *
*
* Copyright (C) 1991, 1992 Linus Torvalds
* Copyright 2007 rPath, Inc. - All Rights Reserved
* Copyright 2007-2008 rPath, Inc. - All Rights Reserved
*
* This file is part of the Linux kernel, and is made available under
* the terms of the GNU General Public License version 2.
......@@ -95,6 +95,9 @@ static void enable_a20_kbc(void)
outb(0xdf, 0x60); /* A20 on */
empty_8042();
outb(0xff, 0x64); /* Null command, but UHCI wants it */
empty_8042();
}
static void enable_a20_fast(void)
......
......@@ -30,6 +30,7 @@
#include <asm/page.h>
#include <asm/boot.h>
#include <asm/msr.h>
#include <asm/processor-flags.h>
#include <asm/asm-offsets.h>
.section ".text.head"
......@@ -109,7 +110,7 @@ startup_32:
/* Enable PAE mode */
xorl %eax, %eax
orl $(1 << 5), %eax
orl $(X86_CR4_PAE), %eax
movl %eax, %cr4
/*
......@@ -170,7 +171,7 @@ startup_32:
pushl %eax
/* Enter paged protected Mode, activating Long Mode */
movl $0x80000001, %eax /* Enable Paging and Protected mode */
movl $(X86_CR0_PG | X86_CR0_PE), %eax /* Enable Paging and Protected mode */
movl %eax, %cr0
/* Jump from 32bit compatibility mode into 64bit mode. */
......
......@@ -30,6 +30,7 @@
#include <asm/io.h>
#include <asm/page.h>
#include <asm/boot.h>
#include <asm/bootparam.h>
/* WARNING!!
* This code is compiled with -fPIC and it is relocated dynamically
......@@ -187,13 +188,8 @@ static void gzip_release(void **);
/*
* This is set up by the setup-routine at boot-time
*/
static unsigned char *real_mode; /* Pointer to real-mode data */
#define RM_EXT_MEM_K (*(unsigned short *)(real_mode + 0x2))
#ifndef STANDARD_MEMORY_BIOS_CALL
#define RM_ALT_MEM_K (*(unsigned long *)(real_mode + 0x1e0))
#endif
#define RM_SCREEN_INFO (*(struct screen_info *)(real_mode+0))
static struct boot_params *real_mode; /* Pointer to real-mode data */
static int quiet;
extern unsigned char input_data[];
extern int input_len;
......@@ -206,7 +202,8 @@ static void free(void *where);
static void *memset(void *s, int c, unsigned n);
static void *memcpy(void *dest, const void *src, unsigned n);
static void putstr(const char *);
static void __putstr(int, const char *);
#define putstr(__x) __putstr(0, __x)
#ifdef CONFIG_X86_64
#define memptr long
......@@ -221,10 +218,6 @@ static char *vidmem;
static int vidport;
static int lines, cols;
#ifdef CONFIG_X86_NUMAQ
void *xquad_portio;
#endif
#include "../../../../lib/inflate.c"
static void *malloc(int size)
......@@ -270,18 +263,24 @@ static void scroll(void)
vidmem[i] = ' ';
}
static void putstr(const char *s)
static void __putstr(int error, const char *s)
{
int x, y, pos;
char c;
#ifndef CONFIG_X86_VERBOSE_BOOTUP
if (!error)
return;
#endif
#ifdef CONFIG_X86_32
if (RM_SCREEN_INFO.orig_video_mode == 0 && lines == 0 && cols == 0)
if (real_mode->screen_info.orig_video_mode == 0 &&
lines == 0 && cols == 0)
return;
#endif
x = RM_SCREEN_INFO.orig_x;
y = RM_SCREEN_INFO.orig_y;
x = real_mode->screen_info.orig_x;
y = real_mode->screen_info.orig_y;
while ((c = *s++) != '\0') {
if (c == '\n') {
......@@ -302,8 +301,8 @@ static void putstr(const char *s)
}
}
RM_SCREEN_INFO.orig_x = x;
RM_SCREEN_INFO.orig_y = y;
real_mode->screen_info.orig_x = x;
real_mode->screen_info.orig_y = y;
pos = (x + cols * y) * 2; /* Update cursor position */
outb(14, vidport);
......@@ -366,9 +365,9 @@ static void flush_window(void)
static void error(char *x)
{
putstr("\n\n");
putstr(x);
putstr("\n\n -- System halted");
__putstr(1, "\n\n");
__putstr(1, x);
__putstr(1, "\n\n -- System halted");
while (1)
asm("hlt");
......@@ -395,6 +394,7 @@ static void parse_elf(void *output)
return;
}
if (!quiet)
putstr("Parsing ELF... ");
phdrs = malloc(sizeof(*phdrs) * ehdr.e_phnum);
......@@ -430,7 +430,10 @@ asmlinkage void decompress_kernel(void *rmode, memptr heap,
{
real_mode = rmode;
if (RM_SCREEN_INFO.orig_video_mode == 7) {
if (real_mode->hdr.loadflags & QUIET_FLAG)
quiet = 1;
if (real_mode->screen_info.orig_video_mode == 7) {
vidmem = (char *) 0xb0000;
vidport = 0x3b4;
} else {
......@@ -438,8 +441,8 @@ asmlinkage void decompress_kernel(void *rmode, memptr heap,
vidport = 0x3d4;
}
lines = RM_SCREEN_INFO.orig_video_lines;
cols = RM_SCREEN_INFO.orig_video_cols;
lines = real_mode->screen_info.orig_video_lines;
cols = real_mode->screen_info.orig_video_cols;
window = output; /* Output buffer (Normally at 1M) */
free_mem_ptr = heap; /* Heap */
......@@ -465,9 +468,11 @@ asmlinkage void decompress_kernel(void *rmode, memptr heap,
#endif
makecrc();
if (!quiet)
putstr("\nDecompressing Linux... ");
gunzip();
parse_elf(output);
if (!quiet)
putstr("done.\nBooting the kernel.\n");
return;
}
......@@ -28,6 +28,8 @@ static char *cpu_name(int level)
if (level == 64) {
return "x86-64";
} else {
if (level == 15)
level = 6;
sprintf(buf, "i%d86", level);
return buf;
}
......
......@@ -165,6 +165,10 @@ void main(void)
/* Set the video mode */
set_video();
/* Parse command line for 'quiet' and pass it to decompressor. */
if (cmdline_find_option_bool("quiet"))
boot_params.hdr.loadflags |= QUIET_FLAG;
/* Do the last things and invoke protected mode */
go_to_protected_mode();
}
......@@ -13,6 +13,7 @@
*/
#include "boot.h"
#include <linux/kernel.h>
#define SMAP 0x534d4150 /* ASCII "SMAP" */
......@@ -53,7 +54,7 @@ static int detect_memory_e820(void)
count++;
desc++;
} while (next && count < E820MAX);
} while (next && count < ARRAY_SIZE(boot_params.e820_map));
return boot_params.e820_entries = count;
}
......
......@@ -33,6 +33,8 @@ protected_mode_jump:
movw %cs, %bx
shll $4, %ebx
addl %ebx, 2f
jmp 1f # Short jump to serialize on 386/486
1:
movw $__BOOT_DS, %cx
movw $__BOOT_TSS, %di
......@@ -40,8 +42,6 @@ protected_mode_jump:
movl %cr0, %edx
orb $X86_CR0_PE, %dl # Protected mode
movl %edx, %cr0
jmp 1f # Short jump to serialize on 386/486
1:
# Transition to 32-bit mode
.byte 0x66, 0xea # ljmpl opcode
......
......@@ -259,8 +259,7 @@ static int vga_probe(void)
return mode_count[adapter];
}
__videocard video_vga =
{
__videocard video_vga = {
.card_name = "VGA",
.probe = vga_probe,
.set_mode = vga_set_mode,
......
......@@ -61,6 +61,19 @@
CFI_UNDEFINED r15
.endm
#ifdef CONFIG_PARAVIRT
ENTRY(native_usergs_sysret32)
swapgs
sysretl
ENDPROC(native_usergs_sysret32)
ENTRY(native_irq_enable_sysexit)
swapgs
sti
sysexit
ENDPROC(native_irq_enable_sysexit)
#endif
/*
* 32bit SYSENTER instruction entry.
*
......@@ -85,14 +98,14 @@ ENTRY(ia32_sysenter_target)
CFI_SIGNAL_FRAME
CFI_DEF_CFA rsp,0
CFI_REGISTER rsp,rbp
swapgs
SWAPGS_UNSAFE_STACK
movq %gs:pda_kernelstack, %rsp
addq $(PDA_STACKOFFSET),%rsp
/*
* No need to follow this irqs on/off section: the syscall
* disabled irqs, here we enable it straight after entry:
*/
sti
ENABLE_INTERRUPTS(CLBR_NONE)
movl %ebp,%ebp /* zero extension */
pushq $__USER32_DS
CFI_ADJUST_CFA_OFFSET 8
......@@ -103,7 +116,7 @@ ENTRY(ia32_sysenter_target)
pushfq
CFI_ADJUST_CFA_OFFSET 8
/*CFI_REL_OFFSET rflags,0*/
movl 8*3-THREAD_SIZE+threadinfo_sysenter_return(%rsp), %r10d
movl 8*3-THREAD_SIZE+TI_sysenter_return(%rsp), %r10d
CFI_REGISTER rip,r10
pushq $__USER32_CS
CFI_ADJUST_CFA_OFFSET 8
......@@ -123,8 +136,9 @@ ENTRY(ia32_sysenter_target)
.quad 1b,ia32_badarg
.previous
GET_THREAD_INFO(%r10)
orl $TS_COMPAT,threadinfo_status(%r10)
testl $(_TIF_SYSCALL_TRACE|_TIF_SYSCALL_AUDIT|_TIF_SECCOMP),threadinfo_flags(%r10)
orl $TS_COMPAT,TI_status(%r10)
testl $(_TIF_SYSCALL_TRACE|_TIF_SYSCALL_AUDIT|_TIF_SECCOMP), \
TI_flags(%r10)
CFI_REMEMBER_STATE
jnz sysenter_tracesys
sysenter_do_call:
......@@ -134,11 +148,11 @@ sysenter_do_call:
call *ia32_sys_call_table(,%rax,8)
movq %rax,RAX-ARGOFFSET(%rsp)
GET_THREAD_INFO(%r10)
cli
DISABLE_INTERRUPTS(CLBR_NONE)
TRACE_IRQS_OFF
testl $_TIF_ALLWORK_MASK,threadinfo_flags(%r10)
testl $_TIF_ALLWORK_MASK,TI_flags(%r10)
jnz int_ret_from_sys_call
andl $~TS_COMPAT,threadinfo_status(%r10)
andl $~TS_COMPAT,TI_status(%r10)
/* clear IF, that popfq doesn't enable interrupts early */
andl $~0x200,EFLAGS-R11(%rsp)
movl RIP-R11(%rsp),%edx /* User %eip */
......@@ -151,10 +165,7 @@ sysenter_do_call:
CFI_ADJUST_CFA_OFFSET -8
CFI_REGISTER rsp,rcx
TRACE_IRQS_ON
swapgs
sti /* sti only takes effect after the next instruction */
/* sysexit */
.byte 0xf, 0x35
ENABLE_INTERRUPTS_SYSEXIT32
sysenter_tracesys:
CFI_RESTORE_STATE
......@@ -200,7 +211,7 @@ ENTRY(ia32_cstar_target)
CFI_DEF_CFA rsp,PDA_STACKOFFSET
CFI_REGISTER rip,rcx
/*CFI_REGISTER rflags,r11*/
swapgs
SWAPGS_UNSAFE_STACK
movl %esp,%r8d
CFI_REGISTER rsp,r8
movq %gs:pda_kernelstack,%rsp
......@@ -208,7 +219,7 @@ ENTRY(ia32_cstar_target)
* No need to follow this irqs on/off section: the syscall
* disabled irqs and here we enable it straight after entry:
*/
sti
ENABLE_INTERRUPTS(CLBR_NONE)
SAVE_ARGS 8,1,1
movl %eax,%eax /* zero extension */
movq %rax,ORIG_RAX-ARGOFFSET(%rsp)
......@@ -230,8 +241,9 @@ ENTRY(ia32_cstar_target)
.quad 1b,ia32_badarg
.previous
GET_THREAD_INFO(%r10)
orl $TS_COMPAT,threadinfo_status(%r10)
testl $(_TIF_SYSCALL_TRACE|_TIF_SYSCALL_AUDIT|_TIF_SECCOMP),threadinfo_flags(%r10)
orl $TS_COMPAT,TI_status(%r10)
testl $(_TIF_SYSCALL_TRACE|_TIF_SYSCALL_AUDIT|_TIF_SECCOMP), \
TI_flags(%r10)
CFI_REMEMBER_STATE
jnz cstar_tracesys
cstar_do_call:
......@@ -241,11 +253,11 @@ cstar_do_call:
call *ia32_sys_call_table(,%rax,8)
movq %rax,RAX-ARGOFFSET(%rsp)
GET_THREAD_INFO(%r10)
cli
DISABLE_INTERRUPTS(CLBR_NONE)
TRACE_IRQS_OFF
testl $_TIF_ALLWORK_MASK,threadinfo_flags(%r10)
testl $_TIF_ALLWORK_MASK,TI_flags(%r10)
jnz int_ret_from_sys_call
andl $~TS_COMPAT,threadinfo_status(%r10)
andl $~TS_COMPAT,TI_status(%r10)
RESTORE_ARGS 1,-ARG_SKIP,1,1,1
movl RIP-ARGOFFSET(%rsp),%ecx
CFI_REGISTER rip,rcx
......@@ -254,8 +266,7 @@ cstar_do_call:
TRACE_IRQS_ON
movl RSP-ARGOFFSET(%rsp),%esp
CFI_RESTORE rsp
swapgs
sysretl
USERGS_SYSRET32
cstar_tracesys:
CFI_RESTORE_STATE
......@@ -310,12 +321,12 @@ ENTRY(ia32_syscall)
/*CFI_REL_OFFSET rflags,EFLAGS-RIP*/
/*CFI_REL_OFFSET cs,CS-RIP*/
CFI_REL_OFFSET rip,RIP-RIP
swapgs
SWAPGS
/*
* No need to follow this irqs on/off section: the syscall
* disabled irqs and here we enable it straight after entry:
*/
sti
ENABLE_INTERRUPTS(CLBR_NONE)
movl %eax,%eax
pushq %rax
CFI_ADJUST_CFA_OFFSET 8
......@@ -324,8 +335,9 @@ ENTRY(ia32_syscall)
this could be a problem. */
SAVE_ARGS 0,0,1
GET_THREAD_INFO(%r10)
orl $TS_COMPAT,threadinfo_status(%r10)
testl $(_TIF_SYSCALL_TRACE|_TIF_SYSCALL_AUDIT|_TIF_SECCOMP),threadinfo_flags(%r10)
orl $TS_COMPAT,TI_status(%r10)
testl $(_TIF_SYSCALL_TRACE|_TIF_SYSCALL_AUDIT|_TIF_SECCOMP), \
TI_flags(%r10)
jnz ia32_tracesys
ia32_do_syscall:
cmpl $(IA32_NR_syscalls-1),%eax
......@@ -370,13 +382,11 @@ quiet_ni_syscall:
PTREGSCALL stub32_rt_sigreturn, sys32_rt_sigreturn, %rdi
PTREGSCALL stub32_sigreturn, sys32_sigreturn, %rdi
PTREGSCALL stub32_sigaltstack, sys32_sigaltstack, %rdx
PTREGSCALL stub32_sigsuspend, sys32_sigsuspend, %rcx
PTREGSCALL stub32_execve, sys32_execve, %rcx
PTREGSCALL stub32_fork, sys_fork, %rdi
PTREGSCALL stub32_clone, sys32_clone, %rdx
PTREGSCALL stub32_vfork, sys_vfork, %rdi
PTREGSCALL stub32_iopl, sys_iopl, %rsi
PTREGSCALL stub32_rt_sigsuspend, sys_rt_sigsuspend, %rdx
ENTRY(ia32_ptregs_common)
popq %r11
......@@ -476,7 +486,7 @@ ia32_sys_call_table:
.quad sys_ssetmask
.quad sys_setreuid16 /* 70 */
.quad sys_setregid16
.quad stub32_sigsuspend
.quad sys32_sigsuspend
.quad compat_sys_sigpending
.quad sys_sethostname
.quad compat_sys_setrlimit /* 75 */
......@@ -583,7 +593,7 @@ ia32_sys_call_table:
.quad sys32_rt_sigpending
.quad compat_sys_rt_sigtimedwait
.quad sys32_rt_sigqueueinfo
.quad stub32_rt_sigsuspend
.quad sys_rt_sigsuspend
.quad sys32_pread /* 180 */
.quad sys32_pwrite
.quad sys_chown16
......
......@@ -2,7 +2,7 @@
# Makefile for the linux kernel.
#
extra-y := head_$(BITS).o head$(BITS).o init_task.o vmlinux.lds
extra-y := head_$(BITS).o head$(BITS).o head.o init_task.o vmlinux.lds
CPPFLAGS_vmlinux.lds += -U$(UTS_MACHINE)
......@@ -18,15 +18,15 @@ CFLAGS_tsc_64.o := $(nostackp)
obj-y := process_$(BITS).o signal_$(BITS).o entry_$(BITS).o
obj-y += traps_$(BITS).o irq_$(BITS).o
obj-y += time_$(BITS).o ioport.o ldt.o
obj-y += setup_$(BITS).o i8259_$(BITS).o setup.o
obj-y += setup.o i8259.o irqinit_$(BITS).o setup_percpu.o
obj-$(CONFIG_X86_32) += probe_roms_32.o
obj-$(CONFIG_X86_32) += sys_i386_32.o i386_ksyms_32.o
obj-$(CONFIG_X86_64) += sys_x86_64.o x8664_ksyms_64.o
obj-$(CONFIG_X86_64) += syscall_64.o vsyscall_64.o setup64.o
obj-y += bootflag.o e820_$(BITS).o
obj-$(CONFIG_X86_64) += syscall_64.o vsyscall_64.o
obj-y += bootflag.o e820.o
obj-y += pci-dma.o quirks.o i8237.o topology.o kdebugfs.o
obj-y += alternative.o i8253.o pci-nommu.o
obj-$(CONFIG_X86_64) += bugs_64.o
obj-y += tsc_$(BITS).o io_delay.o rtc.o
obj-y += tsc.o io_delay.o rtc.o
obj-$(CONFIG_X86_TRAMPOLINE) += trampoline.o
obj-y += process.o
......@@ -53,7 +53,7 @@ obj-$(CONFIG_X86_32_SMP) += smpcommon.o
obj-$(CONFIG_X86_64_SMP) += tsc_sync.o smpcommon.o
obj-$(CONFIG_X86_TRAMPOLINE) += trampoline_$(BITS).o
obj-$(CONFIG_X86_MPPARSE) += mpparse.o
obj-$(CONFIG_X86_LOCAL_APIC) += apic_$(BITS).o nmi_$(BITS).o
obj-$(CONFIG_X86_LOCAL_APIC) += apic_$(BITS).o nmi.o
obj-$(CONFIG_X86_IO_APIC) += io_apic_$(BITS).o
obj-$(CONFIG_X86_REBOOTFIXUPS) += reboot_fixups_32.o
obj-$(CONFIG_KEXEC) += machine_kexec_$(BITS).o
......@@ -64,7 +64,6 @@ obj-$(CONFIG_X86_SUMMIT_NUMA) += summit_32.o
obj-y += vsmp_64.o
obj-$(CONFIG_KPROBES) += kprobes.o
obj-$(CONFIG_MODULES) += module_$(BITS).o
obj-$(CONFIG_ACPI_SRAT) += srat_32.o
obj-$(CONFIG_EFI) += efi.o efi_$(BITS).o efi_stub_$(BITS).o
obj-$(CONFIG_DOUBLEFAULT) += doublefault_32.o
obj-$(CONFIG_KGDB) += kgdb.o
......@@ -94,12 +93,13 @@ obj-$(CONFIG_OLPC) += olpc.o
###
# 64 bit specific files
ifeq ($(CONFIG_X86_64),y)
obj-y += genapic_64.o genapic_flat_64.o genx2apic_uv_x.o
obj-y += genapic_64.o genapic_flat_64.o genx2apic_uv_x.o tlb_uv.o
obj-$(CONFIG_X86_PM_TIMER) += pmtimer_64.o
obj-$(CONFIG_AUDIT) += audit_64.o
obj-$(CONFIG_GART_IOMMU) += pci-gart_64.o aperture_64.o
obj-$(CONFIG_CALGARY_IOMMU) += pci-calgary_64.o tce_64.o
obj-$(CONFIG_AMD_IOMMU) += amd_iommu_init.o amd_iommu.o
obj-$(CONFIG_SWIOTLB) += pci-swiotlb_64.o
obj-$(CONFIG_PCI_MMCONFIG) += mmconf-fam10h_64.o
......
......@@ -86,7 +86,9 @@ int acpi_save_state_mem(void)
saved_magic = 0x12345678;
#else /* CONFIG_64BIT */
header->trampoline_segment = setup_trampoline() >> 4;
init_rsp = (unsigned long)temp_stack + 4096;
#ifdef CONFIG_SMP
stack_start.sp = temp_stack + 4096;
#endif
initial_code = (unsigned long)wakeup_long64;
saved_magic = 0x123456789abcdef0;
#endif /* CONFIG_64BIT */
......
......@@ -52,30 +52,41 @@
unsigned long mp_lapic_addr;
DEFINE_PER_CPU(u16, x86_bios_cpu_apicid) = BAD_APICID;
EXPORT_PER_CPU_SYMBOL(x86_bios_cpu_apicid);
/*
* Knob to control our willingness to enable the local APIC.
*
* -1=force-disable, +1=force-enable
* +1=force-enable
*/
static int enable_local_apic __initdata;
static int force_enable_local_apic;
int disable_apic;
/* Local APIC timer verification ok */
static int local_apic_timer_verify_ok;
/* Disable local APIC timer from the kernel commandline or via dmi quirk
or using CPU MSR check */
int local_apic_timer_disabled;
/* Disable local APIC timer from the kernel commandline or via dmi quirk */
static int local_apic_timer_disabled;
/* Local APIC timer works in C2 */
int local_apic_timer_c2_ok;
EXPORT_SYMBOL_GPL(local_apic_timer_c2_ok);
int first_system_vector = 0xfe;
char system_vectors[NR_VECTORS] = { [0 ... NR_VECTORS-1] = SYS_VECTOR_FREE};
/*
* Debug level, exported for io_apic.c
*/
int apic_verbosity;
int pic_mode;
/* Have we found an MP table */
int smp_found_config;
static struct resource lapic_resource = {
.name = "Local APIC",
.flags = IORESOURCE_MEM | IORESOURCE_BUSY,
};
static unsigned int calibration_result;
static int lapic_next_event(unsigned long delta,
......@@ -545,7 +556,7 @@ void __init setup_boot_APIC_clock(void)
lapic_clockevent.features &= ~CLOCK_EVT_FEAT_DUMMY;
else
printk(KERN_WARNING "APIC timer registered as dummy,"
" due to nmi_watchdog=1!\n");
" due to nmi_watchdog=%d!\n", nmi_watchdog);
}
/* Setup the lapic or request the broadcast */
......@@ -1094,7 +1105,7 @@ static int __init detect_init_APIC(void)
u32 h, l, features;
/* Disabled by kernel option? */
if (enable_local_apic < 0)
if (disable_apic)
return -1;
switch (boot_cpu_data.x86_vendor) {
......@@ -1117,7 +1128,7 @@ static int __init detect_init_APIC(void)
* Over-ride BIOS and try to enable the local APIC only if
* "lapic" specified.
*/
if (enable_local_apic <= 0) {
if (!force_enable_local_apic) {
printk(KERN_INFO "Local APIC disabled by BIOS -- "
"you can enable it with \"lapic\"\n");
return -1;
......@@ -1154,9 +1165,6 @@ static int __init detect_init_APIC(void)
if (l & MSR_IA32_APICBASE_ENABLE)
mp_lapic_addr = l & MSR_IA32_APICBASE_BASE;
if (nmi_watchdog != NMI_NONE && nmi_watchdog != NMI_DISABLED)
nmi_watchdog = NMI_LOCAL_APIC;
printk(KERN_INFO "Found and enabled local APIC!\n");
apic_pm_activate();
......@@ -1195,36 +1203,6 @@ void __init init_apic_mappings(void)
if (boot_cpu_physical_apicid == -1U)
boot_cpu_physical_apicid = GET_APIC_ID(read_apic_id());
#ifdef CONFIG_X86_IO_APIC
{
unsigned long ioapic_phys, idx = FIX_IO_APIC_BASE_0;
int i;
for (i = 0; i < nr_ioapics; i++) {
if (smp_found_config) {
ioapic_phys = mp_ioapics[i].mpc_apicaddr;
if (!ioapic_phys) {
printk(KERN_ERR
"WARNING: bogus zero IO-APIC "
"address found in MPTABLE, "
"disabling IO/APIC support!\n");
smp_found_config = 0;
skip_ioapic_setup = 1;
goto fake_ioapic_page;
}
} else {
fake_ioapic_page:
ioapic_phys = (unsigned long)
alloc_bootmem_pages(PAGE_SIZE);
ioapic_phys = __pa(ioapic_phys);
}
set_fixmap_nocache(idx, ioapic_phys);
printk(KERN_DEBUG "mapped IOAPIC to %08lx (%08lx)\n",
__fix_to_virt(idx), ioapic_phys);
idx++;
}
}
#endif
}
/*
......@@ -1236,7 +1214,7 @@ int apic_version[MAX_APICS];
int __init APIC_init_uniprocessor(void)
{
if (enable_local_apic < 0)
if (disable_apic)
clear_cpu_cap(&boot_cpu_data, X86_FEATURE_APIC);
if (!smp_found_config && !cpu_has_apic)
......@@ -1265,10 +1243,14 @@ int __init APIC_init_uniprocessor(void)
#ifdef CONFIG_CRASH_DUMP
boot_cpu_physical_apicid = GET_APIC_ID(read_apic_id());
#endif
phys_cpu_present_map = physid_mask_of_physid(boot_cpu_physical_apicid);
physid_set_mask_of_physid(boot_cpu_physical_apicid, &phys_cpu_present_map);
setup_local_APIC();
#ifdef CONFIG_X86_IO_APIC
if (!smp_found_config || skip_ioapic_setup || !nr_ioapics)
#endif
localise_nmi_watchdog();
end_local_APIC_setup();
#ifdef CONFIG_X86_IO_APIC
if (smp_found_config)
......@@ -1351,13 +1333,13 @@ void __init smp_intr_init(void)
* The reschedule interrupt is a CPU-to-CPU reschedule-helper
* IPI, driven by wakeup.
*/
set_intr_gate(RESCHEDULE_VECTOR, reschedule_interrupt);
alloc_intr_gate(RESCHEDULE_VECTOR, reschedule_interrupt);
/* IPI for invalidation */
set_intr_gate(INVALIDATE_TLB_VECTOR, invalidate_interrupt);
alloc_intr_gate(INVALIDATE_TLB_VECTOR, invalidate_interrupt);
/* IPI for generic function call */
set_intr_gate(CALL_FUNCTION_VECTOR, call_function_interrupt);
alloc_intr_gate(CALL_FUNCTION_VECTOR, call_function_interrupt);
}
#endif
......@@ -1370,15 +1352,15 @@ void __init apic_intr_init(void)
smp_intr_init();
#endif
/* self generated IPI for local APIC timer */
set_intr_gate(LOCAL_TIMER_VECTOR, apic_timer_interrupt);
alloc_intr_gate(LOCAL_TIMER_VECTOR, apic_timer_interrupt);
/* IPI vectors for APIC spurious and error interrupts */
set_intr_gate(SPURIOUS_APIC_VECTOR, spurious_interrupt);
set_intr_gate(ERROR_APIC_VECTOR, error_interrupt);
alloc_intr_gate(SPURIOUS_APIC_VECTOR, spurious_interrupt);
alloc_intr_gate(ERROR_APIC_VECTOR, error_interrupt);
/* thermal monitor LVT interrupt */
#ifdef CONFIG_X86_MCE_P4THERMAL
set_intr_gate(THERMAL_APIC_VECTOR, thermal_interrupt);
alloc_intr_gate(THERMAL_APIC_VECTOR, thermal_interrupt);
#endif
}
......@@ -1513,6 +1495,9 @@ void __cpuinit generic_processor_info(int apicid, int version)
*/
cpu = 0;
if (apicid > max_physical_apicid)
max_physical_apicid = apicid;
/*
* Would be preferable to switch to bigsmp when CONFIG_HOTPLUG_CPU=y
* but we need to work other dependencies like SMP_SUSPEND etc
......@@ -1520,7 +1505,7 @@ void __cpuinit generic_processor_info(int apicid, int version)
* if (CPU_HOTPLUG_ENABLED || num_processors > 8)
* - Ashok Raj <ashok.raj@intel.com>
*/
if (num_processors > 8) {
if (max_physical_apicid >= 8) {
switch (boot_cpu_data.x86_vendor) {
case X86_VENDOR_INTEL:
if (!APIC_XAPIC(version)) {
......@@ -1534,9 +1519,9 @@ void __cpuinit generic_processor_info(int apicid, int version)
}
#ifdef CONFIG_SMP
/* are we being called early in kernel startup? */
if (x86_cpu_to_apicid_early_ptr) {
u16 *cpu_to_apicid = x86_cpu_to_apicid_early_ptr;
u16 *bios_cpu_apicid = x86_bios_cpu_apicid_early_ptr;
if (early_per_cpu_ptr(x86_cpu_to_apicid)) {
u16 *cpu_to_apicid = early_per_cpu_ptr(x86_cpu_to_apicid);
u16 *bios_cpu_apicid = early_per_cpu_ptr(x86_bios_cpu_apicid);
cpu_to_apicid[cpu] = apicid;
bios_cpu_apicid[cpu] = apicid;
......@@ -1703,14 +1688,14 @@ static void apic_pm_activate(void) { }
*/
static int __init parse_lapic(char *arg)
{
enable_local_apic = 1;
force_enable_local_apic = 1;
return 0;
}
early_param("lapic", parse_lapic);
static int __init parse_nolapic(char *arg)
{
enable_local_apic = -1;
disable_apic = 1;
clear_cpu_cap(&boot_cpu_data, X86_FEATURE_APIC);
return 0;
}
......@@ -1740,3 +1725,21 @@ static int __init apic_set_verbosity(char *str)
}
__setup("apic=", apic_set_verbosity);
static int __init lapic_insert_resource(void)
{
if (!apic_phys)
return -1;
/* Put local APIC into the resource map. */
lapic_resource.start = apic_phys;
lapic_resource.end = lapic_resource.start + PAGE_SIZE - 1;
insert_resource(&iomem_resource, &lapic_resource);
return 0;
}
/*
* need call insert after e820_reserve_resources()
* that is using request_resource
*/
late_initcall(lapic_insert_resource);
......@@ -228,6 +228,7 @@
#include <linux/suspend.h>
#include <linux/kthread.h>
#include <linux/jiffies.h>
#include <linux/smp_lock.h>
#include <asm/system.h>
#include <asm/uaccess.h>
......@@ -1149,7 +1150,7 @@ static void queue_event(apm_event_t event, struct apm_user *sender)
as->event_tail = 0;
}
as->events[as->event_head] = event;
if ((!as->suser) || (!as->writer))
if (!as->suser || !as->writer)
continue;
switch (event) {
case APM_SYS_SUSPEND:
......@@ -1396,7 +1397,7 @@ static void apm_mainloop(void)
static int check_apm_user(struct apm_user *as, const char *func)
{
if ((as == NULL) || (as->magic != APM_BIOS_MAGIC)) {
if (as == NULL || as->magic != APM_BIOS_MAGIC) {
printk(KERN_ERR "apm: %s passed bad filp\n", func);
return 1;
}
......@@ -1459,18 +1460,19 @@ static unsigned int do_poll(struct file *fp, poll_table *wait)
return 0;
}
static int do_ioctl(struct inode *inode, struct file *filp,
u_int cmd, u_long arg)
static long do_ioctl(struct file *filp, u_int cmd, u_long arg)
{
struct apm_user *as;
int ret;
as = filp->private_data;
if (check_apm_user(as, "ioctl"))
return -EIO;
if ((!as->suser) || (!as->writer))
if (!as->suser || !as->writer)
return -EPERM;
switch (cmd) {
case APM_IOC_STANDBY:
lock_kernel();
if (as->standbys_read > 0) {
as->standbys_read--;
as->standbys_pending--;
......@@ -1479,8 +1481,10 @@ static int do_ioctl(struct inode *inode, struct file *filp,
queue_event(APM_USER_STANDBY, as);
if (standbys_pending <= 0)
standby();
unlock_kernel();
break;
case APM_IOC_SUSPEND:
lock_kernel();
if (as->suspends_read > 0) {
as->suspends_read--;
as->suspends_pending--;
......@@ -1488,16 +1492,17 @@ static int do_ioctl(struct inode *inode, struct file *filp,
} else
queue_event(APM_USER_SUSPEND, as);
if (suspends_pending <= 0) {
return suspend(1);
ret = suspend(1);
} else {
as->suspend_wait = 1;
wait_event_interruptible(apm_suspend_waitqueue,
as->suspend_wait == 0);
return as->suspend_result;
ret = as->suspend_result;
}
break;
unlock_kernel();
return ret;
default:
return -EINVAL;
return -ENOTTY;
}
return 0;
}
......@@ -1860,7 +1865,7 @@ static const struct file_operations apm_bios_fops = {
.owner = THIS_MODULE,
.read = do_read,
.poll = do_poll,
.ioctl = do_ioctl,
.unlocked_ioctl = do_ioctl,
.open = do_open,
.release = do_release,
};
......
......@@ -111,7 +111,7 @@ void foo(void)
OFFSET(PV_IRQ_irq_disable, pv_irq_ops, irq_disable);
OFFSET(PV_IRQ_irq_enable, pv_irq_ops, irq_enable);
OFFSET(PV_CPU_iret, pv_cpu_ops, iret);
OFFSET(PV_CPU_irq_enable_syscall_ret, pv_cpu_ops, irq_enable_syscall_ret);
OFFSET(PV_CPU_irq_enable_sysexit, pv_cpu_ops, irq_enable_sysexit);
OFFSET(PV_CPU_read_cr0, pv_cpu_ops, read_cr0);
#endif
......
......@@ -6,11 +6,15 @@ obj-y := intel_cacheinfo.o addon_cpuid_features.o
obj-y += proc.o feature_names.o
obj-$(CONFIG_X86_32) += common.o bugs.o
obj-$(CONFIG_X86_64) += common_64.o bugs_64.o
obj-$(CONFIG_X86_32) += amd.o
obj-$(CONFIG_X86_64) += amd_64.o
obj-$(CONFIG_X86_32) += cyrix.o
obj-$(CONFIG_X86_32) += centaur.o
obj-$(CONFIG_X86_64) += centaur_64.o
obj-$(CONFIG_X86_32) += transmeta.o
obj-$(CONFIG_X86_32) += intel.o
obj-$(CONFIG_X86_64) += intel_64.o
obj-$(CONFIG_X86_32) += umc.o
obj-$(CONFIG_X86_MCE) += mcheck/
......