Commit d403a648 authored by Linus Torvalds

Merge phase #1 of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip

This merges phase 1 of the x86 tree, which is a collection of branches:

  x86/alternatives, x86/cleanups, x86/commandline, x86/crashdump,
  x86/debug, x86/defconfig, x86/doc, x86/exports, x86/fpu, x86/gart,
  x86/idle, x86/mm, x86/mtrr, x86/nmi-watchdog, x86/oprofile,
  x86/paravirt, x86/reboot, x86/sparse-fixes, x86/tsc, x86/urgent and
  x86/vmalloc

and as Ingo says: "these are the easiest, purely independent x86 topics
with no conflicts, in one nice Octopus merge".

* 'x86-v28-for-linus-phase1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (147 commits)
  x86: mtrr_cleanup: treat WRPROT as UNCACHEABLE
  x86: mtrr_cleanup: first 1M may be covered in var mtrrs
  x86: mtrr_cleanup: print out correct type v2
  x86: trivial printk fix in efi.c
  x86, debug: mtrr_cleanup print out var mtrr before change it
  x86: mtrr_cleanup try gran_size to less than 1M, v3
  x86: mtrr_cleanup try gran_size to less than 1M, cleanup
  x86: change MTRR_SANITIZER to def_bool y
  x86, debug printouts: IOMMU setup failures should not be KERN_ERR
  x86: export set_memory_ro and set_memory_rw
  x86: mtrr_cleanup try gran_size to less than 1M
  x86: mtrr_cleanup prepare to make gran_size to less 1M
  x86: mtrr_cleanup safe to get more spare regs now
  x86_64: be less annoying on boot, v2
  x86: mtrr_cleanup hole size should be less than half of chunk_size, v2
  x86: add mtrr_cleanup_debug command line
  x86: mtrr_cleanup optimization, v2
  x86: don't need to go to chunksize to 4G
  x86_64: be less annoying on boot
  x86, olpc: fix endian bug in openfirmware workaround
  ...
parents ed458df4 e496e3d6
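As a side note on the "Octopus merge" mentioned above: giving `git merge` more than one head produces a single merge commit with one parent per merged branch plus the current one. A minimal sketch in a throwaway repository (branch names here are illustrative, not the actual tip-tree topics):

```shell
# Build a scratch repo with three topic branches, then merge them all
# at once -- git picks the octopus strategy automatically.
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email you@example.com
git config user.name you
main=$(git symbolic-ref --short HEAD)   # master or main, per git version
git commit -q --allow-empty -m base
for t in topic1 topic2 topic3; do
    git checkout -q -b "$t" "$main"
    echo "$t" > "$t.txt"
    git add "$t.txt"
    git commit -q -m "$t"
done
git checkout -q "$main"
# One merge commit with four parents: $main plus the three topics.
git merge -q -m "octopus of topic1, topic2, topic3" topic1 topic2 topic3
git cat-file -p HEAD | grep -c '^parent '   # -> 4
```

The octopus strategy refuses to do anything that needs manual conflict resolution, which is why the commit message stresses these are "purely independent" topics "with no conflicts".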
@@ -251,8 +251,6 @@ mono.txt
 	- how to execute Mono-based .NET binaries with the help of BINFMT_MISC.
 moxa-smartio
 	- file with info on installing/using Moxa multiport serial driver.
-mtrr.txt
-	- how to use PPro Memory Type Range Registers to increase performance.
 mutex-design.txt
 	- info on the generic mutex subsystem.
 namespaces/
...
@@ -463,12 +463,6 @@ and is between 256 and 4096 characters. It is defined in the file
 			Range: 0 - 8192
 			Default: 64
-	disable_8254_timer
-	enable_8254_timer
-			[IA32/X86_64] Disable/Enable interrupt 0 timer routing
-			over the 8254 in addition to over the IO-APIC. The
-			kernel tries to set a sensible default.
 	hpet=		[X86-32,HPET] option to control HPET usage
 			Format: { enable (default) | disable | force }
 			disable: disable HPET and use PIT instead
@@ -1882,6 +1876,12 @@ and is between 256 and 4096 characters. It is defined in the file
 	shapers=	[NET]
 			Maximal number of shapers.
+	show_msr=	[x86] show boot-time MSR settings
+			Format: { <integer> }
+			Show boot-time (BIOS-initialized) MSR settings.
+			The parameter means the number of CPUs to show,
+			for example 1 means boot CPU only.
+
 	sim710=		[SCSI,HW]
 			See header of drivers/scsi/sim710.c.
...
+00-INDEX
+	- this file
+mtrr.txt
+	- how to use x86 Memory Type Range Registers to increase performance
@@ -308,7 +308,7 @@ Protocol:	2.00+
 Field name:	start_sys
 Type:		read
-Offset/size:	0x20c/4
+Offset/size:	0x20c/2
 Protocol:	2.00+
 
   The load low segment (0x1000). Obsolete.
...
@@ -18,7 +18,7 @@ Richard Gooch
   The AMD K6-2 (stepping 8 and above) and K6-3 processors have two
 MTRRs. These are supported. The AMD Athlon family provide 8 Intel
 style MTRRs.
 
   The Centaur C6 (WinChip) has 8 MCRs, allowing write-combining. These
 are supported.
@@ -87,7 +87,7 @@ reg00: base=0x00000000 (   0MB), size=  64MB: write-back, count=1
 reg01: base=0xfb000000 (4016MB), size=  16MB: write-combining, count=1
 reg02: base=0xfb000000 (4016MB), size=   4kB: uncachable, count=1
 
 Some cards (especially Voodoo Graphics boards) need this 4 kB area
 excluded from the beginning of the region because it is used for
 registers.
...
@@ -14,6 +14,10 @@ PAT allows for different types of memory attributes. The most commonly used
 ones that will be supported at this time are Write-back, Uncached,
 Write-combined and Uncached Minus.
 
+
+PAT APIs
+--------
+
 There are many different APIs in the kernel that allows setting of memory
 attributes at the page level. In order to avoid aliasing, these interfaces
 should be used thoughtfully. Below is a table of interfaces available,
@@ -26,38 +30,38 @@ address range to avoid any aliasing.
 API                    |    RAM   |  ACPI,...  |  Reserved/Holes  |
 -----------------------|----------|------------|------------------|
                        |          |            |                  |
-ioremap                |    --    |    UC      |       UC         |
+ioremap                |    --    |    UC-     |       UC-        |
                        |          |            |                  |
 ioremap_cache          |    --    |    WB      |       WB         |
                        |          |            |                  |
-ioremap_nocache        |    --    |    UC      |       UC         |
+ioremap_nocache        |    --    |    UC-     |       UC-        |
                        |          |            |                  |
 ioremap_wc             |    --    |    --      |       WC         |
                        |          |            |                  |
-set_memory_uc          |    UC    |    --      |       --         |
+set_memory_uc          |    UC-   |    --      |       --         |
 set_memory_wb          |          |            |                  |
                        |          |            |                  |
 set_memory_wc          |    WC    |    --      |       --         |
 set_memory_wb          |          |            |                  |
                        |          |            |                  |
-pci sysfs resource     |    --    |    --      |       UC         |
+pci sysfs resource     |    --    |    --      |       UC-        |
                        |          |            |                  |
 pci sysfs resource_wc  |    --    |    --      |       WC         |
  is IORESOURCE_PREFETCH|          |            |                  |
                        |          |            |                  |
-pci proc               |    --    |    --      |       UC         |
+pci proc               |    --    |    --      |       UC-        |
  !PCIIOC_WRITE_COMBINE |          |            |                  |
                        |          |            |                  |
 pci proc               |    --    |    --      |       WC         |
  PCIIOC_WRITE_COMBINE  |          |            |                  |
                        |          |            |                  |
-/dev/mem               |    --    |    UC      |       UC         |
+/dev/mem               |    --    |  WB/WC/UC- |    WB/WC/UC-     |
 read-write             |          |            |                  |
                        |          |            |                  |
-/dev/mem               |    --    |    UC      |       UC         |
+/dev/mem               |    --    |    UC-     |       UC-        |
 mmap SYNC flag         |          |            |                  |
                        |          |            |                  |
-/dev/mem               |    --    |  WB/WC/UC  |    WB/WC/UC      |
+/dev/mem               |    --    |  WB/WC/UC- |    WB/WC/UC-     |
 mmap !SYNC flag        |          |(from exist-|  (from exist-    |
 and                    |          | ing alias) |   ing alias)     |
 any alias to this area |          |            |                  |
@@ -68,7 +72,7 @@ pci proc               |    --    |    --      |       WC         |
 and                    |          |            |                  |
 MTRR says WB           |          |            |                  |
                        |          |            |                  |
-/dev/mem               |    --    |    --      |     UC_MINUS     |
+/dev/mem               |    --    |    --      |       UC-        |
 mmap !SYNC flag        |          |            |                  |
 no alias to this area  |          |            |                  |
 and                    |          |            |                  |
@@ -98,3 +102,35 @@ types.
 
 Drivers should use set_memory_[uc|wc] to set access type for RAM ranges.
 
+
+PAT debugging
+-------------
+
+With CONFIG_DEBUG_FS enabled, PAT memtype list can be examined by
+
+# mount -t debugfs debugfs /sys/kernel/debug
+# cat /sys/kernel/debug/x86/pat_memtype_list
+PAT memtype list:
+uncached-minus @ 0x7fadf000-0x7fae0000
+uncached-minus @ 0x7fb19000-0x7fb1a000
+uncached-minus @ 0x7fb1a000-0x7fb1b000
+uncached-minus @ 0x7fb1b000-0x7fb1c000
+uncached-minus @ 0x7fb1c000-0x7fb1d000
+uncached-minus @ 0x7fb1d000-0x7fb1e000
+uncached-minus @ 0x7fb1e000-0x7fb25000
+uncached-minus @ 0x7fb25000-0x7fb26000
+uncached-minus @ 0x7fb26000-0x7fb27000
+uncached-minus @ 0x7fb27000-0x7fb28000
+uncached-minus @ 0x7fb28000-0x7fb2e000
+uncached-minus @ 0x7fb2e000-0x7fb2f000
+uncached-minus @ 0x7fb2f000-0x7fb30000
+uncached-minus @ 0x7fb31000-0x7fb32000
+uncached-minus @ 0x80000000-0x90000000
+
+This list shows physical address ranges and various PAT settings used to
+access those physical address ranges.
+
+Another, more verbose way of getting PAT related debug messages is with
+"debugpat" boot parameter. With this parameter, various debug messages are
+printed to dmesg log.
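The memtype listing above is line-oriented and easy to post-process. A minimal sketch that prints the size of each range; it works on a saved copy of two sample lines from the listing, so it runs without debugfs or PAT support (the `/tmp/pat_sample.txt` path is just for illustration):

```shell
# Compute the size of each range in a saved pat_memtype_list dump.
# The two sample lines are copied from the listing above.
cat > /tmp/pat_sample.txt <<'EOF'
uncached-minus @ 0x7fadf000-0x7fae0000
uncached-minus @ 0x80000000-0x90000000
EOF
while read -r type at range; do
    start=${range%-*}
    end=${range#*-}
    # Shell arithmetic accepts the 0x-prefixed addresses directly.
    printf '%-14s %s  %d KiB\n' "$type" "$range" $(( (end - start) / 1024 ))
done < /tmp/pat_sample.txt
```

For the two sample lines this reports 4 KiB and 262144 KiB respectively (0x1000 and 0x10000000 bytes).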
@@ -54,10 +54,6 @@ APICs
 		 apicmaintimer. Useful when your PIT timer is totally
 		 broken.
-   disable_8254_timer / enable_8254_timer
-		 Enable interrupt 0 timer routing over the 8254 in addition to over
-		 the IO-APIC. The kernel tries to set a sensible default.
-
 Early Console
 
    syntax: earlyprintk=vga
...
@@ -29,6 +29,7 @@ config X86
 	select HAVE_FTRACE
 	select HAVE_KVM if ((X86_32 && !X86_VOYAGER && !X86_VISWS && !X86_NUMAQ) || X86_64)
 	select HAVE_ARCH_KGDB if !X86_VOYAGER
+	select HAVE_ARCH_TRACEHOOK
 	select HAVE_GENERIC_DMA_COHERENT if X86_32
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS
@@ -1020,7 +1021,7 @@ config HAVE_ARCH_ALLOC_REMAP
 config ARCH_FLATMEM_ENABLE
 	def_bool y
-	depends on X86_32 && ARCH_SELECT_MEMORY_MODEL && X86_PC && !NUMA
+	depends on X86_32 && ARCH_SELECT_MEMORY_MODEL && !NUMA
 
 config ARCH_DISCONTIGMEM_ENABLE
 	def_bool y
@@ -1036,7 +1037,7 @@ config ARCH_SPARSEMEM_DEFAULT
 config ARCH_SPARSEMEM_ENABLE
 	def_bool y
-	depends on X86_64 || NUMA || (EXPERIMENTAL && X86_PC)
+	depends on X86_64 || NUMA || (EXPERIMENTAL && X86_PC) || X86_GENERICARCH
 	select SPARSEMEM_STATIC if X86_32
 	select SPARSEMEM_VMEMMAP_ENABLE if X86_64
@@ -1117,10 +1118,10 @@ config MTRR
 	  You can safely say Y even if your machine doesn't have MTRRs, you'll
 	  just add about 9 KB to your kernel.
 
-	  See <file:Documentation/mtrr.txt> for more information.
+	  See <file:Documentation/x86/mtrr.txt> for more information.
 
 config MTRR_SANITIZER
-	bool
+	def_bool y
 	prompt "MTRR cleanup support"
 	depends on MTRR
 	help
@@ -1131,7 +1132,7 @@ config MTRR_SANITIZER
 	  The largest mtrr entry size for a continous block can be set with
 	  mtrr_chunk_size.
 
-	  If unsure, say N.
+	  If unsure, say Y.
 
 config MTRR_SANITIZER_ENABLE_DEFAULT
 	int "MTRR cleanup enable value (0-1)"
@@ -1191,7 +1192,6 @@ config IRQBALANCE
 config SECCOMP
 	def_bool y
 	prompt "Enable seccomp to safely compute untrusted bytecode"
-	depends on PROC_FS
 	help
 	  This kernel feature is useful for number crunching applications
 	  that may need to compute untrusted bytecode during their
@@ -1199,7 +1199,7 @@ config SECCOMP
 	  the process as file descriptors supporting the read/write
 	  syscalls, it's possible to isolate those applications in
 	  their own address space using seccomp. Once seccomp is
-	  enabled via /proc/<pid>/seccomp, it cannot be disabled
+	  enabled via prctl(PR_SET_SECCOMP), it cannot be disabled
 	  and the task is only allowed to execute a few safe syscalls
 	  defined by each seccomp mode.
@@ -1356,14 +1356,14 @@ config PHYSICAL_ALIGN
 	  Don't change this unless you know what you are doing.
 
 config HOTPLUG_CPU
-	bool "Support for suspend on SMP and hot-pluggable CPUs (EXPERIMENTAL)"
-	depends on SMP && HOTPLUG && EXPERIMENTAL && !X86_VOYAGER
+	bool "Support for hot-pluggable CPUs"
+	depends on SMP && HOTPLUG && !X86_VOYAGER
 	---help---
-	  Say Y here to experiment with turning CPUs off and on, and to
-	  enable suspend on SMP systems. CPUs can be controlled through
-	  /sys/devices/system/cpu.
-	  Say N if you want to disable CPU hotplug and don't need to
-	  suspend.
+	  Say Y here to allow turning CPUs off and on. CPUs can be
+	  controlled through /sys/devices/system/cpu.
+	  ( Note: power management support will enable this option
+	    automatically on SMP systems. )
+	  Say N if you want to disable CPU hotplug.
 
 config COMPAT_VDSO
 	def_bool y
@@ -1378,6 +1378,51 @@ config COMPAT_VDSO
 	  If unsure, say Y.
 
+config CMDLINE_BOOL
+	bool "Built-in kernel command line"
+	default n
+	help
+	  Allow for specifying boot arguments to the kernel at
+	  build time.  On some systems (e.g. embedded ones), it is
+	  necessary or convenient to provide some or all of the
+	  kernel boot arguments with the kernel itself (that is,
+	  to not rely on the boot loader to provide them.)
+
+	  To compile command line arguments into the kernel,
+	  set this option to 'Y', then fill in the
+	  boot arguments in CONFIG_CMDLINE.
+
+	  Systems with fully functional boot loaders (i.e. non-embedded)
+	  should leave this option set to 'N'.
+
+config CMDLINE
+	string "Built-in kernel command string"
+	depends on CMDLINE_BOOL
+	default ""
+	help
+	  Enter arguments here that should be compiled into the kernel
+	  image and used at boot time.  If the boot loader provides a
+	  command line at boot time, it is appended to this string to
+	  form the full kernel command line, when the system boots.
+
+	  However, you can use the CONFIG_CMDLINE_OVERRIDE option to
+	  change this behavior.
+
+	  In most cases, the command line (whether built-in or provided
+	  by the boot loader) should specify the device for the root
+	  file system.
+
+config CMDLINE_OVERRIDE
+	bool "Built-in command line overrides boot loader arguments"
+	default n
+	depends on CMDLINE_BOOL
+	help
+	  Set this option to 'Y' to have the kernel ignore the boot loader
+	  command line, and use ONLY the built-in command line.
+
+	  This is used to work around broken boot loaders.  This should
+	  be set to 'N' under normal conditions.
+
 endmenu
 
 config ARCH_ENABLE_MEMORY_HOTPLUG
@@ -1773,7 +1818,7 @@ config COMPAT_FOR_U64_ALIGNMENT
 config SYSVIPC_COMPAT
 	def_bool y
-	depends on X86_64 && COMPAT && SYSVIPC
+	depends on COMPAT && SYSVIPC
 
 endmenu
...
@@ -418,3 +418,21 @@ config X86_MINIMUM_CPU_FAMILY
 config X86_DEBUGCTLMSR
 	def_bool y
 	depends on !(MK6 || MWINCHIPC6 || MWINCHIP2 || MWINCHIP3D || MCYRIXIII || M586MMX || M586TSC || M586 || M486 || M386)
+
+config X86_DS
+	bool "Debug Store support"
+	default y
+	help
+	  Add support for Debug Store.
+	  This allows the kernel to provide a memory buffer to the hardware
+	  to store various profiling and tracing events.
+
+config X86_PTRACE_BTS
+	bool "ptrace interface to Branch Trace Store"
+	default y
+	depends on (X86_DS && X86_DEBUGCTLMSR)
+	help
+	  Add a ptrace interface to allow collecting an execution trace
+	  of the traced task.
+	  This collects control flow changes in a (cyclic) buffer and allows
+	  debuggers to fill in the gaps and show an execution trace of the debuggee.
@@ -137,14 +137,15 @@ relocated:
  */
 	movl output_len(%ebx), %eax
 	pushl %eax
+				# push arguments for decompress_kernel:
 	pushl %ebp		# output address
 	movl input_len(%ebx), %eax
 	pushl %eax		# input_len
 	leal input_data(%ebx), %eax
 	pushl %eax		# input_data
 	leal boot_heap(%ebx), %eax
-	pushl %eax		# heap area as third argument
-	pushl %esi		# real mode pointer as second arg
+	pushl %eax		# heap area
+	pushl %esi		# real mode pointer
 	call decompress_kernel
 	addl $20, %esp
 	popl %ecx
...
@@ -16,7 +16,7 @@
  */
 #undef CONFIG_PARAVIRT
 #ifdef CONFIG_X86_32
-#define _ASM_DESC_H_ 1
+#define ASM_X86__DESC_H 1
 #endif
 
 #ifdef CONFIG_X86_64
@@ -27,7 +27,7 @@
 #include <linux/linkage.h>
 #include <linux/screen_info.h>
 #include <linux/elf.h>
-#include <asm/io.h>
+#include <linux/io.h>
 #include <asm/page.h>
 #include <asm/boot.h>
 #include <asm/bootparam.h>
@@ -251,7 +251,7 @@ static void __putstr(int error, const char *s)
 				y--;
 			}
 		} else {
-			vidmem [(x + cols * y) * 2] = c;
+			vidmem[(x + cols * y) * 2] = c;
 			if (++x >= cols) {
 				x = 0;
 				if (++y >= lines) {
@@ -277,7 +277,8 @@ static void *memset(void *s, int c, unsigned n)
 	int i;
 	char *ss = s;
 
-	for (i = 0; i < n; i++) ss[i] = c;
+	for (i = 0; i < n; i++)
+		ss[i] = c;
 	return s;
 }
@@ -287,7 +288,8 @@ static void *memcpy(void *dest, const void *src, unsigned n)
 	const char *s = src;
 	char *d = dest;
 
-	for (i = 0; i < n; i++) d[i] = s[i];
+	for (i = 0; i < n; i++)
+		d[i] = s[i];
 	return dest;
 }
...
@@ -30,7 +30,6 @@ SYSSEG		= DEF_SYSSEG		/* system loaded at 0x10000 (65536) */
 SYSSIZE		= DEF_SYSSIZE		/* system size: # of 16-byte clicks */
 					/* to be loaded */
 ROOT_DEV	= 0			/* ROOT_DEV is now written by "build" */
-SWAP_DEV	= 0			/* SWAP_DEV is now written by "build" */
 
 #ifndef SVGA_MODE
 #define SVGA_MODE ASK_VGA
...
 #
 # Automatically generated make config: don't edit
-# Linux kernel version: 2.6.27-rc4
-# Mon Aug 25 15:04:00 2008
+# Linux kernel version: 2.6.27-rc5
+# Wed Sep  3 17:23:09 2008
 #
 # CONFIG_64BIT is not set
 CONFIG_X86_32=y
@@ -202,7 +202,7 @@ CONFIG_SCHED_NO_NO_OMIT_FRAME_POINTER=y
 # CONFIG_M586 is not set
 # CONFIG_M586TSC is not set
 # CONFIG_M586MMX is not set
-# CONFIG_M686 is not set
+CONFIG_M686=y
 # CONFIG_MPENTIUMII is not set
 # CONFIG_MPENTIUMIII is not set
 # CONFIG_MPENTIUMM is not set
@@ -221,13 +221,14 @@ CONFIG_SCHED_NO_NO_OMIT_FRAME_POINTER=y
 # CONFIG_MVIAC3_2 is not set
 # CONFIG_MVIAC7 is not set
 # CONFIG_MPSC is not set
-CONFIG_MCORE2=y
+# CONFIG_MCORE2 is not set
 # CONFIG_GENERIC_CPU is not set
 CONFIG_X86_GENERIC=y
 CONFIG_X86_CPU=y
 CONFIG_X86_CMPXCHG=y
 CONFIG_X86_L1_CACHE_SHIFT=7
 CONFIG_X86_XADD=y
+# CONFIG_X86_PPRO_FENCE is not set
 CONFIG_X86_WP_WORKS_OK=y
 CONFIG_X86_INVLPG=y
 CONFIG_X86_BSWAP=y
@@ -235,14 +236,15 @@ CONFIG_X86_POPAD_OK=y
 CONFIG_X86_INTEL_USERCOPY=y
 CONFIG_X86_USE_PPRO_CHECKSUM=y
 CONFIG_X86_TSC=y
+CONFIG_X86_CMOV=y
 CONFIG_X86_MINIMUM_CPU_FAMILY=4
 CONFIG_X86_DEBUGCTLMSR=y
 CONFIG_HPET_TIMER=y
 CONFIG_HPET_EMULATE_RTC=y
 CONFIG_DMI=y
 # CONFIG_IOMMU_HELPER is not set
-CONFIG_NR_CPUS=4
-# CONFIG_SCHED_SMT is not set
+CONFIG_NR_CPUS=64
+CONFIG_SCHED_SMT=y
 CONFIG_SCHED_MC=y
 # CONFIG_PREEMPT_NONE is not set
 CONFIG_PREEMPT_VOLUNTARY=y
@@ -254,7 +256,8 @@ CONFIG_VM86=y
 # CONFIG_TOSHIBA is not set
 # CONFIG_I8K is not set
 CONFIG_X86_REBOOTFIXUPS=y
-# CONFIG_MICROCODE is not set
+CONFIG_MICROCODE=y
+CONFIG_MICROCODE_OLD_INTERFACE=y
 CONFIG_X86_MSR=y
 CONFIG_X86_CPUID=y
 # CONFIG_NOHIGHMEM is not set
@@ -2115,7 +2118,7 @@ CONFIG_IO_DELAY_0X80=y
 CONFIG_DEFAULT_IO_DELAY_TYPE=0
 CONFIG_DEBUG_BOOT_PARAMS=y
 # CONFIG_CPA_DEBUG is not set
-# CONFIG_OPTIMIZE_INLINING is not set
+CONFIG_OPTIMIZE_INLINING=y
 
 #
 # Security options
...
 #
 # Automatically generated make config: don't edit
-# Linux kernel version: 2.6.27-rc4
-# Mon Aug 25 14:40:46 2008
+# Linux kernel version: 2.6.27-rc5
+# Wed Sep  3 17:13:39 2008
 #
 CONFIG_64BIT=y
 # CONFIG_X86_32 is not set
@@ -218,17 +218,14 @@ CONFIG_X86_PC=y
 # CONFIG_MVIAC3_2 is not set
 # CONFIG_MVIAC7 is not set
 # CONFIG_MPSC is not set
-CONFIG_MCORE2=y
-# CONFIG_GENERIC_CPU is not set
+# CONFIG_MCORE2 is not set
+CONFIG_GENERIC_CPU=y
 CONFIG_X86_CPU=y
-CONFIG_X86_L1_CACHE_BYTES=64
-CONFIG_X86_INTERNODE_CACHE_BYTES=64
+CONFIG_X86_L1_CACHE_BYTES=128
+CONFIG_X86_INTERNODE_CACHE_BYTES=128
 CONFIG_X86_CMPXCHG=y
-CONFIG_X86_L1_CACHE_SHIFT=6
+CONFIG_X86_L1_CACHE_SHIFT=7
 CONFIG_X86_WP_WORKS_OK=y
-CONFIG_X86_INTEL_USERCOPY=y
-CONFIG_X86_USE_PPRO_CHECKSUM=y
-CONFIG_X86_P6_NOP=y
 CONFIG_X86_TSC=y
 CONFIG_X86_CMPXCHG64=y
 CONFIG_X86_CMOV=y
@@ -243,9 +240,8 @@ CONFIG_CALGARY_IOMMU_ENABLED_BY_DEFAULT=y
 CONFIG_AMD_IOMMU=y
 CONFIG_SWIOTLB=y
 CONFIG_IOMMU_HELPER=y
-# CONFIG_MAXSMP is not set
-CONFIG_NR_CPUS=4
-# CONFIG_SCHED_SMT is not set
+CONFIG_NR_CPUS=64
+CONFIG_SCHED_SMT=y
 CONFIG_SCHED_MC=y
 # CONFIG_PREEMPT_NONE is not set
 CONFIG_PREEMPT_VOLUNTARY=y
@@ -254,7 +250,8 @@ CONFIG_X86_LOCAL_APIC=y
 CONFIG_X86_IO_APIC=y
 # CONFIG_X86_MCE is not set
 # CONFIG_I8K is not set
-# CONFIG_MICROCODE is not set
+CONFIG_MICROCODE=y
+CONFIG_MICROCODE_OLD_INTERFACE=y
 CONFIG_X86_MSR=y
 CONFIG_X86_CPUID=y
 CONFIG_NUMA=y
@@ -290,7 +287,7 @@ CONFIG_BOUNCE=y
 CONFIG_VIRT_TO_BUS=y
 CONFIG_MTRR=y
 # CONFIG_MTRR_SANITIZER is not set
-# CONFIG_X86_PAT is not set
+CONFIG_X86_PAT=y
 CONFIG_EFI=y
 CONFIG_SECCOMP=y
 # CONFIG_HZ_100 is not set
@@ -2089,7 +2086,7 @@ CONFIG_IO_DELAY_0X80=y
 CONFIG_DEFAULT_IO_DELAY_TYPE=0
 CONFIG_DEBUG_BOOT_PARAMS=y
 # CONFIG_CPA_DEBUG is not set
-# CONFIG_OPTIMIZE_INLINING is not set
+CONFIG_OPTIMIZE_INLINING=y
 
 #
 # Security options
...
@@ -85,8 +85,10 @@ static void dump_thread32(struct pt_regs *regs, struct user32 *dump)
 	dump->regs.ax = regs->ax;
 	dump->regs.ds = current->thread.ds;
 	dump->regs.es = current->thread.es;
-	asm("movl %%fs,%0" : "=r" (fs)); dump->regs.fs = fs;
-	asm("movl %%gs,%0" : "=r" (gs)); dump->regs.gs = gs;
+	savesegment(fs, fs);
+	dump->regs.fs = fs;
+	savesegment(gs, gs);
+	dump->regs.gs = gs;
 	dump->regs.orig_ax = regs->orig_ax;
 	dump->regs.ip = regs->ip;
 	dump->regs.cs = regs->cs;
@@ -430,8 +432,9 @@ static int load_aout_binary(struct linux_binprm *bprm, struct pt_regs *regs)
 	current->mm->start_stack =
 		(unsigned long)create_aout_tables((char __user *)bprm->p, bprm);
 	/* start thread */
-	asm volatile("movl %0,%%fs" :: "r" (0)); \
-	asm volatile("movl %0,%%es; movl %0,%%ds": :"r" (__USER32_DS));
+	loadsegment(fs, 0);
+	loadsegment(ds, __USER32_DS);
+	loadsegment(es, __USER32_DS);
 	load_gs_index(0);
 	(regs)->ip = ex.a_entry;
 	(regs)->sp = current->mm->start_stack;
...
@@ -206,7 +206,7 @@ struct rt_sigframe
 	{ unsigned int cur;						\
 	  unsigned short pre;						\
 	  err |= __get_user(pre, &sc->seg);				\
-	  asm volatile("movl %%" #seg ",%0" : "=r" (cur));		\
+	  savesegment(seg, cur);					\
 	  pre |= mask;							\
 	  if (pre != cur) loadsegment(seg, pre); }
@@ -235,7 +235,7 @@ static int ia32_restore_sigcontext(struct pt_regs *regs,
 	 */
 	err |= __get_user(gs, &sc->gs);
 	gs |= 3;
-	asm("movl %%gs,%0" : "=r" (oldgs));
+	savesegment(gs, oldgs);
 	if (gs != oldgs)
 		load_gs_index(gs);
@@ -355,14 +355,13 @@ static int ia32_setup_sigcontext(struct sigcontext_ia32 __user *sc,
 {
 	int tmp, err = 0;
 
-	tmp = 0;
-	__asm__("movl %%gs,%0" : "=r"(tmp): "0"(tmp));
+	savesegment(gs, tmp);
 	err |= __put_user(tmp, (unsigned int __user *)&sc->gs);
-	__asm__("movl %%fs,%0" : "=r"(tmp): "0"(tmp));
+	savesegment(fs, tmp);
 	err |= __put_user(tmp, (unsigned int __user *)&sc->fs);
-	__asm__("movl %%ds,%0" : "=r"(tmp): "0"(tmp));
+	savesegment(ds, tmp);
 	err |= __put_user(tmp, (unsigned int __user *)&sc->ds);
-	__asm__("movl %%es,%0" : "=r"(tmp): "0"(tmp));
+	savesegment(es, tmp);
 	err |= __put_user(tmp, (unsigned int __user *)&sc->es);
 
 	err |= __put_user((u32)regs->di, &sc->di);
@@ -498,8 +497,8 @@ int ia32_setup_frame(int sig, struct k_sigaction *ka,
 	regs->dx = 0;
 	regs->cx = 0;
 
-	asm volatile("movl %0,%%ds" :: "r" (__USER32_DS));
-	asm volatile("movl %0,%%es" :: "r" (__USER32_DS));
+	loadsegment(ds, __USER32_DS);
+	loadsegment(es, __USER32_DS);
 	regs->cs = __USER32_CS;
 	regs->ss = __USER32_DS;
@@ -591,8 +590,8 @@ int ia32_setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
 	regs->dx = (unsigned long) &frame->info;
 	regs->cx = (unsigned long) &frame->uc;
 
-	asm volatile("movl %0,%%ds" :: "r" (__USER32_DS));
-	asm volatile("movl %0,%%es" :: "r" (__USER32_DS));
+	loadsegment(ds, __USER32_DS);
+	loadsegment(es, __USER32_DS);
 	regs->cs = __USER32_CS;
 	regs->ss = __USER32_DS;
...
@@ -556,15 +556,6 @@ asmlinkage long sys32_rt_sigqueueinfo(int pid, int sig,
 	return ret;
 }
 
-/* These are here just in case some old ia32 binary calls it. */
-asmlinkage long sys32_pause(void)
-{
-	current->state = TASK_INTERRUPTIBLE;
-	schedule();
-	return -ERESTARTNOHAND;
-}
-
 #ifdef CONFIG_SYSCTL_SYSCALL
 struct sysctl_ia32 {
 	unsigned int	name;
...
@@ -58,7 +58,6 @@ EXPORT_SYMBOL(acpi_disabled);
 #ifdef CONFIG_X86_64
 
 #include <asm/proto.h>
-#include <asm/genapic.h>
 
 #else /* X86 */
@@ -97,8 +96,6 @@ static u64 acpi_lapic_addr __initdata = APIC_DEFAULT_PHYS_BASE;
 #warning ACPI uses CMPXCHG, i486 and later hardware
 #endif
 
-static int acpi_mcfg_64bit_base_addr __initdata = FALSE;
-
 /* --------------------------------------------------------------------------
                               Boot-time Configuration
    -------------------------------------------------------------------------- */
@@ -160,6 +157,8 @@ char *__init __acpi_map_table(unsigned long phys, unsigned long size)
 struct acpi_mcfg_allocation *pci_mmcfg_config;
 int pci_mmcfg_config_num;
 
+static int acpi_mcfg_64bit_base_addr __initdata = FALSE;
+
 static int __init acpi_mcfg_oem_check(struct acpi_table_mcfg *mcfg)
 {
 	if (!strcmp(mcfg->header.oem_id, "SGI"))
...
...@@ -231,25 +231,25 @@ static void alternatives_smp_lock(u8 **start, u8 **end, u8 *text, u8 *text_end) ...@@ -231,25 +231,25 @@ static void alternatives_smp_lock(u8 **start, u8 **end, u8 *text, u8 *text_end)
continue; continue;
if (*ptr > text_end) if (*ptr > text_end)
continue; continue;
text_poke(*ptr, ((unsigned char []){0xf0}), 1); /* add lock prefix */ /* turn DS segment override prefix into lock prefix */
text_poke(*ptr, ((unsigned char []){0xf0}), 1);
}; };
} }
static void alternatives_smp_unlock(u8 **start, u8 **end, u8 *text, u8 *text_end) static void alternatives_smp_unlock(u8 **start, u8 **end, u8 *text, u8 *text_end)
{ {
u8 **ptr; u8 **ptr;
char insn[1];
if (noreplace_smp) if (noreplace_smp)
return; return;
add_nops(insn, 1);
for (ptr = start; ptr < end; ptr++) { for (ptr = start; ptr < end; ptr++) {
if (*ptr < text) if (*ptr < text)
continue; continue;
if (*ptr > text_end) if (*ptr > text_end)
continue; continue;
text_poke(*ptr, insn, 1); /* turn lock prefix into DS segment override prefix */
text_poke(*ptr, ((unsigned char []){0x3E}), 1);
}; };
} }
......
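The alternatives hunk above toggles a single byte at each recorded site: 0xF0 is the x86 LOCK prefix, and 0x3E is the DS segment override prefix, which is a harmless placeholder byte on UP. A minimal userspace sketch of the byte-patching logic (the kernel does this through text_poke(); the helper name here is hypothetical):

```c
#include <stdint.h>
#include <stddef.h>

#define LOCK_PREFIX_BYTE 0xF0  /* x86 LOCK prefix */
#define DS_PREFIX_BYTE   0x3E  /* DS segment override: benign no-op prefix */

/* Patch every recorded site to the given prefix byte, skipping pointers
 * outside [text, text_end) -- mirroring the range checks in
 * alternatives_smp_lock()/alternatives_smp_unlock(). */
static void patch_prefix(uint8_t **sites, size_t n,
                         uint8_t *text, uint8_t *text_end, uint8_t prefix)
{
    for (size_t i = 0; i < n; i++) {
        if (sites[i] < text || sites[i] >= text_end)
            continue;
        *sites[i] = prefix;   /* the kernel would use text_poke() here */
    }
}
```

Switching SMP locking on is then `patch_prefix(..., LOCK_PREFIX_BYTE)` and switching it off is `patch_prefix(..., DS_PREFIX_BYTE)`, which is why the cleanup above could drop the add_nops() scratch buffer.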
...@@ -455,11 +455,11 @@ void __init gart_iommu_hole_init(void) ...@@ -455,11 +455,11 @@ void __init gart_iommu_hole_init(void)
force_iommu || force_iommu ||
valid_agp || valid_agp ||
fallback_aper_force) { fallback_aper_force) {
printk(KERN_ERR printk(KERN_INFO
"Your BIOS doesn't leave a aperture memory hole\n"); "Your BIOS doesn't leave a aperture memory hole\n");
printk(KERN_ERR printk(KERN_INFO
"Please enable the IOMMU option in the BIOS setup\n"); "Please enable the IOMMU option in the BIOS setup\n");
printk(KERN_ERR printk(KERN_INFO
"This costs you %d MB of RAM\n", "This costs you %d MB of RAM\n",
32 << fallback_aper_order); 32 << fallback_aper_order);
......
...@@ -228,7 +228,6 @@ ...@@ -228,7 +228,6 @@
#include <linux/suspend.h> #include <linux/suspend.h>
#include <linux/kthread.h> #include <linux/kthread.h>
#include <linux/jiffies.h> #include <linux/jiffies.h>
#include <linux/smp_lock.h>
#include <asm/system.h> #include <asm/system.h>
#include <asm/uaccess.h> #include <asm/uaccess.h>
......
...@@ -22,7 +22,7 @@ ...@@ -22,7 +22,7 @@
#define __NO_STUBS 1 #define __NO_STUBS 1
#undef __SYSCALL #undef __SYSCALL
#undef _ASM_X86_64_UNISTD_H_ #undef ASM_X86__UNISTD_64_H
#define __SYSCALL(nr, sym) [nr] = 1, #define __SYSCALL(nr, sym) [nr] = 1,
static char syscalls[] = { static char syscalls[] = {
#include <asm/unistd.h> #include <asm/unistd.h>
......
...@@ -25,11 +25,11 @@ x86_bios_strerror(long status) ...@@ -25,11 +25,11 @@ x86_bios_strerror(long status)
{ {
const char *str; const char *str;
switch (status) { switch (status) {
case 0: str = "Call completed without error"; break; case 0: str = "Call completed without error"; break;
case -1: str = "Not implemented"; break; case -1: str = "Not implemented"; break;
case -2: str = "Invalid argument"; break; case -2: str = "Invalid argument"; break;
case -3: str = "Call completed with error"; break; case -3: str = "Call completed with error"; break;
default: str = "Unknown BIOS status code"; break; default: str = "Unknown BIOS status code"; break;
} }
return str; return str;
} }
......
...@@ -430,6 +430,49 @@ static __init int setup_noclflush(char *arg) ...@@ -430,6 +430,49 @@ static __init int setup_noclflush(char *arg)
} }
__setup("noclflush", setup_noclflush); __setup("noclflush", setup_noclflush);
struct msr_range {
unsigned min;
unsigned max;
};
static struct msr_range msr_range_array[] __cpuinitdata = {
{ 0x00000000, 0x00000418},
{ 0xc0000000, 0xc000040b},
{ 0xc0010000, 0xc0010142},
{ 0xc0011000, 0xc001103b},
};
static void __cpuinit print_cpu_msr(void)
{
unsigned index;
u64 val;
int i;
unsigned index_min, index_max;
for (i = 0; i < ARRAY_SIZE(msr_range_array); i++) {
index_min = msr_range_array[i].min;
index_max = msr_range_array[i].max;
for (index = index_min; index < index_max; index++) {
if (rdmsrl_amd_safe(index, &val))
continue;
printk(KERN_INFO " MSR%08x: %016llx\n", index, val);
}
}
}
static int show_msr __cpuinitdata;
static __init int setup_show_msr(char *arg)
{
int num;
get_option(&arg, &num);
if (num > 0)
show_msr = num;
return 1;
}
__setup("show_msr=", setup_show_msr);
void __cpuinit print_cpu_info(struct cpuinfo_x86 *c) void __cpuinit print_cpu_info(struct cpuinfo_x86 *c)
{ {
if (c->x86_model_id[0]) if (c->x86_model_id[0])
...@@ -439,6 +482,14 @@ void __cpuinit print_cpu_info(struct cpuinfo_x86 *c) ...@@ -439,6 +482,14 @@ void __cpuinit print_cpu_info(struct cpuinfo_x86 *c)
printk(KERN_CONT " stepping %02x\n", c->x86_mask); printk(KERN_CONT " stepping %02x\n", c->x86_mask);
else else
printk(KERN_CONT "\n"); printk(KERN_CONT "\n");
#ifdef CONFIG_SMP
if (c->cpu_index < show_msr)
print_cpu_msr();
#else
if (show_msr)
print_cpu_msr();
#endif
} }
static __init int setup_disablecpuid(char *arg) static __init int setup_disablecpuid(char *arg)
......
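The show_msr hunk above wires a boot parameter through get_option() and then, under CONFIG_SMP, dumps MSR ranges only for the first `show_msr` CPUs. A hedged userspace sketch of that policy (strtol stands in for get_option(); names are illustrative):

```c
#include <stdlib.h>

static int show_msr;  /* how many CPUs to dump MSRs for, 0 = none */

/* Sketch of setup_show_msr(): parse the count and keep it only when
 * it is positive, as the patch does. */
static void setup_show_msr_arg(const char *arg)
{
    int num = (int)strtol(arg, 0, 0);  /* stands in for get_option() */

    if (num > 0)
        show_msr = num;
}

/* The CONFIG_SMP branch of print_cpu_info(): print for cpu_index
 * below the threshold only. */
static int should_print_msr(int cpu_index)
{
    return cpu_index < show_msr;
}
```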
...@@ -222,10 +222,11 @@ static void __cpuinit init_intel(struct cpuinfo_x86 *c) ...@@ -222,10 +222,11 @@ static void __cpuinit init_intel(struct cpuinfo_x86 *c)
set_cpu_cap(c, X86_FEATURE_BTS); set_cpu_cap(c, X86_FEATURE_BTS);
if (!(l1 & (1<<12))) if (!(l1 & (1<<12)))
set_cpu_cap(c, X86_FEATURE_PEBS); set_cpu_cap(c, X86_FEATURE_PEBS);
ds_init_intel(c);
} }
if (cpu_has_bts) if (cpu_has_bts)
ds_init_intel(c); ptrace_bts_init_intel(c);
/* /*
* See if we have a good local APIC by checking for buggy Pentia, * See if we have a good local APIC by checking for buggy Pentia,
......
...@@ -401,12 +401,7 @@ static void generic_get_mtrr(unsigned int reg, unsigned long *base, ...@@ -401,12 +401,7 @@ static void generic_get_mtrr(unsigned int reg, unsigned long *base,
tmp |= ~((1<<(hi - 1)) - 1); tmp |= ~((1<<(hi - 1)) - 1);
if (tmp != mask_lo) { if (tmp != mask_lo) {
static int once = 1; WARN_ONCE(1, KERN_INFO "mtrr: your BIOS has set up an incorrect mask, fixing it up.\n");
if (once) {
printk(KERN_INFO "mtrr: your BIOS has set up an incorrect mask, fixing it up.\n");
once = 0;
}
mask_lo = tmp; mask_lo = tmp;
} }
} }
......
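The mtrr hunk above replaces an open-coded "static int once" guard with WARN_ONCE(), which prints at most once and additionally emits a stack trace. A minimal userspace sketch of the once-only semantics the old code implemented by hand (function name is hypothetical):

```c
#include <stdio.h>

/* Report the bad-BIOS-mask condition at most once; returns nonzero iff
 * the message was emitted on this call. This is the pattern that
 * WARN_ONCE() packages up in the kernel. */
static int warn_once_bios_mask(void)
{
    static int warned;   /* zero-initialized, like the kernel's guard */

    if (warned)
        return 0;
    warned = 1;
    fprintf(stderr,
            "mtrr: your BIOS has set up an incorrect mask, fixing it up.\n");
    return 1;
}
```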
...@@ -405,9 +405,9 @@ static int mtrr_seq_show(struct seq_file *seq, void *offset) ...@@ -405,9 +405,9 @@ static int mtrr_seq_show(struct seq_file *seq, void *offset)
} }
/* RED-PEN: base can be > 32bit */ /* RED-PEN: base can be > 32bit */
len += seq_printf(seq, len += seq_printf(seq,
"reg%02i: base=0x%05lx000 (%4luMB), size=%4lu%cB: %s, count=%d\n", "reg%02i: base=0x%06lx000 (%5luMB), size=%5lu%cB, count=%d: %s\n",
i, base, base >> (20 - PAGE_SHIFT), size, factor, i, base, base >> (20 - PAGE_SHIFT), size, factor,
mtrr_attrib_to_str(type), mtrr_usage_table[i]); mtrr_usage_table[i], mtrr_attrib_to_str(type));
} }
} }
return 0; return 0;
......
This diff is collapsed.

...@@ -295,13 +295,19 @@ static int setup_k7_watchdog(unsigned nmi_hz) ...@@ -295,13 +295,19 @@ static int setup_k7_watchdog(unsigned nmi_hz)
/* setup the timer */ /* setup the timer */
wrmsr(evntsel_msr, evntsel, 0); wrmsr(evntsel_msr, evntsel, 0);
write_watchdog_counter(perfctr_msr, "K7_PERFCTR0",nmi_hz); write_watchdog_counter(perfctr_msr, "K7_PERFCTR0",nmi_hz);
apic_write(APIC_LVTPC, APIC_DM_NMI);
evntsel |= K7_EVNTSEL_ENABLE;
wrmsr(evntsel_msr, evntsel, 0);
/* initialize the wd struct before enabling */
wd->perfctr_msr = perfctr_msr; wd->perfctr_msr = perfctr_msr;
wd->evntsel_msr = evntsel_msr; wd->evntsel_msr = evntsel_msr;
wd->cccr_msr = 0; /* unused */ wd->cccr_msr = 0; /* unused */
/* ok, everything is initialized, announce that we're set */
cpu_nmi_set_wd_enabled();
apic_write(APIC_LVTPC, APIC_DM_NMI);
evntsel |= K7_EVNTSEL_ENABLE;
wrmsr(evntsel_msr, evntsel, 0);
return 1; return 1;
} }
...@@ -379,13 +385,19 @@ static int setup_p6_watchdog(unsigned nmi_hz) ...@@ -379,13 +385,19 @@ static int setup_p6_watchdog(unsigned nmi_hz)
wrmsr(evntsel_msr, evntsel, 0); wrmsr(evntsel_msr, evntsel, 0);
nmi_hz = adjust_for_32bit_ctr(nmi_hz); nmi_hz = adjust_for_32bit_ctr(nmi_hz);
write_watchdog_counter32(perfctr_msr, "P6_PERFCTR0",nmi_hz); write_watchdog_counter32(perfctr_msr, "P6_PERFCTR0",nmi_hz);
apic_write(APIC_LVTPC, APIC_DM_NMI);
evntsel |= P6_EVNTSEL0_ENABLE;
wrmsr(evntsel_msr, evntsel, 0);
/* initialize the wd struct before enabling */
wd->perfctr_msr = perfctr_msr; wd->perfctr_msr = perfctr_msr;
wd->evntsel_msr = evntsel_msr; wd->evntsel_msr = evntsel_msr;
wd->cccr_msr = 0; /* unused */ wd->cccr_msr = 0; /* unused */
/* ok, everything is initialized, announce that we're set */
cpu_nmi_set_wd_enabled();
apic_write(APIC_LVTPC, APIC_DM_NMI);
evntsel |= P6_EVNTSEL0_ENABLE;
wrmsr(evntsel_msr, evntsel, 0);
return 1; return 1;
} }
...@@ -432,6 +444,27 @@ static const struct wd_ops p6_wd_ops = { ...@@ -432,6 +444,27 @@ static const struct wd_ops p6_wd_ops = {
#define P4_CCCR_ENABLE (1 << 12) #define P4_CCCR_ENABLE (1 << 12)
#define P4_CCCR_OVF (1 << 31) #define P4_CCCR_OVF (1 << 31)
#define P4_CONTROLS 18
static unsigned int p4_controls[18] = {
MSR_P4_BPU_CCCR0,
MSR_P4_BPU_CCCR1,
MSR_P4_BPU_CCCR2,
MSR_P4_BPU_CCCR3,
MSR_P4_MS_CCCR0,
MSR_P4_MS_CCCR1,
MSR_P4_MS_CCCR2,
MSR_P4_MS_CCCR3,
MSR_P4_FLAME_CCCR0,
MSR_P4_FLAME_CCCR1,
MSR_P4_FLAME_CCCR2,
MSR_P4_FLAME_CCCR3,
MSR_P4_IQ_CCCR0,
MSR_P4_IQ_CCCR1,
MSR_P4_IQ_CCCR2,
MSR_P4_IQ_CCCR3,
MSR_P4_IQ_CCCR4,
MSR_P4_IQ_CCCR5,
};
/* /*
* Set up IQ_COUNTER0 to behave like a clock, by having IQ_CCCR0 filter * Set up IQ_COUNTER0 to behave like a clock, by having IQ_CCCR0 filter
* CRU_ESCR0 (with any non-null event selector) through a complemented * CRU_ESCR0 (with any non-null event selector) through a complemented
...@@ -473,6 +506,26 @@ static int setup_p4_watchdog(unsigned nmi_hz) ...@@ -473,6 +506,26 @@ static int setup_p4_watchdog(unsigned nmi_hz)
evntsel_msr = MSR_P4_CRU_ESCR0; evntsel_msr = MSR_P4_CRU_ESCR0;
cccr_msr = MSR_P4_IQ_CCCR0; cccr_msr = MSR_P4_IQ_CCCR0;
cccr_val = P4_CCCR_OVF_PMI0 | P4_CCCR_ESCR_SELECT(4); cccr_val = P4_CCCR_OVF_PMI0 | P4_CCCR_ESCR_SELECT(4);
/*
* If we're on the kdump kernel or other situation, we may
* still have other performance counter registers set to
* interrupt and they'll keep interrupting forever because
* of the P4_CCCR_OVF quirk. So we need to ACK all the
* pending interrupts and disable all the registers here,
* before reenabling the NMI delivery. Refer to p4_rearm()
* about the P4_CCCR_OVF quirk.
*/
if (reset_devices) {
unsigned int low, high;
int i;
for (i = 0; i < P4_CONTROLS; i++) {
rdmsr(p4_controls[i], low, high);
low &= ~(P4_CCCR_ENABLE | P4_CCCR_OVF);
wrmsr(p4_controls[i], low, high);
}
}
} else { } else {
/* logical cpu 1 */ /* logical cpu 1 */
perfctr_msr = MSR_P4_IQ_PERFCTR1; perfctr_msr = MSR_P4_IQ_PERFCTR1;
...@@ -499,12 +552,17 @@ static int setup_p4_watchdog(unsigned nmi_hz) ...@@ -499,12 +552,17 @@ static int setup_p4_watchdog(unsigned nmi_hz)
wrmsr(evntsel_msr, evntsel, 0); wrmsr(evntsel_msr, evntsel, 0);
wrmsr(cccr_msr, cccr_val, 0); wrmsr(cccr_msr, cccr_val, 0);
write_watchdog_counter(perfctr_msr, "P4_IQ_COUNTER0", nmi_hz); write_watchdog_counter(perfctr_msr, "P4_IQ_COUNTER0", nmi_hz);
apic_write(APIC_LVTPC, APIC_DM_NMI);
cccr_val |= P4_CCCR_ENABLE;
wrmsr(cccr_msr, cccr_val, 0);
wd->perfctr_msr = perfctr_msr; wd->perfctr_msr = perfctr_msr;
wd->evntsel_msr = evntsel_msr; wd->evntsel_msr = evntsel_msr;
wd->cccr_msr = cccr_msr; wd->cccr_msr = cccr_msr;
/* ok, everything is initialized, announce that we're set */
cpu_nmi_set_wd_enabled();
apic_write(APIC_LVTPC, APIC_DM_NMI);
cccr_val |= P4_CCCR_ENABLE;
wrmsr(cccr_msr, cccr_val, 0);
return 1; return 1;
} }
...@@ -620,13 +678,17 @@ static int setup_intel_arch_watchdog(unsigned nmi_hz) ...@@ -620,13 +678,17 @@ static int setup_intel_arch_watchdog(unsigned nmi_hz)
wrmsr(evntsel_msr, evntsel, 0); wrmsr(evntsel_msr, evntsel, 0);
nmi_hz = adjust_for_32bit_ctr(nmi_hz); nmi_hz = adjust_for_32bit_ctr(nmi_hz);
write_watchdog_counter32(perfctr_msr, "INTEL_ARCH_PERFCTR0", nmi_hz); write_watchdog_counter32(perfctr_msr, "INTEL_ARCH_PERFCTR0", nmi_hz);
apic_write(APIC_LVTPC, APIC_DM_NMI);
evntsel |= ARCH_PERFMON_EVENTSEL0_ENABLE;
wrmsr(evntsel_msr, evntsel, 0);
wd->perfctr_msr = perfctr_msr; wd->perfctr_msr = perfctr_msr;
wd->evntsel_msr = evntsel_msr; wd->evntsel_msr = evntsel_msr;
wd->cccr_msr = 0; /* unused */ wd->cccr_msr = 0; /* unused */
/* ok, everything is initialized, announce that we're set */
cpu_nmi_set_wd_enabled();
apic_write(APIC_LVTPC, APIC_DM_NMI);
evntsel |= ARCH_PERFMON_EVENTSEL0_ENABLE;
wrmsr(evntsel_msr, evntsel, 0);
intel_arch_wd_ops.checkbit = 1ULL << (eax.split.bit_width - 1); intel_arch_wd_ops.checkbit = 1ULL << (eax.split.bit_width - 1);
return 1; return 1;
} }
......
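The common thread of the watchdog hunks above is ordering: every setup_*_watchdog() now fills in the wd struct first, announces readiness via cpu_nmi_set_wd_enabled(), and only then unmasks APIC_LVTPC and sets the enable bit, so an NMI that fires immediately never sees a half-initialized watchdog. A simplified single-threaded sketch of that invariant (APIC and MSR writes elided):

```c
struct wd_state {
    unsigned long perfctr_msr;
    unsigned long evntsel_msr;
    int enabled;   /* the handler consults state only once this is set */
};

/* Ordering from the patch: populate all state, then announce.
 * In the kernel the flag is the per-cpu wd_enabled variable and the
 * final step also unmasks APIC_LVTPC. */
static void wd_setup(struct wd_state *wd, unsigned long perfctr,
                     unsigned long evntsel)
{
    wd->perfctr_msr = perfctr;
    wd->evntsel_msr = evntsel;
    wd->enabled = 1;           /* last: everything above is now valid */
}

/* The handler's invariant: whenever enabled is observed, the MSR
 * fields must already be valid (nonzero here). */
static int wd_handler_invariant(const struct wd_state *wd)
{
    return !wd->enabled || (wd->perfctr_msr && wd->evntsel_msr);
}
```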
...@@ -36,7 +36,6 @@ ...@@ -36,7 +36,6 @@
#include <linux/smp_lock.h> #include <linux/smp_lock.h>
#include <linux/major.h> #include <linux/major.h>
#include <linux/fs.h> #include <linux/fs.h>
#include <linux/smp_lock.h>
#include <linux/device.h> #include <linux/device.h>
#include <linux/cpu.h> #include <linux/cpu.h>
#include <linux/notifier.h> #include <linux/notifier.h>
......
...@@ -7,9 +7,8 @@ ...@@ -7,9 +7,8 @@
#include <linux/errno.h> #include <linux/errno.h>
#include <linux/crash_dump.h> #include <linux/crash_dump.h>
#include <linux/uaccess.h>
#include <asm/uaccess.h> #include <linux/io.h>
#include <asm/io.h>
/** /**
* copy_oldmem_page - copy one page from "oldmem" * copy_oldmem_page - copy one page from "oldmem"
...@@ -25,7 +24,7 @@ ...@@ -25,7 +24,7 @@
* in the current kernel. We stitch up a pte, similar to kmap_atomic. * in the current kernel. We stitch up a pte, similar to kmap_atomic.
*/ */
ssize_t copy_oldmem_page(unsigned long pfn, char *buf, ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
size_t csize, unsigned long offset, int userbuf) size_t csize, unsigned long offset, int userbuf)
{ {
void *vaddr; void *vaddr;
...@@ -33,14 +32,16 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf, ...@@ -33,14 +32,16 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
return 0; return 0;
vaddr = ioremap(pfn << PAGE_SHIFT, PAGE_SIZE); vaddr = ioremap(pfn << PAGE_SHIFT, PAGE_SIZE);
if (!vaddr)
return -ENOMEM;
if (userbuf) { if (userbuf) {
if (copy_to_user(buf, (vaddr + offset), csize)) { if (copy_to_user(buf, vaddr + offset, csize)) {
iounmap(vaddr); iounmap(vaddr);
return -EFAULT; return -EFAULT;
} }
} else } else
memcpy(buf, (vaddr + offset), csize); memcpy(buf, vaddr + offset, csize);
iounmap(vaddr); iounmap(vaddr);
return csize; return csize;
......
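The copy_oldmem_page() hunk adds the previously missing NULL check on ioremap(), returning -ENOMEM instead of dereferencing a failed mapping. A hedged userspace analog of the fixed map-check-copy flow (map_page() is a hypothetical stand-in for ioremap(); unmap is elided since the stand-in owns no resource):

```c
#include <errno.h>
#include <string.h>

static char page_store[4096];

/* Stand-in for ioremap(): may fail and return NULL. */
static char *map_page(int fail) { return fail ? 0 : page_store; }

/* Sketch of the fixed flow: validate the mapping before touching it,
 * propagate -ENOMEM on failure, return bytes copied on success. */
static long copy_page_checked(char *buf, size_t csize, size_t offset,
                              int fail_map)
{
    char *vaddr = map_page(fail_map);

    if (!vaddr)
        return -ENOMEM;          /* the check this patch adds */
    memcpy(buf, vaddr + offset, csize);
    return (long)csize;
}
```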
This diff is collapsed.

...@@ -414,9 +414,11 @@ void __init efi_init(void) ...@@ -414,9 +414,11 @@ void __init efi_init(void)
if (memmap.map == NULL) if (memmap.map == NULL)
printk(KERN_ERR "Could not map the EFI memory map!\n"); printk(KERN_ERR "Could not map the EFI memory map!\n");
memmap.map_end = memmap.map + (memmap.nr_map * memmap.desc_size); memmap.map_end = memmap.map + (memmap.nr_map * memmap.desc_size);
if (memmap.desc_size != sizeof(efi_memory_desc_t)) if (memmap.desc_size != sizeof(efi_memory_desc_t))
printk(KERN_WARNING "Kernel-defined memdesc" printk(KERN_WARNING
"doesn't match the one from EFI!\n"); "Kernel-defined memdesc doesn't match the one from EFI!\n");
if (add_efi_memmap) if (add_efi_memmap)
do_add_efi_memmap(); do_add_efi_memmap();
......
...@@ -275,9 +275,9 @@ ENTRY(native_usergs_sysret64) ...@@ -275,9 +275,9 @@ ENTRY(native_usergs_sysret64)
ENTRY(ret_from_fork) ENTRY(ret_from_fork)
CFI_DEFAULT_STACK CFI_DEFAULT_STACK
push kernel_eflags(%rip) push kernel_eflags(%rip)
CFI_ADJUST_CFA_OFFSET 4 CFI_ADJUST_CFA_OFFSET 8
popf # reset kernel eflags popf # reset kernel eflags
CFI_ADJUST_CFA_OFFSET -4 CFI_ADJUST_CFA_OFFSET -8
call schedule_tail call schedule_tail
GET_THREAD_INFO(%rcx) GET_THREAD_INFO(%rcx)
testl $(_TIF_SYSCALL_TRACE|_TIF_SYSCALL_AUDIT),TI_flags(%rcx) testl $(_TIF_SYSCALL_TRACE|_TIF_SYSCALL_AUDIT),TI_flags(%rcx)
......
...@@ -108,12 +108,11 @@ void __init x86_64_start_kernel(char * real_mode_data) ...@@ -108,12 +108,11 @@ void __init x86_64_start_kernel(char * real_mode_data)
} }
load_idt((const struct desc_ptr *)&idt_descr); load_idt((const struct desc_ptr *)&idt_descr);
early_printk("Kernel alive\n"); if (console_loglevel == 10)
early_printk("Kernel alive\n");
x86_64_init_pda(); x86_64_init_pda();
early_printk("Kernel really alive\n");
x86_64_start_reservations(real_mode_data); x86_64_start_reservations(real_mode_data);
} }
......
...@@ -14,6 +14,7 @@ ...@@ -14,6 +14,7 @@
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/thread_info.h> #include <linux/thread_info.h>
#include <linux/syscalls.h> #include <linux/syscalls.h>
#include <asm/syscalls.h>
/* Set EXTENT bits starting at BASE in BITMAP to value TURN_ON. */ /* Set EXTENT bits starting at BASE in BITMAP to value TURN_ON. */
static void set_bitmap(unsigned long *bitmap, unsigned int base, static void set_bitmap(unsigned long *bitmap, unsigned int base,
......
...@@ -20,6 +20,8 @@ ...@@ -20,6 +20,8 @@
#ifdef CONFIG_X86_32 #ifdef CONFIG_X86_32
#include <mach_apic.h> #include <mach_apic.h>
#include <mach_ipi.h>
/* /*
* the following functions deal with sending IPIs between CPUs. * the following functions deal with sending IPIs between CPUs.
* *
...@@ -147,7 +149,6 @@ void send_IPI_mask_sequence(cpumask_t mask, int vector) ...@@ -147,7 +149,6 @@ void send_IPI_mask_sequence(cpumask_t mask, int vector)
} }
/* must come after the send_IPI functions above for inlining */ /* must come after the send_IPI functions above for inlining */
#include <mach_ipi.h>
static int convert_apicid_to_cpu(int apic_id) static int convert_apicid_to_cpu(int apic_id)
{ {
int i; int i;
......
...@@ -325,7 +325,7 @@ int show_interrupts(struct seq_file *p, void *v) ...@@ -325,7 +325,7 @@ int show_interrupts(struct seq_file *p, void *v)
for_each_online_cpu(j) for_each_online_cpu(j)
seq_printf(p, "%10u ", seq_printf(p, "%10u ",
per_cpu(irq_stat,j).irq_call_count); per_cpu(irq_stat,j).irq_call_count);
seq_printf(p, " function call interrupts\n"); seq_printf(p, " Function call interrupts\n");
seq_printf(p, "TLB: "); seq_printf(p, "TLB: ");
for_each_online_cpu(j) for_each_online_cpu(j)
seq_printf(p, "%10u ", seq_printf(p, "%10u ",
......
...@@ -129,7 +129,7 @@ int show_interrupts(struct seq_file *p, void *v) ...@@ -129,7 +129,7 @@ int show_interrupts(struct seq_file *p, void *v)
seq_printf(p, "CAL: "); seq_printf(p, "CAL: ");
for_each_online_cpu(j) for_each_online_cpu(j)
seq_printf(p, "%10u ", cpu_pda(j)->irq_call_count); seq_printf(p, "%10u ", cpu_pda(j)->irq_call_count);
seq_printf(p, " function call interrupts\n"); seq_printf(p, " Function call interrupts\n");
seq_printf(p, "TLB: "); seq_printf(p, "TLB: ");
for_each_online_cpu(j) for_each_online_cpu(j)
seq_printf(p, "%10u ", cpu_pda(j)->irq_tlb_count); seq_printf(p, "%10u ", cpu_pda(j)->irq_tlb_count);
......
...@@ -178,7 +178,7 @@ static void kvm_flush_tlb(void) ...@@ -178,7 +178,7 @@ static void kvm_flush_tlb(void)
kvm_deferred_mmu_op(&ftlb, sizeof ftlb); kvm_deferred_mmu_op(&ftlb, sizeof ftlb);
} }
static void kvm_release_pt(u32 pfn) static void kvm_release_pt(unsigned long pfn)
{ {
struct kvm_mmu_op_release_pt rpt = { struct kvm_mmu_op_release_pt rpt = {
.header.op = KVM_MMU_OP_RELEASE_PT, .header.op = KVM_MMU_OP_RELEASE_PT,
......
...@@ -18,6 +18,7 @@ ...@@ -18,6 +18,7 @@
#include <asm/ldt.h> #include <asm/ldt.h>
#include <asm/desc.h> #include <asm/desc.h>
#include <asm/mmu_context.h> #include <asm/mmu_context.h>
#include <asm/syscalls.h>
#ifdef CONFIG_SMP #ifdef CONFIG_SMP
static void flush_ldt(void *current_mm) static void flush_ldt(void *current_mm)
......
...@@ -299,6 +299,15 @@ void acpi_nmi_disable(void) ...@@ -299,6 +299,15 @@ void acpi_nmi_disable(void)
on_each_cpu(__acpi_nmi_disable, NULL, 1); on_each_cpu(__acpi_nmi_disable, NULL, 1);
} }
/*
* This function is called as soon the LAPIC NMI watchdog driver has everything
* in place and it's ready to check if the NMIs belong to the NMI watchdog
*/
void cpu_nmi_set_wd_enabled(void)
{
__get_cpu_var(wd_enabled) = 1;
}
void setup_apic_nmi_watchdog(void *unused) void setup_apic_nmi_watchdog(void *unused)
{ {
if (__get_cpu_var(wd_enabled)) if (__get_cpu_var(wd_enabled))
...@@ -311,8 +320,6 @@ void setup_apic_nmi_watchdog(void *unused) ...@@ -311,8 +320,6 @@ void setup_apic_nmi_watchdog(void *unused)
switch (nmi_watchdog) { switch (nmi_watchdog) {
case NMI_LOCAL_APIC: case NMI_LOCAL_APIC:
/* enable it before to avoid race with handler */
__get_cpu_var(wd_enabled) = 1;
if (lapic_watchdog_init(nmi_hz) < 0) { if (lapic_watchdog_init(nmi_hz) < 0) {
__get_cpu_var(wd_enabled) = 0; __get_cpu_var(wd_enabled) = 0;
return; return;
......
...@@ -190,12 +190,12 @@ EXPORT_SYMBOL_GPL(olpc_ec_cmd); ...@@ -190,12 +190,12 @@ EXPORT_SYMBOL_GPL(olpc_ec_cmd);
static void __init platform_detect(void) static void __init platform_detect(void)
{ {
size_t propsize; size_t propsize;
u32 rev; __be32 rev;
if (ofw("getprop", 4, 1, NULL, "board-revision-int", &rev, 4, if (ofw("getprop", 4, 1, NULL, "board-revision-int", &rev, 4,
&propsize) || propsize != 4) { &propsize) || propsize != 4) {
printk(KERN_ERR "ofw: getprop call failed!\n"); printk(KERN_ERR "ofw: getprop call failed!\n");
rev = 0; rev = cpu_to_be32(0);
} }
olpc_platform_info.boardrev = be32_to_cpu(rev); olpc_platform_info.boardrev = be32_to_cpu(rev);
} }
...@@ -203,7 +203,7 @@ static void __init platform_detect(void) ...@@ -203,7 +203,7 @@ static void __init platform_detect(void)
static void __init platform_detect(void) static void __init platform_detect(void)
{ {
/* stopgap until OFW support is added to the kernel */ /* stopgap until OFW support is added to the kernel */
olpc_platform_info.boardrev = be32_to_cpu(0xc2); olpc_platform_info.boardrev = 0xc2;
} }
#endif #endif
......
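The OLPC hunk fixes an endian bug: the firmware hands back board-revision-int big-endian, so the buffer must be typed __be32 and converted exactly once with be32_to_cpu(); the error path likewise stores cpu_to_be32(0) so the single conversion stays correct. A sketch of the fixed flow using POSIX htonl/ntohl as stand-ins for the kernel helpers:

```c
#include <stdint.h>
#include <arpa/inet.h>  /* htonl/ntohl stand in for cpu_to_be32/be32_to_cpu */

/* Sketch of the fixed platform_detect(): `be_rev` arrives big-endian
 * from the firmware; it is converted to CPU order exactly once. */
static uint32_t parse_board_rev(uint32_t be_rev, int getprop_failed)
{
    if (getprop_failed)
        be_rev = htonl(0);  /* mirrors rev = cpu_to_be32(0) in the patch */
    return ntohl(be_rev);   /* the single be32_to_cpu() */
}
```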
...@@ -330,6 +330,7 @@ struct pv_cpu_ops pv_cpu_ops = { ...@@ -330,6 +330,7 @@ struct pv_cpu_ops pv_cpu_ops = {
#endif #endif
.wbinvd = native_wbinvd, .wbinvd = native_wbinvd,
.read_msr = native_read_msr_safe, .read_msr = native_read_msr_safe,
.read_msr_amd = native_read_msr_amd_safe,
.write_msr = native_write_msr_safe, .write_msr = native_write_msr_safe,
.read_tsc = native_read_tsc, .read_tsc = native_read_tsc,
.read_pmc = native_read_pmc, .read_pmc = native_read_pmc,
......
...@@ -23,7 +23,7 @@ unsigned native_patch(u8 type, u16 clobbers, void *ibuf, ...@@ -23,7 +23,7 @@ unsigned native_patch(u8 type, u16 clobbers, void *ibuf,
start = start_##ops##_##x; \ start = start_##ops##_##x; \
end = end_##ops##_##x; \ end = end_##ops##_##x; \
goto patch_site goto patch_site
switch(type) { switch (type) {
PATCH_SITE(pv_irq_ops, irq_disable); PATCH_SITE(pv_irq_ops, irq_disable);
PATCH_SITE(pv_irq_ops, irq_enable); PATCH_SITE(pv_irq_ops, irq_enable);
PATCH_SITE(pv_irq_ops, restore_fl); PATCH_SITE(pv_irq_ops, restore_fl);
......
...@@ -82,7 +82,7 @@ void __init dma32_reserve_bootmem(void) ...@@ -82,7 +82,7 @@ void __init dma32_reserve_bootmem(void)
* using 512M as goal * using 512M as goal
*/ */
align = 64ULL<<20; align = 64ULL<<20;
size = round_up(dma32_bootmem_size, align); size = roundup(dma32_bootmem_size, align);
dma32_bootmem_ptr = __alloc_bootmem_nopanic(size, align, dma32_bootmem_ptr = __alloc_bootmem_nopanic(size, align,
512ULL<<20); 512ULL<<20);
if (dma32_bootmem_ptr) if (dma32_bootmem_ptr)
......
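The dma32_reserve_bootmem() hunk swaps round_up() for roundup(): round_up() masks and therefore only works for power-of-two alignments, while roundup() divides and handles any positive alignment. With align fixed at 64 MB the result is identical here; roundup() is simply the general-purpose macro. The semantics, restated outside the kernel headers:

```c
/* Kernel-style definitions (same semantics as linux/kernel.h, restated
 * here for illustration). */
#define round_up(x, y)  (((x) + (y) - 1) & ~((y) - 1))  /* y: power of two */
#define roundup(x, y)   ((((x) + (y) - 1) / (y)) * (y)) /* any positive y */
```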
...@@ -82,7 +82,8 @@ AGPEXTERN __u32 *agp_gatt_table; ...@@ -82,7 +82,8 @@ AGPEXTERN __u32 *agp_gatt_table;
static unsigned long next_bit; /* protected by iommu_bitmap_lock */ static unsigned long next_bit; /* protected by iommu_bitmap_lock */
static int need_flush; /* global flush state. set for each gart wrap */ static int need_flush; /* global flush state. set for each gart wrap */
static unsigned long alloc_iommu(struct device *dev, int size) static unsigned long alloc_iommu(struct device *dev, int size,
unsigned long align_mask)
{ {
unsigned long offset, flags; unsigned long offset, flags;
unsigned long boundary_size; unsigned long boundary_size;
...@@ -90,16 +91,17 @@ static unsigned long alloc_iommu(struct device *dev, int size) ...@@ -90,16 +91,17 @@ static unsigned long alloc_iommu(struct device *dev, int size)
base_index = ALIGN(iommu_bus_base & dma_get_seg_boundary(dev), base_index = ALIGN(iommu_bus_base & dma_get_seg_boundary(dev),
PAGE_SIZE) >> PAGE_SHIFT; PAGE_SIZE) >> PAGE_SHIFT;
boundary_size = ALIGN(dma_get_seg_boundary(dev) + 1, boundary_size = ALIGN((unsigned long long)dma_get_seg_boundary(dev) + 1,
PAGE_SIZE) >> PAGE_SHIFT; PAGE_SIZE) >> PAGE_SHIFT;
spin_lock_irqsave(&iommu_bitmap_lock, flags); spin_lock_irqsave(&iommu_bitmap_lock, flags);
offset = iommu_area_alloc(iommu_gart_bitmap, iommu_pages, next_bit, offset = iommu_area_alloc(iommu_gart_bitmap, iommu_pages, next_bit,
size, base_index, boundary_size, 0); size, base_index, boundary_size, align_mask);
if (offset == -1) { if (offset == -1) {
need_flush = 1; need_flush = 1;
offset = iommu_area_alloc(iommu_gart_bitmap, iommu_pages, 0, offset = iommu_area_alloc(iommu_gart_bitmap, iommu_pages, 0,
size, base_index, boundary_size, 0); size, base_index, boundary_size,
align_mask);
} }
if (offset != -1) { if (offset != -1) {
next_bit = offset+size; next_bit = offset+size;
...@@ -236,10 +238,10 @@ nonforced_iommu(struct device *dev, unsigned long addr, size_t size) ...@@ -236,10 +238,10 @@ nonforced_iommu(struct device *dev, unsigned long addr, size_t size)
* Caller needs to check if the iommu is needed and flush. * Caller needs to check if the iommu is needed and flush.
*/ */
static dma_addr_t dma_map_area(struct device *dev, dma_addr_t phys_mem, static dma_addr_t dma_map_area(struct device *dev, dma_addr_t phys_mem,
size_t size, int dir) size_t size, int dir, unsigned long align_mask)
{ {
unsigned long npages = iommu_num_pages(phys_mem, size); unsigned long npages = iommu_num_pages(phys_mem, size);
unsigned long iommu_page = alloc_iommu(dev, npages); unsigned long iommu_page = alloc_iommu(dev, npages, align_mask);
int i; int i;
if (iommu_page == -1) { if (iommu_page == -1) {
...@@ -262,7 +264,11 @@ static dma_addr_t dma_map_area(struct device *dev, dma_addr_t phys_mem, ...@@ -262,7 +264,11 @@ static dma_addr_t dma_map_area(struct device *dev, dma_addr_t phys_mem,
static dma_addr_t static dma_addr_t
gart_map_simple(struct device *dev, phys_addr_t paddr, size_t size, int dir) gart_map_simple(struct device *dev, phys_addr_t paddr, size_t size, int dir)
{ {
dma_addr_t map = dma_map_area(dev, paddr, size, dir); dma_addr_t map;
unsigned long align_mask;
align_mask = (1UL << get_order(size)) - 1;
map = dma_map_area(dev, paddr, size, dir, align_mask);
flush_gart(); flush_gart();
...@@ -281,7 +287,8 @@ gart_map_single(struct device *dev, phys_addr_t paddr, size_t size, int dir) ...@@ -281,7 +287,8 @@ gart_map_single(struct device *dev, phys_addr_t paddr, size_t size, int dir)
if (!need_iommu(dev, paddr, size)) if (!need_iommu(dev, paddr, size))
return paddr; return paddr;
bus = gart_map_simple(dev, paddr, size, dir); bus = dma_map_area(dev, paddr, size, dir, 0);
flush_gart();
return bus; return bus;
} }
...@@ -340,7 +347,7 @@ static int dma_map_sg_nonforce(struct device *dev, struct scatterlist *sg, ...@@ -340,7 +347,7 @@ static int dma_map_sg_nonforce(struct device *dev, struct scatterlist *sg,
unsigned long addr = sg_phys(s); unsigned long addr = sg_phys(s);
if (nonforced_iommu(dev, addr, s->length)) { if (nonforced_iommu(dev, addr, s->length)) {
addr = dma_map_area(dev, addr, s->length, dir); addr = dma_map_area(dev, addr, s->length, dir, 0);
if (addr == bad_dma_address) { if (addr == bad_dma_address) {
if (i > 0) if (i > 0)
gart_unmap_sg(dev, sg, i, dir); gart_unmap_sg(dev, sg, i, dir);
...@@ -362,7 +369,7 @@ static int __dma_map_cont(struct device *dev, struct scatterlist *start, ...@@ -362,7 +369,7 @@ static int __dma_map_cont(struct device *dev, struct scatterlist *start,
int nelems, struct scatterlist *sout, int nelems, struct scatterlist *sout,
unsigned long pages) unsigned long pages)
{ {
unsigned long iommu_start = alloc_iommu(dev, pages); unsigned long iommu_start = alloc_iommu(dev, pages, 0);
unsigned long iommu_page = iommu_start; unsigned long iommu_page = iommu_start;
struct scatterlist *s; struct scatterlist *s;
int i; int i;
......
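The GART hunk teaches alloc_iommu() an align_mask, and gart_map_simple() now requests a region aligned to the mapping's own size: get_order(size) yields the page order, and `(1UL << order) - 1` is the page-granular alignment mask. A userspace restatement of that computation (get_order() rewritten here; the kernel's version lives in asm-generic headers):

```c
#include <stddef.h>

#define PAGE_SHIFT 12

/* Restatement of get_order(): smallest n with (PAGE_SIZE << n) >= size.
 * size must be nonzero. */
static unsigned int get_order(size_t size)
{
    unsigned int order = 0;

    size = (size - 1) >> PAGE_SHIFT;
    while (size) {
        order++;
        size >>= 1;
    }
    return order;
}

/* The align_mask gart_map_simple() now passes to dma_map_area(). */
static unsigned long size_align_mask(size_t size)
{
    return (1UL << get_order(size)) - 1;
}
```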
#include <linux/platform_device.h> #include <linux/platform_device.h>
#include <linux/errno.h> #include <linux/err.h>
#include <linux/init.h> #include <linux/init.h>
static __init int add_pcspkr(void) static __init int add_pcspkr(void)
{ {
struct platform_device *pd; struct platform_device *pd;
int ret;
pd = platform_device_alloc("pcspkr", -1); pd = platform_device_register_simple("pcspkr", -1, NULL, 0);
if (!pd)
return -ENOMEM;
ret = platform_device_add(pd); return IS_ERR(pd) ? PTR_ERR(pd) : 0;
if (ret)
platform_device_put(pd);
return ret;
} }
device_initcall(add_pcspkr); device_initcall(add_pcspkr);
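The simplified add_pcspkr() works because platform_device_register_simple() reports failure through an ERR_PTR-encoded pointer rather than NULL, decoded with IS_ERR()/PTR_ERR(). That convention packs small negative errno values into the last page of the address space. A userspace restatement (same semantics as include/linux/err.h, simplified):

```c
/* ERR_PTR convention, restated: errno values -1..-MAX_ERRNO are encoded
 * as pointers in the top page of the address space. */
#define MAX_ERRNO 4095

static void *ERR_PTR(long error) { return (void *)error; }
static long PTR_ERR(const void *ptr) { return (long)ptr; }
static int IS_ERR(const void *ptr)
{
    return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

/* The return expression from the simplified add_pcspkr() above. */
static int register_result(void *pd)
{
    return IS_ERR(pd) ? (int)PTR_ERR(pd) : 0;
}
```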
...@@ -185,7 +185,8 @@ static void mwait_idle(void) ...@@ -185,7 +185,8 @@ static void mwait_idle(void)
static void poll_idle(void) static void poll_idle(void)
{ {
local_irq_enable(); local_irq_enable();
cpu_relax(); while (!need_resched())
cpu_relax();
} }
/* /*
......
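The poll_idle() fix makes the function actually poll: it spins on cpu_relax() until need_resched() is true, where the old body relaxed exactly once and fell back out of the idle loop. A stubbed single-threaded sketch of the fixed loop (local_irq_enable() and the real per-CPU flag are elided; the stubs are illustrative):

```c
/* Stub: pretend a reschedule becomes pending after `polls_left` checks. */
static int polls_left;
static int need_resched(void) { return polls_left-- <= 0; }

/* cpu_relax() is `rep; nop` on x86; a no-op suffices here. */
static void cpu_relax(void) { }

/* Fixed poll_idle(): keep relaxing until work is pending. Returns the
 * spin count so the behavior is observable. */
static int poll_idle(void)
{
    int spins = 0;

    while (!need_resched()) {  /* the old code polled exactly once */
        cpu_relax();
        spins++;
    }
    return spins;
}
```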
...@@ -37,6 +37,7 @@ ...@@ -37,6 +37,7 @@
#include <linux/tick.h> #include <linux/tick.h>
 #include <linux/percpu.h>
 #include <linux/prctl.h>
+#include <linux/dmi.h>
 #include <asm/uaccess.h>
 #include <asm/pgtable.h>
@@ -56,6 +57,8 @@
 #include <asm/cpu.h>
 #include <asm/kdebug.h>
 #include <asm/idle.h>
+#include <asm/syscalls.h>
+#include <asm/smp.h>
 asmlinkage void ret_from_fork(void) __asm__("ret_from_fork");
@@ -161,6 +164,7 @@ void __show_registers(struct pt_regs *regs, int all)
 	unsigned long d0, d1, d2, d3, d6, d7;
 	unsigned long sp;
 	unsigned short ss, gs;
+	const char *board;
 	if (user_mode_vm(regs)) {
 		sp = regs->sp;
@@ -173,11 +177,15 @@ void __show_registers(struct pt_regs *regs, int all)
 	}
 	printk("\n");
-	printk("Pid: %d, comm: %s %s (%s %.*s)\n",
+	board = dmi_get_system_info(DMI_PRODUCT_NAME);
+	if (!board)
+		board = "";
+	printk("Pid: %d, comm: %s %s (%s %.*s) %s\n",
 		task_pid_nr(current), current->comm,
 		print_tainted(), init_utsname()->release,
 		(int)strcspn(init_utsname()->version, " "),
-		init_utsname()->version);
+		init_utsname()->version, board);
 	printk("EIP: %04x:[<%08lx>] EFLAGS: %08lx CPU: %d\n",
 		(u16)regs->cs, regs->ip, regs->flags,
@@ -277,6 +285,14 @@ void exit_thread(void)
 		tss->x86_tss.io_bitmap_base = INVALID_IO_BITMAP_OFFSET;
 		put_cpu();
 	}
+#ifdef CONFIG_X86_DS
+	/* Free any DS contexts that have not been properly released. */
+	if (unlikely(current->thread.ds_ctx)) {
+		/* we clear debugctl to make sure DS is not used. */
+		update_debugctlmsr(0);
+		ds_free(current->thread.ds_ctx);
+	}
+#endif /* CONFIG_X86_DS */
 }
 void flush_thread(void)
@@ -438,6 +454,35 @@ int set_tsc_mode(unsigned int val)
 	return 0;
 }
+#ifdef CONFIG_X86_DS
+static int update_debugctl(struct thread_struct *prev,
+			struct thread_struct *next, unsigned long debugctl)
+{
+	unsigned long ds_prev = 0;
+	unsigned long ds_next = 0;
+	if (prev->ds_ctx)
+		ds_prev = (unsigned long)prev->ds_ctx->ds;
+	if (next->ds_ctx)
+		ds_next = (unsigned long)next->ds_ctx->ds;
+	if (ds_next != ds_prev) {
+		/* we clear debugctl to make sure DS
+		 * is not in use when we change it */
+		debugctl = 0;
+		update_debugctlmsr(0);
+		wrmsr(MSR_IA32_DS_AREA, ds_next, 0);
+	}
+	return debugctl;
+}
+#else
+static int update_debugctl(struct thread_struct *prev,
+			struct thread_struct *next, unsigned long debugctl)
+{
+	return debugctl;
+}
+#endif /* CONFIG_X86_DS */
 static noinline void
 __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
 		 struct tss_struct *tss)
@@ -448,14 +493,7 @@ __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
 	prev = &prev_p->thread;
 	next = &next_p->thread;
-	debugctl = prev->debugctlmsr;
-	if (next->ds_area_msr != prev->ds_area_msr) {
-		/* we clear debugctl to make sure DS
-		 * is not in use when we change it */
-		debugctl = 0;
-		update_debugctlmsr(0);
-		wrmsr(MSR_IA32_DS_AREA, next->ds_area_msr, 0);
-	}
+	debugctl = update_debugctl(prev, next, prev->debugctlmsr);
 	if (next->debugctlmsr != debugctl)
 		update_debugctlmsr(next->debugctlmsr);
@@ -479,13 +517,13 @@ __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
 		hard_enable_TSC();
 	}
-#ifdef X86_BTS
+#ifdef CONFIG_X86_PTRACE_BTS
 	if (test_tsk_thread_flag(prev_p, TIF_BTS_TRACE_TS))
 		ptrace_bts_take_timestamp(prev_p, BTS_TASK_DEPARTS);
 	if (test_tsk_thread_flag(next_p, TIF_BTS_TRACE_TS))
 		ptrace_bts_take_timestamp(next_p, BTS_TASK_ARRIVES);
-#endif
+#endif /* CONFIG_X86_PTRACE_BTS */
 	if (!test_tsk_thread_flag(next_p, TIF_IO_BITMAP)) {
...
@@ -29,7 +29,11 @@ EXPORT_SYMBOL(pm_power_off);
 static const struct desc_ptr no_idt = {};
 static int reboot_mode;
-enum reboot_type reboot_type = BOOT_KBD;
+/*
+ * Keyboard reset and triple fault may result in INIT, not RESET, which
+ * doesn't work when we're in vmx root mode.  Try ACPI first.
+ */
+enum reboot_type reboot_type = BOOT_ACPI;
 int reboot_force;
 #if defined(CONFIG_X86_32) && defined(CONFIG_SMP)
...
@@ -223,6 +223,9 @@ unsigned long saved_video_mode;
 #define RAMDISK_LOAD_FLAG	0x4000
 static char __initdata command_line[COMMAND_LINE_SIZE];
+#ifdef CONFIG_CMDLINE_BOOL
+static char __initdata builtin_cmdline[COMMAND_LINE_SIZE] = CONFIG_CMDLINE;
+#endif
 #if defined(CONFIG_EDD) || defined(CONFIG_EDD_MODULE)
 struct edd edd;
@@ -665,6 +668,19 @@ void __init setup_arch(char **cmdline_p)
 	bss_resource.start = virt_to_phys(&__bss_start);
 	bss_resource.end = virt_to_phys(&__bss_stop)-1;
+#ifdef CONFIG_CMDLINE_BOOL
+#ifdef CONFIG_CMDLINE_OVERRIDE
+	strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
+#else
+	if (builtin_cmdline[0]) {
+		/* append boot loader cmdline to builtin */
+		strlcat(builtin_cmdline, " ", COMMAND_LINE_SIZE);
+		strlcat(builtin_cmdline, boot_command_line, COMMAND_LINE_SIZE);
+		strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
+	}
+#endif
+#endif
 	strlcpy(command_line, boot_command_line, COMMAND_LINE_SIZE);
 	*cmdline_p = command_line;
...
@@ -162,9 +162,16 @@ void __init setup_per_cpu_areas(void)
 			printk(KERN_INFO
 			       "cpu %d has no node %d or node-local memory\n",
 				cpu, node);
+			if (ptr)
+				printk(KERN_DEBUG "per cpu data for cpu%d at %016lx\n",
+					 cpu, __pa(ptr));
 		}
-		else
+		else {
 			ptr = alloc_bootmem_pages_node(NODE_DATA(node), size);
+			if (ptr)
+				printk(KERN_DEBUG "per cpu data for cpu%d on node%d at %016lx\n",
+					cpu, node, __pa(ptr));
+		}
 #endif
 		per_cpu_offset(cpu) = ptr - __per_cpu_start;
 		memcpy(ptr, __per_cpu_start, __per_cpu_end - __per_cpu_start);
...
@@ -24,4 +24,9 @@ struct rt_sigframe {
 	struct ucontext uc;
 	struct siginfo info;
 };
+int ia32_setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
+		sigset_t *set, struct pt_regs *regs);
+int ia32_setup_frame(int sig, struct k_sigaction *ka,
+		sigset_t *set, struct pt_regs *regs);
 #endif
@@ -17,6 +17,7 @@
 #include <linux/errno.h>
 #include <linux/sched.h>
 #include <linux/wait.h>
+#include <linux/tracehook.h>
 #include <linux/elf.h>
 #include <linux/smp.h>
 #include <linux/mm.h>
@@ -26,6 +27,7 @@
 #include <asm/uaccess.h>
 #include <asm/i387.h>
 #include <asm/vdso.h>
+#include <asm/syscalls.h>
 #include "sigframe.h"
@@ -558,8 +560,6 @@ handle_signal(unsigned long sig, siginfo_t *info, struct k_sigaction *ka,
 	 * handler too.
 	 */
 	regs->flags &= ~X86_EFLAGS_TF;
-	if (test_thread_flag(TIF_SINGLESTEP))
-		ptrace_notify(SIGTRAP);
 	spin_lock_irq(&current->sighand->siglock);
 	sigorsets(&current->blocked, &current->blocked, &ka->sa.sa_mask);
@@ -568,6 +568,9 @@ handle_signal(unsigned long sig, siginfo_t *info, struct k_sigaction *ka,
 	recalc_sigpending();
 	spin_unlock_irq(&current->sighand->siglock);
+	tracehook_signal_handler(sig, info, ka, regs,
+				 test_thread_flag(TIF_SINGLESTEP));
 	return 0;
 }
@@ -661,5 +664,10 @@ do_notify_resume(struct pt_regs *regs, void *unused, __u32 thread_info_flags)
 	if (thread_info_flags & _TIF_SIGPENDING)
 		do_signal(regs);
+	if (thread_info_flags & _TIF_NOTIFY_RESUME) {
+		clear_thread_flag(TIF_NOTIFY_RESUME);
+		tracehook_notify_resume(regs);
+	}
 	clear_thread_flag(TIF_IRET);
 }
@@ -15,17 +15,21 @@
 #include <linux/errno.h>
 #include <linux/wait.h>
 #include <linux/ptrace.h>
+#include <linux/tracehook.h>
 #include <linux/unistd.h>
 #include <linux/stddef.h>
 #include <linux/personality.h>
 #include <linux/compiler.h>
+#include <linux/uaccess.h>
 #include <asm/processor.h>
 #include <asm/ucontext.h>
-#include <asm/uaccess.h>
 #include <asm/i387.h>
 #include <asm/proto.h>
 #include <asm/ia32_unistd.h>
 #include <asm/mce.h>
+#include <asm/syscall.h>
+#include <asm/syscalls.h>
 #include "sigframe.h"
 #define _BLOCKABLE (~(sigmask(SIGKILL) | sigmask(SIGSTOP)))
@@ -41,11 +45,6 @@
 # define FIX_EFLAGS	__FIX_EFLAGS
 #endif
-int ia32_setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
-		sigset_t *set, struct pt_regs * regs);
-int ia32_setup_frame(int sig, struct k_sigaction *ka,
-		sigset_t *set, struct pt_regs * regs);
 asmlinkage long
 sys_sigaltstack(const stack_t __user *uss, stack_t __user *uoss,
 		struct pt_regs *regs)
@@ -128,7 +127,7 @@ restore_sigcontext(struct pt_regs *regs, struct sigcontext __user *sc,
 	/* Always make any pending restarted system calls return -EINTR */
 	current_thread_info()->restart_block.fn = do_no_restart_syscall;
-#define COPY(x)		err |= __get_user(regs->x, &sc->x)
+#define COPY(x)		(err |= __get_user(regs->x, &sc->x))
 	COPY(di); COPY(si); COPY(bp); COPY(sp); COPY(bx);
 	COPY(dx); COPY(cx); COPY(ip);
@@ -158,7 +157,7 @@ restore_sigcontext(struct pt_regs *regs, struct sigcontext __user *sc,
 	}
 	{
-		struct _fpstate __user * buf;
+		struct _fpstate __user *buf;
 		err |= __get_user(buf, &sc->fpstate);
 		if (buf) {
@@ -198,7 +197,7 @@ asmlinkage long sys_rt_sigreturn(struct pt_regs *regs)
 	current->blocked = set;
 	recalc_sigpending();
 	spin_unlock_irq(&current->sighand->siglock);
 	if (restore_sigcontext(regs, &frame->uc.uc_mcontext, &ax))
 		goto badframe;
@@ -208,16 +207,17 @@ asmlinkage long sys_rt_sigreturn(struct pt_regs *regs)
 	return ax;
 badframe:
-	signal_fault(regs,frame,"sigreturn");
+	signal_fault(regs, frame, "sigreturn");
 	return 0;
 }
 /*
  * Set up a signal frame.
  */
 static inline int
-setup_sigcontext(struct sigcontext __user *sc, struct pt_regs *regs, unsigned long mask, struct task_struct *me)
+setup_sigcontext(struct sigcontext __user *sc, struct pt_regs *regs,
+		unsigned long mask, struct task_struct *me)
 {
 	int err = 0;
@@ -273,35 +273,35 @@ get_stack(struct k_sigaction *ka, struct pt_regs *regs, unsigned long size)
 }
 static int setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
-			sigset_t *set, struct pt_regs * regs)
+			sigset_t *set, struct pt_regs *regs)
 {
 	struct rt_sigframe __user *frame;
 	struct _fpstate __user *fp = NULL;
 	int err = 0;
 	struct task_struct *me = current;
 	if (used_math()) {
 		fp = get_stack(ka, regs, sizeof(struct _fpstate));
 		frame = (void __user *)round_down(
 			(unsigned long)fp - sizeof(struct rt_sigframe), 16) - 8;
 		if (!access_ok(VERIFY_WRITE, fp, sizeof(struct _fpstate)))
 			goto give_sigsegv;
 		if (save_i387(fp) < 0)
 			err |= -1;
 	} else
 		frame = get_stack(ka, regs, sizeof(struct rt_sigframe)) - 8;
 	if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
 		goto give_sigsegv;
 	if (ka->sa.sa_flags & SA_SIGINFO) {
 		err |= copy_siginfo_to_user(&frame->info, info);
 		if (err)
 			goto give_sigsegv;
 	}
 	/* Create the ucontext.  */
 	err |= __put_user(0, &frame->uc.uc_flags);
 	err |= __put_user(0, &frame->uc.uc_link);
@@ -311,9 +311,9 @@ static int setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
 	err |= __put_user(me->sas_ss_size, &frame->uc.uc_stack.ss_size);
 	err |= setup_sigcontext(&frame->uc.uc_mcontext, regs, set->sig[0], me);
 	err |= __put_user(fp, &frame->uc.uc_mcontext.fpstate);
 	if (sizeof(*set) == 16) {
 		__put_user(set->sig[0], &frame->uc.uc_sigmask.sig[0]);
 		__put_user(set->sig[1], &frame->uc.uc_sigmask.sig[1]);
 	} else
 		err |= __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set));
@@ -324,7 +324,7 @@ static int setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
 		err |= __put_user(ka->sa.sa_restorer, &frame->pretcode);
 	} else {
 		/* could use a vstub here */
 		goto give_sigsegv;
 	}
 	if (err)
@@ -332,7 +332,7 @@ static int setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
 	/* Set up registers for signal handler */
 	regs->di = sig;
 	/* In case the signal handler was declared without prototypes */
 	regs->ax = 0;
 	/* This also works for non SA_SIGINFO handlers because they expect the
@@ -354,38 +354,9 @@ static int setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
 	return -EFAULT;
 }
-/*
- * Return -1L or the syscall number that @regs is executing.
- */
-static long current_syscall(struct pt_regs *regs)
-{
-	/*
-	 * We always sign-extend a -1 value being set here,
-	 * so this is always either -1L or a syscall number.
-	 */
-	return regs->orig_ax;
-}
-/*
- * Return a value that is -EFOO if the system call in @regs->orig_ax
- * returned an error.  This only works for @regs from @current.
- */
-static long current_syscall_ret(struct pt_regs *regs)
-{
-#ifdef CONFIG_IA32_EMULATION
-	if (test_thread_flag(TIF_IA32))
-		/*
-		 * Sign-extend the value so (int)-EFOO becomes (long)-EFOO
-		 * and will match correctly in comparisons.
-		 */
-		return (int) regs->ax;
-#endif
-	return regs->ax;
-}
 /*
  * OK, we're invoking a handler
  */
 static int
 handle_signal(unsigned long sig, siginfo_t *info, struct k_sigaction *ka,
@@ -394,9 +365,9 @@ handle_signal(unsigned long sig, siginfo_t *info, struct k_sigaction *ka,
 	int ret;
 	/* Are we from a system call? */
-	if (current_syscall(regs) >= 0) {
+	if (syscall_get_nr(current, regs) >= 0) {
 		/* If so, check system call restarting.. */
-		switch (current_syscall_ret(regs)) {
+		switch (syscall_get_error(current, regs)) {
 		case -ERESTART_RESTARTBLOCK:
 		case -ERESTARTNOHAND:
 			regs->ax = -EINTR;
@@ -429,7 +400,7 @@ handle_signal(unsigned long sig, siginfo_t *info, struct k_sigaction *ka,
 			ret = ia32_setup_rt_frame(sig, ka, info, oldset, regs);
 		else
 			ret = ia32_setup_frame(sig, ka, oldset, regs);
 	} else
 #endif
 		ret = setup_rt_frame(sig, ka, info, oldset, regs);
@@ -453,15 +424,16 @@ handle_signal(unsigned long sig, siginfo_t *info, struct k_sigaction *ka,
 	 * handler too.
 	 */
 	regs->flags &= ~X86_EFLAGS_TF;
-	if (test_thread_flag(TIF_SINGLESTEP))
-		ptrace_notify(SIGTRAP);
 	spin_lock_irq(&current->sighand->siglock);
-	sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+	sigorsets(&current->blocked, &current->blocked, &ka->sa.sa_mask);
 	if (!(ka->sa.sa_flags & SA_NODEFER))
-		sigaddset(&current->blocked,sig);
+		sigaddset(&current->blocked, sig);
 	recalc_sigpending();
 	spin_unlock_irq(&current->sighand->siglock);
+	tracehook_signal_handler(sig, info, ka, regs,
+				 test_thread_flag(TIF_SINGLESTEP));
 	}
 	return ret;
@@ -518,9 +490,9 @@ static void do_signal(struct pt_regs *regs)
 	}
 	/* Did we come from a system call? */
-	if (current_syscall(regs) >= 0) {
+	if (syscall_get_nr(current, regs) >= 0) {
 		/* Restart the system call - no handlers present */
-		switch (current_syscall_ret(regs)) {
+		switch (syscall_get_error(current, regs)) {
 		case -ERESTARTNOHAND:
 		case -ERESTARTSYS:
 		case -ERESTARTNOINTR:
@@ -558,17 +530,23 @@ void do_notify_resume(struct pt_regs *regs, void *unused,
 	/* deal with pending signal delivery */
 	if (thread_info_flags & _TIF_SIGPENDING)
 		do_signal(regs);
+	if (thread_info_flags & _TIF_NOTIFY_RESUME) {
+		clear_thread_flag(TIF_NOTIFY_RESUME);
+		tracehook_notify_resume(regs);
+	}
 }
 void signal_fault(struct pt_regs *regs, void __user *frame, char *where)
 {
 	struct task_struct *me = current;
 	if (show_unhandled_signals && printk_ratelimit()) {
 		printk("%s[%d] bad frame in %s frame:%p ip:%lx sp:%lx orax:%lx",
-		me->comm,me->pid,where,frame,regs->ip,regs->sp,regs->orig_ax);
+			me->comm, me->pid, where, frame, regs->ip,
+			regs->sp, regs->orig_ax);
 		print_vma_addr(" in ", regs->ip);
 		printk("\n");
 	}
 	force_sig(SIGSEGV, me);
 }
@@ -88,7 +88,7 @@ static DEFINE_PER_CPU(struct task_struct *, idle_thread_array);
 #define get_idle_for_cpu(x)      (per_cpu(idle_thread_array, x))
 #define set_idle_for_cpu(x, p)   (per_cpu(idle_thread_array, x) = (p))
 #else
-struct task_struct *idle_thread_array[NR_CPUS] __cpuinitdata ;
+static struct task_struct *idle_thread_array[NR_CPUS] __cpuinitdata ;
 #define get_idle_for_cpu(x)      (idle_thread_array[(x)])
 #define set_idle_for_cpu(x, p)   (idle_thread_array[(x)] = (p))
 #endif
@@ -129,7 +129,7 @@ static int boot_cpu_logical_apicid;
 static cpumask_t cpu_sibling_setup_map;
 /* Set if we find a B stepping CPU */
-int __cpuinitdata smp_b_stepping;
+static int __cpuinitdata smp_b_stepping;
 #if defined(CONFIG_NUMA) && defined(CONFIG_X86_32)
@@ -1313,16 +1313,13 @@ __init void prefill_possible_map(void)
 	if (!num_processors)
 		num_processors = 1;
-#ifdef CONFIG_HOTPLUG_CPU
 	if (additional_cpus == -1) {
 		if (disabled_cpus > 0)
 			additional_cpus = disabled_cpus;
 		else
 			additional_cpus = 0;
 	}
-#else
-	additional_cpus = 0;
-#endif
 	possible = num_processors + additional_cpus;
 	if (possible > NR_CPUS)
 		possible = NR_CPUS;
...
@@ -22,6 +22,8 @@
 #include <linux/uaccess.h>
 #include <linux/unistd.h>
+#include <asm/syscalls.h>
 asmlinkage long sys_mmap2(unsigned long addr, unsigned long len,
 			  unsigned long prot, unsigned long flags,
 			  unsigned long fd, unsigned long pgoff)
...
@@ -13,15 +13,17 @@
 #include <linux/utsname.h>
 #include <linux/personality.h>
 #include <linux/random.h>
+#include <linux/uaccess.h>
-#include <asm/uaccess.h>
 #include <asm/ia32.h>
+#include <asm/syscalls.h>
-asmlinkage long sys_mmap(unsigned long addr, unsigned long len, unsigned long prot, unsigned long flags,
-	unsigned long fd, unsigned long off)
+asmlinkage long sys_mmap(unsigned long addr, unsigned long len,
+	unsigned long prot, unsigned long flags,
+	unsigned long fd, unsigned long off)
 {
 	long error;
-	struct file * file;
+	struct file *file;
 	error = -EINVAL;
 	if (off & ~PAGE_MASK)
@@ -56,9 +58,9 @@ static void find_start_end(unsigned long flags, unsigned long *begin,
 		   unmapped base down for this case. This can give
 		   conflicts with the heap, but we assume that glibc
 		   malloc knows how to fall back to mmap. Give it 1GB
 		   of playground for now. -AK */
 		*begin = 0x40000000;
 		*end = 0x80000000;
 		if (current->flags & PF_RANDOMIZE) {
 			new_begin = randomize_range(*begin, *begin + 0x02000000, 0);
 			if (new_begin)
@@ -66,9 +68,9 @@ static void find_start_end(unsigned long flags, unsigned long *begin,
 		}
 	} else {
 		*begin = TASK_UNMAPPED_BASE;
 		*end = TASK_SIZE;
 	}
 }
 unsigned long
 arch_get_unmapped_area(struct file *filp, unsigned long addr,
@@ -78,11 +80,11 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 	struct vm_area_struct *vma;
 	unsigned long start_addr;
 	unsigned long begin, end;
 	if (flags & MAP_FIXED)
 		return addr;
 	find_start_end(flags, &begin, &end);
 	if (len > end)
 		return -ENOMEM;
@@ -96,12 +98,12 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 	}
 	if (((flags & MAP_32BIT) || test_thread_flag(TIF_IA32))
 	    && len <= mm->cached_hole_size) {
 		mm->cached_hole_size = 0;
 		mm->free_area_cache = begin;
 	}
 	addr = mm->free_area_cache;
 	if (addr < begin)
 		addr = begin;
 	start_addr = addr;
 full_search:
@@ -127,7 +129,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 			return addr;
 		}
 		if (addr + mm->cached_hole_size < vma->vm_start)
 			mm->cached_hole_size = vma->vm_start - addr;
 		addr = vma->vm_end;
 	}
@@ -177,7 +179,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 		vma = find_vma(mm, addr-len);
 		if (!vma || addr <= vma->vm_start)
 			/* remember the address as a hint for next time */
-			return (mm->free_area_cache = addr-len);
+			return mm->free_area_cache = addr-len;
 	}
 	if (mm->mmap_base < len)
@@ -194,7 +196,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 		vma = find_vma(mm, addr);
 		if (!vma || addr+len <= vma->vm_start)
 			/* remember the address as a hint for next time */
-			return (mm->free_area_cache = addr);
+			return mm->free_area_cache = addr;
 		/* remember the largest hole we saw so far */
 		if (addr + mm->cached_hole_size < vma->vm_start)
@@ -224,13 +226,13 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 }
-asmlinkage long sys_uname(struct new_utsname __user * name)
+asmlinkage long sys_uname(struct new_utsname __user *name)
 {
 	int err;
 	down_read(&uts_sem);
-	err = copy_to_user(name, utsname(), sizeof (*name));
+	err = copy_to_user(name, utsname(), sizeof(*name));
 	up_read(&uts_sem);
 	if (personality(current->personality) == PER_LINUX32)
 		err |= copy_to_user(&name->machine, "i686", 5);
 	return err ? -EFAULT : 0;
 }
@@ -8,12 +8,12 @@
 #define __NO_STUBS
 #define __SYSCALL(nr, sym) extern asmlinkage void sym(void) ;
-#undef _ASM_X86_64_UNISTD_H_
+#undef ASM_X86__UNISTD_64_H
 #include <asm/unistd_64.h>
 #undef __SYSCALL
 #define __SYSCALL(nr, sym) [nr] = sym,
-#undef _ASM_X86_64_UNISTD_H_
+#undef ASM_X86__UNISTD_64_H
 typedef void (*sys_call_ptr_t)(void);
...
@@ -36,6 +36,7 @@
 #include <asm/arch_hooks.h>
 #include <asm/hpet.h>
 #include <asm/time.h>
+#include <asm/timer.h>
 #include "do_timer.h"
...
@@ -10,6 +10,7 @@
 #include <asm/ldt.h>
 #include <asm/processor.h>
 #include <asm/proto.h>
+#include <asm/syscalls.h>
 #include "tls.h"
...
@@ -32,6 +32,8 @@
 #include <linux/bug.h>
 #include <linux/nmi.h>
 #include <linux/mm.h>
+#include <linux/smp.h>
+#include <linux/io.h>
 #if defined(CONFIG_EDAC)
 #include <linux/edac.h>
@@ -45,9 +47,6 @@
 #include <asm/unwind.h>
 #include <asm/desc.h>
 #include <asm/i387.h>
-#include <asm/nmi.h>
-#include <asm/smp.h>
-#include <asm/io.h>
 #include <asm/pgalloc.h>
 #include <asm/proto.h>
 #include <asm/pda.h>
@@ -85,7 +84,8 @@ static inline void preempt_conditional_cli(struct pt_regs *regs)
 void printk_address(unsigned long address, int reliable)
 {
-	printk(" [<%016lx>] %s%pS\n", address, reliable ? "": "? ", (void *) address);
+	printk(" [<%016lx>] %s%pS\n",
+		address, reliable ? "" : "? ", (void *) address);
 }
 static unsigned long *in_exception_stack(unsigned cpu, unsigned long stack,
@@ -98,7 +98,8 @@ static unsigned long *in_exception_stack(unsigned cpu, unsigned long stack,
 		[STACKFAULT_STACK - 1] = "#SS",
 		[MCE_STACK - 1] = "#MC",
 #if DEBUG_STKSZ > EXCEPTION_STKSZ
-		[N_EXCEPTION_STACKS ... N_EXCEPTION_STACKS + DEBUG_STKSZ / EXCEPTION_STKSZ - 2] = "#DB[?]"
+		[N_EXCEPTION_STACKS ...
+			N_EXCEPTION_STACKS + DEBUG_STKSZ / EXCEPTION_STKSZ - 2] = "#DB[?]"
 #endif
 	};
 	unsigned k;
@@ -163,7 +164,7 @@ static unsigned long *in_exception_stack(unsigned cpu, unsigned long stack,
 }
 /*
  * x86-64 can have up to three kernel stacks:
  * process stack
  * interrupt stack
  * severe exception (double fault, nmi, stack fault, debug, mce) hardware stack
@@ -219,7 +220,7 @@ void dump_trace(struct task_struct *task, struct pt_regs *regs,
 		const struct stacktrace_ops *ops, void *data)
 {
 	const unsigned cpu = get_cpu();
-	unsigned long *irqstack_end = (unsigned long*)cpu_pda(cpu)->irqstackptr;
+	unsigned long *irqstack_end = (unsigned long *)cpu_pda(cpu)->irqstackptr;
 	unsigned used = 0;
 	struct thread_info *tinfo;
@@ -237,7 +238,7 @@ void dump_trace(struct task_struct *task, struct pt_regs *regs,
 	if (!bp) {
 		if (task == current) {
 			/* Grab bp right from our regs */
-			asm("movq %%rbp, %0" : "=r" (bp) :);
+			asm("movq %%rbp, %0" : "=r" (bp) : );
 		} else {
 			/* bp is the last reg pushed by switch_to */
 			bp = *(unsigned long *) task->thread.sp;
@@ -339,9 +340,8 @@ static void
 show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs,
 		unsigned long *stack, unsigned long bp, char *log_lvl)
 {
-	printk("\nCall Trace:\n");
+	printk("Call Trace:\n");
 	dump_trace(task, regs, stack, bp, &print_trace_ops, log_lvl);
-	printk("\n");
 }
 void show_trace(struct task_struct *task, struct pt_regs *regs,
@@ -357,11 +357,15 @@ show_stack_log_lvl(struct task_struct *task, struct pt_regs *regs,
 	unsigned long *stack;
 	int i;
 	const int cpu = smp_processor_id();
-	unsigned long *irqstack_end = (unsigned long *) (cpu_pda(cpu)->irqstackptr);
-	unsigned long *irqstack = (unsigned long *) (cpu_pda(cpu)->irqstackptr - IRQSTACKSIZE);
+	unsigned long *irqstack_end =
+		(unsigned long *) (cpu_pda(cpu)->irqstackptr);
+	unsigned long *irqstack =
+		(unsigned long *) (cpu_pda(cpu)->irqstackptr - IRQSTACKSIZE);
-	// debugging aid: "show_stack(NULL, NULL);" prints the
-	// back trace for this cpu.
+	/*
+	 * debugging aid: "show_stack(NULL, NULL);" prints the
+	 * back trace for this cpu.
+	 */
 	if (sp == NULL) {
 		if (task)
@@ -386,6 +390,7 @@ show_stack_log_lvl(struct task_struct *task, struct pt_regs *regs,
 		printk(" %016lx", *stack++);
 		touch_nmi_watchdog();
 	}
+	printk("\n");
 	show_trace_log_lvl(task, regs, sp, bp, log_lvl);
 }
@@ -404,7 +409,7 @@ void dump_stack(void)
 #ifdef CONFIG_FRAME_POINTER
 	if (!bp)
-		asm("movq %%rbp, %0" : "=r" (bp):);
+		asm("movq %%rbp, %0" : "=r" (bp) : );
 #endif
 	printk("Pid: %d, comm: %.20s %s %s %.*s\n",
@@ -414,7 +419,6 @@ void dump_stack(void)
 		init_utsname()->version);
 	show_trace(NULL, NULL, &stack, bp);
 }
 EXPORT_SYMBOL(dump_stack);
 void show_registers(struct pt_regs *regs)
@@ -443,7 +447,6 @@ void show_registers(struct pt_regs *regs)
 		printk("Stack: ");
 		show_stack_log_lvl(NULL, regs, (unsigned long *)sp,
 				regs->bp, "");
-		printk("\n");
 	printk(KERN_EMERG "Code: ");
@@ -493,7 +496,7 @@ unsigned __kprobes long oops_begin(void)
 	raw_local_irq_save(flags);
 	cpu = smp_processor_id();
 	if (!__raw_spin_trylock(&die_lock)) {
 		if (cpu == die_owner)
 			/* nested oops. should stop eventually */;
 		else
 			__raw_spin_lock(&die_lock);
@@ -638,7 +641,7 @@ do_trap(int trapnr, int signr, char *str, struct pt_regs *regs,
 }
 #define DO_ERROR(trapnr, signr, str, name) \
-asmlinkage void do_##name(struct pt_regs * regs, long error_code) \
+asmlinkage void do_##name(struct pt_regs *regs, long error_code) \
 { \
 	if (notify_die(DIE_TRAP, str, regs, error_code, trapnr, signr) \
 			== NOTIFY_STOP) \
@@ -648,7 +651,7 @@ asmlinkage void do_##name(struct pt_regs *regs, long error_code) \
 }
 #define DO_ERROR_INFO(trapnr, signr, str, name, sicode, siaddr) \
-asmlinkage void do_##name(struct pt_regs * regs, long error_code) \
+asmlinkage void do_##name(struct pt_regs *regs, long error_code) \
 { \
 	siginfo_t info; \
 	info.si_signo = signr; \
@@ -683,7 +686,7 @@ asmlinkage void do_stack_segment(struct pt_regs *regs, long error_code)
 	preempt_conditional_cli(regs);
 }
-asmlinkage void do_double_fault(struct pt_regs * regs, long error_code)
+asmlinkage void do_double_fault(struct pt_regs *regs, long error_code)
 {
 	static const char str[] = "double fault";
 	struct task_struct *tsk = current;
@@ -778,9 +781,10 @@ io_check_error(unsigned char reason, struct pt_regs *regs)
 }
 static notrace __kprobes void
-unknown_nmi_error(unsigned char reason, struct pt_regs * regs)
+unknown_nmi_error(unsigned char reason, struct pt_regs *regs)
 {
-	if (notify_die(DIE_NMIUNKNOWN, "nmi", regs, reason, 2, SIGINT) == NOTIFY_STOP)
+	if (notify_die(DIE_NMIUNKNOWN, "nmi", regs, reason, 2, SIGINT) ==
+	    NOTIFY_STOP)
 		return;
 	printk(KERN_EMERG "Uhhuh. NMI received for unknown reason %02x.\n",
 		reason);
@@ -882,7 +886,7 @@ asmlinkage __kprobes struct pt_regs *sync_regs(struct pt_regs *eregs)
 	else if (user_mode(eregs))
 		regs = task_pt_regs(current);
 	/* Exception from kernel and interrupts are enabled. Move to
 	   kernel process stack. */
 	else if (eregs->flags & X86_EFLAGS_IF)
 		regs = (struct pt_regs *)(eregs->sp -= sizeof(struct pt_regs));
 	if (eregs != regs)
@@ -891,7 +895,7 @@ asmlinkage __kprobes struct pt_regs *sync_regs(struct pt_regs *eregs)
 }
 /* runs on IST stack. */
-asmlinkage void __kprobes do_debug(struct pt_regs * regs,
+asmlinkage void __kprobes do_debug(struct pt_regs *regs,
 				   unsigned long error_code)
 {
 	struct task_struct *tsk = current;
@@ -1035,7 +1039,7 @@ asmlinkage void do_coprocessor_error(struct pt_regs *regs)
 asmlinkage void bad_intr(void)
 {
 	printk("bad interrupt");
 }
 asmlinkage void do_simd_coprocessor_error(struct pt_regs *regs)
@@ -1047,7 +1051,7 @@ asmlinkage void do_simd_coprocessor_error(struct pt_regs *regs)
 	conditional_sti(regs);
 	if (!user_mode(regs) &&
 			kernel_math_error(regs, "kernel simd math error", 19))
 		return;
 	/*
@@ -1092,7 +1096,7 @@ asmlinkage void do_simd_coprocessor_error(struct pt_regs *regs)
 	force_sig_info(SIGFPE, &info, task);
 }
-asmlinkage void do_spurious_interrupt_bug(struct pt_regs * regs)
+asmlinkage void do_spurious_interrupt_bug(struct pt_regs *regs)
 {
 }
@@ -1149,8 +1153,10 @@ void __init trap_init(void)
 	set_intr_gate(0, &divide_error);
 	set_intr_gate_ist(1, &debug, DEBUG_STACK);
 	set_intr_gate_ist(2, &nmi, NMI_STACK);
-	set_system_gate_ist(3, &int3, DEBUG_STACK); /* int3 can be called from all */
-	set_system_gate(4, &overflow); /* int4 can be called from all */
+	/* int3 can be called from all */
+	set_system_gate_ist(3, &int3, DEBUG_STACK);
+	/* int4 can be called from all */
+	set_system_gate(4, &overflow);
 	set_intr_gate(5, &bounds);
 	set_intr_gate(6, &invalid_op);
 	set_intr_gate(7, &device_not_available);
...
This diff is collapsed.
@@ -25,45 +25,31 @@
 #include <asm/visws/cobalt.h>
 #include <asm/visws/piix4.h>
 #include <asm/arch_hooks.h>
+#include <asm/io_apic.h>
 #include <asm/fixmap.h>
 #include <asm/reboot.h>
 #include <asm/setup.h>
 #include <asm/e820.h>
-#include <asm/smp.h>
 #include <asm/io.h>
 #include <mach_ipi.h>
 #include "mach_apic.h"
-#include <linux/init.h>
-#include <linux/smp.h>
 #include <linux/kernel_stat.h>
-#include <linux/interrupt.h>
-#include <linux/init.h>
-#include <asm/io.h>
-#include <asm/apic.h>
 #include <asm/i8259.h>
 #include <asm/irq_vectors.h>
-#include <asm/visws/cobalt.h>
 #include <asm/visws/lithium.h>
-#include <asm/visws/piix4.h>
 #include <linux/sched.h>
 #include <linux/kernel.h>
-#include <linux/init.h>
 #include <linux/pci.h>
 #include <linux/pci_ids.h>
 extern int no_broadcast;
-#include <asm/io.h>
 #include <asm/apic.h>
-#include <asm/arch_hooks.h>
-#include <asm/visws/cobalt.h>
-#include <asm/visws/lithium.h>
 char visws_board_type = -1;
 char visws_board_rev = -1;
...
@@ -46,6 +46,7 @@
 #include <asm/io.h>
 #include <asm/tlbflush.h>
 #include <asm/irq.h>
+#include <asm/syscalls.h>
 /*
  * Known problems:
...
@@ -393,13 +393,13 @@ static void *vmi_kmap_atomic_pte(struct page *page, enum km_type type)
 }
 #endif
-static void vmi_allocate_pte(struct mm_struct *mm, u32 pfn)
+static void vmi_allocate_pte(struct mm_struct *mm, unsigned long pfn)
 {
 	vmi_set_page_type(pfn, VMI_PAGE_L1);
 	vmi_ops.allocate_page(pfn, VMI_PAGE_L1, 0, 0, 0);
 }
-static void vmi_allocate_pmd(struct mm_struct *mm, u32 pfn)
+static void vmi_allocate_pmd(struct mm_struct *mm, unsigned long pfn)
 {
 	/*
 	 * This call comes in very early, before mem_map is setup.
@@ -410,20 +410,20 @@ static void vmi_allocate_pmd(struct mm_struct *mm, unsigned long pfn)
 	vmi_ops.allocate_page(pfn, VMI_PAGE_L2, 0, 0, 0);
 }
-static void vmi_allocate_pmd_clone(u32 pfn, u32 clonepfn, u32 start, u32 count)
+static void vmi_allocate_pmd_clone(unsigned long pfn, unsigned long clonepfn, unsigned long start, unsigned long count)
 {
 	vmi_set_page_type(pfn, VMI_PAGE_L2 | VMI_PAGE_CLONE);
 	vmi_check_page_type(clonepfn, VMI_PAGE_L2);
 	vmi_ops.allocate_page(pfn, VMI_PAGE_L2 | VMI_PAGE_CLONE, clonepfn, start, count);
 }
-static void vmi_release_pte(u32 pfn)
+static void vmi_release_pte(unsigned long pfn)
 {
 	vmi_ops.release_page(pfn, VMI_PAGE_L1);
 	vmi_set_page_type(pfn, VMI_PAGE_NORMAL);
 }
-static void vmi_release_pmd(u32 pfn)
+static void vmi_release_pmd(unsigned long pfn)
 {
 	vmi_ops.release_page(pfn, VMI_PAGE_L2);
 	vmi_set_page_type(pfn, VMI_PAGE_NORMAL);
...
@@ -16,37 +16,46 @@ static void __rdmsr_on_cpu(void *info)
 	rdmsr(rv->msr_no, rv->l, rv->h);
 }
-static void __rdmsr_safe_on_cpu(void *info)
+static void __wrmsr_on_cpu(void *info)
 {
 	struct msr_info *rv = info;
-	rv->err = rdmsr_safe(rv->msr_no, &rv->l, &rv->h);
+	wrmsr(rv->msr_no, rv->l, rv->h);
 }
-static int _rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h, int safe)
+int rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h)
 {
-	int err = 0;
+	int err;
 	struct msr_info rv;
 	rv.msr_no = msr_no;
-	if (safe) {
-		err = smp_call_function_single(cpu, __rdmsr_safe_on_cpu,
-					       &rv, 1);
-		err = err ? err : rv.err;
-	} else {
-		err = smp_call_function_single(cpu, __rdmsr_on_cpu, &rv, 1);
-	}
+	err = smp_call_function_single(cpu, __rdmsr_on_cpu, &rv, 1);
 	*l = rv.l;
 	*h = rv.h;
 	return err;
 }
-static void __wrmsr_on_cpu(void *info)
+int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
 {
-	struct msr_info *rv = info;
-	wrmsr(rv->msr_no, rv->l, rv->h);
+	int err;
+	struct msr_info rv;
+
+	rv.msr_no = msr_no;
+	rv.l = l;
+	rv.h = h;
+	err = smp_call_function_single(cpu, __wrmsr_on_cpu, &rv, 1);
+	return err;
+}
+
+/* These "safe" variants are slower and should be used when the target MSR
+   may not actually exist. */
+static void __rdmsr_safe_on_cpu(void *info)
+{
+	struct msr_info *rv = info;
+	rv->err = rdmsr_safe(rv->msr_no, &rv->l, &rv->h);
 }
 static void __wrmsr_safe_on_cpu(void *info)
@@ -56,45 +65,30 @@ static void __wrmsr_safe_on_cpu(void *info)
 	rv->err = wrmsr_safe(rv->msr_no, rv->l, rv->h);
 }
-static int _wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h, int safe)
+int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h)
 {
-	int err = 0;
+	int err;
 	struct msr_info rv;
 	rv.msr_no = msr_no;
-	rv.l = l;
-	rv.h = h;
-	if (safe) {
-		err = smp_call_function_single(cpu, __wrmsr_safe_on_cpu,
-					       &rv, 1);
-		err = err ? err : rv.err;
-	} else {
-		err = smp_call_function_single(cpu, __wrmsr_on_cpu, &rv, 1);
-	}
-	return err;
-}
-int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
-{
-	return _wrmsr_on_cpu(cpu, msr_no, l, h, 0);
-}
-int rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h)
-{
-	return _rdmsr_on_cpu(cpu, msr_no, l, h, 0);
-}
-/* These "safe" variants are slower and should be used when the target MSR
-   may not actually exist. */
+	err = smp_call_function_single(cpu, __rdmsr_safe_on_cpu, &rv, 1);
+	*l = rv.l;
+	*h = rv.h;
+
+	return err ? err : rv.err;
+}
 int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
 {
-	return _wrmsr_on_cpu(cpu, msr_no, l, h, 1);
-}
-int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h)
-{
-	return _rdmsr_on_cpu(cpu, msr_no, l, h, 1);
+	int err;
+	struct msr_info rv;
+
+	rv.msr_no = msr_no;
+	rv.l = l;
+	rv.h = h;
+	err = smp_call_function_single(cpu, __wrmsr_safe_on_cpu, &rv, 1);
+	return err ? err : rv.err;
 }
 EXPORT_SYMBOL(rdmsr_on_cpu);
...
This diff is collapsed.
@@ -23,9 +23,9 @@ __asm__ __volatile__(
 	"jne 1b\n\t"
 	"xorl %%eax,%%eax\n\t"
 	"2:"
-	:"=a" (__res), "=&c" (d0), "=&S" (d1)
-	:"0" (0), "1" (0xffffffff), "2" (cs), "g" (ct)
-	:"dx", "di");
+	: "=a" (__res), "=&c" (d0), "=&S" (d1)
+	: "0" (0), "1" (0xffffffff), "2" (cs), "g" (ct)
+	: "dx", "di");
 	return __res;
 }
@@ -10,13 +10,15 @@
 #include <asm/e820.h>
 #include <asm/setup.h>
+#include <mach_ipi.h>
 #ifdef CONFIG_HOTPLUG_CPU
 #define DEFAULT_SEND_IPI	(1)
 #else
 #define DEFAULT_SEND_IPI	(0)
 #endif
-int no_broadcast=DEFAULT_SEND_IPI;
+int no_broadcast = DEFAULT_SEND_IPI;
 /**
  * pre_intr_init_hook - initialisation prior to setting up interrupt vectors
...
@@ -328,7 +328,7 @@ void __init initmem_init(unsigned long start_pfn,
 	get_memcfg_numa();
-	kva_pages = round_up(calculate_numa_remap_pages(), PTRS_PER_PTE);
+	kva_pages = roundup(calculate_numa_remap_pages(), PTRS_PER_PTE);
 	kva_target_pfn = round_down(max_low_pfn - kva_pages, PTRS_PER_PTE);
 	do {
...
@@ -148,8 +148,8 @@ static void note_page(struct seq_file *m, struct pg_state *st,
 	 * we have now. "break" is either changing perms, levels or
 	 * address space marker.
 	 */
-	prot = pgprot_val(new_prot) & ~(PTE_PFN_MASK);
-	cur = pgprot_val(st->current_prot) & ~(PTE_PFN_MASK);
+	prot = pgprot_val(new_prot) & PTE_FLAGS_MASK;
+	cur = pgprot_val(st->current_prot) & PTE_FLAGS_MASK;
 	if (!st->level) {
 		/* First entry */
...
@@ -35,6 +35,7 @@
 #include <asm/tlbflush.h>
 #include <asm/proto.h>
 #include <asm-generic/sections.h>
+#include <asm/traps.h>
 /*
  * Page fault error code bits
@@ -357,8 +358,6 @@ static int is_errata100(struct pt_regs *regs, unsigned long address)
 	return 0;
 }
-void do_invalid_op(struct pt_regs *, unsigned long);
 static int is_f00f_bug(struct pt_regs *regs, unsigned long address)
 {
 #ifdef CONFIG_X86_F00F_BUG
...
@@ -47,6 +47,7 @@
 #include <asm/paravirt.h>
 #include <asm/setup.h>
 #include <asm/cacheflush.h>
+#include <asm/smp.h>
 unsigned int __VMALLOC_RESERVE = 128 << 20;
...
This diff is collapsed.