Commit ea62ccd0 authored by Linus Torvalds

Merge branch 'for-linus' of git://one.firstfloor.org/home/andi/git/linux-2.6

* 'for-linus' of git://one.firstfloor.org/home/andi/git/linux-2.6: (231 commits)
  [PATCH] i386: Don't delete cpu_devs data to identify different x86 types in late_initcall
  [PATCH] i386: type may be unused
  [PATCH] i386: Some additional chipset register values validation.
  [PATCH] i386: Add missing !X86_PAE dependincy to the 2G/2G split.
  [PATCH] x86-64: Don't exclude asm-offsets.c in Documentation/dontdiff
  [PATCH] i386: avoid redundant preempt_disable in __unlazy_fpu
  [PATCH] i386: white space fixes in i387.h
  [PATCH] i386: Drop noisy e820 debugging printks
  [PATCH] x86-64: Fix allnoconfig error in genapic_flat.c
  [PATCH] x86-64: Shut up warnings for vfat compat ioctls on other file systems
  [PATCH] x86-64: Share identical video.S between i386 and x86-64
  [PATCH] x86-64: Remove CONFIG_REORDER
  [PATCH] x86-64: Print type and size correctly for unknown compat ioctls
  [PATCH] i386: Remove copy_*_user BUG_ONs for (size < 0)
  [PATCH] i386: Little cleanups in smpboot.c
  [PATCH] x86-64: Don't enable NUMA for a single node in K8 NUMA scanning
  [PATCH] x86: Use RDTSCP for synchronous get_cycles if possible
  [PATCH] i386: Add X86_FEATURE_RDTSCP
  [PATCH] i386: Implement X86_FEATURE_SYNC_RDTSC on i386
  [PATCH] i386: Implement alternative_io for i386
  ...

Fix up trivial conflict in include/linux/highmem.h manually.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parents 886a0768 35060b6a
@@ -1745,8 +1745,9 @@ S: D-64295
 S: Germany
 
 N: Andi Kleen
-E: ak@muc.de
-D: network hacker, syncookies
+E: andi@firstfloor.org
+U: http://www.halobates.de
+D: network, x86, NUMA, various hacks
 S: Schwalbenstr. 96
 S: 85551 Ottobrunn
 S: Germany
...
@@ -55,8 +55,8 @@ aic7*seq.h*
 aicasm
 aicdb.h*
 asm
-asm-offsets.*
-asm_offsets.*
+asm-offsets.h
+asm_offsets.h
 autoconf.h*
 bbootsect
 bin2c
...
@@ -2,7 +2,7 @@
 ----------------------------
 
 		H. Peter Anvin <hpa@zytor.com>
-			Last update 2007-01-26
+			Last update 2007-03-06
 
 On the i386 platform, the Linux kernel uses a rather complicated boot
 convention.  This has evolved partially due to historical aspects, as
@@ -35,9 +35,13 @@ Protocol 2.03:	(Kernel 2.4.18-pre1) Explicitly makes the highest possible
 		initrd address available to the bootloader.
 
 Protocol 2.04:	(Kernel 2.6.14) Extend the syssize field to four bytes.
 Protocol 2.05:	(Kernel 2.6.20) Make protected mode kernel relocatable.
 		Introduce relocatable_kernel and kernel_alignment fields.
+Protocol 2.06:	(Kernel 2.6.22) Added a field that contains the size of
+		the boot command line
 
 **** MEMORY LAYOUT
@@ -133,6 +137,8 @@ Offset	Proto	Name		Meaning
 022C/4	2.03+	initrd_addr_max		Highest legal initrd address
 0230/4	2.05+	kernel_alignment	Physical addr alignment required for kernel
 0234/1	2.05+	relocatable_kernel	Whether kernel is relocatable or not
+0235/3	N/A	pad2			Unused
+0238/4	2.06+	cmdline_size		Maximum size of the kernel command line
 
 (1) For backwards compatibility, if the setup_sects field contains 0, the
     real value is 4.
@@ -233,6 +239,12 @@ filled out, however:
 	if your ramdisk is exactly 131072 bytes long and this field is
 	0x37FFFFFF, you can start your ramdisk at 0x37FE0000.)
 
+cmdline_size:
+	The maximum size of the command line without the terminating
+	zero.  This means that the command line can contain at most
+	cmdline_size characters.  With protocol version 2.05 and
+	earlier, the maximum size was 255.
+
 **** THE KERNEL COMMAND LINE
@@ -241,11 +253,10 @@ loader to communicate with the kernel. Some of its options are also
 relevant to the boot loader itself, see "special command line options"
 below.
 
-The kernel command line is a null-terminated string currently up to
-255 characters long, plus the final null.  A string that is too long
-will be automatically truncated by the kernel, a boot loader may allow
-a longer command line to be passed to permit future kernels to extend
-this limit.
+The kernel command line is a null-terminated string.  The maximum
+length can be retrieved from the field cmdline_size.  Before protocol
+version 2.06, the maximum was 255 characters.  A string that is too
+long will be automatically truncated by the kernel.
 
 If the boot protocol version is 2.02 or later, the address of the
 kernel command line is given by the header field cmd_line_ptr (see
...
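The cmdline_size rule described in the boot protocol hunk above (maximum command-line characters excluding the terminating NUL, fixed at 255 before protocol 2.06) can be sketched as a loader-side length check. This is an illustrative Python model, not boot-loader code; the function names are invented for the example:

```python
def max_cmdline_chars(protocol, cmdline_size):
    """Maximum command-line characters, excluding the terminating NUL.

    'protocol' is the boot protocol version word (e.g. 0x0206), and
    'cmdline_size' is the value a loader would read from header offset
    0x238 on protocol 2.06+ kernels.
    """
    if protocol >= 0x0206:
        return cmdline_size
    return 255  # fixed limit through protocol 2.05


def loader_accepts(cmdline, protocol, cmdline_size=255):
    # The kernel silently truncates an overlong string, so a careful
    # boot loader checks the length before copying it into place.
    return len(cmdline) <= max_cmdline_chars(protocol, cmdline_size)
```

For a 2.05 kernel, `loader_accepts` rejects anything over 255 characters no matter what the loader would like to pass.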
@@ -64,6 +64,7 @@ parameter is applicable:
 	GENERIC_TIME The generic timeofday code is enabled.
 	NFS	Appropriate NFS support is enabled.
 	OSS	OSS sound support is enabled.
+	PV_OPS	A paravirtualized kernel
 	PARIDE	The ParIDE subsystem is enabled.
 	PARISC	The PA-RISC architecture is enabled.
 	PCI	PCI bus support is enabled.
@@ -695,8 +696,15 @@ and is between 256 and 4096 characters. It is defined in the file
 	idebus=		[HW] (E)IDE subsystem - VLB/PCI bus speed
 			See Documentation/ide.txt.
 
-	idle=		[HW]
-			Format: idle=poll or idle=halt
+	idle=		[X86]
+			Format: idle=poll or idle=mwait
+			Poll forces a polling idle loop that can slightly
+			improve the performance of waking up an idle CPU, but
+			will use a lot of power and make the system run hot.
+			Not recommended.
+			idle=mwait: on systems which support MONITOR/MWAIT but
+			the kernel chose not to use it because it doesn't save
+			as much power as a normal idle loop, use the
+			MONITOR/MWAIT idle loop anyway.  Performance should be
+			the same as idle=poll.
 
 	ignore_loglevel	[KNL]
 			Ignore loglevel setting - this will print /all/
@@ -1157,6 +1165,11 @@ and is between 256 and 4096 characters. It is defined in the file
 	nomce		[IA-32] Machine Check Exception
 
+	noreplace-paravirt	[IA-32,PV_OPS] Don't patch paravirt_ops
+
+	noreplace-smp	[IA-32,SMP] Don't replace SMP instructions
+			with UP alternatives
+
 	noresidual	[PPC] Don't use residual data on PReP machines.
 
 	noresume	[SWSUSP] Disables resume and restores original swap
@@ -1562,6 +1575,9 @@ and is between 256 and 4096 characters. It is defined in the file
 	smart2=		[HW]
 			Format: <io1>[,<io2>[,...,<io8>]]
 
+	smp-alt-once	[IA-32,SMP] On a hotplug CPU system, only
+			attempt to substitute SMP alternatives once at boot.
+
 	snd-ad1816a=	[HW,ALSA]
 
 	snd-ad1848=	[HW,ALSA]
@@ -1820,6 +1836,7 @@ and is between 256 and 4096 characters. It is defined in the file
 			[USBHID] The interval which mice are to be polled at.
 
 	vdso=		[IA-32,SH]
+			vdso=2: enable compat VDSO (default with COMPAT_VDSO)
 			vdso=1: enable VDSO (default)
 			vdso=0: disable VDSO mapping
...
@@ -149,7 +149,19 @@ NUMA
 
   numa=noacpi   Don't parse the SRAT table for NUMA setup
 
-  numa=fake=X   Fake X nodes and ignore NUMA setup of the actual machine.
+  numa=fake=CMDLINE
+		If a number, fakes CMDLINE nodes and ignores NUMA setup of the
+		actual machine.  Otherwise, system memory is configured
+		depending on the sizes and coefficients listed.  For example:
+			numa=fake=2*512,1024,4*256,*128
+		gives two 512M nodes, a 1024M node, four 256M nodes, and the
+		rest split into 128M chunks.  If the last character of CMDLINE
+		is a *, the remaining memory is divided up equally among its
+		coefficient:
+			numa=fake=2*512,2*
+		gives two 512M nodes and the rest split into two nodes.
+		Otherwise, the remaining system RAM is allocated to an
+		additional node.
 
   numa=hotadd=percent
 		Only allow hotadd memory to preallocate page structures upto
...
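The numa=fake= size-list behaviour documented above can be modelled with a short sketch. This is an illustrative re-implementation of the documented semantics only, not the kernel's actual NUMA-emulation parser:

```python
def fake_nodes(spec, total_mb):
    """Model of the numa=fake= size list: returns node sizes in MB.

    'spec' is the CMDLINE string after numa=fake=, 'total_mb' the
    system memory; e.g. fake_nodes("2*512,2*", 2048).
    """
    nodes = []
    rest_chunk = None   # "*128": split the remainder into 128M chunks
    rest_ways = None    # "2*":   divide the remainder into 2 nodes
    for tok in filter(None, spec.split(',')):
        if tok.startswith('*'):
            rest_chunk = int(tok[1:])
        elif tok.endswith('*'):
            rest_ways = int(tok[:-1])
        elif '*' in tok:
            count, size = tok.split('*')
            nodes += [int(size)] * int(count)
        else:
            nodes.append(int(tok))
    rest = total_mb - sum(nodes)
    if rest_chunk:
        nodes += [rest_chunk] * (rest // rest_chunk)
    elif rest_ways:
        nodes += [rest // rest_ways] * rest_ways
    elif rest > 0:
        nodes.append(rest)  # remainder becomes one additional node
    return nodes
```

On a hypothetical 4G machine, `fake_nodes("2*512,1024,4*256,*128", 4096)` yields the two 512M nodes, the 1024M node, the four 256M nodes, and eight 128M chunks from the remaining gigabyte, matching the example in the text.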
+Using numa=fake and CPUSets for Resource Management
+Written by David Rientjes <rientjes@cs.washington.edu>
+
+This document describes how the numa=fake x86_64 command-line option can be used
+in conjunction with cpusets for coarse memory management.  Using this feature,
+you can create fake NUMA nodes that represent contiguous chunks of memory and
+assign them to cpusets and their attached tasks.  This is a way of limiting the
+amount of system memory that is available to a certain class of tasks.
+
+For more information on the features of cpusets, see Documentation/cpusets.txt.
+There are a number of different configurations you can use for your needs.  For
+more information on the numa=fake command line option and its various ways of
+configuring fake nodes, see Documentation/x86_64/boot-options.txt.
+
+For the purposes of this introduction, we'll assume a very primitive NUMA
+emulation setup of "numa=fake=4*512,".  This will split our system memory into
+four equal chunks of 512M each that we can now use to assign to cpusets.  As
+you become more familiar with using this combination for resource control,
+you'll determine a better setup to minimize the number of nodes you have to deal
+with.
+
+A machine may be split as follows with "numa=fake=4*512," as reported by dmesg:
+
+	Faking node 0 at 0000000000000000-0000000020000000 (512MB)
+	Faking node 1 at 0000000020000000-0000000040000000 (512MB)
+	Faking node 2 at 0000000040000000-0000000060000000 (512MB)
+	Faking node 3 at 0000000060000000-0000000080000000 (512MB)
+	...
+	On node 0 totalpages: 130975
+	On node 1 totalpages: 131072
+	On node 2 totalpages: 131072
+	On node 3 totalpages: 131072
+
+Now following the instructions for mounting the cpusets filesystem from
+Documentation/cpusets.txt, you can assign fake nodes (i.e. contiguous memory
+address spaces) to individual cpusets:
+
+	[root@xroads /]# mkdir exampleset
+	[root@xroads /]# mount -t cpuset none exampleset
+	[root@xroads /]# mkdir exampleset/ddset
+	[root@xroads /]# cd exampleset/ddset
+	[root@xroads /exampleset/ddset]# echo 0-1 > cpus
+	[root@xroads /exampleset/ddset]# echo 0-1 > mems
+
+Now this cpuset, 'ddset', will only be allowed access to fake nodes 0 and 1 for
+memory allocations (1G).
+
+You can now assign tasks to these cpusets to limit the memory resources
+available to them according to the fake nodes assigned as mems:
+
+	[root@xroads /exampleset/ddset]# echo $$ > tasks
+	[root@xroads /exampleset/ddset]# dd if=/dev/zero of=tmp bs=1024 count=1G
+	[1] 13425
+
+Notice the difference in system memory usage as reported by /proc/meminfo
+between the restricted cpuset case above and the unrestricted case (i.e.
+running the same 'dd' command without assigning it to a fake NUMA cpuset):
+
+				Unrestricted	Restricted
+	MemTotal:		3091900 kB	3091900 kB
+	MemFree:		42113 kB	1513236 kB
+
+This allows for coarse memory management for the tasks you assign to particular
+cpusets.  Since cpusets can form a hierarchy, you can create some pretty
+interesting combinations of use-cases for various classes of tasks for your
+memory management needs.
@@ -36,7 +36,12 @@ between all CPUs.
 
 check_interval
 	How often to poll for corrected machine check errors, in seconds
-	(Note output is hexadecimal). Default 5 minutes.
+	(Note output is hexadecimal). Default 5 minutes.  When the poller
+	finds MCEs it triggers an exponential speedup (poll more often) of
+	the polling interval.  When the poller stops finding MCEs, it
+	triggers an exponential backoff (poll less often) of the polling
+	interval.  The check_interval variable is both the initial and
+	maximum polling interval.
 
 tolerant
 	Tolerance level. When a machine check exception occurs for a non
...
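The adaptive polling described for check_interval can be sketched as follows. The halving/doubling factor is an assumption chosen for illustration (the documentation only says "exponential"), and the function is a model, not the kernel's MCE poller:

```python
def next_interval(current, found_mce, check_interval=300):
    """Model of adaptive MCE polling: returns the next poll delay (s).

    check_interval is both the initial and the maximum interval, as
    the documentation states; 300s matches the 5-minute default.
    """
    if found_mce:
        # exponential speedup: poll more often while errors keep coming
        return max(1, current // 2)
    # exponential backoff: poll less often, capped at check_interval
    return min(check_interval, current * 2)
```

A run of corrected errors drives the interval down toward one second; once the machine goes quiet, successive polls double the delay back up to check_interval.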
@@ -1617,7 +1617,7 @@ S: Maintained
 
 HPET:	x86_64
 P:	Andi Kleen and Vojtech Pavlik
-M:	ak@muc.de and vojtech@suse.cz
+M:	andi@firstfloor.org and vojtech@suse.cz
 S:	Maintained
 
 HPET:	ACPI hpet.c
@@ -2652,6 +2652,19 @@ T: git kernel.org:/pub/scm/linux/kernel/git/kyle/parisc-2.6.git
 T:	cvs cvs.parisc-linux.org:/var/cvs/linux-2.6
 S:	Maintained
 
+PARAVIRT_OPS INTERFACE
+P:	Jeremy Fitzhardinge
+M:	jeremy@xensource.com
+P:	Chris Wright
+M:	chrisw@sous-sol.org
+P:	Zachary Amsden
+M:	zach@vmware.com
+P:	Rusty Russell
+M:	rusty@rustcorp.com.au
+L:	virtualization@lists.osdl.org
+L:	linux-kernel@vger.kernel.org
+S:	Supported
+
 PC87360 HARDWARE MONITORING DRIVER
 P:	Jim Cromie
 M:	jim.cromie@gmail.com
@@ -3876,6 +3889,15 @@ M: eis@baty.hanse.de
 L:	linux-x25@vger.kernel.org
 S:	Maintained
 
+XEN HYPERVISOR INTERFACE
+P:	Jeremy Fitzhardinge
+M:	jeremy@xensource.com
+P:	Chris Wright
+M:	chrisw@sous-sol.org
+L:	virtualization@lists.osdl.org
+L:	xen-devel@lists.xensource.com
+S:	Supported
+
 XFS FILESYSTEM
 P:	Silicon Graphics Inc
 P:	Tim Shimmin, David Chatterton
...
@@ -491,7 +491,7 @@ endif
 include $(srctree)/arch/$(ARCH)/Makefile
 
 ifdef CONFIG_FRAME_POINTER
-CFLAGS		+= -fno-omit-frame-pointer $(call cc-option,-fno-optimize-sibling-calls,)
+CFLAGS		+= -fno-omit-frame-pointer -fno-optimize-sibling-calls
 else
 CFLAGS		+= -fomit-frame-pointer
 endif
...
@@ -98,7 +98,7 @@ extern int end;
 static ulg free_mem_ptr;
 static ulg free_mem_ptr_end;
 
-#define HEAP_SIZE 0x2000
+#define HEAP_SIZE 0x3000
 
 #include "../../../lib/inflate.c"
...
@@ -69,7 +69,7 @@ SECTIONS
   . = ALIGN(8);
   SECURITY_INIT
 
-  . = ALIGN(64);
+  . = ALIGN(8192);
   __per_cpu_start = .;
   .data.percpu : { *(.data.percpu) }
   __per_cpu_end = .;
...
@@ -239,7 +239,7 @@ extern int end;
 static ulg free_mem_ptr;
 static ulg free_mem_ptr_end;
 
-#define HEAP_SIZE 0x2000
+#define HEAP_SIZE 0x3000
 
 #include "../../../../lib/inflate.c"
...
@@ -59,7 +59,7 @@ SECTIONS
 			usr/built-in.o(.init.ramfs)
 		__initramfs_end = .;
 #endif
-		. = ALIGN(64);
+		. = ALIGN(4096);
 		__per_cpu_start = .;
 			*(.data.percpu)
 		__per_cpu_end = .;
...
@@ -182,7 +182,7 @@ extern int end;
 static ulg free_mem_ptr;
 static ulg free_mem_ptr_end;
 
-#define HEAP_SIZE 0x2000
+#define HEAP_SIZE 0x3000
 
 #include "../../../../lib/inflate.c"
...
@@ -91,6 +91,7 @@ SECTIONS
   }
   SECURITY_INIT
 
+  . = ALIGN(8192);
   __per_cpu_start = .;
   .data.percpu : { *(.data.percpu) }
   __per_cpu_end = .;
...
@@ -57,6 +57,7 @@ SECTIONS
   __alt_instructions_end = .;
   .altinstr_replacement : { *(.altinstr_replacement) }
 
+  . = ALIGN(4096);
   __per_cpu_start = .;
   .data.percpu : { *(.data.percpu) }
   __per_cpu_end = .;
...
@@ -220,7 +220,7 @@ config PARAVIRT
 
 config VMI
 	bool "VMI Paravirt-ops support"
-	depends on PARAVIRT && !COMPAT_VDSO
+	depends on PARAVIRT
 	help
 	  VMI provides a paravirtualized interface to the VMware ESX server
 	  (it could be used by other hypervisors in theory too, but is not
@@ -571,6 +571,9 @@ choice
 		bool "3G/1G user/kernel split (for full 1G low memory)"
 	config VMSPLIT_2G
 		bool "2G/2G user/kernel split"
+	config VMSPLIT_2G_OPT
+		depends on !HIGHMEM
+		bool "2G/2G user/kernel split (for full 2G low memory)"
 	config VMSPLIT_1G
 		bool "1G/3G user/kernel split"
 endchoice
@@ -578,7 +581,8 @@ endchoice
 config PAGE_OFFSET
 	hex
 	default 0xB0000000 if VMSPLIT_3G_OPT
-	default 0x78000000 if VMSPLIT_2G
+	default 0x80000000 if VMSPLIT_2G
+	default 0x78000000 if VMSPLIT_2G_OPT
 	default 0x40000000 if VMSPLIT_1G
 	default 0xC0000000
@@ -915,12 +919,9 @@ source kernel/power/Kconfig
 
 source "drivers/acpi/Kconfig"
 
-menu "APM (Advanced Power Management) BIOS Support"
-depends on PM && !X86_VISWS
-
-config APM
+menuconfig APM
 	tristate "APM (Advanced Power Management) BIOS support"
-	depends on PM
+	depends on PM && !X86_VISWS
 	---help---
 	  APM is a BIOS specification for saving power using several different
 	  techniques. This is mostly useful for battery powered laptops with
@@ -977,9 +978,10 @@ config APM
 	  To compile this driver as a module, choose M here: the
 	  module will be called apm.
 
+if APM
+
 config APM_IGNORE_USER_SUSPEND
 	bool "Ignore USER SUSPEND"
-	depends on APM
 	help
 	  This option will ignore USER SUSPEND requests. On machines with a
 	  compliant APM BIOS, you want to say N. However, on the NEC Versa M
@@ -987,7 +989,6 @@ config APM_IGNORE_USER_SUSPEND
 
 config APM_DO_ENABLE
 	bool "Enable PM at boot time"
-	depends on APM
 	---help---
 	  Enable APM features at boot time. From page 36 of the APM BIOS
 	  specification: "When disabled, the APM BIOS does not automatically
@@ -1005,7 +1006,6 @@ config APM_DO_ENABLE
 
 config APM_CPU_IDLE
 	bool "Make CPU Idle calls when idle"
-	depends on APM
 	help
 	  Enable calls to APM CPU Idle/CPU Busy inside the kernel's idle loop.
 	  On some machines, this can activate improved power savings, such as
@@ -1017,7 +1017,6 @@ config APM_CPU_IDLE
 
 config APM_DISPLAY_BLANK
 	bool "Enable console blanking using APM"
-	depends on APM
 	help
 	  Enable console blanking using the APM. Some laptops can use this to
 	  turn off the LCD backlight when the screen blanker of the Linux
@@ -1029,22 +1028,8 @@ config APM_DISPLAY_BLANK
 	  backlight at all, or it might print a lot of errors to the console,
 	  especially if you are using gpm.
 
-config APM_RTC_IS_GMT
-	bool "RTC stores time in GMT"
-	depends on APM
-	help
-	  Say Y here if your RTC (Real Time Clock a.k.a. hardware clock)
-	  stores the time in GMT (Greenwich Mean Time). Say N if your RTC
-	  stores localtime.
-
-	  It is in fact recommended to store GMT in your RTC, because then you
-	  don't have to worry about daylight savings time changes. The only
-	  reason not to use GMT in your RTC is if you also run a broken OS
-	  that doesn't understand GMT.
-
 config APM_ALLOW_INTS
 	bool "Allow interrupts during APM BIOS calls"
-	depends on APM
 	help
 	  Normally we disable external interrupts while we are making calls to
 	  the APM BIOS as a measure to lessen the effects of a badly behaving
@@ -1055,13 +1040,12 @@ config APM_ALLOW_INTS
 
 config APM_REAL_MODE_POWER_OFF
 	bool "Use real mode APM BIOS call to power off"
-	depends on APM
 	help
 	  Use real mode APM BIOS calls to switch off the computer. This is
 	  a work-around for a number of buggy BIOSes. Switch this option on if
 	  your computer crashes instead of powering off properly.
 
-endmenu
+endif # APM
 
 source "arch/i386/kernel/cpu/cpufreq/Kconfig"
...
@@ -43,6 +43,7 @@ config M386
 	  - "Geode GX/LX" For AMD Geode GX and LX processors.
 	  - "CyrixIII/VIA C3" for VIA Cyrix III or VIA C3.
 	  - "VIA C3-2" for VIA C3-2 "Nehemiah" (model 9 and above).
+	  - "VIA C7" for VIA C7.
 
 	  If you don't know what to do, choose "386".
@@ -203,6 +204,12 @@ config MVIAC3_2
 	  of SSE and tells gcc to treat the CPU as a 686.
 	  Note, this kernel will not boot on older (pre model 9) C3s.
 
+config MVIAC7
+	bool "VIA C7"
+	help
+	  Select this for a VIA C7.  Selecting this uses the correct cache
+	  shift and tells gcc to treat the CPU as a 686.
+
 endchoice
 
 config X86_GENERIC
@@ -231,16 +238,21 @@ config X86_L1_CACHE_SHIFT
 	default "7" if MPENTIUM4 || X86_GENERIC
 	default "4" if X86_ELAN || M486 || M386 || MGEODEGX1
 	default "5" if MWINCHIP3D || MWINCHIP2 || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
-	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2
+	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MVIAC7
+
+config X86_XADD
+	bool
+	depends on !M386
+	default y
 
 config RWSEM_GENERIC_SPINLOCK
 	bool
-	depends on M386
+	depends on !X86_XADD
 	default y
 
 config RWSEM_XCHGADD_ALGORITHM
 	bool
-	depends on !M386
+	depends on X86_XADD
 	default y
 
 config ARCH_HAS_ILOG2_U32
@@ -297,7 +309,7 @@ config X86_ALIGNMENT_16
 
 config X86_GOOD_APIC
 	bool
-	depends on MK7 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || MK8 || MEFFICEON || MCORE2
+	depends on MK7 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || MK8 || MEFFICEON || MCORE2 || MVIAC7
	default y
 
 config X86_INTEL_USERCOPY
@@ -322,5 +334,18 @@ config X86_OOSTORE
 
 config X86_TSC
 	bool
-	depends on (MWINCHIP3D || MWINCHIP2 || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MGEODEGX1 || MGEODE_LX || MCORE2) && !X86_NUMAQ
+	depends on (MWINCHIP3D || MWINCHIP2 || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2) && !X86_NUMAQ
 	default y
+
+# this should be set for all -march=.. options where the compiler
+# generates cmov.
+config X86_CMOV
+	bool
+	depends on (MK7 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7)
+	default y
+
+config X86_MINIMUM_CPU_MODEL
+	int
+	default "4" if X86_XADD || X86_CMPXCHG || X86_BSWAP
+	default "0"
@@ -85,14 +85,4 @@ config DOUBLEFAULT
 	  option saves about 4k and might cause you much additional grey
 	  hair.
 
-config DEBUG_PARAVIRT
-	bool "Enable some paravirtualization debugging"
-	default n
-	depends on PARAVIRT && DEBUG_KERNEL
-	help
-	  Currently deliberately clobbers regs which are allowed to be
-	  clobbered in inlined paravirt hooks, even in native mode.
-	  If turning this off solves a problem, then DISABLE_INTERRUPTS() or
-	  ENABLE_INTERRUPTS() is lying about what registers can be clobbered.
-
 endmenu
@@ -34,7 +34,7 @@ CHECKFLAGS	+= -D__i386__
 CFLAGS += -pipe -msoft-float -mregparm=3 -freg-struct-return
 
 # prevent gcc from keeping the stack 16 byte aligned
-CFLAGS += $(call cc-option,-mpreferred-stack-boundary=2)
+CFLAGS += -mpreferred-stack-boundary=4
 
 # CPU-specific tuning. Anything which can be shared with UML should go here.
 include $(srctree)/arch/i386/Makefile.cpu
...
@@ -4,9 +4,9 @@
 #-mtune exists since gcc 3.4
 HAS_MTUNE	:= $(call cc-option-yn, -mtune=i386)
 ifeq ($(HAS_MTUNE),y)
-tune		= $(call cc-option,-mtune=$(1),)
+tune		= $(call cc-option,-mtune=$(1),$(2))
 else
-tune		= $(call cc-option,-mcpu=$(1),)
+tune		= $(call cc-option,-mcpu=$(1),$(2))
 endif
 
 align := $(cc-option-align)
@@ -32,7 +32,8 @@ cflags-$(CONFIG_MWINCHIP2)	+= $(call cc-option,-march=winchip2,-march=i586)
 cflags-$(CONFIG_MWINCHIP3D)	+= $(call cc-option,-march=winchip2,-march=i586)
 cflags-$(CONFIG_MCYRIXIII)	+= $(call cc-option,-march=c3,-march=i486) $(align)-functions=0 $(align)-jumps=0 $(align)-loops=0
 cflags-$(CONFIG_MVIAC3_2)	+= $(call cc-option,-march=c3-2,-march=i686)
+cflags-$(CONFIG_MVIAC7)	+= -march=i686
-cflags-$(CONFIG_MCORE2)	+= -march=i686 $(call cc-option,-mtune=core2,$(call cc-option,-mtune=generic,-mtune=i686))
+cflags-$(CONFIG_MCORE2)	+= -march=i686 $(call tune,core2)
 
 # AMD Elan support
 cflags-$(CONFIG_X86_ELAN)	+= -march=i486
@@ -42,5 +43,5 @@ cflags-$(CONFIG_MGEODEGX1)	+= -march=pentium-mmx
 
 # add at the end to overwrite eventual tuning options from earlier
 # cpu entries
-cflags-$(CONFIG_X86_GENERIC)	+= $(call tune,generic)
+cflags-$(CONFIG_X86_GENERIC)	+= $(call tune,generic,$(call tune,i686))
@@ -36,9 +36,9 @@ HOSTCFLAGS_build.o := $(LINUXINCLUDE)
 # ---------------------------------------------------------------------------
 
 $(obj)/zImage:  IMAGE_OFFSET := 0x1000
-$(obj)/zImage:  EXTRA_AFLAGS := -traditional $(SVGA_MODE) $(RAMDISK)
+$(obj)/zImage:  EXTRA_AFLAGS := $(SVGA_MODE) $(RAMDISK)
 $(obj)/bzImage: IMAGE_OFFSET := 0x100000
-$(obj)/bzImage: EXTRA_AFLAGS := -traditional $(SVGA_MODE) $(RAMDISK) -D__BIG_KERNEL__
+$(obj)/bzImage: EXTRA_AFLAGS := $(SVGA_MODE) $(RAMDISK) -D__BIG_KERNEL__
 $(obj)/bzImage: BUILDFLAGS   := -b
 
 quiet_cmd_image = BUILD   $@
...
@@ -189,7 +189,7 @@ static void putstr(const char *);
 static unsigned long free_mem_ptr;
 static unsigned long free_mem_end_ptr;
-#define HEAP_SIZE 0x3000
+#define HEAP_SIZE 0x4000
 static char *vidmem = (char *)0xb8000;
 static int vidport;
...
@@ -52,6 +52,7 @@
 #include <asm/boot.h>
 #include <asm/e820.h>
 #include <asm/page.h>
+#include <asm/setup.h>
 /* Signature words to ensure LILO loaded us right */
 #define SIG1 0xAA55

@@ -81,7 +82,7 @@ start:
 # This is the setup header, and it must start at %cs:2 (old 0x9020:2)
 .ascii "HdrS" # header signature
-.word 0x0205 # header version number (>= 0x0105)
+.word 0x0206 # header version number (>= 0x0105)
 # or else old loadlin-1.5 will fail)
 realmode_swtch: .word 0, 0 # default_switch, SETUPSEG
 start_sys_seg: .word SYSSEG

@@ -171,6 +172,10 @@ relocatable_kernel: .byte 0
 pad2: .byte 0
 pad3: .word 0
+cmdline_size: .long COMMAND_LINE_SIZE-1 #length of the command line,
+                                        #added with boot protocol
+                                        #version 2.06
 trampoline: call start_of_setup
 .align 16
 # The offset at this point is 0x240

@@ -297,7 +302,24 @@ good_sig:
 loader_panic_mess: .string "Wrong loader, giving up..."
+# check minimum cpuid
+# we do this here because it is the last place we can actually
+# show a user visible error message. Later the video modus
+# might be already messed up.
 loader_ok:
+	call	verify_cpu
+	testl	%eax,%eax
+	jz	cpu_ok
+	lea	cpu_panic_mess,%si
+	call	prtstr
+1:	jmp	1b
+cpu_panic_mess:
+	.asciz	"PANIC: CPU too old for this kernel."
+#include "../kernel/verify_cpu.S"
+cpu_ok:
 # Get memory size (extended mem, kB)
 xorl %eax, %eax
...
 #
 # Automatically generated make config: don't edit
-# Linux kernel version: 2.6.21-rc3
-# Wed Mar 7 15:29:47 2007
+# Linux kernel version: 2.6.21-git3
+# Tue May 1 07:30:51 2007
 #
 CONFIG_X86_32=y
 CONFIG_GENERIC_TIME=y
@@ -108,9 +108,9 @@ CONFIG_DEFAULT_IOSCHED="anticipatory"
 #
 # Processor type and features
 #
-# CONFIG_TICK_ONESHOT is not set
-# CONFIG_NO_HZ is not set
-# CONFIG_HIGH_RES_TIMERS is not set
+CONFIG_TICK_ONESHOT=y
+CONFIG_NO_HZ=y
+CONFIG_HIGH_RES_TIMERS=y
 CONFIG_SMP=y
 # CONFIG_X86_PC is not set
 # CONFIG_X86_ELAN is not set
@@ -146,9 +146,11 @@ CONFIG_MPENTIUMIII=y
 # CONFIG_MGEODE_LX is not set
 # CONFIG_MCYRIXIII is not set
 # CONFIG_MVIAC3_2 is not set
+# CONFIG_MVIAC7 is not set
 CONFIG_X86_GENERIC=y
 CONFIG_X86_CMPXCHG=y
 CONFIG_X86_L1_CACHE_SHIFT=7
+CONFIG_X86_XADD=y
 CONFIG_RWSEM_XCHGADD_ALGORITHM=y
 # CONFIG_ARCH_HAS_ILOG2_U32 is not set
 # CONFIG_ARCH_HAS_ILOG2_U64 is not set
@@ -162,6 +164,8 @@ CONFIG_X86_GOOD_APIC=y
 CONFIG_X86_INTEL_USERCOPY=y
 CONFIG_X86_USE_PPRO_CHECKSUM=y
 CONFIG_X86_TSC=y
+CONFIG_X86_CMOV=y
+CONFIG_X86_MINIMUM_CPU_MODEL=4
 CONFIG_HPET_TIMER=y
 CONFIG_HPET_EMULATE_RTC=y
 CONFIG_NR_CPUS=32
@@ -248,7 +252,6 @@ CONFIG_ACPI_FAN=y
 CONFIG_ACPI_PROCESSOR=y
 CONFIG_ACPI_THERMAL=y
 # CONFIG_ACPI_ASUS is not set
-# CONFIG_ACPI_IBM is not set
 # CONFIG_ACPI_TOSHIBA is not set
 CONFIG_ACPI_BLACKLIST_YEAR=2001
 CONFIG_ACPI_DEBUG=y
@@ -257,10 +260,7 @@ CONFIG_ACPI_POWER=y
 CONFIG_ACPI_SYSTEM=y
 CONFIG_X86_PM_TIMER=y
 # CONFIG_ACPI_CONTAINER is not set
-# CONFIG_ACPI_SBS is not set
-#
-# APM (Advanced Power Management) BIOS Support
-#
 # CONFIG_APM is not set
 #
@@ -277,7 +277,7 @@ CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
 # CONFIG_CPU_FREQ_GOV_POWERSAVE is not set
 CONFIG_CPU_FREQ_GOV_USERSPACE=y
 CONFIG_CPU_FREQ_GOV_ONDEMAND=y
-# CONFIG_CPU_FREQ_GOV_CONSERVATIVE is not set
+CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
 #
 # CPUFreq processor drivers
@@ -349,7 +349,6 @@ CONFIG_NET=y
 #
 # Networking options
 #
-# CONFIG_NETDEBUG is not set
 CONFIG_PACKET=y
 # CONFIG_PACKET_MMAP is not set
 CONFIG_UNIX=y
@@ -388,6 +387,7 @@ CONFIG_DEFAULT_TCP_CONG="cubic"
 CONFIG_IPV6=y
 # CONFIG_IPV6_PRIVACY is not set
 # CONFIG_IPV6_ROUTER_PREF is not set
+# CONFIG_IPV6_OPTIMISTIC_DAD is not set
 # CONFIG_INET6_AH is not set
 # CONFIG_INET6_ESP is not set
 # CONFIG_INET6_IPCOMP is not set
@@ -443,6 +443,13 @@ CONFIG_IPV6_SIT=y
 # CONFIG_HAMRADIO is not set
 # CONFIG_IRDA is not set
 # CONFIG_BT is not set
+# CONFIG_AF_RXRPC is not set
+#
+# Wireless
+#
+# CONFIG_CFG80211 is not set
+# CONFIG_WIRELESS_EXT is not set
 # CONFIG_IEEE80211 is not set
 #
@@ -463,10 +470,6 @@ CONFIG_FW_LOADER=y
 # Connector - unified userspace <-> kernelspace linker
 #
 # CONFIG_CONNECTOR is not set
-#
-# Memory Technology Devices (MTD)
-#
 # CONFIG_MTD is not set
 #
@@ -513,6 +516,7 @@ CONFIG_BLK_DEV_RAM_BLOCKSIZE=1024
 # CONFIG_SGI_IOC4 is not set
 # CONFIG_TIFM_CORE is not set
 # CONFIG_SONY_LAPTOP is not set
+# CONFIG_THINKPAD_ACPI is not set
 #
 # ATA/ATAPI/MFM/RLL support
@@ -548,7 +552,6 @@ CONFIG_BLK_DEV_IDEPCI=y
 # CONFIG_BLK_DEV_RZ1000 is not set
 CONFIG_BLK_DEV_IDEDMA_PCI=y
 # CONFIG_BLK_DEV_IDEDMA_FORCED is not set
-CONFIG_IDEDMA_PCI_AUTO=y
 # CONFIG_IDEDMA_ONLYDISK is not set
 # CONFIG_BLK_DEV_AEC62XX is not set
 # CONFIG_BLK_DEV_ALI15X3 is not set
@@ -580,7 +583,6 @@ CONFIG_BLK_DEV_PIIX=y
 # CONFIG_IDE_ARM is not set
 CONFIG_BLK_DEV_IDEDMA=y
 # CONFIG_IDEDMA_IVB is not set
-CONFIG_IDEDMA_AUTO=y
 # CONFIG_BLK_DEV_HD is not set
 #
@@ -669,6 +671,7 @@ CONFIG_AIC79XX_DEBUG_MASK=0
 # CONFIG_SCSI_DC390T is not set
 # CONFIG_SCSI_NSP32 is not set
 # CONFIG_SCSI_DEBUG is not set
+# CONFIG_SCSI_ESP_CORE is not set
 # CONFIG_SCSI_SRP is not set
 #
@@ -697,6 +700,7 @@ CONFIG_SATA_ACPI=y
 # CONFIG_PATA_AMD is not set
 # CONFIG_PATA_ARTOP is not set
 # CONFIG_PATA_ATIIXP is not set
+# CONFIG_PATA_CMD640_PCI is not set
 # CONFIG_PATA_CMD64X is not set
 # CONFIG_PATA_CS5520 is not set
 # CONFIG_PATA_CS5530 is not set
@@ -762,10 +766,9 @@ CONFIG_IEEE1394=y
 # Subsystem Options
 #
 # CONFIG_IEEE1394_VERBOSEDEBUG is not set
-# CONFIG_IEEE1394_EXTRA_CONFIG_ROMS is not set
 #
-# Device Drivers
+# Controllers
 #
 #
@@ -774,10 +777,11 @@ CONFIG_IEEE1394=y
 CONFIG_IEEE1394_OHCI1394=y
 #
-# Protocol Drivers
+# Protocols
 #
 # CONFIG_IEEE1394_VIDEO1394 is not set
 # CONFIG_IEEE1394_SBP2 is not set
+# CONFIG_IEEE1394_ETH1394_ROM_ENTRY is not set
 # CONFIG_IEEE1394_ETH1394 is not set
 # CONFIG_IEEE1394_DV1394 is not set
 CONFIG_IEEE1394_RAWIO=y
@@ -820,7 +824,9 @@ CONFIG_MII=y
 # CONFIG_HAPPYMEAL is not set
 # CONFIG_SUNGEM is not set
 # CONFIG_CASSINI is not set
-# CONFIG_NET_VENDOR_3COM is not set
+CONFIG_NET_VENDOR_3COM=y
+CONFIG_VORTEX=y
+# CONFIG_TYPHOON is not set
 #
 # Tulip family network device support
@@ -901,9 +907,10 @@ CONFIG_BNX2=y
 # CONFIG_TR is not set
 #
-# Wireless LAN (non-hamradio)
+# Wireless LAN
 #
-# CONFIG_NET_RADIO is not set
+# CONFIG_WLAN_PRE80211 is not set
+# CONFIG_WLAN_80211 is not set
 #
 # Wan interfaces
@@ -917,7 +924,6 @@ CONFIG_BNX2=y
 # CONFIG_SHAPER is not set
 CONFIG_NETCONSOLE=y
 CONFIG_NETPOLL=y
-# CONFIG_NETPOLL_RX is not set
 # CONFIG_NETPOLL_TRAP is not set
 CONFIG_NET_POLL_CONTROLLER=y
@@ -1050,7 +1056,7 @@ CONFIG_MAX_RAW_DEVS=256
 CONFIG_HPET=y
 # CONFIG_HPET_RTC_IRQ is not set
 CONFIG_HPET_MMAP=y
-CONFIG_HANGCHECK_TIMER=y
+# CONFIG_HANGCHECK_TIMER is not set
 #
 # TPM devices
@@ -1141,6 +1147,14 @@ CONFIG_SOUND_ICH=y
 CONFIG_HID=y
 # CONFIG_HID_DEBUG is not set
+#
+# USB Input Devices
+#
+CONFIG_USB_HID=y
+# CONFIG_USB_HIDINPUT_POWERBOOK is not set
+# CONFIG_HID_FF is not set
+# CONFIG_USB_HIDDEV is not set
 #
 # USB support
 #
@@ -1154,6 +1168,7 @@ CONFIG_USB=y
 # Miscellaneous USB options
 #
 CONFIG_USB_DEVICEFS=y
+# CONFIG_USB_DEVICE_CLASS is not set
 # CONFIG_USB_DYNAMIC_MINORS is not set
 # CONFIG_USB_SUSPEND is not set
 # CONFIG_USB_OTG is not set
@@ -1204,10 +1219,6 @@ CONFIG_USB_STORAGE=y
 #
 # USB Input Devices
 #
-CONFIG_USB_HID=y
-# CONFIG_USB_HIDINPUT_POWERBOOK is not set
-# CONFIG_HID_FF is not set
-# CONFIG_USB_HIDDEV is not set
 # CONFIG_USB_AIPTEK is not set
 # CONFIG_USB_WACOM is not set
 # CONFIG_USB_ACECAD is not set
@@ -1528,7 +1539,7 @@ CONFIG_DEBUG_KERNEL=y
 CONFIG_LOG_BUF_SHIFT=18
 CONFIG_DETECT_SOFTLOCKUP=y
 # CONFIG_SCHEDSTATS is not set
-# CONFIG_TIMER_STATS is not set
+CONFIG_TIMER_STATS=y
 # CONFIG_DEBUG_SLAB is not set
 # CONFIG_DEBUG_RT_MUTEXES is not set
 # CONFIG_RT_MUTEX_TESTER is not set
...
@@ -39,12 +39,10 @@ obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
 obj-$(CONFIG_HPET_TIMER) += hpet.o
 obj-$(CONFIG_K8_NB) += k8.o
-obj-$(CONFIG_VMI) += vmi.o vmitime.o
+obj-$(CONFIG_VMI) += vmi.o vmiclock.o
 obj-$(CONFIG_PARAVIRT) += paravirt.o
 obj-y += pcspeaker.o
-EXTRA_AFLAGS := -traditional
 obj-$(CONFIG_SCx200) += scx200.o
 # vsyscall.o contains the vsyscall DSO images as __initdata.
...
@@ -874,7 +874,7 @@ static void __init acpi_process_madt(void)
 			acpi_ioapic = 1;
 			smp_found_config = 1;
-			clustered_apic_check();
+			setup_apic_routing();
 		}
 	}
 	if (error == -EINVAL) {
...
@@ -10,7 +10,6 @@
 #include <asm/pci-direct.h>
 #include <asm/acpi.h>
 #include <asm/apic.h>
-#include <asm/irq.h>
 #ifdef CONFIG_ACPI

@@ -48,24 +47,6 @@ static int __init check_bridge(int vendor, int device)
 	return 0;
 }
-static void check_intel(void)
-{
-	u16 vendor, device;
-	vendor = read_pci_config_16(0, 0, 0, PCI_VENDOR_ID);
-	if (vendor != PCI_VENDOR_ID_INTEL)
-		return;
-	device = read_pci_config_16(0, 0, 0, PCI_DEVICE_ID);
-#ifdef CONFIG_SMP
-	if (device == PCI_DEVICE_ID_INTEL_E7320_MCH ||
-	    device == PCI_DEVICE_ID_INTEL_E7520_MCH ||
-	    device == PCI_DEVICE_ID_INTEL_E7525_MCH)
-		quirk_intel_irqbalance();
-#endif
-}
 void __init check_acpi_pci(void)
 {
 	int num, slot, func;

@@ -77,8 +58,6 @@ void __init check_acpi_pci(void)
 	if (!early_pci_allowed())
 		return;
-	check_intel();
 	/* Poor man's PCI discovery */
 	for (num = 0; num < 32; num++) {
 		for (slot = 0; slot < 32; slot++) {
...
@@ -5,6 +5,7 @@
 #include <asm/alternative.h>
 #include <asm/sections.h>
+static int noreplace_smp = 0;
 static int smp_alt_once = 0;
 static int debug_alternative = 0;

@@ -13,15 +14,33 @@ static int __init bootonly(char *str)
 	smp_alt_once = 1;
 	return 1;
 }
-__setup("smp-alt-boot", bootonly);
 static int __init debug_alt(char *str)
 {
 	debug_alternative = 1;
 	return 1;
 }
+__setup("smp-alt-boot", bootonly);
 __setup("debug-alternative", debug_alt);
+static int __init setup_noreplace_smp(char *str)
+{
+	noreplace_smp = 1;
+	return 1;
+}
+__setup("noreplace-smp", setup_noreplace_smp);
+#ifdef CONFIG_PARAVIRT
+static int noreplace_paravirt = 0;
+static int __init setup_noreplace_paravirt(char *str)
+{
+	noreplace_paravirt = 1;
+	return 1;
+}
+__setup("noreplace-paravirt", setup_noreplace_paravirt);
+#endif
 #define DPRINTK(fmt, args...) if (debug_alternative) \
 	printk(KERN_DEBUG fmt, args)
@@ -132,11 +151,8 @@ static void nop_out(void *insns, unsigned int len)
 }
 extern struct alt_instr __alt_instructions[], __alt_instructions_end[];
-extern struct alt_instr __smp_alt_instructions[], __smp_alt_instructions_end[];
 extern u8 *__smp_locks[], *__smp_locks_end[];
-extern u8 __smp_alt_begin[], __smp_alt_end[];
 /* Replace instructions with better alternatives for this CPU type.
    This runs before SMP is initialized to avoid SMP problems with
    self modifying code. This implies that assymetric systems where

@@ -171,29 +187,6 @@ void apply_alternatives(struct alt_instr *start, struct alt_instr *end)
 #ifdef CONFIG_SMP
-static void alternatives_smp_save(struct alt_instr *start, struct alt_instr *end)
-{
-	struct alt_instr *a;
-	DPRINTK("%s: alt table %p-%p\n", __FUNCTION__, start, end);
-	for (a = start; a < end; a++) {
-		memcpy(a->replacement + a->replacementlen,
-		       a->instr,
-		       a->instrlen);
-	}
-}
-static void alternatives_smp_apply(struct alt_instr *start, struct alt_instr *end)
-{
-	struct alt_instr *a;
-	for (a = start; a < end; a++) {
-		memcpy(a->instr,
-		       a->replacement + a->replacementlen,
-		       a->instrlen);
-	}
-}
 static void alternatives_smp_lock(u8 **start, u8 **end, u8 *text, u8 *text_end)
 {
 	u8 **ptr;
@@ -211,6 +204,9 @@ static void alternatives_smp_unlock(u8 **start, u8 **end, u8 *text, u8 *text_end)
 {
 	u8 **ptr;
+	if (noreplace_smp)
+		return;
 	for (ptr = start; ptr < end; ptr++) {
 		if (*ptr < text)
 			continue;

@@ -245,6 +241,9 @@ void alternatives_smp_module_add(struct module *mod, char *name,
 	struct smp_alt_module *smp;
 	unsigned long flags;
+	if (noreplace_smp)
+		return;
 	if (smp_alt_once) {
 		if (boot_cpu_has(X86_FEATURE_UP))
 			alternatives_smp_unlock(locks, locks_end,

@@ -279,7 +278,7 @@ void alternatives_smp_module_del(struct module *mod)
 	struct smp_alt_module *item;
 	unsigned long flags;
-	if (smp_alt_once)
+	if (smp_alt_once || noreplace_smp)
 		return;
 	spin_lock_irqsave(&smp_alt, flags);

@@ -310,7 +309,7 @@ void alternatives_smp_switch(int smp)
 		return;
 #endif
-	if (smp_alt_once)
+	if (noreplace_smp || smp_alt_once)
 		return;
 	BUG_ON(!smp && (num_online_cpus() > 1));

@@ -319,8 +318,6 @@ void alternatives_smp_switch(int smp)
 		printk(KERN_INFO "SMP alternatives: switching to SMP code\n");
 		clear_bit(X86_FEATURE_UP, boot_cpu_data.x86_capability);
 		clear_bit(X86_FEATURE_UP, cpu_data[0].x86_capability);
-		alternatives_smp_apply(__smp_alt_instructions,
-				       __smp_alt_instructions_end);
 		list_for_each_entry(mod, &smp_alt_modules, next)
 			alternatives_smp_lock(mod->locks, mod->locks_end,
 					      mod->text, mod->text_end);

@@ -328,8 +325,6 @@ void alternatives_smp_switch(int smp)
 		printk(KERN_INFO "SMP alternatives: switching to UP code\n");
 		set_bit(X86_FEATURE_UP, boot_cpu_data.x86_capability);
 		set_bit(X86_FEATURE_UP, cpu_data[0].x86_capability);
-		apply_alternatives(__smp_alt_instructions,
-				   __smp_alt_instructions_end);
 		list_for_each_entry(mod, &smp_alt_modules, next)
 			alternatives_smp_unlock(mod->locks, mod->locks_end,
 					       mod->text, mod->text_end);
@@ -340,36 +335,31 @@ void alternatives_smp_switch(int smp)
 #endif
 #ifdef CONFIG_PARAVIRT
-void apply_paravirt(struct paravirt_patch *start, struct paravirt_patch *end)
+void apply_paravirt(struct paravirt_patch_site *start,
+		    struct paravirt_patch_site *end)
 {
-	struct paravirt_patch *p;
+	struct paravirt_patch_site *p;
+	if (noreplace_paravirt)
+		return;
 	for (p = start; p < end; p++) {
 		unsigned int used;
 		used = paravirt_ops.patch(p->instrtype, p->clobbers, p->instr,
 					  p->len);
-#ifdef CONFIG_DEBUG_PARAVIRT
-		{
-			int i;
-			/* Deliberately clobber regs using "not %reg" to find bugs. */
-			for (i = 0; i < 3; i++) {
-				if (p->len - used >= 2 && (p->clobbers & (1 << i))) {
-					memcpy(p->instr + used, "\xf7\xd0", 2);
-					p->instr[used+1] |= i;
-					used += 2;
-				}
-			}
-		}
-#endif
+		BUG_ON(used > p->len);
 		/* Pad the rest with nops */
 		nop_out(p->instr + used, p->len - used);
 	}
-	/* Sync to be conservative, in case we patched following instructions */
+	/* Sync to be conservative, in case we patched following
+	 * instructions */
 	sync_core();
 }
-extern struct paravirt_patch __start_parainstructions[],
+extern struct paravirt_patch_site __start_parainstructions[],
 	__stop_parainstructions[];
 #endif /* CONFIG_PARAVIRT */

@@ -396,23 +386,19 @@ void __init alternative_instructions(void)
 		printk(KERN_INFO "SMP alternatives: switching to UP code\n");
 		set_bit(X86_FEATURE_UP, boot_cpu_data.x86_capability);
 		set_bit(X86_FEATURE_UP, cpu_data[0].x86_capability);
-		apply_alternatives(__smp_alt_instructions,
-				   __smp_alt_instructions_end);
 		alternatives_smp_unlock(__smp_locks, __smp_locks_end,
 					_text, _etext);
 		}
 		free_init_pages("SMP alternatives",
-				(unsigned long)__smp_alt_begin,
-				(unsigned long)__smp_alt_end);
+				__pa_symbol(&__smp_locks),
+				__pa_symbol(&__smp_locks_end));
 	} else {
-		alternatives_smp_save(__smp_alt_instructions,
-				      __smp_alt_instructions_end);
 		alternatives_smp_module_add(NULL, "core kernel",
 					    __smp_locks, __smp_locks_end,
 					    _text, _etext);
 		alternatives_smp_switch(0);
 	}
 #endif
-	apply_paravirt(__start_parainstructions, __stop_parainstructions);
+	apply_paravirt(__parainstructions, __parainstructions_end);
 	local_irq_restore(flags);
 }
@@ -129,6 +129,28 @@ static int modern_apic(void)
 	return lapic_get_version() >= 0x14;
 }
+void apic_wait_icr_idle(void)
+{
+	while (apic_read(APIC_ICR) & APIC_ICR_BUSY)
+		cpu_relax();
+}
+unsigned long safe_apic_wait_icr_idle(void)
+{
+	unsigned long send_status;
+	int timeout;
+	timeout = 0;
+	do {
+		send_status = apic_read(APIC_ICR) & APIC_ICR_BUSY;
+		if (!send_status)
+			break;
+		udelay(100);
+	} while (timeout++ < 1000);
+	return send_status;
+}
 /**
  * enable_NMI_through_LVT0 - enable NMI through local vector table 0
  */
...
@@ -233,11 +233,10 @@
 #include <asm/desc.h>
 #include <asm/i8253.h>
 #include <asm/paravirt.h>
+#include <asm/reboot.h>
 #include "io_ports.h"
-extern void machine_real_restart(unsigned char *, int);
 #if defined(CONFIG_APM_DISPLAY_BLANK) && defined(CONFIG_VT)
 extern int (*console_blank_hook)(int);
 #endif

@@ -384,13 +383,6 @@ static int ignore_sys_suspend;
 static int ignore_normal_resume;
 static int bounce_interval __read_mostly = DEFAULT_BOUNCE_INTERVAL;
-#ifdef CONFIG_APM_RTC_IS_GMT
-# define clock_cmos_diff 0
-# define got_clock_diff 1
-#else
-static long clock_cmos_diff;
-static int got_clock_diff;
-#endif
 static int debug __read_mostly;
 static int smp __read_mostly;
 static int apm_disabled = -1;
...
@@ -11,11 +11,11 @@
 #include <linux/suspend.h>
 #include <asm/ucontext.h>
 #include "sigframe.h"
-#include <asm/pgtable.h>
 #include <asm/fixmap.h>
 #include <asm/processor.h>
 #include <asm/thread_info.h>
 #include <asm/elf.h>
+#include <asm/pda.h>
 #define DEFINE(sym, val) \
 	asm volatile("\n->" #sym " %0 " #val : : "i" (val))

@@ -25,6 +25,9 @@
 #define OFFSET(sym, str, mem) \
 	DEFINE(sym, offsetof(struct str, mem));
+/* workaround for a warning with -Wmissing-prototypes */
+void foo(void);
 void foo(void)
 {
 	OFFSET(SIGCONTEXT_eax, sigcontext, eax);

@@ -90,17 +93,18 @@ void foo(void)
 	OFFSET(pbe_next, pbe, next);
 	/* Offset from the sysenter stack to tss.esp0 */
-	DEFINE(TSS_sysenter_esp0, offsetof(struct tss_struct, esp0) -
+	DEFINE(TSS_sysenter_esp0, offsetof(struct tss_struct, x86_tss.esp0) -
 	       sizeof(struct tss_struct));
 	DEFINE(PAGE_SIZE_asm, PAGE_SIZE);
-	DEFINE(VDSO_PRELINK, VDSO_PRELINK);
-	OFFSET(crypto_tfm_ctx_offset, crypto_tfm, __crt_ctx);
-	BLANK();
-	OFFSET(PDA_cpu, i386_pda, cpu_number);
-	OFFSET(PDA_pcurrent, i386_pda, pcurrent);
+	DEFINE(PAGE_SHIFT_asm, PAGE_SHIFT);
+	DEFINE(PTRS_PER_PTE, PTRS_PER_PTE);
+	DEFINE(PTRS_PER_PMD, PTRS_PER_PMD);
+	DEFINE(PTRS_PER_PGD, PTRS_PER_PGD);
+	DEFINE(VDSO_PRELINK_asm, VDSO_PRELINK);
+	OFFSET(crypto_tfm_ctx_offset, crypto_tfm, __crt_ctx);
 #ifdef CONFIG_PARAVIRT
 	BLANK();
...
@@ -2,7 +2,7 @@
 # Makefile for x86-compatible CPU details and quirks
 #
-obj-y := common.o proc.o
+obj-y := common.o proc.o bugs.o
 obj-y += amd.o
 obj-y += cyrix.o

@@ -17,3 +17,5 @@ obj-$(CONFIG_X86_MCE) += mcheck/
 obj-$(CONFIG_MTRR) += mtrr/
 obj-$(CONFIG_CPU_FREQ) += cpufreq/
+obj-$(CONFIG_X86_LOCAL_APIC) += perfctr-watchdog.o
@@ -53,6 +53,8 @@ static __cpuinit int amd_apic_timer_broken(void)
 	return 0;
 }
+int force_mwait __cpuinitdata;
 static void __cpuinit init_amd(struct cpuinfo_x86 *c)
 {
 	u32 l, h;

@@ -275,6 +277,9 @@ static void __cpuinit init_amd(struct cpuinfo_x86 *c)
 	if (amd_apic_timer_broken())
 		set_bit(X86_FEATURE_LAPIC_TIMER_BROKEN, c->x86_capability);
+	if (c->x86 == 0x10 && !force_mwait)
+		clear_bit(X86_FEATURE_MWAIT, c->x86_capability);
 }
 static unsigned int __cpuinit amd_size_cache(struct cpuinfo_x86 * c, unsigned int size)

@@ -314,13 +319,3 @@ int __init amd_init_cpu(void)
 	cpu_devs[X86_VENDOR_AMD] = &amd_cpu_dev;
 	return 0;
 }
-//early_arch_initcall(amd_init_cpu);
-static int __init amd_exit_cpu(void)
-{
-	cpu_devs[X86_VENDOR_AMD] = NULL;
-	return 0;
-}
-late_initcall(amd_exit_cpu);
/*
* arch/i386/cpu/bugs.c
*
* Copyright (C) 1994 Linus Torvalds
*
* Cyrix stuff, June 1998 by:
* - Rafael R. Reilova (moved everything from head.S),
* <rreilova@ececs.uc.edu>
* - Channing Corn (tests & fixes),
* - Andrew D. Balsa (code cleanup).
*/
#include <linux/init.h>
#include <linux/utsname.h>
#include <asm/processor.h>
#include <asm/i387.h>
#include <asm/msr.h>
#include <asm/paravirt.h>
#include <asm/alternative.h>
static int __init no_halt(char *s)
{
boot_cpu_data.hlt_works_ok = 0;
return 1;
}
__setup("no-hlt", no_halt);
static int __init mca_pentium(char *s)
{
mca_pentium_flag = 1;
return 1;
}
__setup("mca-pentium", mca_pentium);
static int __init no_387(char *s)
{
boot_cpu_data.hard_math = 0;
write_cr0(0xE | read_cr0());
return 1;
}
__setup("no387", no_387);
static double __initdata x = 4195835.0;
static double __initdata y = 3145727.0;
/*
* This used to check for exceptions..
* However, it turns out that to support that,
* the XMM trap handlers basically had to
* be buggy. So let's have a correct XMM trap
* handler, and forget about printing out
* some status at boot.
*
* We should really only care about bugs here
* anyway. Not features.
*/
static void __init check_fpu(void)
{
if (!boot_cpu_data.hard_math) {
#ifndef CONFIG_MATH_EMULATION
printk(KERN_EMERG "No coprocessor found and no math emulation present.\n");
printk(KERN_EMERG "Giving up.\n");
for (;;) ;
#endif
return;
}
/* trap_init() enabled FXSR and company _before_ testing for FP problems here. */
/* Test for the divl bug.. */
__asm__("fninit\n\t"
"fldl %1\n\t"
"fdivl %2\n\t"
"fmull %2\n\t"
"fldl %1\n\t"
"fsubp %%st,%%st(1)\n\t"
"fistpl %0\n\t"
"fwait\n\t"
"fninit"
: "=m" (*&boot_cpu_data.fdiv_bug)
: "m" (*&x), "m" (*&y));
if (boot_cpu_data.fdiv_bug)
printk("Hmm, FPU with FDIV bug.\n");
}
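The FDIV probe above computes x - (x/y)*y with two magic constants chosen so that a buggy Pentium FDIV unit produces a visibly wrong quotient. A hypothetical userspace sketch of the same idea (plain C doubles instead of the kernel's inline x87 assembly; the function name is illustrative, not kernel code):

```c
#include <assert.h>

/* Userspace sketch of the kernel's FDIV-bug probe: on a correct FPU
 * the difference rounds to 0; the buggy Pentium divider returns a
 * quotient wrong enough that the rounded difference is nonzero. */
static int fdiv_bug_present(void)
{
	volatile double x = 4195835.0;
	volatile double y = 3145727.0;
	double diff = x - (x / y) * y;	/* ~0 on a correct FPU */

	/* round to nearest integer, as the kernel's fistpl does */
	return (int)(diff < 0 ? diff - 0.5 : diff + 0.5) != 0;
}
```

On any correct FPU the rounded difference is 0, which is why the kernel only prints a message when `fdiv_bug` ends up set.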
static void __init check_hlt(void)
{
if (paravirt_enabled())
return;
printk(KERN_INFO "Checking 'hlt' instruction... ");
if (!boot_cpu_data.hlt_works_ok) {
printk("disabled\n");
return;
}
halt();
halt();
halt();
halt();
printk("OK.\n");
}
/*
* Most 386 processors have a bug where a POPAD can lock the
* machine even from user space.
*/
static void __init check_popad(void)
{
#ifndef CONFIG_X86_POPAD_OK
int res, inp = (int) &res;
printk(KERN_INFO "Checking for popad bug... ");
__asm__ __volatile__(
"movl $12345678,%%eax; movl $0,%%edi; pusha; popa; movl (%%edx,%%edi),%%ecx "
: "=&a" (res)
: "d" (inp)
: "ecx", "edi" );
/* If this fails, it means that any user program may lock the CPU hard. Too bad. */
if (res != 12345678) printk( "Buggy.\n" );
else printk( "OK.\n" );
#endif
}
/*
* Check whether we are able to run this kernel safely on SMP.
*
* - In order to run on a i386, we need to be compiled for i386
*   (due to lack of "invlpg" and working WP on a i386)
* - In order to run on anything without a TSC, we need to be
* compiled for a i486.
* - In order to support the local APIC on a buggy Pentium machine,
* we need to be compiled with CONFIG_X86_GOOD_APIC disabled,
* which happens implicitly if compiled for a Pentium or lower
* (unless an advanced selection of CPU features is used) as an
* otherwise config implies a properly working local APIC without
* the need to do extra reads from the APIC.
*/
static void __init check_config(void)
{
/*
* We'd better not be a i386 if we're configured to use some
* i486+ only features! (WP works in supervisor mode and the
* new "invlpg" and "bswap" instructions)
*/
#if defined(CONFIG_X86_WP_WORKS_OK) || defined(CONFIG_X86_INVLPG) || defined(CONFIG_X86_BSWAP)
if (boot_cpu_data.x86 == 3)
panic("Kernel requires i486+ for 'invlpg' and other features");
#endif
/*
* If we configured ourselves for a TSC, we'd better have one!
*/
#ifdef CONFIG_X86_TSC
if (!cpu_has_tsc && !tsc_disable)
panic("Kernel compiled for Pentium+, requires TSC feature!");
#endif
/*
* If we were told we had a good local APIC, check for buggy Pentia,
* i.e. all B steppings and the C2 stepping of P54C when using their
* integrated APIC (see 11AP erratum in "Pentium Processor
* Specification Update").
*/
#if defined(CONFIG_X86_LOCAL_APIC) && defined(CONFIG_X86_GOOD_APIC)
if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL
&& cpu_has_apic
&& boot_cpu_data.x86 == 5
&& boot_cpu_data.x86_model == 2
&& (boot_cpu_data.x86_mask < 6 || boot_cpu_data.x86_mask == 11))
panic("Kernel compiled for PMMX+, assumes a local APIC without the read-before-write bug!");
#endif
}
void __init check_bugs(void)
{
identify_boot_cpu();
#ifndef CONFIG_SMP
printk("CPU: ");
print_cpu_info(&boot_cpu_data);
#endif
check_config();
check_fpu();
check_hlt();
check_popad();
init_utsname()->machine[1] = '0' + (boot_cpu_data.x86 > 6 ? 6 : boot_cpu_data.x86);
alternative_instructions();
}
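The `init_utsname()->machine[1]` line in check_bugs() above patches the family digit of the "i386"/"i486"/... machine string in place, capping it at 6. A small sketch of that computation (helper name is illustrative):

```c
#include <assert.h>

/* Sketch of check_bugs()'s utsname patch: the second character of
 * "iN86" is the CPU family digit, capped at 6 so that family 15
 * (Pentium 4) still reports "i686" rather than a bogus "i:86". */
static char uts_machine_digit(int x86_family)
{
	return (char)('0' + (x86_family > 6 ? 6 : x86_family));
}
```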
@@ -469,13 +469,3 @@ int __init centaur_init_cpu(void)
 	cpu_devs[X86_VENDOR_CENTAUR] = &centaur_cpu_dev;
 	return 0;
 }
-
-//early_arch_initcall(centaur_init_cpu);
-
-static int __init centaur_exit_cpu(void)
-{
-	cpu_devs[X86_VENDOR_CENTAUR] = NULL;
-	return 0;
-}
-late_initcall(centaur_exit_cpu);
@@ -18,15 +18,37 @@
 #include <asm/apic.h>
 #include <mach_apic.h>
 #endif
-#include <asm/pda.h>
 #include "cpu.h"
 
-DEFINE_PER_CPU(struct Xgt_desc_struct, cpu_gdt_descr);
-EXPORT_PER_CPU_SYMBOL(cpu_gdt_descr);
-
-struct i386_pda *_cpu_pda[NR_CPUS] __read_mostly;
-EXPORT_SYMBOL(_cpu_pda);
+DEFINE_PER_CPU(struct gdt_page, gdt_page) = { .gdt = {
+	[GDT_ENTRY_KERNEL_CS] = { 0x0000ffff, 0x00cf9a00 },
+	[GDT_ENTRY_KERNEL_DS] = { 0x0000ffff, 0x00cf9200 },
+	[GDT_ENTRY_DEFAULT_USER_CS] = { 0x0000ffff, 0x00cffa00 },
+	[GDT_ENTRY_DEFAULT_USER_DS] = { 0x0000ffff, 0x00cff200 },
+	/*
+	 * Segments used for calling PnP BIOS have byte granularity.
+	 * Their code and data segments have fixed 64k limits;
+	 * the transfer segment sizes are set at run time.
+	 */
+	[GDT_ENTRY_PNPBIOS_CS32] = { 0x0000ffff, 0x00409a00 },	/* 32-bit code */
+	[GDT_ENTRY_PNPBIOS_CS16] = { 0x0000ffff, 0x00009a00 },	/* 16-bit code */
+	[GDT_ENTRY_PNPBIOS_DS] = { 0x0000ffff, 0x00009200 },	/* 16-bit data */
+	[GDT_ENTRY_PNPBIOS_TS1] = { 0x00000000, 0x00009200 },	/* 16-bit data */
+	[GDT_ENTRY_PNPBIOS_TS2] = { 0x00000000, 0x00009200 },	/* 16-bit data */
+	/*
+	 * The APM segments have byte granularity and their bases
+	 * are set at run time.  All have 64k limits.
+	 */
+	[GDT_ENTRY_APMBIOS_BASE] = { 0x0000ffff, 0x00409a00 },	/* 32-bit code */
+	[GDT_ENTRY_APMBIOS_BASE+1] = { 0x0000ffff, 0x00009a00 },	/* 16-bit code */
+	[GDT_ENTRY_APMBIOS_BASE+2] = { 0x0000ffff, 0x00409200 },	/* data */
+
+	[GDT_ENTRY_ESPFIX_SS] = { 0x00000000, 0x00c09200 },
+	[GDT_ENTRY_PERCPU] = { 0x00000000, 0x00000000 },
+} };
+EXPORT_PER_CPU_SYMBOL_GPL(gdt_page);
 
 static int cachesize_override __cpuinitdata = -1;
 static int disable_x86_fxsr __cpuinitdata;
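Each `{ a, b }` pair in the gdt_page initializer above packs an x86 segment descriptor's base, limit and access byte into two 32-bit words. A standalone sketch of decoding them (struct and function names are illustrative, not kernel API):

```c
#include <assert.h>
#include <stdint.h>

/* Decode the two 32-bit descriptor words used in the gdt_page table,
 * following the x86 segment-descriptor layout. */
struct seg_info {
	uint32_t base;		/* segment base address */
	uint32_t limit;		/* raw 20-bit limit field */
	uint8_t  access;	/* access byte: P/DPL/S/type */
	int      gran4k;	/* G bit: limit counted in 4k pages */
};

static struct seg_info decode_gdt_entry(uint32_t a, uint32_t b)
{
	struct seg_info s;

	/* base is split across a[31:16], b[7:0] and b[31:24] */
	s.base   = (a >> 16) | ((b & 0xff) << 16) | (b & 0xff000000);
	/* limit is split across a[15:0] and b[19:16] */
	s.limit  = (a & 0xffff) | (b & 0x000f0000);
	s.access = (b >> 8) & 0xff;
	s.gran4k = (b >> 23) & 1;
	return s;
}
```

For the kernel code segment `{ 0x0000ffff, 0x00cf9a00 }` this decodes to base 0, limit 0xfffff with 4k granularity (i.e. a flat 4 GB segment) and access byte 0x9a (present, ring 0, execute/read code).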
@@ -368,7 +390,7 @@ __setup("serialnumber", x86_serial_nr_setup);
 /*
  * This does the hard work of actually picking apart the CPU stuff...
  */
-void __cpuinit identify_cpu(struct cpuinfo_x86 *c)
+static void __cpuinit identify_cpu(struct cpuinfo_x86 *c)
 {
 	int i;
@@ -479,14 +501,21 @@ void __cpuinit identify_cpu(struct cpuinfo_x86 *c)
 	/* Init Machine Check Exception if available. */
 	mcheck_init(c);
+}
 
-	if (c == &boot_cpu_data)
-		sysenter_setup();
-	enable_sep_cpu();
-	if (c == &boot_cpu_data)
-		mtrr_bp_init();
-	else
-		mtrr_ap_init();
+void __init identify_boot_cpu(void)
+{
+	identify_cpu(&boot_cpu_data);
+	sysenter_setup();
+	enable_sep_cpu();
+	mtrr_bp_init();
+}
+
+void __cpuinit identify_secondary_cpu(struct cpuinfo_x86 *c)
+{
+	BUG_ON(c == &boot_cpu_data);
+	identify_cpu(c);
+	enable_sep_cpu();
+	mtrr_ap_init();
 }
@@ -601,129 +630,36 @@ void __init early_cpu_init(void)
 #endif
 }
 
-/* Make sure %gs is initialized properly in idle threads */
+/* Make sure %fs is initialized properly in idle threads */
 struct pt_regs * __devinit idle_regs(struct pt_regs *regs)
 {
 	memset(regs, 0, sizeof(struct pt_regs));
-	regs->xfs = __KERNEL_PDA;
+	regs->xfs = __KERNEL_PERCPU;
 	return regs;
 }
-static __cpuinit int alloc_gdt(int cpu)
-{
-	struct Xgt_desc_struct *cpu_gdt_descr = &per_cpu(cpu_gdt_descr, cpu);
-	struct desc_struct *gdt;
-	struct i386_pda *pda;
-
-	gdt = (struct desc_struct *)cpu_gdt_descr->address;
-	pda = cpu_pda(cpu);
-
-	/*
-	 * This is a horrible hack to allocate the GDT.  The problem
-	 * is that cpu_init() is called really early for the boot CPU
-	 * (and hence needs bootmem) but much later for the secondary
-	 * CPUs, when bootmem will have gone away
-	 */
-	if (NODE_DATA(0)->bdata->node_bootmem_map) {
-		BUG_ON(gdt != NULL || pda != NULL);
-
-		gdt = alloc_bootmem_pages(PAGE_SIZE);
-		pda = alloc_bootmem(sizeof(*pda));
-		/* alloc_bootmem(_pages) panics on failure, so no check */
-
-		memset(gdt, 0, PAGE_SIZE);
-		memset(pda, 0, sizeof(*pda));
-	} else {
-		/* GDT and PDA might already have been allocated if
-		   this is a CPU hotplug re-insertion. */
-		if (gdt == NULL)
-			gdt = (struct desc_struct *)get_zeroed_page(GFP_KERNEL);
-
-		if (pda == NULL)
-			pda = kmalloc_node(sizeof(*pda), GFP_KERNEL, cpu_to_node(cpu));
-
-		if (unlikely(!gdt || !pda)) {
-			free_pages((unsigned long)gdt, 0);
-			kfree(pda);
-			return 0;
-		}
-	}
-
-	cpu_gdt_descr->address = (unsigned long)gdt;
-	cpu_pda(cpu) = pda;
-	return 1;
-}
+/* Current gdt points %fs at the "master" per-cpu area: after this,
+ * it's on the real one. */
+void switch_to_new_gdt(void)
+{
+	struct Xgt_desc_struct gdt_descr;
+
+	gdt_descr.address = (long)get_cpu_gdt_table(smp_processor_id());
+	gdt_descr.size = GDT_SIZE - 1;
+	load_gdt(&gdt_descr);
+	asm("mov %0, %%fs" : : "r" (__KERNEL_PERCPU) : "memory");
+}
-/* Initial PDA used by boot CPU */
-struct i386_pda boot_pda = {
-	._pda = &boot_pda,
-	.cpu_number = 0,
-	.pcurrent = &init_task,
-};
-
-static inline void set_kernel_fs(void)
-{
-	/* Set %fs for this CPU's PDA.  Memory clobber is to create a
-	   barrier with respect to any PDA operations, so the compiler
-	   doesn't move any before here. */
-	asm volatile ("mov %0, %%fs" : : "r" (__KERNEL_PDA) : "memory");
-}
-
-/* Initialize the CPU's GDT and PDA.  The boot CPU does this for
-   itself, but secondaries find this done for them. */
-__cpuinit int init_gdt(int cpu, struct task_struct *idle)
-{
-	struct Xgt_desc_struct *cpu_gdt_descr = &per_cpu(cpu_gdt_descr, cpu);
-	struct desc_struct *gdt;
-	struct i386_pda *pda;
-
-	/* For non-boot CPUs, the GDT and PDA should already have been
-	   allocated. */
-	if (!alloc_gdt(cpu)) {
-		printk(KERN_CRIT "CPU%d failed to allocate GDT or PDA\n", cpu);
-		return 0;
-	}
-
-	gdt = (struct desc_struct *)cpu_gdt_descr->address;
-	pda = cpu_pda(cpu);
-
-	BUG_ON(gdt == NULL || pda == NULL);
-
-	/*
-	 * Initialize the per-CPU GDT with the boot GDT,
-	 * and set up the GDT descriptor:
-	 */
-	memcpy(gdt, cpu_gdt_table, GDT_SIZE);
-	cpu_gdt_descr->size = GDT_SIZE - 1;
-
-	pack_descriptor((u32 *)&gdt[GDT_ENTRY_PDA].a,
-			(u32 *)&gdt[GDT_ENTRY_PDA].b,
-			(unsigned long)pda, sizeof(*pda) - 1,
-			0x80 | DESCTYPE_S | 0x2, 0); /* present read-write data segment */
-
-	memset(pda, 0, sizeof(*pda));
-	pda->_pda = pda;
-	pda->cpu_number = cpu;
-	pda->pcurrent = idle;
-
-	return 1;
-}
-
-void __cpuinit cpu_set_gdt(int cpu)
-{
-	struct Xgt_desc_struct *cpu_gdt_descr = &per_cpu(cpu_gdt_descr, cpu);
-
-	/* Reinit these anyway, even if they've already been done (on
-	   the boot CPU, this will transition from the boot gdt+pda to
-	   the real ones). */
-	load_gdt(cpu_gdt_descr);
-	set_kernel_fs();
-}
-
-/* Common CPU init for both boot and secondary CPUs */
-static void __cpuinit _cpu_init(int cpu, struct task_struct *curr)
+/*
+ * cpu_init() initializes state that is per-CPU. Some data is already
+ * initialized (naturally) in the bootstrap process, such as the GDT
+ * and IDT. We reload them nevertheless, this function acts as a
+ * 'CPU state barrier', nothing should get across.
+ */
+void __cpuinit cpu_init(void)
 {
+	int cpu = smp_processor_id();
+	struct task_struct *curr = current;
 	struct tss_struct * t = &per_cpu(init_tss, cpu);
 	struct thread_struct *thread = &curr->thread;
@@ -744,6 +680,7 @@ static void __cpuinit _cpu_init(int cpu, struct task_struct *curr)
 	}
 
 	load_idt(&idt_descr);
+	switch_to_new_gdt();
 
 	/*
 	 * Set up and load the per-CPU TSS and LDT
@@ -783,38 +720,6 @@ static void __cpuinit _cpu_init(int cpu, struct task_struct *curr)
 	mxcsr_feature_mask_init();
 }
-/* Entrypoint to initialize secondary CPU */
-void __cpuinit secondary_cpu_init(void)
-{
-	int cpu = smp_processor_id();
-	struct task_struct *curr = current;
-
-	_cpu_init(cpu, curr);
-}
-
-/*
- * cpu_init() initializes state that is per-CPU. Some data is already
- * initialized (naturally) in the bootstrap process, such as the GDT
- * and IDT. We reload them nevertheless, this function acts as a
- * 'CPU state barrier', nothing should get across.
- */
-void __cpuinit cpu_init(void)
-{
-	int cpu = smp_processor_id();
-	struct task_struct *curr = current;
-
-	/* Set up the real GDT and PDA, so we can transition from the
-	   boot versions. */
-	if (!init_gdt(cpu, curr)) {
-		/* failed to allocate something; not much we can do... */
-		for (;;)
-			local_irq_enable();
-	}
-
-	cpu_set_gdt(cpu);
-	_cpu_init(cpu, curr);
-}
-
 #ifdef CONFIG_HOTPLUG_CPU
 void __cpuinit cpu_uninit(void)
 {
......
@@ -279,7 +279,7 @@ static void __cpuinit init_cyrix(struct cpuinfo_x86 *c)
 		 */
 		if (vendor == PCI_VENDOR_ID_CYRIX &&
 			(device == PCI_DEVICE_ID_CYRIX_5510 || device == PCI_DEVICE_ID_CYRIX_5520))
-			pit_latch_buggy = 1;
+			mark_tsc_unstable("cyrix 5510/5520 detected");
 	}
 #endif
 	c->x86_cache_size=16;	/* Yep 16K integrated cache thats it */
@@ -448,16 +448,6 @@ int __init cyrix_init_cpu(void)
 	return 0;
 }
-
-//early_arch_initcall(cyrix_init_cpu);
-
-static int __init cyrix_exit_cpu(void)
-{
-	cpu_devs[X86_VENDOR_CYRIX] = NULL;
-	return 0;
-}
-late_initcall(cyrix_exit_cpu);
 
 static struct cpu_dev nsc_cpu_dev __cpuinitdata = {
 	.c_vendor	= "NSC",
 	.c_ident 	= { "Geode by NSC" },
@@ -470,12 +460,3 @@ int __init nsc_init_cpu(void)
 	return 0;
 }
-
-//early_arch_initcall(nsc_init_cpu);
-
-static int __init nsc_exit_cpu(void)
-{
-	cpu_devs[X86_VENDOR_NSC] = NULL;
-	return 0;
-}
-late_initcall(nsc_exit_cpu);
@@ -188,8 +188,10 @@ static void __cpuinit init_intel(struct cpuinfo_x86 *c)
 	}
 #endif
 
-	if (c->x86 == 15)
+	if (c->x86 == 15) {
 		set_bit(X86_FEATURE_P4, c->x86_capability);
+		set_bit(X86_FEATURE_SYNC_RDTSC, c->x86_capability);
+	}
 	if (c->x86 == 6)
 		set_bit(X86_FEATURE_P3, c->x86_capability);
 	if ((c->x86 == 0xf && c->x86_model >= 0x03) ||
......
@@ -75,6 +75,9 @@ void amd_mcheck_init(struct cpuinfo_x86 *c)
 	machine_check_vector = k7_machine_check;
 	wmb();
 
+	if (!cpu_has(c, X86_FEATURE_MCE))
+		return;
+
 	printk (KERN_INFO "Intel machine check architecture supported.\n");
 	rdmsr (MSR_IA32_MCG_CAP, l, h);
 	if (l & (1<<8))	/* Control register present ? */
@@ -82,9 +85,13 @@ void amd_mcheck_init(struct cpuinfo_x86 *c)
 	nr_mce_banks = l & 0xff;
 
 	/* Clear status for MC index 0 separately, we don't touch CTL,
-	 * as some Athlons cause spurious MCEs when its enabled. */
-	wrmsr (MSR_IA32_MC0_STATUS, 0x0, 0x0);
-	for (i=1; i<nr_mce_banks; i++) {
+	 * as some K7 Athlons cause spurious MCEs when its enabled. */
+	if (boot_cpu_data.x86 == 6) {
+		wrmsr (MSR_IA32_MC0_STATUS, 0x0, 0x0);
+		i = 1;
+	} else
+		i = 0;
+	for (; i<nr_mce_banks; i++) {
 		wrmsr (MSR_IA32_MC0_CTL+4*i, 0xffffffff, 0xffffffff);
 		wrmsr (MSR_IA32_MC0_STATUS+4*i, 0x0, 0x0);
 	}
......
@@ -38,7 +38,6 @@ void mcheck_init(struct cpuinfo_x86 *c)
 	switch (c->x86_vendor) {
 		case X86_VENDOR_AMD:
-			if (c->x86==6 || c->x86==15)
-				amd_mcheck_init(c);
+			amd_mcheck_init(c);
 			break;
......
@@ -124,13 +124,10 @@ static void intel_init_thermal(struct cpuinfo_x86 *c)
 /* P4/Xeon Extended MCE MSR retrieval, return 0 if unsupported */
-static inline int intel_get_extended_msrs(struct intel_mce_extended_msrs *r)
+static inline void intel_get_extended_msrs(struct intel_mce_extended_msrs *r)
 {
 	u32 h;
 
-	if (mce_num_extended_msrs == 0)
-		goto done;
-
 	rdmsr (MSR_IA32_MCG_EAX, r->eax, h);
 	rdmsr (MSR_IA32_MCG_EBX, r->ebx, h);
 	rdmsr (MSR_IA32_MCG_ECX, r->ecx, h);
@@ -141,12 +138,6 @@ static inline int intel_get_extended_msrs(struct intel_mce_extended_msrs *r)
 	rdmsr (MSR_IA32_MCG_ESP, r->esp, h);
 	rdmsr (MSR_IA32_MCG_EFLAGS, r->eflags, h);
 	rdmsr (MSR_IA32_MCG_EIP, r->eip, h);
-
-	/* can we rely on kmalloc to do a dynamic
-	 * allocation for the reserved registers?
-	 */
-done:
-	return mce_num_extended_msrs;
 }
 
 static fastcall void intel_machine_check(struct pt_regs * regs, long error_code)
@@ -155,7 +146,6 @@ static fastcall void intel_machine_check(struct pt_regs * regs, long error_code)
 	u32 alow, ahigh, high, low;
 	u32 mcgstl, mcgsth;
 	int i;
-	struct intel_mce_extended_msrs dbg;
 
 	rdmsr (MSR_IA32_MCG_STATUS, mcgstl, mcgsth);
 	if (mcgstl & (1<<0))	/* Recoverable ? */
@@ -164,7 +154,9 @@ static fastcall void intel_machine_check(struct pt_regs * regs, long error_code)
 	printk (KERN_EMERG "CPU %d: Machine Check Exception: %08x%08x\n",
 		smp_processor_id(), mcgsth, mcgstl);
 
-	if (intel_get_extended_msrs(&dbg)) {
+	if (mce_num_extended_msrs > 0) {
+		struct intel_mce_extended_msrs dbg;
+
+		intel_get_extended_msrs(&dbg);
 		printk (KERN_DEBUG "CPU %d: EIP: %08x EFLAGS: %08x\n",
 			smp_processor_id(), dbg.eip, dbg.eflags);
 		printk (KERN_DEBUG "\teax: %08x ebx: %08x ecx: %08x edx: %08x\n",
......
@@ -20,13 +20,25 @@ struct mtrr_state {
 	mtrr_type def_type;
 };
 
+struct fixed_range_block {
+	int base_msr; /* start address of an MTRR block */
+	int ranges;   /* number of MTRRs in this block  */
+};
+
+static struct fixed_range_block fixed_range_blocks[] = {
+	{ MTRRfix64K_00000_MSR, 1 }, /* one  64k MTRR  */
+	{ MTRRfix16K_80000_MSR, 2 }, /* two  16k MTRRs */
+	{ MTRRfix4K_C0000_MSR,  8 }, /* eight 4k MTRRs */
+	{}
+};
+
 static unsigned long smp_changes_mask;
 static struct mtrr_state mtrr_state = {};
 
 #undef MODULE_PARAM_PREFIX
 #define MODULE_PARAM_PREFIX "mtrr."
 
-static __initdata int mtrr_show;
+static int mtrr_show;
 module_param_named(show, mtrr_show, bool, 0);
 
 /* Get the MSR pair relating to a var range */
@@ -37,7 +49,7 @@ get_mtrr_var_range(unsigned int index, struct mtrr_var_range *vr)
 	rdmsr(MTRRphysMask_MSR(index), vr->mask_lo, vr->mask_hi);
 }
 
-static void __init
+static void
 get_fixed_ranges(mtrr_type * frs)
 {
 	unsigned int *p = (unsigned int *) frs;
@@ -51,12 +63,18 @@ get_fixed_ranges(mtrr_type * frs)
 		rdmsr(MTRRfix4K_C0000_MSR + i, p[6 + i * 2], p[7 + i * 2]);
 }
 
-static void __init print_fixed(unsigned base, unsigned step, const mtrr_type*types)
+void mtrr_save_fixed_ranges(void *info)
+{
+	get_fixed_ranges(mtrr_state.fixed_ranges);
+}
+
+static void __cpuinit print_fixed(unsigned base, unsigned step, const mtrr_type*types)
 {
 	unsigned i;
 
 	for (i = 0; i < 8; ++i, ++types, base += step)
-		printk(KERN_INFO "MTRR %05X-%05X %s\n", base, base + step - 1, mtrr_attrib_to_str(*types));
+		printk(KERN_INFO "MTRR %05X-%05X %s\n",
+			base, base + step - 1, mtrr_attrib_to_str(*types));
 }
 
 /* Grab all of the MTRR state for this CPU into *state */
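The fixed_range_blocks table introduced above describes the fixed-range MTRR MSRs as three runs terminated by an empty entry. A standalone sketch of the same walk (the MSR numbers 0x250/0x258/0x268 are the architectural MTRRfix64K/16K/4K base MSRs; the counting function is purely illustrative):

```c
#include <assert.h>

struct fixed_range_block {
	int base_msr;	/* first MSR of this run */
	int ranges;	/* number of MSRs in the run */
};

static const struct fixed_range_block blocks[] = {
	{ 0x250, 1 },	/* MTRRfix64K_00000: one 64k register   */
	{ 0x258, 2 },	/* MTRRfix16K_80000: two 16k registers  */
	{ 0x268, 8 },	/* MTRRfix4K_C0000: eight 4k registers  */
	{}		/* ranges == 0 terminates the walk      */
};

/* Same loop shape as the kernel's new set_fixed_ranges(): visit each
 * MSR in each block; the kernel calls set_fixed_range() per visit. */
static int count_fixed_msrs(void)
{
	int block = -1, range, n = 0;

	while (blocks[++block].ranges)
		for (range = 0; range < blocks[block].ranges; range++)
			n++;
	return n;
}
```

The walk visits 11 MSRs in total, which is what lets the rewritten set_fixed_ranges() replace three hand-unrolled loops with one.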
@@ -147,6 +165,44 @@ void mtrr_wrmsr(unsigned msr, unsigned a, unsigned b)
 	      smp_processor_id(), msr, a, b);
 }
 
+/**
+ * Enable and allow read/write of extended fixed-range MTRR bits on K8 CPUs
+ * see AMD publication no. 24593, chapter 3.2.1 for more information
+ */
+static inline void k8_enable_fixed_iorrs(void)
+{
+	unsigned lo, hi;
+
+	rdmsr(MSR_K8_SYSCFG, lo, hi);
+	mtrr_wrmsr(MSR_K8_SYSCFG, lo
+				| K8_MTRRFIXRANGE_DRAM_ENABLE
+				| K8_MTRRFIXRANGE_DRAM_MODIFY, hi);
+}
+
+/**
+ * Checks and updates a fixed-range MTRR if it differs from the value it
+ * should have. If K8 extensions are wanted, update the K8 SYSCFG MSR also.
+ * See AMD publication no. 24593, chapter 7.8.1, page 233 for more information.
+ * \param msr MSR address of the MTRR which should be checked and updated
+ * \param changed pointer which indicates whether the MTRR needed to be changed
+ * \param msrwords pointer to the MSR values which the MSR should have
+ */
+static void set_fixed_range(int msr, int * changed, unsigned int * msrwords)
+{
+	unsigned lo, hi;
+
+	rdmsr(msr, lo, hi);
+
+	if (lo != msrwords[0] || hi != msrwords[1]) {
+		if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD &&
+		    boot_cpu_data.x86 == 15 &&
+		    ((msrwords[0] | msrwords[1]) & K8_MTRR_RDMEM_WRMEM_MASK))
+			k8_enable_fixed_iorrs();
+
+		mtrr_wrmsr(msr, msrwords[0], msrwords[1]);
+		*changed = TRUE;
+	}
+}
+
 int generic_get_free_region(unsigned long base, unsigned long size, int replace_reg)
 /*  [SUMMARY] Get a free MTRR.
     <base> The starting (base) address of the region.
@@ -196,36 +252,21 @@ static void generic_get_mtrr(unsigned int reg, unsigned long *base,
 	*type = base_lo & 0xff;
 }
 
+/**
+ * Checks and updates the fixed-range MTRRs if they differ from the saved set
+ * \param frs pointer to fixed-range MTRR values, saved by get_fixed_ranges()
+ */
 static int set_fixed_ranges(mtrr_type * frs)
 {
-	unsigned int *p = (unsigned int *) frs;
+	unsigned long long *saved = (unsigned long long *) frs;
 	int changed = FALSE;
-	int i;
-	unsigned int lo, hi;
+	int block=-1, range;
 
-	rdmsr(MTRRfix64K_00000_MSR, lo, hi);
-	if (p[0] != lo || p[1] != hi) {
-		mtrr_wrmsr(MTRRfix64K_00000_MSR, p[0], p[1]);
-		changed = TRUE;
-	}
-
-	for (i = 0; i < 2; i++) {
-		rdmsr(MTRRfix16K_80000_MSR + i, lo, hi);
-		if (p[2 + i * 2] != lo || p[3 + i * 2] != hi) {
-			mtrr_wrmsr(MTRRfix16K_80000_MSR + i, p[2 + i * 2],
-				   p[3 + i * 2]);
-			changed = TRUE;
-		}
-	}
-
-	for (i = 0; i < 8; i++) {
-		rdmsr(MTRRfix4K_C0000_MSR + i, lo, hi);
-		if (p[6 + i * 2] != lo || p[7 + i * 2] != hi) {
-			mtrr_wrmsr(MTRRfix4K_C0000_MSR + i, p[6 + i * 2],
-				   p[7 + i * 2]);
-			changed = TRUE;
-		}
-	}
+	while (fixed_range_blocks[++block].ranges)
+		for (range=0; range < fixed_range_blocks[block].ranges; range++)
+			set_fixed_range(fixed_range_blocks[block].base_msr + range,
+				&changed, (unsigned int *) saved++);
 
 	return changed;
 }
@@ -428,7 +469,7 @@ int generic_validate_add_page(unsigned long base, unsigned long size, unsigned i
 		}
 	}
 
-	if (base + size < 0x100) {
+	if (base < 0x100) {
 		printk(KERN_WARNING "mtrr: cannot set region below 1 MiB (0x%lx000,0x%lx000)\n",
 			base, size);
 		return -EINVAL;
......
@@ -729,6 +729,17 @@ void mtrr_ap_init(void)
 	local_irq_restore(flags);
 }
 
+/**
+ * Save current fixed-range MTRR state of the BSP
+ */
+void mtrr_save_state(void)
+{
+	if (smp_processor_id() == 0)
+		mtrr_save_fixed_ranges(NULL);
+	else
+		smp_call_function_single(0, mtrr_save_fixed_ranges, NULL, 1, 1);
+}
+
 static int __init mtrr_init_finialize(void)
 {
 	if (!mtrr_if)
......
@@ -58,13 +58,3 @@ int __init nexgen_init_cpu(void)
 	cpu_devs[X86_VENDOR_NEXGEN] = &nexgen_cpu_dev;
 	return 0;
 }
-
-//early_arch_initcall(nexgen_init_cpu);
-
-static int __init nexgen_exit_cpu(void)
-{
-	cpu_devs[X86_VENDOR_NEXGEN] = NULL;
-	return 0;
-}
-late_initcall(nexgen_exit_cpu);
@@ -72,8 +72,7 @@ static int show_cpuinfo(struct seq_file *m, void *v)
 		"stc",
 		"100mhzsteps",
 		"hwpstate",
-		NULL,
-		NULL,	/* constant_tsc - moved to flags */
+		"",	/* constant_tsc - moved to flags */
 		/* nothing */
 	};
 	struct cpuinfo_x86 *c = v;
......
@@ -50,12 +50,3 @@ int __init rise_init_cpu(void)
 	return 0;
 }
-
-//early_arch_initcall(rise_init_cpu);
-
-static int __init rise_exit_cpu(void)
-{
-	cpu_devs[X86_VENDOR_RISE] = NULL;
-	return 0;
-}
-late_initcall(rise_exit_cpu);
@@ -112,13 +112,3 @@ int __init transmeta_init_cpu(void)
 	cpu_devs[X86_VENDOR_TRANSMETA] = &transmeta_cpu_dev;
 	return 0;
 }
-
-//early_arch_initcall(transmeta_init_cpu);
-
-static int __init transmeta_exit_cpu(void)
-{
-	cpu_devs[X86_VENDOR_TRANSMETA] = NULL;
-	return 0;
-}
-late_initcall(transmeta_exit_cpu);
@@ -24,13 +24,3 @@ int __init umc_init_cpu(void)
 	cpu_devs[X86_VENDOR_UMC] = &umc_cpu_dev;
 	return 0;
 }
-
-//early_arch_initcall(umc_init_cpu);
-
-static int __init umc_exit_cpu(void)
-{
-	cpu_devs[X86_VENDOR_UMC] = NULL;
-	return 0;
-}
-late_initcall(umc_exit_cpu);
@@ -33,7 +33,7 @@ static void doublefault_fn(void)
 		printk("double fault, tss at %08lx\n", tss);
 
 		if (ptr_ok(tss)) {
-			struct tss_struct *t = (struct tss_struct *)tss;
+			struct i386_hw_tss *t = (struct i386_hw_tss *)tss;
 
 			printk("eip = %08lx, esp = %08lx\n", t->eip, t->esp);
@@ -49,13 +49,15 @@ static void doublefault_fn(void)
 }
 
 struct tss_struct doublefault_tss __cacheline_aligned = {
+	.x86_tss = {
 		.esp0		= STACK_START,
 		.ss0		= __KERNEL_DS,
 		.ldt		= 0,
 		.io_bitmap_base	= INVALID_IO_BITMAP_OFFSET,
 
 		.eip		= (unsigned long) doublefault_fn,
-		.eflags		= X86_EFLAGS_SF | 0x2,	/* 0x2 bit is always set */
+		/* 0x2 bit is always set */
+		.eflags		= X86_EFLAGS_SF | 0x2,
 		.esp		= STACK_START,
 		.es		= __USER_DS,
 		.cs		= __KERNEL_CS,
@@ -63,4 +65,5 @@ struct tss_struct doublefault_tss __cacheline_aligned = {
 		.ds		= __USER_DS,
 		.__cr3		= __pa(swapper_pg_dir)
+	}
 };
@@ -161,25 +161,26 @@ static struct resource standard_io_resources[] = { {
 
 static int __init romsignature(const unsigned char *rom)
 {
+	const unsigned short * const ptr = (const unsigned short *)rom;
 	unsigned short sig;
 
-	return probe_kernel_address((const unsigned short *)rom, sig) == 0 &&
-	       sig == ROMSIGNATURE;
+	return probe_kernel_address(ptr, sig) == 0 && sig == ROMSIGNATURE;
 }
 
-static int __init romchecksum(unsigned char *rom, unsigned long length)
+static int __init romchecksum(const unsigned char *rom, unsigned long length)
 {
-	unsigned char sum;
+	unsigned char sum, c;
 
-	for (sum = 0; length; length--)
-		sum += *rom++;
-	return sum == 0;
+	for (sum = 0; length && probe_kernel_address(rom++, c) == 0; length--)
+		sum += c;
+	return !length && !sum;
 }
 
 static void __init probe_roms(void)
 {
+	const unsigned char *rom;
 	unsigned long start, length, upper;
-	unsigned char *rom;
+	unsigned char c;
 	int i;
 
 	/* video rom */
@@ -191,8 +192,11 @@ static void __init probe_roms(void)
 		video_rom_resource.start = start;
 
+		if (probe_kernel_address(rom + 2, c) != 0)
+			continue;
+
 		/* 0 < length <= 0x7f * 512, historically */
-		length = rom[2] * 512;
+		length = c * 512;
 
 		/* if checksum okay, trust length byte */
 		if (length && romchecksum(rom, length))
@@ -226,8 +230,11 @@ static void __init probe_roms(void)
 		if (!romsignature(rom))
 			continue;
 
+		if (probe_kernel_address(rom + 2, c) != 0)
+			continue;
+
 		/* 0 < length <= 0x7f * 512, historically */
-		length = rom[2] * 512;
+		length = c * 512;
 
 		/* but accept any length that fits if checksum okay */
 		if (!length || start + length > upper || !romchecksum(rom, length))
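The checksum rule romchecksum() enforces is the classic option-ROM convention: all bytes of the ROM must sum to 0 modulo 256. A userspace sketch without the probe_kernel_address() fault handling (function name and sample data are illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* An option ROM is valid when the 8-bit wraparound sum of all its
 * bytes is zero; BIOS vendors pad the image with a fix-up byte. */
static int rom_checksum_ok(const unsigned char *rom, size_t length)
{
	unsigned char sum = 0;

	while (length--)
		sum += *rom++;
	return sum == 0;
}

/* 0x55 + 0xaa + 0x01 == 0x100, which wraps to 0 in 8 bits */
static const unsigned char demo_rom[3] = { 0x55, 0xaa, 0x01 };
```

Dropping the final fix-up byte (summing only the first two bytes, 0x55 + 0xaa = 0xff) makes the check fail, which is exactly how probe_roms() rejects garbage regions.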
@@ -386,10 +393,8 @@ int __init sanitize_e820_map(struct e820entry * biosmap, char * pnr_map)
 	   ____________________33__
 	   ______________________4_
 	*/
-	printk("sanitize start\n");
 
 	/* if there's only one memory region, don't bother */
 	if (*pnr_map < 2) {
-		printk("sanitize bail 0\n");
 		return -1;
 	}
@@ -398,7 +403,6 @@ int __init sanitize_e820_map(struct e820entry * biosmap, char * pnr_map)
 	/* bail out if we find any unreasonable addresses in bios map */
 	for (i=0; i<old_nr; i++)
 		if (biosmap[i].addr + biosmap[i].size < biosmap[i].addr) {
-			printk("sanitize bail 1\n");
 			return -1;
 		}
...@@ -494,7 +498,6 @@ int __init sanitize_e820_map(struct e820entry * biosmap, char * pnr_map) ...@@ -494,7 +498,6 @@ int __init sanitize_e820_map(struct e820entry * biosmap, char * pnr_map)
memcpy(biosmap, new_bios, new_nr*sizeof(struct e820entry)); memcpy(biosmap, new_bios, new_nr*sizeof(struct e820entry));
*pnr_map = new_nr; *pnr_map = new_nr;
printk("sanitize end\n");
return 0; return 0;
} }
...@@ -525,7 +528,6 @@ int __init copy_e820_map(struct e820entry * biosmap, int nr_map) ...@@ -525,7 +528,6 @@ int __init copy_e820_map(struct e820entry * biosmap, int nr_map)
unsigned long long size = biosmap->size; unsigned long long size = biosmap->size;
unsigned long long end = start + size; unsigned long long end = start + size;
unsigned long type = biosmap->type; unsigned long type = biosmap->type;
printk("copy_e820_map() start: %016Lx size: %016Lx end: %016Lx type: %ld\n", start, size, end, type);
/* Overflow in 64 bits? Ignore the memory map. */ /* Overflow in 64 bits? Ignore the memory map. */
if (start > end) if (start > end)
...@@ -536,17 +538,11 @@ int __init copy_e820_map(struct e820entry * biosmap, int nr_map) ...@@ -536,17 +538,11 @@ int __init copy_e820_map(struct e820entry * biosmap, int nr_map)
* Not right. Fix it up. * Not right. Fix it up.
*/ */
if (type == E820_RAM) { if (type == E820_RAM) {
printk("copy_e820_map() type is E820_RAM\n");
if (start < 0x100000ULL && end > 0xA0000ULL) { if (start < 0x100000ULL && end > 0xA0000ULL) {
printk("copy_e820_map() lies in range...\n"); if (start < 0xA0000ULL)
if (start < 0xA0000ULL) {
printk("copy_e820_map() start < 0xA0000ULL\n");
add_memory_region(start, 0xA0000ULL-start, type); add_memory_region(start, 0xA0000ULL-start, type);
} if (end <= 0x100000ULL)
if (end <= 0x100000ULL) {
printk("copy_e820_map() end <= 0x100000ULL\n");
continue; continue;
}
start = 0x100000ULL; start = 0x100000ULL;
size = end - start; size = end - start;
} }
@@ -818,6 +814,26 @@ void __init limit_regions(unsigned long long size)
	print_memory_map("limit_regions endfunc");
 }
 
+/*
+ * This function checks if any part of the range <start,end> is mapped
+ * with type.
+ */
+int
+e820_any_mapped(u64 start, u64 end, unsigned type)
+{
+	int i;
+
+	for (i = 0; i < e820.nr_map; i++) {
+		const struct e820entry *ei = &e820.map[i];
+
+		if (type && ei->type != type)
+			continue;
+		if (ei->addr >= end || ei->addr + ei->size <= start)
+			continue;
+		return 1;
+	}
+	return 0;
+}
+EXPORT_SYMBOL_GPL(e820_any_mapped);
 /*
  * This function checks if the entire range <start,end> is mapped with type.
  *
......
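The newly added e820_any_mapped() rejects non-overlap with the test `ei->addr >= end || ei->addr + ei->size <= start`. That half-open interval logic can be exercised on its own; the sketch below mirrors it against a hypothetical two-entry map (struct and function names here are stand-ins, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

/* Minimal stand-in for struct e820entry; field names mirror the kernel's. */
struct entry { uint64_t addr, size; unsigned type; };

/* Same overlap logic as e820_any_mapped(): returns 1 if any entry of the
 * given type intersects [start, end). A type of 0 matches any entry. */
static int any_mapped(const struct entry *map, int n,
                      uint64_t start, uint64_t end, unsigned type)
{
    for (int i = 0; i < n; i++) {
        if (type && map[i].type != type)
            continue;
        if (map[i].addr >= end || map[i].addr + map[i].size <= start)
            continue;            /* ranges do not intersect */
        return 1;
    }
    return 0;
}
```

With entries covering 0-640K and 1M-2M, a query spanning the 640K-1M hole alone reports unmapped, while anything touching either entry reports mapped.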
@@ -69,13 +69,11 @@ static void efi_call_phys_prelog(void) __acquires(efi_rt_lock)
 {
	unsigned long cr4;
	unsigned long temp;
-	struct Xgt_desc_struct *cpu_gdt_descr;
+	struct Xgt_desc_struct gdt_descr;
 
	spin_lock(&efi_rt_lock);
	local_irq_save(efi_rt_eflags);
 
-	cpu_gdt_descr = &per_cpu(cpu_gdt_descr, 0);
-
	/*
	 * If I don't have PSE, I should just duplicate two entries in page
	 * directory. If I have PSE, I just need to duplicate one entry in
@@ -105,17 +103,19 @@ static void efi_call_phys_prelog(void) __acquires(efi_rt_lock)
	 */
	local_flush_tlb();
 
-	cpu_gdt_descr->address = __pa(cpu_gdt_descr->address);
-	load_gdt(cpu_gdt_descr);
+	gdt_descr.address = __pa(get_cpu_gdt_table(0));
+	gdt_descr.size = GDT_SIZE - 1;
+	load_gdt(&gdt_descr);
 }
 
 static void efi_call_phys_epilog(void) __releases(efi_rt_lock)
 {
	unsigned long cr4;
-	struct Xgt_desc_struct *cpu_gdt_descr = &per_cpu(cpu_gdt_descr, 0);
+	struct Xgt_desc_struct gdt_descr;
 
-	cpu_gdt_descr->address = (unsigned long)__va(cpu_gdt_descr->address);
-	load_gdt(cpu_gdt_descr);
+	gdt_descr.address = (unsigned long)get_cpu_gdt_table(0);
+	gdt_descr.size = GDT_SIZE - 1;
+	load_gdt(&gdt_descr);
 
	cr4 = read_cr4();
......
@@ -15,7 +15,7 @@
 * I changed all the .align's to 4 (16 byte alignment), as that's faster
 * on a 486.
 *
- * Stack layout in 'ret_from_system_call':
+ * Stack layout in 'syscall_exit':
 *	ptrace needs to have all regs on the stack.
 *	if the order here is changed, it needs to be
 *	updated in fork.c:copy_process, signal.c:do_signal,
@@ -132,7 +132,7 @@ VM_MASK = 0x00020000
	movl $(__USER_DS), %edx; \
	movl %edx, %ds; \
	movl %edx, %es; \
-	movl $(__KERNEL_PDA), %edx; \
+	movl $(__KERNEL_PERCPU), %edx; \
	movl %edx, %fs
 
 #define RESTORE_INT_REGS \
@@ -305,16 +305,12 @@ sysenter_past_esp:
	pushl $(__USER_CS)
	CFI_ADJUST_CFA_OFFSET 4
	/*CFI_REL_OFFSET cs, 0*/
-#ifndef CONFIG_COMPAT_VDSO
	/*
	 * Push current_thread_info()->sysenter_return to the stack.
	 * A tiny bit of offset fixup is necessary - 4*4 means the 4 words
	 * pushed above; +8 corresponds to copy_thread's esp0 setting.
	 */
	pushl (TI_sysenter_return-THREAD_SIZE+8+4*4)(%esp)
-#else
-	pushl $SYSENTER_RETURN
-#endif
	CFI_ADJUST_CFA_OFFSET 4
	CFI_REL_OFFSET eip, 0
@@ -342,7 +338,7 @@ sysenter_past_esp:
	jae syscall_badsys
	call *sys_call_table(,%eax,4)
	movl %eax,PT_EAX(%esp)
-	DISABLE_INTERRUPTS(CLBR_ECX|CLBR_EDX)
+	DISABLE_INTERRUPTS(CLBR_ANY)
	TRACE_IRQS_OFF
	movl TI_flags(%ebp), %ecx
	testw $_TIF_ALLWORK_MASK, %cx
@@ -560,9 +556,7 @@ END(syscall_badsys)
 #define FIXUP_ESPFIX_STACK \
	/* since we are on a wrong stack, we cant make it a C code :( */ \
-	movl %fs:PDA_cpu, %ebx; \
-	PER_CPU(cpu_gdt_descr, %ebx); \
-	movl GDS_address(%ebx), %ebx; \
+	PER_CPU(gdt_page, %ebx); \
	GET_DESC_BASE(GDT_ENTRY_ESPFIX_SS, %ebx, %eax, %ax, %al, %ah); \
	addl %esp, %eax; \
	pushl $__KERNEL_DS; \
@@ -635,7 +629,7 @@ ENTRY(name) \
	SAVE_ALL; \
	TRACE_IRQS_OFF \
	movl %esp,%eax; \
-	call smp_/**/name; \
+	call smp_##name; \
	jmp ret_from_intr; \
	CFI_ENDPROC; \
 ENDPROC(name)
@@ -643,11 +637,6 @@ ENDPROC(name)
 /* The include is where all of the SMP etc. interrupts come from */
 #include "entry_arch.h"
 
-/* This alternate entry is needed because we hijack the apic LVTT */
-#if defined(CONFIG_VMI) && defined(CONFIG_X86_LOCAL_APIC)
-BUILD_INTERRUPT(apic_vmi_timer_interrupt,LOCAL_TIMER_VECTOR)
-#endif
-
 KPROBE_ENTRY(page_fault)
	RING0_EC_FRAME
	pushl $do_page_fault
@@ -686,7 +675,7 @@ error_code:
	pushl %fs
	CFI_ADJUST_CFA_OFFSET 4
	/*CFI_REL_OFFSET fs, 0*/
-	movl $(__KERNEL_PDA), %ecx
+	movl $(__KERNEL_PERCPU), %ecx
	movl %ecx, %fs
	UNWIND_ESPFIX_STACK
	popl %ecx
......
@@ -34,17 +34,32 @@
 /*
  * This is how much memory *in addition to the memory covered up to
- * and including _end* we need mapped initially.  We need one bit for
- * each possible page, but only in low memory, which means
+ * and including _end* we need mapped initially.
+ * We need:
+ *  - one bit for each possible page, but only in low memory, which means
  *     2^32/4096/8 = 128K worst case (4G/4G split.)
+ *  - enough space to map all low memory, which means
+ *     (2^32/4096) / 1024 pages (worst case, non PAE)
+ *     (2^32/4096) / 512 + 4 pages (worst case for PAE)
+ *  - a few pages for allocator use before the kernel pagetable has
+ *    been set up
  *
  * Modulo rounding, each megabyte assigned here requires a kilobyte of
  * memory, which is currently unreclaimed.
  *
  * This should be a multiple of a page.
  */
-#define INIT_MAP_BEYOND_END	(128*1024)
+LOW_PAGES = 1<<(32-PAGE_SHIFT_asm)
+
+#if PTRS_PER_PMD > 1
+PAGE_TABLE_SIZE = (LOW_PAGES / PTRS_PER_PMD) + PTRS_PER_PGD
+#else
+PAGE_TABLE_SIZE = (LOW_PAGES / PTRS_PER_PGD)
+#endif
+BOOTBITMAP_SIZE = LOW_PAGES / 8
+ALLOCATOR_SLOP = 4
+
+INIT_MAP_BEYOND_END = BOOTBITMAP_SIZE + (PAGE_TABLE_SIZE + ALLOCATOR_SLOP)*PAGE_SIZE_asm
 
 /*
  * 32-bit kernel entrypoint; only used by the boot CPU.  On entry,
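The new INIT_MAP_BEYOND_END sizing above can be sanity-checked with the worst-case numbers the comment quotes (4G/4G split, 4K pages, non-PAE). A small arithmetic sketch, assuming PAGE_SHIFT = 12 and PTRS_PER_PGD = 1024 as on plain 32-bit x86:

```c
#include <assert.h>
#include <stdint.h>

/* Reproduce the head.S worst-case sizing arithmetic in plain C.
 * These constants are illustrative; the kernel derives the real values
 * from its configuration (PAGE_SHIFT_asm, PTRS_PER_PGD, etc.). */
enum { PAGE_SHIFT = 12 };

/* LOW_PAGES: one entry per 4K page covering the full 4G address space. */
static uint64_t low_pages(void)        { return 1ULL << (32 - PAGE_SHIFT); }

/* BOOTBITMAP_SIZE: one bit per low-memory page. */
static uint64_t bootbitmap_bytes(void) { return low_pages() / 8; }

/* Page-table pages needed to map all low memory, non-PAE case:
 * 1024 PTEs per page-table page. */
static uint64_t pt_pages_nonpae(void)  { return low_pages() / 1024; }
```

The bitmap works out to exactly the 128K worst case the comment cites, and the non-PAE case needs 1024 page-table pages, i.e. (2^32/4096)/1024.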
@@ -147,8 +162,7 @@ page_pde_offset = (__PAGE_OFFSET >> 20);
 /*
  * Non-boot CPU entry point; entered from trampoline.S
  * We can't lgdt here, because lgdt itself uses a data segment, but
- * we know the trampoline has already loaded the boot_gdt_table GDT
- * for us.
+ * we know the trampoline has already loaded the boot_gdt for us.
  *
  * If cpu hotplug is not supported then this code can go in init section
  * which will be freed later
@@ -318,12 +332,12 @@ is386:	movl $2,%ecx		# set MP
	movl %eax,%cr0
 
	call check_x87
-	call setup_pda
	lgdt early_gdt_descr
	lidt idt_descr
	ljmp $(__KERNEL_CS),$1f
 1:	movl $(__KERNEL_DS),%eax	# reload all the segment registers
	movl %eax,%ss			# after changing gdt.
+	movl %eax,%fs			# gets reset once there's real percpu
 
	movl $(__USER_DS),%eax		# DS/ES contains default USER segment
	movl %eax,%ds
@@ -333,16 +347,17 @@ is386:	movl $2,%ecx		# set MP
	movl %eax,%gs
	lldt %ax
 
-	movl $(__KERNEL_PDA),%eax
-	mov %eax,%fs
-
	cld			# gcc2 wants the direction flag cleared at all times
	pushl $0		# fake return address for unwinder
 #ifdef CONFIG_SMP
	movb ready, %cl
	movb $1, ready
	cmpb $0,%cl		# the first CPU calls start_kernel
-	jne initialize_secondary # all other CPUs call initialize_secondary
+	je 1f
+	movl $(__KERNEL_PERCPU), %eax
+	movl %eax,%fs		# set this cpu's percpu
+	jmp initialize_secondary # all other CPUs call initialize_secondary
+1:
 #endif /* CONFIG_SMP */
	jmp start_kernel
@@ -365,23 +380,6 @@ check_x87:
	.byte 0xDB,0xE4		/* fsetpm for 287, ignored by 387 */
	ret
 
-/*
- * Point the GDT at this CPU's PDA.  On boot this will be
- * cpu_gdt_table and boot_pda; for secondary CPUs, these will be
- * that CPU's GDT and PDA.
- */
-ENTRY(setup_pda)
-	/* get the PDA pointer */
-	movl start_pda, %eax
-
-	/* slot the PDA address into the GDT */
-	mov early_gdt_descr+2, %ecx
-	mov %ax, (__KERNEL_PDA+0+2)(%ecx)	/* base & 0x0000ffff */
-	shr $16, %eax
-	mov %al, (__KERNEL_PDA+4+0)(%ecx)	/* base & 0x00ff0000 */
-	mov %ah, (__KERNEL_PDA+4+3)(%ecx)	/* base & 0xff000000 */
-	ret
-
 /*
  * setup_idt
  *
@@ -554,9 +552,6 @@ ENTRY(empty_zero_page)
 * This starts the data section.
 */
 .data
-ENTRY(start_pda)
-	.long boot_pda
-
 ENTRY(stack_start)
	.long init_thread_union+THREAD_SIZE
	.long __BOOT_DS
@@ -588,7 +583,7 @@ fault_msg:
	.word 0				# 32 bit align gdt_desc.address
 boot_gdt_descr:
	.word __BOOT_DS+7
-	.long boot_gdt_table - __PAGE_OFFSET
+	.long boot_gdt - __PAGE_OFFSET
 
	.word 0				# 32-bit align idt_desc.address
 idt_descr:
@@ -599,67 +594,14 @@ idt_descr:
	.word 0				# 32 bit align gdt_desc.address
 ENTRY(early_gdt_descr)
	.word GDT_ENTRIES*8-1
-	.long cpu_gdt_table
+	.long per_cpu__gdt_page		/* Overwritten for secondary CPUs */
 
 /*
- * The boot_gdt_table must mirror the equivalent in setup.S and is
+ * The boot_gdt must mirror the equivalent in setup.S and is
  * used only for booting.
  */
	.align L1_CACHE_BYTES
-ENTRY(boot_gdt_table)
+ENTRY(boot_gdt)
	.fill GDT_ENTRY_BOOT_CS,8,0
	.quad 0x00cf9a000000ffff	/* kernel 4GB code at 0x00000000 */
	.quad 0x00cf92000000ffff	/* kernel 4GB data at 0x00000000 */
-
-/*
- * The Global Descriptor Table contains 28 quadwords, per-CPU.
- */
-	.align L1_CACHE_BYTES
-ENTRY(cpu_gdt_table)
-	.quad 0x0000000000000000	/* NULL descriptor */
-	.quad 0x0000000000000000	/* 0x0b reserved */
-	.quad 0x0000000000000000	/* 0x13 reserved */
-	.quad 0x0000000000000000	/* 0x1b reserved */
-	.quad 0x0000000000000000	/* 0x20 unused */
-	.quad 0x0000000000000000	/* 0x28 unused */
-	.quad 0x0000000000000000	/* 0x33 TLS entry 1 */
-	.quad 0x0000000000000000	/* 0x3b TLS entry 2 */
-	.quad 0x0000000000000000	/* 0x43 TLS entry 3 */
-	.quad 0x0000000000000000	/* 0x4b reserved */
-	.quad 0x0000000000000000	/* 0x53 reserved */
-	.quad 0x0000000000000000	/* 0x5b reserved */
-	.quad 0x00cf9a000000ffff	/* 0x60 kernel 4GB code at 0x00000000 */
-	.quad 0x00cf92000000ffff	/* 0x68 kernel 4GB data at 0x00000000 */
-	.quad 0x00cffa000000ffff	/* 0x73 user 4GB code at 0x00000000 */
-	.quad 0x00cff2000000ffff	/* 0x7b user 4GB data at 0x00000000 */
-	.quad 0x0000000000000000	/* 0x80 TSS descriptor */
-	.quad 0x0000000000000000	/* 0x88 LDT descriptor */
-
-	/*
-	 * Segments used for calling PnP BIOS have byte granularity.
-	 * They code segments and data segments have fixed 64k limits,
-	 * the transfer segment sizes are set at run time.
-	 */
-	.quad 0x00409a000000ffff	/* 0x90 32-bit code */
-	.quad 0x00009a000000ffff	/* 0x98 16-bit code */
-	.quad 0x000092000000ffff	/* 0xa0 16-bit data */
-	.quad 0x0000920000000000	/* 0xa8 16-bit data */
-	.quad 0x0000920000000000	/* 0xb0 16-bit data */
-
-	/*
-	 * The APM segments have byte granularity and their bases
-	 * are set at run time.  All have 64k limits.
-	 */
-	.quad 0x00409a000000ffff	/* 0xb8 APM CS    code */
-	.quad 0x00009a000000ffff	/* 0xc0 APM CS 16 code (16 bit) */
-	.quad 0x004092000000ffff	/* 0xc8 APM DS    data */
-
-	.quad 0x00c0920000000000	/* 0xd0 - ESPFIX SS */
-	.quad 0x00cf92000000ffff	/* 0xd8 - PDA */
-	.quad 0x0000000000000000	/* 0xe0 - unused */
-	.quad 0x0000000000000000	/* 0xe8 - unused */
-	.quad 0x0000000000000000	/* 0xf0 - unused */
-	.quad 0x0000000000000000	/* 0xf8 - GDT entry 31: double-fault TSS */
@@ -28,5 +28,3 @@ EXPORT_SYMBOL(__read_lock_failed);
 #endif
 
 EXPORT_SYMBOL(csum_partial);
-
-EXPORT_SYMBOL(_proxy_pda);
@@ -35,6 +35,7 @@
 #include <linux/msi.h>
 #include <linux/htirq.h>
 #include <linux/freezer.h>
+#include <linux/kthread.h>
 
 #include <asm/io.h>
 #include <asm/smp.h>
@@ -661,8 +662,6 @@ static int balanced_irq(void *unused)
	unsigned long prev_balance_time = jiffies;
	long time_remaining = balanced_irq_interval;
 
-	daemonize("kirqd");
-
	/* push everything to CPU 0 to give us a starting point.  */
	for (i = 0 ; i < NR_IRQS ; i++) {
		irq_desc[i].pending_mask = cpumask_of_cpu(0);
@@ -722,9 +721,8 @@ static int __init balanced_irq_init(void)
	}
 
	printk(KERN_INFO "Starting balanced_irq\n");
-	if (kernel_thread(balanced_irq, NULL, CLONE_KERNEL) >= 0)
+	if (!IS_ERR(kthread_run(balanced_irq, NULL, "kirqd")))
		return 0;
-	else
-		printk(KERN_ERR "balanced_irq_init: failed to spawn balanced_irq");
+	printk(KERN_ERR "balanced_irq_init: failed to spawn balanced_irq");
 failed:
	for_each_possible_cpu(i) {
@@ -1403,10 +1401,6 @@ static void __init setup_ExtINT_IRQ0_pin(unsigned int apic, unsigned int pin, in
	enable_8259A_irq(0);
 }
 
-static inline void UNEXPECTED_IO_APIC(void)
-{
-}
-
 void __init print_IO_APIC(void)
 {
	int apic, i;
@@ -1446,34 +1440,12 @@ void __init print_IO_APIC(void)
	printk(KERN_DEBUG "....... : physical APIC id: %02X\n", reg_00.bits.ID);
	printk(KERN_DEBUG "....... : Delivery Type: %X\n", reg_00.bits.delivery_type);
	printk(KERN_DEBUG "....... : LTS          : %X\n", reg_00.bits.LTS);
-	if (reg_00.bits.ID >= get_physical_broadcast())
-		UNEXPECTED_IO_APIC();
-	if (reg_00.bits.__reserved_1 || reg_00.bits.__reserved_2)
-		UNEXPECTED_IO_APIC();
 
	printk(KERN_DEBUG ".... register #01: %08X\n", reg_01.raw);
	printk(KERN_DEBUG "....... : max redirection entries: %04X\n", reg_01.bits.entries);
-	if (	(reg_01.bits.entries != 0x0f) && /* older (Neptune) boards */
-		(reg_01.bits.entries != 0x17) && /* typical ISA+PCI boards */
-		(reg_01.bits.entries != 0x1b) && /* Compaq Proliant boards */
-		(reg_01.bits.entries != 0x1f) && /* dual Xeon boards */
-		(reg_01.bits.entries != 0x22) && /* bigger Xeon boards */
-		(reg_01.bits.entries != 0x2E) &&
-		(reg_01.bits.entries != 0x3F)
-	)
-		UNEXPECTED_IO_APIC();
 
	printk(KERN_DEBUG "....... : PRQ implemented: %X\n", reg_01.bits.PRQ);
	printk(KERN_DEBUG "....... : IO APIC version: %04X\n", reg_01.bits.version);
-	if (	(reg_01.bits.version != 0x01) && /* 82489DX IO-APICs */
-		(reg_01.bits.version != 0x10) && /* oldest IO-APICs */
-		(reg_01.bits.version != 0x11) && /* Pentium/Pro IO-APICs */
-		(reg_01.bits.version != 0x13) && /* Xeon IO-APICs */
-		(reg_01.bits.version != 0x20)    /* Intel P64H (82806 AA) */
-	)
-		UNEXPECTED_IO_APIC();
-	if (reg_01.bits.__reserved_1 || reg_01.bits.__reserved_2)
-		UNEXPECTED_IO_APIC();
 
	/*
	 * Some Intel chipsets with IO APIC VERSION of 0x1? don't have reg_02,
@@ -1483,8 +1455,6 @@ void __init print_IO_APIC(void)
	if (reg_01.bits.version >= 0x10 && reg_02.raw != reg_01.raw) {
		printk(KERN_DEBUG ".... register #02: %08X\n", reg_02.raw);
		printk(KERN_DEBUG "....... : arbitration: %02X\n", reg_02.bits.arbitration);
-		if (reg_02.bits.__reserved_1 || reg_02.bits.__reserved_2)
-			UNEXPECTED_IO_APIC();
	}
 
	/*
@@ -1496,8 +1466,6 @@ void __init print_IO_APIC(void)
	    reg_03.raw != reg_01.raw) {
		printk(KERN_DEBUG ".... register #03: %08X\n", reg_03.raw);
		printk(KERN_DEBUG "....... : Boot DT    : %X\n", reg_03.bits.boot_DT);
-		if (reg_03.bits.__reserved_1)
-			UNEXPECTED_IO_APIC();
	}
 
	printk(KERN_DEBUG ".... IRQ redirection table:\n");
......
@@ -16,6 +16,7 @@
 #include <linux/stddef.h>
 #include <linux/slab.h>
 #include <linux/thread_info.h>
+#include <linux/syscalls.h>
 
 /* Set EXTENT bits starting at BASE in BITMAP to value TURN_ON. */
 static void set_bitmap(unsigned long *bitmap, unsigned int base, unsigned int extent, int new_value)
@@ -113,7 +114,7 @@ asmlinkage long sys_ioperm(unsigned long from, unsigned long num, int turn_on)
		 * Reset the owner so that a process switch will not set
		 * tss->io_bitmap_base to IO_BITMAP_OFFSET.
		 */
-		tss->io_bitmap_base = INVALID_IO_BITMAP_OFFSET_LAZY;
+		tss->x86_tss.io_bitmap_base = INVALID_IO_BITMAP_OFFSET_LAZY;
		tss->io_bitmap_owner = NULL;
 
		put_cpu();
......
@@ -24,6 +24,9 @@
 DEFINE_PER_CPU(irq_cpustat_t, irq_stat) ____cacheline_internodealigned_in_smp;
 EXPORT_PER_CPU_SYMBOL(irq_stat);
 
+DEFINE_PER_CPU(struct pt_regs *, irq_regs);
+EXPORT_PER_CPU_SYMBOL(irq_regs);
+
 /*
  * 'what should we do if we get a hw irq event on an illegal vector'.
  * each architecture has to answer this themselves.
......
@@ -477,7 +477,7 @@ static int __init smp_read_mpc(struct mp_config_table *mpc)
		}
		++mpc_record;
	}
-	clustered_apic_check();
+	setup_apic_routing();
	if (!num_processors)
		printk(KERN_ERR "SMP mptable: no processors registered!\n");
	return num_processors;
......
This diff is collapsed.
@@ -10,7 +10,7 @@
 #include <asm/delay.h>
 
 #include <linux/pci.h>
-#include <linux/reboot_fixups.h>
+#include <asm/reboot_fixups.h>
 
 static void cs5530a_warm_reset(struct pci_dev *dev)
 {
......
This diff is collapsed.
@@ -70,8 +70,6 @@
 
 #include <asm/i8259.h>
 
-int pit_latch_buggy;		/* extern */
-
 #include "do_timer.h"
 
 unsigned int cpu_khz;	/* Detected as we calibrate the TSC */
......
@@ -29,7 +29,7 @@
 *
 *	TYPE		VALUE
 *	R_386_32	startup_32_smp
- *	R_386_32	boot_gdt_table
+ *	R_386_32	boot_gdt
 */
 
 #include <linux/linkage.h>
@@ -62,8 +62,8 @@ r_base = .
	 * to 32 bit.
	 */
 
-	lidtl	boot_idt - r_base	# load idt with 0, 0
-	lgdtl	boot_gdt - r_base	# load gdt with whatever is appropriate
+	lidtl	boot_idt_descr - r_base	# load idt with 0, 0
+	lgdtl	boot_gdt_descr - r_base	# load gdt with whatever is appropriate
 
	xor	%ax, %ax
	inc	%ax		# protected mode (PE) bit
@@ -73,11 +73,11 @@ r_base = .
	# These need to be in the same 64K segment as the above;
	# hence we don't use the boot_gdt_descr defined in head.S
-boot_gdt:
+boot_gdt_descr:
	.word	__BOOT_DS + 7			# gdt limit
-	.long	boot_gdt_table-__PAGE_OFFSET	# gdt base
+	.long	boot_gdt - __PAGE_OFFSET	# gdt base
 
-boot_idt:
+boot_idt_descr:
	.word	0				# idt limit = 0
	.long	0				# idt base = 0L
......
This diff is collapsed.
@@ -7,7 +7,7 @@
 
 SECTIONS
 {
-  . = VDSO_PRELINK + SIZEOF_HEADERS;
+  . = VDSO_PRELINK_asm + SIZEOF_HEADERS;
 
   .hash           : { *(.hash) }	:text
   .gnu.hash       : { *(.gnu.hash) }
@@ -21,7 +21,7 @@ SECTIONS
      For the layouts to match, we need to skip more than enough
      space for the dynamic symbol table et al.  If this amount
      is insufficient, ld -shared will barf.  Just increase it here. */
-  . = VDSO_PRELINK + 0x400;
+  . = VDSO_PRELINK_asm + 0x400;
 
   .text           : { *(.text) }	:text	=0x90909090
   .note           : { *(.note.*) }	:text	:note
......
This diff is collapsed.
@@ -45,7 +45,7 @@ static struct dmi_system_id __initdata bigsmp_dmi_table[] = {
 };
 
-static int probe_bigsmp(void)
+static int __init probe_bigsmp(void)
 {
	if (def_to_bigsmp)
		dmi_bigsmp = 1;
......
This diff is collapsed.