Commit 6f053df1 authored by Linus Torvalds

Import 2.3.23pre3

parent bd79a781
......@@ -175,18 +175,25 @@ CONFIG_MATHEMU
on the Alpha. The only time you would ever not say Y is to say M in
order to debug the code. Say Y unless you know what you are doing.
Support for over 1Gig of memory
CONFIG_BIGMEM
Linux can use up to 1 Gigabyte (= 2^30 bytes) of physical memory.
If you are compiling a kernel which will never run on a machine with
more than 1 Gigabyte, answer N here. Otherwise, say Y.
The actual amount of physical memory may need to be specified using a
kernel command line option such as "mem=256M". (Try "man bootparam"
or see the documentation of your boot loader (lilo or loadlin) about
how to pass options to the kernel at boot time. The lilo procedure
is also explained in the SCSI-HOWTO, available from
http://metalab.unc.edu/mdw/linux.html#howto .)
High Memory support
CONFIG_NOHIGHMEM
If you are compiling a kernel which will never run on a machine
with more than 1 Gigabyte total physical RAM, answer "off"
here (default choice).
Linux can use up to 64 Gigabytes of physical memory on x86 systems.
High memory is all the physical RAM that cannot be directly
mapped by the kernel - i.e. 3GB if there is 4GB RAM in the system,
7GB if there is 8GB RAM in the system.
If you have 4 Gigabytes of physical RAM or less, answer "4GB" here.
If you have more than 4 Gigabytes, answer "64GB" here. This
selection turns Intel PAE (Physical Address Extension) mode on.
PAE implements 3-level paging on IA32 processors. PAE is fully
supported by Linux; PAE mode is implemented on all recent Intel
processors (PPro and better). NOTE: The "64GB" kernel will not
boot on CPUs that do not support PAE!
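For reference, the split described above is computed by the i386 setup
code changed later in this commit - a condensed sketch, using the same
identifiers as arch/i386/kernel/setup.c:

	/* Everything the kernel cannot map directly becomes high memory. */
	max_low_pfn = max_pfn;			/* top of directly mapped RAM */
	if (max_low_pfn > MAXMEM_PFN)
		max_low_pfn = MAXMEM_PFN;
#ifdef CONFIG_HIGHMEM
	highstart_pfn = highend_pfn = max_pfn;
	if (max_pfn > MAXMEM_PFN) {
		highstart_pfn = MAXMEM_PFN;	/* high memory starts here... */
		highend_pfn = max_pfn;		/* ...and ends at the top of RAM */
	}
#endif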
Normal PC floppy disk support
CONFIG_BLK_DEV_FD
......@@ -12180,18 +12187,44 @@ Include support for the NetWinder
CONFIG_ARCH_NETWINDER
Say Y here if you intend to run this kernel on the NetWinder.
Maximum Physical Memory
Virtual/Physical Memory Split
CONFIG_1GB
Linux can use up to 2 Gigabytes (= 2^31 bytes) of physical memory.
If you are compiling a kernel which will never run on a machine with
more than 1 Gigabyte, answer "1GB" here. Otherwise, say "2GB".
The actual amount of physical memory should be specified using a
kernel command line option such as "mem=256M". (Try "man bootparam"
or see the documentation of your boot loader (lilo or loadlin) about
how to pass options to the kernel at boot time. The lilo procedure
is also explained in the SCSI-HOWTO, available from
http://metalab.unc.edu/mdw/linux.html#howto .)
If you are compiling a kernel which will never run on a machine
with more than 1 Gigabyte total physical RAM, answer "3GB/1GB"
here (default choice).
On 32-bit x86 systems Linux can use up to 64 Gigabytes of physical
memory. However 32-bit x86 processors have only 4 Gigabytes of
virtual memory space. This option specifies the maximum amount of
virtual memory space one process can potentially use. Certain types
of applications (eg. database servers) perform better if they have
as much virtual memory per process as possible.
The remaining part of the 4G virtual memory space is used by the
kernel to 'permanently map' as much physical memory as possible.
Certain types of applications perform better if there is more
'permanently mapped' kernel memory.
[WARNING! Certain boards do not support PCI DMA to physical addresses
bigger than 2 Gigabytes. Non-DMA-able memory must not be permanently
mapped by the kernel, thus a 1G/3G split will not work on such boxes.]
As you can see there is no 'perfect split' - the fundamental
problem is that 4G of 32-bit virtual memory space is simply not
enough. So you'll have to make your own choice, depending on the
application load of your box. A 2G/2G split is typically a good
choice for a generic Linux server with lots of RAM.
Any potentially remaining (not permanently mapped) part of physical
memory is called 'high memory'. How much total high memory the kernel
can handle is influenced by the (next) High Memory configuration option.
The actual amount of total physical memory will either be
autodetected or can be forced by using a kernel command line option
such as "mem=256M". (Try "man bootparam" or see the documentation of
your boot loader (lilo or loadlin) about how to pass options to the
kernel at boot time. The lilo procedure is also explained in the
SCSI-HOWTO, available from http://metalab.unc.edu/mdw/linux.html#howto .)
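A worked sketch of the numbers behind the split (not part of the patch;
the PAGE_OFFSET values are the conventional ones for each choice):

	/*
	 * MAXMEM is how much physical RAM the kernel can permanently map:
	 * the kernel part of the 4G space, minus 128MB for vmalloc/initrd.
	 */
	#define VMALLOC_RESERVE	(128 << 20)
	#define MAXMEM		((unsigned long)(-PAGE_OFFSET - VMALLOC_RESERVE))
	/*
	 * 3GB/1GB split: PAGE_OFFSET = 0xC0000000 -> MAXMEM = 0x38000000 ( 896MB)
	 * 2GB/2GB split: PAGE_OFFSET = 0x80000000 -> MAXMEM = 0x78000000 (1920MB)
	 * (in 32-bit arithmetic, -PAGE_OFFSET is the size of the kernel mapping)
	 */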
Math emulation
CONFIG_NWFPE
......@@ -12802,7 +12835,7 @@ CONFIG_KHTTPD
# LocalWords: KERNNAME kname ktype kernelname Kerneltype KERNTYPE Alt RX mdafb
# LocalWords: dataless kerneltype SYSNAME Comtrol Rocketport palmtop fbset EGS
# LocalWords: nvram SYSRQ SysRq PrintScreen sysrq NVRAMs NvRAM Shortwave RTTY
# LocalWords: Sitor Amtor Pactor GTOR hayes TX TMOUT JFdocs BIGMEM DAC IRQ's
# LocalWords: Sitor Amtor Pactor GTOR hayes TX TMOUT JFdocs HIGHMEM DAC IRQ's
# LocalWords: IDEPCI IDEDMA idedma PDC pdc TRM trm raidtools luthien nuclecu
# LocalWords: unam mx miguel koobera uic EMUL solaris pp ieee lpsg co DMAs TOS
# LocalWords: BLDCONFIG preloading jumperless BOOTINIT modutils multipath GRE
......
......@@ -42,6 +42,18 @@ if [ "$CONFIG_MK7" = "y" ]; then
define_bool CONFIG_X86_USE_3DNOW y
fi
choice 'High Memory Support' \
"off CONFIG_NOHIGHMEM \
4GB CONFIG_HIGHMEM4G \
64GB CONFIG_HIGHMEM64G" off
if [ "$CONFIG_HIGHMEM4G" = "y" ]; then
define_bool CONFIG_HIGHMEM y
fi
if [ "$CONFIG_HIGHMEM64G" = "y" ]; then
define_bool CONFIG_HIGHMEM y
define_bool CONFIG_X86_PAE y
fi
bool 'Math emulation' CONFIG_MATH_EMULATION
bool 'MTRR (Memory Type Range Register) support' CONFIG_MTRR
bool 'Symmetric multi-processing support' CONFIG_SMP
......@@ -59,7 +71,6 @@ endmenu
mainmenu_option next_comment
comment 'General setup'
bool 'Support for over 1Gig of memory' CONFIG_BIGMEM
bool 'Networking support' CONFIG_NET
bool 'SGI Visual Workstation support' CONFIG_VISWS
if [ "$CONFIG_VISWS" = "y" ]; then
......
......@@ -24,8 +24,9 @@ CONFIG_X86_BSWAP=y
CONFIG_X86_POPAD_OK=y
CONFIG_X86_TSC=y
CONFIG_X86_GOOD_APIC=y
CONFIG_1GB=y
# CONFIG_2GB is not set
CONFIG_NOHIGHMEM=y
# CONFIG_HIGHMEM4G is not set
# CONFIG_HIGHMEM64G is not set
# CONFIG_MATH_EMULATION is not set
# CONFIG_MTRR is not set
CONFIG_SMP=y
......@@ -40,7 +41,6 @@ CONFIG_MODULES=y
#
# General setup
#
# CONFIG_BIGMEM is not set
CONFIG_NET=y
# CONFIG_VISWS is not set
CONFIG_X86_IO_APIC=y
......@@ -111,7 +111,7 @@ CONFIG_BLK_DEV_IDEPCI=y
# CONFIG_BLK_DEV_OFFBOARD is not set
# CONFIG_BLK_DEV_AEC6210 is not set
CONFIG_BLK_DEV_PIIX=y
# CONFIG_BLK_DEV_SIS5513 is not set
# CONFIG_BLK_DEV_PIIX_TUNING is not set
# CONFIG_IDE_CHIPSETS is not set
# CONFIG_BLK_CPQ_DA is not set
......
......@@ -20,6 +20,7 @@
* Naturally it's not a 1:1 relation, but there are similarities.
*/
#include <linux/config.h>
#include <linux/ptrace.h>
#include <linux/errno.h>
#include <linux/signal.h>
......
......@@ -54,7 +54,8 @@
#ifdef CONFIG_BLK_DEV_RAM
#include <linux/blk.h>
#endif
#include <linux/bigmem.h>
#include <linux/highmem.h>
#include <linux/bootmem.h>
#include <asm/processor.h>
#include <linux/console.h>
#include <asm/uaccess.h>
......@@ -403,10 +404,9 @@ void __init add_memory_region(unsigned long start,
#define LOWMEMSIZE() ((*(unsigned short *)__va(0x413)) * 1024)
void __init setup_memory_region(void)
{
#define E820_DEBUG 0
#define E820_DEBUG 1
#ifdef E820_DEBUG
int i;
#endif
......@@ -432,9 +432,8 @@ void __init setup_memory_region(void)
memcpy(e820.map, E820_MAP, e820.nr_map * sizeof e820.map[0]);
#ifdef E820_DEBUG
for (i=0; i < e820.nr_map; i++) {
printk("e820: %ld @ %08lx ",
(unsigned long)(e820.map[i].size),
(unsigned long)(e820.map[i].addr));
printk("e820: %08x @ %08x ", (int)e820.map[i].size,
(int)e820.map[i].addr);
switch (e820.map[i].type) {
case E820_RAM: printk("(usable)\n");
break;
......@@ -464,48 +463,11 @@ void __init setup_memory_region(void)
} /* setup_memory_region */
void __init setup_arch(char **cmdline_p, unsigned long * memory_start_p, unsigned long * memory_end_p)
static inline void parse_mem_cmdline (char ** cmdline_p)
{
unsigned long high_pfn, max_pfn;
char c = ' ', *to = command_line, *from = COMMAND_LINE;
int len = 0;
int i;
int usermem=0;
#ifdef CONFIG_VISWS
visws_get_board_type_and_rev();
#endif
ROOT_DEV = to_kdev_t(ORIG_ROOT_DEV);
drive_info = DRIVE_INFO;
screen_info = SCREEN_INFO;
apm_bios_info = APM_BIOS_INFO;
if( SYS_DESC_TABLE.length != 0 ) {
MCA_bus = SYS_DESC_TABLE.table[3] &0x2;
machine_id = SYS_DESC_TABLE.table[0];
machine_submodel_id = SYS_DESC_TABLE.table[1];
BIOS_revision = SYS_DESC_TABLE.table[2];
}
aux_device_present = AUX_DEVICE_INFO;
#ifdef CONFIG_BLK_DEV_RAM
rd_image_start = RAMDISK_FLAGS & RAMDISK_IMAGE_START_MASK;
rd_prompt = ((RAMDISK_FLAGS & RAMDISK_PROMPT_FLAG) != 0);
rd_doload = ((RAMDISK_FLAGS & RAMDISK_LOAD_FLAG) != 0);
#endif
setup_memory_region();
if (!MOUNT_ROOT_RDONLY)
root_mountflags &= ~MS_RDONLY;
init_mm.start_code = (unsigned long) &_text;
init_mm.end_code = (unsigned long) &_etext;
init_mm.end_data = (unsigned long) &_edata;
init_mm.brk = (unsigned long) &_end;
code_resource.start = virt_to_bus(&_text);
code_resource.end = virt_to_bus(&_etext)-1;
data_resource.start = virt_to_bus(&_etext);
data_resource.end = virt_to_bus(&_edata)-1;
int usermem = 0;
/* Save unparsed command line copy for /proc/cmdline */
memcpy(saved_command_line, COMMAND_LINE, COMMAND_LINE_SIZE);
......@@ -519,8 +481,9 @@ void __init setup_arch(char **cmdline_p, unsigned long * memory_start_p, unsigne
* "mem=XXX[KkmM]@XXX[KkmM]" defines a memory region from
* <start> to <start>+<mem>, overriding the bios size.
*/
if (c == ' ' && *(const unsigned long *)from == *(const unsigned long *)"mem=") {
if (to != command_line) to--;
if (c == ' ' && !memcmp(from, "mem=", 4)) {
if (to != command_line)
to--;
if (!memcmp(from+4, "nopentium", 9)) {
from += 9+4;
boot_cpu_data.x86_capability &= ~X86_FEATURE_PSE;
......@@ -542,7 +505,7 @@ void __init setup_arch(char **cmdline_p, unsigned long * memory_start_p, unsigne
}
mem_size = memparse(from+4, &from);
if (*from == '@')
start_at = memparse(from+1,&from);
start_at = memparse(from+1, &from);
else {
start_at = HIGH_MEMORY;
mem_size -= HIGH_MEMORY;
......@@ -559,54 +522,158 @@ void __init setup_arch(char **cmdline_p, unsigned long * memory_start_p, unsigne
}
*to = '\0';
*cmdline_p = command_line;
}
/* Find the highest page frame number we have available */
max_pfn = 0;
for (i=0; i < e820.nr_map; i++) {
/* RAM? */
if (e820.map[i].type == E820_RAM) {
unsigned long end_pfn = (e820.map[i].addr + e820.map[i].size) >> PAGE_SHIFT;
void __init setup_arch(char **cmdline_p)
{
unsigned long bootmap_size;
unsigned long start_pfn, max_pfn, max_low_pfn;
int i;
if (end_pfn > max_pfn)
max_pfn = end_pfn;
}
#ifdef CONFIG_VISWS
visws_get_board_type_and_rev();
#endif
ROOT_DEV = to_kdev_t(ORIG_ROOT_DEV);
drive_info = DRIVE_INFO;
screen_info = SCREEN_INFO;
apm_bios_info = APM_BIOS_INFO;
if( SYS_DESC_TABLE.length != 0 ) {
MCA_bus = SYS_DESC_TABLE.table[3] &0x2;
machine_id = SYS_DESC_TABLE.table[0];
machine_submodel_id = SYS_DESC_TABLE.table[1];
BIOS_revision = SYS_DESC_TABLE.table[2];
}
aux_device_present = AUX_DEVICE_INFO;
/*
* We can only allocate a limited amount of direct-mapped memory
*/
#define VMALLOC_RESERVE (128 << 20) /* 128MB for vmalloc and initrd */
#define MAXMEM ((unsigned long)(-PAGE_OFFSET-VMALLOC_RESERVE))
#define MAXMEM_PFN (MAXMEM >> PAGE_SHIFT)
#ifdef CONFIG_BLK_DEV_RAM
rd_image_start = RAMDISK_FLAGS & RAMDISK_IMAGE_START_MASK;
rd_prompt = ((RAMDISK_FLAGS & RAMDISK_PROMPT_FLAG) != 0);
rd_doload = ((RAMDISK_FLAGS & RAMDISK_LOAD_FLAG) != 0);
#endif
setup_memory_region();
high_pfn = MAXMEM_PFN;
if (max_pfn < high_pfn)
high_pfn = max_pfn;
if (!MOUNT_ROOT_RDONLY)
root_mountflags &= ~MS_RDONLY;
init_mm.start_code = (unsigned long) &_text;
init_mm.end_code = (unsigned long) &_etext;
init_mm.end_data = (unsigned long) &_edata;
init_mm.brk = (unsigned long) &_end;
code_resource.start = virt_to_bus(&_text);
code_resource.end = virt_to_bus(&_etext)-1;
data_resource.start = virt_to_bus(&_etext);
data_resource.end = virt_to_bus(&_edata)-1;
parse_mem_cmdline(cmdline_p);
#define PFN_UP(x) (((x) + PAGE_SIZE-1) >> PAGE_SHIFT)
#define PFN_DOWN(x) ((x) >> PAGE_SHIFT)
#define PFN_PHYS(x) ((x) << PAGE_SHIFT)
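/*
 * PFN_UP rounds a byte address up to the next whole page frame,
 * PFN_DOWN truncates to the containing frame, and PFN_PHYS converts
 * a frame number back to a physical byte address.  With 4k pages:
 * PFN_UP(0x1001) == 2, PFN_DOWN(0x1fff) == 1, PFN_PHYS(2) == 0x2000.
 */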
/*
* But the bigmem stuff may be able to use more of it
* (but currently only up to about 4GB)
* 128MB for vmalloc and initrd
*/
#ifdef CONFIG_BIGMEM
#define MAXBIGMEM ((unsigned long)(~(VMALLOC_RESERVE-1)))
#define MAXBIGMEM_PFN (MAXBIGMEM >> PAGE_SHIFT)
if (max_pfn > MAX_PFN)
max_pfn = MAX_PFN;
/* When debugging, make half of "normal" memory be BIGMEM memory instead */
#ifdef BIGMEM_DEBUG
high_pfn >>= 1;
#endif
#define VMALLOC_RESERVE (unsigned long)(128 << 20)
#define MAXMEM (unsigned long)(-PAGE_OFFSET-VMALLOC_RESERVE)
#define MAXMEM_PFN PFN_DOWN(MAXMEM)
/*
* partially used pages are not usable - thus
* we are rounding upwards:
*/
start_pfn = PFN_UP(__pa(&_end));
/*
* Find the highest page frame number we have available
*/
max_pfn = 0;
for (i = 0; i < e820.nr_map; i++) {
unsigned long curr_pfn;
/* RAM? */
if (e820.map[i].type != E820_RAM)
continue;
curr_pfn = PFN_DOWN(e820.map[i].addr + e820.map[i].size);
if (curr_pfn > max_pfn)
max_pfn = curr_pfn;
}
bigmem_start = high_pfn << PAGE_SHIFT;
bigmem_end = max_pfn << PAGE_SHIFT;
printk(KERN_NOTICE "%ldMB BIGMEM available.\n", (bigmem_end-bigmem_start) >> 20);
/*
* Determine low and high memory ranges:
*/
max_low_pfn = max_pfn;
if (max_low_pfn > MAXMEM_PFN)
max_low_pfn = MAXMEM_PFN;
#ifdef CONFIG_HIGHMEM
highstart_pfn = highend_pfn = max_pfn;
if (max_pfn > MAXMEM_PFN) {
highstart_pfn = MAXMEM_PFN;
highend_pfn = max_pfn;
printk(KERN_NOTICE "%ldMB HIGHMEM available.\n",
pages_to_mb(highend_pfn - highstart_pfn));
}
#endif
/*
* Initialize the boot-time allocator (with low memory only):
*/
bootmap_size = init_bootmem(start_pfn, max_low_pfn);
ram_resources[1].end = (high_pfn << PAGE_SHIFT)-1;
/*
* FIXME: what about high memory?
*/
ram_resources[1].end = PFN_PHYS(max_low_pfn);
*memory_start_p = (unsigned long) &_end;
*memory_end_p = PAGE_OFFSET + (high_pfn << PAGE_SHIFT);
/*
* Register fully available low RAM pages with the bootmem allocator.
*/
for (i = 0; i < e820.nr_map; i++) {
unsigned long curr_pfn, last_pfn, size;
/*
* Reserve usable low memory
*/
if (e820.map[i].type != E820_RAM)
continue;
/*
* We are rounding up the start address of usable memory:
*/
curr_pfn = PFN_UP(e820.map[i].addr);
if (curr_pfn >= max_low_pfn)
continue;
/*
* ... and at the end of the usable range downwards:
*/
last_pfn = PFN_DOWN(e820.map[i].addr + e820.map[i].size);
if (last_pfn > max_low_pfn)
last_pfn = max_low_pfn;
size = last_pfn - curr_pfn;
free_bootmem(PFN_PHYS(curr_pfn), PFN_PHYS(size));
}
/*
* Reserve the bootmem bitmap itself as well. We do this in two
* steps (first step was init_bootmem()) because this catches
* the (very unlikely) case of us accidentally initializing the
* bootmem allocator with an invalid RAM area.
*/
reserve_bootmem(HIGH_MEMORY, (PFN_PHYS(start_pfn) +
bootmap_size + PAGE_SIZE-1) - (HIGH_MEMORY));
/*
* reserve physical page 0 - it's a special BIOS page on many boxes,
* enabling clean reboots, SMP operation, laptop functions.
*/
reserve_bootmem(0, PAGE_SIZE);
#ifdef __SMP__
/*
* But first pinch a few for the stack/trampoline stuff
* FIXME: Don't need the extra page at 4K, but need to fix
* trampoline before removing it. (see the GDT stuff)
*/
reserve_bootmem(PAGE_SIZE, PAGE_SIZE);
smp_alloc_memory(); /* AP processor realmode stacks in low memory*/
#endif
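/*
 * The bootmem bring-up above, in order: init_bootmem() creates a
 * bitmap covering low memory with every page initially reserved;
 * free_bootmem() then marks each usable e820 RAM range free;
 * reserve_bootmem() takes back the kernel image plus the bitmap
 * itself, physical page 0, and (on SMP) the extra page at 4K,
 * before smp_alloc_memory() grabs the AP trampoline page.
 */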
#ifdef __SMP__
/*
......@@ -616,10 +683,11 @@ void __init setup_arch(char **cmdline_p, unsigned long * memory_start_p, unsigne
#endif
#ifdef CONFIG_BLK_DEV_INITRD
// FIXME needs to do the new bootmem alloc stuff
if (LOADER_TYPE) {
initrd_start = INITRD_START ? INITRD_START + PAGE_OFFSET : 0;
initrd_end = initrd_start+INITRD_SIZE;
if (initrd_end > memory_end) {
if (initrd_end > (max_low_pfn << PAGE_SHIFT)) {
printk("initrd extends beyond end of memory "
"(0x%08lx > 0x%08lx)\ndisabling initrd\n",
initrd_end,memory_end);
......
......@@ -39,6 +39,7 @@
#include <linux/kernel_stat.h>
#include <linux/smp_lock.h>
#include <linux/irq.h>
#include <linux/bootmem.h>
#include <linux/delay.h>
#include <linux/mc146818rtc.h>
......@@ -630,12 +631,15 @@ static unsigned long __init setup_trampoline(void)
* We are called very early to get the low memory for the
* SMP bootup trampoline page.
*/
unsigned long __init smp_alloc_memory(unsigned long mem_base)
void __init smp_alloc_memory(void)
{
if (virt_to_phys((void *)mem_base) >= 0x9F000)
trampoline_base = (void *) alloc_bootmem_pages(PAGE_SIZE);
/*
* Has to be in very low memory so we can execute
* real-mode AP code.
*/
if (__pa(trampoline_base) >= 0x9F000)
BUG();
trampoline_base = (void *)mem_base;
return mem_base + PAGE_SIZE;
}
/*
......@@ -804,11 +808,10 @@ void __init setup_local_APIC(void)
apic_write(APIC_DFR, value);
}
unsigned long __init init_smp_mappings(unsigned long memory_start)
void __init init_smp_mappings(void)
{
unsigned long apic_phys;
memory_start = PAGE_ALIGN(memory_start);
if (smp_found_config) {
apic_phys = mp_lapic_addr;
} else {
......@@ -818,11 +821,10 @@ unsigned long __init init_smp_mappings(unsigned long memory_start)
* could use the real zero-page, but it's safer
* this way if some buggy code writes to this page ...
*/
apic_phys = __pa(memory_start);
memset((void *)memory_start, 0, PAGE_SIZE);
memory_start += PAGE_SIZE;
apic_phys = __pa(alloc_bootmem_pages(PAGE_SIZE));
memset((void *)apic_phys, 0, PAGE_SIZE);
}
set_fixmap(FIX_APIC_BASE,apic_phys);
set_fixmap(FIX_APIC_BASE, apic_phys);
dprintk("mapped APIC to %08lx (%08lx)\n", APIC_BASE, apic_phys);
#ifdef CONFIG_X86_IO_APIC
......@@ -834,9 +836,8 @@ unsigned long __init init_smp_mappings(unsigned long memory_start)
if (smp_found_config) {
ioapic_phys = mp_ioapics[i].mpc_apicaddr;
} else {
ioapic_phys = __pa(memory_start);
memset((void *)memory_start, 0, PAGE_SIZE);
memory_start += PAGE_SIZE;
ioapic_phys = __pa(alloc_bootmem_pages(PAGE_SIZE));
memset((void *)ioapic_phys, 0, PAGE_SIZE);
}
set_fixmap(idx,ioapic_phys);
dprintk("mapped IOAPIC to %08lx (%08lx)\n",
......@@ -845,8 +846,6 @@ unsigned long __init init_smp_mappings(unsigned long memory_start)
}
}
#endif
return memory_start;
}
/*
......@@ -1112,6 +1111,12 @@ int __init start_secondary(void *unused)
smp_callin();
while (!atomic_read(&smp_commenced))
/* nothing */ ;
/*
* low-memory mappings have been cleared, flush them from
* the local TLBs too.
*/
local_flush_tlb();
return cpu_idle();
}
......@@ -1153,7 +1158,6 @@ static int __init fork_by_hand(void)
static void __init do_boot_cpu(int i)
{
unsigned long cfg;
pgd_t maincfg;
struct task_struct *idle;
unsigned long send_status, accept_status;
int timeout, num_starts, j;
......@@ -1207,9 +1211,6 @@ static void __init do_boot_cpu(int i)
*((volatile unsigned short *) phys_to_virt(0x467)) = start_eip & 0xf;
dprintk("3.\n");
maincfg=swapper_pg_dir[0];
((unsigned long *)swapper_pg_dir)[0]=0x102007;
/*
* Be paranoid about clearing APIC errors.
*/
......@@ -1367,9 +1368,6 @@ static void __init do_boot_cpu(int i)
cpucount--;
}
swapper_pg_dir[0]=maincfg;
local_flush_tlb();
/* mark "stuck" area as not stuck */
*((volatile unsigned long *)phys_to_virt(8192)) = 0;
}
......@@ -1567,14 +1565,9 @@ void __init smp_boot_cpus(void)
#ifndef CONFIG_VISWS
{
unsigned long cfg;
/*
* Install writable page 0 entry to set BIOS data area.
*/
cfg = pg0[0];
/* writeable, present, addr 0 */
pg0[0] = _PAGE_RW | _PAGE_PRESENT | 0;
local_flush_tlb();
/*
......@@ -1584,12 +1577,6 @@ void __init smp_boot_cpus(void)
CMOS_WRITE(0, 0xf);
*((volatile long *) phys_to_virt(0x467)) = 0;
/*
* Restore old page 0 entry.
*/
pg0[0] = cfg;
local_flush_tlb();
}
#endif
......@@ -1646,5 +1633,7 @@ void __init smp_boot_cpus(void)
*/
if (cpu_has_tsc && cpucount)
synchronize_tsc_bp();
zap_low_mappings();
}
......@@ -581,6 +581,7 @@ asmlinkage void math_emulate(long arg)
#endif /* CONFIG_MATH_EMULATION */
#ifndef CONFIG_M686
void __init trap_init_f00f_bug(void)
{
unsigned long page;
......@@ -596,8 +597,8 @@ void __init trap_init_f00f_bug(void)
pgd = pgd_offset(&init_mm, page);
pmd = pmd_offset(pgd, page);
pte = pte_offset(pmd, page);
free_page(pte_page(*pte));
*pte = mk_pte(&idt_table, PAGE_KERNEL_RO);
__free_page(pte_page(*pte));
*pte = mk_pte_phys(__pa(&idt_table), PAGE_KERNEL_RO);
local_flush_tlb();
/*
......@@ -608,6 +609,7 @@ void __init trap_init_f00f_bug(void)
idt = (struct desc_struct *)page;
__asm__ __volatile__("lidt %0": "=m" (idt_descr));
}
#endif
#define _set_gate(gate_addr,type,dpl,addr) \
do { \
......@@ -772,7 +774,7 @@ cobalt_init(void)
#endif
void __init trap_init(void)
{
if (readl(0x0FFFD9) == 'E' + ('I'<<8) + ('S'<<16) + ('A'<<24))
if (isa_readl(0x0FFFD9) == 'E'+('I'<<8)+('S'<<16)+('A'<<24))
EISA_bus = 1;
set_trap_gate(0,&divide_error);
......
......@@ -102,7 +102,7 @@ static void mark_screen_rdonly(struct task_struct * tsk)
if (pgd_none(*pgd))
return;
if (pgd_bad(*pgd)) {
printk("vm86: bad pgd entry [%p]:%08lx\n", pgd, pgd_val(*pgd));
pgd_ERROR(*pgd);
pgd_clear(pgd);
return;
}
......@@ -110,7 +110,7 @@ static void mark_screen_rdonly(struct task_struct * tsk)
if (pmd_none(*pmd))
return;
if (pmd_bad(*pmd)) {
printk("vm86: bad pmd entry [%p]:%08lx\n", pmd, pmd_val(*pmd));
pmd_ERROR(*pmd);
pmd_clear(pmd);
return;
}
......
......@@ -10,8 +10,4 @@
O_TARGET := mm.o
O_OBJS := init.o fault.o ioremap.o extable.o
ifeq ($(CONFIG_BIGMEM),y)
O_OBJS += bigmem.o
endif
include $(TOPDIR)/Rules.make
/*
* BIGMEM IA32 code and variables.
*
* (C) 1999 Andrea Arcangeli, SuSE GmbH, andrea@suse.de
* Gerhard Wichert, Siemens AG, Gerhard.Wichert@pdb.siemens.de
*/
#include <linux/mm.h>
#include <linux/bigmem.h>
unsigned long bigmem_start, bigmem_end;
/* NOTE: fixmap_init allocates all the fixmap pagetables contiguously
   in physical space, so we can cache the place of the first one and move
   around without checking the pgd every time. */
pte_t *kmap_pte;
pgprot_t kmap_prot;
#define kmap_get_fixmap_pte(vaddr) \
pte_offset(pmd_offset(pgd_offset_k(vaddr), (vaddr)), (vaddr))
void __init kmap_init(void)
{
unsigned long kmap_vstart;
/* cache the first kmap pte */
kmap_vstart = __fix_to_virt(FIX_KMAP_BEGIN);
kmap_pte = kmap_get_fixmap_pte(kmap_vstart);
kmap_prot = PAGE_KERNEL;
if (boot_cpu_data.x86_capability & X86_FEATURE_PGE)
pgprot_val(kmap_prot) |= _PAGE_GLOBAL;
}
......@@ -76,6 +76,31 @@ int __verify_write(const void * addr, unsigned long size)
return 0;
}
static inline void handle_wp_test (void)
{
const unsigned long vaddr = PAGE_OFFSET;
pgd_t *pgd;
pmd_t *pmd;
pte_t *pte;
/*
* make it read/writable temporarily, so that the fault
* can be handled.
*/
pgd = swapper_pg_dir + __pgd_offset(vaddr);
pmd = pmd_offset(pgd, vaddr);
pte = pte_offset(pmd, vaddr);
*pte = mk_pte_phys(0, PAGE_KERNEL);
local_flush_tlb();
boot_cpu_data.wp_works_ok = 1;
/*
* Beware: Black magic here. The printk is needed here to flush
* CPU state on certain buggy processors.
*/
printk("Ok");
}
asmlinkage void do_invalid_op(struct pt_regs *, unsigned long);
extern unsigned long idt;
......@@ -226,15 +251,8 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long error_code)
* First we check if it was the bootup rw-test, though..
*/
if (boot_cpu_data.wp_works_ok < 0 &&
address == PAGE_OFFSET && (error_code & 1)) {
boot_cpu_data.wp_works_ok = 1;
pg0[0] = pte_val(mk_pte(PAGE_OFFSET, PAGE_KERNEL));
local_flush_tlb();
/*
* Beware: Black magic here. The printk is needed here to flush
* CPU state on certain buggy processors.
*/
printk("Ok");
address == PAGE_OFFSET && (error_code & 1)) {
handle_wp_test();
return;
}
......
......@@ -20,15 +20,19 @@ static inline void remap_area_pte(pte_t * pte, unsigned long address, unsigned l
end = address + size;
if (end > PMD_SIZE)
end = PMD_SIZE;
if (address >= end)
BUG();
do {
if (!pte_none(*pte))
if (!pte_none(*pte)) {
printk("remap_area_pte: page already exists\n");
BUG();
}
set_pte(pte, mk_pte_phys(phys_addr, __pgprot(_PAGE_PRESENT | _PAGE_RW |
_PAGE_DIRTY | _PAGE_ACCESSED | flags)));
address += PAGE_SIZE;
phys_addr += PAGE_SIZE;
pte++;
} while (address < end);
} while (address && (address < end));
}
static inline int remap_area_pmd(pmd_t * pmd, unsigned long address, unsigned long size,
......@@ -41,6 +45,8 @@ static inline int remap_area_pmd(pmd_t * pmd, unsigned long address, unsigned lo
if (end > PGDIR_SIZE)
end = PGDIR_SIZE;
phys_addr -= address;
if (address >= end)
BUG();
do {
pte_t * pte = pte_alloc_kernel(pmd, address);
if (!pte)
......@@ -48,7 +54,7 @@ static inline int remap_area_pmd(pmd_t * pmd, unsigned long address, unsigned lo
remap_area_pte(pte, address, end - address, address + phys_addr, flags);
address = (address + PMD_SIZE) & PMD_MASK;
pmd++;
} while (address < end);
} while (address && (address < end));
return 0;
}
......@@ -61,8 +67,11 @@ static int remap_area_pages(unsigned long address, unsigned long phys_addr,
phys_addr -= address;
dir = pgd_offset(&init_mm, address);
flush_cache_all();
while (address < end) {
pmd_t *pmd = pmd_alloc_kernel(dir, address);
if (address >= end)
BUG();
do {
pmd_t *pmd;
pmd = pmd_alloc_kernel(dir, address);
if (!pmd)
return -ENOMEM;
if (remap_area_pmd(pmd, address, end - address,
......@@ -71,7 +80,7 @@ static int remap_area_pages(unsigned long address, unsigned long phys_addr,
set_pgdir(address, *dir);
address = (address + PGDIR_SIZE) & PGDIR_MASK;
dir++;
}
} while (address && (address < end));
flush_tlb_all();
return 0;
}
......
......@@ -461,7 +461,7 @@ int ide_dmaproc (ide_dma_action_t func, ide_drive_t *drive)
int ide_release_dma (ide_hwif_t *hwif)
{
if (hwif->dmatable) {
clear_page((unsigned long)hwif->dmatable); /* clear PRD 1st */
clear_page((void *)hwif->dmatable); /* clear PRD 1st */
free_page((unsigned long)hwif->dmatable); /* free PRD 2nd */
}
if ((hwif->dma_extra) && (hwif->channel == 0))
......
......@@ -923,6 +923,7 @@ void ide_error (ide_drive_t *drive, const char *msg, byte stat)
*/
void ide_cmd(ide_drive_t *drive, byte cmd, byte nsect, ide_handler_t *handler)
{
drive->timeout = WAIT_CMD;
ide_set_handler (drive, handler);
if (IDE_CONTROL_REG)
OUT_BYTE(drive->ctl,IDE_CONTROL_REG); /* clear nIEN */
......
......@@ -94,6 +94,7 @@
#ifdef CONFIG_APM
#include <linux/apm_bios.h>
#endif
#include <linux/bootmem.h>
#include <asm/io.h>
#include <asm/system.h>
......@@ -2286,7 +2287,7 @@ static void vc_init(unsigned int currcons, unsigned int rows, unsigned int cols,
struct tty_driver console_driver;
static int console_refcount;
unsigned long __init con_init(unsigned long kmem_start)
void __init con_init(void)
{
const char *display_desc = NULL;
unsigned int currcons = 0;
......@@ -2295,7 +2296,7 @@ unsigned long __init con_init(unsigned long kmem_start)
display_desc = conswitchp->con_startup();
if (!display_desc) {
fg_console = 0;
return kmem_start;
return;
}
memset(&console_driver, 0, sizeof(struct tty_driver));
......@@ -2336,19 +2337,18 @@ unsigned long __init con_init(unsigned long kmem_start)
timer_active |= 1<<BLANK_TIMER;
}
/* Unfortunately, kmalloc is not running yet */
/* Due to kmalloc roundup allocating statically is more efficient -
so provide MIN_NR_CONSOLES for people with very little memory */
/*
* kmalloc is not running yet - we use the bootmem allocator.
*/
for (currcons = 0; currcons < MIN_NR_CONSOLES; currcons++) {
int j, k ;
vc_cons[currcons].d = (struct vc_data *) kmem_start;
kmem_start += sizeof(struct vc_data);
vt_cons[currcons] = (struct vt_struct *) kmem_start;
kmem_start += sizeof(struct vt_struct);
vc_cons[currcons].d = (struct vc_data *)
alloc_bootmem(sizeof(struct vc_data));
vt_cons[currcons] = (struct vt_struct *)
alloc_bootmem(sizeof(struct vt_struct));
visual_init(currcons, 1);
screenbuf = (unsigned short *) kmem_start;
kmem_start += screenbuf_size;
screenbuf = (unsigned short *) alloc_bootmem(screenbuf_size);
kmalloced = 0;
vc_init(currcons, video_num_lines, video_num_columns,
currcons || !sw->con_save_screen);
......@@ -2376,8 +2376,6 @@ unsigned long __init con_init(unsigned long kmem_start)
#endif
init_bh(CONSOLE_BH, console_bh);
return kmem_start;
}
#ifndef VT_SINGLE_DRIVER
......
......@@ -811,7 +811,7 @@ static int n_tty_open(struct tty_struct *tty)
if (!tty->read_buf) {
tty->read_buf = (unsigned char *)
get_free_page(in_interrupt() ? GFP_ATOMIC : GFP_KERNEL);
get_zeroed_page(in_interrupt() ? GFP_ATOMIC : GFP_KERNEL);
if (!tty->read_buf)
return -ENOMEM;
}
......
......@@ -1127,7 +1127,7 @@ static int startup(struct async_struct * info)
unsigned short ICP;
#endif
page = get_free_page(GFP_KERNEL);
page = get_zeroed_page(GFP_KERNEL);
if (!page)
return -ENOMEM;
......@@ -2974,7 +2974,7 @@ static int rs_open(struct tty_struct *tty, struct file * filp)
#endif
if (!tmp_buf) {
page = get_free_page(GFP_KERNEL);
page = get_zeroed_page(GFP_KERNEL);
if (!page) {
return -ENOMEM;
}
......@@ -4359,10 +4359,9 @@ static struct console sercons = {
/*
* Register console.
*/
long __init serial_console_init(long kmem_start, long kmem_end)
void __init serial_console_init(void)
{
register_console(&sercons);
return kmem_start;
}
#endif
......
......@@ -129,7 +129,7 @@ static int tty_fasync(int fd, struct file * filp, int on);
extern int sx_init (void);
#endif
#ifdef CONFIG_8xx
extern long console_8xx_init(long, long);
extern void console_8xx_init(void);
extern int rs_8xx_init(void);
#endif /* CONFIG_8xx */
......@@ -798,7 +798,7 @@ static int init_dev(kdev_t device, struct tty_struct **ret_tty)
tp = o_tp = NULL;
ltp = o_ltp = NULL;
tty = (struct tty_struct*) get_free_page(GFP_KERNEL);
tty = (struct tty_struct*) get_zeroed_page(GFP_KERNEL);
if(!tty)
goto fail_no_mem;
initialize_tty_struct(tty);
......@@ -824,7 +824,7 @@ static int init_dev(kdev_t device, struct tty_struct **ret_tty)
}
if (driver->type == TTY_DRIVER_TYPE_PTY) {
o_tty = (struct tty_struct *) get_free_page(GFP_KERNEL);
o_tty = (struct tty_struct *) get_zeroed_page(GFP_KERNEL);
if (!o_tty)
goto free_mem_out;
initialize_tty_struct(o_tty);
......@@ -2062,7 +2062,7 @@ int tty_unregister_driver(struct tty_driver *driver)
* Just do some early initializations, and do the complex setup
* later.
*/
long __init console_init(long kmem_start, long kmem_end)
void __init console_init(void)
{
/* Setup the default TTY line discipline. */
memset(ldiscs, 0, sizeof(ldiscs));
......@@ -2085,16 +2085,15 @@ long __init console_init(long kmem_start, long kmem_end)
* inform about problems etc..
*/
#ifdef CONFIG_VT
kmem_start = con_init(kmem_start);
con_init();
#endif
#ifdef CONFIG_SERIAL_CONSOLE
#ifdef CONFIG_8xx
kmem_start = console_8xx_init(kmem_start, kmem_end);
console_8xx_init();
#else
kmem_start = serial_console_init(kmem_start, kmem_end);
serial_console_init();
#endif /* CONFIG_8xx */
#endif
return kmem_start;
}
static struct tty_driver dev_tty_driver, dev_syscons_driver;
......@@ -2109,7 +2108,7 @@ static struct tty_driver dev_console_driver;
* Ok, now we can initialize the rest of the tty devices and can count
* on memory allocations, interrupts etc..
*/
int __init tty_init(void)
void __init tty_init(void)
{
if (sizeof(struct tty_struct) > PAGE_SIZE)
panic("size of tty structure > PAGE_SIZE!");
......@@ -2220,5 +2219,4 @@ int __init tty_init(void)
#ifdef CONFIG_VT
vcs_init();
#endif
return 0;
}
......@@ -1495,7 +1495,7 @@ speedo_rx(struct net_device *dev)
rxf = sp->rx_ringp[entry] = (struct RxFD *)skb->tail;
skb->dev = dev;
skb_reserve(skb, sizeof(struct RxFD));
rxf->rx_buf_addr = virt_to_le32bus(skb->tail);
rxf->rx_buf_addr = virt_to_bus(skb->tail);
} else {
rxf = sp->rx_ringp[entry];
}
......
......@@ -81,6 +81,7 @@ static int full_duplex[MAX_UNITS] = {-1, -1, -1, -1, -1, -1, -1, -1};
#endif
#include <linux/kernel.h>
#include <linux/version.h>
#include <linux/sched.h>
#include <linux/string.h>
#include <linux/timer.h>
......
......@@ -111,11 +111,6 @@ static const int multicast_filter_limit = 32;
#ifdef MODULE
char kernel_version[] = UTS_RELEASE;
#else
#ifndef __alpha__
#define ioremap vremap
#define iounmap vfree
#endif
#endif
#if defined(MODULE) && LINUX_VERSION_CODE > 0x20115
MODULE_AUTHOR("Donald Becker <becker@cesdis.gsfc.nasa.gov>");
......
......@@ -97,7 +97,7 @@ static kmem_cache_t *bh_cachep;
static int grow_buffers(int size);
/* This is used by some architectures to estimate available memory. */
atomic_t buffermem = ATOMIC_INIT(0);
atomic_t buffermem_pages = ATOMIC_INIT(0);
/* Here is the parameter block for the bdflush process. If you add or
* remove any of the parameters, make sure to update kernel/sysctl.c.
......@@ -827,7 +827,7 @@ static int balance_dirty_state(kdev_t dev)
unsigned long dirty, tot, hard_dirty_limit, soft_dirty_limit;
dirty = size_buffers_type[BUF_DIRTY] >> PAGE_SHIFT;
tot = nr_lru_pages + nr_free_pages - nr_free_bigpages;
tot = nr_lru_pages + nr_free_pages + nr_free_highpages;
hard_dirty_limit = tot * bdf_prm.b_un.nfract / 100;
soft_dirty_limit = hard_dirty_limit >> 1;
......@@ -1267,7 +1267,7 @@ int block_flushpage(struct inode *inode, struct page *page, unsigned long offset
*/
if (!offset) {
if (!try_to_free_buffers(page)) {
atomic_add(PAGE_CACHE_SIZE, &buffermem);
atomic_inc(&buffermem_pages);
return 0;
}
}
......@@ -1834,12 +1834,12 @@ int brw_kiovec(int rw, int nr, struct kiobuf *iovec[],
dprintk ("iobuf %d %d %d\n", offset, length, size);
for (pageind = 0; pageind < iobuf->nr_pages; pageind++) {
page = iobuf->pagelist[pageind];
map = iobuf->maplist[pageind];
if (map && PageBIGMEM(map)) {
if (map && PageHighMem(map)) {
err = -EIO;
goto error;
}
page = page_address(map);
while (length > 0) {
blocknr = b[bufind++];
......@@ -2115,7 +2115,7 @@ static int grow_buffers(int size)
page_map = mem_map + MAP_NR(page);
page_map->buffers = bh;
lru_cache_add(page_map);
atomic_add(PAGE_SIZE, &buffermem);
atomic_inc(&buffermem_pages);
return 1;
no_buffer_head:
......@@ -2208,7 +2208,8 @@ void show_buffers(void)
int nlist;
static char *buf_types[NR_LIST] = { "CLEAN", "LOCKED", "DIRTY" };
printk("Buffer memory: %6dkB\n", atomic_read(&buffermem) >> 10);
printk("Buffer memory: %6dkB\n",
atomic_read(&buffermem_pages) << (PAGE_SHIFT-10));
#ifdef __SMP__ /* trylock does nothing on UP and so we could deadlock */
if (!spin_trylock(&lru_list_lock))
......@@ -2246,7 +2247,7 @@ void show_buffers(void)
* Use gfp() for the hash table to decrease TLB misses, use
* SLAB cache for buffer heads.
*/
void __init buffer_init(unsigned long memory_size)
void __init buffer_init(unsigned long mempages)
{
int order, i;
unsigned int nr_hash;
......@@ -2254,9 +2255,11 @@ void __init buffer_init(unsigned long memory_size)
/* The buffer cache hash table is less important these days,
* trim it a bit.
*/
memory_size >>= 14;
memory_size *= sizeof(struct buffer_head *);
for (order = 0; (PAGE_SIZE << order) < memory_size; order++)
mempages >>= 14;
mempages *= sizeof(struct buffer_head *);
for (order = 0; (1 << order) < mempages; order++)
;
/* try to allocate something until we get it or we're asking
......
......@@ -420,7 +420,7 @@ int shrink_dcache_memory(int priority, unsigned int gfp_mask)
unlock_kernel();
/* FIXME: kmem_cache_shrink here should tell us
the number of pages freed, and it should
work in a __GFP_DMA/__GFP_BIGMEM behaviour
work in a __GFP_DMA/__GFP_HIGHMEM behaviour
to free only the interesting pages in
function of the needs of the current allocation. */
kmem_cache_shrink(dentry_cache);
......
......@@ -31,6 +31,8 @@
#include <linux/fcntl.h>
#include <linux/smp_lock.h>
#include <linux/init.h>
#include <linux/pagemap.h>
#include <linux/highmem.h>
#include <asm/uaccess.h>
#include <asm/pgtable.h>
......@@ -212,20 +214,42 @@ int copy_strings(int argc,char ** argv, struct linux_binprm *bprm)
/* XXX: add architecture specific overflow check here. */
pos = bprm->p;
while (len>0) {
char *pag;
while (len > 0) {
char *kaddr;
int i, new, err;
struct page *page;
int offset, bytes_to_copy;
offset = pos % PAGE_SIZE;
if (!(pag = (char *) bprm->page[pos/PAGE_SIZE]) &&
!(pag = (char *) bprm->page[pos/PAGE_SIZE] =
(unsigned long *) get_free_page(GFP_USER)))
return -ENOMEM;
i = pos/PAGE_SIZE;
page = bprm->page[i];
new = 0;
if (!page) {
/*
* Cannot yet use highmem page because
* we cannot sleep with a kmap held.
*/
page = __get_pages(GFP_USER, 0);
bprm->page[i] = page;
if (!page)
return -ENOMEM;
new = 1;
}
kaddr = (char *)kmap(page, KM_WRITE);
if (new && offset)
memset(kaddr, 0, offset);
bytes_to_copy = PAGE_SIZE - offset;
if (bytes_to_copy > len)
if (bytes_to_copy > len) {
bytes_to_copy = len;
if (copy_from_user(pag + offset, str, bytes_to_copy))
if (new)
memset(kaddr+offset+len, 0, PAGE_SIZE-offset-len);
}
err = copy_from_user(kaddr + offset, str, bytes_to_copy);
flush_page_to_ram(kaddr);
kunmap((unsigned long)kaddr, KM_WRITE);
if (err)
return -EFAULT;
pos += bytes_to_copy;
......@@ -647,14 +671,22 @@ void remove_arg_zero(struct linux_binprm *bprm)
{
if (bprm->argc) {
unsigned long offset;
char * page;
char * kaddr;
struct page *page;
offset = bprm->p % PAGE_SIZE;
page = (char*)bprm->page[bprm->p/PAGE_SIZE];
while(bprm->p++,*(page+offset++))
if(offset==PAGE_SIZE){
offset=0;
page = (char*)bprm->page[bprm->p/PAGE_SIZE];
}
goto inside;
while (bprm->p++, *(kaddr+offset++)) {
if (offset != PAGE_SIZE)
continue;
offset = 0;
kunmap((unsigned long)kaddr, KM_WRITE);
inside:
page = bprm->page[bprm->p/PAGE_SIZE];
kaddr = (char *)kmap(page, KM_WRITE);
}
kunmap((unsigned long)kaddr, KM_WRITE);
bprm->argc--;
}
}
......@@ -683,8 +715,8 @@ int search_binary_handler(struct linux_binprm *bprm,struct pt_regs *regs)
bprm->dentry = NULL;
bprm_loader.p = PAGE_SIZE*MAX_ARG_PAGES-sizeof(void *);
for (i=0 ; i<MAX_ARG_PAGES ; i++) /* clear page-table */
bprm_loader.page[i] = 0;
for (i = 0 ; i < MAX_ARG_PAGES ; i++) /* clear page-table */
bprm_loader.page[i] = NULL;
dentry = open_namei(dynloader[0], 0, 0);
retval = PTR_ERR(dentry);
......@@ -800,8 +832,9 @@ int do_execve(char * filename, char ** argv, char ** envp, struct pt_regs * regs
/* Assumes that free_page() can take a NULL argument. */
/* I hope this is ok for all architectures */
for (i=0 ; i<MAX_ARG_PAGES ; i++)
free_page(bprm.page[i]);
for (i = 0 ; i < MAX_ARG_PAGES ; i++)
if (bprm.page[i])
__free_page(bprm.page[i]);
return retval;
}
......
......@@ -16,7 +16,7 @@
/*
* Allocate an fd array, using get_free_page() if possible.
* Allocate an fd array, using __get_free_page() if possible.
* Note: the array isn't cleared at allocation time.
*/
struct file ** alloc_fd_array(int num)
......@@ -129,7 +129,7 @@ int expand_fd_array(struct files_struct *files, int nr)
}
/*
* Allocate an fdset array, using get_free_page() if possible.
* Allocate an fdset array, using __get_free_page() if possible.
* Note: the array isn't cleared at allocation time.
*/
fd_set * alloc_fdset(int num)
......
......@@ -89,6 +89,7 @@ static void init_once(void * foo, kmem_cache_t * cachep, unsigned long flags)
memset(inode, 0, sizeof(*inode));
init_waitqueue_head(&inode->i_wait);
INIT_LIST_HEAD(&inode->i_hash);
INIT_LIST_HEAD(&inode->i_pages);
INIT_LIST_HEAD(&inode->i_dentry);
sema_init(&inode->i_sem, 1);
spin_lock_init(&inode->i_shared_lock);
......@@ -401,7 +402,7 @@ int shrink_icache_memory(int priority, int gfp_mask)
prune_icache(count);
/* FIXME: kmem_cache_shrink here should tell us
the number of pages freed, and it should
work in a __GFP_DMA/__GFP_BIGMEM behaviour
work in a __GFP_DMA/__GFP_HIGHMEM behaviour
to free only the interesting pages in
function of the needs of the current allocation. */
kmem_cache_shrink(inode_cachep);
......
......@@ -50,7 +50,6 @@ int alloc_kiovec(int nr, struct kiobuf **bufp)
init_waitqueue_head(&iobuf->wait_queue);
iobuf->end_io = simple_wakeup_kiobuf;
iobuf->array_len = KIO_STATIC_PAGES;
iobuf->pagelist = iobuf->page_array;
iobuf->maplist = iobuf->map_array;
*bufp++ = iobuf;
}
......@@ -65,50 +64,35 @@ void free_kiovec(int nr, struct kiobuf **bufp)
for (i = 0; i < nr; i++) {
iobuf = bufp[i];
if (iobuf->array_len > KIO_STATIC_PAGES) {
kfree (iobuf->pagelist);
if (iobuf->array_len > KIO_STATIC_PAGES)
kfree (iobuf->maplist);
}
kmem_cache_free(kiobuf_cachep, bufp[i]);
}
}
int expand_kiobuf(struct kiobuf *iobuf, int wanted)
{
unsigned long * pagelist;
struct page ** maplist;
if (iobuf->array_len >= wanted)
return 0;
pagelist = (unsigned long *)
kmalloc(wanted * sizeof(unsigned long), GFP_KERNEL);
if (!pagelist)
return -ENOMEM;
maplist = (struct page **)
kmalloc(wanted * sizeof(struct page **), GFP_KERNEL);
if (!maplist) {
kfree(pagelist);
if (!maplist)
return -ENOMEM;
}
/* Did it grow while we waited? */
if (iobuf->array_len >= wanted) {
kfree(pagelist);
kfree(maplist);
return 0;
}
memcpy (pagelist, iobuf->pagelist, wanted * sizeof(unsigned long));
memcpy (maplist, iobuf->maplist, wanted * sizeof(struct page **));
if (iobuf->array_len > KIO_STATIC_PAGES) {
kfree (iobuf->pagelist);
if (iobuf->array_len > KIO_STATIC_PAGES)
kfree (iobuf->maplist);
}
iobuf->pagelist = pagelist;
iobuf->maplist = maplist;
iobuf->array_len = wanted;
return 0;
......
......@@ -308,8 +308,7 @@ static struct page *try_to_get_dirent_page(struct file *file, __u32 cookie, int
struct nfs_readdirres rd_res;
struct dentry *dentry = file->f_dentry;
struct inode *inode = dentry->d_inode;
struct page *page, **hash;
unsigned long page_cache;
struct page *page, **hash, *page_cache;
long offset;
__u32 *cookiep;
......@@ -341,14 +340,14 @@ static struct page *try_to_get_dirent_page(struct file *file, __u32 cookie, int
goto unlock_out;
}
page = page_cache_entry(page_cache);
page = page_cache;
if (add_to_page_cache_unique(page, inode, offset, hash)) {
page_cache_release(page);
goto repeat;
}
rd_args.fh = NFS_FH(dentry);
rd_res.buffer = (char *)page_cache;
rd_res.buffer = (char *)page_address(page_cache);
rd_res.bufsiz = PAGE_CACHE_SIZE;
rd_res.cookie = *cookiep;
do {
......
......@@ -59,8 +59,7 @@ struct inode_operations nfs_symlink_inode_operations = {
static struct page *try_to_get_symlink_page(struct dentry *dentry, struct inode *inode)
{
struct nfs_readlinkargs rl_args;
struct page *page, **hash;
unsigned long page_cache;
struct page *page, **hash, *page_cache;
page = NULL;
page_cache = page_cache_alloc();
......@@ -75,7 +74,7 @@ static struct page *try_to_get_symlink_page(struct dentry *dentry, struct inode
goto unlock_out;
}
page = page_cache_entry(page_cache);
page = page_cache;
if (add_to_page_cache_unique(page, inode, 0, hash)) {
page_cache_release(page);
goto repeat;
......@@ -86,7 +85,7 @@ static struct page *try_to_get_symlink_page(struct dentry *dentry, struct inode
* XDR response verification will NULL terminate it.
*/
rl_args.fh = NFS_FH(dentry);
rl_args.buffer = (const void *)page_cache;
rl_args.buffer = (const void *)page_address(page_cache);
if (rpc_call(NFS_CLIENT(inode), NFSPROC_READLINK,
&rl_args, NULL, 0) < 0)
goto error;
......
......@@ -386,8 +386,8 @@ static int get_meminfo(char * buffer)
i.sharedram >> 10,
i.bufferram >> 10,
atomic_read(&page_cache_size) << (PAGE_SHIFT - 10),
i.totalbig >> 10,
i.freebig >> 10,
i.totalhigh >> 10,
i.freehigh >> 10,
i.totalswap >> 10,
i.freeswap >> 10);
}
......@@ -407,7 +407,7 @@ static int get_cmdline(char * buffer)
return sprintf(buffer, "%s\n", saved_command_line);
}
static unsigned long get_phys_addr(struct mm_struct * mm, unsigned long ptr)
static struct page * get_phys_page(struct mm_struct * mm, unsigned long ptr)
{
pgd_t *page_dir;
pmd_t *page_middle;
......@@ -434,41 +434,41 @@ static unsigned long get_phys_addr(struct mm_struct * mm, unsigned long ptr)
pte = *pte_offset(page_middle,ptr);
if (!pte_present(pte))
return 0;
return pte_page(pte) + (ptr & ~PAGE_MASK);
return pte_page(pte);
}
#include <linux/bigmem.h>
#include <linux/highmem.h>
static int get_array(struct mm_struct *mm, unsigned long start, unsigned long end, char * buffer)
{
unsigned long addr;
int size = 0, result = 0;
char c;
char *buf, c;
if (start >= end)
return result;
for (;;) {
addr = get_phys_addr(mm, start);
if (!addr)
struct page *page = get_phys_page(mm, start);
if (!page)
return result;
addr = kmap(addr, KM_READ);
addr = kmap(page, KM_READ);
buf = (char *) (addr + (start & ~PAGE_MASK));
do {
c = *(char *) addr;
c = *buf;
if (!c)
result = size;
if (size < PAGE_SIZE)
buffer[size++] = c;
else {
if (size >= PAGE_SIZE) {
kunmap(addr, KM_READ);
return result;
}
addr++;
buffer[size++] = c;
buf++;
start++;
if (!c && start >= end) {
kunmap(addr, KM_READ);
return result;
}
} while (addr & ~PAGE_MASK);
} while (~PAGE_MASK & (unsigned long)buf);
kunmap(addr, KM_READ);
}
return result;
......
......@@ -10,7 +10,7 @@
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/proc_fs.h>
#include <linux/bigmem.h>
#include <linux/highmem.h>
#include <asm/page.h>
#include <asm/uaccess.h>
......@@ -79,9 +79,10 @@ static ssize_t mem_read(struct file * file, char * buf,
pgd_t *page_dir;
pmd_t *page_middle;
pte_t pte;
char * page;
struct page * page;
struct task_struct * tsk;
unsigned long addr;
unsigned long maddr; /* temporary mapped address */
char *tmp;
ssize_t scount, i;
......@@ -102,7 +103,7 @@ static ssize_t mem_read(struct file * file, char * buf,
if (pgd_none(*page_dir))
break;
if (pgd_bad(*page_dir)) {
printk("Bad page dir entry %08lx\n", pgd_val(*page_dir));
pgd_ERROR(*page_dir);
pgd_clear(page_dir);
break;
}
......@@ -110,20 +111,20 @@ static ssize_t mem_read(struct file * file, char * buf,
if (pmd_none(*page_middle))
break;
if (pmd_bad(*page_middle)) {
printk("Bad page middle entry %08lx\n", pmd_val(*page_middle));
pmd_ERROR(*page_middle);
pmd_clear(page_middle);
break;
}
pte = *pte_offset(page_middle,addr);
if (!pte_present(pte))
break;
page = (char *) pte_page(pte) + (addr & ~PAGE_MASK);
page = pte_page(pte);
i = PAGE_SIZE-(addr & ~PAGE_MASK);
if (i > scount)
i = scount;
page = (char *) kmap((unsigned long) page, KM_READ);
copy_to_user(tmp, page, i);
kunmap((unsigned long) page, KM_READ);
maddr = kmap(page, KM_READ);
copy_to_user(tmp, (char *)maddr + (addr & ~PAGE_MASK), i);
kunmap(maddr, KM_READ);
addr += i;
tmp += i;
scount -= i;
......@@ -141,9 +142,10 @@ static ssize_t mem_write(struct file * file, char * buf,
pgd_t *page_dir;
pmd_t *page_middle;
pte_t pte;
char * page;
struct page * page;
struct task_struct * tsk;
unsigned long addr;
unsigned long maddr; /* temporary mapped address */
char *tmp;
long i;
......@@ -159,7 +161,7 @@ static ssize_t mem_write(struct file * file, char * buf,
if (pgd_none(*page_dir))
break;
if (pgd_bad(*page_dir)) {
printk("Bad page dir entry %08lx\n", pgd_val(*page_dir));
pgd_ERROR(*page_dir);
pgd_clear(page_dir);
break;
}
......@@ -167,7 +169,7 @@ static ssize_t mem_write(struct file * file, char * buf,
if (pmd_none(*page_middle))
break;
if (pmd_bad(*page_middle)) {
printk("Bad page middle entry %08lx\n", pmd_val(*page_middle));
pmd_ERROR(*page_middle);
pmd_clear(page_middle);
break;
}
......@@ -176,13 +178,13 @@ static ssize_t mem_write(struct file * file, char * buf,
break;
if (!pte_write(pte))
break;
page = (char *) pte_page(pte) + (addr & ~PAGE_MASK);
page = pte_page(pte);
i = PAGE_SIZE-(addr & ~PAGE_MASK);
if (i > count)
i = count;
page = (unsigned long) kmap((unsigned long) page, KM_WRITE);
copy_from_user(page, tmp, i);
kunmap((unsigned long) page, KM_WRITE);
maddr = kmap(page, KM_WRITE);
copy_from_user((char *)maddr + (addr & ~PAGE_MASK), tmp, i);
kunmap(maddr, KM_WRITE);
addr += i;
tmp += i;
count -= i;
......@@ -248,14 +250,14 @@ int mem_mmap(struct file * file, struct vm_area_struct * vma)
if (pgd_none(*src_dir))
return -EINVAL;
if (pgd_bad(*src_dir)) {
printk("Bad source page dir entry %08lx\n", pgd_val(*src_dir));
pgd_ERROR(*src_dir);
return -EINVAL;
}
src_middle = pmd_offset(src_dir, stmp);
if (pmd_none(*src_middle))
return -EINVAL;
if (pmd_bad(*src_middle)) {
printk("Bad source page middle entry %08lx\n", pmd_val(*src_middle));
pmd_ERROR(*src_middle);
return -EINVAL;
}
src_table = pte_offset(src_middle, stmp);
......@@ -301,9 +303,9 @@ int mem_mmap(struct file * file, struct vm_area_struct * vma)
set_pte(src_table, pte_mkdirty(*src_table));
set_pte(dest_table, *src_table);
mapnr = MAP_NR(pte_page(*src_table));
mapnr = pte_pagenr(*src_table);
if (mapnr < max_mapnr)
get_page(mem_map + MAP_NR(pte_page(*src_table)));
get_page(mem_map + pte_pagenr(*src_table));
stmp += PAGE_SIZE;
dtmp += PAGE_SIZE;
......
......@@ -236,6 +236,7 @@ static void __init check_amd_k6(void)
* have the F0 0F bug, which lets nonpriviledged users lock up the system:
*/
#ifndef CONFIG_M686
extern void trap_init_f00f_bug(void);
static void __init check_pentium_f00f(void)
......@@ -250,6 +251,7 @@ static void __init check_pentium_f00f(void)
trap_init_f00f_bug();
}
}
#endif
/*
* Perform the Cyrix 5/2 test. A Cyrix won't change
......@@ -424,7 +426,9 @@ static void __init check_bugs(void)
check_hlt();
check_popad();
check_amd_k6();
#ifndef CONFIG_M686
check_pentium_f00f();
#endif
check_cyrix_coma();
system_utsname.machine[1] = '0' + boot_cpu_data.x86;
}
......@@ -17,7 +17,7 @@
#include <linux/kernel.h>
#include <asm/apic.h>
#include <asm/page.h>
#ifdef CONFIG_BIGMEM
#ifdef CONFIG_HIGHMEM
#include <linux/threads.h>
#include <asm/kmap_types.h>
#endif
......@@ -34,7 +34,7 @@
*
* these 'compile-time allocated' memory buffers are
* fixed-size 4k pages. (or larger if used with an increment
* bigger than 1) use fixmap_set(idx,phys) to associate
* bigger than 1) use fixmap_set(idx,phys) to associate
* physical memory with fixmap indices.
*
* TLB entries of such buffers will not be flushed across
......@@ -61,7 +61,7 @@ enum fixed_addresses {
FIX_LI_PCIA, /* Lithium PCI Bridge A */
FIX_LI_PCIB, /* Lithium PCI Bridge B */
#endif
#ifdef CONFIG_BIGMEM
#ifdef CONFIG_HIGHMEM
FIX_KMAP_BEGIN, /* reserved pte's for temporary kernel mappings */
FIX_KMAP_END = FIX_KMAP_BEGIN+(KM_TYPE_NR*NR_CPUS)-1,
#endif
......
/*
* bigmem.h: virtual kernel memory mappings for big memory
* highmem.h: virtual kernel memory mappings for high memory
*
* Used in CONFIG_BIGMEM systems for memory pages which are not
* addressable by direct kernel virtual addresses.
* Used in CONFIG_HIGHMEM systems for memory pages which
* are not addressable by direct kernel virtual addresses.
*
* Copyright (C) 1999 Gerhard Wichert, Siemens AG
* Gerhard.Wichert@pdb.siemens.de
*
*
* Redesigned the x86 32-bit VM architecture to deal with
* up to 16 Terabytes of physical memory. With current x86 CPUs
* we now support up to 64 Gigabytes physical RAM.
*
* Copyright (C) 1999 Ingo Molnar <mingo@redhat.com>
*/
#ifndef _ASM_BIGMEM_H
#define _ASM_BIGMEM_H
#ifndef _ASM_HIGHMEM_H
#define _ASM_HIGHMEM_H
#include <linux/init.h>
#define BIGMEM_DEBUG /* undef for production */
/* undef for production */
#define HIGHMEM_DEBUG 1
/* declarations for bigmem.c */
extern unsigned long bigmem_start, bigmem_end;
extern int nr_free_bigpages;
/* declarations for highmem.c */
extern unsigned long highstart_pfn, highend_pfn;
extern pte_t *kmap_pte;
extern pgprot_t kmap_prot;
extern void kmap_init(void) __init;
/* kmap helper functions necessary to access the bigmem pages in kernel */
/* kmap helper functions necessary to access the highmem pages in kernel */
#include <asm/pgtable.h>
#include <asm/kmap_types.h>
extern inline unsigned long kmap(unsigned long kaddr, enum km_type type)
extern inline unsigned long kmap(struct page *page, enum km_type type)
{
if (__pa(kaddr) < bigmem_start)
return kaddr;
if (page < highmem_start_page)
return page_address(page);
{
enum fixed_addresses idx = type+KM_TYPE_NR*smp_processor_id();
unsigned long vaddr = __fix_to_virt(FIX_KMAP_BEGIN+idx);
#ifdef BIGMEM_DEBUG
#if HIGHMEM_DEBUG
if (!pte_none(*(kmap_pte-idx)))
{
__label__ here;
......@@ -45,16 +52,16 @@ extern inline unsigned long kmap(unsigned long kaddr, enum km_type type)
smp_processor_id(), &&here);
}
#endif
set_pte(kmap_pte-idx, mk_pte(kaddr & PAGE_MASK, kmap_prot));
set_pte(kmap_pte-idx, mk_pte(page, kmap_prot));
__flush_tlb_one(vaddr);
return vaddr | (kaddr & ~PAGE_MASK);
return vaddr;
}
}
extern inline void kunmap(unsigned long vaddr, enum km_type type)
{
#ifdef BIGMEM_DEBUG
#if HIGHMEM_DEBUG
enum fixed_addresses idx = type+KM_TYPE_NR*smp_processor_id();
if ((vaddr & PAGE_MASK) == __fix_to_virt(FIX_KMAP_BEGIN+idx))
{
......@@ -66,4 +73,13 @@ extern inline void kunmap(unsigned long vaddr, enum km_type type)
#endif
}
#endif /* _ASM_BIGMEM_H */
extern inline void kmap_check(void)
{
#if HIGHMEM_DEBUG
int idx_base = KM_TYPE_NR*smp_processor_id(), i;
for (i = idx_base; i < idx_base+KM_TYPE_NR; i++)
if (!pte_none(*(kmap_pte-i)))
BUG();
#endif
}
#endif /* _ASM_HIGHMEM_H */
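A minimal usage sketch of this interim kmap interface, mirroring the
fs/proc hunks elsewhere in this commit (buffer, offset and len are
placeholders); note that kmap() takes an explicit km_type here and
kunmap() takes the mapped virtual address back:

	unsigned long vaddr;

	vaddr = kmap(page, KM_READ);	/* low pages pass straight through */
	memcpy(buffer, (char *)vaddr + offset, len);
	kunmap(vaddr, KM_READ);		/* undo the fixmap pte, if any */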
......@@ -103,28 +103,27 @@ __OUTS(l)
#include <linux/vmalloc.h>
#include <asm/page.h>
#define __io_virt(x) ((void *)(PAGE_OFFSET | (unsigned long)(x)))
#define __io_phys(x) ((unsigned long)(x) & ~PAGE_OFFSET)
/*
* Temporary debugging check to catch old code using
* unmapped ISA addresses. Will be removed in 2.4.
*/
#define __io_virt(x) ((unsigned long)(x) < PAGE_OFFSET ? \
({ __label__ __l; __l: printk("io mapaddr %p not valid at %p!\n", (char *)(x), &&__l); __va(x); }) : (char *)(x))
#define __io_phys(x) ((unsigned long)(x) < PAGE_OFFSET ? \
({ __label__ __l; __l: printk("io mapaddr %p not valid at %p!\n", (char *)(x), &&__l); (unsigned long)(x); }) : __pa(x))
/*
* Change virtual addresses to physical addresses and vv.
* These are pretty trivial
*/
extern inline unsigned long virt_to_phys(volatile void * address)
{
#ifdef CONFIG_BIGMEM
return __pa(address);
#else
return __io_phys(address);
#endif
}
extern inline void * phys_to_virt(unsigned long address)
{
#ifdef CONFIG_BIGMEM
return __va(address);
#else
return __io_virt(address);
#endif
}
extern void * __ioremap(unsigned long offset, unsigned long size, unsigned long flags);
......@@ -177,6 +176,23 @@ extern void iounmap(void *addr);
#define memcpy_fromio(a,b,c) memcpy((a),__io_virt(b),(c))
#define memcpy_toio(a,b,c) memcpy(__io_virt(a),(b),(c))
/*
* ISA space is 'always mapped' on a typical x86 system, no need to
* explicitly ioremap() it. The fact that the ISA IO space is mapped
* to PAGE_OFFSET is pure coincidence - it does not mean ISA values
* are physical addresses. The following constant pointer can be
* used as the IO-area pointer (it can be iounmapped as well, so the
* analogy with PCI is quite large):
*/
#define __ISA_IO_base ((char *)(PAGE_OFFSET))
#define isa_readb(a) readb(__ISA_IO_base + (a))
#define isa_readw(a) readw(__ISA_IO_base + (a))
#define isa_readl(a) readl(__ISA_IO_base + (a))
#define isa_writeb(b,a) writeb(b,__ISA_IO_base + (a))
#define isa_writew(w,a) writew(w,__ISA_IO_base + (a))
#define isa_writel(l,a) writel(l,__ISA_IO_base + (a))
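For example, the EISA signature probe in the traps.c hunk above now
reads through this always-mapped window:

	if (isa_readl(0x0FFFD9) == 'E'+('I'<<8)+('S'<<16)+('A'<<24))
		EISA_bus = 1;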
/*
* Again, i386 does not require mem IO specific function.
*/
......
......@@ -9,8 +9,6 @@
#ifdef __KERNEL__
#ifndef __ASSEMBLY__
#define STRICT_MM_TYPECHECKS
#include <linux/config.h>
#ifdef CONFIG_X86_USE_3DNOW
......@@ -32,13 +30,19 @@
#endif
#ifdef STRICT_MM_TYPECHECKS
/*
* These are used to make use of C type-checking..
*/
#if CONFIG_X86_PAE
typedef struct { unsigned long long pte; } pte_t;
typedef struct { unsigned long long pmd; } pmd_t;
typedef struct { unsigned long long pgd; } pgd_t;
#else
typedef struct { unsigned long pte; } pte_t;
typedef struct { unsigned long pmd; } pmd_t;
typedef struct { unsigned long pgd; } pgd_t;
#endif
typedef struct { unsigned long pgprot; } pgprot_t;
#define pte_val(x) ((x).pte)
......@@ -51,26 +55,6 @@ typedef struct { unsigned long pgprot; } pgprot_t;
#define __pgd(x) ((pgd_t) { (x) } )
#define __pgprot(x) ((pgprot_t) { (x) } )
#else
/*
* .. while these make it easier on the compiler
*/
typedef unsigned long pte_t;
typedef unsigned long pmd_t;
typedef unsigned long pgd_t;
typedef unsigned long pgprot_t;
#define pte_val(x) (x)
#define pmd_val(x) (x)
#define pgd_val(x) (x)
#define pgprot_val(x) (x)
#define __pte(x) (x)
#define __pmd(x) (x)
#define __pgd(x) (x)
#define __pgprot(x) (x)
#endif
#endif /* !__ASSEMBLY__ */
/* to align the pointer to the (next) page boundary */
......@@ -93,8 +77,16 @@ typedef unsigned long pgprot_t;
#ifndef __ASSEMBLY__
extern int console_loglevel;
/*
* Tell the user there is some problem. Beep too, so we can
* see^H^H^Hhear bugs in early bootup as well!
*/
#define BUG() do { \
__asm__ __volatile__ ("movb $0x3,%al; outb %al,$0x61"); \
printk("kernel BUG at %s:%d!\n", __FILE__, __LINE__); \
console_loglevel = 0; \
__asm__ __volatile__(".byte 0x0f,0x0b"); \
} while (0)
......
#ifndef _I386_PGTABLE_2LEVEL_H
#define _I386_PGTABLE_2LEVEL_H
/*
* traditional i386 two-level paging structure:
*/
#define PGDIR_SHIFT 22
#define PTRS_PER_PGD 1024
/*
* the i386 is two-level, so we don't really have any
* PMD directory physically.
*/
#define PMD_SHIFT 22
#define PTRS_PER_PMD 1
#define PTRS_PER_PTE 1024
#define pte_ERROR(e) \
printk("%s:%d: bad pte %08lx.\n", __FILE__, __LINE__, pte_val(e))
#define pmd_ERROR(e) \
printk("%s:%d: bad pmd %08lx.\n", __FILE__, __LINE__, pmd_val(e))
#define pgd_ERROR(e) \
printk("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e))
/*
* The "pgd_xxx()" functions here are trivial for a folded two-level
* setup: the pgd is never bad, and a pmd always exists (as it's folded
* into the pgd entry)
*/
extern inline int pgd_none(pgd_t pgd) { return 0; }
extern inline int pgd_bad(pgd_t pgd) { return 0; }
extern inline int pgd_present(pgd_t pgd) { return 1; }
#define pgd_clear(xp) do { pgd_val(*(xp)) = 0; } while (0)
#define pgd_page(pgd) \
((unsigned long) __va(pgd_val(pgd) & PAGE_MASK))
extern inline pmd_t * pmd_offset(pgd_t * dir, unsigned long address)
{
return (pmd_t *) dir;
}
extern __inline__ pmd_t *get_pmd_fast(void)
{
return (pmd_t *)0;
}
extern __inline__ void free_pmd_fast(pmd_t *pmd) { }
extern __inline__ void free_pmd_slow(pmd_t *pmd) { }
extern inline pmd_t * pmd_alloc(pgd_t *pgd, unsigned long address)
{
if (!pgd)
BUG();
return (pmd_t *) pgd;
}
#define SWP_ENTRY(type,offset) __pte((((type) << 1) | ((offset) << 8)))
#endif /* _I386_PGTABLE_2LEVEL_H */
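With the constants above, a 32-bit virtual address splits 10/10/12: ten bits of pgd index, ten bits of pte index, twelve bits of page offset. A minimal sketch with hypothetical helper names (PAGE_SHIFT from asm/page.h, assumed to be 12):
static inline unsigned long demo_pgd_index(unsigned long address)
{
	return address >> PGDIR_SHIFT;			/* top 10 bits */
}

static inline unsigned long demo_pte_index(unsigned long address)
{
	return (address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);	/* next 10 bits */
}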
#ifndef _I386_PGTABLE_3LEVEL_H
#define _I386_PGTABLE_3LEVEL_H
/*
* Intel Physical Address Extension (PAE) Mode - three-level page
* tables on PPro+ CPUs.
*
* Copyright (C) 1999 Ingo Molnar <mingo@redhat.com>
*/
/*
* PGDIR_SHIFT determines what a top-level page table entry can map
*/
#define PGDIR_SHIFT 30
#define PTRS_PER_PGD 4
/*
* PMD_SHIFT determines the size of the area a middle-level
* page table can map
*/
#define PMD_SHIFT 21
#define PTRS_PER_PMD 512
/*
* entries per page directory level
*/
#define PTRS_PER_PTE 512
#define pte_ERROR(e) \
printk("%s:%d: bad pte %016Lx.\n", __FILE__, __LINE__, pte_val(e))
#define pmd_ERROR(e) \
printk("%s:%d: bad pmd %016Lx.\n", __FILE__, __LINE__, pmd_val(e))
#define pgd_ERROR(e) \
printk("%s:%d: bad pgd %016Lx.\n", __FILE__, __LINE__, pgd_val(e))
/*
* Subtle: in PAE mode we cannot have zeroes in the top-level
* page directory; the CPU enforces this.
*/
#define pgd_none(x) (pgd_val(x) == 1ULL)
extern inline int pgd_bad(pgd_t pgd) { return 0; }
extern inline int pgd_present(pgd_t pgd) { return !pgd_none(pgd); }
/*
* Pentium-II erratum A13: in PAE mode we explicitly have to flush
* the TLB via cr3 if the top-level pgd is changed... This was one tough
* thing to find out - guess I should read all the documentation first
* next time around ;)
*/
extern inline void __pgd_clear (pgd_t * pgd)
{
pgd_val(*pgd) = 1; /* no zero allowed! */
}
extern inline void pgd_clear (pgd_t * pgd)
{
__pgd_clear(pgd);
__flush_tlb();
}
#define pgd_page(pgd) \
((unsigned long) __va(pgd_val(pgd) & PAGE_MASK))
/* Find an entry in the second-level page table.. */
#define pmd_offset(dir, address) ((pmd_t *) pgd_page(*(dir)) + \
__pmd_offset(address))
extern __inline__ pmd_t *get_pmd_slow(void)
{
pmd_t *ret = (pmd_t *)__get_free_page(GFP_KERNEL);
if (ret)
memset(ret, 0, PAGE_SIZE);
return ret;
}
extern __inline__ pmd_t *get_pmd_fast(void)
{
unsigned long *ret;
if ((ret = pmd_quicklist) != NULL) {
pmd_quicklist = (unsigned long *)(*ret);
ret[0] = 0;
pgtable_cache_size--;
} else
ret = (unsigned long *)get_pmd_slow();
return (pmd_t *)ret;
}
extern __inline__ void free_pmd_fast(pmd_t *pmd)
{
*(unsigned long *)pmd = (unsigned long) pmd_quicklist;
pmd_quicklist = (unsigned long *) pmd;
pgtable_cache_size++;
}
extern __inline__ void free_pmd_slow(pmd_t *pmd)
{
free_page((unsigned long)pmd);
}
extern inline pmd_t * pmd_alloc(pgd_t *pgd, unsigned long address)
{
if (!pgd)
BUG();
address = (address >> PMD_SHIFT) & (PTRS_PER_PMD - 1);
if (pgd_none(*pgd)) {
pmd_t *page = get_pmd_fast();
if (!page)
page = get_pmd_slow();
if (page) {
if (pgd_none(*pgd)) {
pgd_val(*pgd) = 1 + __pa(page);
__flush_tlb();
return page + address;
} else
free_pmd_fast(page);
} else
return NULL;
}
return (pmd_t *)pgd_page(*pgd) + address;
}
/*
* Subtle. offset can overflow 32 bits and that's a feature - we can do
* up to 16 TB swap on PAE. (Not that anyone should need that much
* swapspace, but who knows?)
*/
#define SWP_ENTRY(type,offset) __pte((((type) << 1) | ((unsigned long long)(offset) << 8)))
#endif /* _I386_PGTABLE_3LEVEL_H */
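Under PAE the same 32-bit virtual address splits 2/9/9/12 - four pgd slots, 512 pmd entries, 512 pte entries, 4K pages - while each entry grows to 64 bits. A minimal sketch with hypothetical helper names:
static inline unsigned long demo_pae_pgd_index(unsigned long addr)
{
	return addr >> PGDIR_SHIFT;				/* 2 bits */
}

static inline unsigned long demo_pae_pmd_index(unsigned long addr)
{
	return (addr >> PMD_SHIFT) & (PTRS_PER_PMD - 1);	/* 9 bits */
}

static inline unsigned long demo_pae_pte_index(unsigned long addr)
{
	return (addr >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);	/* 9 bits */
}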
......@@ -46,6 +46,7 @@ struct cpuinfo_x86 {
int coma_bug;
unsigned long loops_per_sec;
unsigned long *pgd_quick;
unsigned long *pmd_quick;
unsigned long *pte_quick;
unsigned long pgtable_cache_sz;
};
......@@ -106,6 +107,12 @@ extern struct cpuinfo_x86 cpu_data[];
#define current_cpu_data boot_cpu_data
#endif
#define cpu_has_pge \
(boot_cpu_data.x86_capability & X86_FEATURE_PGE)
#define cpu_has_pse \
(boot_cpu_data.x86_capability & X86_FEATURE_PSE)
#define cpu_has_pae \
(boot_cpu_data.x86_capability & X86_FEATURE_PAE)
#define cpu_has_tsc \
(cpu_data[smp_processor_id()].x86_capability & X86_FEATURE_TSC)
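A minimal sketch of how such feature tests are typically used (hypothetical function, not part of this commit):
static void demo_report_paging_mode(void)
{
	if (cpu_has_pae)
		printk("PAE available: 3-level paging possible\n");
	else
		printk("no PAE: using classic 2-level paging\n");
}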
......
......@@ -166,7 +166,8 @@ struct mpc_config_lintsrc
extern int smp_found_config;
extern void init_smp_config(void);
extern unsigned long smp_alloc_memory(unsigned long mem_base);
extern void init_smp_mappings(void);
extern void smp_alloc_memory(void);
extern unsigned long cpu_present_map;
extern unsigned long cpu_online_map;
extern volatile unsigned long smp_invalidate_needed;
......@@ -179,6 +180,7 @@ extern void smp_invalidate_rcv(void); /* Process an NMI */
extern void smp_local_timer_interrupt(struct pt_regs * regs);
extern void (*mtrr_hook) (void);
extern void setup_APIC_clocks(void);
extern void zap_low_mappings (void);
extern volatile int cpu_number_map[NR_CPUS];
extern volatile int __cpu_logical_map[NR_CPUS];
extern inline int cpu_logical_map(int cpu)
......
#ifndef _LINUX_BIGMEM_H
#define _LINUX_BIGMEM_H
#include <linux/config.h>
#ifdef CONFIG_BIGMEM
#include <asm/bigmem.h>
/* declarations for linux/mm/bigmem.c */
extern unsigned long bigmem_mapnr;
extern int nr_free_bigpages;
extern struct page * prepare_bigmem_swapout(struct page *);
extern struct page * replace_with_bigmem(struct page *);
#else /* CONFIG_BIGMEM */
#define prepare_bigmem_swapout(page) page
#define replace_with_bigmem(page) page
#define kmap(kaddr, type) kaddr
#define kunmap(vaddr, type) do { } while (0)
#define nr_free_bigpages 0
#endif /* CONFIG_BIGMEM */
/* when CONFIG_BIGMEM is not set these will be plain clear/copy_page */
extern inline void clear_bigpage(unsigned long kaddr)
{
unsigned long vaddr;
vaddr = kmap(kaddr, KM_WRITE);
clear_page(vaddr);
kunmap(vaddr, KM_WRITE);
}
extern inline void copy_bigpage(unsigned long to, unsigned long from)
{
unsigned long vfrom, vto;
vfrom = kmap(from, KM_READ);
vto = kmap(to, KM_WRITE);
copy_page(vto, vfrom);
kunmap(vfrom, KM_READ);
kunmap(vto, KM_WRITE);
}
#endif /* _LINUX_BIGMEM_H */
......@@ -18,7 +18,7 @@
*/
struct linux_binprm{
char buf[128];
unsigned long page[MAX_ARG_PAGES];
struct page *page[MAX_ARG_PAGES];
unsigned long p; /* current top of mem */
int sh_bang;
struct dentry * dentry;
......
#ifndef _LINUX_BOOTMEM_H
#define _LINUX_BOOTMEM_H
#include <linux/config.h>
#include <asm/pgtable.h>
/*
* simple boot-time physical memory area allocator.
*/
extern unsigned long max_low_pfn;
extern unsigned long __init init_bootmem (unsigned long addr, unsigned long memend);
extern void __init reserve_bootmem (unsigned long addr, unsigned long size);
extern void __init free_bootmem (unsigned long addr, unsigned long size);
extern void * __init __alloc_bootmem (unsigned long size, unsigned long align);
#define alloc_bootmem(x) __alloc_bootmem((x), SMP_CACHE_BYTES)
#define alloc_bootmem_pages(x) __alloc_bootmem((x), PAGE_SIZE)
extern unsigned long __init free_all_bootmem (void);
#endif /* _LINUX_BOOTMEM_H */
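A minimal usage sketch (hypothetical table, not in this commit): bootmem hands out permanently reserved memory before the page allocator exists, so it suits tables sized once at boot:
static unsigned int *demo_table;

static void __init demo_table_init(unsigned long entries)
{
	/* cache-aligned and already reserved; bootmem allocations
	   are not normally handed back to the page allocator */
	demo_table = alloc_bootmem(entries * sizeof(unsigned int));
	memset(demo_table, 0, entries * sizeof(unsigned int));
}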
......@@ -323,6 +323,11 @@ struct iattr {
#include <linux/quota.h>
#include <linux/mount.h>
/*
* oh the beauties of C type declarations.
*/
struct page;
struct inode {
struct list_head i_hash;
struct list_head i_list;
......@@ -350,7 +355,7 @@ struct inode {
wait_queue_head_t i_wait;
struct file_lock *i_flock;
struct vm_area_struct *i_mmap;
struct page *i_pages;
struct list_head i_pages;
spinlock_t i_shared_lock;
struct dquot *i_dquot[MAXQUOTAS];
struct pipe_inode_info *i_pipe;
......@@ -769,8 +774,6 @@ extern int fs_may_mount(kdev_t);
extern int try_to_free_buffers(struct page *);
extern void refile_buffer(struct buffer_head * buf);
extern atomic_t buffermem;
#define BUF_CLEAN 0
#define BUF_LOCKED 1 /* Buffers scheduled for write */
#define BUF_DIRTY 2 /* Dirty buffers, not yet scheduled for write */
......@@ -874,7 +877,7 @@ typedef struct {
int error;
} read_descriptor_t;
typedef int (*read_actor_t)(read_descriptor_t *, const char *, unsigned long);
typedef int (*read_actor_t)(read_descriptor_t *, struct page *, unsigned long, unsigned long);
extern struct dentry * lookup_dentry(const char *, struct dentry *, unsigned int);
......
#ifndef _LINUX_HIGHMEM_H
#define _LINUX_HIGHMEM_H
#include <linux/config.h>
#include <asm/pgtable.h>
#ifdef CONFIG_HIGHMEM
extern struct page *highmem_start_page;
#include <asm/highmem.h>
/* declarations for linux/mm/highmem.c */
extern unsigned long highmem_mapnr;
extern unsigned long nr_free_highpages;
extern struct page * prepare_highmem_swapout(struct page *);
extern struct page * replace_with_highmem(struct page *);
#else /* CONFIG_HIGHMEM */
#define prepare_highmem_swapout(page) page
#define replace_with_highmem(page) page
#define kmap(page, type) page_address(page)
#define kunmap(vaddr, type) do { } while (0)
#define nr_free_highpages 0UL
#endif /* CONFIG_HIGHMEM */
/* when CONFIG_HIGHMEM is not set these will be plain clear/copy_page */
extern inline void clear_highpage(struct page *page)
{
unsigned long kaddr;
kaddr = kmap(page, KM_WRITE);
clear_page((void *)kaddr);
kunmap(kaddr, KM_WRITE);
}
extern inline void memclear_highpage(struct page *page, unsigned int offset, unsigned int size)
{
unsigned long kaddr;
if (offset + size > PAGE_SIZE)
BUG();
kaddr = kmap(page, KM_WRITE);
memset((void *)(kaddr + offset), 0, size);
kunmap(kaddr, KM_WRITE);
}
/*
* Same but also flushes aliased cache contents to RAM.
*/
extern inline void memclear_highpage_flush(struct page *page, unsigned int offset, unsigned int size)
{
unsigned long kaddr;
if (offset + size > PAGE_SIZE)
BUG();
kaddr = kmap(page, KM_WRITE);
memset((void *)(kaddr + offset), 0, size);
flush_page_to_ram(kaddr);
kunmap(kaddr, KM_WRITE);
}
extern inline void copy_highpage(struct page *to, struct page *from)
{
unsigned long vfrom, vto;
vfrom = kmap(from, KM_READ);
vto = kmap(to, KM_WRITE);
copy_page((void *)vto, (void *)vfrom);
kunmap(vfrom, KM_READ);
kunmap(vto, KM_WRITE);
}
#endif /* _LINUX_HIGHMEM_H */
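A minimal sketch of a typical consumer (hypothetical function): zeroing the tail of a partially valid page, as a filesystem might for the range past end-of-file. It works for lowmem pages too, because kmap() degrades to page_address() when CONFIG_HIGHMEM is off:
static void demo_zero_tail(struct page *page, unsigned int valid)
{
	if (valid < PAGE_SIZE)
		memclear_highpage_flush(page, valid, PAGE_SIZE - valid);
}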
......@@ -41,7 +41,6 @@ struct kiobuf
* region, there won't necessarily be page structs defined for
* every address. */
unsigned long * pagelist;
struct page ** maplist;
unsigned int locked : 1; /* If set, pages has been locked */
......
......@@ -94,9 +94,10 @@ struct sysinfo {
unsigned long totalswap; /* Total swap space size */
unsigned long freeswap; /* swap space still available */
unsigned short procs; /* Number of current processes */
unsigned long totalbig; /* Total big memory size */
unsigned long freebig; /* Available big memory size */
char _f[20-2*sizeof(long)]; /* Padding: libc5 uses this.. */
unsigned long totalhigh; /* Total high memory size */
unsigned long freehigh; /* Available high memory size */
unsigned int mem_unit; /* Memory unit size in bytes */
char _f[20-2*sizeof(long)-sizeof(int)]; /* Padding: libc5 uses this.. */
};
#endif
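The new mem_unit field lets the fixed-width sysinfo fields describe more than 4GB: the counts are in units of mem_unit bytes rather than bytes. A minimal userspace sketch (assuming the glibc sysinfo(2) wrapper and the standard totalram field):
#include <sys/sysinfo.h>

unsigned long long demo_total_ram_bytes(void)
{
	struct sysinfo si;

	if (sysinfo(&si) != 0)
		return 0;
	return (unsigned long long)si.totalram * si.mem_unit;
}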
......@@ -8,6 +8,7 @@
#include <linux/config.h>
#include <linux/string.h>
#include <linux/list.h>
extern unsigned long max_mapnr;
extern unsigned long num_physpages;
......@@ -103,9 +104,8 @@ struct vm_operations_struct {
void (*protect)(struct vm_area_struct *area, unsigned long, size_t, unsigned int newprot);
int (*sync)(struct vm_area_struct *area, unsigned long, size_t, unsigned int flags);
void (*advise)(struct vm_area_struct *area, unsigned long, size_t, unsigned int advise);
unsigned long (*nopage)(struct vm_area_struct * area, unsigned long address, int write_access);
unsigned long (*wppage)(struct vm_area_struct * area, unsigned long address,
unsigned long page);
struct page * (*nopage)(struct vm_area_struct * area, unsigned long address, int write_access);
struct page * (*wppage)(struct vm_area_struct * area, unsigned long address, struct page * page);
int (*swapout)(struct vm_area_struct *, struct page *);
};
......@@ -119,8 +119,7 @@ struct vm_operations_struct {
*/
typedef struct page {
/* these must be first (free area handling) */
struct page *next;
struct page *prev;
struct list_head list;
struct inode *inode;
unsigned long offset;
struct page *next_hash;
......@@ -149,11 +148,11 @@ typedef struct page {
#define PG_uptodate 3
#define PG_decr_after 5
#define PG_DMA 7
#define PG_Slab 8
#define PG_slab 8
#define PG_swap_cache 9
#define PG_skip 10
#define PG_swap_entry 11
#define PG_BIGMEM 12
#define PG_highmem 12
/* bits 21-30 unused */
#define PG_reserved 31
......@@ -183,27 +182,32 @@ if (!test_and_clear_bit(PG_locked, &(page)->flags)) { \
#define PageReferenced(page) (test_bit(PG_referenced, &(page)->flags))
#define PageDecrAfter(page) (test_bit(PG_decr_after, &(page)->flags))
#define PageDMA(page) (test_bit(PG_DMA, &(page)->flags))
#define PageSlab(page) (test_bit(PG_Slab, &(page)->flags))
#define PageSlab(page) (test_bit(PG_slab, &(page)->flags))
#define PageSwapCache(page) (test_bit(PG_swap_cache, &(page)->flags))
#define PageReserved(page) (test_bit(PG_reserved, &(page)->flags))
#define PageSetSlab(page) (set_bit(PG_Slab, &(page)->flags))
#define PageSetSlab(page) (set_bit(PG_slab, &(page)->flags))
#define PageSetSwapCache(page) (set_bit(PG_swap_cache, &(page)->flags))
#define PageTestandSetSwapCache(page) \
(test_and_set_bit(PG_swap_cache, &(page)->flags))
#define PageClearSlab(page) (clear_bit(PG_Slab, &(page)->flags))
#define PageClearSlab(page) (clear_bit(PG_slab, &(page)->flags))
#define PageClearSwapCache(page)(clear_bit(PG_swap_cache, &(page)->flags))
#define PageTestandClearSwapCache(page) \
(test_and_clear_bit(PG_swap_cache, &(page)->flags))
#ifdef CONFIG_BIGMEM
#define PageBIGMEM(page) (test_bit(PG_BIGMEM, &(page)->flags))
#ifdef CONFIG_HIGHMEM
#define PageHighMem(page) (test_bit(PG_highmem, &(page)->flags))
#else
#define PageBIGMEM(page) 0 /* needed to optimize away at compile time */
#define PageHighMem(page) 0 /* needed to optimize away at compile time */
#endif
#define SetPageReserved(page) do { set_bit(PG_reserved, &(page)->flags); \
} while (0)
#define ClearPageReserved(page) do { test_and_clear_bit(PG_reserved, &(page)->flags); } while (0)
/*
* Various page->flags bits:
*
......@@ -224,7 +228,7 @@ if (!test_and_clear_bit(PG_locked, &(page)->flags)) { \
* (e.g. a private data page of one process).
*
* A page may be used for kmalloc() or anyone else who does a
* get_free_page(). In this case the page->count is at least 1, and
* __get_free_page(). In this case the page->count is at least 1, and
* all other fields are unused but should be 0 or NULL. The
* management of this page is the responsibility of the one who uses
* it.
......@@ -281,20 +285,27 @@ extern mem_map_t * mem_map;
* goes to clearing the page. If you want a page without the clearing
* overhead, just use __get_free_page() directly..
*/
extern struct page * __get_pages(int gfp_mask, unsigned long order);
#define __get_free_page(gfp_mask) __get_free_pages((gfp_mask),0)
#define __get_dma_pages(gfp_mask, order) __get_free_pages((gfp_mask) | GFP_DMA,(order))
extern unsigned long FASTCALL(__get_free_pages(int gfp_mask, unsigned long gfp_order));
extern struct page * get_free_highpage(int gfp_mask);
extern inline unsigned long get_free_page(int gfp_mask)
extern inline unsigned long get_zeroed_page(int gfp_mask)
{
unsigned long page;
page = __get_free_page(gfp_mask);
if (page)
clear_page(page);
clear_page((void *)page);
return page;
}
/*
* The old interface name will be removed in 2.5:
*/
#define get_free_page get_zeroed_page
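A minimal sketch of the intended split (hypothetical caller): get_zeroed_page() when the caller depends on a cleared page, plain __get_free_page() when it will overwrite the whole page anyway:
static unsigned long demo_alloc_cleared(void)
{
	return get_zeroed_page(GFP_KERNEL);	/* zeroed, may sleep */
}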
/* memory.c & swap.c*/
#define free_page(addr) free_pages((addr),0)
......@@ -302,7 +313,7 @@ extern int FASTCALL(free_pages(unsigned long addr, unsigned long order));
extern int FASTCALL(__free_page(struct page *));
extern void show_free_areas(void);
extern unsigned long put_dirty_page(struct task_struct * tsk,unsigned long page,
extern struct page * put_dirty_page(struct task_struct * tsk, struct page *page,
unsigned long address);
extern void clear_page_tables(struct mm_struct *, unsigned long, int);
......@@ -322,12 +333,13 @@ extern int ptrace_writedata(struct task_struct *tsk, char * src, unsigned long d
extern int pgt_cache_water[2];
extern int check_pgt_cache(void);
extern unsigned long paging_init(unsigned long start_mem, unsigned long end_mem);
extern void mem_init(unsigned long start_mem, unsigned long end_mem);
extern void paging_init(void);
extern void free_area_init(unsigned long);
extern void mem_init(void);
extern void show_mem(void);
extern void oom(struct task_struct * tsk);
extern void si_meminfo(struct sysinfo * val);
extern void swapin_readahead(unsigned long);
extern void swapin_readahead(pte_t);
/* mmap.c */
extern void vma_init(void);
......@@ -359,18 +371,18 @@ extern void put_cached_page(unsigned long);
#define __GFP_HIGH 0x08
#define __GFP_IO 0x10
#define __GFP_SWAP 0x20
#ifdef CONFIG_BIGMEM
#define __GFP_BIGMEM 0x40
#ifdef CONFIG_HIGHMEM
#define __GFP_HIGHMEM 0x40
#else
#define __GFP_BIGMEM 0x0 /* noop */
#define __GFP_HIGHMEM 0x0 /* noop */
#endif
#define __GFP_DMA 0x80
#define GFP_BUFFER (__GFP_LOW | __GFP_WAIT)
#define GFP_ATOMIC (__GFP_HIGH)
#define GFP_BIGUSER (__GFP_LOW | __GFP_WAIT | __GFP_IO | __GFP_BIGMEM)
#define GFP_USER (__GFP_LOW | __GFP_WAIT | __GFP_IO)
#define GFP_HIGHUSER (GFP_USER | __GFP_HIGHMEM)
#define GFP_KERNEL (__GFP_MED | __GFP_WAIT | __GFP_IO)
#define GFP_NFS (__GFP_HIGH | __GFP_WAIT | __GFP_IO)
#define GFP_KSWAPD (__GFP_IO | __GFP_SWAP)
......@@ -380,10 +392,10 @@ extern void put_cached_page(unsigned long);
#define GFP_DMA __GFP_DMA
/* Flag - indicates that the buffer can be taken from big memory which is not
/* Flag - indicates that the buffer can be taken from high memory which is not
directly addressable by the kernel */
#define GFP_BIGMEM __GFP_BIGMEM
#define GFP_HIGHMEM __GFP_HIGHMEM
/* vma is the first one with address < vma->vm_end,
* and even address < vma->vm_start. Have to extend vma. */
......@@ -422,7 +434,7 @@ static inline struct vm_area_struct * find_vma_intersection(struct mm_struct * m
extern struct vm_area_struct *find_extend_vma(struct task_struct *tsk, unsigned long addr);
#define buffer_under_min() ((atomic_read(&buffermem) >> PAGE_SHIFT) * 100 < \
#define buffer_under_min() (atomic_read(&buffermem_pages) * 100 < \
buffer_mem.min_percent * num_physpages)
#define pgcache_under_min() (atomic_read(&page_cache_size) * 100 < \
page_cache.min_percent * num_physpages)
......
......@@ -11,10 +11,16 @@
#include <linux/mm.h>
#include <linux/fs.h>
#include <linux/highmem.h>
#include <linux/list.h>
static inline unsigned long page_address(struct page * page)
extern inline pte_t get_pagecache_pte(struct page *page)
{
return PAGE_OFFSET + ((page - mem_map) << PAGE_SHIFT);
/*
* the pagecache is still machine-word sized. The rest of the VM
* can deal with arbitrarily sized ptes.
*/
return __pte(page->offset);
}
/*
......@@ -30,8 +36,8 @@ static inline unsigned long page_address(struct page * page)
#define PAGE_CACHE_MASK PAGE_MASK
#define PAGE_CACHE_ALIGN(addr) (((addr)+PAGE_CACHE_SIZE-1)&PAGE_CACHE_MASK)
#define page_cache_alloc() __get_free_page(GFP_USER)
#define page_cache_free(x) free_page(x)
#define page_cache_alloc() __get_pages(GFP_USER, 0)
#define page_cache_free(x) __free_page(x)
#define page_cache_release(x) __free_page(x)
/*
......@@ -54,7 +60,7 @@ extern void page_cache_init(unsigned long);
* inode pointer and offsets are distributed (ie, we
* roughly know which bits are "significant")
*/
static inline unsigned long _page_hashfn(struct inode * inode, unsigned long offset)
extern inline unsigned long _page_hashfn(struct inode * inode, unsigned long offset)
{
#define i (((unsigned long) inode)/(sizeof(struct inode) & ~ (sizeof(struct inode) - 1)))
#define o (offset >> PAGE_SHIFT)
......@@ -82,26 +88,37 @@ extern void __add_page_to_hash_queue(struct page * page, struct page **p);
extern void add_to_page_cache(struct page * page, struct inode * inode, unsigned long offset);
extern int add_to_page_cache_unique(struct page * page, struct inode * inode, unsigned long offset, struct page **hash);
static inline void add_page_to_hash_queue(struct page * page, struct inode * inode, unsigned long offset)
extern inline void add_page_to_hash_queue(struct page * page, struct inode * inode, unsigned long offset)
{
__add_page_to_hash_queue(page, page_hash(inode,offset));
}
static inline void add_page_to_inode_queue(struct inode * inode, struct page * page)
extern inline void add_page_to_inode_queue(struct inode * inode, struct page * page)
{
struct page **p = &inode->i_pages;
inode->i_nrpages++;
struct list_head *head = &inode->i_pages;
if (!inode->i_nrpages++) {
if (!list_empty(head))
BUG();
} else {
if (list_empty(head))
BUG();
}
list_add(&page->list, head);
page->inode = inode;
page->prev = NULL;
if ((page->next = *p) != NULL)
page->next->prev = page;
*p = page;
}
extern inline void remove_page_from_inode_queue(struct page * page)
{
struct inode * inode = page->inode;
inode->i_nrpages--;
list_del(&page->list);
}
extern void ___wait_on_page(struct page *);
static inline void wait_on_page(struct page * page)
extern inline void wait_on_page(struct page * page)
{
if (PageLocked(page))
___wait_on_page(page);
......
......@@ -426,7 +426,7 @@ struct task_struct {
/* files */ &init_files, \
/* mm */ NULL, &init_mm, \
/* signals */ SPIN_LOCK_UNLOCKED, &init_signals, {{0}}, {{0}}, NULL, &init_task.sigqueue, 0, 0, \
/* exec cts */ 0,0,0, \
/* exec cts */ 0,0, \
}
#ifndef INIT_TASK_SIZE
......
......@@ -24,7 +24,7 @@ struct shmid_kernel
struct shmid_ds u;
/* the following are private */
unsigned long shm_npages; /* size of segment (pages) */
unsigned long *shm_pages; /* array of ptrs to frames -> SHMMAX */
pte_t *shm_pages; /* array of ptrs to frames -> SHMMAX */
struct vm_area_struct *attaches; /* descriptors for attaches */
};
......@@ -72,7 +72,7 @@ asmlinkage long sys_shmget (key_t key, int size, int flag);
asmlinkage long sys_shmat (int shmid, char *shmaddr, int shmflg, unsigned long *addr);
asmlinkage long sys_shmdt (char *shmaddr);
asmlinkage long sys_shmctl (int shmid, int cmd, struct shmid_ds *buf);
extern void shm_unuse(unsigned long entry, unsigned long page);
extern void shm_unuse(pte_t entry, struct page *page);
#endif /* __KERNEL__ */
......
......@@ -45,7 +45,7 @@ typedef struct kmem_cache_s kmem_cache_t;
#define SLAB_CTOR_VERIFY 0x004UL /* tell constructor it's a verify call */
/* prototypes */
extern long kmem_cache_init(long, long);
extern void kmem_cache_init(void);
extern void kmem_cache_sizes_init(void);
extern kmem_cache_t *kmem_find_general_cachep(size_t);
extern kmem_cache_t *kmem_cache_create(const char *, size_t, size_t, unsigned long,
......
......@@ -35,8 +35,6 @@ union swap_header {
#define MAX_SWAP_BADPAGES \
((__swapoffset(magic.magic) - __swapoffset(info.badpages)) / sizeof(int))
#undef DEBUG_SWAP
#include <asm/atomic.h>
#define SWP_USED 1
......@@ -69,7 +67,7 @@ extern struct list_head lru_cache;
extern atomic_t nr_async_pages;
extern struct inode swapper_inode;
extern atomic_t page_cache_size;
extern atomic_t buffermem;
extern atomic_t buffermem_pages;
/* Incomplete types for prototype declarations: */
struct task_struct;
......@@ -87,36 +85,35 @@ extern int try_to_free_pages(unsigned int gfp_mask);
/* linux/mm/page_io.c */
extern void rw_swap_page(int, struct page *, int);
extern void rw_swap_page_nolock(int, unsigned long, char *, int);
extern void swap_after_unlock_page (unsigned long entry);
extern void rw_swap_page_nolock(int, pte_t, char *, int);
/* linux/mm/page_alloc.c */
/* linux/mm/swap_state.c */
extern void show_swap_cache_info(void);
extern void add_to_swap_cache(struct page *, unsigned long);
extern int swap_duplicate(unsigned long);
extern void add_to_swap_cache(struct page *, pte_t);
extern int swap_duplicate(pte_t);
extern int swap_check_entry(unsigned long);
struct page * lookup_swap_cache(unsigned long);
extern struct page * read_swap_cache_async(unsigned long, int);
struct page * lookup_swap_cache(pte_t);
extern struct page * read_swap_cache_async(pte_t, int);
#define read_swap_cache(entry) read_swap_cache_async(entry, 1)
extern int FASTCALL(swap_count(unsigned long));
extern unsigned long acquire_swap_entry(struct page *page);
extern int swap_count(struct page *);
extern pte_t acquire_swap_entry(struct page *page);
/*
* Make these inline later once they are working properly.
*/
extern void __delete_from_swap_cache(struct page *page);
extern void delete_from_swap_cache(struct page *page);
extern void free_page_and_swap_cache(unsigned long addr);
extern void free_page_and_swap_cache(struct page *page);
/* linux/mm/swapfile.c */
extern unsigned int nr_swapfiles;
extern struct swap_info_struct swap_info[];
extern int is_swap_partition(kdev_t);
void si_swapinfo(struct sysinfo *);
unsigned long get_swap_page(void);
extern void FASTCALL(swap_free(unsigned long));
pte_t get_swap_page(void);
extern void swap_free(pte_t);
struct swap_list_t {
int head; /* head of priority-ordered swapfile list */
int next; /* swapfile to be used next */
......@@ -158,7 +155,7 @@ static inline int is_page_shared(struct page *page)
return 1;
count = page_count(page);
if (PageSwapCache(page))
count += swap_count(page->offset) - 2;
count += swap_count(page) - 2;
return count > 1;
}
......
......@@ -339,12 +339,13 @@ extern int fg_console, last_console, want_console;
extern int kmsg_redirect;
extern unsigned long con_init(unsigned long);
extern void con_init(void);
extern void console_init(void);
extern int rs_init(void);
extern int lp_init(void);
extern int pty_init(void);
extern int tty_init(void);
extern void tty_init(void);
extern int ip2_init(void);
extern int pcxe_init(void);
extern int pc_init(void);
......@@ -393,7 +394,7 @@ extern int n_tty_ioctl(struct tty_struct * tty, struct file * file,
/* serial.c */
extern long serial_console_init(long kmem_start, long kmem_end);
extern void serial_console_init(void);
/* pcxx.c */
......
......@@ -24,6 +24,7 @@
#include <linux/blk.h>
#include <linux/hdreg.h>
#include <linux/iobuf.h>
#include <linux/bootmem.h>
#include <asm/io.h>
#include <asm/bugs.h>
......@@ -79,7 +80,6 @@ static int init(void *);
extern void init_IRQ(void);
extern void init_modules(void);
extern long console_init(long, long);
extern void sock_init(void);
extern void fork_init(unsigned long);
extern void mca_init(void);
......@@ -110,9 +110,6 @@ extern void dquot_init_hash(void);
extern void time_init(void);
static unsigned long memory_start = 0;
static unsigned long memory_end = 0;
int rows, cols;
#ifdef CONFIG_BLK_DEV_INITRD
......@@ -423,7 +420,7 @@ static void __init parse_options(char *line)
}
extern void setup_arch(char **, unsigned long *, unsigned long *);
extern void setup_arch(char **);
extern void cpu_idle(void);
#ifndef __SMP__
......@@ -450,15 +447,15 @@ static void __init smp_init(void)
asmlinkage void __init start_kernel(void)
{
char * command_line;
unsigned long mempages;
/*
* Interrupts are still disabled. Do necessary setups, then
* enable them
*/
lock_kernel();
printk(linux_banner);
setup_arch(&command_line, &memory_start, &memory_end);
memory_start = paging_init(memory_start,memory_end);
setup_arch(&command_line);
paging_init();
trap_init();
init_IRQ();
sched_init();
......@@ -470,40 +467,45 @@ asmlinkage void __init start_kernel(void)
* we've done PCI setups etc, and console_init() must be aware of
* this. But we do want output early, in case something goes wrong.
*/
memory_start = console_init(memory_start,memory_end);
console_init();
#ifdef CONFIG_MODULES
init_modules();
#endif
if (prof_shift) {
prof_buffer = (unsigned int *) memory_start;
unsigned int size;
/* only text is profiled */
prof_len = (unsigned long) &_etext - (unsigned long) &_stext;
prof_len >>= prof_shift;
memory_start += prof_len * sizeof(unsigned int);
memset(prof_buffer, 0, prof_len * sizeof(unsigned int));
size = prof_len * sizeof(unsigned int) + PAGE_SIZE-1;
prof_buffer = (unsigned int *) alloc_bootmem(size);
memset(prof_buffer, 0, size);
}
memory_start = kmem_cache_init(memory_start, memory_end);
kmem_cache_init();
sti();
calibrate_delay();
#ifdef CONFIG_BLK_DEV_INITRD
/* FIXME: use the bootmem.h interface. */
if (initrd_start && !initrd_below_start_ok && initrd_start < memory_start) {
printk(KERN_CRIT "initrd overwritten (0x%08lx < 0x%08lx) - "
"disabling it.\n",initrd_start,memory_start);
initrd_start = 0;
}
#endif
mem_init(memory_start,memory_end);
mem_init();
kmem_cache_sizes_init();
#ifdef CONFIG_PROC_FS
proc_root_init();
#endif
fork_init(memory_end-memory_start);
mempages = num_physpages;
fork_init(mempages);
filescache_init();
dcache_init();
vma_init();
buffer_init(memory_end-memory_start);
page_cache_init(memory_end-memory_start);
buffer_init(mempages);
page_cache_init(mempages);
kiobuf_init();
signals_init();
inode_init();
......
......@@ -157,7 +157,7 @@ int alloc_uid(struct task_struct *p)
return 0;
}
void __init fork_init(unsigned long memsize)
void __init fork_init(unsigned long mempages)
{
int i;
......@@ -175,7 +175,7 @@ void __init fork_init(unsigned long memsize)
* value: the thread structures can take up at most half
* of memory.
*/
max_threads = memsize / THREAD_SIZE / 2;
max_threads = mempages / (THREAD_SIZE/PAGE_SIZE) / 2;
init_task.rlim[RLIMIT_NPROC].rlim_cur = max_threads/2;
init_task.rlim[RLIMIT_NPROC].rlim_max = max_threads/2;
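As a worked example (assuming i386's 4K pages and two-page THREAD_SIZE): a 128MB box has mempages = 32768, so max_threads = 32768 / 2 / 2 = 8192, and the default per-user RLIMIT_NPROC comes out at 4096.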
......
......@@ -22,7 +22,7 @@
#include <asm/uaccess.h>
#define LOG_BUF_LEN (16384)
#define LOG_BUF_LEN (16384*16)
#define LOG_BUF_MASK (LOG_BUF_LEN-1)
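Because LOG_BUF_LEN stays a power of two, ring indices wrap with a cheap AND rather than a modulo. A minimal sketch (hypothetical helper):
static inline unsigned long demo_log_wrap(unsigned long idx)
{
	return idx & LOG_BUF_MASK;	/* same as idx % LOG_BUF_LEN */
}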
static char buf[1024];
......
......@@ -10,7 +10,7 @@
#include <linux/sched.h>
#include <linux/errno.h>
#include <linux/mm.h>
#include <linux/bigmem.h>
#include <linux/highmem.h>
#include <asm/pgtable.h>
#include <asm/uaccess.h>
......@@ -23,7 +23,9 @@ static int access_one_page(struct task_struct * tsk, struct vm_area_struct * vma
pgd_t * pgdir;
pmd_t * pgmiddle;
pte_t * pgtable;
unsigned long page;
unsigned long mapnr;
unsigned long maddr;
struct page *page;
repeat:
pgdir = pgd_offset(vma->vm_mm, addr);
......@@ -39,27 +41,25 @@ static int access_one_page(struct task_struct * tsk, struct vm_area_struct * vma
pgtable = pte_offset(pgmiddle, addr);
if (!pte_present(*pgtable))
goto fault_in_page;
page = pte_page(*pgtable);
mapnr = pte_pagenr(*pgtable);
if (write && (!pte_write(*pgtable) || !pte_dirty(*pgtable)))
goto fault_in_page;
if (MAP_NR(page) >= max_mapnr)
if (mapnr >= max_mapnr)
return 0;
page = mem_map + mapnr;
flush_cache_page(vma, addr);
{
void *src = (void *) (page + (addr & ~PAGE_MASK));
void *dst = buf;
if (write) {
dst = src;
src = buf;
}
src = (void *) kmap((unsigned long) src, KM_READ);
dst = (void *) kmap((unsigned long) dst, KM_WRITE);
memcpy(dst, src, len);
kunmap((unsigned long) src, KM_READ);
kunmap((unsigned long) dst, KM_WRITE);
if (write) {
maddr = kmap(page, KM_WRITE);
memcpy((char *)maddr + (addr & ~PAGE_MASK), buf, len);
flush_page_to_ram(maddr);
kunmap(maddr, KM_WRITE);
} else {
maddr = kmap(page, KM_READ);
memcpy(buf, (char *)maddr + (addr & ~PAGE_MASK), len);
flush_page_to_ram(maddr);
kunmap(maddr, KM_READ);
}
flush_page_to_ram(page);
return len;
fault_in_page:
......@@ -69,11 +69,11 @@ static int access_one_page(struct task_struct * tsk, struct vm_area_struct * vma
return 0;
bad_pgd:
printk("ptrace: bad pgd in '%s' at %08lx (%08lx)\n", tsk->comm, addr, pgd_val(*pgdir));
pgd_ERROR(*pgdir);
return 0;
bad_pmd:
printk("ptrace: bad pmd in '%s' at %08lx (%08lx)\n", tsk->comm, addr, pmd_val(*pgmiddle));
pmd_ERROR(*pgmiddle);
return 0;
}
......
......@@ -9,11 +9,11 @@
O_TARGET := mm.o
O_OBJS := memory.o mmap.o filemap.o mprotect.o mlock.o mremap.o \
vmalloc.o slab.o \
swap.o vmscan.o page_io.o page_alloc.o swap_state.o swapfile.o
vmalloc.o slab.o bootmem.o swap.o vmscan.o page_io.o \
page_alloc.o swap_state.o swapfile.o
ifeq ($(CONFIG_BIGMEM),y)
O_OBJS += bigmem.o
ifeq ($(CONFIG_HIGHMEM),y)
O_OBJS += highmem.o
endif
include $(TOPDIR)/Rules.make
/*
* BIGMEM common code and variables.
*
* (C) 1999 Andrea Arcangeli, SuSE GmbH, andrea@suse.de
* Gerhard Wichert, Siemens AG, Gerhard.Wichert@pdb.siemens.de
*/
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/bigmem.h>
unsigned long bigmem_mapnr;
int nr_free_bigpages = 0;
struct page * prepare_bigmem_swapout(struct page * page)
{
/* A bigmem page can't be swapped out directly: the b_data
   buffer addresses would break the lowlevel device drivers. */
if (PageBIGMEM(page)) {
unsigned long regular_page;
unsigned long vaddr;
regular_page = __get_free_page(GFP_ATOMIC);
if (!regular_page)
return NULL;
vaddr = kmap(page_address(page), KM_READ);
copy_page(regular_page, vaddr);
kunmap(vaddr, KM_READ);
/* ok, we can just forget about our bigmem page since
we stored its data into the new regular_page. */
__free_page(page);
page = MAP_NR(regular_page) + mem_map;
}
return page;
}
struct page * replace_with_bigmem(struct page * page)
{
if (!PageBIGMEM(page) && nr_free_bigpages) {
unsigned long kaddr;
kaddr = __get_free_page(GFP_ATOMIC|GFP_BIGMEM);
if (kaddr) {
struct page * bigmem_page;
bigmem_page = MAP_NR(kaddr) + mem_map;
if (PageBIGMEM(bigmem_page)) {
unsigned long vaddr;
vaddr = kmap(kaddr, KM_WRITE);
copy_page(vaddr, page_address(page));
kunmap(vaddr, KM_WRITE);
/* Preserve the caching of the swap_entry. */
bigmem_page->offset = page->offset;
/* We can just forget the old page since
we stored its data into the new
bigmem_page. */
__free_page(page);
page = bigmem_page;
}
}
}
return page;
}
/*
* High memory handling common code and variables.
*
* (C) 1999 Andrea Arcangeli, SuSE GmbH, andrea@suse.de
* Gerhard Wichert, Siemens AG, Gerhard.Wichert@pdb.siemens.de
*
* Redesigned the x86 32-bit VM architecture to deal with
* 64-bit physical space. With current x86 CPUs this
* means up to 64 Gigabytes physical RAM.
*
* Copyright (C) 1999 Ingo Molnar <mingo@redhat.com>
*/
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/highmem.h>
unsigned long highmem_mapnr;
unsigned long nr_free_highpages = 0;
struct page * prepare_highmem_swapout(struct page * page)
{
unsigned long regular_page;
unsigned long vaddr;
/*
* A highmem page can't be swapped out directly: the b_data
* buffer addresses would break the lowlevel device drivers.
*/
if (!PageHighMem(page))
return page;
regular_page = __get_free_page(GFP_ATOMIC);
if (!regular_page)
return NULL;
vaddr = kmap(page, KM_READ);
copy_page((void *)regular_page, (void *)vaddr);
kunmap(vaddr, KM_READ);
/*
* ok, we can just forget about our highmem page since
* we stored its data into the new regular_page.
*/
__free_page(page);
return mem_map + MAP_NR(regular_page);
}
struct page * replace_with_highmem(struct page * page)
{
struct page *highpage;
unsigned long vaddr;
if (PageHighMem(page) || !nr_free_highpages)
return page;
highpage = get_free_highpage(GFP_ATOMIC|__GFP_HIGHMEM);
if (!highpage)
return page;
if (!PageHighMem(highpage)) {
__free_page(highpage);
return page;
}
vaddr = kmap(highpage, KM_WRITE);
copy_page((void *)vaddr, (void *)page_address(page));
kunmap(vaddr, KM_WRITE);
/* Preserve the caching of the swap_entry. */
highpage->offset = page->offset;
highpage->inode = page->inode;
/*
* We can just forget the old page since
* we stored its data into the new highmem-page.
*/
__free_page(page);
return highpage;
}