Commit 81612fc0 authored by David S. Miller

Merge nuts.davemloft.net:/disk1/BK/sparcwork-2.6

into nuts.davemloft.net:/disk1/BK/sparc-2.6
parents 1814aa68 3d61e387
@@ -123,14 +123,15 @@ server is not hotplug capable. What do you do? Suspend to disk,
 replace ethernet card, resume. If you are fast your users will not
 even see broken connections.

-Q: Maybe I'm missing something, but why doesn't the regular io paths
-work?
+Q: Maybe I'm missing something, but why don't the regular I/O paths work?

-A: (Basically) you want to replace all kernel data with kernel data saved
-on disk. How do you do that using normal i/o paths? If you'll read
-"new" data 4KB at a time, you'll crash... because you still need "old"
-data to do the reading, and "new" data may fit on same physical spot
-in memory.
+A: We do use the regular I/O paths. However we cannot restore the data
+to its original location as we load it. That would create an
+inconsistent kernel state which would certainly result in an oops.
+Instead, we load the image into unused memory and then atomically copy
+it back to it original location. This implies, of course, a maximum
+image size of half the amount of memory.

 There are two solutions to this:
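[Editor's aside: the two-phase restore described in the new answer above can be pictured with a short sketch. This is purely illustrative and not taken from the patch; the structure and function names (struct pbe, copy_back_pages) and the PAGE_SIZE value are assumptions standing in for whatever swsusp actually uses. The point is only that pages are first loaded into whatever memory is free, and only then copied back to their original addresses in one uninterruptible pass, which is why the image cannot exceed half of RAM.]

#include <string.h>

#define PAGE_SIZE 4096UL	/* assumption for the sketch */

/* "Page backup entry": where a saved page must end up vs. where it was
 * loaded during resume.  Name and layout are assumptions, for illustration
 * only. */
struct pbe {
	void *orig_address;	/* original location before suspend */
	void *copy_address;	/* free page the data was read into */
};

/* The atomic step: with interrupts off and nothing else running, copy every
 * page back to its original location.  Every original page must have had a
 * free page available to hold its copy, hence the "half of memory" limit. */
static void copy_back_pages(struct pbe *pblist, unsigned long nr_pages)
{
	unsigned long i;

	for (i = 0; i < nr_pages; i++)
		memcpy(pblist[i].orig_address, pblist[i].copy_address,
		       PAGE_SIZE);
}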
@@ -141,6 +142,10 @@ read "new" data onto free spots, then cli and copy
 between 0-640KB. That way, I'd have to make sure that 0-640KB is free
 during suspending, but otherwise it would work...

+suspend2 shares this fundamental limitation, but does not include user
+data and disk caches into "used memory" by saving them in
+advance. That means that the limitation goes away in practice.
+
 Q: Does linux support ACPI S4?

 A: No.
@@ -161,7 +166,7 @@ while machine is suspended-to-disk, Linux will fail to notice that.
 Q: My machine doesn't work with ACPI. How can I use swsusp than ?

-A: Do reboot() syscall with right parameters. Warning: glibc gets in
+A: Do a reboot() syscall with right parameters. Warning: glibc gets in
 its way, so check with strace:

 reboot(LINUX_REBOOT_MAGIC1, LINUX_REBOOT_MAGIC2, 0xd000fce2)
@@ -181,3 +186,16 @@ int main()
 		LINUX_REBOOT_CMD_SW_SUSPEND, 0);
 	return 0;
 }
+
+Q: What is 'suspend2'?
+
+A: suspend2 is 'Software Suspend 2', a forked implementation of
+suspend-to-disk which is available as separate patches for 2.4 and 2.6
+kernels from swsusp.sourceforge.net. It includes support for SMP, 4GB
+highmem and preemption. It also has a extensible architecture that
+allows for arbitrary transformations on the image (compression,
+encryption) and arbitrary backends for writing the image (eg to swap
+or an NFS share[Work In Progress]). Questions regarding suspend2
+should be sent to the mailing list available through the suspend2
+website, and not to the Linux Kernel Mailing List. We are working
+toward merging suspend2 into the mainline kernel.
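[Editor's aside: for convenience, here is a self-contained version of the small program the FAQ hunk above only shows the tail of. It is a hedged reconstruction, not a copy of the documentation's listing: it assumes the raw syscall(2) interface is used to bypass the glibc wrapper, as the answer advises, and that LINUX_REBOOT_CMD_SW_SUSPEND carries the 0xd000fce2 value shown in the quoted strace line.]

#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/reboot.h>

/* Minimal suspend-to-disk trigger.  Calling the reboot syscall directly
 * avoids the glibc reboot() wrapper mentioned in the FAQ answer. */
int main(void)
{
	/* LINUX_REBOOT_CMD_SW_SUSPEND == 0xd000fce2, matching the strace
	 * output quoted earlier. */
	return syscall(SYS_reboot, LINUX_REBOOT_MAGIC1, LINUX_REBOOT_MAGIC2,
		       LINUX_REBOOT_CMD_SW_SUSPEND, 0);
}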
@@ -7,6 +7,8 @@ If you want to trick swsusp/S3 into working, you might want to try:
 * go with minimal config, turn off drivers like USB, AGP you don't
   really need

+* turn off APIC and preempt
+
 * use ext2. At least it has working fsck. [If something seemes to go
   wrong, force fsck when you have a chance]
......
@@ -30,6 +30,10 @@ There are three types of systems where video works after S3 resume:
   patched X, and plain text console (no vesafb or radeonfb), see
   http://www.doesi.gmxhome.de/linux/tm800s3/s3.html. (Acer TM 800)

+* radeon systems, where X can soft-boot your video card. You'll need
+  patched X, and plain text console (no vesafb or radeonfb), see
+  http://www.doesi.gmxhome.de/linux/tm800s3/s3.html. (Acer TM 800)
+
 Now, if you pass acpi_sleep=something, and it does not work with your
 bios, you'll get hard crash during resume. Be carefull.
......
@@ -194,13 +194,8 @@ eiger_swizzle(struct pci_dev *dev, u8 *pinp)
 	case 0x0f: bridge_count = 4; break;	/* 4 */
 	};

-	/* Check first for the built-in bridges on hose 0. */
-	if (hose->index == 0
-	    && PCI_SLOT(dev->bus->self->devfn) > 20-bridge_count) {
-		slot = PCI_SLOT(dev->devfn);
-	} else {
-		/* Must be a card-based bridge. */
-		do {
+	slot = PCI_SLOT(dev->devfn);
+	while (dev->bus->self) {
 		/* Check for built-in bridges on hose 0. */
 		if (hose->index == 0
 		    && (PCI_SLOT(dev->bus->self->devfn)
@@ -208,13 +203,11 @@ eiger_swizzle(struct pci_dev *dev, u8 *pinp)
 			slot = PCI_SLOT(dev->devfn);
 			break;
 		}
+		/* Must be a card-based bridge. */
 		pin = bridge_swizzle(pin, PCI_SLOT(dev->devfn));

 		/* Move up the chain of bridges. */
 		dev = dev->bus->self;
-		/* Slot of the next bridge. */
-		slot = PCI_SLOT(dev->devfn);
-	} while (dev->bus->self);
 	}
 	*pinp = pin;
 	return slot;
......
@@ -25,6 +25,7 @@
 #include <asm/pgtable.h>
 #include <asm/core_cia.h>
 #include <asm/tlbflush.h>
+#include <asm/8253pit.h>

 #include "proto.h"
 #include "irq_impl.h"
@@ -64,6 +65,8 @@ ruffian_init_irq(void)
 	common_init_isa_dma();
 }

+#define RUFFIAN_LATCH	((PIT_TICK_RATE + HZ / 2) / HZ)
+
 static void __init
 ruffian_init_rtc(void)
 {
@@ -72,8 +75,8 @@ ruffian_init_rtc(void)
 	/* Setup interval timer. */
 	outb(0x34, 0x43);		/* binary, mode 2, LSB/MSB, ch 0 */
-	outb(LATCH & 0xff, 0x40);	/* LSB */
-	outb(LATCH >> 8, 0x40);		/* MSB */
+	outb(RUFFIAN_LATCH & 0xff, 0x40);	/* LSB */
+	outb(RUFFIAN_LATCH >> 8, 0x40);		/* MSB */

 	outb(0xb6, 0x43);		/* pit counter 2: speaker */
 	outb(0x31, 0x42);
......
@@ -231,8 +231,12 @@ takara_swizzle(struct pci_dev *dev, u8 *pinp)
 	int slot = PCI_SLOT(dev->devfn);
 	int pin = *pinp;
 	unsigned int ctlreg = inl(0x500);
-	unsigned int busslot = PCI_SLOT(dev->bus->self->devfn);
+	unsigned int busslot;
+
+	if (!dev->bus->self)
+		return slot;

+	busslot = PCI_SLOT(dev->bus->self->devfn);
 	/* Check for built-in bridges. */
 	if (dev->bus->number != 0
 	    && busslot > 16
......
@@ -117,7 +117,7 @@ show_mem(void)
 		else if (!page_count(mem_map+i))
 			free++;
 		else
-			shared += atomic_read(&mem_map[i].count) - 1;
+			shared += page_count(mem_map + i) - 1;
 	}
 	printk("%ld pages of RAM\n",total);
 	printk("%ld free pages\n",free);
......
@@ -384,7 +384,7 @@ show_mem(void)
 			else if (!page_count(lmem_map+i))
 				free++;
 			else
-				shared += atomic_read(&lmem_map[i].count) - 1;
+				shared += page_count(lmem_map + i) - 1;
 		}
 	}
 	printk("%ld pages of RAM\n",total);
......
@@ -17,11 +17,11 @@ BOOT_TARGETS = zImage zImage.initrd znetboot znetboot.initrd
 bootdir-y := simple
 bootdir-$(CONFIG_PPC_OF) += openfirmware
-subdir-y := lib/ common/ images/
-subdir-$(CONFIG_PPC_OF) += of1275/
+subdir-y := lib common images
+subdir-$(CONFIG_PPC_OF) += of1275

 # for cleaning
-subdir- += simple/ openfirmware/
+subdir- += simple openfirmware

 host-progs := $(addprefix utils/, addnote mknote hack-coff mkprep mkbugboot mktree)
......
@@ -22,9 +22,10 @@ of1275 := $(boot)/of1275
 images := $(boot)/images

 OBJCOPY_ARGS := -O aixcoff-rs6000 -R .stab -R .stabstr -R .comment
-COFF_LD_ARGS := -T $(boot)/ld.script -e _start -Ttext 0x00500000 -Bstatic
-CHRP_LD_ARGS := -T $(boot)/ld.script -e _start -Ttext 0x00800000 -Bstatic
-NEWWORLD_LD_ARGS:= -T $(boot)/ld.script -e _start -Ttext 0x01000000
+COFF_LD_ARGS := -T $(srctree)/$(boot)/ld.script -e _start -Ttext 0x00500000 \
+		-Bstatic
+CHRP_LD_ARGS := -T $(srctree)/$(boot)/ld.script -e _start -Ttext 0x00800000
+NEWWORLD_LD_ARGS:= -T $(srctree)/$(boot)/ld.script -e _start -Ttext 0x01000000

 COMMONOBJS := start.o misc.o common.o
 COFFOBJS := coffcrt0.o $(COMMONOBJS) coffmain.o
@@ -92,11 +93,11 @@ quiet_cmd_gencoffb = COFF $@
 cmd_gencoffb = $(LD) -o $@ $(COFF_LD_ARGS) $(COFFOBJS) $< $(LIBS) && \
 		$(OBJCOPY) $@ $@ -R .comment $(del-ramdisk-sec)

 targets += coffboot
-$(obj)/coffboot: $(obj)/image.o $(COFFOBJS) $(LIBS) $(boot)/ld.script FORCE
+$(obj)/coffboot: $(obj)/image.o $(COFFOBJS) $(LIBS) $(srctree)/$(boot)/ld.script FORCE
 	$(call if_changed,gencoffb)

 targets += coffboot.initrd
 $(obj)/coffboot.initrd: $(obj)/image.initrd.o $(COFFOBJS) $(LIBS) \
-		$(boot)/ld.script FORCE
+		$(srctree)/$(boot)/ld.script FORCE
 	$(call if_changed,gencoffb)
@@ -118,20 +119,22 @@ quiet_cmd_gen-elf-pmac = ELF $@
 		-R .comment $(del-ramdisk-sec)

 $(images)/vmlinux.elf-pmac: $(obj)/image.o $(NEWWORLDOBJS) $(LIBS) \
-		$(obj)/note $(boot)/ld.script
+		$(obj)/note $(srctree)/$(boot)/ld.script
 	$(call cmd,gen-elf-pmac)

 $(images)/vmlinux.initrd.elf-pmac: $(obj)/image.initrd.o $(NEWWORLDOBJS) \
-		$(LIBS) $(obj)/note $(boot)/ld.script
+		$(LIBS) $(obj)/note \
+		$(srctree)/$(boot)/ld.script
 	$(call cmd,gen-elf-pmac)

 quiet_cmd_gen-chrp = CHRP $@
 cmd_gen-chrp = $(LD) $(CHRP_LD_ARGS) -o $@ $(CHRPOBJS) $< $(LIBS) && \
 		$(OBJCOPY) $@ $@ -R .comment $(del-ramdisk-sec)

-$(images)/zImage.chrp: $(obj)/image.o $(CHRPOBJS) $(LIBS) $(boot)/ld.script
+$(images)/zImage.chrp: $(obj)/image.o $(CHRPOBJS) $(LIBS) \
+		$(srctree)/$(boot)/ld.script
 	$(call cmd,gen-chrp)

 $(images)/zImage.initrd.chrp: $(obj)/image.initrd.o $(CHRPOBJS) $(LIBS) \
-		$(boot)/ld.script
+		$(srctree)/$(boot)/ld.script
 	$(call cmd,gen-chrp)

 quiet_cmd_addnote = ADDNOTE $@
......
@@ -41,7 +41,7 @@ end-y := elf
 # if present on 'classic' PPC.
 cacheflag-y := -DCLEAR_CACHES=""
 # This file will flush / disable the L2, and L3 if present.
-clear_L2_L3 := $(srctree)/$(boot)/simple/clear.S
+clear_L2_L3 := $(boot)/simple/clear.S

 #
 # See arch/ppc/kconfig and arch/ppc/platforms/Kconfig
......
@@ -541,7 +541,7 @@ char * ppc_rtas_process_error(int error)
 		case SENSOR_BUSY:
 			return "(busy)";
 		case SENSOR_NOT_EXIST:
-			return "(non existant)";
+			return "(non existent)";
 		case SENSOR_DR_ENTITY:
 			return "(dr entity removed)";
 		default:
@@ -698,7 +698,7 @@ int ppc_rtas_process_sensor(struct individual_sensor s, int state,
 			}
 			break;
 		default:
-			n += sprintf(buf+n, "Unkown sensor (type %d), ignoring it\n",
+			n += sprintf(buf+n, "Unknown sensor (type %d), ignoring it\n",
 				s.token);
 			unknown = 1;
 			have_strings = 1;
......
@@ -620,7 +620,7 @@ static void xics_set_affinity(unsigned int virq, cpumask_t cpumask)
 	cpumask_t tmp = CPU_MASK_NONE;

 	irq = virt_irq_to_real(irq_offset_down(virq));
-	if (irq == XICS_IPI)
+	if (irq == XICS_IPI || irq == NO_IRQ)
 		return;

 	status = rtas_call(ibm_get_xive, 1, 3, (void *)&xics_status, irq);
......
@@ -407,7 +407,7 @@ int iounmap_explicit(void *addr, unsigned long size)
 	area = im_get_area((unsigned long) addr, size,
 			    IM_REGION_EXISTS | IM_REGION_SUBSET);
 	if (area == NULL) {
-		printk(KERN_ERR "%s() cannot unmap nonexistant range 0x%lx\n",
+		printk(KERN_ERR "%s() cannot unmap nonexistent range 0x%lx\n",
 			__FUNCTION__, (unsigned long) addr);
 		return 1;
 	}
......
@@ -633,8 +633,8 @@ static int __init wdt_init(void)
 outmisc:
 #ifdef CONFIG_WDT_501
 	misc_deregister(&temp_miscdev);
-#endif /* CONFIG_WDT_501 */
 outrbt:
+#endif /* CONFIG_WDT_501 */
 	unregister_reboot_notifier(&wdt_notifier);
 outirq:
 	free_irq(irq, NULL);
......
@@ -358,13 +358,6 @@ ide_startstop_t __ide_do_rw_disk (ide_drive_t *drive, struct request *rq, sector
 	nsectors.all		= (u16) rq->nr_sectors;

-	if (drive->using_tcq && idedisk_start_tag(drive, rq)) {
-		if (!ata_pending_commands(drive))
-			BUG();
-
-		return ide_started;
-	}
-
 	if (IDE_CONTROL_REG)
 		hwif->OUTB(drive->ctl, IDE_CONTROL_REG);
@@ -482,7 +475,7 @@ ide_startstop_t __ide_do_rw_disk (ide_drive_t *drive, struct request *rq, sector
 					((lba48) ? WIN_READ_EXT : WIN_READ));
 		ide_execute_command(drive, command, &read_intr, WAIT_CMD, NULL);
 		return ide_started;
-	} else if (rq_data_dir(rq) == WRITE) {
+	} else {
 		ide_startstop_t startstop;
 #ifdef CONFIG_BLK_DEV_IDE_TCQ
 		if (blk_rq_tagged(rq))
@@ -520,9 +513,6 @@ ide_startstop_t __ide_do_rw_disk (ide_drive_t *drive, struct request *rq, sector
 		}
 		return ide_started;
 	}
-
-	blk_dump_rq_flags(rq, "__ide_do_rw_disk - bad command");
-	ide_end_request(drive, 0, 0);
-	return ide_stopped;
 }
 EXPORT_SYMBOL_GPL(__ide_do_rw_disk);
@@ -539,26 +529,11 @@ static ide_startstop_t lba_48_rw_disk(ide_drive_t *, struct request *, unsigned
  */
 ide_startstop_t __ide_do_rw_disk (ide_drive_t *drive, struct request *rq, sector_t block)
 {
-	BUG_ON(drive->blocked);
-
-	if (!blk_fs_request(rq)) {
-		blk_dump_rq_flags(rq, "__ide_do_rw_disk - bad command");
-		ide_end_request(drive, 0, 0);
-		return ide_stopped;
-	}
-
 	/*
 	 * 268435455  == 137439 MB or 28bit limit
 	 *
 	 * need to add split taskfile operations based on 28bit threshold.
 	 */
-
-	if (drive->using_tcq && idedisk_start_tag(drive, rq)) {
-		if (!ata_pending_commands(drive))
-			BUG();
-
-		return ide_started;
-	}
-
 	if (drive->addressing == 1)		/* 48-bit LBA */
 		return lba_48_rw_disk(drive, rq, (unsigned long long) block);
 	if (drive->select.b.lba)		/* 28-bit LBA */
@@ -734,6 +709,21 @@ static ide_startstop_t ide_do_rw_disk (ide_drive_t *drive, struct request *rq, s
 {
 	ide_hwif_t *hwif = HWIF(drive);

+	BUG_ON(drive->blocked);
+
+	if (!blk_fs_request(rq)) {
+		blk_dump_rq_flags(rq, "ide_do_rw_disk - bad command");
+		ide_end_request(drive, 0, 0);
+		return ide_stopped;
+	}
+
+	if (drive->using_tcq && idedisk_start_tag(drive, rq)) {
+		if (!ata_pending_commands(drive))
+			BUG();
+
+		return ide_started;
+	}
+
 	if (hwif->rw_disk)
 		return hwif->rw_disk(drive, rq, block);
 	else
......
@@ -767,7 +767,7 @@ int ide_driveid_update (ide_drive_t *drive)
 	SELECT_MASK(drive, 1);
 	if (IDE_CONTROL_REG)
 		hwif->OUTB(drive->ctl,IDE_CONTROL_REG);
-	ide_delay_50ms();
+	msleep(50);
 	hwif->OUTB(WIN_IDENTIFY, IDE_COMMAND_REG);
 	timeout = jiffies + WAIT_WORSTCASE;
 	do {
@@ -775,9 +775,9 @@ int ide_driveid_update (ide_drive_t *drive)
 			SELECT_MASK(drive, 0);
 			return 0;	/* drive timed-out */
 		}
-		ide_delay_50ms();	/* give drive a breather */
+		msleep(50);	/* give drive a breather */
 	} while (hwif->INB(IDE_ALTSTATUS_REG) & BUSY_STAT);
-	ide_delay_50ms();	/* wait for IRQ and DRQ_STAT */
+	msleep(50);	/* wait for IRQ and DRQ_STAT */
 	if (!OK_STAT(hwif->INB(IDE_STATUS_REG),DRQ_STAT,BAD_R_STAT)) {
 		SELECT_MASK(drive, 0);
 		printk("%s: CHECK for good STATUS\n", drive->name);
@@ -827,7 +827,7 @@ int ide_config_drive_speed (ide_drive_t *drive, u8 speed)
 	u8 stat;

 //	while (HWGROUP(drive)->busy)
-//		ide_delay_50ms();
+//		msleep(50);

 #ifdef CONFIG_BLK_DEV_IDEDMA
 	if (hwif->ide_dma_check)	/* check if host supports DMA */
......
@@ -283,9 +283,10 @@ static int actual_try_to_identify (ide_drive_t *drive, u8 cmd)
 	unsigned long timeout;
 	u8 s = 0, a = 0;

-	if (IDE_CONTROL_REG) {
-		/* take a deep breath */
-		ide_delay_50ms();
+	/* take a deep breath */
+	msleep(50);
+
+	if (IDE_CONTROL_REG) {
 		a = hwif->INB(IDE_ALTSTATUS_REG);
 		s = hwif->INB(IDE_STATUS_REG);
 		if ((a ^ s) & ~INDEX_STAT) {
@@ -297,10 +298,8 @@ static int actual_try_to_identify (ide_drive_t *drive, u8 cmd)
 			/* use non-intrusive polling */
 			hd_status = IDE_ALTSTATUS_REG;
 		}
-	} else {
-		ide_delay_50ms();
+	} else
 		hd_status = IDE_STATUS_REG;
-	}

 	/* set features register for atapi
 	 * identify command to be sure of reply
@@ -324,11 +323,11 @@ static int actual_try_to_identify (ide_drive_t *drive, u8 cmd)
 			return 1;
 		}
 		/* give drive a breather */
-		ide_delay_50ms();
+		msleep(50);
 	} while ((hwif->INB(hd_status)) & BUSY_STAT);

 	/* wait for IRQ and DRQ_STAT */
-	ide_delay_50ms();
+	msleep(50);
 	if (OK_STAT((hwif->INB(IDE_STATUS_REG)), DRQ_STAT, BAD_R_STAT)) {
 		unsigned long flags;
@@ -457,15 +456,15 @@ static int do_probe (ide_drive_t *drive, u8 cmd)
 	/* needed for some systems
 	 * (e.g. crw9624 as drive0 with disk as slave)
 	 */
-	ide_delay_50ms();
+	msleep(50);
 	SELECT_DRIVE(drive);
-	ide_delay_50ms();
+	msleep(50);
 	if (hwif->INB(IDE_SELECT_REG) != drive->select.all && !drive->present) {
 		if (drive->select.b.unit != 0) {
 			/* exit with drive0 selected */
 			SELECT_DRIVE(&hwif->drives[0]);
 			/* allow BUSY_STAT to assert & clear */
-			ide_delay_50ms();
+			msleep(50);
 		}
 		/* no i/f present: mmm.. this should be a 4 -ml */
 		return 3;
@@ -488,14 +487,14 @@ static int do_probe (ide_drive_t *drive, u8 cmd)
 			printk("%s: no response (status = 0x%02x), "
 				"resetting drive\n", drive->name,
 				hwif->INB(IDE_STATUS_REG));
-			ide_delay_50ms();
+			msleep(50);
 			hwif->OUTB(drive->select.all, IDE_SELECT_REG);
-			ide_delay_50ms();
+			msleep(50);
 			hwif->OUTB(WIN_SRST, IDE_COMMAND_REG);
 			timeout = jiffies;
 			while (((hwif->INB(IDE_STATUS_REG)) & BUSY_STAT) &&
 			       time_before(jiffies, timeout + WAIT_WORSTCASE))
-				ide_delay_50ms();
+				msleep(50);
 			rc = try_to_identify(drive, cmd);
 		}
 		if (rc == 1)
@@ -510,7 +509,7 @@ static int do_probe (ide_drive_t *drive, u8 cmd)
 		if (drive->select.b.unit != 0) {
 			/* exit with drive0 selected */
 			SELECT_DRIVE(&hwif->drives[0]);
-			ide_delay_50ms();
+			msleep(50);
 			/* ensure drive irq is clear */
 			(void) hwif->INB(IDE_STATUS_REG);
 		}
@@ -527,7 +526,7 @@ static void enable_nest (ide_drive_t *drive)
 	printk("%s: enabling %s -- ", hwif->name, drive->id->model);
 	SELECT_DRIVE(drive);
-	ide_delay_50ms();
+	msleep(50);
 	hwif->OUTB(EXABYTE_ENABLE_NEST, IDE_COMMAND_REG);
 	timeout = jiffies + WAIT_WORSTCASE;
 	do {
@@ -535,10 +534,10 @@ static void enable_nest (ide_drive_t *drive)
 			printk("failed (timeout)\n");
 			return;
 		}
-		ide_delay_50ms();
+		msleep(50);
 	} while ((hwif->INB(IDE_STATUS_REG)) & BUSY_STAT);

-	ide_delay_50ms();
+	msleep(50);

 	if (!OK_STAT((hwif->INB(IDE_STATUS_REG)), 0, BAD_STAT)) {
 		printk("failed (status = 0x%02x)\n", hwif->INB(IDE_STATUS_REG));
@@ -781,7 +780,7 @@ void probe_hwif (ide_hwif_t *hwif)
 		udelay(10);
 		hwif->OUTB(8, hwif->io_ports[IDE_CONTROL_OFFSET]);
 		do {
-			ide_delay_50ms();
+			msleep(50);
 			stat = hwif->INB(hwif->io_ports[IDE_STATUS_OFFSET]);
 		} while ((stat & BUSY_STAT) && time_after(timeout, jiffies));
......
@@ -1387,26 +1387,6 @@ void ide_add_generic_settings (ide_drive_t *drive)
 	ide_add_setting(drive, "ide-scsi", SETTING_RW, -1, HDIO_SET_IDE_SCSI, TYPE_BYTE, 0, 1, 1, 1, &drive->scsi, ide_atapi_to_scsi);
 }

-/*
- * Delay for *at least* 50ms. As we don't know how much time is left
- * until the next tick occurs, we wait an extra tick to be safe.
- * This is used only during the probing/polling for drives at boot time.
- *
- * However, its usefullness may be needed in other places, thus we export it now.
- * The future may change this to a millisecond setable delay.
- */
-void ide_delay_50ms (void)
-{
-#ifndef CONFIG_BLK_DEV_IDECS
-	mdelay(50);
-#else
-	__set_current_state(TASK_UNINTERRUPTIBLE);
-	schedule_timeout(1+HZ/20);
-#endif /* CONFIG_BLK_DEV_IDECS */
-}
-EXPORT_SYMBOL(ide_delay_50ms);
-
 int system_bus_clock (void)
 {
 	return((int) ((!system_bus_speed) ? ide_system_bus_speed() : system_bus_speed ));
......
@@ -283,7 +283,7 @@ int __init detect_pdc4030(ide_hwif_t *hwif)
 	hwif->OUTB(0x14, IDE_SELECT_REG);
 	hwif->OUTB(PROMISE_EXTENDED_COMMAND, IDE_COMMAND_REG);
-	ide_delay_50ms();
+	msleep(50);

 	if (hwif->INB(IDE_ERROR_REG) == 'P' &&
 	    hwif->INB(IDE_NSECTOR_REG) == 'T' &&
@@ -756,12 +756,6 @@ static ide_startstop_t promise_rw_disk (ide_drive_t *drive, struct request *rq,
 	BUG_ON(rq->nr_sectors > 127);

-	if (!blk_fs_request(rq)) {
-		blk_dump_rq_flags(rq, "promise_rw_disk - bad command");
-		DRIVER(drive)->end_request(drive, 0, 0);
-		return ide_stopped;
-	}
-
 #ifdef DEBUG
 	printk(KERN_DEBUG "%s: %sing: LBAsect=%lu, sectors=%lu\n",
 	       drive->name, rq_data_dir(rq) ? "writ" : "read",
......
@@ -1806,7 +1806,7 @@ static int __init olympic_pci_init(void)
 static void __exit olympic_pci_cleanup(void)
 {
-	return pci_unregister_driver(&olympic_driver) ;
+	pci_unregister_driver(&olympic_driver) ;
 }
......
@@ -846,7 +846,12 @@ struct dentry * d_alloc_anon(struct inode *inode)
 		res->d_sb = inode->i_sb;
 		res->d_parent = res;
 		res->d_inode = inode;
-		res->d_bucket = d_hash(res, res->d_name.hash);
+
+		/*
+		 * Set d_bucket to an "impossible" bucket address so
+		 * that d_move() doesn't get a false positive
+		 */
+		res->d_bucket = dentry_hashtable + D_HASHMASK + 1;
 		res->d_flags |= DCACHE_DISCONNECTED;
 		res->d_flags &= ~DCACHE_UNHASHED;
 		list_add(&res->d_alias, &inode->i_dentry);
......
@@ -1720,6 +1720,9 @@ void locks_remove_flock(struct file *filp)
 			lease_modify(before, F_UNLCK);
 			continue;
 		}
+		/* FL_POSIX locks of this process have already been
+		 * removed in filp_close->locks_remove_posix.
+		 */
 		BUG();
 	}
 	before = &fl->fl_next;
@@ -1979,18 +1982,49 @@ int lock_may_write(struct inode *inode, loff_t start, unsigned long len)

 EXPORT_SYMBOL(lock_may_write);

+static inline void __steal_locks(struct file *file, fl_owner_t from)
+{
+	struct inode *inode = file->f_dentry->d_inode;
+	struct file_lock *fl = inode->i_flock;
+
+	while (fl) {
+		if (fl->fl_file == file && fl->fl_owner == from)
+			fl->fl_owner = current->files;
+		fl = fl->fl_next;
+	}
+}
+
+/* When getting ready for executing a binary, we make sure that current
+ * has a files_struct on its own. Before dropping the old files_struct,
+ * we take over ownership of all locks for all file descriptors we own.
+ * Note that we may accidentally steal a lock for a file that a sibling
+ * has created since the unshare_files() call.
+ */
 void steal_locks(fl_owner_t from)
 {
-	struct list_head *tmp;
+	struct files_struct *files = current->files;
+	int i, j;

-	if (from == current->files)
+	if (from == files)
 		return;

 	lock_kernel();
-	list_for_each(tmp, &file_lock_list) {
-		struct file_lock *fl = list_entry(tmp, struct file_lock, fl_link);
-		if (fl->fl_owner == from)
-			fl->fl_owner = current->files;
+	j = 0;
+	for (;;) {
+		unsigned long set;
+		i = j * __NFDBITS;
+		if (i >= files->max_fdset || i >= files->max_fds)
+			break;
+		set = files->open_fds->fds_bits[j++];
+		while (set) {
+			if (set & 1) {
+				struct file *file = files->fd[i];
+				if (file)
+					__steal_locks(file, from);
+			}
+			i++;
+			set >>= 1;
+		}
 	}
 	unlock_kernel();
 }
......
@@ -1228,8 +1228,8 @@ nfsd_symlink(struct svc_rqst *rqstp, struct svc_fh *fhp,
 		err = nfserrno(err);
 	fh_unlock(fhp);

-	/* Compose the fh so the dentry will be freed ... */
 	cerr = fh_compose(resfhp, fhp->fh_export, dnew, fhp);
+	dput(dnew);
 	if (err==0) err = cerr;
 out:
 	return err;
......
@@ -790,7 +790,6 @@ struct file *dentry_open(struct dentry *dentry, struct vfsmount *mnt, int flags)
 	}

 	f->f_mapping = inode->i_mapping;
-	file_ra_state_init(&f->f_ra, f->f_mapping);
 	f->f_dentry = dentry;
 	f->f_vfsmnt = mnt;
 	f->f_pos = 0;
@@ -804,10 +803,11 @@ struct file *dentry_open(struct dentry *dentry, struct vfsmount *mnt, int flags)
 	}
 	f->f_flags &= ~(O_CREAT | O_EXCL | O_NOCTTY | O_TRUNC);

+	file_ra_state_init(&f->f_ra, f->f_mapping->host->i_mapping);
+
 	/* NB: we're sure to have correct a_ops only after f_op->open */
 	if (f->f_flags & O_DIRECT) {
-		if (!f->f_mapping || !f->f_mapping->a_ops ||
-		    !f->f_mapping->a_ops->direct_IO) {
+		if (!f->f_mapping->a_ops || !f->f_mapping->a_ops->direct_IO) {
 			fput(f);
 			f = ERR_PTR(-EINVAL);
 		}
......
@@ -2,22 +2,34 @@
 #define _ASM_GENERIC_PGTABLE_H

 #ifndef __HAVE_ARCH_PTEP_ESTABLISH
-#ifndef ptep_update_dirty_accessed
-#define ptep_update_dirty_accessed(__ptep, __entry, __dirty) set_pte(__ptep, __entry)
-#endif
-
 /*
  * Establish a new mapping:
  *  - flush the old one
  *  - update the page tables
  *  - inform the TLB about the new one
  *
- * We hold the mm semaphore for reading and vma->vm_mm->page_table_lock
+ * We hold the mm semaphore for reading and vma->vm_mm->page_table_lock.
+ *
+ * Note: the old pte is known to not be writable, so we don't need to
+ * worry about dirty bits etc getting lost.
+ */
+#define ptep_establish(__vma, __address, __ptep, __entry)		\
+do {									\
+	set_pte(__ptep, __entry);					\
+	flush_tlb_page(__vma, __address);				\
+} while (0)
+#endif
+
+#ifndef __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
+/*
+ * Largely same as above, but only sets the access flags (dirty,
+ * accessed, and writable). Furthermore, we know it always gets set
+ * to a "more permissive" setting, which allows most architectures
+ * to optimize this.
  */
-#define ptep_establish(__vma, __address, __ptep, __entry, __dirty)	\
+#define ptep_set_access_flags(__vma, __address, __ptep, __entry, __dirty) \
 do {									\
-	ptep_update_dirty_accessed(__ptep, __entry, __dirty);		\
+	set_pte(__ptep, __entry);					\
 	flush_tlb_page(__vma, __address);				\
 } while (0)
 #endif
......
@@ -325,9 +325,13 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
  * bit at the same time.
  */
 #define update_mmu_cache(vma,address,pte) do { } while (0)
-#define ptep_update_dirty_accessed(__ptep, __entry, __dirty)		\
+#define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
+#define ptep_set_access_flags(__vma, __address, __ptep, __entry, __dirty) \
 	do {								\
-		if (__dirty) set_pte(__ptep, __entry);			\
+		if (__dirty) {						\
+			(__ptep)->pte_low = (__entry).pte_low;		\
+			flush_tlb_page(__vma, __address);		\
+		}							\
 	} while (0)

 /* Encode and de-code a swap entry */
......
@@ -548,6 +548,16 @@ static inline void ptep_mkdirty(pte_t *ptep)
 	pte_update(ptep, 0, _PAGE_DIRTY);
 }

+#define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
+static inline void __ptep_set_access_flags(pte_t *ptep, pte_t entry, int dirty)
+{
+	unsigned long bits = pte_val(entry) &
+		(_PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_RW);
+	pte_update(ptep, 0, bits);
+}
+
+#define ptep_set_access_flags(__vma, __address, __ptep, __entry, __dirty) \
+	__ptep_set_access_flags(__ptep, __entry, __dirty)
 /*
  * Macro to mark a page protection value as "uncacheable".
  */
......
@@ -215,7 +215,7 @@ static inline u8 eeh_inb(unsigned long port) {
 static inline void eeh_outb(u8 val, unsigned long port) {
 	if (_IO_IS_VALID(port))
-		return out_8((u8 *)(port+pci_io_base), val);
+		out_8((u8 *)(port+pci_io_base), val);
 }

 static inline u16 eeh_inw(unsigned long port) {
@@ -230,7 +230,7 @@ static inline u16 eeh_inw(unsigned long port) {
 static inline void eeh_outw(u16 val, unsigned long port) {
 	if (_IO_IS_VALID(port))
-		return out_le16((u16 *)(port+pci_io_base), val);
+		out_le16((u16 *)(port+pci_io_base), val);
 }

 static inline u32 eeh_inl(unsigned long port) {
@@ -245,7 +245,7 @@ static inline u32 eeh_inl(unsigned long port) {
 static inline void eeh_outl(u32 val, unsigned long port) {
 	if (_IO_IS_VALID(port))
-		return out_le32((u32 *)(port+pci_io_base), val);
+		out_le32((u32 *)(port+pci_io_base), val);
 }

 /* in-string eeh macros */
......
@@ -204,7 +204,7 @@ static inline unsigned long hpt_hash(unsigned long vpn, int large)
 		page = vpn & 0xffff;
 	}

-	return (vsid & 0x7fffffffff) ^ page;
+	return (vsid & 0x7fffffffffUL) ^ page;
 }

 static inline void __tlbie(unsigned long va, int large)
......
@@ -175,8 +175,8 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 #define activate_mm(active_mm, mm) \
 	switch_mm(active_mm, mm, current);

-#define VSID_RANDOMIZER 42470972311
-#define VSID_MASK	0xfffffffff
+#define VSID_RANDOMIZER 42470972311UL
+#define VSID_MASK	0xfffffffffUL

 /* This is only valid for kernel (including vmalloc, imalloc and bolted) EA's
......
@@ -12,18 +12,20 @@
 #include <linux/config.h>

-/* PAGE_SHIFT determines the page size */
-#define PAGE_SHIFT	12
-#ifndef __ASSEMBLY__
-# define PAGE_SIZE	(1UL << PAGE_SHIFT)
+#ifdef __ASSEMBLY__
+#define ASM_CONST(x) x
 #else
-# define PAGE_SIZE	(1 << PAGE_SHIFT)
+#define ASM_CONST(x) x##UL
 #endif
+
+/* PAGE_SHIFT determines the page size */
+#define PAGE_SHIFT	12
+#define PAGE_SIZE	(ASM_CONST(1) << PAGE_SHIFT)
 #define PAGE_MASK	(~(PAGE_SIZE-1))
 #define PAGE_OFFSET_MASK (PAGE_SIZE-1)

 #define SID_SHIFT	28
-#define SID_MASK	0xfffffffff
+#define SID_MASK	0xfffffffffUL
 #define GET_ESID(x)	(((x) >> SID_SHIFT) & SID_MASK)

 #ifdef CONFIG_HUGETLB_PAGE
@@ -196,11 +198,11 @@ extern int page_is_ram(unsigned long physaddr);
 /* KERNELBASE is defined for performance reasons. */
 /* When KERNELBASE moves, those macros may have   */
 /* to change!                                     */
-#define PAGE_OFFSET	0xC000000000000000
+#define PAGE_OFFSET	ASM_CONST(0xC000000000000000)
 #define KERNELBASE	PAGE_OFFSET
-#define VMALLOCBASE	0xD000000000000000
-#define IOREGIONBASE	0xE000000000000000
-#define EEHREGIONBASE	0xA000000000000000
+#define VMALLOCBASE	0xD000000000000000UL
+#define IOREGIONBASE	0xE000000000000000UL
+#define EEHREGIONBASE	0xA000000000000000UL
 #define IO_REGION_ID	(IOREGIONBASE>>REGION_SHIFT)
 #define EEH_REGION_ID	(EEHREGIONBASE>>REGION_SHIFT)
......
@@ -306,7 +306,10 @@ static inline unsigned long pte_update(pte_t *p, unsigned long clr)
 	return old;
 }

-/* PTE updating functions */
+/* PTE updating functions, this function puts the PTE in the
+ * batch, doesn't actually triggers the hash flush immediately,
+ * you need to call flush_tlb_pending() to do that.
+ */
 extern void hpte_update(pte_t *ptep, unsigned long pte, int wrprot);

 static inline int ptep_test_and_clear_young(pte_t *ptep)
@@ -318,7 +321,7 @@ static inline int ptep_test_and_clear_young(pte_t *ptep)
 	old = pte_update(ptep, _PAGE_ACCESSED);
 	if (old & _PAGE_HASHPTE) {
 		hpte_update(ptep, old, 0);
-		flush_tlb_pending();	/* XXX generic code doesn't flush */
+		flush_tlb_pending();
 	}
 	return (old & _PAGE_ACCESSED) != 0;
 }
@@ -396,11 +399,37 @@ static inline void pte_clear(pte_t * ptep)
  */
 static inline void set_pte(pte_t *ptep, pte_t pte)
 {
-	if (pte_present(*ptep))
+	if (pte_present(*ptep)) {
 		pte_clear(ptep);
+		flush_tlb_pending();
+	}
 	*ptep = __pte(pte_val(pte)) & ~_PAGE_HPTEFLAGS;
 }

+/* Set the dirty and/or accessed bits atomically in a linux PTE, this
+ * function doesn't need to flush the hash entry
+ */
+#define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
+static inline void __ptep_set_access_flags(pte_t *ptep, pte_t entry, int dirty)
+{
+	unsigned long bits = pte_val(entry) &
+		(_PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_RW);
+	unsigned long old, tmp;
+
+	__asm__ __volatile__(
+	"1:	ldarx	%0,0,%4\n\
+		andi.	%1,%0,%6\n\
+		bne-	1b \n\
+		or	%0,%3,%0\n\
+		stdcx.	%0,0,%4\n\
+		bne-	1b"
+	:"=&r" (old), "=&r" (tmp), "=m" (*ptep)
+	:"r" (bits), "r" (ptep), "m" (ptep), "i" (_PAGE_BUSY)
+	:"cc");
+}
+#define ptep_set_access_flags(__vma, __address, __ptep, __entry, __dirty) \
+	__ptep_set_access_flags(__ptep, __entry, __dirty)
+
 /*
  * Macro to mark a page protection value as "uncacheable".
  */
......
@@ -69,10 +69,6 @@ struct thread_info {
 #define get_thread_info(ti) get_task_struct((ti)->task)
 #define put_thread_info(ti) put_task_struct((ti)->task)

-#if THREAD_SIZE != (4*PAGE_SIZE)
-#error update vmlinux.lds and current_thread_info to match
-#endif
-
 /* how to get the thread information struct from C */
 static inline struct thread_info *current_thread_info(void)
 {
......
@@ -581,7 +581,7 @@ static inline void ptep_mkdirty(pte_t *ptep)
 static inline void
 ptep_establish(struct vm_area_struct *vma,
 	       unsigned long address, pte_t *ptep,
-	       pte_t entry, int dirty)
+	       pte_t entry)
 {
 	ptep_clear_flush(vma, address, ptep);
 	set_pte(ptep, entry);
......
@@ -1477,7 +1477,6 @@ int ide_taskfile_ioctl(ide_drive_t *, unsigned int, unsigned long);
 int ide_cmd_ioctl(ide_drive_t *, unsigned int, unsigned long);
 int ide_task_ioctl(ide_drive_t *, unsigned int, unsigned long);

-extern void ide_delay_50ms(void);
 extern int system_bus_clock(void);

 extern u8 ide_auto_reduce_xfer(ide_drive_t *);
......
@@ -15,10 +15,10 @@
 #if BITS_PER_LONG == 32
 # define IDR_BITS 5
-# define IDR_FULL 0xffffffff
+# define IDR_FULL 0xfffffffful
 #elif BITS_PER_LONG == 64
 # define IDR_BITS 6
-# define IDR_FULL 0xffffffffffffffff
+# define IDR_FULL 0xfffffffffffffffful
 #else
 # error "BITS_PER_LONG is not 32 or 64"
 #endif
......
@@ -481,7 +481,6 @@ asmlinkage void __init start_kernel(void)
 	proc_root_init();
 #endif
 	check_bugs();
-	printk("POSIX conformance testing by UNIFIX\n");

 	/*
 	 *	We count on the initial thread going ok
......
@@ -3566,7 +3566,8 @@ static int migration_call(struct notifier_block *nfb, unsigned long action,
 		/* Idle task back to normal (off runqueue, low prio) */
 		rq = task_rq_lock(rq->idle, &flags);
 		deactivate_task(rq->idle, rq);
-		__setscheduler(rq->idle, SCHED_NORMAL, MAX_PRIO);
+		rq->idle->static_prio = MAX_PRIO;
+		__setscheduler(rq->idle, SCHED_NORMAL, 0);
 		task_rq_unlock(rq, &flags);
 		BUG_ON(rq->nr_running != 0);
......
@@ -1004,7 +1004,7 @@ static inline void break_cow(struct vm_area_struct * vma, struct page * new_page
 	flush_cache_page(vma, address);
 	entry = maybe_mkwrite(pte_mkdirty(mk_pte(new_page, vma->vm_page_prot)),
 			      vma);
-	ptep_establish(vma, address, page_table, entry, 1);
+	ptep_establish(vma, address, page_table, entry);
 	update_mmu_cache(vma, address, entry);
 }
@@ -1056,7 +1056,7 @@ static int do_wp_page(struct mm_struct *mm, struct vm_area_struct * vma,
 		flush_cache_page(vma, address);
 		entry = maybe_mkwrite(pte_mkyoung(pte_mkdirty(pte)),
 				      vma);
-		ptep_establish(vma, address, page_table, entry, 1);
+		ptep_set_access_flags(vma, address, page_table, entry, 1);
 		update_mmu_cache(vma, address, entry);
 		pte_unmap(page_table);
 		spin_unlock(&mm->page_table_lock);
@@ -1646,7 +1646,7 @@ static inline int handle_pte_fault(struct mm_struct *mm,
 		entry = pte_mkdirty(entry);
 	}
 	entry = pte_mkyoung(entry);
-	ptep_establish(vma, address, pte, entry, write_access);
+	ptep_set_access_flags(vma, address, pte, entry, write_access);
 	update_mmu_cache(vma, address, entry);
 	pte_unmap(pte);
 	spin_unlock(&mm->page_table_lock);
......
@@ -284,6 +284,7 @@ void __vunmap(void *addr, int deallocate_pages)
 	if ((PAGE_SIZE-1) & (unsigned long)addr) {
 		printk(KERN_ERR "Trying to vfree() bad address (%p)\n", addr);
+		WARN_ON(1);
 		return;
 	}
@@ -291,6 +292,7 @@ void __vunmap(void *addr, int deallocate_pages)
 	if (unlikely(!area)) {
 		printk(KERN_ERR "Trying to vfree() nonexistent vm area (%p)\n",
 				addr);
+		WARN_ON(1);
 		return;
 	}
......
@@ -495,6 +495,11 @@ static void tbf_walk(struct Qdisc *sch, struct qdisc_walker *walker)
 	}
 }

+static struct tcf_proto **tbf_find_tcf(struct Qdisc *sch, unsigned long cl)
+{
+	return NULL;
+}
+
 static struct Qdisc_class_ops tbf_class_ops =
 {
 	.graft		=	tbf_graft,
@@ -504,6 +509,7 @@ static struct Qdisc_class_ops tbf_class_ops =
 	.change		=	tbf_change_class,
 	.delete		=	tbf_delete,
 	.walk		=	tbf_walk,
+	.tcf_chain	=	tbf_find_tcf,
 	.dump		=	tbf_dump_class,
 };
......