- 10 Apr, 2002 8 commits
-
-
Hans Reiser authored
This patch adds forgotten metadata journaling for the case where we free blocks after tail conversion failures. Found and fixed by Chris Mason.
-
Hans Reiser authored
This patch fixes a case where a flag was not set at inode-read time, which prevented 32-bit uid/gid from working correctly.
-
Hans Reiser authored
This patch converts the pap14030 panic into a warning. While doing this, a bug was uncovered: when get_block() returns a failure, the buffer is still marked as mapped, and on subsequent access to this buffer get_block() is not called anymore. This is also fixed.
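A minimal sketch of the invariant the fix restores - on get_block() failure the buffer must not stay mapped - using 2.4/2.5-era buffer-head names; the actual reiserfs change may differ:

    err = get_block(inode, block, bh, 1);
    if (err) {
            /* Clear the stale mapping; leaving BH_Mapped set made
               later accesses skip get_block() entirely. */
            clear_bit(BH_Mapped, &bh->b_state);
            return err;
    }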
-
Hans Reiser authored
This patch fixes a lookup problem on big-endian platforms.
-
Hans Reiser authored
This patch fixes a problem where a directory's atime was not updated on readdir(). The patch was written by Chris Mason.
-
Martin Dalecki authored
- Integrate the TCQ stuff from Jens Axboe. Deal with the conflicts and apply some cosmetic changes. We are still not at a stage where we could immediately integrate ata_request and ata_taskfile, but we are no longer far away.
- Clean up the data transfer function in ide-disk to use ata_request structures directly.
- Kill useless leading version information in ide-disk.c.
- Replace the ATA_AR_INIT macro with an inline ata_ar_init() function.
- Replace IDE_CLEAR_TAG with ata_clear_tag().
- Replace IDE_SET_TAG with ata_set_tag().
- Kill gorgeous ide_dmafunc_verbose().
- Fix a typo in ide_enable_queued() (ide-tcq.c!).

Apparently there are still problems with a TCQ-enabled device and a non-enabled device on the same channel, but let's first synchronize up with Jens.
-
Martin Dalecki authored
- Eliminate ide_task_t and rename struct ide_task_s to struct ata_taskfile. This should become the entity which holds all data for a request in the future. If this turns out to be the case, we will just rename it to ata_request.
- Reduce the number of arguments of the ata_taskfile() function. This helps to wipe quite a lot of code out as well.

This stage is not sensitive, so let's make a patch before we start to integrate the last work of Jens Axboe.
-
Linus Torvalds authored
Merge bk://ppc.bkbits.net/for-linus-ppp
into penguin.transmeta.com:/home/penguin/torvalds/repositories/kernel/linux
-
- 11 Apr, 2002 1 commit
-
-
Paul Mackerras authored
and scheduling-in-interrupt problems we had, and also makes it much faster when handling large numbers (100s or more) of PPP units.
-
- 10 Apr, 2002 1 commit
-
-
Linus Torvalds authored
Merge bk://ppc.bkbits.net/for-linus-ppc
into penguin.transmeta.com:/home/penguin/torvalds/repositories/kernel/linux
-
- 11 Apr, 2002 2 commits
-
-
Paul Mackerras authored
and cacheflush.h in a few places where they are needed.
-
Paul Mackerras authored
flushing code a little.
-
- 10 Apr, 2002 20 commits
-
-
Steve Cameron authored
Patch to the cciss driver in 2.5.8-pre2 to use pdev->irq and other pci_dev structure elements only after calling pci_enable_device(). Morten Helgesen <admin@nextframe.net> sent me this.
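The ordering rule at issue, as a minimal probe-time sketch (the ctlr field here is illustrative, not the actual cciss code):

    if (pci_enable_device(pdev))
            return -ENODEV;
    /* pdev->irq and BAR contents are only valid once the device is
       enabled, so read them only after pci_enable_device(). */
    ctlr->intr = pdev->irq;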
-
Andy Grover authored
The latest ACPI merge accidentally clobbered another change in pci-irq.c. Here's the original patch again (it applies fine except for an offset). Thanks -- Andy
-
Linus Torvalds authored
Merge bk://linuxusb.bkbits.net/pci_hp-2.5
into penguin.transmeta.com:/home/penguin/torvalds/repositories/kernel/linux
-
Alexander Viro authored
More places where we want the size of a block device and have the relevant struct block_device * available.
-
Alexander Viro authored
Fixes races in jffs2_get_sb() - the current code has a window in which two mounts of the same mtd device can miss each other, resulting in two active instances of jffs2 fighting over the same device.
-
Alexander Viro authored
Assorted compile fixes in mtdblock.c
-
Alexander Viro authored
All places where we do blkdev_size_in_bytes(sb->s_dev) are bogus - we can get the same information from ->s_bdev without messing with kdev_t, major/minor, etc. There will be more patches of that kind - in the long run I'd expect only one caller of blkdev_size_in_bytes() to survive. The one in fs/block_dev.c, that is - called when we open the device.
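The replacement pattern, sketched with 2.5-era field names (the exact expression used by the follow-up patches may differ):

    /* The superblock already carries the block_device; its backing
       inode knows the size, so no kdev_t / major / minor games. */
    loff_t size = sb->s_bdev->bd_inode->i_size;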
-
Andrew Morton authored
Pretty simple.

- use a timer to kick off a pdflush thread every five seconds to run the kupdate code.
- wakeup_bdflush() kicks off a pdflush thread to run the current bdflush function.

There's some loss of functionality here - the ability to tune the writeback periods. The numbers are hardwired at present. But the intent is that buffer-based writeback disappears altogether. New mechanisms for tuning the writeback will need to be introduced.
-
Andrew Morton authored
This is pdflush's first application! The writeback of the unused inodes list by keventd is removed, and a pdflush thread is dispatched instead.

There is a need for exclusion - to prevent all the pdflush threads from working against the same request queue. This is implemented locally. And this is a problem, because other pdflush threads can be dispatched to write back other filesystem objects, and they don't know that there's already a pdflush thread working that request queue. So moving the exclusion into the request queue itself is on my things-to-do list.

But the code as-is works OK - under a `dbench 100' load the number of pdflush instances can grow as high as four or five. Some fine tuning is needed...
-
Andrew Morton authored
This patch implements a gang-of-threads which are designed to be used for dirty data writeback. "pdflush" -> dirty page flush, or something. The number of threads is dynamically managed by a simple demand-driven algorithm.

"Oh no, more kernel threads". Don't worry, kupdate and bdflush disappear later.

The intent is that no two pdflush threads are ever performing writeback against the same request queue at the same time. It would be wasteful to do that. My current patches don't quite achieve this; I need to move the state into the request queue itself...

The driver for implementing the thread pool was to avoid the possibility where bdflush gets stuck on one device's get_request_wait() queue while lots of other disks sit idle. Also generality, abstraction, and the need to have something in place to perform the address_space-based writeback when the buffer_head-based writeback disappears.

There is no provision inside the pdflush code itself to prevent many threads from working against the same device. That's the responsibility of the caller.

The main API function, `pdflush_operation()', attempts to find a thread to do some work for you. It is not reliable - it may return -1 and say "sorry, I didn't do that". This happens if all threads are busy. One _could_ extend pdflush_operation() to queue the work so that it is guaranteed to happen. If there's a need, that additional minor complexity can be added.
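A hedged usage sketch of the API described above (the 2.5 prototype was int pdflush_operation(void (*fn)(unsigned long), unsigned long arg0); the fallback policy below is illustrative, not prescribed by the patch):

    static void background_writeback(unsigned long arg)
    {
            /* ... write back some dirty data ... */
    }

    /* Ask for a pdflush thread; -1 means "all threads busy" and the
       caller chooses what to do - here, just do the work itself. */
    if (pdflush_operation(background_writeback, 0) < 0)
            background_writeback(0);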
-
Andrew Morton authored
page->buffers is a bit of a layering violation. Not all address_spaces have pages which are backed by buffers. The exclusive use of page->buffers for buffers means that a piece of prime real estate in struct page is unavailable to other forms of address_space.

This patch turns page->buffers into `unsigned long page->private' and sets in place all the infrastructure which is needed to allow other address_spaces to use this storage. This change allows the multipage-bio-writeout patches to use page->private to cache the results of an earlier get_block(), so repeated calls into the filesystem are not needed in the case of file overwriting.

Developers should think carefully before calling try_to_free_buffers() or block_flushpage() or writeout_one_page() or waitfor_one_page() against a page. It's only legal to do this if you *know* that the page is buffer-backed. And only the address_space knows that. Arguably, we need new a_ops for writeout_one_page() and waitfor_one_page(). But I have more patches on the boil which obsolete these functions in favour of ->writepage() and wait_on_page().

The new PG_private page bit is used to indicate that there is something at page->private. The core kernel does not know what that object actually is, just that it's there. The kernel must call a_ops->releasepage() to try to make page->private go away. And a_ops->flushpage() at truncate time.
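A minimal sketch of the rule being stated, assuming the PagePrivate() test macro that accompanied the new bit:

    /* Only the owning address_space knows what page->private holds;
       buffer-specific calls are legal only when buffers are there. */
    if (PagePrivate(page))
            try_to_free_buffers(page);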
-
Andrew Morton authored
I'd like to be able to claim amazing speedups, but the best benchmark I could find was diffing two 256-megabyte files, which is about 10% quicker. And that is probably due to the window size being effectively 50% larger. Fact is, any disk worth owning nowadays has a segmented 2-megabyte cache, and OS-level readahead mainly seems to save on CPU cycles rather than overall throughput. Once you start reading more streams than there are segments in the disk cache we start to win.

Still. The main motivation for this work is to clean the code up, and to create a central point at which many pages are marshalled together so that they can all be encapsulated into the smallest possible number of BIOs, and injected into the request layer.

A number of filesystems were poking around inside the readahead state variables. I'm not really sure what they were up to, but I took all that out. The readahead code manages its own state autonomously and should not need any hints.

- Unifies the current three readahead functions (mmap reads, read(2) and sys_readahead) into a single implementation.
- More aggressive in building up the readahead windows.
- More conservative in tearing them down.
- Special start-of-file heuristics.
- Preallocates the readahead pages, to avoid the (never demonstrated, but potentially catastrophic) scenario where allocation of readahead pages causes the allocator to perform VM writeout.
- Gets all the readahead pages gathered together in one spot, so they can be marshalled into big BIOs.
- Reinstates the readahead ioctls, so hdparm(8) and blockdev(8) are working again. The readahead settings are now per-request-queue, and the drivers never have to know about it. I use blockdev(8). It works in units of 512 bytes.
- Identifies readahead thrashing, and also attempts to handle it. Certainly the changes here delay the onset of catastrophic readahead thrashing by quite a lot, and decrease its seriousness as we get more deeply into it, but it's still pretty bad.
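For reference, the reinstated ioctls are the ones blockdev(8) drives; a minimal userspace sketch, with the device path purely illustrative:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>

    int main(void)
    {
            int fd = open("/dev/hda", O_RDONLY);  /* illustrative device */
            long ra;

            if (fd < 0)
                    return 1;
            if (ioctl(fd, BLKRAGET, &ra) == 0)
                    printf("readahead: %ld sectors of 512 bytes\n", ra);
            ioctl(fd, BLKRASET, 256);             /* 256 sectors = 128 KB */
            return 0;
    }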
-
Andrew Morton authored
Before the mempool was added, the VM was getting many, many 0-order allocation failures due to the atomic ratnode allocations inside swap_out. That monster mempool is doing its job - drove a 256meg machine a gigabyte into swap with no ratnode allocation failures at all. So we do need to trim that pool a bit, and also handle the case where swap_out fails, and not just keep pointlessly calling it.
-
Rusty Russell authored
This changes everything arch-specific in PPC and i386 that should have been unsigned long (it doesn't *matter*, but bad habits get copied to where it does matter). No object code changes
-
Rusty Russell authored
This removes gratuitous & operators in front of tty->process_char_map and tty->read_flags. No object code changes
-
Rusty Russell authored
This changes over some bogus casts, and converts the ext2, hfs and minix set-bit macros. Also changes pte and open_fds to hand over the actual bitfield rather than the whole structure. No object code changes
-
Greg Kroah-Hartman authored
driver needs this. This is already done in 2.4.x
-
Greg Kroah-Hartman authored
fixed a linker bug when the driver is compiled into the kernel.
-
Greg Kroah-Hartman authored
removed the list-multi targets, as they aren't needed anymore.
-
Greg Kroah-Hartman authored
Only build the IBM PCI hotplug driver if CONFIG_X86_IO_APIC is selected
-
- 09 Apr, 2002 7 commits
-
-
Robert Love authored
This patch implements the following calls to set and retrieve a task's CPU affinity:

    int sched_setaffinity(pid_t pid, unsigned int len, unsigned long *new_mask_ptr)
    int sched_getaffinity(pid_t pid, unsigned int len, unsigned long *user_mask_ptr)
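A hedged usage sketch against the raw interface above (no glibc wrapper existed at the time, and the later wrapper uses cpu_set_t instead, so the declaration is written out by hand):

    #include <stdio.h>
    #include <unistd.h>

    /* Declared by hand to match the interface described above. */
    extern int sched_setaffinity(pid_t pid, unsigned int len,
                                 unsigned long *new_mask_ptr);

    int main(void)
    {
            unsigned long mask = 1;   /* bit 0 set: run on CPU 0 only */

            if (sched_setaffinity(getpid(), sizeof(mask), &mask) < 0)
                    perror("sched_setaffinity");
            return 0;
    }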
-
Linus Torvalds authored
-
Alexander Viro authored
a) The part of open_namei() done after we'd found the vfsmount/dentry of the object we want to open has been split into a helper - may_open().

b) do_open() in fs/nfsctl.c didn't do any permission checks on the nfsd file it was opening - a sudden idiocy attack on my part (I missed the fact that dentry_open() doesn't do permission checks - open_namei() does). Fixed by adding the obvious may_open() calls.
-
Rusty Russell authored
As per David Mosberger's request, splits into per-arch files (solves the #include mess), and fixes my "was not an lvalue" bug.
-
Linus Torvalds authored
Cosmetic change: x86_capability. Makes it an unsigned long, and removes the gratuitous & operators (it is already an array). These produce warnings when set_bit() etc. take an unsigned long * instead of a void *. Originally from Rusty Russell
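A self-contained sketch of why the & was gratuitous, with a stub standing in for the kernel's set_bit(), which now takes unsigned long * rather than void *:

    /* Stub with the new-style prototype (unsigned long *, not void *). */
    static void set_bit(int nr, unsigned long *addr)
    {
            addr[nr / (8 * sizeof(long))] |= 1UL << (nr % (8 * sizeof(long)));
    }

    int main(void)
    {
            unsigned long caps[4] = { 0 };

            set_bit(0, caps);     /* array decays to unsigned long *: OK */
            /* set_bit(0, &caps);    &caps is unsigned long (*)[4]; a void *
               parameter swallowed that silently, unsigned long * warns. */
            return 0;
    }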
-
Linus Torvalds authored
-
Linus Torvalds authored
-
- 10 Apr, 2002 1 commit
-
-
Wim Van Sebroeck authored
i810_rng: add support for other i8xx chipsets to the Random Number Generator module. This is done by adding detection of the 82801BA(M) and 82801CA(M) I/O Controller Hubs.
-