- 21 Sep, 2002 2 commits
-
-
Alexander Viro authored
cdu31a switched to use of gendisk
-
Alexander Viro authored
pcd switched to use of gendisk
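Both conversions follow the same basic pattern of registering a struct gendisk with the block layer. Below is a minimal sketch of that registration using the 2.5/2.6-era helpers (alloc_disk(), set_capacity(), add_disk()); the major number, capacity and names are illustrative stand-ins rather than the actual cdu31a/pcd values, the request-queue setup is omitted, and the exact helper set at this point in 2.5 may differ in detail.

    #include <linux/genhd.h>
    #include <linux/fs.h>
    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/errno.h>

    /* Illustrative values -- not the real cdu31a/pcd numbers. */
    #define EXAMPLE_MAJOR    240
    #define EXAMPLE_SECTORS  (700 * 1024 * 2)    /* ~700MB in 512-byte sectors */

    static struct block_device_operations example_fops;   /* open/release/ioctl */
    static struct gendisk *example_disk;

    static int __init example_attach(void)
    {
            example_disk = alloc_disk(1);          /* one minor, no partitions */
            if (!example_disk)
                    return -ENOMEM;

            example_disk->major = EXAMPLE_MAJOR;
            example_disk->first_minor = 0;
            example_disk->fops = &example_fops;
            sprintf(example_disk->disk_name, "examplecd");
            set_capacity(example_disk, EXAMPLE_SECTORS);
            /* example_disk->queue = ...;  request queue setup omitted */

            add_disk(example_disk);                /* device becomes visible here */
            return 0;
    }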
-
- 20 Sep, 2002 3 commits
-
-
Linus Torvalds authored
-
Jens Axboe authored
-
Linus Torvalds authored
Merge http://ppc.bkbits.net/for-linus-ppc64
into home.transmeta.com:/home/torvalds/v2.5/linux
-
- 21 Sep, 2002 11 commits
-
-
Anton Blanchard authored
into samba.org:/scratch/anton/linux-2.5_ppc64
-
Anton Blanchard authored
into samba.org:/scratch/anton/linux-2.5_ppc64
-
Anton Blanchard authored
into samba.org:/scratch/anton/linux-2.5_ppc64_new
-
Anton Blanchard authored
-
Anton Blanchard authored
-
Anton Blanchard authored
-
Anton Blanchard authored
-
Anton Blanchard authored
into samba.org:/scratch/anton/linux-2.5_ppc64_new
-
Anton Blanchard authored
-
Anton Blanchard authored
into samba.org:/scratch/anton/linux-2.5_ppc64_Makefilecleanup
-
Anton Blanchard authored
into samba.org:/scratch/anton/linux-2.5_ppc64_Makefilecleanup
-
- 20 Sep, 2002 9 commits
-
-
Anton Blanchard authored
into samba.org:/scratch/anton/linux-2.5_ppc64_new
-
Anton Blanchard authored
into samba.org:/scratch/anton/linux-2.5_ppc64_new
-
Anton Blanchard authored
into samba.org:/scratch/anton/linux-2.5_ppc64
-
Anton Blanchard authored
into samba.org:/scratch/anton/linux-2.5_ppc64_Makefilecleanup
-
Anton Blanchard authored
into samba.org:/scratch/anton/linux-2.5_ppc64_Makefilecleanup
-
Anton Blanchard authored
into samba.org:/scratch/anton/linux-2.5_ppc64_new
-
Anton Blanchard authored
into samba.org:/scratch/anton/linux-2.5_ppc64_new
-
Anton Blanchard authored
into samba.org:/scratch/anton/linux-2.5_ppc64
-
Ingo Molnar authored
The attached patch (against BK-curr) fixes a bug in the new PID allocator that can cause incorrect hashing of the PID structure, which in turn causes infinite loops in find_pid() (and potentially other problems).
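For context, the invariant at stake is the usual hash-table one: the bucket used when attaching a PID must be derived from the same value that find_pid() later hashes, otherwise lookups walk the wrong (or a corrupted) chain. A simplified user-space sketch of that invariant, with hypothetical names rather than the kernel's actual pid_hash code:

    #include <stddef.h>

    #define PIDHASH_SZ 256

    struct pid_entry {
            int nr;
            struct pid_entry *next;
    };

    static struct pid_entry *pid_hash[PIDHASH_SZ];

    static unsigned pid_hashfn(int nr)
    {
            return (unsigned)nr % PIDHASH_SZ;
    }

    /* Attach and lookup must hash the same key; hashing a wrong or stale
     * value at attach time puts the entry on a chain that the lookup
     * never walks, so lookups go wrong. */
    static void attach_pid_sketch(struct pid_entry *p, int nr)
    {
            unsigned b = pid_hashfn(nr);

            p->nr = nr;
            p->next = pid_hash[b];
            pid_hash[b] = p;
    }

    static struct pid_entry *find_pid_sketch(int nr)
    {
            struct pid_entry *p;

            for (p = pid_hash[pid_hashfn(nr)]; p; p = p->next)
                    if (p->nr == nr)
                            return p;
            return NULL;
    }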
-
- 19 Sep, 2002 15 commits
-
-
Anton Blanchard authored
-
Anton Blanchard authored
into samba.org:/scratch/anton/linux-2.5_ppc64_new
-
Linus Torvalds authored
into home.transmeta.com:/home/torvalds/v2.5/linux
-
Andrew Morton authored
Patch from Rohit Seth: allow hugetlb pages to be allocated from the highmem zone.
-
Andrew Morton authored
From Marcus Alanen <maalanen@ra.abo.fi>

Don't retake the zone lock after spilling a batch of pages into the buddy. Instead, just clear the local variable `zone' to indicate that no lock is held. This is actually a common case: whenever release_pages() is called with exactly 16 pages (truncate, page reclaim, ...), Marcus' patch will save a lock and an unlock.

Also, remove some lock-avoidance heuristics in pagevec_deactivate_inactive(): the caller has already made these checks, and the chance of the check here actually doing anything useful is negligible.
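A user-space sketch of the locking pattern described above, with simplified stand-in types rather than the real release_pages()/pagevec code; the point is that after a batch is spilled (which requires dropping the zone lock) the local is cleared to NULL, so the next iteration takes the lock fresh only when it actually needs it.

    #include <pthread.h>
    #include <stddef.h>

    /* Stand-ins for the kernel structures, for illustration only. */
    struct zone {
            pthread_mutex_t lru_lock;
    };

    struct page {
            struct zone *zone;
    };

    static void spill_batch_to_buddy(struct page **batch, int n)
    {
            /* stand-in for freeing a pagevec to the buddy allocator */
            (void)batch; (void)n;
    }

    static void release_pages_sketch(struct page **pages, int nr)
    {
            struct zone *locked = NULL;   /* zone whose lru_lock we hold, if any */
            struct page *batch[16];
            int nbatch = 0;

            for (int i = 0; i < nr; i++) {
                    struct zone *z = pages[i]->zone;

                    if (z != locked) {
                            if (locked)
                                    pthread_mutex_unlock(&locked->lru_lock);
                            pthread_mutex_lock(&z->lru_lock);
                            locked = z;
                    }

                    /* ... pull the page off the LRU under locked->lru_lock ... */
                    batch[nbatch++] = pages[i];

                    if (nbatch == 16) {
                            /* Spilling to the buddy wants the zone lock dropped;
                             * clearing 'locked' records that no lock is held, so
                             * the next iteration takes it fresh instead of the
                             * old code's unconditional re-take. */
                            pthread_mutex_unlock(&locked->lru_lock);
                            locked = NULL;
                            spill_batch_to_buddy(batch, nbatch);
                            nbatch = 0;
                    }
            }

            if (locked)
                    pthread_mutex_unlock(&locked->lru_lock);
            if (nbatch)
                    spill_batch_to_buddy(batch, nbatch);
    }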
-
Andrew Morton authored
- Spell Jeremy's name correctly.
- Fix compile warning in raw.c.
- Do a waitqueue_active() test before waking klogd in printk (sketched below). Not only is it negligibly faster, but the wake_up() in there causes deadlocks when you try to print debug info out from inside scheduler code. This patch gives a delightfully obscure way of avoiding the deadlock: kill off klogd.
- Fix a couple of compile warnings in the mtrr code.
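A hedged sketch of the waitqueue_active() pattern from the third item; the names are stand-ins and this is not the exact printk.c hunk.

    #include <linux/wait.h>

    /* wake_up() grabs the waitqueue lock unconditionally, which is what can
     * deadlock when printk() is called from inside scheduler code; checking
     * waitqueue_active() first skips the wake-up entirely when nothing
     * (e.g. no klogd) is sleeping on the queue. */
    static DECLARE_WAIT_QUEUE_HEAD(log_wait_sketch);

    static void wake_log_readers(void)
    {
            if (waitqueue_active(&log_wait_sketch))
                    wake_up_interruptible(&log_wait_sketch);
    }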
-
Andrew Morton authored
From Christoph Hellwig, acked by Jens.
- Remove some unneeded runtime initializers.
- Remove the explicit call to hd_init(): it already goes through module_init(), so we're currently running hd_init() twice (sketched below).
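The double initialisation is a consequence of the initcall mechanism; a minimal sketch (not the actual hd.c code) of why the explicit call is redundant:

    #include <linux/init.h>
    #include <linux/module.h>

    /* Sketch only: hd_init() is already registered as an initcall... */
    static int __init hd_init_sketch(void)
    {
            /* register the major number, request queue, gendisk, IRQ, ... */
            return 0;
    }
    module_init(hd_init_sketch);

    /* ...so any additional explicit hd_init() call elsewhere runs the whole
     * registration a second time.  The fix is simply to delete the explicit
     * call and rely on module_init() alone. */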
-
Andrew Morton authored
From Christoph Hellwig, acked by Rohit.
- Fix the config.in description: we know we're on i386, and we also know that the feature can only be enabled if the hardware supports it; the code alone is not enough.
- The sysctl is VM-related, so move it from /proc/sys/kernel to /proc/sys/vm.
- Adopt the standard sysctl names.
-
Andrew Morton authored
From Christoph Hellwig. There are no lock_kernel() calls in mm/
-
Andrew Morton authored
From Hubertus Franke. The MAP_LOCKED flag to mmap() currently does nothing. Hubertus' patch fixes it so that the relevant mapping is locked into memory, if the caller has CAP_IPC_LOCK.
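From the caller's side, the fixed behaviour means a MAP_LOCKED mapping acts as if the mmap() were followed by an mlock() of the same range, provided the process has CAP_IPC_LOCK. A minimal user-space usage sketch (assuming an anonymous private mapping; error handling trimmed to the essentials):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
            size_t len = 1 << 20;   /* 1MB */

            /* With the fix, MAP_LOCKED locks the pages into memory at mmap()
             * time; per the commit text, the locking only takes effect when
             * the caller has CAP_IPC_LOCK. */
            void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS | MAP_LOCKED, -1, 0);
            if (p == MAP_FAILED) {
                    perror("mmap");
                    return 1;
            }

            /* The pages backing [p, p+len) should now be resident. */
            munmap(p, len);
            return 0;
    }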
-
Andrew Morton authored
Somebody somewhere is stomping on PF_NOWARN, and page allocation failure warnings are coming out of the wrong places. So change the handling of current->flags to be:

    int pf_flags = current->flags;

    current->flags |= PF_NOWARN;
    ...
    current->flags = pf_flags;

which is a generally more robust approach.
-
Andrew Morton authored
- writev currently returns -EFAULT if _any_ of the segments has an invalid address. We should only return -EFAULT if the first segment has a bad address; if some of the first segments have valid addresses we need to write them and return a partial result.
- The current code only checks whether the sum-of-lengths is negative. If individual segments have a negative length but the sum is positive we miss that. So rework the code to detect this, and to be immune to odd wrapping situations. As a bonus, we save one pass across the iovec.
- Ditto for readv. The check for "does any segment have a negative length" has already been performed in do_readv_writev(), but it's basically free here, and we need to do it for generic_file_read/write anyway.

This all means that the iov_length() function is unsafe because of wrap/overflow issues. It should only be used after the generic_file_read/write or do_readv_writev() checking has been performed. Its callers have been reviewed and they are OK.

The code now passes LTP testing and has been QA'd by Janet's team.
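The overflow point is the subtle one: summing segment lengths can wrap back to a positive total even though an individual iov_len is bogus, so each segment has to be checked as it is accumulated, and the segment count clamped at the first bad one so a partial transfer is still possible. A simplified user-space sketch of that check (not the actual do_readv_writev() code; names are hypothetical):

    #include <limits.h>
    #include <sys/types.h>
    #include <sys/uio.h>

    /* Returns the number of bytes the request may transfer and, via
     * *usable, how many leading segments are valid; returns -1 only when
     * the very first segment is already invalid, mirroring the rule that
     * we fail outright only if no data at all could be transferred. */
    static ssize_t check_iovec_sketch(const struct iovec *iov, int nr_segs,
                                      int *usable)
    {
            ssize_t total = 0;
            int i;

            for (i = 0; i < nr_segs; i++) {
                    ssize_t len = (ssize_t)iov[i].iov_len;

                    /* negative length, or a sum that would overflow */
                    if (len < 0 || total > SSIZE_MAX - len)
                            break;
                    total += len;
            }
            if (i == 0)
                    return -1;
            *usable = i;
            return total;
    }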
-
Andrew Morton authored
A patch from Hirokazu Takahashi to further speed up the new writev code. Instead of running ->prepare_write/->commit_write for each individual segment, we walk the segments between prepare and commit. So potentially much larger amounts of data are passed to commit_write(), and prepare_write() is called much less often. Added bonus: the segment walk happens inside the kmap_atomic(), so we run kmap_atomic() once per page, not once per segment.

We've demonstrated a speedup of over 3x when writing 1024-segment iovecs whose individual segments have an average length of 24 bytes, which is a favourable case for this patch.
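In outline, the win comes from filling each pagecache page from as many iovec segments as needed inside a single prepare/commit pair (and a single kmap_atomic()) rather than once per segment. A self-contained user-space sketch of the per-page segment walk; the cursor type and function names are illustrative, not the kernel's:

    #include <string.h>
    #include <sys/uio.h>

    struct iov_cursor {
            const struct iovec *iov;
            int nr_segs;
            int seg;                /* current segment */
            size_t off;             /* offset within the current segment */
    };

    /* Copy 'bytes' of data into one mapped page from as many segments as
     * it takes, advancing the cursor across segment boundaries. */
    static size_t copy_segments_into_page(char *kaddr, size_t bytes,
                                          struct iov_cursor *cur)
    {
            size_t copied = 0;

            while (copied < bytes && cur->seg < cur->nr_segs) {
                    const struct iovec *v = &cur->iov[cur->seg];
                    size_t avail = v->iov_len - cur->off;
                    size_t chunk = bytes - copied;

                    if (chunk > avail)
                            chunk = avail;
                    memcpy(kaddr + copied,
                           (const char *)v->iov_base + cur->off, chunk);
                    copied += chunk;
                    cur->off += chunk;
                    if (cur->off == v->iov_len) {   /* segment exhausted */
                            cur->seg++;
                            cur->off = 0;
                    }
            }
            return copied;
    }

    /* In the real path this loop sits between ->prepare_write() and
     * ->commit_write() for the page, with kaddr coming from one
     * kmap_atomic() per page rather than one per segment. */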
-
Andrew Morton authored
Silly bug which was halving swapout bandwidth: we took a copy of page->mapping into a local convenience variable, but forgot to update that local after adding the page to swapcache.
-
Andrew Morton authored
This was designed to be a really stern throttling threshold: if dirty memory reaches this level then perform writeback and actually wait on it. It doesn't work, because memory dirtiers are already required to perform writeback once the amount of dirty AND writeback memory exceeds dirty_async_ratio. So kill it, and rely just on the request queues being appropriately scaled to the machine size (they are). This is basically what 2.4 does.
-