- 06 Jul, 2003 21 commits
-
-
Andrew Morton authored
- xfs printk warning fix (dev_t is ulong on ppc64) - unused var in serial_remove() (Daniele Bellucci <bellucda@tiscali.it>)
-
Andrew Morton authored
From: Mikael Pettersson <mikpe@csd.uu.se> This patch fixes two p->thread_info->cpu occurrences in kernel/sched.c to use the task_cpu(p) macro instead, which is optimised on UP. Although one of the occurrences is under #ifdef CONFIG_SMP, it's bad style to use the raw non-optimisable form in non-arch code.
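For reference, a minimal sketch of the accessor in question (an approximation of the 2.5-era definition, not quoted verbatim from the tree):

    #ifdef CONFIG_SMP
    static inline unsigned int task_cpu(const struct task_struct *p)
    {
        return p->thread_info->cpu;   /* real per-task CPU on SMP */
    }
    #else
    static inline unsigned int task_cpu(const struct task_struct *p)
    {
        return 0;                     /* constant on UP, so the compiler can fold it away */
    }
    #endif

Using task_cpu(p) therefore costs nothing on SMP and lets UP kernels drop the load entirely.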
-
Linus Torvalds authored
in the networking code. From YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
-
Greg Ungerer authored
Architecture specific flat loader code for v850 moved into its own v850 flat.h header. This patch also adds support for a number of relocation cases that need to be handled at load time. Most of this code is originally from Miles Bader <miles@gnu.org>.
-
Greg Ungerer authored
Architecture specific flat loader code for m68knommu moved into its own m68knommu flat.h header. Part of the shared library flat loader update.
-
Greg Ungerer authored
Architecture specific flat loader code for H8/300 moved into its own H8/300 flat.h header.
-
Greg Ungerer authored
This patch adds shared library support to the MMU-less application loader, binfmt_flat. This is not new: it is a forward port of the same support from the MMU-less 2.4.x kernels, and has been running for well over a year now. The code is conditionally compiled on CONFIG_BINFMT_FLAT_SHARED. This change also abstracts a bit more architecture dependent code into the separate flat.h includes. Basically, relocations within an application also carry a tag to identify what they refer to (this code, or which shared library). This is patched as before at load/run-time with an appropriate address.
-
Greg Ungerer authored
Unify access_ok for all m68knommu targets. All targets use the common linker script and have common end symbols. So now we can just use a simple check.
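Roughly, the unified check can look like the sketch below; memory_start/memory_end stand in for whatever end symbols the common linker script actually exports.

    /* illustrative only: a flat bounds check against the RAM limits
     * defined by the shared m68knommu linker script */
    extern unsigned long memory_start, memory_end;

    #define access_ok(type, addr, size) \
        (((unsigned long)(addr) >= memory_start) && \
         ((unsigned long)(addr) + (size) < memory_end))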
-
Greg Ungerer authored
Remove "%d0" register from clobber list of down_trylock() for m68knommu. It is not used by the asm code here at all.
-
Greg Ungerer authored
Force PAGE_SIZE for the m68knommu architecture to be an unsigned long. This makes it consistent with all other architectures and cleans up a load of compiler warnings.
-
Greg Ungerer authored
Conditionally copy the ROMfs filesystem on the Motorola M5307C3 target board only if using a ROMfs.
-
Greg Ungerer authored
Allow setting boot time parameters at configuration for Motorola 5282 targets.
-
Linus Torvalds authored
This improves cold-cache program startup noticeably for me, and simplifies the read-ahead logic at the same time. The rules for read-ahead are:
- if the vma is marked random, we just do the regular one-page case. Obvious.
- if the vma is marked "linear access", we use the regular readahead code. No change in behaviour there (well, we also only consider it a _miss_ if it was marked linear access - the "readahead" and "readaround" things are now totally independent of each other).
- otherwise, we look at how many hits/misses we've had for this particular file open for mmap, and if we've had noticeably more misses than hits, we don't bother with read-around.
In particular, this means that the "real" read-ahead logic literally only needs to worry about finding sequential accesses, and does not have to worry about the common executable mmap access patterns that have very different behaviour. Some constant tweaking may be a good idea.
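Schematically, the fault-time choice described above looks something like this (a sketch using the hit/miss counters mentioned in the text; names such as mmap_miss and MMAP_LOTSAMISS are illustrative rather than quoted):

    /* sketch of the read-ahead decision in the mmap fault path */
    if (VM_RandomReadHint(vma)) {
        /* random access: read just the faulting page */
        page = page_cache_read(file, pgoff);
    } else if (VM_SequentialReadHint(vma)) {
        /* linear access: regular readahead (the only path that counts misses) */
        page_cache_readahead(mapping, ra, file, pgoff);
    } else if (ra->mmap_miss > ra->mmap_hit + MMAP_LOTSAMISS) {
        /* mostly misses so far: read-around is not paying off, skip it */
        page = page_cache_read(file, pgoff);
    } else {
        /* default: read around the faulting page */
        page_cache_readaround(mapping, ra, file, pgoff);
    }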
-
Ingo Molnar authored
In add_timer_internal() we simply leave the timer pending forever if the expiry is more than 0xffffffff jiffies away. This means more than 48 days on e.g. ia64 - which is not an unrealistic timeout. IIRC crond is happy to use extremely large timeouts. It's better to time out early (if you can call 48 days "early") than to not time out at all.
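One way to express the fix being argued for is to clamp the index instead of parking the timer forever - a sketch against the timer-wheel insertion path, with names mirroring kernel/timer.c but not quoted verbatim:

    unsigned long idx = expires - base->timer_jiffies;

    if (idx > 0xffffffffUL) {
        /* too far out for the outermost wheel level: fire at the
         * latest representable time rather than never */
        idx = 0xffffffffUL;
        expires = idx + base->timer_jiffies;
    }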
-
Bernardo Innocenti authored
This offers a generic do_div64() that actually does the right thing, unlike some architectures that "optimized" the 64-by-32 divide into just a 32-bit divide. Both ppc and sh were already providing an assembly optimized __div64_32(). I called my function the same, so that their optimized versions will automatically override mine in lib.a. I've only tested extensively on m68knommu (uClinux) and made sure generated code is reasonably short. Should be ok also on parisc, since it's the same algorithm they were using before.
- add generic C implementations of the do_div() for 32bit and 64bit archs in asm-generic/div64.h;
- add generic library support function __div64_32() to handle the full 64/32 case on 32bit archs;
- kill multiple copies of generic do_div() in architecture specific subdirs. Most copies were either buggy or not doing what they were supposed to do;
- ensure all surviving instances of do_div() have their parameters correctly parenthesized to avoid funny side-effects;
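For context, the shape of the generic interface this describes - a sketch along the lines of asm-generic/div64.h, where n is a 64-bit lvalue, the divisor fits in 32 bits, and the macro evaluates to the 32-bit remainder:

    #if BITS_PER_LONG == 64
    /* 64bit archs: the hardware divide handles 64/32 directly */
    # define do_div(n, base) ({                                \
        uint32_t __rem = (uint64_t)(n) % (uint32_t)(base);     \
        (n) = (uint64_t)(n) / (uint32_t)(base);                \
        __rem;                                                 \
    })
    #else
    /* 32bit archs: fall back to the library helper for the hard case */
    extern uint32_t __div64_32(uint64_t *dividend, uint32_t divisor);

    # define do_div(n, base) ({                                \
        uint32_t __rem;                                        \
        if (((n) >> 32) == 0) {                                \
            __rem = (uint32_t)(n) % (base);                    \
            (n) = (uint32_t)(n) / (base);                      \
        } else                                                 \
            __rem = __div64_32(&(n), (base));                  \
        __rem;                                                 \
    })
    #endif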
-
Paul Fulghum authored
Fix arbitration between net open and tty open. Clean up missed bits of CUA device removal changes.
-
Paul Fulghum authored
Fix arbitration between net open and tty open. Clean up unused locals resulting from latest tty changes.
-
Paul Fulghum authored
Fix arbitration between net open and tty open. Clean up unused local resulting from latest tty changes.
-
Benjamin Herrenschmidt authored
From Mikael Petterson: Booting kernel 2.5.74 on a PowerMac with CONFIG_BLK_DEV_IDE_PMAC=y results in an oops during IDE init, and the box then reboots. The patch below updates drivers/ide/ppc/pmac.c to also set up the hwif->ide_dma_queued_off and hwif->ide_dma_queued_on function pointers, which fixes the oops. Tested on my ancient PM4400.
-
Pavel Machek authored
I no longer have the time/interest in nbd, and Paul agreed to take it over.
-
Anton Blanchard authored
The compat ioctls for device mapper were not being enabled due to an incorrect config option.
-
- 05 Jul, 2003 19 commits
-
-
Andrew Morton authored
This tweaks the mmap read-ahead behaviour so that the prefaulting is largely pointless.
- double the minimum readaround chunksize in page_cache_readaround().
- when a seek is detected, collapse the window more slowly.
-
Krzysztof Halasa authored
-
Andrew Morton authored
i2o_scsi.c now needs pci.h.
-
Andrew Morton authored
From: ilmari@ilmari.org (Dagfinn Ilmari Mannsaker) It turns out that net/bluetooth/rfcomm/sock.c (and net/bluetooth/hci_sock.c) had been left out when net_proto_family gained an owner field, here's a patch that fixes them both.
-
Andrew Morton authored
From: junkio@cox.net Sigh. Is there a gcc option to tell it to not accept this incompatible C99 extension?
-
Andrew Morton authored
From: Arvind Kandhare <arvind.kan@wipro.com> When switch_uid is called, the reference count of the new user is incremented twice. I think the increment in switch_uid is done because of the reparent_to_init() function, which does not increase the __count for the root user. But if switch_uid is called from any other function, the reference count has already been incremented by the caller via alloc_uid for the new user. Hence the count is incremented twice. The user struct will then not be deleted even when no processes hold a reference to it. This does not cause any problem currently because nothing depends on timely deletion of the user struct.
-
Andrew Morton authored
From: Davide Libenzi <davidel@xmailserver.org>
- Inline eventpoll_release() so that __fput() does not need to call into the epoll code if the file itself is not registered inside an epoll fd
- Add <linux/types.h> inclusion due to __u32 and __u64 usage
- Fix debug printf that would otherwise panic if enabled with the new epoll code
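The inlining in the first item amounts to keeping the cheap test in the header, along these lines (a sketch assuming f_ep_links is the per-file list of epoll registrations):

    static inline void eventpoll_release(struct file *file)
    {
        /* common case: the file was never added to an epoll fd,
         * so __fput() never has to call into the epoll code */
        if (likely(list_empty(&file->f_ep_links)))
            return;
        eventpoll_release_file(file);   /* out-of-line slow path */
    }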
-
Andrew Morton authored
From: Davide Libenzi <davidel@xmailserver.org>
- Remove a couple of impossible debug checks (unsigneds cannot be negative!)
- If __alloc_bootmem_core() fails with a goal and unaligned node_boot_start it'll loop forever.
-
Andrew Morton authored
If de_thread() fails in flush_old_exec() then we try to fail the execve(). That is a bad move, because exec_mmap() has already switched the current process over to the new mm. The new process is not yet sufficiently set up to handle the error and the kernel doublefaults and dies. exec_mmap() is the point of no return. Change flush_old_exec() to call de_thread() before running exec_mmap() so the execing program sees the error. I added fault injection to both de_thread() and exec_mmap() - everything now survives OK.
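In other words, the ordering in flush_old_exec() becomes (heavily simplified sketch):

    /* do the step that may fail first, while execve() can still
     * return an error to the caller */
    retval = de_thread(current);
    if (retval)
        goto out;

    /* only now cross the point of no return */
    retval = exec_mmap(bprm->mm);
    if (retval)
        goto mmap_failed;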
-
Andrew Morton authored
From: Nick Piggin <piggin@cyberone.com.au> Add some comments to the request allocation code.
-
Andrew Morton authored
- pass gfp_flags to get_io_context(): not all callers are forced to use GFP_ATOMIC().
- fix locking in get_io_context(): bump the refcount while in the exclusive region.
- don't go oops in get_io_context() if the kmalloc failed.
- in as_get_io_context(): fail the whole thing if we were unable to allocate the AS-specific part.
- as_remove_queued_request() cleanup
-
Andrew Morton authored
From: Nick Piggin <piggin@cyberone.com.au> The following patch gets batching working how it should be. After a process is woken up, it is allowed to allocate up to 32 requests for 20ms. It does not stop other processes submitting requests if it isn't submitting though. This should allow less context switches, and allow batches of requests from each process to be sent to the io scheduler instead of 1 request from each process. tiobench sequential writes are more than tripled, random writes are nearly doubled over mm1. In earlier tests I generally saw better CPU efficiency but it doesn't show here. There is still debug to be taken out. It's also only on UP.

                               Avg       Maximum     Lat%   Lat%   CPU
Identifier            Rate   (CPU%)    Latency     Latency   >2s   >10s   Eff
------------------- ------ --------- ---------- ------- ------ ----
-2.5.71-mm1          11.13   3.783%      46.10   24668.01   0.84   0.02   294
+2.5.71-mm1          13.21   4.489%      37.37    5691.66   0.76   0.00   294

Random Reads
------------------- ------ --------- ---------- ------- ------ ----
-2.5.71-mm1           0.97   0.582%     519.86    6444.66  11.93   0.00   167
+2.5.71-mm1           1.01   0.604%     484.59    6604.93  10.73   0.00   167

Sequential Writes
------------------- ------ --------- ---------- ------- ------ ----
-2.5.71-mm1           4.85   4.456%      77.80   99359.39   0.18   0.13   109
+2.5.71-mm1          14.11   14.19%      10.07   22805.47   0.09   0.04    99

Random Writes
------------------- ------ --------- ---------- ------- ------ ----
-2.5.71-mm1           0.46   0.371%      14.48    6173.90   0.23   0.00   125
+2.5.71-mm1           0.86   0.744%      24.08    8753.66   0.31   0.00   115

It decreases context switch rate on IBM's 8-way on ext2 tiobench 64 threads from ~2500/s to ~140/s on their regression tests.
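The batching test itself reduces to something like the sketch below, assuming a per-process io_context that records when the task was last woken and how many batch requests it has left (the 32-request / 20ms figures are the ones quoted above):

    #define BLK_BATCH_TIME  (HZ / 50)   /* roughly 20ms */
    #define BLK_BATCH_REQ   32

    /* sketch: may this task keep allocating even though the queue is full? */
    static int ioc_batching(struct io_context *ioc)
    {
        if (!ioc)
            return 0;
        return ioc->nr_batch_requests > 0 &&
               time_before(jiffies, ioc->last_waited + BLK_BATCH_TIME);
    }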
-
Andrew Morton authored
From: Nick Piggin <piggin@cyberone.com.au> Generalise the AS-specific per-process IO context so that other IO schedulers could use it.
-
Andrew Morton authored
From: Nick Piggin <piggin@cyberone.com.au> This patch fixes the request batching fairness/starvation issue. It's not clear what is going on with 2.4, but it seems that it's a problem around this area. Anyway, previously:
* request queue fills up
* process 1 calls get_request, sleeps
* a couple of requests are freed
* process 2 calls get_request, proceeds
* a couple of requests are freed
* process 2 calls get_request...
Now as unlikely as it seems, it could be a problem. It's a fairness problem that process 2 can skip ahead of process 1 anyway. With the patch:
* request queue fills up
* any process calling get_request will sleep
* once the queue gets below the batch watermark, processes start being woken, and may allocate.
This patch includes Chris Mason's fix to only clear queue_full when all tasks have been woken. Previously I think starvation and unfairness could still occur. With this change to the blk-fair-batches patch, Chris is showing some much improved numbers for 2.4 - 170 ms max wait vs 2700ms without blk-fair-batches for a dbench 90 run. He didn't indicate how much difference his patch alone made, but it is an important fix I think.
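On the request-free side, Chris Mason's fix corresponds roughly to the following (hypothetical sketch - the detail that matters is that queue_full is only cleared once the wait queue has drained):

    /* sketch of the path run when a request is freed */
    if (rl->count[rw] + 1 <= q->nr_requests) {
        if (waitqueue_active(&rl->wait[rw]))
            wake_up(&rl->wait[rw]);
        else
            /* nobody left waiting: safe to let new arrivals
             * allocate directly again */
            blk_clear_queue_full(q, rw);
    }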
-
Andrew Morton authored
From: Nick Piggin <piggin@cyberone.com.au> If there are no requests in flight against the target device and get_request() fails, nothing will wake us up. Fix.
-
Andrew Morton authored
From: Nick Piggin <piggin@cyberone.com.au> This patch implements a hint so that AS can tell the request allocator to allocate a request even if there are none left (the accounting is quite flexible and easily handles overallocations). elv_may_queue semantics have changed from "the elevator does _not_ want another request allocated" to "the elevator _insists_ that another request is allocated". I couldn't see any harm ;) Now in practice, AS will only allow _1_ request over the limit, because as soon as the request is sent to AS, it stops anticipating.
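With the new semantics the allocator side reads roughly like this sketch (names are illustrative): the limit check is only overridden when the elevator insists.

    /* sketch inside get_request() */
    if (rl->count[rw] >= q->nr_requests && !elv_may_queue(q, rw))
        goto out;   /* full, and the elevator does not insist */

    /* otherwise allocate, even if that briefly overshoots the limit */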
-
Andrew Morton authored
From: Nick Piggin <piggin@cyberone.com.au> Now that we are counting requests (not requests free), this patch changes the congested & batch watermarks to be more logical. Also a minor fix to the sysfs code.
-
Andrew Morton authored
From: Nick Piggin <piggin@cyberone.com.au> This gets rid of the global queue_nr_requests and usage of BLKDEV_MAX_RQ (the latter is now only used to set the queues' defaults). The queue depth becomes per-queue, controlled by a sysfs entry.
-
Andrew Morton authored
Using keventd for running request_fns is risky because keventd itself can block on disk I/O. Use the new kblockd kernel threads for the generic unplugging.
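The mechanism is essentially a dedicated workqueue, something like this sketch (the helper name mirrors the kblockd interface but is not quoted verbatim):

    static struct workqueue_struct *kblockd_workqueue;

    int kblockd_schedule_work(struct work_struct *work)
    {
        /* unplug work goes here instead of to keventd, which may
         * itself be blocked on disk I/O */
        return queue_work(kblockd_workqueue, work);
    }

    static int __init blk_dev_init(void)
    {
        kblockd_workqueue = create_workqueue("kblockd");
        if (!kblockd_workqueue)
            panic("Failed to create kblockd workqueue\n");
        return 0;
    }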
-