- 18 Jun, 2002 (40 commits)
-
Andries E. Brouwer authored
-
Martin Schwidefsky authored
Some recent changes in the s390 architecture files:
1) Makefile fixes.
2) Add missing include statements.
3) Convert all parameters in the 31 bit emulation wrapper of sys_futex.
4) Remove semicolons after 'fi' in Config.in.
5) Fix scheduler defines in system.h.
6) Simplifications in qdio.c.
-
Linus Torvalds authored
Merge http://linux-isdn.bkbits.net/linux-2.5.make
into home.transmeta.com:/home/torvalds/v2.5/linux
-
Andi Kleen authored
Pure luck that this ever worked at all. The optimized assembly for XOR in RAID-5 clobbered registers, but declared them as read-only inputs. I'm pretty sure that at least the 4 disk and possibly the 5 disk cases corrupted callee-saved registers. The others probably got away with it because they were always in their own functions (clobbering only caller-saved registers) and only called via pointers, preventing inlining. Some of the replacements are a bit complicated because the functions exceed gcc's 10 asm argument limit when each input/output register needs two arguments; work around that by saving/restoring some of the registers manually. I wasn't able to test it in real life because I don't have a RAID setup and the RAID code hasn't compiled for several 2.5 releases. I wrote some test programs that exercised the XOR and they showed no regression. Also align the XMM save area to 16 bytes to save a few cycles.
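To illustrate the bug class (a minimal sketch, not the actual RAID-5 XOR code): a register that an asm statement modifies must be declared as a read-write output ("+r") or listed in the clobbers; declaring it as a plain input lets gcc assume it still holds its old value afterwards.

	/* x86 AT&T syntax; illustrative only. */
	static inline unsigned long twice_buggy(unsigned long x)
	{
		/* BUG: modifies %0, which is declared as a read-only
		 * input - gcc may reuse the register assuming the old
		 * value, exactly the RAID-5 XOR failure mode. */
		asm("add %0, %0" : : "r" (x));
		return x;
	}

	static inline unsigned long twice_fixed(unsigned long x)
	{
		/* "+r" tells gcc the register is both read and written. */
		asm("add %0, %0" : "+r" (x));
		return x;
	}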
-
Jan Kara authored
This renames 'xqm.h' to a slightly better name (more consistent with the rest of the sources).
-
Zwane Mwaikambo authored
Patch to reorder the APIC configuration so that dependencies are determined beforehand for MCE. Keith Owens actually pointed this out a while back.
-
Adrian Bunk authored
It seems func.h needs to include linux/kdev_t.h:
-
Jens Axboe authored
For some odd reason, the blkdev.h changes did not get patched into your tree from the patch I sent?! Anyway, here's that change:
-
Stephen Rothwell authored
This patch fixes the following problems in the file leases code:
- When there are multiple shared leases on a file, all the lease holders now get notified when someone opens the file for writing (it used to be only the first).
- When a nonblocking open breaks a lease, it now times out as it should (it used to never time out).
This should (hopefully) make the leases code more usable.
-
Stephen Rothwell authored
This patch makes copy_siginfo_to_user explicitly copy the correct union member. Previously we were getting the correct result, but only by accident.
-
Stephen Rothwell authored
I needed these to make 2.5.22 build for me.
-
Stephen Rothwell authored
arch/ppc64/kernel/sys_ppc32.c has a getname32() function. The only difference between it and getname() is that it calls do_getname32() instead of do_getname() (see fs/namei.c). The difference between do_getname() and do_getname32() is that the former checks that the pointer it is passed is less than TASK_SIZE and restricts the length copied to the lesser of PATH_MAX and (TASK_SIZE - pointer); do_getname32() uses PAGE_SIZE instead of PATH_MAX. Anton Blanchard says it is OK to remove getname32(). arch/ia64/ia32/sys_ia32.c defined a getname32() too, but nothing used it. This patch removes both.
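For reference, the bounds logic being described looks roughly like this (a sketch close to, but not verbatim from, the 2.5 fs/namei.c do_getname(); the _sketch suffix is mine):

	static int do_getname_sketch(const char *filename, char *page)
	{
		int retval;
		unsigned long len = PATH_MAX;	/* do_getname32() used PAGE_SIZE here */

		if ((unsigned long) filename >= TASK_SIZE)
			return -EFAULT;		/* pointer must be a user address */
		if (TASK_SIZE - (unsigned long) filename < len)
			len = TASK_SIZE - (unsigned long) filename;

		retval = strncpy_from_user(page, filename, len);
		if (retval > 0) {
			if (retval < len)
				return 0;
			return -ENAMETOOLONG;	/* name filled the whole buffer */
		} else if (!retval)
			retval = -ENOENT;	/* empty name */
		return retval;
	}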
-
Linus Torvalds authored
into home.transmeta.com:/home/torvalds/v2.5/linux
-
Ingo Molnar authored
without making sure that the target CPU is allowed.
-
Ingo Molnar authored
-
Ingo Molnar authored
The current implementation does the following to 'give up' the CPU:

- it decreases its priority by 1 until it reaches the lowest level
- it queues the task to the end of the priority queue

This scheme works fine in most cases, but if sched_yield()-active tasks are mixed with CPU-using processes then it's quite likely that the CPU-using process is in the expired array. In that case the yield()-ing process only requeues itself in the active array - a true context switch to the expired process will only occur once the timeslice of the yield()-ing process has expired: in ~150 msecs. This leads to the yield()-ing and CPU-using processes using up roughly the same amount of CPU time, which is arguably deficient.

I've fixed this problem by extending sched_yield() the following way:

+ * There are three levels of how a yielding task will give up
+ * the current CPU:
+ *
+ * #1 - it decreases its priority by one. This priority loss is
+ *      temporary, it's recovered once the current timeslice
+ *      expires.
+ *
+ * #2 - once it has reached the lowest priority level,
+ *      it will give up timeslices one by one. (We do not
+ *      want to give them up all at once, it's gradual,
+ *      to protect the casual yield()er.)
+ *
+ * #3 - once all timeslices are gone we put the process into
+ *      the expired array.
+ *
+ * (special rule: RT tasks do not lose any priority, they just
+ *  roundrobin on their current priority level.)
+ */
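The three levels translate into code roughly as follows (a hedged sketch using the 2.5 O(1) scheduler's names - prio, time_slice, the active/expired arrays - not the actual patch):

	/* Illustrative only; locking and bookkeeping omitted. */
	static void yield_sketch(task_t *p, runqueue_t *rq)
	{
		dequeue_task(p, rq->active);

		if (rt_task(p)) {
			/* special rule: keep priority, just round-robin */
		} else if (p->prio < MAX_PRIO - 1) {
			p->prio++;		/* #1: temporary priority loss */
		} else if (p->time_slice > 1) {
			p->time_slice--;	/* #2: give up timeslices one by one */
		} else {
			/* #3: all timeslices gone - into the expired array */
			enqueue_task(p, rq->expired);
			return;
		}
		enqueue_task(p, rq->active);
	}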
-
Linus Torvalds authored
into home.transmeta.com:/home/torvalds/v2.5/linux
-
Ingo Molnar authored
-
Ingo Molnar authored
-
Paul Menage authored
This patch (against 2.5.22) removes the BKL from around the call to i_op->permission() in fs/namei.c, and pushes the BKL into those filesystems that have permission() methods that require it.
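The resulting shape is roughly the following (a sketch, assuming filesystems that still need the BKL now take it inside their own permission() method; the somefs_* names are hypothetical):

	/* The VFS caller no longer wraps the call in the BKL. */
	int permission(struct inode *inode, int mask)
	{
		if (inode->i_op && inode->i_op->permission)
			return inode->i_op->permission(inode, mask);
		return vfs_permission(inode, mask);	/* generic checks */
	}

	/* A filesystem that still needs the BKL takes it itself: */
	static int somefs_permission(struct inode *inode, int mask)
	{
		int ret;

		lock_kernel();
		ret = somefs_check_acl(inode, mask);	/* hypothetical helper */
		unlock_kernel();
		return ret;
	}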
-
Linus Torvalds authored
into home.transmeta.com:/home/torvalds/v2.5/linux
-
Kai Mäkisara authored
This contains the following changes to the SCSI tape driver:
- one buffer is used for each tape (no buffer pool)
- buffers are allocated when needed and freed when the device is closed
- common code from read and write moved to a function
- default maximum number of scatter/gather segments increased to 64
- tape status set to "no tape" after successful unload
-
Matthew Wilcox authored
Nobody's using it any more, kill:
-
Matthew Wilcox authored
This is actually part of the work I've been doing to remove BHs, but it stands by itself.
-
Andi Kleen authored
This patch streamlines poll and select by adding fast paths for the common case of a small number of descriptors. The main saving comes from not allocating two pages for the wait queue and table, but using stack allocation (up to 256 bytes) when only a few descriptors are needed. This makes it as fast again as 2.0, and even a bit faster, because the wait queue page allocation is avoided too (except when the drivers overflow it). select also skips a lot faster over big holes and avoids the separate pass of determining the maximum number of descriptors in the bitmap. A typical Linux system saves a considerable amount of unswappable memory with this patch, because it usually has 10+ daemons hanging around in poll or select, each with two pages allocated for data and wait queue. Some other cleanups.
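The fast-path pattern is roughly this (a hedged sketch; POLL_STACK_ALLOC and the structure are illustrative, not the exact fs/select.c code):

	#define POLL_STACK_ALLOC 256	/* illustrative stack budget */

	static int do_poll_sketch(struct pollfd *ufds, unsigned int nfds)
	{
		/* Few descriptors: work out of an on-stack buffer and
		 * skip the two page allocations entirely. */
		char stack_pps[POLL_STACK_ALLOC];
		void *table = stack_pps;
		size_t len = nfds * sizeof(struct pollfd);
		int err = 0;

		if (len > sizeof(stack_pps)) {
			table = kmalloc(len, GFP_KERNEL);	/* slow path */
			if (!table)
				return -ENOMEM;
		}

		/* ... copy descriptors in, build wait table, poll ... */

		if (table != stack_pps)
			kfree(table);
		return err;
	}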
-
Andi Kleen authored
x86-64 needs its own special declaration of jiffies_64. Prepare for this by moving the jiffies_64 definition from kernel/timer.c down into each architecture.
-
Andi Kleen authored
x86_64 core updates:
- Make it compile again (switch_to macros etc., add dummy suspend.h)
- Reenable strength reduce optimization
- Fix ramdisk (patch from Mikael Pettersson)
- Some merges from i386
- Reimplement lazy iobitmap allocation, based on bcrl's idea
- Fix IPC 32bit emulation to actually work and move it into its own file
- New fixed mtrr.c from DaveJ, ported from 2.4, and reenable it
- Move tlbstate into the PDA
- Add some changes that got lost during the last merge
- New memset that seems to actually work
- Align signal handler stack frames to 16 bytes
- Some more minor bugfixes
-
Andrew Morton authored
Heaven knows why, but that's what the Open Group say, and returning -EFAULT causes 2.5 to fail one of the Linux Test Project tests:

	[ENOMEM] The addresses in the range starting at addr and continuing for len bytes are outside the range allowed for the address space of a process or specify one or more pages that are not mapped.

2.4 has it right, but 2.5 doesn't.
-
Andrew Morton authored
Reduce the radix tree nodes from 128 slots to 64.

- The main reason for this is that on 64-bit/4k-page machines, the slab allocator has decided that radix tree nodes require an order-1 allocation. Shrinking the nodes to 64 slots pulls that back to an order-0 allocation.
- On x86 we get fifteen 64-slot nodes per page rather than seven 128-slot nodes, for a modest memory saving.
- Halving the node size will approximately halve the memory use in the worrisome really-large, really-sparse file case.

Of course, the downside is longer tree walks. Each level of the tree covers six bits of pagecache index rather than seven. As ever, I am guided by Anton's profiling on the 12- and 32-way PPC boxes. radix_tree_lookup() is currently down in the noise floor.

Now, there is one special case: one file which is really big, accessed in a random manner, and accessed very heavily: the blockdev mapping. We _are_ showing some locking cost in __find_get_block (used to be __get_hash_table) and in its call to find_get_page(). I have a bunch of patches which introduce a generic per-cpu buffer LRU and remove ext2's private bitmap buffer LRUs. I expect these patches to wipe the blockdev mapping lookup lock contention off the map, but I'm awaiting test results from Anton before deciding whether those patches are worth submitting.
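The arithmetic behind the first point, as an illustration (simplified, not the exact 2.5 lib/radix-tree.c):

	#define RADIX_TREE_MAP_SHIFT	6	/* was 7, i.e. 128 slots */
	#define RADIX_TREE_MAP_SIZE	(1UL << RADIX_TREE_MAP_SHIFT)

	struct radix_tree_node {
		unsigned int	count;
		void		*slots[RADIX_TREE_MAP_SIZE];
	};

	/*
	 * 64-bit, 4k pages: 64 slots * 8 bytes = 512 bytes of slots per
	 * node, which slab packs into order-0 pages; the 128-slot node
	 * was being given order-1 slabs.  On 32-bit x86 the node is
	 * roughly 260 bytes, hence fifteen nodes per 4k page.
	 */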
-
Andrew Morton authored
Renames the buffer_head lookup function `get_hash_table' to `find_get_block'. get_hash_table() is too generic a name. Plus it doesn't even use a hash any more.
-
Andrew Morton authored
One weakness which was introduced when the buffer LRU went away was that GFP_NOFS allocations became equivalent to GFP_NOIO, because all writeback goes via writepage/writepages, which requires entry into the filesystem. However, now that swapout no longer calls bmap(), we can honour GFP_NOFS's intent for swapcache pages. So if the allocation request specifies __GFP_IO and !__GFP_FS, we can wait on swapcache pages and we can perform swapcache writeout. This should strengthen the VM somewhat.
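In reclaim terms the distinction is roughly (a sketch; the predicate and its placement are illustrative, not the actual vmscan code):

	/* May we write this page back with this allocation's gfp_mask? */
	static int may_write_page_sketch(unsigned int gfp_mask, struct page *page)
	{
		if (gfp_mask & __GFP_FS)
			return 1;	/* may enter the filesystem: anything goes */
		if ((gfp_mask & __GFP_IO) && PageSwapCache(page))
			return 1;	/* swapcache writeout is pure block I/O */
		return 0;		/* GFP_NOIO: touch nothing */
	}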
-
Andrew Morton authored
The set_page_buffers() and clear_page_buffers() macros are each used in only one place. Fold them into their callers.
-
Andrew Morton authored
highmem.h includes bio.h, so just about every compilation unit in the kernel gets to process bio.h. The patch moves the BIO-related functions out of highmem.h and into bio-related headers. The nested include is removed and all files which need to include bio.h now do so.
-
Andrew Morton authored
alloc_buffer_head() does not need the additional argument - GFP_NOFS is always correct.
-
Andrew Morton authored
Clean up ext3's journal_try_to_free_buffers(). Now that the releasepage() a_op is non-blocking and need not perform I/O, this function becomes much simpler.
-
Andrew Morton authored
bio_copy is doing

	vfrom = kmap_atomic(bv->bv_page, KM_BIO_IRQ);
	vto = kmap_atomic(bbv->bv_page, KM_BIO_IRQ);

which, if I understand atomic kmaps, is incorrect. Both source and dest will get the same pte. The patch creates a separate atomic kmap member for the destination and source of this copy.
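With separate slots the copy becomes (a sketch; I am assuming the new kmap slots are named along the lines of KM_BIO_SRC_IRQ and KM_BIO_DST_IRQ):

	/* Distinct atomic kmap types: source and destination now get
	 * different ptes and can be mapped at the same time. */
	vfrom = kmap_atomic(bv->bv_page, KM_BIO_SRC_IRQ);
	vto = kmap_atomic(bbv->bv_page, KM_BIO_DST_IRQ);
	memcpy(vto + bbv->bv_offset, vfrom + bv->bv_offset, bv->bv_len);
	kunmap_atomic(vfrom, KM_BIO_SRC_IRQ);
	kunmap_atomic(vto, KM_BIO_DST_IRQ);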
-
Andrew Morton authored
Fix the loop driver for loop-on-blockdev setups. When presented with a multipage BIO, loop_make_request overindexes the first page and corrupts kernel memory. Fix it to walk the individual pages. BTW, I suspect the IV handling in loop may be incorrect for multipage BIOs. Should we not be recalculating the IV for each page in the BIOs, or incrementing the offset by the size of the preceding pages, or such?
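The fix amounts to iterating over every segment of the BIO instead of assuming one page; with the 2.5 iterator the walk looks roughly like this (a sketch, not the actual drivers/block/loop.c change):

	struct bio_vec *bvec;
	int i;

	/* A multipage bio must not be treated as if bi_io_vec[0]
	 * covered the whole transfer - walk each page. */
	bio_for_each_segment(bvec, bio, i) {
		/* transform bvec->bv_len bytes of bvec->bv_page,
		 * starting at bvec->bv_offset */
	}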
-
Andrew Morton authored
This patch changes the swap I/O handling. The objectives are:

- Remove swap special-casing
- Stop using buffer_heads -> direct-to-BIO
- Make S_ISREG swapfiles more robust.

I've spent quite some time with swap. The first patches converted swap to use block_read/write_full_page(). These were discarded because they were still using buffer_heads, and a reasonable amount of otherwise unnecessary infrastructure had to be added to the swap code just to make it look like a regular fs. So this code just has a custom direct-to-BIO path for swap, which seems to be the most comfortable approach.

A significant thing here is the introduction of "swap extents". A swap extent is a simple data structure which maps a range of swap pages onto a range of disk sectors. It is simply:

	struct swap_extent {
		struct list_head list;
		pgoff_t start_page;
		pgoff_t nr_pages;
		sector_t start_block;
	};

At swapon time (for an S_ISREG swapfile), each block in the file is bmapped() and the block numbers are parsed to generate the device's swap extent list. This extent list is quite compact - a 512 megabyte swapfile generates about 130 nodes in the list. That's about 4 kbytes of storage. The conversion from filesystem blocksize blocks into PAGE_SIZE blocks is performed at swapon time.

At swapon time (for an S_ISBLK swapfile), we install a single swap extent which describes the entire device.

The advantages of the swap extents are:

1: We never have to run bmap() (ie: read from disk) at swapout time. So S_ISREG swapfiles are now just as robust as S_ISBLK swapfiles.

2: All the differences between S_ISBLK swapfiles and S_ISREG swapfiles are handled at swapon time. During normal operation, we just don't care. Both types of swapfiles are handled the same way.

3: The extent lists always operate in PAGE_SIZE units. So the problems of going from fs blocksize to PAGE_SIZE are handled at swapon time and normal operating code doesn't need to care.

4: Because we don't have to fiddle with different blocksizes, we can go direct-to-BIO for swap_readpage() and swap_writepage(). This introduces the kernel-wide invariant "anonymous pages never have buffers attached", which cleans some things up nicely. All those block_flushpage() calls in the swap code simply go away.

5: The kernel no longer has to allocate both buffer_heads and BIOs to perform swapout. Just a BIO.

6: It permits us to perform swapcache writeout and throttling for GFP_NOFS allocations (a later patch).

(Well, there is one sort of anon page which can have buffers: the pages which are cast adrift in truncate_complete_page() because do_invalidatepage() failed. But these pages are never added to swapcache, and nobody except the VM LRU has to deal with them.)

The swapfile parser in setup_swap_extents() will attempt to extract the largest possible number of PAGE_SIZE-sized and PAGE_SIZE-aligned chunks of disk from the S_ISREG swapfile. Any stray blocks (due to file discontiguities) are simply discarded - we never swap to those. If an S_ISREG swapfile is found to have any unmapped blocks (file holes) then the swapon attempt will fail.

The extent list can be quite large (hundreds of nodes for a gigabyte S_ISREG swapfile). It needs to be consulted once for each page within swap_readpage() and swap_writepage(). Hence there is a risk that we could blow significant amounts of CPU walking that list. However I have implemented a "where we found the last block" cache, which is used as the starting point for the next search. Empirical testing indicates that this is wildly effective - the average length of the list walk in map_swap_page() is 0.3 iterations per page, with a 130-element list. It _could_ be that some workloads do start suffering long walks in that code, and perhaps a tree would be needed there. But I doubt that, and if this is happening then it means that we're seeking all over the disk for swap I/O, and the list walk is the least of our problems.

rw_swap_page_nolock() now takes a page*, not a kernel virtual address. It has been renamed to rw_swap_page_sync() and it takes care of locking and unlocking the page itself. Which is all a much better interface.

Support for type 0 swap has been removed. Current versions of mkswap(8) seem to never produce v0 swap unless you explicitly ask for it, so I doubt if this will affect anyone. If you _do_ have a type 0 swapfile, swapon will fail and the message

	version 0 swap is no longer supported. Use mkswap -v1 /dev/sdb3

is printed. We can remove that code for real later on. Really, all that swapfile header parsing should be pushed out to userspace.

This code always uses single-page BIOs for swapin and swapout. I have an additional patch which converts swap to use mpage_writepages(), so we swap out in 16-page BIOs. It works fine, but I don't intend to submit that. There just doesn't seem to be any significant advantage to it.

I can't see anything in sys_swapon()/sys_swapoff() which needs the lock_kernel() calls, so I deleted them.

If you ftruncate an S_ISREG swapfile to a shorter size while it is in use, subsequent swapout will destroy the filesystem. It was always thus, but it is much, much easier to do now. Not really a kernel problem, but swapon(8) should not be allowing the kernel to use swapfiles which are modifiable by unprivileged users.
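The "where we found the last block" cache can be sketched like this (the curr_swap_extent and extent_list field names are my assumptions; an illustration, not the actual mm/swapfile.c code):

	/*
	 * Map a swap page offset to a disk block, starting the walk at
	 * the extent which satisfied the previous lookup.
	 */
	sector_t map_swap_page_sketch(struct swap_info_struct *sis, pgoff_t offset)
	{
		struct swap_extent *se = sis->curr_swap_extent;	/* cached hit */

		for (;;) {
			struct list_head *lh;

			if (se->start_page <= offset &&
			    offset < se->start_page + se->nr_pages) {
				sis->curr_swap_extent = se;	/* remember for next time */
				return se->start_block + (offset - se->start_page);
			}
			lh = se->list.next;
			if (lh == &sis->extent_list)	/* skip the list head */
				lh = lh->next;
			se = list_entry(lh, struct swap_extent, list);
		}
	}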
-
Andrew Morton authored
Convert swap pages so that they are PageWriteback and !PageLocked while under writeout, like all other block-backed pages. (Network filesystems aren't doing this yet - their pages are still locked while under writeout)
-
Andrew Morton authored
buffer_insert_list() is showing up on Anton's graphs. It'll be via ext2's mark_buffer_dirty_inode() against indirect blocks. If the buffer is already on an inode queue, we know that it is on the correct inode's queue, so we don't need to re-add it.
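The short-circuit is roughly (a sketch; the b_assoc_buffers field and the lock name are assumptions, not quoted from the patch):

	void buffer_insert_list_sketch(struct buffer_head *bh, struct list_head *list)
	{
		/* Already on an inode queue?  Then it is the correct
		 * inode's queue - skip the lock and the re-add. */
		if (!list_empty(&bh->b_assoc_buffers))
			return;

		spin_lock(&bufferlist_lock);		/* hypothetical lock name */
		list_add(&bh->b_assoc_buffers, list);
		spin_unlock(&bufferlist_lock);
	}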
-