- 11 Sep, 2002 2 commits
-
-
Christoph Hellwig authored
Merge http://linux.bkbits.net/linux-2.5
into dhcp212.munich.sgi.com:/home/hch/repo/bk/linux-2.5-xfs
-
Christoph Hellwig authored
-
- 10 Sep, 2002 38 commits
-
-
Linus Torvalds authored
-
Linus Torvalds authored
-
Matthew Wilcox authored
SERIAL_IO_GSC was a mistake and should never have been added.
-
Matthew Wilcox authored
When drivers/serial was split off, the following helptexts should have been deleted, but weren't.
-
Sam Ravnborg authored
The reason for the ftape mess-up of export-objs is the use of the strange FT_KSYM macro in ftape_syms.c. It exists solely for backwards compatibility with kernel 2.1.18 and older. Better to clean it up.
-
Matthew Wilcox authored
- Add FL_SLEEP flag to indicate we intend to sleep and therefore desire to be placed on the block list. Use it for POSIX & flock locks.
- Remove locks_block_on.
- Change posix_unblock_lock to eliminate a race that will appear once we don't use the BKL any more.
- Update the comment for locks_same_owner() and rename it to posix_same_owner().
- Change locks_mandatory_area() to allocate its lock on the stack and call posix_lock_file() instead of repeating that logic (see the sketch below).
- Rename the "caller" parameter to posix_lock_file() to "request" to better show that this is not to be inserted directly.
- Redo some of the proc code a little. Stop exposing kernel addresses to userspace (whoever thought _that_ was a good idea?!) and show how we should be printing the device name. The last part is ifdeffed out to avoid breaking lslk.
- Remove FL_BROKEN. And there was much rejoicing.
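For illustration, a minimal sketch of the stack-allocated request pattern, assuming the two-argument posix_lock_file() of this era; lock_region() is a hypothetical helper, not part of the patch:

#include <linux/fs.h>
#include <linux/sched.h>

/* Hypothetical helper: write-lock [offset, offset+count) by filling a
 * struct file_lock on the stack and submitting it as a request. */
static int lock_region(struct file *filp, loff_t offset, loff_t count)
{
        struct file_lock fl = {
                .fl_owner = current->files,
                .fl_pid   = current->pid,
                .fl_file  = filp,
                .fl_flags = FL_POSIX | FL_SLEEP, /* FL_SLEEP: we intend to wait */
                .fl_type  = F_WRLCK,
                .fl_start = offset,
                .fl_end   = offset + count - 1,
        };

        /* posix_lock_file() copies the request; "fl" itself is never
         * inserted into the lock lists. */
        return posix_lock_file(filp, &fl);
}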
-
Art Haas authored
Here are some patches for C99 initializers in fs/nfs. Patches are against 2.5.32.
-
Celso González authored
The function save_flags must use unsigned long instead of long (signed). This trivial patch solves the problem.
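For context, the pattern these fixes address looks roughly like this, assuming the classic <asm/system.h> interface of the time; the flag variable must be unsigned long because it receives the raw contents of the flags register:

#include <asm/system.h> /* save_flags(), restore_flags(), cli() */

static void critical_section_example(void)
{
        unsigned long flags;    /* must be unsigned long, not (signed) long */

        save_flags(flags);      /* stash the flags register */
        cli();                  /* disable local interrupts */
        /* ... critical section ... */
        restore_flags(flags);   /* put the saved flags back */
}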
-
Celso González authored
The function save_flags must use an unsigned long parameter instead of a long (signed) one. This trivial patch solves the problem.
-
Celso González authored
The function save_flags must use an unsigned long parameter instead of a long (signed) one. This trivial patch solves the problem.
-
Brad Hards authored
<asm/io.h> has the normal idempotent construction on every architecture. The attached file removes the second #include.
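The "idempotent construction" is the usual include-guard idiom, roughly the shape below (the guard macro name is illustrative); it makes a second #include of the same header harmless, but the redundant line is still worth removing:

#ifndef _ASM_IO_H               /* skip the body if already included */
#define _ASM_IO_H

/* ... declarations ... */

#endif /* _ASM_IO_H */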
-
Brad Hards authored
<linux/init.h> has the normal idempotent construction. The attached file removes the second #include.
-
Celso González authored
The function save_flags must use unsigned long instead of long (signed). This trivial patch solves the problem.
-
Lucas Correia Villa Real authored
This is a trivial patch already applied in the -ac tree for the 2.4.19 kernel. The patch for lp.c avoids +/- operations with 0 and marks some debug messages explicitly as KERN_INFO or KERN_ERR.
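To illustrate the KERN_INFO/KERN_ERR part (the messages below are made up, not taken from the actual lp.c patch), the idea is to tag each printk with an explicit log level instead of relying on the default:

#include <linux/kernel.h>

static void lp_report(int minor, int timed_out)
{
        if (timed_out)
                printk(KERN_ERR "lp%d: timeout\n", minor);   /* an error */
        else
                printk(KERN_INFO "lp%d: ready\n", minor);    /* informational */
}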
-
James Mayer authored
-
Randy Dunlap authored
-
Peter Samuelson authored
drivers/char/Config.in still has a complete copy of agp/Config.in. It's an exact cut-n-paste - the md5sums even match. (:
-
Marcus Alanen authored
Bad error path: ret is already set to -ENODEV, so there is no need to set it again before jumping out.
-
Rusty Russell authored
The old form of designated initializers is obsolete: we need to replace it with the ISO C form before 2.6. Gcc has always supported both forms anyway.
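For reference, the two forms look like this (the struct and functions are simplified stand-ins, not real kernel declarations):

static int my_open(void)    { return 0; }
static int my_release(void) { return 0; }

struct ops {
        int (*open)(void);
        int (*release)(void);
};

/* Obsolete GCC-specific form: */
static struct ops old_style = {
        open:           my_open,
        release:        my_release,
};

/* Equivalent ISO C99 form: */
static struct ops new_style = {
        .open           = my_open,
        .release        = my_release,
};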
-
Bernhard Fischer authored
-
Rusty Russell authored
The old form of designated initializers is obsolete: we need to replace it with the ISO C form before 2.6. Gcc has always supported both forms anyway.
-
Celso González authored
The function save_flags must use unsigned long instead of long (signed). This trivial patch solves the problem.
-
Celso González authored
The function save_flags must use an unsigned long parameter instead of a long (signed) one. This trivial patch solves the problem.
-
Brad Hards authored
<linux/serial.h> has the normal idempotent construction. The attached file removes the second #include.
-
Marcus Alanen authored
-
Matt Domsch authored
Trivial patch changes my zip code. Applies to 2.4.x and 2.5.x trees.
-
Celso González authored
The function save_flags must use unsigned long instead of long (signed). This trivial patch solves the problem.
-
Skip Ford authored
-
Celso González authored
The function save_flags must use unsigned long instead of long (signed). This trivial patch solves the problem.
-
Celso González authored
The function save_flags must use unsigned long instead of long (signed). This trivial patch solves the problem.
-
James Mayer authored
-
Celso González authored
The function save_flags must use an unsigned long parameter instead of a long (signed) one. This trivial patch solves the problem.
-
Linus Torvalds authored
but also about being called whenever we're holding any other preemption locks.
-
Linus Torvalds authored
version in include/linux didn't get deleted.
-
Andrew Morton authored
Bill Irwin's patch to fix up pte's in highmem.

With CONFIG_HIGHPTE, the direct pte pointer in struct page becomes the 64-bit physical address of the single pte which is mapping this page. If the page is not PageDirect then page->pte.chain points at a list of pte_chains, which each now contain an array of 64-bit physical addresses of the pte's which are mapping the page.

The functions rmap_ptep_map() and rmap_ptep_unmap() are used for mapping and unmapping the page which backs the target pte (see the sketch below).

The patch touches all architectures (adding do-nothing compatibility macros and inlines). It generally mangles lots of header files and may break non-ia32 compiles. I've had it in testing since 2.5.31.
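A hedged sketch of what rmap_ptep_map() amounts to on ia32 with CONFIG_HIGHPTE: the page holding the pte is found from the pte's physical address and temporarily mapped with kmap_atomic(). This is not the exact mainline code, and the KM_PTE2 slot name is an assumption:

static inline pte_t *rmap_ptep_map(pte_addr_t pte_paddr)
{
        unsigned long pfn = (unsigned long)(pte_paddr >> PAGE_SHIFT);
        unsigned long off = (unsigned long)pte_paddr & ~PAGE_MASK;

        /* map the page backing the pte, then point at the pte within it */
        return (pte_t *)((char *)kmap_atomic(pfn_to_page(pfn), KM_PTE2) + off);
}

static inline void rmap_ptep_unmap(pte_t *pte)
{
        kunmap_atomic(pte, KM_PTE2);
}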
-
Andrew Morton authored
The pte_chains presently consist of a pte pointer and a `next' link, so there's a 50% memory wastage here as well as potential for a lot of misses during walks of the singly-linked per-page list.

This patch increases the pte_chain structure to occupy a full cacheline. There are 7, 15 or 31 pte pointers per structure rather than just one, so the wastage falls to a few percent and the number of misses during the walk is reduced (see the sketch below).

The patch doesn't make much difference in simple testing, because in those tests the pte_chain list from the previous page has good cache locality with the next page's list. The patch sped up Anton's "10,000 concurrently exiting shells" test by 3x or 4x. It gives a 10% reduction in system time for a kernel build on 16p NUMAQ. It saves memory and reduces the amount of work performed in the slab allocator.

Pages which are mapped by only a single process continue to not have a pte_chain: the pointer in struct page points directly at the mapping pte (a "PageDirect" pte pointer). Once the page is shared, a pte_chain is allocated and both the new and old pte pointers are moved into it.

We used to collapse the pte_chain back to a PageDirect representation in page_remove_rmap(). That has been changed: the collapse is now performed inside page reclaim, via page_referenced(). The thinking here is that if a page was previously shared then it may become shared again, so leave the pte_chain structure in place. But if the system is under memory pressure then start reaping them anyway.
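The structure is roughly the sketch below; with a one-word next link and 4-byte pte addresses, a 32-, 64- or 128-byte L1 cacheline leaves room for 7, 15 or 31 entries:

#include <linux/cache.h>        /* L1_CACHE_BYTES */

/* Sketch: size the pte array so the whole struct fills one cacheline. */
#define NRPTE ((L1_CACHE_BYTES - sizeof(void *)) / sizeof(pte_addr_t))

struct pte_chain {
        struct pte_chain *next;         /* singly-linked per-page list */
        pte_addr_t ptes[NRPTE];         /* addresses of mapping ptes */
};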
-
Andrew Morton authored
This patch addresses the excessive consumption of ZONE_NORMAL by buffer_heads on highmem machines. The algorithms which decide which buffers to shoot down are fairly dumb, but they only cut in on machines with large highmem:lowmem ratios and the code footprint is tiny.

The buffer.c change implements the buffer_head accounting - it sets the upper limit on buffer_head memory occupancy to 10% of ZONE_NORMAL (see the sketch below).

A possible side-effect of this change is that the kernel will perform more calls to get_block() to map pages to disk. This will only be observed when a file is being repeatedly overwritten - this is the only case in which the "cached get_block result" in the buffers is useful. I did quite some testing of this back in the delalloc ext2 days and was not able to come up with a test in which the cached get_block result was measurably useful. That's for ext2, which has a fast get_block().

A desirable side effect of this patch is that the kernel will be able to cache much more blockdev pagecache in ZONE_NORMAL, so there are more ext2/3 indirect blocks in cache, so with some workloads less I/O will be performed.

In mpage_writepage(): if the number of buffer_heads is excessive then buffers are stripped from pages as they are submitted for writeback. This change is only useful for filesystems which are using the mpage code - that's ext2, ext3-writeback and JFS. An mpage patch for reiserfs was floating about but seems to have got lost. There is no need to strip buffers for reads because the mpage code does not attach buffers for reads. These are perhaps not the most appropriate buffer_heads to toss away; perhaps something smarter should be done to detect file overwriting, or to toss the 'oldest' buffer_heads first.

In refill_inactive(): if the number of buffer_heads is excessive then strip buffers from pages as they move onto the inactive list. This change is useful for all filesystems. This approach is good because pages which are being repeatedly overwritten will remain on the active list and will retain their buffers, whereas pages which are not being overwritten will be stripped.
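Conceptually the accounting check is something like the sketch below; the names and the plain atomic counter are illustrative (real accounting may be batched), and only the 10%-of-ZONE_NORMAL limit comes from the patch:

#include <asm/atomic.h>

static int max_buffer_heads;    /* set at boot to ~10% of ZONE_NORMAL */
static atomic_t nr_buffer_heads = ATOMIC_INIT(0);

/* Tested by mpage_writepage() and refill_inactive(), as described above. */
static inline int buffer_heads_over_limit(void)
{
        return atomic_read(&nr_buffer_heads) > max_buffer_heads;
}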
-
Andrew Morton authored
Writeback parameter tuning. Somewhat experimental, but heading in the right direction, I hope.

- Allowing 40% of physical memory to be dirtied on massive ia32 boxes is unreasonable. It pins too many buffer_heads and contributes to page reclaim latency. The patch changes the initial value of /proc/sys/vm/dirty_background_ratio, dirty_async_ratio and (the presently non-functional) dirty_sync_ratio so that they are reduced when the highmem:lowmem ratio exceeds 4:1. These ratios are scaled so that as the highmem:lowmem ratio goes beyond 4:1, the maximum amount of allowed dirty memory ceases to increase. It is clamped at the amount of memory which a 4:1 machine is allowed to use (see the sketch after this list).

- Aggressive reduction in the dirty memory threshold at which background writeback cuts in. 2.4 uses 30% of ZONE_NORMAL. 2.5 uses 40% of total memory. This patch changes it to 10% of total memory (if total memory <= 4G; even less otherwise - see above). This means that:
  - Much more writeback is performed by pdflush.
  - When the application is generating dirty data at a moderate rate, background writeback cuts in much earlier, so memory is cleaned more promptly.
  - Reduces the risk of user applications getting stalled by writeback.
  - Will damage dbench numbers. It turns out that the damage is fairly small, and dbench isn't a worthwhile workload for optimisation.

- Moderate reduction in the dirty level at which the write(2) caller is forced to perform writeback (throttling). Was 40% of total memory; is now 30% of total memory (if total memory <= 4G, less otherwise). This is to reduce page reclaim latency, and generally because allowing processes to flood the machine with dirty data is a bad thing in mixed workloads.
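A sketch of the clamping arithmetic from the first item (function and variable names illustrative): once highmem exceeds four times lowmem, each ratio is scaled down so that the absolute dirty limit stays at what a 4:1 machine would get:

int dirty_background_ratio = 10;        /* background writeback cuts in */
int dirty_async_ratio = 30;             /* write(2) callers are throttled */

static void clamp_dirty_ratios(unsigned long total_pages,
                               unsigned long highmem_pages)
{
        unsigned long lowmem_pages = total_pages - highmem_pages;

        if (highmem_pages > 4 * lowmem_pages) {
                /* dirty allowance of a 4:1 box: lowmem + 4 * lowmem */
                unsigned long clamp = 5 * lowmem_pages;

                dirty_background_ratio = dirty_background_ratio * clamp / total_pages;
                dirty_async_ratio = dirty_async_ratio * clamp / total_pages;
        }
}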
-