- 30 Jul, 2002 11 commits
-
Patrick Mochel authored
of the directory itself
-
Patrick Mochel authored
to access the struct device, rather than via struct driver_file_entry::parent pointer.
-
Patrick Mochel authored
driverfs: Don't put the driver_file_entry in struct inode::u.generic_ip or struct file::private_data (since it's already in struct dentry::d_fsdata and we always get to that)
-
Patrick Mochel authored
for anything useful anymore.
-
Patrick Mochel authored
-
Patrick Mochel authored
as we don't use the lists anymore
-
Patrick Mochel authored
driverfs: Do hashed lookup of dentries when deleting a driverfs file (instead of searching the list we keep)
-
Patrick Mochel authored
into osdl.org:/home/mochel/src/kernel/devel/linux-2.5-driverfs
-
Linus Torvalds authored
Merge bk://ncpfs.bkbits.net/linux-2.5
into home.transmeta.com:/home/torvalds/v2.5/linux
-
Petr Vandrovec authored
-
Linus Torvalds authored
using a system call from kernel space. This avoids one level of hidden code, and makes what happens much more explicit (and speeds it up too, fwiw)
-
- 29 Jul, 2002 29 commits
-
Eric Sandeen authored
This "warning fix" bug report is actually an OOPS bugfix.
-
Matthew Wilcox authored
locks_unlock_delete is buggy in a couple of different ways (previously reported by Brian Dixon). Rather than fix it, this patch simply deletes it and uses the normal posix file locking mechanisms to remove all locks in locks_remove_posix instead.
-
Anton Blanchard authored
Make cpu_relax() on all architectures a gcc barrier to match x86.
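A minimal stand-alone sketch of the idea, for illustration only (the real per-architecture definitions differ, and x86 additionally issues rep; nop): an empty asm with a "memory" clobber is enough to stop GCC from hoisting a load out of a spin-wait loop.

```c
#include <stdio.h>

/* Hedged sketch of a generic cpu_relax() that is only a compiler barrier;
 * the empty asm with a "memory" clobber forces GCC to reload memory
 * operands on every iteration of a busy-wait loop. */
#define cpu_relax() __asm__ __volatile__("" : : : "memory")

static int flag;                /* would normally be set by another CPU */

int main(void)
{
        flag = 1;               /* pretend another CPU just set it */
        while (!flag)           /* without the barrier, the load could be */
                cpu_relax();    /* hoisted out of the loop entirely */
        printf("flag observed\n");
        return 0;
}
```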
-
Linus Torvalds authored
Merge http://linux-isdn.bkbits.net/linux-2.5.make
into home.transmeta.com:/home/torvalds/v2.5/linux
-
Kai Germaschewski authored
-
Sam Ravnborg authored
-
Sam Ravnborg authored
o Corrected dependencies for parportbook
o Introduced do_cmd, thus adhering to KBUILD_VERBOSE and make -s
-
Sam Ravnborg authored
do_cmd is a nice shorthand for writing one-line rules that adhere to KBUILD_VERBOSE and make -s. So far the only user is (will be) the docbook makefile.
-
Linus Torvalds authored
file locking LSM update
-
Linus Torvalds authored
-
Hugh Dickins authored
An acct flag was added to do_munmap, true everywhere but in mremap's move_vma: instead of updating the arch and driver sources, revert that change and temporarily mask VM_ACCOUNT around that one do_munmap. Also, noticed that do_mremap fails needlessly if both shrinking _and_ moving a mapping: update old_len to pass the vm area boundaries test.
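A hedged sketch of the "mask the flag around one call" idea described here; the struct and function names below are stand-ins for illustration, not the actual 2.5 mremap code.

```c
#include <stdio.h>

#define VM_ACCOUNT 0x00100000UL              /* illustrative flag bit */

struct vma_stub { unsigned long vm_flags; };

/* Stand-in for do_munmap(): in the real code this path would unaccount
 * the range if VM_ACCOUNT were still set on the vma. */
static void do_munmap_stub(struct vma_stub *vma)
{
        printf("unmapping, VM_ACCOUNT %s\n",
               (vma->vm_flags & VM_ACCOUNT) ? "set" : "masked");
}

int main(void)
{
        struct vma_stub vma = { .vm_flags = VM_ACCOUNT };
        unsigned long accounted = vma.vm_flags & VM_ACCOUNT;

        vma.vm_flags &= ~VM_ACCOUNT;    /* keep the charge with the moved copy */
        do_munmap_stub(&vma);
        vma.vm_flags |= accounted;      /* restore for whatever remains */
        return 0;
}
```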
-
Hugh Dickins authored
If we support mmap MAP_NORESERVE, we should support it on shared anonymous objects: too bad that needs a few changes. do_mmap_pgoff passes VM_ACCOUNT (or not) down to shmem_file_setup; the flag is stored in the shmem info for use by shmem_delete_inode later. Also removed a harmless but pointless call to shmem_truncate.
-
Hugh Dickins authored
Update Doc and remove the FIXME comment from fork.c, now that accounting is right.
-
Hugh Dickins authored
do_mmap_pgoff's (file == NULL) check was incorrect: it caused shared MAP_ANONYMOUS objects to be counted twice (again in shmem_file_setup), and again on fork(); whereas the equivalent shared /dev/zero objects were correctly counted. Conversely, a private readonly file mapping was (correctly) not counted, but still not counted when mprotected to writable: mprotect_fixup had pointless "charged = 0" changes; now it does vm_enough_memory checking when a private mapping is first made writable (but later we may want to refine behaviour on a noreserve mapping). Also changed the (correct) (flags & MAP_SHARED) test in do_mmap_pgoff to the equivalent (vm_flags & VM_SHARED) test, because do_mmap_pgoff is dealing with vm_flags rather than the input flags by that stage.
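A hedged user-space model of the "charge when a private mapping first becomes writable" behaviour described here; vm_enough_memory() is stubbed and the flag values and condition are simplified illustrations, not lifted from mprotect_fixup.

```c
#include <stdio.h>

#define VM_WRITE   0x1UL                 /* illustrative flag bits */
#define VM_SHARED  0x2UL

/* Stub: the real vm_enough_memory() consults the overcommit policy. */
static int vm_enough_memory(long pages)
{
        return pages <= 1024;
}

static int change_protection(unsigned long oldflags, unsigned long newflags,
                             long len_pages)
{
        /* Charge only on the first transition of a private mapping from
         * read-only to writable; shared mappings are accounted elsewhere. */
        if (!(oldflags & VM_WRITE) && (newflags & VM_WRITE) &&
            !(newflags & VM_SHARED)) {
                if (!vm_enough_memory(len_pages))
                        return -1;       /* -ENOMEM in the kernel */
        }
        /* ... apply the new protection bits ... */
        return 0;
}

int main(void)
{
        printf("ro->rw private: %d\n", change_protection(0, VM_WRITE, 4));
        printf("rw->rw private: %d\n", change_protection(VM_WRITE, VM_WRITE, 4));
        return 0;
}
```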
-
Hugh Dickins authored
Remove vm_unacct_vma function: it's only used in one place, which can do it better by using vm_unacct_memory directly.
-
Hugh Dickins authored
do_mmap_pgoff clears MAP_NORESERVE from vm_flags when VM accounts strictly: but it's not in vm_flags, it's in flags (and tested there).
-
Hugh Dickins authored
There is no point in do_mremap clearing MAP_NORESERVE from its flags: it has already validated that only the MREMAP_ flags can be set, and it has no use for MAP_NORESERVE in the code that follows anyway.
-
Hugh Dickins authored
shmem_notify_change and shmem_file_write must be careful about an overflowingly large loff_t before shifting it into unsigned long for vm_enough_memory. Rename SHMEM_MAX_BLOCKS to SHMEM_MAX_INDEX (to avoid confusion with 512-byte blocks), and define SHMEM_MAX_BYTES from it. But 2.5 vmtruncate lacked the s_maxbytes error handling which shmem_notify_change now expects: bring it in from the -dj tree. shmem_file_write error handling needs a closer look later on.
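A hedged sketch of the ordering this asks for: the limits below are illustrative values, not the real SHMEM_MAX_INDEX, but they show why the 64-bit size must be range-checked before it is shifted into an unsigned long page count.

```c
#include <stdio.h>

#define PAGE_CACHE_SHIFT 12
/* Illustrative value only; the real SHMEM_MAX_INDEX depends on how many
 * pages the shmem inode's index blocks can address. */
#define SHMEM_MAX_INDEX  (1UL << 20)
#define SHMEM_MAX_BYTES  ((long long)SHMEM_MAX_INDEX << PAGE_CACHE_SHIFT)

/* Reject an over-large 64-bit size *before* it is shifted/truncated
 * into an unsigned long page index. */
static int shmem_size_ok(long long newsize)
{
        if (newsize > SHMEM_MAX_BYTES)
                return 0;       /* the caller would return an error here */
        return 1;
}

int main(void)
{
        long long huge = 1LL << 43;     /* 8TB: truncates if shifted carelessly */

        printf("max bytes %lld, 8TB request allowed? %d\n",
               SHMEM_MAX_BYTES, shmem_size_ok(huge));
        return 0;
}
```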
-
Hugh Dickins authored
Repeated overnight kernel builds in tmpfs showed insane Committed_AS by morning. The main bug was that shmem_file_write was passing (newsize-oldsize)>>PAGE_SHIFT to vm_enough_memory, but it has to be ((newsize>>PAGE_SHIFT)-(oldsize>>PAGE_SHIFT)) - imagine 1k writes. But actually, if we're going to do strict accounting, then we should round up to next page not down - use VM_ACCT macro throughout (needs unusual mix of PAGE_CACHE_SIZE with PAGE_SHIFT); and must count one page for a long symlink.
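The arithmetic is easy to get wrong, so here is a hedged stand-alone demonstration (the VM_ACCT() definition below follows the description above, not the actual shmem.c source): five 1k appends that the old expression never charges for.

```c
#include <stdio.h>

#define PAGE_SHIFT      12
#define PAGE_CACHE_SIZE (1UL << PAGE_SHIFT)
/* Round a byte count up to whole pages, as the VM_ACCT() macro described
 * above is said to do. */
#define VM_ACCT(size)   (((size) + PAGE_CACHE_SIZE - 1) >> PAGE_SHIFT)

int main(void)
{
        unsigned long oldsize = 0, newsize;
        unsigned long buggy = 0, down = 0, acct = 0;

        /* Simulate five 1k appends to an initially empty tmpfs file. */
        for (newsize = 1024; newsize <= 5120; newsize += 1024) {
                buggy += (newsize - oldsize) >> PAGE_SHIFT;
                down  += (newsize >> PAGE_SHIFT) - (oldsize >> PAGE_SHIFT);
                acct  += VM_ACCT(newsize) - VM_ACCT(oldsize);
                oldsize = newsize;
        }
        /* 5120 bytes really occupy 2 pages: only the round-up form charges 2. */
        printf("buggy %lu, per-size round-down %lu, VM_ACCT %lu\n",
               buggy, down, acct);
        return 0;
}
```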
-
Christoph Hellwig authored
Currently there is no way to find out the effective object size of a slab cache. XFS has lots of IRIX-derived code that wants to do zalloc() style allocations on zones (which are implemented as slab caches in XFS/Linux) and thus needs to know about it. There are three ways to implement it:
a) implement kmem_cache_zalloc
b) make the xfs zone a struct of kmem_cache_t and a size variable
c) implement kmem_cache_size
The current XFS tree does a), but I absolutely don't like it, as it encourages people to use kmem_cache_zalloc for new code instead of thinking about how to utilize slab object reuse. b) would be easy, but I guess kmem_cache_size is useful enough to get into the kernel. Here's the patch:
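A hedged user-space model of option c): the struct layout, the cache name and the objsize value below are made up for illustration; the point is simply that exposing the effective object size lets callers do zalloc()-style zeroing without a dedicated kmem_cache_zalloc.

```c
#include <stdlib.h>
#include <string.h>
#include <stdio.h>

/* Toy model of a slab cache that records the effective (possibly padded)
 * object size; not the kernel's slab implementation. */
struct kmem_cache {
        size_t objsize;
        const char *name;
};

static size_t kmem_cache_size(struct kmem_cache *cachep)
{
        return cachep->objsize;
}

int main(void)
{
        struct kmem_cache xfs_zone = { .objsize = 96, .name = "xfs_zone" };
        void *obj = malloc(kmem_cache_size(&xfs_zone));

        if (!obj)
                return 1;
        /* zalloc-style use: zero the object based on the reported size. */
        memset(obj, 0, kmem_cache_size(&xfs_zone));
        printf("%s objects are %zu bytes\n", xfs_zone.name,
               kmem_cache_size(&xfs_zone));
        free(obj);
        return 0;
}
```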
-
Linus Torvalds authored
actual implementation and avoid confusion.
-
Dave Hansen authored
I just duplicated the method used in drivers/net/tulip/de2104x.c
-
Kai Germaschewski authored
pointed out by Sam Ravnborg
-
Kai Germaschewski authored
into tp1.ruhr-uni-bochum.de:/home/kai/kernel/v2.5/linux-2.5.make-fix
-
Kai Germaschewski authored
into tp1.ruhr-uni-bochum.de:/home/kai/kernel/v2.5/linux-2.5.make-fix
-
Linus Torvalds authored
-
David Howells authored
This should do the trick.
-
Jens Axboe authored
-
Paul Mackerras authored
I found a situation where page->index for a pagetable page can be set to 0 instead of the correct value. This means that ptep_to_address will return the wrong answer. The problem occurs when remap_pmd_range calls pte_alloc_map and pte_alloc_map needs to allocate a new pte page, because remap_pmd_range has masked off the top bits of the address (to avoid overflow in the computation of `end'), and it passes the masked address to pte_alloc_map.

Now we presumably don't need to get from the physical pages mapped by remap_page_range back to the ptes mapping them. But we could easily map some normal pages using ptes in that pagetable page subsequently, and when we call ptep_to_address on their ptes it will give the wrong answer. The patch below fixes the problem.

There is a more general question this brings up - some of the procedures which iterate over ranges of ptes will do the wrong thing if the end of the address range is too close to ~0UL, while others are OK. Is this a problem in practice? On i386, ppc, and the 64-bit architectures it isn't, since user addresses can't go anywhere near ~0UL, but what about arm or m68k for instance?

And BTW, being able to go from a pte pointer to the mm and virtual address that that pte maps is an extremely useful thing on ppc, since it will enable me to do MMU hash-table management at set_pte (and ptep_*) time and thus avoid the extra traversal of the pagetables that I am currently doing in flush_tlb_*. So if you do decide to back out rmap, please leave in the hooks for setting page->mapping and page->index on pagetable pages.
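A hedged arithmetic model of the failure mode described above (a two-level 4k layout is assumed and the helper name is a stand-in, not kernel code): once the upper bits of the address have been stripped, the base address recorded for the pagetable page degenerates to 0.

```c
#include <stdio.h>

#define PMD_SHIFT   22                   /* illustrative two-level 4k layout */
#define PMD_SIZE    (1UL << PMD_SHIFT)
#define PMD_MASK    (~(PMD_SIZE - 1))
#define PGDIR_MASK  PMD_MASK             /* two levels: pgd == pmd here */

/* Model: a pagetable page remembers, via page->index, the base virtual
 * address of the range it maps, so ptep_to_address() can later recover a
 * full virtual address from a pte pointer. */
static unsigned long record_index(unsigned long address)
{
        return address & PMD_MASK;
}

int main(void)
{
        unsigned long address = 0x40123000UL;
        unsigned long masked  = address & ~PGDIR_MASK;  /* offset-only address
                                                           passed before the fix */

        printf("correct base 0x%lx, base from masked address 0x%lx\n",
               record_index(address), record_index(masked));
        return 0;
}
```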
-