1. 02 Jul, 2003 8 commits
    • [PATCH] ramfs: use generic_file_llseek · c94f7f38
      Andrew Morton authored
      Teach ramfs to use generic_file_llseek: default_llseek takes lock_kernel().
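
      A minimal sketch of the kind of wiring the change above describes, assuming
      a ramfs-style file_operations table; only .llseek is the point here, and the
      other members are illustrative rather than taken from the patch:

          #include <linux/fs.h>

          static struct file_operations ramfs_file_operations = {
                  .read   = generic_file_read,
                  .write  = generic_file_write,
                  .mmap   = generic_file_mmap,
                  /* generic_file_llseek updates the file position without
                   * default_llseek()'s lock_kernel()/unlock_kernel() pair */
                  .llseek = generic_file_llseek,
          };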
    • [PATCH] NUMA memory reporting fix · d4388840
      Andrew Morton authored
      From: Dave Hansen <haveblue@us.ibm.com>
      
      The current NUMA meminfo code exports (via sysfs) pgdat->node_size as
      totalram.  This variable is consistently used elsewhere to mean "the number
      of physical pages that this particular node spans".  This is _not_ what we
      want to see from meminfo, which is: "how much actual memory does this node
      have?"
      
      The following patch removes pgdat->node_size, and replaces it with
      ->node_spanned_pages.  This is to avoid confusion with a new variable,
      node_present_pages, which is the _actual_ value that we want to export in
      meminfo.  Most of the patch is a simple s/node_size/node_spanned_pages/.
      The node_size() macro is also removed, and replaced with new ones for
      node_{spanned,present}_pages() to avoid confusion.
      
      We were bitten by this problem in this bug:
      	http://bugme.osdl.org/show_bug.cgi?id=818
      
      Compiled and tested on NUMA-Q.
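
      For reference, a hedged sketch of the two per-node accessors described
      above; the exact definitions in <linux/mmzone.h> may differ, and NODE_DATA()
      is assumed to return the node's pg_data_t:

          /* pages in the physical range the node spans, including holes */
          #define node_spanned_pages(nid)  (NODE_DATA(nid)->node_spanned_pages)

          /* pages actually present on the node -- what meminfo should report */
          #define node_present_pages(nid)  (NODE_DATA(nid)->node_present_pages)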
    • [PATCH] page unmapping debug · 98eb235b
      Andrew Morton authored
      From: Manfred Spraul <manfred@colorfullife.com>
      
      Manfred's latest page unmapping debug patch.
      
      The patch adds support for a special debug mode to both the page and the slab
      allocator: Unused pages are removed from the kernel linear mapping.  This
      means that now any access to freed memory will cause an immediate exception.
      Right now, read accesses remain totally unnoticed and write accesses may be
      caught by the slab poisoning, but usually far too late for a meaningful bug
      report.
      
      The implementation is based on a new arch-dependent function,
      kernel_map_pages(), that removes the pages from the linear mapping.  It is
      currently implemented only for i386.
      
      Changelog:
      
      - Add kernel_map_pages() for i386, based on change_page_attr().  If
        DEBUG_PAGEALLOC is not set, then the function is an empty stub.  The stub
        is in <linux/mm.h>, i.e. it exists for all archs (see the sketch after
        this entry).
      
      - Make change_page_attr() irq-safe.  Note that it's not fully irq-safe due
        to the lack of the TLB flush IPI, but it's good enough for
        kernel_map_pages().  Another problem is that kernel_map_pages() is not
        permitted to fail, so PSE is disabled if DEBUG_PAGEALLOC is enabled.
      
      - Use kernel_map_pages() for the page allocator.
      
      - Use kernel_map_pages() for the slab allocator.
      
        I couldn't resist and added additional debugging support into mm/slab.c:
      
        * at kfree time, the complete backtrace of the kfree caller is stored
          in the freed object.
      
        * a ptrinfo() function that dumps all known data about a kernel virtual
          address: the pte value and, if it belongs to a slab cache, the cache
          name and additional info.
      
        * merging of common code: new helper functions obj_dbglen() and
          obj_dbghdr() for converting between the user-visible object
          pointers/lengths and the actual, internal addresses and lengths.
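
      A hedged sketch of the stub arrangement described in the changelog above,
      assuming the declaration sits in <linux/mm.h>; the exact prototype in the
      patch may differ:

          #ifdef CONFIG_DEBUG_PAGEALLOC
          /* arch code (currently i386 only) really unmaps/remaps the pages in the
           * kernel linear mapping, built on change_page_attr() */
          extern void kernel_map_pages(struct page *page, int numpages, int enable);
          #else
          /* empty stub: with DEBUG_PAGEALLOC off this costs nothing, and callers
           * in the page and slab allocators compile on every architecture */
          static inline void kernel_map_pages(struct page *page, int numpages,
                                              int enable)
          {
          }
          #endif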
    • [PATCH] move_vma() make_pages_present() fix · 17003453
      Andrew Morton authored
      From: Hugh Dickins <hugh@veritas.com>
      
      mremap's move_vma VM_LOCKED case was still wrong.
      
      If do_munmap() unmaps a part of new_vma, then its vm_start and vm_end
      from before cannot both be the right addresses for the make_pages_present()
      range, and make_pages_present() may BUG() there.
      
      We need [new_addr, new_addr+new_len) to be locked down; but
      move_page_tables() already transferred the locked pages [new_addr,
      new_addr+old_len), and they're either held in a VM_LOCKED vma throughout,
      or temporarily in no vma: in neither case can they be swapped out, so
      there is no need to run over that range again.
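
      A hedged sketch of the resulting logic in move_vma(), using the names from
      the message above; the surrounding code and variables are assumed, not
      quoted from the patch:

          if (vm_flags & VM_LOCKED) {
                  current->mm->locked_vm += new_len >> PAGE_SHIFT;
                  /* move_page_tables() already carried the locked pages
                   * [new_addr, new_addr+old_len) across, so only the newly
                   * exposed tail of the VM_LOCKED mapping needs faulting in */
                  if (new_len > old_len)
                          make_pages_present(new_addr + old_len,
                                             new_addr + new_len);
          }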
    • [PATCH] Allow modular DM · eeb96479
      Dagfinn Ilmari Mannsåker authored
      With the recent fixes, io_schedule needs to be exported for modular dm
      to work.
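
      The export described above would be a one-line addition next to the
      function's definition; a sketch, assuming io_schedule() is defined in
      kernel/sched.c:

          /* let modules (such as a modular device-mapper) call io_schedule() */
          EXPORT_SYMBOL(io_schedule);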
    • Linux 2.5.74 · 495c3da1
      Linus Torvalds authored
    • [PATCH] dm: remove bogus yields · 8732dde8
      Joe Thornber authored
      Replace a couple of bogus yield() calls with schedule() and io_schedule()
      respectively.
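
      An illustrative sketch of the substitution described above; the loop, the
      md->pending counter and the wake-up on the I/O completion side are
      assumptions, not the actual dm code:

          /* before: busy-waiting, burning the CPU until pending I/O drains */
          while (atomic_read(&md->pending))
                  yield();

          /* after: sleep until the completion path wakes us; io_schedule()
           * behaves like schedule() but accounts the sleep as I/O wait */
          while (atomic_read(&md->pending)) {
                  set_current_state(TASK_UNINTERRUPTIBLE);
                  if (atomic_read(&md->pending))
                          io_schedule();
                  set_current_state(TASK_RUNNING);
          }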
    • [PATCH] dm: fix memory leak · 2ea58325
      Joe Thornber authored
  2. 01 Jul, 2003 31 commits
  3. 02 Jul, 2003 1 commit