  1. 04 Feb, 2008 2 commits
  2. 31 Jan, 2008 1 commit
  3. 30 Jan, 2008 5 commits
  4. 17 Oct, 2007 3 commits
  5. 11 Oct, 2007 2 commits
  6. 22 Jul, 2007 1 commit
  7. 12 May, 2007 1 commit
  8. 02 May, 2007 2 commits
    •
      [PATCH] i386: PARAVIRT: Allow paravirt backend to choose kernel PMD sharing · 5311ab62
      Jeremy Fitzhardinge authored
      Normally when running in PAE mode, the 4th PMD maps the kernel address space,
      which can be shared among all processes (since they all need the same kernel
      mappings).
      
      Xen, however, does not allow guests to have the kernel pmd shared between page
      tables, so parameterize pgtable.c to allow both modes of operation.
      
      There are several side-effects of this.  One is that vmalloc will update the
      kernel address space mappings, and those updates need to be propagated into
      all processes if the kernel mappings are not intrinsically shared.  In the
      non-PAE case, this is done by maintaining a pgd_list of all processes; this
      list is used when all process pagetables must be updated.  pgd_list is
      threaded via otherwise unused entries in the page structure for the pgd, which
      means that the pgd must be page-sized for this to work.
      
      Normally the PAE pgd is only 32 bytes (four 64-bit entries), but Xen
      requires the PAE pgd to be page aligned anyway, so this patch forces the
      pgd to be page aligned and page sized when the kernel pmd is unshared, to
      accommodate both these requirements.
      
      Also, since there may be several distinct kernel pmds (if the user/kernel
      split is below 3G), there's no point in allocating them from a slab cache;
      they're just allocated with get_free_page and initialized appropriately.  (Of
      course they could be cached if there is just a single kernel pmd - which is the
      default with a 3G user/kernel split - but it doesn't seem worthwhile to add
      yet another case into this code).
      
      [ Many thanks to wli for review comments. ]
      Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
      Signed-off-by: William Lee Irwin III <wli@holomorphy.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Cc: Zachary Amsden <zach@vmware.com>
      Cc: Christoph Lameter <clameter@sgi.com>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
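      A minimal sketch of the allocation policy described above (illustrative
      only: pgd_cache and the SHARED_KERNEL_PMD flag stand in for the real
      pgtable.c machinery):

      	/* Pick the pgd allocation strategy based on whether the kernel
      	 * pmd may be shared.  A shared kernel pmd allows a small slab
      	 * object (4 x 8-byte entries); an unshared one forces the
      	 * page-aligned, page-sized pgd described above. */
      	static pgd_t *pgd_alloc_sketch(void)
      	{
      		if (SHARED_KERNEL_PMD)
      			return kmem_cache_alloc(pgd_cache, GFP_KERNEL);
      		/* unshared: Xen wants page alignment, and pgd_list
      		 * threading via struct page wants a page-sized pgd */
      		return (pgd_t *)__get_free_page(GFP_KERNEL | __GFP_ZERO);
      	}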
    •
      [PATCH] i386: Relocate VDSO ELF headers to match mapped location with COMPAT_VDSO · d4f7a2c1
      Jeremy Fitzhardinge authored
      Some versions of libc can't deal with a VDSO which doesn't have its
      ELF headers matching its mapped address.  COMPAT_VDSO maps the VDSO at
      a specific system-wide fixed address.  Previously this was all done at
      build time, on the grounds that the fixed VDSO address is always at
      the top of the address space.  However, a hypervisor may reserve some
      of that address space, pushing the fixmap address down.
      
      This patch does the adjustment dynamically at runtime, depending on
      the runtime location of the VDSO fixmap.
      
      [ Patch has been through several hands: Jan Beulich wrote the original
        version; Zach reworked it, and Jeremy converted it to relocate phdrs
        as well as sections. ]
      Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Cc: Zachary Amsden <zach@vmware.com>
      Cc: "Jan Beulich" <JBeulich@novell.com>
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      Cc: Andi Kleen <ak@suse.de>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Roland McGrath <roland@redhat.com>
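      A sketch of the runtime adjustment (illustrative; the offset would be
      the gap between the link-time and runtime fixmap addresses, and the
      function name is hypothetical):

      	#include <linux/elf.h>

      	/* Shift every recorded load address in the vDSO image by the
      	 * runtime offset so its ELF headers match where it is mapped. */
      	static void relocate_vdso_sketch(void *image, unsigned long offset)
      	{
      		Elf32_Ehdr *ehdr = image;
      		Elf32_Phdr *phdr = image + ehdr->e_phoff;
      		Elf32_Shdr *shdr = image + ehdr->e_shoff;
      		int i;

      		ehdr->e_entry += offset;
      		for (i = 0; i < ehdr->e_phnum; i++)
      			phdr[i].p_vaddr += offset;
      		for (i = 0; i < ehdr->e_shnum; i++)
      			if (shdr[i].sh_flags & SHF_ALLOC)
      				shdr[i].sh_addr += offset;
      	}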
  9. 13 Feb, 2007 2 commits
    •
      [PATCH] i386: VMI backend for paravirt-ops · 7ce0bcfd
      Zachary Amsden authored
      Fairly straightforward implementation of VMI backend for paravirt-ops.
      
      [Adrian Bunk: some cleanups]
      Signed-off-by: Zachary Amsden <zach@vmware.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Cc: Andi Kleen <ak@suse.de>
      Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Chris Wright <chrisw@sous-sol.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
    •
      [PATCH] MM: page allocation hooks for VMI backend · c119ecce
      Zachary Amsden authored
      The VMI backend uses explicit page type notification to track shadow page
      tables.  The allocation of page table roots is especially tricky.  We need to
      clone the root for non-PAE mode while it is protected under the pgd lock to
      correctly copy the shadow.
      
      We don't need to allocate pgds in PAE mode (PDPs in Intel terminology), as
      they only have 4 entries, and are cached entirely by the processor, which
      makes shadowing them rather simple.
      
      For base page table level allocation, pmd_populate provides the exact hook
      point we need.  Also, we need to allocate pages when splitting a large page,
      and we must release pages before returning the page to any free pool.
      
      Although these hooks have slightly odd semantics required by VMI, Xen
      also uses them to determine the exact moment when page tables are
      created or released.
      
      AK: All nops for other architectures
      Signed-off-by: Zachary Amsden <zach@vmware.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Cc: Andi Kleen <ak@suse.de>
      Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Chris Wright <chrisw@sous-sol.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
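      The shape of the pmd_populate hook point, roughly as i386 ended up
      (treat the exact names as approximate): the backend learns the pfn of a
      page just before it is wired up as a page table.

      	/* Notify the paravirt backend that this page becomes a pte page,
      	 * then install it in the pmd as usual. */
      	#define pmd_populate(mm, pmd, pte)				\
      	do {								\
      		paravirt_alloc_pt(page_to_pfn(pte));			\
      		set_pmd(pmd, __pmd(_PAGE_TABLE +			\
      			((unsigned long long)page_to_pfn(pte) <<	\
      				PAGE_SHIFT)));				\
      	} while (0)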
  10. 07 Dec, 2006 2 commits
    •
      [PATCH] slab: remove kmem_cache_t · e18b890b
      Christoph Lameter authored
      Replace all uses of kmem_cache_t with struct kmem_cache.
      
      The patch was generated using the following script:
      
      	#!/bin/sh
      	#
      	# Replace one string by another in all the kernel sources.
      	#
      
      	set -e
      
      	for file in `find * -name "*.c" -o -name "*.h"|xargs grep -l $1`; do
      		quilt add $file
      		sed -e "1,\$s/$1/$2/g" $file >/tmp/$$
      		mv /tmp/$$ $file
      		quilt refresh
      	done
      
      The script was run like this
      
      	sh replace kmem_cache_t "struct kmem_cache"
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
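      At each use site the substitution amounts to nothing more than:

      	/* before */
      	kmem_cache_t *cachep;
      	/* after */
      	struct kmem_cache *cachep;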
    •
      [PATCH] i386: clear_fixmap() should not use set_pte() · b0bfece4
      Jan Beulich authored
      While not strictly required with the current code (as the upper half of
      page table entries generated by __set_fixmap() cannot be non-zero due
      to the second parameter of this function being 'unsigned long'), the
      use of set_pte() in __set_fixmap() in the context of clear_fixmap() is
      still improper with CONFIG_X86_PAE (see the respective comment in
      include/asm-i386/pgtable-3level.h) and would turn into a bug if that
      second parameter ever gets changed to a 64-bit type.
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
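      The underlying PAE hazard is the two-word store: a 64-bit pte cannot be
      written atomically on a 32-bit CPU, so a possibly-present entry must be
      torn down low-half first.  A sketch of the safe ordering (illustrative,
      not the pgtable-3level.h code):

      	/* Replace a possibly-present PAE pte without ever exposing a
      	 * half-written but present entry to the hardware walker. */
      	static inline void set_pte_safe_sketch(pte_t *ptep, pte_t pte)
      	{
      		ptep->pte_low = 0;		/* clear the present bit first */
      		smp_wmb();
      		ptep->pte_high = pte.pte_high;	/* then the high word */
      		smp_wmb();
      		ptep->pte_low = pte.pte_low;	/* publish the complete entry */
      	}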
  11. 26 Sep, 2006 2 commits
  12. 30 Jun, 2006 6 commits
  13. 27 Mar, 2006 1 commit
  14. 30 Oct, 2005 2 commits
    •
      [PATCH] memory hotplug locking: node_size_lock · 208d54e5
      Dave Hansen authored
      pgdat->node_size_lock is basically only needed in one place in the normal
      code: show_mem(), which is the arch-specific sysrq-m printing function.
      
      Strictly speaking, the architectures not doing memory hotplug do not need this
      locking in show_mem().  However, they are all included for completeness.  This
      should also make any future consolidation of all of the implementations a
      little more straightforward.
      
      This lock is also held in the sparsemem code during a memory removal, as
      sections are invalidated.  This is the place where pfn_valid() is made false
      for a memory area that's being removed.  The lock is only required when doing
      pfn_valid() operations on memory for which the caller does not already hold a
      reference on the page, such as in show_mem().
      Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
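      The usage pattern this describes, with the lock held around pfn_valid()
      when no page reference is held (names follow the pgdat resize-lock
      helpers; treat as illustrative):

      	/* Hold the node's size lock so a concurrent memory removal cannot
      	 * invalidate the section between pfn_valid() and the access. */
      	static int pfn_probe_sketch(struct pglist_data *pgdat, unsigned long pfn)
      	{
      		unsigned long flags;
      		int valid;

      		pgdat_resize_lock(pgdat, &flags);
      		valid = pfn_valid(pfn);
      		pgdat_resize_unlock(pgdat, &flags);
      		return valid;
      	}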
    •
      [PATCH] mm: split page table lock · 4c21e2f2
      Hugh Dickins authored
      Christoph Lameter demonstrated very poor scalability on the SGI 512-way, with
      a many-threaded application which concurrently initializes different parts of
      a large anonymous area.
      
      This patch corrects that, by using a separate spinlock per page table page, to
      guard the page table entries in that page, instead of using the mm's single
      page_table_lock.  (But even then, page_table_lock is still used to guard page
      table allocation, and anon_vma allocation.)
      
      In this implementation, the spinlock is tucked inside the struct page of the
      page table page: with a BUILD_BUG_ON in case it overflows - which it would in
      the case of 32-bit PA-RISC with spinlock debugging enabled.
      
      Splitting the lock is not quite for free: another cacheline access.  Ideally,
      I suppose we would use split ptlock only for multi-threaded processes on
      multi-cpu machines; but deciding that dynamically would have its own costs.
      So for now enable it by config, at some number of cpus - since the Kconfig
      language doesn't support inequalities, let preprocessor compare that with
      NR_CPUS.  But I don't think it's worth being user-configurable: for good
      testing of both split and unsplit configs, split now at 4 cpus, and perhaps
      change that to 8 later.
      
      There is a benefit even for singly threaded processes: kswapd can be attacking
      one part of the mm while another part is busy faulting.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
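      The compile-time selection reduces to a preprocessor comparison against
      NR_CPUS, with the lock taken either from the page table page's struct
      page or from the mm (a sketch; the real accessor lives in linux/mm.h):

      	#if NR_CPUS >= CONFIG_SPLIT_PTLOCK_CPUS
      	/* one spinlock per page table page, tucked into struct page */
      	#define pte_lockptr(mm, pmd)	({ (void)(mm); &pmd_page(*(pmd))->ptl; })
      	#else
      	/* few cpus: keep the single per-mm page_table_lock */
      	#define pte_lockptr(mm, pmd)	({ (void)(pmd); &(mm)->page_table_lock; })
      	#endif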
  15. 05 Sep, 2005 1 commit
    •
      [PATCH] i386: encapsulate copying of pgd entries · d7271b14
      Zachary Amsden authored
      Add a clone operation for pgd updates.
      
      This helps complete the encapsulation of updates to page tables (or pages
      about to become page tables) into accessor functions rather than using
      memcpy() to duplicate them.  This is both generally good for consistency
      and also necessary for running in a hypervisor which requires explicit
      updates to page table entries.
      
      The new function is:
      
      clone_pgd_range(pgd_t *dst, pgd_t *src, int count);
      
         dst - pointer to pgd range anywhere on a pgd page
         src - ""
         count - the number of pgds to copy.
      
         dst and src can be on the same page, but the range must not overlap
         and must not cross a page boundary.
      
      Note that I omitted using this call to copy pgd entries into the
      software suspend page root, since this is not technically a live paging
      structure, rather it is used on resume from suspend.  CC'ing Pavel in case
      he has any feedback on this.
      
      Thanks to Chris Wright for noticing that this could be more optimal in
      PAE compiles by eliminating the memset.
      Signed-off-by: Zachary Amsden <zach@vmware.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
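      The native implementation is essentially a typed memcpy, which is what
      makes it easy for a hypervisor backend to substitute an explicit-update
      version:

      	/* Copy count pgd entries; dst and src may sit on the same page,
      	 * but the ranges must not overlap or cross a page boundary. */
      	static inline void clone_pgd_range(pgd_t *dst, pgd_t *src, int count)
      	{
      		memcpy(dst, src, count * sizeof(pgd_t));
      	}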
  16. 25 Jun, 2005 1 commit
  17. 23 Jun, 2005 2 commits
    •
      [PATCH] add page_state info to show_mem · 6f4e1e50
      Martin J. Bligh authored
      This helps a lot when debugging out-of-memory problems - it is especially
      useful to see if all the memory is sucked into slab, etc.
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    •
      [PATCH] remove non-DISCONTIG use of pgdat->node_mem_map · 408fde81
      Dave Hansen authored
      This patch effectively eliminates direct use of pgdat->node_mem_map outside
      of the DISCONTIG code.  On a flat memory system these fields aren't
      currently used, nor are they on a sparsemem system.
      
      There was also a node_mem_map(nid) macro on many architectures.  Its use
      along with the use of ->node_mem_map itself was not consistent.  It has
      been removed in favor of two new, more explicit, arch-independent macros:
      
      	pgdat_page_nr(pgdat, pagenr)
      	nid_page_nr(nid, pagenr)
      
      I called them "pgdat" and "nid" because we overload the term "node" to mean
      "NUMA node", "DISCONTIG node" or "pg_data_t" in very confusing ways.  I
      believe the newer names are much clearer.
      
      These macros can be overridden in the sparsemem case with a theoretically
      slower operation using node_start_pfn and pfn_to_page(), instead.  We could
      make this the only behavior if people want, but I don't want to change too
      much at once.  One thing at a time.
      
      This patch removes more code than it adds.
      
      Compile tested on alpha, alpha discontig, arm, arm-discontig, i386, i386
      generic, NUMAQ, Summit, ppc64, ppc64 discontig, and x86_64.  Full list
      here: http://sr71.net/patches/2.6.12/2.6.12-rc1-mhp2/configs/
      
      Boot tested on NUMAQ, x86 SMP and ppc64 power4/5 LPARs.
      Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
      Signed-off-by: Martin J. Bligh <mbligh@aracnet.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
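      The two macros, as described above (the DISCONTIG form indexes
      node_mem_map directly; the sparsemem override would go via
      node_start_pfn and pfn_to_page() instead):

      	#define pgdat_page_nr(pgdat, pagenr)	((pgdat)->node_mem_map + (pagenr))
      	#define nid_page_nr(nid, pagenr)	pgdat_page_nr(NODE_DATA(nid), (pagenr))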
  18. 19 Apr, 2005 1 commit
    •
      [PATCH] freepgt: free_pgtables use vma list · e0da382c
      Hugh Dickins authored
      Recent woes with some arches needing their own pgd_addr_end macro; and 4-level
      clear_page_range regression since 2.6.10's clear_page_tables; and its
      long-standing well-known inefficiency in searching throughout the higher-level
      page tables for those few entries to clear and free: all can be blamed on
      ignoring the list of vmas when we free page tables.
      
      Replace exit_mmap's clear_page_range of the total user address space by
      free_pgtables operating on the mm's vma list; unmap_region use it in the same
      way, giving floor and ceiling beyond which it may not free tables.  This
      brings lmbench fork/exec/sh numbers back to 2.6.10 (unless preempt is enabled,
      in which case latency fixes spoil unmap_vmas throughput).
      
      Beware: the do_mmap_pgoff driver failure case must now use unmap_region
      instead of zap_page_range, since a page table might have been allocated, and
      can only be freed while it is touched by some vma.
      
      Move free_pgtables from mmap.c to memory.c, where its lower levels are adapted
      from the clear_page_range levels.  (Most of free_pgtables' old code was
      actually for a non-existent case, prev not properly set up, dating from before
      hch gave us split_vma.)  Pass mmu_gather** in the public interfaces, since we
      might want to add latency lockdrops later; but no attempt to do so yet, since
      going by vma should itself reduce latency.
      
      But what if is_hugepage_only_range?  Those ia64 and ppc64 cases need careful
      examination: put that off until a later patch of the series.
      
      What of x86_64's 32bit vdso page __map_syscall32 maps outside any vma?
      
      And the range to sparc64's flush_tlb_pgtables?  It's less clear to me now that
      we need to do more than is done here - every PMD_SIZE ever occupied will be
      flushed; do we really have to flush every PGDIR_SIZE ever partially occupied?
      A shame to complicate it unnecessarily.
      
      Special thanks to David Miller for time spent repairing my ceilings.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
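      A simplified sketch of the vma-driven walk (illustrative; the real
      memory.c version also coalesces adjacent vmas and special-cases
      hugepage ranges):

      	/* Free page tables only where the vma list says they can exist,
      	 * never reaching below floor or above ceiling. */
      	void free_pgtables_sketch(struct mmu_gather **tlb,
      				  struct vm_area_struct *vma,
      				  unsigned long floor, unsigned long ceiling)
      	{
      		while (vma) {
      			struct vm_area_struct *next = vma->vm_next;

      			free_pgd_range(tlb, vma->vm_start, vma->vm_end,
      				       floor, next ? next->vm_start : ceiling);
      			vma = next;
      		}
      	}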
  19. 16 Apr, 2005 1 commit
    •
      Linux-2.6.12-rc2 · 1da177e4
      Linus Torvalds authored
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!