  1. 06 Sep, 2005 1 commit
    • [PATCH] Invert sense of SLB class bit · 14b34661
      David Gibson authored
      Currently, we set the class bit in kernel SLB entries, and clear it on
      user SLB entries.  On POWER5, ERAT entries created in real mode have
      the class bit clear.  So to avoid flushing kernel ERAT entries on each
      context switch, this patch inverts our usage of the class bit, setting
      it on user SLB entries and clearing it on kernel SLB entries.
      
      Booted on POWER5 and G5.
      Signed-off-by: David Gibson <dwg@au1.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
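      To illustrate the idea, a hedged sketch (not the patch's actual arch/ppc64 code; the macro names and bit positions below are assumptions):

          /* Hypothetical helper: build the VSID dword of an SLB entry.  With the
           * inverted convention, user entries carry the class bit and kernel
           * entries do not, so a class-selective SLB flush on context switch can
           * leave kernel translations (and the matching real-mode ERAT entries on
           * POWER5, which are created with the class bit clear) untouched. */
          #define SLB_VSID_SHIFT  12                          /* assumed field position */
          #define SLB_VSID_C      0x0000000000000080UL        /* assumed class bit */

          static unsigned long mk_slb_vsid(unsigned long vsid, int user)
          {
                  unsigned long v = vsid << SLB_VSID_SHIFT;
                  if (user)
                          v |= SLB_VSID_C;    /* user segment: class 1 */
                  return v;                   /* kernel segment: class 0 */
          }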
  2. 29 Aug, 2005 2 commits
    • [PATCH] Dynamic hugepage addresses for ppc64 · c594adad
      David Gibson authored
      Paulus, I think this is now a reasonable candidate for the post-2.6.13
      queue.
      
      Relax address restrictions for hugepages on ppc64
      
      Presently, 64-bit applications on ppc64 may only use hugepages in the
      address region from 1-1.5T.  Furthermore, if hugepages are enabled in
      the kernel config, they may only use hugepages and never normal pages
      in this area.  This patch relaxes this restriction, allowing any
      address to be used with hugepages, but with a 1TB granularity.  That
      is, if you map a hugepage anywhere in the region 1TB-2TB, that entire
      area will be reserved exclusively for hugepages for the remainder of
      the process's lifetime.  This works analogously to hugepages in 32-bit
      applications, where hugepages can be mapped anywhere, but with 256MB
      (mmu segment) granularity.
      
      This patch applies on top of the four level pagetable patch
      (http://patchwork.ozlabs.org/linuxppc64/patch?id=1936).
      Signed-off-by: David Gibson <dwg@au1.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
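      As a rough illustration of the 1TB granularity (a hedged sketch only; the patch's real bookkeeping lives in the ppc64 mm context, and these helper names are assumptions):

          /* Track hugepage-only areas as one bit per 1TB segment of the address
           * space: once a hugepage is mapped anywhere in a segment, the whole
           * segment stays reserved for hugepages for the process lifetime. */
          #define SEG_SHIFT_1T    40      /* 1TB == 2^40 bytes */

          static inline unsigned int addr_to_1t_seg(unsigned long addr)
          {
                  return addr >> SEG_SHIFT_1T;
          }

          static inline int in_huge_segment(unsigned long huge_segs, unsigned long addr)
          {
                  return (huge_segs >> addr_to_1t_seg(addr)) & 1;
          }

          static inline unsigned long reserve_huge_segment(unsigned long huge_segs,
                                                           unsigned long addr)
          {
                  return huge_segs | (1UL << addr_to_1t_seg(addr));
          }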
    • [PATCH] Four level pagetables for ppc64 · e28f7faf
      David Gibson authored
      Implement 4-level pagetables for ppc64
      
      This patch implements full four-level page tables for ppc64, thereby
      extending the usable user address range to 44 bits (16T).
      
      The patch uses a full page for the tables at the bottom and top level,
      and a quarter page for the intermediate levels.  It uses full 64-bit
      pointers at every level, thus also increasing the addressable range of
      physical memory.  This patch also tweaks the VSID allocation to allow a
      matching range for user addresses (this halves the number of available
      contexts) and adds some #if and BUILD_BUG sanity checks.
      Signed-off-by: David Gibson <dwg@au1.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
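      For orientation, a minimal four-level lookup in the generic pagetable style of the era (an illustrative sketch, not code from the patch; error handling and locking are omitted):

          #include <linux/mm.h>

          /* Walk pgd -> pud -> pmd -> pte; each level consumes a slice of the
           * virtual address, which is what extends the usable user range. */
          static pte_t *lookup_pte(struct mm_struct *mm, unsigned long addr)
          {
                  pgd_t *pgd = pgd_offset(mm, addr);  /* top level: a full page */
                  pud_t *pud;
                  pmd_t *pmd;

                  if (pgd_none(*pgd))
                          return NULL;
                  pud = pud_offset(pgd, addr);        /* intermediate level */
                  if (pud_none(*pud))
                          return NULL;
                  pmd = pmd_offset(pud, addr);        /* intermediate level */
                  if (pmd_none(*pmd))
                          return NULL;
                  return pte_offset_map(pmd, addr);   /* bottom level; caller unmaps */
          }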
  3. 13 Jul, 2005 1 commit
  4. 22 Jun, 2005 2 commits
    • [PATCH] Avoiding mmap fragmentation · 1363c3cd
      Wolfgang Wander authored
      Ingo recently introduced a great speedup for allocating new mmaps using the
      free_area_cache pointer, which boosts the specweb SSL benchmark by 4-5% and
      causes huge performance increases in thread creation.
      
      The downside of this patch is that it does lead to fragmentation in the
      mmap-ed areas (visible via /proc/self/maps), such that some applications
      that work fine under 2.4 kernels quickly run out of memory on any 2.6
      kernel.
      
      The problem is twofold:
      
        1) the free_area_cache is used to continue a search for memory where
           the last search ended.  Before the change, new areas were always
           searched for from the base address on.

           So now new small areas are cluttering holes of all sizes
           throughout the whole mmap-able region, whereas before small areas
           tended to close holes near the base, leaving holes far from the base
           large and available for larger requests.
      
        2) the free_area_cache is also set to the location of the last
           munmap-ed area, so in scenarios where we allocate e.g. five regions of
           1K each, then free regions 4, 2 and 3 in this order, the next request for 1K
           will be placed in the position of the old region 3, whereas before we
           appended it to the still active region 1, placing it at the location
           of the old region 2.  Before we had 1 free region of 2K, now we only
           get two free regions of 1K -> fragmentation.
      
      The patch addresses these issues by introducing yet another cache
      descriptor, cached_hole_size, that contains the largest known hole size
      below the current free_area_cache.  If a new request comes in, the size
      is compared against the cached_hole_size and, if the request can be
      filled with a hole below free_area_cache, the search is started from the
      base instead.
      
      The results look promising: 2.6.12-rc4 fragments quickly (my earlier
      posted leakme.c test program terminates after 50000+ iterations with 96
      distinct and fragmented maps in /proc/self/maps), but it performs nicely
      (as expected) with thread creation; Ingo's test_str02 with 20000 threads
      requires 0.7s of system time.
      
      Taking out Ingo's patch (un-patch available on request), by basically
      deleting all mentions of free_area_cache from the kernel and always
      starting the search for new memory at the respective bases, we observe:
      leakme terminates successfully with 11 distinct, hardly fragmented areas
      in /proc/self/maps, but thread creation is grindingly slow: 30+s(!) of
      system time for Ingo's test_str02 with 20000 threads.
      
      Now - drumroll ;-) the appended patch works fine with leakme: it ends with
      only 7 distinct areas in /proc/self/maps and also thread creation seems
      sufficiently fast with 0.71s for 20000 threads.
      Signed-off-by: Wolfgang Wander <wwc@rentec.com>
      Credit-to: "Richard Purdie" <rpurdie@rpsys.net>
      Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
      Acked-by: Ingo Molnar <mingo@elte.hu> (partly)
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
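      The core of the heuristic, as a simplified sketch (not the exact mm/mmap.c code; bounds checks against TASK_SIZE and the no-space fallback are omitted):

          /* If the request would fit in a hole already known to exist below
           * free_area_cache, restart the search from the base so small requests
           * fill old holes instead of fragmenting fresh space near the top. */
          unsigned long get_unmapped_area_sketch(struct mm_struct *mm, unsigned long len)
          {
                  struct vm_area_struct *vma;
                  unsigned long addr;

                  if (len <= mm->cached_hole_size) {
                          mm->cached_hole_size = 0;
                          mm->free_area_cache = TASK_UNMAPPED_BASE;
                  }
                  addr = mm->free_area_cache;

                  for (vma = find_vma(mm, addr); ; vma = vma->vm_next) {
                          if (!vma || addr + len <= vma->vm_start) {
                                  mm->free_area_cache = addr + len;   /* resume here next time */
                                  return addr;
                          }
                          if (addr + mm->cached_hole_size < vma->vm_start)
                                  mm->cached_hole_size = vma->vm_start - addr;  /* largest hole so far */
                          addr = vma->vm_end;
                  }
          }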
    • [PATCH] Hugepage consolidation · 63551ae0
      David Gibson authored
      A lot of the code in arch/*/mm/hugetlbpage.c is quite similar.  This patch
      attempts to consolidate a lot of the code across the arch's, putting the
      combined version in mm/hugetlb.c.  There are a couple of uglyish hacks in
      order to convert all the hugepage archs, but the result is a very large
      reduction in the total amount of code.  It also means things like hugepage
      lazy allocation could be implemented in one place, instead of six.
      
      Tested, at least a little, on ppc64, i386 and x86_64.
      
      Notes:
      	- this patch changes the meaning of set_huge_pte() to be more
      	  analogous to set_pte()
      	- does SH4 need a special huge_ptep_get_and_clear()??
      Acked-by: William Lee Irwin <wli@holomorphy.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  5. 01 May, 2005 1 commit
  6. 19 Apr, 2005 2 commits
    • [PATCH] freepgt: hugetlb area is clean · 021740dc
      Hugh Dickins authored
      Once we're strict about clearing away page tables, hugetlb_prefault can assume
      there are no page tables left within its range.  Since the other arches
      continue if !pte_none here, let i386 do the same.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
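      The aligned behaviour, as a hedged sketch (hugetlb_prefault_sketch and its loop body are illustrative, not the literal i386 diff):

          /* While prefaulting a hugetlb range, a pte slot that is already
           * populated is simply skipped rather than treated as an error, since
           * stale page tables have been cleared away from the range beforehand. */
          static int hugetlb_prefault_sketch(struct mm_struct *mm,
                                             struct vm_area_struct *vma)
          {
                  unsigned long addr;

                  for (addr = vma->vm_start; addr < vma->vm_end; addr += HPAGE_SIZE) {
                          pte_t *pte = huge_pte_alloc(mm, addr);
                          if (!pte)
                                  return -ENOMEM;
                          if (!pte_none(*pte))
                                  continue;   /* already mapped: skip, don't fail */
                          /* ... allocate a huge page and set_huge_pte() it here ... */
                  }
                  return 0;
          }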
    • [PATCH] freepgt: hugetlb_free_pgd_range · 3bf5ee95
      Hugh Dickins authored
      ia64 and ppc64 had hugetlb_free_pgtables functions which were no longer being
      called, and it wasn't obvious what to do about them.
      
      The ppc64 case turns out to be easy: the associated tables are noted elsewhere
      and freed later, so it is safe either to skip its hugetlb areas or to go
      through the motions of freeing nothing.  Since ia64 does need a special case,
      restore to ppc64 the special case of skipping them.
      
      The ia64 hugetlb case has been broken since pgd_addr_end went in, though it
      probably appeared to work okay if you just had one such area; in fact it's
      been broken much longer if you consider a long munmap spanning from another
      region into the hugetlb region.
      
      In the ia64 hugetlb region, more virtual address bits are available than in
      the other regions, yet the page tables are structured the same way: the page
      at the bottom is larger.  Here we need to scale down each addr before passing
      it to the standard free_pgd_range.  Was about to write a hugely_scaled_down
      macro, but found htlbpage_to_page already exists for just this purpose.  Fixed
      off-by-one in ia64 is_hugepage_only_range.
      
      Uninline free_pgd_range to make it available to ia64.  Make sure the
      vma-gathering loop in free_pgtables cannot join a hugepage_only_range to any
      other (safe to join huges?  probably but don't bother).
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
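      In the spirit of the ia64 fix described above, a hedged sketch (the htlbpage_to_page name comes from the commit message; the floor/ceiling handling here is a simplification, not the literal ia64 code):

          /* Scale hugetlb-region addresses down before handing the range to the
           * generic free_pgd_range(), so the standard walk sees a normally
           * structured pagetable range. */
          void hugetlb_free_pgd_range_sketch(struct mmu_gather **tlb,
                                             unsigned long addr, unsigned long end,
                                             unsigned long floor, unsigned long ceiling)
          {
                  addr = htlbpage_to_page(addr);
                  end  = htlbpage_to_page(end);
                  if (floor)
                          floor = htlbpage_to_page(floor);
                  if (ceiling)
                          ceiling = htlbpage_to_page(ceiling);

                  free_pgd_range(tlb, addr, end, floor, ceiling);
          }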
  7. 16 Apr, 2005 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Linus Torvalds authored
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!