1. 19 Sep, 2002 10 commits
    • [PATCH] fix mmap(MAP_LOCKED) · 859629c6
      Andrew Morton authored
      From Hubertus Franke.
      
      The MAP_LOCKED flag to mmap() currently does nothing.  Hubertus' patch
      fixes it so that the relevant mapping is locked into memory, if the
      caller has CAP_IPC_LOCK.
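      
      A minimal userspace illustration of what the fixed flag provides (the
      size and protection flags below are arbitrary examples): with the
      patch applied, a MAP_LOCKED mapping behaves much like mmap() followed
      by mlock(), provided the caller has CAP_IPC_LOCK.
      
      #include <sys/mman.h>
      #include <stdio.h>
      
      int main(void)
      {
              size_t len = 1 << 20;   /* 1 MiB, arbitrary */
              void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS | MAP_LOCKED, -1, 0);
      
              if (p == MAP_FAILED) {
                      perror("mmap");
                      return 1;
              }
              /* with the fix, the pages backing [p, p+len) are locked in RAM */
              munmap(p, len);
              return 0;
      }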
    • [PATCH] fix suppression of page allocation failure warnings · d51832f3
      Andrew Morton authored
      Somebody somewhere is stomping on PF_NOWARN, and page allocation
      failure warnings are coming out of the wrong places.
      
      So change the handling of current->flags to be:
      
      int pf_flags = current->flags;
      
      current->flags |= PF_NOWARN;
      ...
      current->flags = pf_flags;
      
      which is a generally more robust approach.
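      
      A hedged kernel-context sketch of the pattern in use (the wrapper
      function and the alloc_page() call are illustrative stand-ins for
      whatever sits in the elided middle above):
      
      #include <linux/sched.h>
      #include <linux/mm.h>
      
      static struct page *quiet_alloc_page(unsigned int gfp_mask)
      {
              int pf_flags = current->flags;  /* remember the prior state */
              struct page *page;
      
              current->flags |= PF_NOWARN;    /* suppress failure warnings */
              page = alloc_page(gfp_mask);
              current->flags = pf_flags;      /* restore, rather than blindly clearing */
      
              return page;    /* may be NULL; the caller copes quietly */
      }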
    • [PATCH] readv/writev bounds checking fixes · d4872de3
      Andrew Morton authored
      - writev currently returns -EFAULT if _any_ of the segments has an
      invalid address.  We should only return -EFAULT if the first segment
      has a bad address.
      
      If some of the first segments have valid addresses we need to write
      them and return a partial result.
      
      - The current code only checks whether the sum-of-lengths is negative.
      If an individual segment has a negative length but the sum is still
      positive we miss that.
      
      So rework the code to detect this, and to be immune to odd wrapping
      situations.
      
      As a bonus, we save one pass across the iovec.
      
      - ditto for readv.
      
      The check for "does any segment have a negative length" has already
      been performed in do_readv_writev(), but it's basically free here, and
      we need to do it for generic_file_read/write anyway.
      
      This all means that the iov_length() function is unsafe because of
      wrap/overflow issues.  It should only be used after the
      generic_file_read/write or do_readv_writev() checking has been
      performed.  Its callers have been reviewed and they are OK.
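      
      A sketch of the kind of single-pass check this implies, written as a
      standalone helper rather than the actual generic_file_read/write code:
      
      #include <sys/uio.h>
      #include <sys/types.h>
      #include <errno.h>
      
      static ssize_t iov_checked_length(const struct iovec *iov, int nr_segs)
      {
              size_t total = 0;
              int seg;
      
              for (seg = 0; seg < nr_segs; seg++) {
                      total += iov[seg].iov_len;
                      /*
                       * If either this segment's length or the running total
                       * has the sign bit set, the OR is negative: that catches
                       * both "negative segment" and "sum wrapped".
                       */
                      if ((ssize_t)(total | iov[seg].iov_len) < 0)
                              return -EINVAL;
              }
              return total;
      }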
      
      The code now passes LTP testing and has been QA'd by Janet's team.
    • [PATCH] writev speedup · bd90a275
      Andrew Morton authored
      A patch from Hirokazu Takahashi to further speed up the recently
      sped-up writev code.
      
      Instead of running ->prepare_write/->commit_write for each individual
      segment, we walk the segments between prepare and commit.  So
      potentially much larger amounts of data are passed to commit_write(),
      and prepare_write() is called much less often.
      
      Added bonus: the segment walk happens inside the kmap_atomic(), so we
      run kmap_atomic() once per page, not once per segment.
      
      We've demonstrated a speedup of over 3x.  This is writing 1024-segment
      iovecs where the individual segments have an average length of 24
      bytes, which is a favourable case for this patch.
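      
      A hedged userspace sketch of the batching idea; prepare_page() and
      commit_page() are hypothetical stand-ins for the address_space
      ->prepare_write()/->commit_write() operations, and the real code
      lives in mm/filemap.c:
      
      #include <stdio.h>
      #include <string.h>
      #include <sys/uio.h>
      
      #define PAGE_SZ 4096
      
      static void prepare_page(char *page)            { (void)page; }
      static void commit_page(char *page, size_t len) { (void)page; printf("commit %zu bytes\n", len); }
      
      /* Fill each page from as many segments as fit, then commit it once. */
      static size_t batched_write(const struct iovec *iov, int nr_segs)
      {
              char page[PAGE_SZ];
              size_t in_page = 0, total = 0;
              int seg;
      
              prepare_page(page);
              for (seg = 0; seg < nr_segs; seg++) {
                      const char *src = iov[seg].iov_base;
                      size_t left = iov[seg].iov_len;
      
                      while (left) {
                              size_t chunk = PAGE_SZ - in_page;
                              if (chunk > left)
                                      chunk = left;
                              memcpy(page + in_page, src, chunk);
                              in_page += chunk;
                              src += chunk;
                              left -= chunk;
                              total += chunk;
                              if (in_page == PAGE_SZ) {       /* page full: one commit */
                                      commit_page(page, in_page);
                                      in_page = 0;
                                      prepare_page(page);
                              }
                      }
              }
              if (in_page)
                      commit_page(page, in_page);     /* final partial page */
              return total;
      }
      
      With 24-byte average segments, roughly 170 consecutive segments share
      a single prepare/commit pair (and a single kmap_atomic()) on a 4KB
      page.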
    • [PATCH] swapout fix · 62a29ea1
      Andrew Morton authored
      Silly bug which was halving swapout bandwidth: we've taken a copy of
      page->mapping into a local convenience variable, but forgot to update
      that local after adding the page to swapcache.
    • [PATCH] remove /proc/sys/vm/dirty_sync_thresh · da1eca60
      Andrew Morton authored
      This was designed to be a really stern throttling threshold: if dirty
      memory reaches this level then perform writeback and actually wait on
      it.
      
      It doesn't work, because memory dirtiers are already required to
      perform writeback once the amount of dirty AND under-writeback memory
      exceeds dirty_async_ratio, so throttling kicks in well before this
      threshold could be reached.
      
      So kill it, and rely just on the request queues being appropriately
      scaled to the machine size (they are).
      
      This is basically what 2.4 does.
    • [PATCH] remove statm_pgd_range · 6fda85f2
      Andrew Morton authored
      Bill Irwin's patch to avoid having to walk pagetables while generating
      /proc/<pid>/statm output.
      
      It can significantly overstate the size of various mappings because it
      assumes that all VMAs are fully populated.
      
      But spending 100% of one of my four CPUs running top(1) is a bug.
      
      Bill says this fixes a bug, too.  The `SIZE' parameter is supposed to
      display the amount of memory which the process would consume if it
      faulted everything in.  But "before it only showed instantiated
      3rd-level pagetables, so if something within a 4MB aligned range hadn't
      been faulted in it would slip past the old one".
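      
      An illustrative sketch of the approach (simplified structures, not the
      actual fs/proc code): SIZE is summed from the VMA extents, covering
      everything the process could fault in, with no pagetable walk at all.
      
      struct vm_area {
              unsigned long start, end;       /* byte extents of the mapping */
              struct vm_area *next;
      };
      
      #define PAGE_SHIFT 12   /* 4KB pages assumed for the example */
      
      /* Pages the process would use if every mapping were fully faulted in. */
      static unsigned long statm_size_pages(const struct vm_area *vma)
      {
              unsigned long pages = 0;
      
              for (; vma; vma = vma->next)
                      pages += (vma->end - vma->start) >> PAGE_SHIFT;
              return pages;
      }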
    • [PATCH] _alloc_pages cleanup · ccc98a67
      Andrew Morton authored
      Patch from Martin Bligh.  It should only affect machines using
      discontigmem.
      
      "This patch is was originally from Andrea's tree (from SGI??), and has
      been tweaked since by both Christoph (who cleaned up all the code),
      and myself (who just hit it until it worked).
      
      It removes _alloc_pages, and adds all nodes to the zonelists
      directly, which also changes the fallback zone order to something more
      sensible ...  instead of: "foreach (node) { foreach (zone) }" we now
      do something more like "foreach (zone_type) { foreach (node) }"
      
      Christoph has a more recent version that's fancier and does a couple
      more cleanups, but it seems to have a bug in it that I can't track
      down easily, so I propose we do the simple thing for now, and take the
      rest of the cleanups when it works ...  it seems to build nicely on
      top of this separately to me.
      
      Tested on 16-way NUMA-Q with discontigmem + NUMA support."
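      
      A hedged sketch of the ordering change quoted above (the structures
      and names are illustrative, not the actual mm/page_alloc.c code): the
      fallback list is built with the zone type as the outer loop and the
      node as the inner loop.
      
      enum zone_kind { KIND_HIGHMEM, KIND_NORMAL, KIND_DMA, NR_KINDS };
      
      struct zone_ref { int node; enum zone_kind kind; };
      
      /*
       * Try a given zone type on every node (local node first) before
       * falling back to the next, more precious zone type.
       */
      static int build_fallback_list(int local_node, int nr_nodes,
                                     struct zone_ref *list)
      {
              int n = 0, i;
              enum zone_kind k;
      
              for (k = KIND_HIGHMEM; k < NR_KINDS; k++)       /* foreach (zone_type) */
                      for (i = 0; i < nr_nodes; i++) {        /* foreach (node) */
                              list[n].node = (local_node + i) % nr_nodes;
                              list[n].kind = k;
                              n++;
                      }
              return n;       /* entries are tried in this order at allocation time */
      }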
    • [PATCH] free_area_init cleanup · e07316f9
      Andrew Morton authored
      Patch from Martin Bligh.  It should only affect machines using
      discontigmem.
      
      "This patch cleans up free_area_init stuff, and undefines mem_map and
      max_mapnr for discontigmem, where they were horrible kludges anyway
      ...  We just use the lmem_maps instead, which makes much more sense.
      It also kills pgdat->node_start_mapnr, which is tarred with the same
      brush.
      
      It breaks free_area_init_core into a couple of sections, pulls the
      allocation of the lmem_map back into the next higher function, and
      passes more things via the pgdat.  But that's not very interesting,
      the objective was to kill mem_map for discontigmem, which seems to
      attract bugs like flypaper.  This brings any misuses to obvious
      compile-time errors rather than weird oopses, which I can't help but
      feel is a good thing.
      
      It does break other discontigmem architectures, but in a very obvious
      way (they won't compile) and it's easy to fix.  I think that's a small
      price to pay ...  ;-) At some point soon I will follow up with a patch
      to remove free_area_init_node for the contig mem case, or at the very
      least rename it to something more sensible, like __free_area_init.
      
      Christoph has grander plans to kill mem_map more extensively in
      addition to the attached, but I've heard nobody disagree that it
      should die for the discontigmem case at least.
      
      Oh, and I renamed mem_map in drivers/pcmcia/sa1100 to pc_mem_map
      because my tiny little brain (and cscope) find it confusing like that.
      
      Tested on 16-way NUMA-Q with discontigmem + NUMA support and on a
      standard PC (well, boots and appears functional).  On top of
      2.5.33-mm4"
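      
      A hedged sketch of what "use the lmem_maps instead" means (the types
      and helpers here are hypothetical; the real definitions live in the
      per-arch mmzone headers): with no global mem_map, a pfn is resolved
      through the owning node's local map.
      
      struct page;                            /* opaque for this sketch */
      
      struct node_mem {
              struct page *lmem_map;          /* this node's local mem_map */
              unsigned long start_pfn;        /* first pfn it covers */
      };
      
      extern struct node_mem node_mem[];              /* hypothetical per-node table */
      extern int pfn_to_nid(unsigned long pfn);       /* arch-specific node lookup */
      
      /* Replaces the old global mem_map[pfn] indexing on discontigmem. */
      static inline struct page *pfn_to_page_discontig(unsigned long pfn)
      {
              struct node_mem *nm = &node_mem[pfn_to_nid(pfn)];
      
              return nm->lmem_map + (pfn - nm->start_pfn);
      }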
    • [PATCH] clean up argument passing in writeback paths · 967e6864
      Andrew Morton authored
      The writeback code paths which walk the superblocks and inodes are
      getting an increasing number of arguments passed to them.
      
      The patch wraps those args into the new `struct writeback_control',
      and uses that instead.  There is no functional change.
      
      The new writeback_control structure is passed down through the
      writeback paths in the place where the old `nr_to_write' pointer used
      to be.
      
      writeback_control will be used to pass new information up and down the
      writeback paths, such as whether the writeback should be non-blocking
      and whether queue congestion was encountered.
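      
      A hedged sketch of the structure's shape and how a caller fills one in
      (field names follow the description above; see include/linux/writeback.h
      for the real definition):
      
      struct writeback_control {
              long nr_to_write;               /* in: pages to write; decremented as we go */
              int nonblocking;                /* in: don't block on congested queues */
              int encountered_congestion;     /* out: a request queue was congested */
      };
      
      static void example_background_flush(void)
      {
              struct writeback_control wbc = {
                      .nr_to_write            = 1024,
                      .nonblocking            = 1,
                      .encountered_congestion = 0,
              };
      
              /* writeback_inodes(&wbc);  -- walks superblocks/inodes using wbc */
      
              if (wbc.encountered_congestion) {
                      /* e.g. back off and poll the queue later */
              }
      }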