1. 18 Aug, 2002 5 commits
  2. 17 Aug, 2002 6 commits
  3. 16 Aug, 2002 2 commits
  4. 15 Aug, 2002 27 commits
    • c2480c85
      Linus Torvalds authored
    • [PATCH] memory leak in current BK · 2329a4f6
      Andrew Morton authored
      Well I didn't test that very well.  __page_cache_release() is doing a
      __free_page() on a zero-ref page, so __free_pages() sends the refcount
      negative and doesn't free it.  With patch #8, page_cache_release()
      almost never frees pages, but it must have been leaking a little bit.
      Lucky it showed up.
      
      This fixes it, and also adds a missing PageReserved test in put_page().
      Which makes put_page() identical to page_cache_release(), but there are
      header file woes.  I'll fix that up later.
    • [PATCH] Reorder unlocking in rq_unlock · 0016745e
      Brad Heilbrun authored
      This trivial patch reorders the unlocking in rq_unlock()... I was
      tired of getting stack dumps in my messages file.
    • Don't allow user-level helpers to be run when our infrastructure · 0704298b
      Linus Torvalds authored
      isn't ready for it (either during early boot, or at shutdown)
    • Merge http://linux-isdn.bkbits.net/linux-2.5.isdn · 41421468
      Linus Torvalds authored
      into home.transmeta.com:/home/torvalds/v2.5/linux
    • ISDN: Remove debugging code · 0ccca8d5
      Kai Germaschewski authored
    • ISDN: Fix BC_BUSY problem · 4f609391
      Kai Germaschewski authored
      Make sure to properly reset the state after disconnect
      
      (Karsten Keil)
    • 992bbca5
      Kai Germaschewski authored
    • ISDN: __FUNCTION__ cleanup · 3baba482
      Kai Germaschewski authored
      Newer gccs don't accept string concatenation with __FUNCTION__, so
      use a "%s" format and pass __FUNCTION__ as an argument.
    • ISDN: Use C99 initializers · 6f124a96
      Kai Germaschewski authored
      Thanks to Rusty for posting the script...
    • ISDN: Fix Config.in problem · 7e7f7ea3
      Kai Germaschewski authored
      drivers/isdn/hysdn/Config.in was referring to
      CONFIG_ISDN_CAPI before it was defined.
      
      Noticed by Greg Banks.
    • [PATCH] thread management - take three · 496084cb
      Ingo Molnar authored
      You have applied my independent-pointer patch already, but I think your
      CLEARTID variant is the most elegant solution: it reuses a clone argument,
      thus reduces the number of arguments, and it's also a nice conceptual pair
      to the existing SETTID call. And the TID field can be used as a 'usage'
      field as well, because the TID (PID) can never be 0, reducing the number
      of fields in the TCB. And we can change the userspace locking code to use
      the TID field, no problem.
    • [PATCH] Include tgid when finding next_safe in get_pid() · eb2e58fd
      Paul Larson authored
    • [PATCH] reduce stack usage of sanitize_e820_map · 270ebb5c
      Benjamin LaHaise authored
      Currently, sanitize_e820_map uses 0x738 bytes of stack.  The patch below
      moves the arrays into __initdata, reducing stack usage to 0x34 bytes.
    • [PATCH] uninitialised local in generic_file_write · 7dd294f7
      Andrew Morton authored
      generic_file_write_nolock() is initialising the pagevec too late,
      so if we take an early `goto out' the kernel oopses.  O_DIRECT writes
      take that path.
    • [PATCH] PCI ID's for 2.5.31 · 75754eb4
      Martin Mares authored
      I've filtered all submissions to the ID database, merged new ID's from
      both 2.4.x and 2.5.x kernels and here is the result -- patch to 2.5.31
      pci.ids with all the new stuff. Could you please send it to Linus?
      (I would do it myself, but it seems I'll have a lot of work with the
      floods in Prague very soon.)
    • [PATCH] for i386 SETUP CODE · 9cbec887
      Keith Mannthey authored
      The following is a simple fix for an array overrun problem in
      mpparse.c.  I am working on a multiquad box which has an EISA bus in it
      for its service processor.  Its local bus number is 18, which is > 3
      (see quad_local_to_mp_bus_id).  When NR_CPUS is close to the real
      number of CPUs, adding EISA bus #18 to the array stomps all over
      various things in memory.  The EISA bus does not need to be mapped
      anywhere in the kernel for anything.  This patch will not affect
      non-clustered apic (multiquad) kernels.
    • [PATCH] Clean up the RPC socket slot allocation code [2/2] · fb9100d0
      Trond Myklebust authored
      Patch by Chuck Lever. Remove the timeout logic from call_reserve.
      This improves the overall RPC call ordering, and ensures that soft
      tasks don't time out and give up before they have attempted to send
      their message down the socket.
    • [PATCH] Clean up the RPC socket slot allocation code [1/2] · 7a72fa16
      Trond Myklebust authored
      Another patch by Chuck Lever. Fixes up some nasty logic in
      call_reserveresult().
    • [PATCH] cleanup RPC accounting · be6dd3ef
      Trond Myklebust authored
      The following patch is by Chuck Lever, and fixes an accounting
      error in the 'rpc' field in /proc/net/rpc/nfs.
    • [PATCH] Fix typo in the RPC reconnect code... · 0e6a8740
      Trond Myklebust authored
      The following patch fixes a typo that appears in both kernel 2.4.19
      and 2.5.31.
    • [PATCH] 2.5.31 reverse spin_lock_irq for i2c-elektor.c · 2e2fa887
      Albert Cranford authored
      Please reverse the deadlocking change to i2c-elektor.c.
    • [PATCH] deferred and batched addition of pages to the LRU · 44260240
      Andrew Morton authored
      The remaining source of page-at-a-time activity against
      pagemap_lru_lock is the anonymous pagefault path, which cannot be
      changed to operate against multiple pages at a time.
      
      But what we can do is to batch up just its adding of pages to the LRU,
      via buffering and deferral.
      
      This patch is based on work from Bill Irwin.
      
      The patch changes lru_cache_add to put the pages into a per-CPU
      pagevec.  They are added to the LRU 16-at-a-time.
      
      And in the page reclaim code, purge the local CPU's buffer before
      starting.  This is mainly to decrease the chances of pages staying off
      the LRU for very long periods: if the machine is under memory pressure,
      CPUs will spill their pages onto the LRU promptly.
      
      A consequence of this change is that we can have up to 15*num_cpus
      pages which are not on the LRU.  Which could have a slight effect on VM
      accuracy, but I find that doubtful.  If the system is under memory
      pressure the pages will be added to the LRU promptly, and these pages
      are the most-recently-touched ones - the VM isn't very interested in
      them anyway.
      
      This optimisation could be made SMP-specific, but I felt it best to
      turn it on for UP as well for consistency and better testing coverage.
    • [PATCH] pagemap_lru_lock wrapup · eed29d66
      Andrew Morton authored
      Some fallout from the pagemap_lru_lock changes:
      
      - lru_cache_del() is no longer used.  Kill it.
      
      - page_cache_release() almost never actually frees pages.  So inline
        page_cache_release() and move its rarely-called slow path into (the
        misnamed) mm/swap.c
      
      - update the locking comment in filemap.c.  pagemap_lru_lock used to
        be one of the outermost locks in the VM locking hierarchy.  Now, we
        never take any other locks while holding pagemap_lru_lock.  So it
        doesn't have any relationship with anything.
      
      - put_page() now removes pages from the LRU on the final put.  The
        lock is interrupt safe.
    • [PATCH] make pagemap_lru_lock irq-safe · aaba9265
      Andrew Morton authored
      It is expensive for a CPU to take an interrupt while holding the page
      LRU lock, because other CPUs will pile up on the lock while the
      interrupt runs.
      
      Disabling interrupts while holding the lock reduces contention by an
      additional 30% on 4-way.  This is when the only source of interrupts is
      disk completion.  The improvement will be higher with more CPUs and it
      will be higher if there is networking happening.
      
      The maximum hold time of this lock is 17 microseconds on a 500 MHz PIII,
      which is well inside the kernel's maximum interrupt latency (which was
      100 usecs when I last looked, a year ago).
      
      This optimisation is not needed on uniprocessor, but the patch disables
      IRQs while holding pagemap_lru_lock anyway, so it becomes an irq-safe
      spinlock, and pages can be moved from the LRU in interrupt context.
      
      pagemap_lru_lock has been renamed to _pagemap_lru_lock to pick up any
      missed uses, and to reliably break any out-of-tree patches which may be
      using the old semantics.
    • [PATCH] batched removal of pages from the LRU · 008f707c
      Andrew Morton authored
      Convert all the bulk callers of lru_cache_del() to use the batched
      pagevec_lru_del() function.
      
      Change truncate_complete_page() to not delete the page from the LRU.
      Do it in page_cache_release() instead.  (This reintroduces the problem
      with final-release-from-interrupt.  That gets fixed further on).
      
      This patch changes the truncate locking somewhat.  The removal from the
      LRU now happens _after_ the page has been removed from the
      address_space and has been unlocked.  So there is now a window where
      the shrink_cache code can discover the to-be-freed page via the LRU
      list.  But that's OK - the page is clean, its buffers (if any) are
      clean.  It's not attached to any mapping.
    • [PATCH] batched addition of pages to the LRU · 9eb76ee2
      Andrew Morton authored
      The patch goes through the various places which were calling
      lru_cache_add() against bulk pages and batches them up.
      
      Also.  This whole patch series improves the behaviour of the system
      under heavy writeback load.  There is a reduction in page allocation
      failures, some reduction in loss of interactivity due to page
      allocators getting stuck on writeback from the VM.  (This is still bad
      though).
      
      I think it's due to the change here in mpage_writepages().  That
      function was originally unconditionally refiling written-back pages to
      the head of the inactive list.  The theory being that they should be
      moved out of the way of page allocators, who would end up waiting on
      them.
      
      It appears that this simply had the effect of pushing dirty, unwritten
      data closer to the tail of the inactive list, making things worse.
      
      So instead, if the caller is (typically) balance_dirty_pages() then
      leave the pages where they are on the LRU.
      
      If the caller is PF_MEMALLOC then the pages *have* to be refiled.  This
      is because VM writeback is clustered along mapping->dirty_pages, and
      it's almost certain that the pages which are being written are near the
      tail of the LRU.  If they were left there, page allocators would block
      on them too soon.  It would effectively become a synchronous write.