  1. 08 Sep, 2004 1 commit
    • [PATCH] ppc64: fix __rw_yield prototype · ba5d8937
      Anton Blanchard authored
      From: Nathan Lynch <nathanl@austin.ibm.com>
      
      Hit this in latest bk:
      
      include/asm/spinlock.h: In function `_raw_read_lock':
      include/asm/spinlock.h:198: warning: passing arg 1 of `__rw_yield' from incompatible pointer type
      include/asm/spinlock.h: In function `_raw_write_lock':
      include/asm/spinlock.h:255: warning: passing arg 1 of `__rw_yield' from incompatible pointer type
      
      This seems to have been broken by the out-of-line spinlocks patch.
      You won't hit it unless you've enabled CONFIG_PPC_SPLPAR.  Use the
      rwlock_t for the argument type, and move the definition of rwlock_t up
      next to that of spinlock_t.
      Signed-off-by: Nathan Lynch <nathanl@austin.ibm.com>
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  2. 04 Sep, 2004 1 commit
  3. 02 Jul, 2004 1 commit
    • [PATCH] ppc64: PACA cleanup · 16b004a7
      David Gibson authored
      Cleanup the PPC64 PACA structure.  It was previously a big mess of
      unnecessary fields, overengineered cache layout and uninformative comments.
      This is essentially a rewrite of include/asm-ppc64/paca.h with associated
      changes elsewhere.  The patch:
      
      - Removes unused PACA fields
      
      - Removes unneeded #includes
      
      - Uses gcc attributes instead of explicit padding to get the desired
        cacheline layout, also rethinks the layout and comments accordingly.
      
      - Better comments where asm or firmware dependencies apply non-obvious
        layout constraints.
      
      - Splits up the pointless STAB structure, letting us move its elements
        independently.
      
      - Uses offsetof instead of hardcoded offset in spinlocks.
      
      - Eradicates xStudlyCaps identifiers
      
      - Replaces PACA guard page with an explicitly defined emergency stack
        (removing more than NR_CPUS pages from the initialized data segment).
      
      Cc: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  4. 25 May, 2004 1 commit
  5. 21 May, 2004 1 commit
    • [PATCH] ppc64: fix inline version of _raw_spin_trylock · 68a6cd9b
      Andrew Morton authored
      From: Paul Mackerras <paulus@samba.org>
      
      When I added the out-of-line spinlocks on PPC64, I inadvertently introduced
      a bug in the inline version of _raw_spin_trylock, where it returns the
      opposite of what it should return.  The patch below fixes it.
  6. 14 May, 2004 1 commit
    • [PATCH] ppc64: strengthen I/O and memory barriers · 2c3956c9
      Andrew Morton authored
      From: Paul Mackerras <paulus@samba.org>
      
      After I sent the recent patch to include/asm-ppc64/io.h which put stronger
      barriers in the I/O accessor macros, Paul McKenney pointed out to me that a
      writex/outx could still slide out from inside a spinlocked region.  This patch
      makes the barriers a bit stronger so that this can't happen.  It means that we
      need to use a sync instruction for wmb (a full "heavyweight" sync), since
      drivers rely on wmb for ordering between writes to system memory and writes to
      a device.
      
      I have left smp_wmb() as a lighter-weight barrier that orders stores, and
      doesn't impose an ordering between cacheable and non-cacheable accesses (the
      amusingly-named eieio instruction).  I am assuming here that smp_wmb is only
      used for ordering stores to system memory so that another cpu will see them in
      order.  It can't be used for enforcing any ordering that a device will see,
      because it is just a gcc barrier on UP.
      
      This also changes the spinlock/rwlock unlock code to use lwsync ("light-weight
      sync") rather than eieio, since eieio doesn't order loads, and we need to
      ensure that loads stay inside the spinlocked region.
  7. 10 May, 2004 2 commits
    • [PATCH] Un-inline spinlocks on ppc64 · 5dfd0a43
      Andrew Morton authored
      From: Paul Mackerras <paulus@samba.org>
      
      The patch below moves the ppc64 spinlocks and rwlocks out of line and into
      arch/ppc64/lib/locks.c, and implements _raw_spin_lock_flags for ppc64.
      
      Part of the motivation for moving the spinlocks and rwlocks out of line was
      that I needed to add code to the slow paths to yield the processor to the
      hypervisor on systems with shared processors.  On these systems, a cpu as
      seen by the kernel is a virtual processor that is not necessarily running
      full-time on a real physical cpu.  If we are spinning on a lock which is
      held by another virtual processor which is not running at the moment, we
      are just wasting time.  In such a situation it is better to do a hypervisor
      call to ask it to give the rest of our time slice to the lock holder so
      that forward progress can be made.
      
      The one problem with out-of-line spinlock routines is that lock contention
      will show up in profiles in the spin_lock etc.  routines rather than in the
      callers, as it does with inline spinlocks.  I have added a CONFIG_SPINLINE
      config option for people that want to do profiling.  In the longer term, Anton
      is talking about teaching the profiling code to attribute samples in the spin
      lock routines to the routine's caller.
      
      This patch reduces the kernel by about 80kB on my G5.  With inline
      spinlocks selected, the kernel gets about 4kB bigger than without the
      patch, because _raw_spin_lock_flags is slightly bigger than _raw_spin_lock.
      
      This patch depends on the patch from Keith Owens to add
      _raw_spin_lock_flags.
    • [PATCH] Allow architectures to reenable interrupts on contended spinlocks · 07f94531
      Andrew Morton authored
      From: Keith Owens <kaos@sgi.com>
      
      As requested by Linus, update all architectures to add the common
      infrastructure.  Tested on ia64 and i386.
      
      Enable interrupts while waiting for a disabled spinlock, but only if
      interrupts were enabled before issuing spin_lock_irqsave().
      
      This patch consists of three sections :-
      
      * An architecture independent change to call _raw_spin_lock_flags()
        instead of _raw_spin_lock() when the flags are available.
      
      * An ia64 specific change to implement _raw_spin_lock_flags() and to
        define _raw_spin_lock(lock) as _raw_spin_lock_flags(lock, 0) for the
        ASM_SUPPORTED case.
      
      * Patches for all other architectures and for ia64 with !ASM_SUPPORTED
        to map _raw_spin_lock_flags(lock, flags) to _raw_spin_lock(lock).
        Architecture maintainers can define _raw_spin_lock_flags() to do
        something useful if they want to enable interrupts while waiting for
        a disabled spinlock.
  8. 11 Sep, 2002 1 commit
  9. 12 Aug, 2002 1 commit
  10. 02 May, 2002 1 commit
  11. 12 Mar, 2002 1 commit
  12. 15 Feb, 2002 1 commit