1. 02 Mar, 2016 7 commits
    • powerpc: Restore FPU/VEC/VSX if previously used · 70fe3d98
      Cyril Bur authored
      Currently the FPU, VEC and VSX facilities are lazily loaded. This is not
      a problem unless a process is using these facilities.
      
      Modern versions of GCC are very good at automatically vectorising code;
      new and modernised workloads make use of floating point and vector
      facilities, and even the kernel makes use of a vectorised memcpy.
      
      All this combined greatly increases the cost of a syscall, since the
      kernel sometimes uses these facilities even in the syscall fast path,
      making it increasingly common for a thread to take an *_unavailable
      exception soon after a syscall, and potentially to take all three.
      
      The obvious overcompensation to this problem is to simply always load
      all the facilities on every exit to userspace. Loading up all FPU, VEC
      and VSX registers every time can be expensive and if a workload does
      avoid using them, it should not be forced to incur this penalty.
      
      An 8-bit counter is used to detect if the registers have been used in
      the past, and the registers are always loaded until the value wraps
      back to zero.
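
      A minimal userspace-style sketch of the counter idea (the field and
      helper names below are illustrative, not the kernel's exact ones):

        #include <stdbool.h>
        #include <stdint.h>

        struct thread_sketch {
                uint8_t load_fp;        /* assumed 8-bit FP usage counter */
        };

        /* On an fp_unavailable exception: the thread is using FP, so start
         * restoring its state eagerly on every exit to userspace. */
        static void fp_unavailable(struct thread_sketch *t)
        {
                t->load_fp = 1;
        }

        /* On exit to userspace: restore eagerly only while the counter is
         * non-zero; each restore bumps it, so after 255 restores it wraps
         * back to zero and the facility becomes lazy again. */
        static bool want_eager_fp_restore(struct thread_sketch *t)
        {
                if (!t->load_fp)
                        return false;
                t->load_fp++;
                return true;
        }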
      
      Several versions of the assembly in entry_64.S were tested:
      
        1. Always calling C.
        2. Performing a common case check and then calling C.
        3. A complex check in asm.
      
      After some benchmarking it was determined that avoiding C in the common
      case is a performance benefit (option 2). The full check in asm (option
      3) greatly complicated that codepath for a negligible performance gain
      and the trade-off was deemed not worth it.
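
      As a rough C rendering of option 2 (the real check is a few
      instructions of assembly in entry_64.S; regs, MSR_FP, MSR_VEC and
      restore_math() are used here illustratively, assuming the usual kernel
      definitions):

        /* Common case: both facilities already loaded, skip the C call. */
        if ((regs->msr & (MSR_FP | MSR_VEC)) != (MSR_FP | MSR_VEC))
                restore_math(regs);     /* uncommon case: restore in C */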
      Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
      [mpe: Move load_vec in the struct to fill an existing hole, reword change log]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc: Explicitly disable math features when copying thread · d272f667
      Cyril Bur authored
      Currently when threads get scheduled off they always give up the FPU,
      Altivec (VMX) and Vector (VSX) units if they were using them. When they
      are scheduled back on, a fault is then taken to enable each facility
      and load the registers. As a result, explicitly disabling FPU/VMX/VSX
      has not been necessary.
      
      Future changes and optimisations remove this mandatory give-up and
      fault, which could cause calls such as clone() and fork() to copy
      threads and run them later with FPU/VMX/VSX enabled but no registers
      loaded.
      
      This patch starts the process of having MSR_{FP,VEC,VSX} set mean that
      a thread's registers are hot, while having them clear mean that the
      registers must be loaded. This allows for a smarter return to
      userspace.
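
      A sketch of the effect (assuming the kernel's pt_regs and MSR_* macro
      definitions; the wrapper function is illustrative):

        /* When copying a thread, clear the math bits in the child's saved
         * MSR so a set bit reliably means the registers are hot. */
        static void clear_math_bits(struct pt_regs *childregs)
        {
                childregs->msr &= ~(MSR_FP | MSR_VEC | MSR_VSX);
        }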
      Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • selftests/powerpc: Test FPU and VMX regs in signal ucontext · 48e8c571
      Cyril Bur authored
      Load up the non-volatile FPU and VMX regs and ensure that they hold
      the expected values in a signal handler.
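
      The shape of such a check might look like the following (a hedged
      sketch assuming the powerpc64 ucontext layout; `expected` and
      `bad_context` are illustrative names, filled in by the code that
      loaded the registers):

        #include <signal.h>
        #include <ucontext.h>

        static double expected[18];        /* values loaded into f14-f31 */
        static volatile int bad_context;

        static void signal_usr1(int signum, siginfo_t *info, void *uc)
        {
                ucontext_t *ucp = uc;
                int i;

                /* f14-f31 are the non-volatile FP registers. */
                for (i = 14; i < 32; i++)
                        if (ucp->uc_mcontext.fp_regs[i] != expected[i - 14])
                                bad_context = 1;
        }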
      Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • selftests/powerpc: Test preservation of FPU and VMX regs across preemption · e5ab8be6
      Cyril Bur authored
      Loop in assembly checking the registers, with many threads.
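
      The structure implied by that one-liner, as a hedged sketch (the
      helper name check_fpu_loop() and the value array are assumptions
      standing in for the test's actual asm routine):

        #include <pthread.h>

        /* Assumed asm helper: loads known values into the non-volatile FP
         * regs and loops comparing them, returning non-zero on mismatch. */
        extern int check_fpu_loop(const double *vals);

        static double vals[18] = { 1.0, 2.0, 3.0 };   /* illustrative */

        static void *worker(void *arg)
        {
                (void)arg;
                return (void *)(long)check_fpu_loop(vals);
        }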
      Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • selftests/powerpc: Test the preservation of FPU and VMX regs across syscall · 01127f1e
      Cyril Bur authored
      Test that the non-volatile floating point and Altivec registers get
      correctly preserved across the fork() syscall.
      
      fork() works nicely for this purpose: the registers should be the same
      for both parent and child.
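
      A minimal sketch of that idea (load_regs() and check_regs() are
      assumed asm helpers, not the test's actual symbols):

        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        extern void load_regs(const double *vals);   /* fill f14-f31 etc. */
        extern int check_regs(const double *vals);   /* 0 if still intact */

        static int test_fork(const double *vals)
        {
                int status;
                pid_t pid;

                load_regs(vals);
                pid = fork();
                if (pid == 0)
                        _exit(check_regs(vals));     /* child checks its copy */

                waitpid(pid, &status, 0);
                return check_regs(vals) || WEXITSTATUS(status);
        }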
      Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
      [mpe: Add include guards to basic_asm.h, minor formatting]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • selftests/powerpc: Remove -flto from common CFLAGS · a4cf0a2e
      Suraj Jitindar Singh authored
      LTO can cause GCC to inline some functions which have attributes set.
      The act of inlining the functions can lead to GCC forgetting about the
      attributes, which leads to incorrect tests.
      
      Notable example being: __attribute__((__target__("no-vsx")))
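
      For illustration (a hedged example, not taken from the tests
      themselves), the attribute is meant to keep VSX code out of a function
      like this, but if LTO inlines it into a VSX-enabled caller the
      attribute can be silently discarded:

        __attribute__((__target__("no-vsx")))
        static int touch_fp_without_vsx(double a, double b)
        {
                return (a + b) > 0.0;   /* plain FP code, no VSX wanted here */
        }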
      
      LTO can also interact strangely with custom assembly functions and cause
      tests to intermittently fail.
      
      Both these cases are hard to detect and require manual inspection of
      binaries, which is unlikely to happen for all tests. Furthermore, LTO
      optimisations are not necessary for selftests, where correctness is
      paramount, so it is best to disable LTO.
      
      LTO can be enabled on a per-test basis.
      
      A pseries_le_defconfig kernel on a POWER8 was used to determine that the
      same subset of selftests pass and fail with and without -flto in the
      common Makefile.
      Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
      Reviewed-by: Cyril Bur <cyrilbur@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • selftests/powerpc: Fix out of bounds access in TM signal test · 501e279c
      Michael Ellerman authored
      Gcc helpfully points out that we're accessing past the end of the gprs
      array:
      
        tm-signal-msr-resv.c: In function 'signal_usr1':
        tm-signal-msr-resv.c:43:37: error: array subscript is above array bounds [-Werror=array-bounds]
          ucp->uc_mcontext.regs->gpr[PT_MSR] |= (7ULL);
      
      We haven't noticed previously because -flto was hiding it somehow.
      
      The code is confused: PT_MSR isn't a gpr, it lives in uc_regs->gregs,
      so fix it.
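
      A hedged sketch of the kind of change described (the exact field
      differs between the 32-bit and 64-bit ucontext layouts):

        /* PT_MSR indexes the general register set, not the gpr array. */
        ucp->uc_mcontext.gp_regs[PT_MSR] |= 7ULL;   /* was regs->gpr[PT_MSR] */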
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  2. 01 Mar, 2016 9 commits
  3. 29 Feb, 2016 7 commits
  4. 27 Feb, 2016 2 commits
  5. 25 Feb, 2016 1 commit
  6. 24 Feb, 2016 5 commits
  7. 22 Feb, 2016 6 commits
  8. 17 Feb, 2016 3 commits
    • powerpc: atomic: Implement acquire/release/relaxed variants for cmpxchg · 56c08e6d
      Boqun Feng authored
      Implement cmpxchg{,64}_relaxed and atomic{,64}_cmpxchg_relaxed, based on
      which _release variants can be built.
      
      To avoid superfluous barriers in _acquire variants, we implement these
      operations with assembly code rather than use __atomic_op_acquire() to
      build them automatically.
      
      For the same reason, we keep the assembly implementation of fully
      ordered cmpxchg operations.
      
      However, we don't do the same for _release, because that would require
      putting barriers in the middle of ll/sc loops, which is probably a bad
      idea.
      
      Note cmpxchg{,64}_relaxed and atomic{,64}_cmpxchg_relaxed are not
      compiler barriers.
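
      For reference, the general shape of a relaxed ll/sc cmpxchg on powerpc
      (a sketch following the usual lwarx/stwcx. pattern, using plain C
      types rather than the kernel's; not quoted verbatim from the patch):

        /* Relaxed: a bare lwarx/stwcx. loop with no entry or exit barrier.
         * An _acquire variant adds an acquire barrier after the loop; the
         * fully ordered version has barriers on both sides. */
        static inline unsigned long
        cmpxchg_u32_relaxed(unsigned int *p, unsigned long old,
                            unsigned long new)
        {
                unsigned long prev;

                __asm__ __volatile__(
        "1:     lwarx   %0,0,%2\n"      /* load word and reserve */
        "       cmpw    0,%0,%3\n"
        "       bne-    2f\n"           /* mismatch: fail without storing */
        "       stwcx.  %4,0,%2\n"      /* store conditionally */
        "       bne-    1b\n"           /* lost reservation: retry */
        "2:"
                : "=&r" (prev), "+m" (*p)
                : "r" (p), "r" (old), "r" (new)
                : "cc");

                return prev;
        }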
      Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc: atomic: Implement acquire/release/relaxed variants for xchg · 26760fc1
      Boqun Feng authored
      Implement xchg{,64}_relaxed and atomic{,64}_xchg_relaxed; based on
      these _relaxed variants, release/acquire variants and fully ordered
      versions can be built.
      
      Note that xchg{,64}_relaxed and atomic{,64}_xchg_relaxed are not
      compiler barriers.
      Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc: atomic: Implement atomic{,64}_*_return_* variants · dc53617c
      Boqun Feng authored
      On powerpc, acquire and release semantics can be achieved with
      lightweight barriers ("lwsync" and "ctrl+isync"), which can be used to
      implement __atomic_op_{acquire,release}.

      For release semantics, we only need to ensure that all memory accesses
      issued before the atomic take effect before the -store- part of the
      atomic, so "lwsync" is all we need. On platforms without "lwsync",
      "sync" should be used instead. Therefore in __atomic_op_release() we
      use PPC_RELEASE_BARRIER.

      For acquire semantics, "lwsync" is likewise all we need. However, on
      platforms without "lwsync" we can use "isync" rather than "sync" as an
      acquire barrier. Therefore in __atomic_op_acquire() we use
      PPC_ACQUIRE_BARRIER, which is barrier() on UP, "lwsync" if available
      and "isync" otherwise.
      
      Implement atomic{,64}_{add,sub,inc,dec}_return_relaxed, and build other
      variants with these helpers.
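
      A sketch of the two helpers described above (following the kernel's
      __atomic_op_* pattern and its PPC_ACQUIRE_BARRIER/PPC_RELEASE_BARRIER
      macros; shown for illustration rather than quoted from the patch):

        /* _acquire: relaxed op, then an acquire barrier once the result is
         * obtained. */
        #define __atomic_op_acquire(op, args...)                            \
        ({                                                                  \
                typeof(op##_relaxed(args)) __ret = op##_relaxed(args);      \
                __asm__ __volatile__(PPC_ACQUIRE_BARRIER "" : : : "memory");\
                __ret;                                                      \
        })

        /* _release: a release barrier, then the relaxed op. */
        #define __atomic_op_release(op, args...)                            \
        ({                                                                  \
                __asm__ __volatile__(PPC_RELEASE_BARRIER "" : : : "memory");\
                op##_relaxed(args);                                         \
        })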
      Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>