1. 10 Oct, 2017 1 commit
  2. 06 Oct, 2017 7 commits
  3. 05 Oct, 2017 2 commits
    • powerpc/jprobes: Validate break handler invocation as being due to a jprobe_return() · 3368f569
      Naveen N. Rao authored
      Fix a circa 2005 FIXME by implementing a check to ensure that we
      actually got into the jprobe break_handler() due to the trap in
      jprobe_return().
      Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
      Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
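
      A minimal sketch of the check being described, assuming a label
      (here called jprobe_return_trap, an illustrative name) marks the
      trap instruction planted in jprobe_return(); this is a hedged
      sketch, not the exact upstream diff:

        int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
        {
                struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();

                /* Only honour the break if it came from the trap in
                 * jprobe_return(); otherwise it isn't ours. */
                if (regs->nip != (unsigned long)jprobe_return_trap)
                        return 0;       /* let other handlers look at it */

                /* Restore the context saved by the jprobe pre-handler */
                memcpy(regs, &kcb->jprobe_saved_regs, sizeof(struct pt_regs));
                return 1;
        }
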
    • powerpc/jprobes: Disable preemption when triggered through ftrace · 6baea433
      Naveen N. Rao authored
      KPROBES_SANITY_TEST throws the below splat when CONFIG_PREEMPT is
      enabled:
      
        Kprobe smoke test: started
        DEBUG_LOCKS_WARN_ON(val > preempt_count())
        ------------[ cut here ]------------
        WARNING: CPU: 19 PID: 1 at kernel/sched/core.c:3094 preempt_count_sub+0xcc/0x140
        Modules linked in:
        CPU: 19 PID: 1 Comm: swapper/0 Not tainted 4.13.0-rc7-nnr+ #97
        task: c0000000fea80000 task.stack: c0000000feb00000
        NIP:  c00000000011d3dc LR: c00000000011d3d8 CTR: c000000000a090d0
        REGS: c0000000feb03400 TRAP: 0700   Not tainted  (4.13.0-rc7-nnr+)
        MSR:  8000000000021033 <SF,ME,IR,DR,RI,LE>  CR: 28000282  XER: 00000000
        CFAR: c00000000015aa18 SOFTE: 0
        <snip>
        NIP preempt_count_sub+0xcc/0x140
        LR  preempt_count_sub+0xc8/0x140
        Call Trace:
          preempt_count_sub+0xc8/0x140 (unreliable)
          kprobe_handler+0x228/0x4b0
          program_check_exception+0x58/0x3b0
          program_check_common+0x16c/0x170
          --- interrupt: 0 at kprobe_target+0x8/0x20
                           LR = init_test_probes+0x248/0x7d0
          kp+0x0/0x80 (unreliable)
          livepatch_handler+0x38/0x74
          init_kprobes+0x1d8/0x208
          do_one_initcall+0x68/0x1d0
          kernel_init_freeable+0x298/0x374
          kernel_init+0x24/0x160
          ret_from_kernel_thread+0x5c/0x70
        Instruction dump:
        419effdc 3d22001b 39299240 81290000 2f890000 409effc8 3c82ffcb 3c62ffcb
        3884bc68 3863bc18 4803d5fd 60000000 <0fe00000> 4bffffa8 60000000 60000000
        ---[ end trace 432dd46b4ce3d29f ]---
        Kprobe smoke test: passed successfully
      
      The issue is that we aren't disabling preemption in
      kprobe_ftrace_handler(). Disable it.
      
      Fixes: ead514d5 ("powerpc/kprobes: Add support for KPROBES_ON_FTRACE")
      Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
      Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      [mpe: Trim oops a little for formatting]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
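
      A hedged sketch of the shape of the fix, modelled on the commit
      text (details may differ from the actual diff): the ftrace path
      doesn't come through the trap handler, which would normally run
      with preemption disabled, so kprobe_ftrace_handler() must disable
      preemption itself before touching per-CPU kprobe state:

        /* Hedged sketch, not the verbatim upstream diff. */
        void kprobe_ftrace_handler(unsigned long nip, unsigned long parent_nip,
                                   struct ftrace_ops *ops, struct pt_regs *regs)
        {
                struct kprobe *p;
                struct kprobe_ctlblk *kcb;

                /* No trap entry on this path, so disable preemption
                 * ourselves before using per-CPU kprobe state. */
                preempt_disable();

                p = get_kprobe((kprobe_opcode_t *)nip);
                if (unlikely(!p) || kprobe_disabled(p))
                        goto out;

                kcb = get_kprobe_ctlblk();
                if (kprobe_running()) {
                        kprobes_inc_nmissed_count(p);
                } else {
                        __this_cpu_write(current_kprobe, p);
                        kcb->kprobe_status = KPROBE_HIT_ACTIVE;
                        if (p->pre_handler)
                                p->pre_handler(p, regs);
                        __this_cpu_write(current_kprobe, NULL);
                }
        out:
                preempt_enable_no_resched();
        }
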
  4. 04 Oct, 2017 18 commits
  5. 03 Oct, 2017 1 commit
  6. 28 Sep, 2017 2 commits
    • cxl: Enable global TLBIs for cxl contexts · 03b8abed
      Frederic Barrat authored
      The PSL and nMMU need to see all TLB invalidations for the memory
      contexts used on the adapter. For the hash memory model, this is
      done by making all TLBIs global as soon as the cxl driver is in
      use. For radix, we need something similar, but we can refine it
      and only convert to global the invalidations for contexts actually
      used by the device.
      
      The new mm_context_add_copro() API increments the 'active_cpus'
      count for the contexts attached to the cxl adapter. As soon as
      there is more than one active CPU, the TLBIs for the context
      become global. The active CPU count must be decremented when
      detaching, to restore locality if possible and to avoid
      overflowing the counter.
      
      The hash memory model support is somewhat limited, as we can't
      decrement the active CPU count when mm_context_remove_copro() is
      called, because we can't flush the TLB for an mm on hash. So TLBIs
      remain global on hash.
      Signed-off-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
      Fixes: f24be42a ("cxl: Add psl9 specific code")
      Tested-by: Alistair Popple <alistair@popple.id.au>
      [mpe: Fold in updated comment on the barrier from Fred]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
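
      A hedged sketch of the API pair described above, based only on the
      commit text (the inc_mm_active_cpus()/dec_mm_active_cpus() helpers
      adjusting the context's active-CPU count are assumptions here):

        static inline void mm_context_add_copro(struct mm_struct *mm)
        {
                /* With more than one "active cpu", TLB invalidations
                 * for this context go global, so the PSL/nMMU see them. */
                inc_mm_active_cpus(mm);
        }

        static inline void mm_context_remove_copro(struct mm_struct *mm)
        {
                /* On radix we can flush the whole mm and then drop the
                 * count, restoring local invalidations when possible.
                 * On hash we can't flush the TLB for an mm, so the
                 * count (and global TLBIs) must stay. */
                if (radix_enabled()) {
                        flush_all_mm(mm);
                        dec_mm_active_cpus(mm);
                }
        }
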
    • powerpc/mm: Export flush_all_mm() · 6110236b
      Frederic Barrat authored
      With the optimizations introduced by commit a46cc7a9
      ("powerpc/mm/radix: Improve TLB/PWC flushes"), flush_tlb_mm() no
      longer flushes the page walk cache (PWC) with radix. This patch
      introduces flush_all_mm(), which flushes everything, TLB and PWC, for
      a given mm.
      Signed-off-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
      Reviewed-by: Alistair Popple <alistair@popple.id.au>
      [mpe: Add a WARN_ON_ONCE() in the empty hash routines]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
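
      A hedged sketch of what the helper plausibly looks like, per the
      commit text and mpe's note about the empty hash routines
      (radix__flush_all_mm() as the radix implementation is an
      assumption):

        static inline void flush_all_mm(struct mm_struct *mm)
        {
                if (radix_enabled())
                        radix__flush_all_mm(mm);  /* TLB and page walk cache */
                else
                        hash__flush_all_mm(mm);
        }

        static inline void hash__flush_all_mm(struct mm_struct *mm)
        {
                /* No way to flush a whole mm on hash, so this is empty;
                 * warn if anyone expects it to do real work. */
                WARN_ON_ONCE(1);
        }
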
  7. 26 Sep, 2017 2 commits
    • powerpc/64s: Add workaround for P9 vector CI load issue · 5080332c
      Michael Neuling authored
      POWER9 DD2.1 and earlier have an issue where some cache-inhibited
      vector loads will return bad data. The workaround is in two parts:
      one firmware/microcode part triggers HMI interrupts when hitting
      such loads, while the other part is this patch, which then
      emulates the instructions in Linux.
      
      The affected instructions are limited to lxvd2x, lxvw4x, lxvb16x and
      lxvh8x.
      
      When an instruction triggers the HMI, all threads in the core will be
      sent to the HMI handler, not just the one running the vector load.
      
      In general, these spurious HMIs are detected by the emulation code
      and we just return back to the running process. Unfortunately, if
      a spurious interrupt occurs on a vector load to normal memory, we
      have no way to detect that it's spurious (unless we walk the page
      tables, which is very expensive). In this case we emulate the load
      anyway, but we need to do so using a vector load itself to ensure
      128-bit atomicity is preserved.
      
      Some additional debugfs counters for emulated instructions are
      also added.
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      [mpe: Switch CONFIG_PPC_BOOK3S_64 to CONFIG_VSX to unbreak the build]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
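
      A hedged sketch of the emulation dispatch (all names here are
      illustrative, not from the patch): decode the faulting
      instruction, emulate only the four affected loads, and treat
      anything else as a spurious HMI:

        /* Hedged sketch; opcode macros and helpers are illustrative. */
        static int emulate_ci_vector_load(struct pt_regs *regs, u32 instr)
        {
                switch (instr & OP_MASK) {
                case OP_LXVD2X:
                case OP_LXVW4X:
                case OP_LXVB16X:
                case OP_LXVH8X:
                        /* Emulate using a vector load so the 128-bit
                         * access stays atomic: if the HMI was spurious
                         * (the target was normal memory), a split
                         * emulation could tear the load. */
                        return emulate_vsx_load_atomic(regs, instr);
                default:
                        /* Not an affected load: spurious HMI, resume. */
                        return 0;
                }
        }
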
    • powerpc/powernv: Rework EEH initialization on powernv · b9fde58d
      Benjamin Herrenschmidt authored
      Remove the post_init callback, which is only used
      by powernv; we can just call it explicitly from
      the powernv code.
      
      This partially kills the ability to "disable" EEH
      at runtime via debugfs, as that was calling the
      same callback again, but this is both unused and
      broken in several ways. If we want to revive it,
      we need to create a dedicated enable/disable
      callback on the backend that does the right thing.
      
      Let the bulk of EEH initialize normally at
      core_initcall(), like it does on pseries, by
      removing the hack in eeh_init() that delays it.
      
      Instead, we make sure our eeh->probe cleanly bails
      out if the PEs haven't been created yet, and we
      force a re-probe where we used to call eeh_init()
      again.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Acked-by: Russell Currey <ruscur@russell.cc>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
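
      A hedged sketch of the bail-out described in the last paragraph
      (the probe signature and the exact checks are illustrative, not
      the upstream diff):

        /* Hedged sketch; names and checks are illustrative. */
        static void *pnv_eeh_probe(struct pci_dn *pdn, void *data)
        {
                struct eeh_dev *edev = pdn_to_eeh_dev(pdn);

                /* PEs aren't created until the EEH core has initialized
                 * at core_initcall(); bail cleanly here and let the
                 * forced re-probe revisit the device later. */
                if (!edev || !eeh_enabled())
                        return NULL;

                /* normal probe work would follow here */
                return NULL;
        }
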
  8. 24 Sep, 2017 7 commits