1. 03 Feb, 2017 33 commits
      KVM: MIPS: Improve kvm_get_inst() error return · 122e51d4
      James Hogan authored
      Currently kvm_get_inst() returns KVM_INVALID_INST in the event of a
      fault reading the guest instruction. This has the rather arbitrary magic
      value 0xdeadbeef. This API isn't very robust, and in fact 0xdeadbeef is
      a valid MIPS64 instruction encoding, namely "ld t1,-16657(s5)".
      
      Therefore change the kvm_get_inst() API to return 0 or -EFAULT, and to
      return the instruction via a u32 *out argument. We can then drop the
      KVM_INVALID_INST definition entirely.
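      A minimal sketch of the reworked interface and a caller, based on the
      description above (the caller shown is illustrative):

          int kvm_get_inst(u32 *opc, struct kvm_vcpu *vcpu, u32 *out);

          /* callers test the error code rather than a magic instruction word */
          u32 inst;
          int err = kvm_get_inst(opc, vcpu, &inst);
          if (err)
                  return err;     /* -EFAULT: fault reading the guest instruction */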
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      KVM: MIPS/T&E: Don't treat code fetch faults as MMIO · a1ecc54d
      James Hogan authored
      In order to make use of the CP0_BadInstr & CP0_BadInstrP registers we
      need to be a bit more careful not to treat code fetch faults as MMIO,
      lest we hit an UNPREDICTABLE register value when we try to emulate the
      MMIO load instruction but there was no valid instruction word available
      to the hardware.
      
      Add a kvm_is_ifetch_fault() helper to try to figure out whether a load
      fault was due to a code fetch, and prevent MMIO instruction emulation in
      that case.
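      A sketch of the heuristic (the field names and the msk_isa16_mode()
      detail are assumptions): the fault is a code fetch if CP0_BadVAddr
      matches the PC, or PC + 4 when executing in a branch delay slot.

          static inline bool kvm_is_ifetch_fault(struct kvm_vcpu_arch *vcpu)
          {
                  unsigned long badvaddr = vcpu->host_cp0_badvaddr; /* assumed */
                  unsigned long epc = msk_isa16_mode(vcpu->pc);
                  u32 cause = vcpu->host_cp0_cause;                 /* assumed */

                  if (epc == badvaddr)
                          return true;

                  /* the faulting fetch may be at EPC + 4 in a branch delay slot */
                  if ((cause & CAUSEF_BD) && epc + 4 == badvaddr)
                          return true;

                  return false;
          }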
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      KVM: MIPS/MMU: Drop kvm_get_new_mmu_context() · a98dd741
      James Hogan authored
      MIPS KVM uses its own variation of get_new_mmu_context() which takes an
      extra vcpu pointer (unused) and does exactly the same thing.
      
      Switch to just using get_new_mmu_context() directly and drop KVM's
      version of it as it doesn't really serve any purpose.
      
      The nearby declarations of kvm_mips_alloc_new_mmu_context(),
      kvm_mips_vcpu_load() and kvm_mips_vcpu_put() are also removed from
      kvm_host.h, as no definitions or users exist.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      KVM: MIPS/Emulate: Drop redundant TLB flushes on exceptions · 7071a885
      James Hogan authored
      When exceptions are injected into the MIPS KVM guest, the whole host TLB
      is flushed (except any entries in the guest KSeg0 range). This is
      certainly not mandated by the architecture when exceptions are taken
      (userland can't directly change TLB mappings anyway), and is a pretty
      heavyweight operation:
      
       - There may be hundreds of TLB entries especially when a 512 entry FTLB
         is present. These are walked and read and conditionally invalidated,
         so the TLBINV feature can't be used either.
      
       - It'll indiscriminately wipe out entries belonging to other memory
         spaces. A simple ASID regeneration would be much faster to perform,
         although it'd wipe out the guest KSeg0 mappings too.
      
      My suspicion is that this was simply to plaster over the fact that
      kvm_mips_host_tlb_inv() incorrectly only invalidated TLB entries in the
      ASID for guest usermode, and not the ASID for guest kernelmode.
      
      Now that the recent commit "KVM: MIPS/TLB: Flush host TLB entry in
      kernel ASID" fixes kvm_mips_host_tlb_inv() to flush TLB entries in the
      kernelmode ASID when the guest TLB changes, let's drop these calls and
      the otherwise unused kvm_mips_flush_host_tlb().
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      KVM: MIPS/TLB: Drop kvm_local_flush_tlb_all() · 49ec508e
      James Hogan authored
      Now that KVM no longer uses wired entries we can safely use
      local_flush_tlb_all() when we need to flush the entire TLB (on the start
      of a new ASID cycle). This doesn't flush wired entries, which allows
      other code to use them without KVM clobbering them all the time. It is
      also more up to date: it knows about the tlbinv architectural feature,
      flushes the micro TLB on cores where that is necessary (Loongson, I
      believe), and stops the HTW while doing so.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      KVM: MIPS/Emulate: Fix CACHE emulation for EVA hosts · 8af0e3c2
      James Hogan authored
      Use protected_writeback_dcache_line() instead of flush_dcache_line(),
      and protected_flush_icache_line() instead of flush_icache_line(), so
      that CACHEE (the EVA variant) is used on EVA host kernels.
      
      Without this, guest floating point branch delay slot emulation via a
      trampoline on the user stack fails on EVA host kernels due to failure of
      the icache sync, resulting in the break instruction being skipped and
      execution continuing from the stack.
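      Schematically, the substitution is (surrounding emulation code omitted):

          /* before: flush_dcache_line(addr); flush_icache_line(addr); */
          protected_writeback_dcache_line(addr);  /* CACHEE on EVA kernels */
          protected_flush_icache_line(addr);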
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      KVM: MIPS: Use uaccess to read/modify guest instructions · dacc3ed1
      James Hogan authored
      Now that we have GVA page tables, use standard user accesses with page
      faults disabled to read & modify guest instructions. This should be more
      robust (than the rather dodgy method of accessing guest mapped segments
      by just directly addressing them) and will also work with Enhanced
      Virtual Addressing (EVA) host kernel configurations where dedicated
      instructions are needed for accessing user mode memory.
      
      For simplicity and speed we do this regardless of the guest segment the
      address resides in, rather than handling guest KSeg0 specially with
      kmap_atomic() as before.
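      A sketch of the access pattern (the locals are illustrative): a standard
      user access wrapped in pagefault_disable() so that a missing GVA mapping
      fails fast with an error instead of entering the fault path.

          u32 inst;
          int err;

          pagefault_disable();
          err = get_user(inst, (u32 __user *)opc);  /* EVA-safe user access */
          pagefault_enable();
          if (err)
                  return -EFAULT;  /* caller refills the GVA page table, retries */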
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      KVM: MIPS: Drop vm_init() callback · 7a156e9f
      James Hogan authored
      Now that the commpage doesn't use wired TLB entries, the per-CPU
      vm_init() callback is the only work done by kvm_mips_init_vm_percpu().
      
      The trap & emulate implementation doesn't actually need to do anything
      from vm_init(), and the future VZ implementation would be better served
      by a kvm_arch_hardware_enable callback anyway.
      
      Therefore drop the vm_init() callback entirely, allowing the
      kvm_mips_init_vm_percpu() function to also be dropped, along with the
      kvm_mips_instance atomic counter.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      KVM: MIPS/MMU: Convert commpage fault handling to page tables · 4c86460c
      James Hogan authored
      Now that we have GVA page tables and an optimised TLB refill handler in
      place, convert the handling of commpage faults from the guest kernel to
      fill the GVA page table and invalidate the TLB entry, rather than
      filling the wired TLB entry directly.
      
      For simplicity we no longer use a wired entry for the commpage (refill
      should be much cheaper with the fast-path handler anyway). Since we
      don't need to manipulate the TLB directly any longer, move the function
      from tlb.c to mmu.c. This puts it closer to the similar functions
      handling KSeg0 and TLB mapped page faults from the guest.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      KVM: MIPS/MMU: Convert TLB mapped faults to page tables · 7e3d2a75
      James Hogan authored
      Now that we have GVA page tables and an optimised TLB refill handler in
      place, convert the handling of page faults in TLB mapped segment from
      the guest to fill a single GVA page table entry and invalidate the TLB
      entry, rather than filling a TLB entry pair directly.
      
      Also remove the now unused kvm_mips_get_{kernel,user}_asid() functions
      in mmu.c and kvm_mips_host_tlb_write() in tlb.c.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      KVM: MIPS/MMU: Convert KSeg0 faults to page tables · fb995893
      James Hogan authored
      Now that we have GVA page tables and an optimised TLB refill handler in
      place, convert the handling of KSeg0 page faults from the guest to fill
      the GVA page tables and invalidate the TLB entry, rather than filling a
      TLB entry directly.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      KVM: MIPS/MMU: Invalidate stale GVA PTEs on TLBW · aba85929
      James Hogan authored
      Implement invalidation of specific pairs of GVA page table entries in
      one or both of the GVA page tables. This is used when existing mappings
      are replaced in the guest TLB by emulated TLBWI/TLBWR instructions. Due
      to the sharing of page tables in the host kernel range, we should be
      careful not to allow host pages to be invalidated.
      
      Add a helper kvm_mips_walk_pgd() which can be used when walking of
      either GPA (future patches) or GVA page tables is needed, optionally
      with allocation of page tables along the way when they don't exist.
      
      GPA page table walking will need to be protected by the kvm->mmu_lock,
      so we also add a small MMU page cache in each KVM VCPU, like that found
      for other architectures but smaller. This allows enough pages to be
      pre-allocated to handle a single fault without holding the lock,
      allowing the helper to run with the lock held without having to handle
      allocation failures.
      
      Using the same mechanism for GVA allows the same code to be used, and
      allows it to use the same cache of allocated pages if the GPA walk
      didn't need to allocate any new tables.
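      A condensed sketch of such a walker under the usual MIPS
      pgd/pud/pmd/pte layout (the cache allocator name and the populate calls
      are assumptions):

          static pte_t *kvm_mips_walk_pgd(pgd_t *pgd,
                                          struct kvm_mmu_memory_cache *cache,
                                          unsigned long addr)
          {
                  pud_t *pud;
                  pmd_t *pmd;

                  pgd += pgd_index(addr);
                  pud = pud_offset(pgd, addr);
                  if (pud_none(*pud)) {
                          pmd_t *new_pmd;

                          if (!cache)     /* caller didn't allow allocation */
                                  return NULL;
                          new_pmd = mmu_memory_cache_alloc(cache);
                          pmd_init((unsigned long)new_pmd,
                                   (unsigned long)invalid_pte_table);
                          pud_populate(NULL, pud, new_pmd);
                  }
                  pmd = pmd_offset(pud, addr);
                  if (pmd_none(*pmd)) {
                          pte_t *new_pte;

                          if (!cache)
                                  return NULL;
                          new_pte = mmu_memory_cache_alloc(cache);
                          clear_page(new_pte);
                          pmd_populate_kernel(NULL, pmd, new_pte);
                  }
                  return pte_offset(pmd, addr);
          }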
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      KVM: MIPS/MMU: Invalidate GVA PTs on ASID changes · a31b50d7
      James Hogan authored
      Implement invalidation of large ranges of virtual addresses from GVA
      page tables in response to a guest ASID change (immediately for guest
      kernel page table, lazily for guest user page table).
      
      We iterate through a range of page tables invalidating entries and
      freeing fully invalidated tables. To minimise overhead the exact ranges
      invalidated depends on the flags argument to kvm_mips_flush_gva_pt(),
      which also allows it to be used in future KVM_CAP_SYNC_MMU patches in
      response to GPA changes, which unlike guest TLB mapping changes affects
      guest KSeg0 mappings.
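      The flags might look like the following (the names are assumptions
      based on the description):

          enum kvm_mips_flush {
                  KMF_USER = 0x0, /* flush guest USeg mappings only */
                  KMF_KERN = 0x1, /* also flush guest-kernel-only mappings */
                  KMF_GPA  = 0x2, /* GPA change: also covers guest KSeg0 */
          };

          void kvm_mips_flush_gva_pt(pgd_t *pgd, enum kvm_mips_flush flags);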
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      KVM: MIPS/TLB: Generalise host TLB invalidate to kernel ASID · 57e3869c
      James Hogan authored
      Refactor kvm_mips_host_tlb_inv() to also be able to invalidate any
      matching TLB entry in the kernel ASID rather than assuming only the TLB
      entries in the user ASID can change. Two new bool user/kernel arguments
      allow the caller to indicate whether the mapping should affect each of
      the ASIDs for guest user/kernel mode.
      
      - kvm_mips_invalidate_guest_tlb() (used by TLBWI/TLBWR emulation) can
        now invalidate any corresponding TLB entry in both the kernel ASID
        (guest kernel may have accessed any guest mapping), and the user ASID
        if the entry being replaced is in guest USeg (where guest user may
        also have accessed it).
      
      - The tlbmod fault handler (and the KSeg0 / TLB mapped / commpage fault
        handlers in later patches) can now invalidate the corresponding TLB
        entry in whichever ASID is currently active, since only a single page
        table will have been updated anyway.
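      A sketch of the resulting interface (the example call is illustrative):

          int kvm_mips_host_tlb_inv(struct kvm_vcpu *vcpu, unsigned long va,
                                    bool user, bool kernel);

          /* e.g. TLBWI/TLBWR: kernel ASID always, user ASID for USeg only */
          kvm_mips_host_tlb_inv(vcpu, va, va < KVM_GUEST_KSEG0, true);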
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      KVM: MIPS/TLB: Fix off-by-one in TLB invalidate · f3a8603f
      James Hogan authored
      kvm_mips_host_tlb_inv() uses the TLBP instruction to probe the host TLB
      for an entry matching the given guest virtual address, and determines
      whether a match was found based on whether CP0_Index > 0. This is
      technically incorrect as an index of 0 (with the high bit clear) is a
      perfectly valid TLB index.
      
      This is harmless at the moment due to the use of at least 1 wired TLB
      entry for the KVM commpage, however we will soon be ridding ourselves of
      that particular wired entry, so let's fix the condition in case the entry
      needing invalidation does land at TLB index 0.
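      The fix boils down to testing the sign bit of the probe result (the P
      bit, set on a miss) rather than excluding index 0. A sketch:

          int idx = read_c0_index();

          /* before: if (idx > 0) -- wrongly treated a hit at index 0 as a miss */
          if (idx >= 0) {         /* P (sign) bit clear: the probe matched */
                  /* invalidate the matching TLB entry */
          }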
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      KVM: MIPS: Add fast path TLB refill handler · a7cfa7ac
      James Hogan authored
      Use functions from the general MIPS TLB exception vector generation code
      (tlbex.c) to construct a fast path TLB refill handler similar to the
      general one, but cut down and capable of preserving K0 and K1.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      KVM: MIPS: Support NetLogic KScratch registers · 29b500b5
      James Hogan authored
      tlbex.c uses the implementation dependent $22 CP0 register group on
      NetLogic cores, with the help of the c0_kscratch() helper. Allow these
      registers to be allocated by the KVM entry code too instead of assuming
      KScratch registers are all $31, which will also allow pgd_reg to be
      handled since it is allocated that way.
      
      We also drop the masking of kscratch_mask with 0xfc, as it is redundant
      for the standard KScratch registers (Config4.KScrExist won't have the
      low 2 bits set anyway), and apparently not necessary for NetLogic.
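      The tlbex.c helper selects the register group along these lines (a
      sketch; the exact CPU cases are an assumption):

          static int c0_kscratch(void)
          {
                  switch (current_cpu_type()) {
                  case CPU_XLP:
                  case CPU_XLR:
                          return 22;      /* NetLogic impl-dependent group */
                  default:
                          return 31;      /* architectural KScratch registers */
                  }
          }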
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      KVM: MIPS/T&E: Activate GVA page tables in guest context · 7faa6eec
      James Hogan authored
      Activate the GVA page tables when in guest context. This will allow the
      normal Linux TLB refill handler to fill from it when guest memory is
      read, as well as preventing accidental reading from user memory.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      KVM: MIPS/T&E: Allocate GVA -> HPA page tables · f7f1427d
      James Hogan authored
      Allocate GVA -> HPA page tables for guest kernel and guest user mode on
      each VCPU, to allow for fast path TLB refill handling to be added later.
      
      In the process kvm_arch_vcpu_init() needs updating to pass on any error
      from the vcpu_init() callback.
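      The kvm_arch_vcpu_init() change is a straightforward propagation
      (sketch; the rest of the function is elided):

          int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
          {
                  int err;

                  err = kvm_mips_callbacks->vcpu_init(vcpu);
                  if (err)
                          return err; /* e.g. -ENOMEM allocating GVA page tables */

                  /* ... existing timer initialisation ... */
                  return 0;
          }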
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      KVM: MIPS: Wire up vcpu uninit · 630766b3
      James Hogan authored
      Wire up a vcpu uninit implementation callback. This will be used for the
      clean up of GVA->HPA page tables.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      KVM: MIPS/T&E: active_mm = init_mm in guest context · a7ebb2e4
      James Hogan authored
      Set init_mm as the active_mm and update mm_cpumask(current->mm) to
      reflect that it isn't active when in guest context. This prevents cache
      management code from attempting cache flushes on host virtual addresses
      while in guest context, for example due to cache management IPIs or,
      later, when writing of dynamically translated code hits copy-on-write.
      
      We do this using helpers in static kernel code to avoid having to export
      init_mm to modules.
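      Conceptually the helpers do something like this (a sketch; as noted
      above, the real helpers live in built-in kernel code precisely so that
      init_mm needn't be exported to modules):

          void kvm_mips_suspend_mm(int cpu)
          {
                  cpumask_clear_cpu(cpu, mm_cpumask(current->active_mm));
                  current->active_mm = &init_mm;
          }

          void kvm_mips_resume_mm(int cpu)
          {
                  cpumask_set_cpu(cpu, mm_cpumask(current->mm));
                  current->active_mm = current->mm;
          }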
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      KVM: MIPS/T&E: Restore host asid on return to host · 91cdee57
      James Hogan authored
      We only need the guest ASID loaded while in guest context, i.e. while
      running guest code and while handling guest exits. We load the guest
      ASID when entering the guest, however we restore the host ASID later
      than necessary, when the VCPU state is saved, i.e. vcpu_put(), or slightly
      earlier if preempted after returning to the host.
      
      This mismatch is both unpleasant and causes redundant host ASID restores
      in kvm_trap_emul_vcpu_put(). Let's explicitly restore the host ASID when
      returning to the host, and not bother restoring the host ASID on
      context switch-in unless we're already in guest context.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      KVM: MIPS: Add vcpu_run() & vcpu_reenter() callbacks · a2c046e4
      James Hogan authored
      Add implementation callbacks for entering the guest (vcpu_run()) and
      reentering the guest (vcpu_reenter()), allowing implementation specific
      operations to be performed before entering the guest or after returning
      to the host without cluttering kvm_arch_vcpu_ioctl_run().
      
      This allows the T&E specific lazy user GVA flush to be moved into
      trap_emul.c, along with disabling of the HTW. We also move
      kvm_mips_deliver_interrupts() as VZ will need to restore the guest timer
      state prior to delivering interrupts.
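      A sketch of where the callbacks slot into kvm_mips_callbacks:

          struct kvm_mips_callbacks {
                  /* ... existing callbacks ... */
                  int (*vcpu_run)(struct kvm_run *run, struct kvm_vcpu *vcpu);
                  void (*vcpu_reenter)(struct kvm_run *run,
                                       struct kvm_vcpu *vcpu);
          };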
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      KVM: MIPS: Remove duplicated ASIDs from vcpu · c550d539
      James Hogan authored
      The kvm_vcpu_arch structure contains mm_structs (for guest kernel and
      guest user mode) used for allocating MMU contexts (primarily the ASID),
      but it also copies the resulting ASIDs into guest_{user,kernel}_asid[]
      arrays which are referenced from uasm-generated code.
      
      This duplication doesn't seem to serve any purpose, and it gets in the
      way of generalising the ASID handling across guest kernel/user modes, so
      lets just extract the ASID straight out of the mm_struct on demand, and
      in fact there are convenient cpu_context() and cpu_asid() macros for
      doing so.
      
      To reduce the verbosity of this code we also add kern_mm and user_mm
      local variables where the kernel and user mm_structs are used.
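      With the arrays gone, the ASID is read on demand, e.g. (sketch):

          struct mm_struct *kern_mm = &vcpu->arch.guest_kernel_mm;

          /* the current guest-kernel-mode ASID on this CPU */
          write_c0_entryhi(cpu_asid(cpu, kern_mm));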
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      KVM: MIPS/MMU: Move preempt/ASID handling to implementation · 1581ff3d
      James Hogan authored
      The MIPS KVM host and guest GVA ASIDs may need regenerating when
      scheduling a process in guest context, which is done from the
      kvm_arch_vcpu_load() / kvm_arch_vcpu_put() functions in mmu.c.
      
      However this is a fairly implementation specific detail. VZ for example
      may use GuestIDs instead of normal ASIDs to distinguish mappings
      belonging to different guests, and even on VZ without GuestID the root
      TLB will be used differently to trap & emulate.
      
      Trap & emulate GVA ASIDs only relate to the user part of the full
      address space, so can be left active during guest exit handling (guest
      context) to allow guest instructions to be easily read and translated.
      
      VZ root ASIDs however are for GPA mappings so can't be left active
      during normal kernel code. They also aren't useful for accessing guest
      virtual memory, and we should have CP0_BadInstr[P] registers available
      to provide encodings of trapping guest instructions anyway.
      
      Therefore move the ASID preemption handling into the implementation
      callback.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      KVM: MIPS: Convert get/set_regs -> vcpu_load/put · a60b8438
      James Hogan authored
      Convert the get_regs() and set_regs() callbacks to vcpu_load() and
      vcpu_put(), which provide a cpu argument and more closely match the
      kvm_arch_vcpu_load() / kvm_arch_vcpu_put() that they are called by.
      
      This is in preparation for moving ASID management into the
      implementations.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      KVM: MIPS/MMU: Simplify ASID restoration · 1534b396
      James Hogan authored
      KVM T&E uses an ASID for guest kernel mode and an ASID for guest user
      mode. The current ASID is saved when the guest is scheduled out, and
      restored when scheduling back in, with checks for whether the ASID needs
      to be regenerated.
      
      This isn't really necessary as the ASID can be easily determined by the
      current guest mode, so let's simplify it to just read the required ASID
      from guest_kernel_asid or guest_user_asid even if the ASID hasn't been
      regenerated.
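      A sketch of the simplified switch-in path (the macro and fields are
      those named in this series):

          /* derive the ASID from the current guest mode instead of saving it */
          if (KVM_GUEST_KERNEL_MODE(vcpu))
                  write_c0_entryhi(vcpu->arch.guest_kernel_asid[cpu]);
          else
                  write_c0_entryhi(vcpu->arch.guest_user_asid[cpu]);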
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      KVM: MIPS: Drop partial KVM_NMI implementation · 00104b41
      James Hogan authored
      MIPS incompletely implements the KVM_NMI ioctl to supposedly perform a
      CPU reset, but all it actually does is invalidate the ASIDs. It doesn't
      expose the KVM_CAP_USER_NMI capability which is supposed to indicate the
      presence of the KVM_NMI ioctl, and no user software actually uses it on
      MIPS.
      
      Since this is dead code that would technically need updating for GVA
      page table handling in upcoming patches, remove it now. If we wanted to
      implement NMI injection later it can always be done properly along with
      the KVM_CAP_USER_NMI capability, and if we wanted to implement a proper
      CPU reset it would be better done with a separate ioctl.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Merge MIPS prerequisites · adb0b25f
      James Hogan authored
      Merge in MIPS prerequisites from the GVA page tables and GPA page tables
      series. The same branch can also merge into the MIPS tree.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      MIPS: Add return errors to protected cache ops · 7170bdc7
      James Hogan authored
      The protected cache ops contain no out-of-line fixup code to return an
      error code in the event of a fault, with the cache op being skipped in
      that case. For KVM, however, we'd like to detect this case: page
      faulting will be disabled, so a fault could happen during normal
      operation if the GVA page tables were flushed, and it needs to be
      handled by the caller.
      Add the out-of-line fixup code to load the error value -EFAULT into the
      return variable, and adapt the protected cache line functions to pass
      the error back to the caller.
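      Callers can then propagate the fault, along these lines (sketch):

          /* the protected variants now return 0 or -EFAULT */
          err = protected_writeback_dcache_line(badvaddr);
          if (!err)
                  err = protected_flush_icache_line(badvaddr);
          if (err)
                  return -EFAULT; /* mapping gone; let the caller fault it back in */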
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Acked-by: Ralf Baechle <ralf@linux-mips.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      MIPS: Export some tlbex internals for KVM to use · 722b4544
      James Hogan authored
      Export the TLB exception code generating functions so that KVM can
      construct a fast TLB refill handler for guest context without
      reinventing the wheel quite so much.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Acked-by: Ralf Baechle <ralf@linux-mips.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      MIPS: uasm: Add include guards in asm/uasm.h · 93a93c24
      James Hogan authored
      Add include guards in asm/uasm.h to allow it to be safely used by a new
      header asm/tlbex.h in the next patch to expose TLB exception building
      functions for KVM to use.
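      That is, the conventional pattern (the guard macro name is an
      assumption):

          #ifndef __ASM_UASM_H
          #define __ASM_UASM_H

          /* ... existing declarations ... */

          #endif /* __ASM_UASM_H */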
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Acked-by: Ralf Baechle <ralf@linux-mips.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      MIPS: Export pgd/pmd symbols for KVM · ccf01516
      James Hogan authored
      Export pmd_init(), invalid_pmd_table and tlbmiss_handler_setup_pgd to
      GPL kernel modules so that MIPS KVM can use the inline page table
      management functions and switch between page tables:
      
      - pmd_init() will be used directly by KVM to initialise newly allocated
        pmd tables with invalid lower level table pointers.
      
      - invalid_pmd_table is used by pud_present(), pud_none(), and
        pud_clear(), which KVM will use to test and clear pud entries.
      
      - tlbmiss_handler_setup_pgd() will be called by KVM entry code to switch
        to the appropriate GVA page tables.
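      The exports themselves are one-liners next to the existing definitions
      (sketch):

          EXPORT_SYMBOL_GPL(pmd_init);
          EXPORT_SYMBOL_GPL(invalid_pmd_table);
          EXPORT_SYMBOL_GPL(tlbmiss_handler_setup_pgd);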
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Acked-by: Ralf Baechle <ralf@linux-mips.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org