1. 30 May, 2014 10 commits
    • KVM: PPC: Book3S PR: PAPR: Access HTAB in big endian · 1692aa3f
      Alexander Graf authored
      The HTAB on PPC is always big endian. When we access it via hypercalls
      on behalf of the guest and we're running on a little endian host, we need
      to byte-swap the values accordingly (a sketch of the accessor pattern
      follows this entry).
      Signed-off-by: Alexander Graf <agraf@suse.de>
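      For this hypercall-path HTAB access (and the copy_from/to_user based HTAB
      accesses in the Book3S_64/_32 commits below), the fix follows the usual
      endian-accessor pattern. A minimal sketch with hypothetical helper names
      and the standard kernel byte-order macros:

        #include <linux/types.h>
        #include <asm/byteorder.h>

        /* The in-memory HPTE is always big endian, so convert on every load and
         * store, and do all computation in native (possibly little endian) order. */
        static inline u64 read_hpte_dword(const __be64 *hpte_dword)
        {
                return be64_to_cpu(*hpte_dword);        /* BE in memory -> native */
        }

        static inline void write_hpte_dword(__be64 *hpte_dword, u64 val)
        {
                *hpte_dword = cpu_to_be64(val);         /* native -> BE in memory */
        }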
    • KVM: PPC: Book3S PR: Default to big endian guest · 94810ba4
      Alexander Graf authored
      The default MSR when user space does not define anything should be identical
      on little and big endian hosts, so remove MSR_LE from it.
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S_64 PR: Access shadow slb in big endian · 14a7d41d
      Alexander Graf authored
      The "shadow SLB" in the PACA is shared with the hypervisor, so it has to
      be big endian. We access the shadow SLB during world switch, so let's make
      sure we access it in big endian even when we're on a little endian host.
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S_64 PR: Access HTAB in big endian · 4e509af9
      Alexander Graf authored
      The HTAB is always big endian. We access the guest's HTAB using
      copy_from/to_user, but don't yet take care of the fact that we might
      be running on an LE host.
      
      Wrap all accesses to the guest HTAB with big endian accessors.
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S_32: PR: Access HTAB in big endian · 860540bc
      Alexander Graf authored
      The HTAB is always big endian. We access the guest's HTAB using
      copy_from/to_user, but don't yet take care of the fact that we might
      be running on an LE host.
      
      Wrap all accesses to the guest HTAB with big endian accessors.
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S: PR: Fix C/R bit setting · 740f834e
      Alexander Graf authored
      Commit 9308ab8e made C/R HTAB updates go byte-wise into the target HTAB.
      However, it updated the host-local copy of the HTAB rather than the
      guest's copy.
      
      Write to the guest's HTAB instead.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      CC: Paul Mackerras <paulus@samba.org>
      Acked-by: Paul Mackerras <paulus@samba.org>
    • KVM: PPC: BOOK3S: PR: Fix WARN_ON with debug options on · 7562c4fd
      Aneesh Kumar K.V authored
      With the debug option "sleep inside atomic section checking" enabled, we get
      the WARN_ON below during a PR KVM boot. This is because upstream now has
      PREEMPT_COUNT enabled even when kernel preemption is disabled. Fix the
      warning by adding preempt_disable/enable around the floating point and
      altivec enablement (see the sketch after this entry).
      
      WARNING: at arch/powerpc/kernel/process.c:156
      Modules linked in: kvm_pr kvm
      CPU: 1 PID: 3990 Comm: qemu-system-ppc Tainted: G        W     3.15.0-rc1+ #4
      task: c0000000eb85b3a0 ti: c0000000ec59c000 task.ti: c0000000ec59c000
      NIP: c000000000015c84 LR: d000000003334644 CTR: c000000000015c00
      REGS: c0000000ec59f140 TRAP: 0700   Tainted: G        W      (3.15.0-rc1+)
      MSR: 8000000000029032 <SF,EE,ME,IR,DR,RI>  CR: 42000024  XER: 20000000
      CFAR: c000000000015c24 SOFTE: 1
      GPR00: d000000003334644 c0000000ec59f3c0 c000000000e2fa40 c0000000e2f80000
      GPR04: 0000000000000800 0000000000002000 0000000000000001 8000000000000000
      GPR08: 0000000000000001 0000000000000001 0000000000002000 c000000000015c00
      GPR12: d00000000333da18 c00000000fb80900 0000000000000000 0000000000000000
      GPR16: 0000000000000000 0000000000000000 0000000000000000 00003fffce4e0fa1
      GPR20: 0000000000000010 0000000000000001 0000000000000002 00000000100b9a38
      GPR24: 0000000000000002 0000000000000000 0000000000000000 0000000000000013
      GPR28: 0000000000000000 c0000000eb85b3a0 0000000000002000 c0000000e2f80000
      NIP [c000000000015c84] .enable_kernel_fp+0x84/0x90
      LR [d000000003334644] .kvmppc_handle_ext+0x134/0x190 [kvm_pr]
      Call Trace:
      [c0000000ec59f3c0] [0000000000000010] 0x10 (unreliable)
      [c0000000ec59f430] [d000000003334644] .kvmppc_handle_ext+0x134/0x190 [kvm_pr]
      [c0000000ec59f4c0] [d00000000324b380] .kvmppc_set_msr+0x30/0x50 [kvm]
      [c0000000ec59f530] [d000000003337cac] .kvmppc_core_emulate_op_pr+0x16c/0x5e0 [kvm_pr]
      [c0000000ec59f5f0] [d00000000324a944] .kvmppc_emulate_instruction+0x284/0xa80 [kvm]
      [c0000000ec59f6c0] [d000000003336888] .kvmppc_handle_exit_pr+0x488/0xb70 [kvm_pr]
      [c0000000ec59f790] [d000000003338d34] kvm_start_lightweight+0xcc/0xdc [kvm_pr]
      [c0000000ec59f960] [d000000003336288] .kvmppc_vcpu_run_pr+0xc8/0x190 [kvm_pr]
      [c0000000ec59f9f0] [d00000000324c880] .kvmppc_vcpu_run+0x30/0x50 [kvm]
      [c0000000ec59fa60] [d000000003249e74] .kvm_arch_vcpu_ioctl_run+0x54/0x1b0 [kvm]
      [c0000000ec59faf0] [d000000003244948] .kvm_vcpu_ioctl+0x478/0x760 [kvm]
      [c0000000ec59fcb0] [c000000000224e34] .do_vfs_ioctl+0x4d4/0x790
      [c0000000ec59fd90] [c000000000225148] .SyS_ioctl+0x58/0xb0
      [c0000000ec59fe30] [c00000000000a1e4] syscall_exit+0x0/0x98
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
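      A minimal sketch of the shape of the fix (surrounding guest-FP load code is
      only hinted at; names are illustrative):

        #include <linux/preempt.h>
        #include <asm/switch_to.h>

        /* enable_kernel_fp() expects to run with preemption disabled; with
         * PREEMPT_COUNT now always available, the atomic-sleep debug check
         * catches callers that leave preemption enabled, as in the trace above. */
        static void kvmppc_enable_guest_fp_sketch(void)
        {
                preempt_disable();
                enable_kernel_fp();
                /* ... load the guest FP state here ... */
                preempt_enable();
        }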
    • KVM: PPC: BOOK3S: PR: Enable Little Endian PR guest · e5ee5422
      Aneesh Kumar K.V authored
      This patch makes sure we inherit the LE bit correctly in the different
      cases, so that we can run a Little Endian distro in PR mode.
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: E500: Add dcbtls emulation · 8f20a3ab
      Alexander Graf authored
      The dcbtls instruction is able to lock data inside the L1 cache.
      
      We don't want to give the guest actual access to hardware cache locks,
      as that could influence other VMs on the same system. But we can tell
      the guest that its locking attempt failed.
      
      By emulating the instruction, we at least avoid giving the guest a
      program exception that it definitely does not expect.
      Signed-off-by: Alexander Graf <agraf@suse.de>
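      A sketch of the idea; the exact structure holding the guest's L1CSR0 shadow
      and the constant name are simplified here:

        /* Accept dcbtls, but report the lock attempt as failed by setting the
         * "cache unable to lock" status bit, rather than raising a program check
         * the guest does not expect. */
        static int kvmppc_e500_emul_dcbtls_sketch(struct kvmppc_vcpu_e500 *vcpu_e500)
        {
                vcpu_e500->l1csr0 |= L1CSR0_CUL;
                return EMULATE_DONE;
        }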
    • KVM: PPC: E500: Ignore L1CSR1_ICFI,ICLFR · 07fec1c2
      Alexander Graf authored
      The L1 instruction cache control register contains bits that indicate
      that we're still handling a request. Mask those out when we set the SPR
      so that a read doesn't assume we're still doing something.
      Signed-off-by: Alexander Graf <agraf@suse.de>
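      A sketch of the mtspr handling described above (where the shadow register
      lives is simplified here):

        /* Drop the self-clearing "operation in progress" bits on a guest write so
         * that a later guest read of L1CSR1 sees the flash invalidate / lock flash
         * clear as already completed. */
        static void kvmppc_e500_set_l1csr1_sketch(struct kvmppc_vcpu_e500 *vcpu_e500,
                                                  ulong spr_val)
        {
                vcpu_e500->l1csr1 = spr_val & ~(L1CSR1_ICFI | L1CSR1_ICLFR);
        }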
  2. 22 May, 2014 6 commits
    • KVM: vmx: DR7 masking on task switch emulation is wrong · 1f854112
      Nadav Amit authored
      The DR7 masking done on task switch emulation should be written in hex
      (clearing the local breakpoint enable bits 0, 2, 4 and 6).
      Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
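      The arithmetic behind the commit message: the local breakpoint enable bits
      L0-L3 sit at DR7 bit positions 0, 2, 4 and 6, i.e. the mask 0x55; the same
      digits read as a decimal literal (55 == 0x37) cover a different set of bits.
      Illustration only, not the exact in-tree code:

        static inline unsigned long dr7_clear_local_enables(unsigned long dr7)
        {
                return dr7 & ~0x55UL;   /* clear L0-L3 (bits 0, 2, 4, 6) */
        }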
    • x86: fix page fault tracing when KVM guest support enabled · 65a7f03f
      Dave Hansen authored
      I noticed on some of my systems that page fault tracing doesn't
      work:
      
      	cd /sys/kernel/debug/tracing
      	echo 1 > events/exceptions/enable
      	cat trace;
      	# nothing shows up
      
      I eventually traced it down to CONFIG_KVM_GUEST.  At least in a
      KVM VM, enabling that option breaks page fault tracing, and
      disabling fixes it.  I tried on some old kernels and this does
      not appear to be a regression: it never worked.
      
      There are two page-fault entry functions today.  One when tracing
      is on and another when it is off.  The KVM code calls do_page_fault()
      directly instead of calling the traced version:
      
      > dotraplinkage void __kprobes
      > do_async_page_fault(struct pt_regs *regs, unsigned long
      > error_code)
      > {
      >         enum ctx_state prev_state;
      >
      >         switch (kvm_read_and_reset_pf_reason()) {
      >         default:
      >                 do_page_fault(regs, error_code);
      >                 break;
      >         case KVM_PV_REASON_PAGE_NOT_PRESENT:
      
      I'm also having problems with the page fault tracing on bare
      metal (same symptom of no trace output).  I'm unsure if it's
      related.
      
      Steven had an alternative which has zero overhead when tracing is off,
      whereas this approach includes the standard nops even when tracing is
      disabled.  I'm unconvinced that the extra complexity of his approach:
      
      	http://lkml.kernel.org/r/20140508194508.561ed220@gandalf.local.home
      
      is worth it, especially considering that the KVM code is already
      making page fault entry slower here.  This solution is dirt-simple.
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: x86@kernel.org
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Gleb Natapov <gleb@redhat.com>
      Cc: kvm@vger.kernel.org
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Acked-by: "H. Peter Anvin" <hpa@zytor.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
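      One straightforward shape such a fix could take, as a sketch only: it assumes
      the traced entry point is called trace_do_page_fault(), and the committed
      patch may well be structured differently.

        /* Route KVM's async page fault path through the traced handler when the
         * tracing variant is built, instead of always bypassing it. */
        static void kvm_real_page_fault_sketch(struct pt_regs *regs,
                                               unsigned long error_code)
        {
        #ifdef CONFIG_TRACING
                trace_do_page_fault(regs, error_code);  /* emits the page fault tracepoints */
        #else
                do_page_fault(regs, error_code);
        #endif
        }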
    • KVM: x86: get CPL from SS.DPL · ae9fedc7
      Paolo Bonzini authored
      CS.RPL is not equal to the CPL in the few instructions between
      setting CR0.PE and reloading CS.  And CS.DPL is also not equal
      to the CPL for conforming code segments.
      
      However, SS.DPL *is* always equal to the CPL, except for the weird
      case of SYSRET on AMD processors, which sets SS.DPL=SS.RPL from the
      value in the STAR MSR but forces CPL=3 (Intel instead forces
      SS.DPL=SS.RPL=CPL=3).
      
      So this patch:
      
      - modifies SVM to update the CPL from SS.DPL rather than CS.RPL;
      the above case with SYSRET is not broken further, and the way
      to fix it would be to pass the CPL to userspace and back
      
      - modifies VMX to always return the CPL from SS.DPL (except
      forcing it to 0 if we are emulating real mode via vm86 mode;
      in vm86 mode all DPLs have to be 3, but real mode does allow
      privileged instructions).  It also removes the CPL cache,
      which becomes a duplicate of the SS access rights cache.
      
      This fixes doing KVM_IOCTL_SET_SREGS exactly after setting
      CR0.PE=1 but before CS has been reloaded.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
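      A minimal sketch of the rule described above (not the exact kvm_x86_ops
      implementation; types simplified):

        struct seg_state { unsigned int dpl; };

        /* Outside real-mode emulation the CPL is whatever SS.DPL says; when real
         * mode is emulated via vm86, every DPL is 3 but the guest is effectively
         * privileged, so report CPL 0. */
        static int guest_cpl(int emulating_real_mode_via_vm86,
                             const struct seg_state *ss)
        {
                if (emulating_real_mode_via_vm86)
                        return 0;
                return ss->dpl;
        }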
    • KVM: x86: check CS.DPL against RPL during task switch · 5045b468
      Paolo Bonzini authored
      Table 7-1 of the SDM mentions a check that the code segment's
      DPL must match the selector's RPL.  KVM did not perform this
      check; add it.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
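      An illustrative form of the added consistency check (not the emulator's
      exact code):

        /* Per SDM Table 7-1: the DPL of the code segment being loaded must equal
         * the RPL of its selector, otherwise the task switch takes a fault. */
        static int cs_dpl_matches_rpl(unsigned short cs_selector, unsigned int cs_dpl)
        {
                return cs_dpl == (cs_selector & 3);
        }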
    • KVM: x86: drop set_rflags callback · fb5e336b
      Paolo Bonzini authored
      Not needed anymore now that the CPL is computed directly
      during task switch.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86: use new CS.RPL as CPL during task switch · 2356aaeb
      Paolo Bonzini authored
      During task switch, all of CS.DPL, CS.RPL, SS.DPL must match (in addition
      to all the other requirements) and will be the new CPL.  So far this
      worked by carefully setting the CS selector and flag before doing the
      task switch; setting CS.selector will already change the CPL.
      
      However, this will not work once we get the CPL from SS.DPL, because
      then you will have to set the full segment descriptor cache to change
      the CPL.  ctxt->ops->cpl(ctxt) will then return the old CPL during the
      task switch, and the check that SS.DPL == CPL will fail.
      
      Temporarily assume that the CPL comes from CS.RPL during task switch
      to a protected-mode task.  This is the same approach used in QEMU's
      emulation code, which (until version 2.0) manually tracks the CPL.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
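      A sketch of the temporary rule (illustrative only):

        /* While emulating a task switch to a protected-mode task, derive the
         * privilege level from the RPL of the incoming CS selector instead of the
         * stale cached segment state. */
        static unsigned int task_switch_cpl(unsigned short new_cs_selector)
        {
                return new_cs_selector & 3;
        }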
  3. 16 May, 2014 13 commits
  4. 12 May, 2014 1 commit
  5. 08 May, 2014 1 commit
    • kvm: x86: emulate monitor and mwait instructions as nop · 87c00572
      Gabriel L. Somlo authored
      Treat monitor and mwait instructions as nop, which is architecturally
      correct (but inefficient) behavior. We do this to prevent misbehaving
      guests (e.g. OS X <= 10.7) from crashing after they fail to check for
      monitor/mwait availability via cpuid.
      
      Since mwait-based idle loops relying on these nop-emulated instructions
      would keep the host CPU pegged at 100%, do NOT advertise their presence
      via cpuid, to prevent compliant guests from using them inadvertently.
      Signed-off-by: Gabriel L. Somlo <somlo@cmu.edu>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
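      A sketch of what "emulate as nop" means here (names simplified; the
      in-tree handlers differ per vendor):

        /* Complete MONITOR/MWAIT without doing anything: skip over the instruction
         * and resume the guest, so an OS that failed to check CPUID keeps running
         * instead of taking an exception. */
        static int handle_mwait_as_nop_sketch(struct kvm_vcpu *vcpu)
        {
                skip_emulated_instruction(vcpu);
                return 1;       /* stay in the guest-run loop */
        }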
  6. 07 May, 2014 3 commits
  7. 06 May, 2014 6 commits