1. 04 Feb, 2021 5 commits
    • KVM: X86: use vzalloc() instead of vmalloc/memset · c910662c
      Tian Tao authored
      Fixed the following warning:
      /virt/kvm/dirty_ring.c:70:20-27: WARNING: vzalloc should be used for
      ring -> dirty_gfns, instead of vmalloc/memset.
      Signed-off-by: Tian Tao <tiantao6@hisilicon.com>
      Message-Id: <1611547045-13669-1-git-send-email-tiantao6@hisilicon.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
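      A minimal before/after sketch of the pattern the Coccinelle warning flags;
      the functions below are generic illustrations, not the actual dirty_ring.c code:

        #include <linux/vmalloc.h>
        #include <linux/string.h>

        /* Before: allocate, then zero in a separate step. */
        static void *alloc_zeroed_before(size_t size)
        {
                void *buf = vmalloc(size);

                if (buf)
                        memset(buf, 0, size);
                return buf;
        }

        /* After: vzalloc() allocates and zeroes in one call. */
        static void *alloc_zeroed_after(size_t size)
        {
                return vzalloc(size);
        }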
    • KVM: x86: Take KVM's SRCU lock only if steal time update is needed · 15b51dc0
      Sean Christopherson authored
      Enter a SRCU critical section for a memslots lookup during steal time
      update if and only if a steal time update is actually needed.  Taking
      the lock can be avoided if steal time is disabled by the guest, or if
      KVM knows it has already flagged the vCPU as being preempted.
      
      Reword the comment to be more precise as to exactly why memslots will
      be queried.
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20210123000334.3123628-3-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
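      A rough sketch of the resulting shape; steal_time_enabled(),
      already_marked_preempted() and update_steal_time_record() are illustrative
      placeholders, not KVM's actual helpers:

        static void record_steal_time_if_needed(struct kvm_vcpu *vcpu)
        {
                int idx;

                /* Nothing to do: skip the SRCU critical section entirely. */
                if (!steal_time_enabled(vcpu) || already_marked_preempted(vcpu))
                        return;

                /* The memslots lookup in the update is what requires kvm->srcu. */
                idx = srcu_read_lock(&vcpu->kvm->srcu);
                update_steal_time_record(vcpu);
                srcu_read_unlock(&vcpu->kvm->srcu, idx);
        }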
    • KVM: x86: Remove obsolete disabling of page faults in kvm_arch_vcpu_put() · 19979fba
      Sean Christopherson authored
      Remove the disabling of page faults across kvm_steal_time_set_preempted()
      as KVM now accesses the steal time struct (shared with the guest) via a
      cached mapping (see commit b0431382, "x86/KVM: Make sure
      KVM_VCPU_FLUSH_TLB flag is not missed".)  The cache lookup is flagged as
      atomic, thus it would be a bug if KVM tried to resolve a new pfn, i.e.
      we want the splat that would be reached via might_fault().
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20210123000334.3123628-2-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
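      In essence the change removes the fault guards around the call, along these
      lines (a simplified sketch, not the exact diff):

        /* Before: faults explicitly disabled across the update. */
        pagefault_disable();
        kvm_steal_time_set_preempted(vcpu);
        pagefault_enable();

        /* After: the cached, atomic mapping makes the guards unnecessary,
         * and an unexpected fault should splat via might_fault(). */
        kvm_steal_time_set_preempted(vcpu);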
    • KVM: do not assume PTE is writable after follow_pfn · bd2fae8d
      Paolo Bonzini authored
      In order to convert an HVA to a PFN, KVM usually tries to use
      the get_user_pages family of functions.  This, however, is not
      possible for VM_IO vmas; in that case, KVM instead uses follow_pfn.
      
      In doing this however KVM loses the information on whether the
      PFN is writable.  That is usually not a problem because the main
      use of VM_IO vmas with KVM is for BARs in PCI device assignment,
      however it is a bug.  To fix it, use follow_pte and check pte_write
      while under the protection of the PTE lock.  The information can
      be used to fail hva_to_pfn_remapped or passed back to the
      caller via *writable.
      
      Usage of follow_pfn was introduced in commit add6a0cd ("KVM: MMU: try to fix
      up page faults before giving up", 2016-07-05); however, even older versions
      have the same issue, all the way back to commit 2e2e3738 ("KVM:
      Handle vma regions with no backing page", 2008-07-20), as they also did
      not check whether the PFN was writable.
      
      Fixes: 2e2e3738 ("KVM: Handle vma regions with no backing page")
      Reported-by: David Stevens <stevensd@google.com>
      Cc: 3pvd@google.com
      Cc: Jann Horn <jannh@google.com>
      Cc: Jason Gunthorpe <jgg@ziepe.ca>
      Cc: stable@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
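      A condensed sketch of the writability check under the PTE lock; follow_pte(),
      pte_write(), pte_pfn() and pte_unmap_unlock() are the real kernel helpers, but
      the surrounding hva_to_pfn_remapped() logic and error handling are simplified:

        r = follow_pte(vma->vm_mm, addr, &ptep, &ptl);
        if (r)
                return r;

        if (write_fault && !pte_write(*ptep)) {
                /* Fail the write fault rather than assume writability. */
                pte_unmap_unlock(ptep, ptl);
                return -EFAULT;
        }

        if (writable)
                *writable = pte_write(*ptep);   /* report actual permission */
        *pfn = pte_pfn(*ptep);
        pte_unmap_unlock(ptep, ptl);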
    • KVM: x86/mmu: Fix TDP MMU zap collapsible SPTEs · 87aa9ec9
      Ben Gardon authored
      There is a bug in the TDP MMU function that zaps SPTEs which could be
      replaced with a larger mapping: the bug prevents the function from doing
      anything. Fix this by correctly zapping the last level SPTEs.
      
      Cc: stable@vger.kernel.org
      Fixes: 14881998 ("kvm: x86/mmu: Support disabling dirty logging for the tdp MMU")
      Signed-off-by: Ben Gardon <bgardon@google.com>
      Message-Id: <20210202185734.1680553-11-bgardon@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  2. 03 Feb, 2021 2 commits
    • KVM: x86: cleanup CR3 reserved bits checks · c1c35cf7
      Paolo Bonzini authored
      If not in long mode, the low bits of CR3 are reserved but not enforced to
      be zero, so remove those checks.  If in long mode, however, the MBZ bits
      extend down to the highest physical address bit of the guest, excluding
      the encryption bit.
      
      Make the checks consistent with the above, and match them between
      nested_vmcb_checks and KVM_SET_SREGS.
      
      Cc: stable@vger.kernel.org
      Fixes: 761e4169 ("KVM: nSVM: Check that MBZ bits in CR3 and CR4 are not set on vmrun of nested guests")
      Fixes: a780a3ea ("KVM: X86: Fix reserved bits check for MOV to CR3")
      Reviewed-by: Sean Christopherson <seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
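      As a rough illustration of the long-mode check (rsvd_bits(), cpuid_maxphyaddr()
      and is_long_mode() are real KVM helpers; the exact placement and the SEV C-bit
      handling are simplified away):

        if (is_long_mode(vcpu) &&
            (cr3 & rsvd_bits(cpuid_maxphyaddr(vcpu), 63)))
                return 1;       /* reserved physical-address bits set */
        /* Outside long mode, the low CR3 bits are reserved but not enforced. */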
    • KVM: SVM: Treat SVM as unsupported when running as an SEV guest · ccd85d90
      Sean Christopherson authored
      Don't let KVM load when running as an SEV guest, regardless of what
      CPUID says.  Memory is encrypted with a key that is not accessible to
      the host (L0), thus it's impossible for L0 to emulate SVM, e.g. it'll
      see garbage when reading the VMCB.
      
      Technically, KVM could decrypt all memory that needs to be accessible to
      the L0 and use shadow paging so that L0 does not need to shadow NPT, but
      exposing such information to L0 largely defeats the purpose of running as
      an SEV guest.  This can always be revisited if someone comes up with a
      use case for running VMs inside SEV guests.
      
      Note, VMLOAD, VMRUN, etc... will also #GP on GPAs with C-bit set, i.e. KVM
      is doomed even if the SEV guest is debuggable and the hypervisor is willing
      to decrypt the VMCB.  This may or may not be fixed on CPUs that have the
      SVME_ADDR_CHK fix.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20210202212017.2486595-1-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
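      A hedged sketch of the guard; sev_active() was the 5.11-era detection helper
      (newer kernels use cc_platform_has()), and the has_svm()-style wrapper below
      is simplified:

        static bool svm_is_usable(void)
        {
                if (!cpu_has_svm(NULL))
                        return false;

                /* Running as an SEV guest: L0 cannot read our VMCB, bail out. */
                if (sev_active()) {
                        pr_info("KVM is unsupported when running as an SEV guest\n");
                        return false;
                }

                return true;
        }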
  3. 02 Feb, 2021 1 commit
    • KVM: x86: Update emulator context mode if SYSENTER xfers to 64-bit mode · 943dea8a
      Sean Christopherson authored
      Set the emulator context to PROT64 if SYSENTER transitions from 32-bit
      userspace (compat mode) to a 64-bit kernel, otherwise the RIP update at
      the end of x86_emulate_insn() will incorrectly truncate the new RIP.
      
      Note, this bug is mostly limited to running an Intel virtual CPU model on
      an AMD physical CPU, as other combinations of virtual and physical CPUs
      do not trigger full emulation.  On Intel CPUs, SYSENTER in compatibility
      mode is legal, and unconditionally transitions to 64-bit mode.  On AMD
      CPUs, SYSENTER is illegal in compatibility mode and #UDs.  If the vCPU is
      AMD, KVM injects a #UD on SYSENTER in compat mode.  If the pCPU is Intel,
      SYSENTER will execute natively and not trigger #UD->VM-Exit (ignoring
      guest TLB shenanigans).
      
      Fixes: fede8076 ("KVM: x86: handle wrap around 32-bit address space")
      Cc: stable@vger.kernel.org
      Signed-off-by: Jonny Barker <jonny@jonnybarker.com>
      [sean: wrote changelog]
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20210202165546.2390296-1-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
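      The gist of the fix, sketched (EFER_LMA and X86EMUL_MODE_PROT64 are the real
      identifiers; the surrounding em_sysenter() code is elided):

        /* SYSENTER from compat mode into a 64-bit kernel: switch the
         * emulator to 64-bit mode so the final RIP update is not truncated. */
        if (efer & EFER_LMA)
                ctxt->mode = X86EMUL_MODE_PROT64;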
  4. 01 Feb, 2021 3 commits
    • KVM: x86: Supplement __cr4_reserved_bits() with X86_FEATURE_PCID check · 4683d758
      Vitaly Kuznetsov authored
      Commit 7a873e45 ("KVM: selftests: Verify supported CR4 bits can be set
      before KVM_SET_CPUID2") reveals that KVM allows setting X86_CR4_PCIDE even
      when PCID support is missing:
      
      ==== Test Assertion Failure ====
        x86_64/set_sregs_test.c:41: rc
        pid=6956 tid=6956 - Invalid argument
           1	0x000000000040177d: test_cr4_feature_bit at set_sregs_test.c:41
           2	0x00000000004014fc: main at set_sregs_test.c:119
           3	0x00007f2d9346d041: ?? ??:0
           4	0x000000000040164d: _start at ??:?
        KVM allowed unsupported CR4 bit (0x20000)
      
      Add X86_FEATURE_PCID feature check to __cr4_reserved_bits() to make
      kvm_is_valid_cr4() fail.
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Message-Id: <20210201142843.108190-1-vkuznets@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
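      The added check, sketched in the style of the existing __cr4_reserved_bits()
      entries (the macro plumbing around it is elided):

        if (!__cpu_has(__c, X86_FEATURE_PCID))
                __reserved_bits |= X86_CR4_PCIDE;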
    • KVM/x86: assign hva with the right value to vm_munmap the pages · b66f9bab
      Zheng Zhan Liang authored
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Wanpeng Li <wanpengli@tencent.com>
      Cc: kvm@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Zheng Zhan Liang <zhengzhanliang@huorong.cn>
      Message-Id: <20210201055310.267029-1-zhengzhanliang@huorong.cn>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86: Allow guests to see MSR_IA32_TSX_CTRL even if tsx=off · 7131636e
      Paolo Bonzini authored
      Userspace that does not know about KVM_GET_MSR_FEATURE_INDEX_LIST
      will generally use the default value for MSR_IA32_ARCH_CAPABILITIES.
      When this happens and the host has tsx=on, it is possible to end up with
      virtual machines that have HLE and RTM disabled, but TSX_CTRL available.
      
      If the fleet is then switched to tsx=off, kvm_get_arch_capabilities()
      will clear the ARCH_CAP_TSX_CTRL_MSR bit and it will not be possible to
      use the tsx=off hosts as migration destinations, even though the guests
      do not have TSX enabled.
      
      To allow this migration, allow guests to write to their TSX_CTRL MSR,
      while keeping the host MSR unchanged for the entire life of the guests.
      This ensures that TSX remains disabled and also saves MSR reads and
      writes, and it's okay to do because with tsx=off we know that guests will
      not have the HLE and RTM features in their CPUID.  (If userspace sets
      bogus CPUID data, we do not expect HLE and RTM to work in guests anyway).
      
      Cc: stable@vger.kernel.org
      Fixes: cbbaa272 ("KVM: x86: fix presentation of TSX feature in ARCH_CAPABILITIES")
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  5. 28 Jan, 2021 4 commits
  6. 25 Jan, 2021 13 commits
    • KVM: x86: allow KVM_REQ_GET_NESTED_STATE_PAGES outside guest mode for VMX · 9a78e158
      Paolo Bonzini authored
      VMX also uses KVM_REQ_GET_NESTED_STATE_PAGES for the Hyper-V eVMCS,
      which may need to be loaded outside guest mode.  Therefore we cannot
      WARN in that case.
      
      However, that part of nested_get_vmcs12_pages is _not_ needed at
      vmentry time.  Split it out of KVM_REQ_GET_NESTED_STATE_PAGES handling,
      so that both vmentry and migration (and in the latter case, independent
      of is_guest_mode) do the parts that are needed.
      
      Cc: <stable@vger.kernel.org> # 5.10.x: f2c7ef3b: KVM: nSVM: cancel KVM_REQ_GET_NESTED_STATE_PAGES
      Cc: <stable@vger.kernel.org> # 5.10.x
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86: Revert "KVM: x86: Mark GPRs dirty when written" · aed89418
      Sean Christopherson authored
      Revert the dirty/available tracking of GPRs now that KVM copies the GPRs
      to the GHCB on any post-VMGEXIT VMRUN, even if a GPR is not dirty.  Per
      commit de3cd117 ("KVM: x86: Omit caching logic for always-available
      GPRs"), tracking for GPRs noticeably impacts KVM's code footprint.
      
      This reverts commit 1c04d8c9.
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20210122235049.3107620-3-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: SVM: Unconditionally sync GPRs to GHCB on VMRUN of SEV-ES guest · 25009140
      Sean Christopherson authored
      Drop the per-GPR dirty checks when synchronizing GPRs to the GHCB, the
      GPRs' dirty bits are set from time zero and never cleared, i.e. will
      always be seen as dirty.  The obvious alternative would be to clear
      the dirty bits when appropriate, but removing the dirty checks is
      desirable as it allows reverting GPR dirty+available tracking, which
      adds overhead to all flavors of x86 VMs.
      
      Note, unconditionally writing the GPRs in the GHCB is tacitly allowed
      by the GHCB spec, which allows the hypervisor (or guest) to provide
      unnecessary info; it's the guest's responsibility to consume only what
      it needs (the hypervisor is untrusted after all).
      
        The guest and hypervisor can supply additional state if desired but
        must not rely on that additional state being provided.
      
      Cc: Brijesh Singh <brijesh.singh@amd.com>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Fixes: 291bd20d ("KVM: SVM: Add initial support for a VMGEXIT VMEXIT")
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20210122235049.3107620-2-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
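      Conceptually the sync becomes unconditional, along these lines (ghcb_set_*()
      and VCPU_REGS_* are real identifiers; only a subset of registers is shown and
      the per-GPR dirty checks are simply gone):

        ghcb_set_rax(ghcb, vcpu->arch.regs[VCPU_REGS_RAX]);
        ghcb_set_rbx(ghcb, vcpu->arch.regs[VCPU_REGS_RBX]);
        ghcb_set_rcx(ghcb, vcpu->arch.regs[VCPU_REGS_RCX]);
        ghcb_set_rdx(ghcb, vcpu->arch.regs[VCPU_REGS_RDX]);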
    • KVM: nVMX: Sync unsync'd vmcs02 state to vmcs12 on migration · d51e1d3f
      Maxim Levitsky authored
      Even when we are outside the nested guest, some vmcs02 fields
      may not be in sync with vmcs12.  This is intentional, even across
      nested VM-exit, because the sync can be delayed until the nested
      hypervisor performs a VMCLEAR or a VMREAD/VMWRITE that affects those
      rarely accessed fields.
      
      However, during KVM_GET_NESTED_STATE, the vmcs12 has to be up to date to
      be able to restore it.  To fix that, call copy_vmcs02_to_vmcs12_rare()
      before the vmcs12 contents are copied to userspace.
      
      Fixes: 7952d769 ("KVM: nVMX: Sync rarely accessed guest fields only when needed")
      Reviewed-by: Sean Christopherson <seanjc@google.com>
      Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
      Message-Id: <20210114205449.8715-2-mlevitsk@redhat.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • kvm: tracing: Fix unmatched kvm_entry and kvm_exit events · d95df951
      Lorenzo Brescia authored
      On VMX, if we exit and then re-enter immediately without leaving
      the vmx_vcpu_run() function, the kvm_entry event is not logged.
      That means we will see one (or more) kvm_exit, without its (their)
      corresponding kvm_entry, as shown here:
      
       CPU-1979 [002] 89.871187: kvm_entry: vcpu 1
       CPU-1979 [002] 89.871218: kvm_exit:  reason MSR_WRITE
       CPU-1979 [002] 89.871259: kvm_exit:  reason MSR_WRITE
      
      It also seems possible for a kvm_entry event to be logged, but then
      we leave vmx_vcpu_run() right away (if vmx->emulation_required is
      true). In this case, we will have a spurious kvm_entry event in the
      trace.
      
      Fix these situations by moving trace_kvm_entry() inside vmx_vcpu_run()
      (where trace_kvm_exit() already is).
      
      A trace obtained with this patch applied looks like this:
      
       CPU-14295 [000] 8388.395387: kvm_entry: vcpu 0
       CPU-14295 [000] 8388.395392: kvm_exit:  reason MSR_WRITE
       CPU-14295 [000] 8388.395393: kvm_entry: vcpu 0
       CPU-14295 [000] 8388.395503: kvm_exit:  reason EXTERNAL_INTERRUPT
      
      Of course, not calling trace_kvm_entry() in common x86 code any
      longer means that we need to adjust the SVM side of things too.
      Signed-off-by: Lorenzo Brescia <lorenzo.brescia@edu.unito.it>
      Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
      Message-Id: <160873470698.11652.13483635328769030605.stgit@Wayrath>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: Documentation: Update description of KVM_{GET,CLEAR}_DIRTY_LOG · 01ead84c
      Zenghui Yu authored
      Update the wording, fixing a wrong parameter name and the vague
      description of how the "slot" field is used.
      Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
      Message-Id: <20201208043439.895-1-yuzenghui@huawei.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86: get smi pending status correctly · 1f7becf1
      Jay Zhou authored
      The SMI injection process has two steps:
      
          Qemu                        KVM
      Step1:
          cpu->interrupt_request &= \
              ~CPU_INTERRUPT_SMI;
          kvm_vcpu_ioctl(cpu, KVM_SMI)
      
                                      call kvm_vcpu_ioctl_smi() and
                                      kvm_make_request(KVM_REQ_SMI, vcpu);
      
      Step2:
          kvm_vcpu_ioctl(cpu, KVM_RUN, 0)
      
                                      call process_smi() if
                                      kvm_check_request(KVM_REQ_SMI, vcpu) is
                                      true, mark vcpu->arch.smi_pending = true;
      
      vcpu->arch.smi_pending is set to true in Step 2. Unfortunately, if the
      vcpu is paused between Step 1 and Step 2, kvm_run->immediate_exit will be
      set and the vcpu has to exit to Qemu immediately during Step 2, before
      vcpu->arch.smi_pending is marked true.
      During VM migration, Qemu gets the SMI pending status from KVM using the
      KVM_GET_VCPU_EVENTS ioctl during the downtime, so the SMI pending status
      is lost.
      Signed-off-by: Jay Zhou <jianjay.zhou@huawei.com>
      Signed-off-by: Shengen Zhuang <zhuangshengen@huawei.com>
      Message-Id: <20210118084720.1585-1-jianjay.zhou@huawei.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
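      The fix amounts to folding a still-pending KVM_REQ_SMI into the reported state
      when userspace queries vCPU events; a heavily condensed sketch:

        /* Consume a queued-but-unprocessed SMI so smi_pending is accurate
         * before it is copied into the kvm_vcpu_events structure. */
        if (kvm_check_request(KVM_REQ_SMI, vcpu))
                process_smi(vcpu);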
    • KVM: x86/pmu: Fix HW_REF_CPU_CYCLES event pseudo-encoding in intel_arch_events[] · 98dd2f10
      Like Xu authored
      The HW_REF_CPU_CYCLES event on the fixed counter 2 is pseudo-encoded as
      0x0300 in the intel_perfmon_event_map[]. Correct its usage.
      
      Fixes: 62079d8a ("KVM: PMU: add proper support for fixed counter 2")
      Signed-off-by: Like Xu <like.xu@linux.intel.com>
      Message-Id: <20201230081916.63417-1-like.xu@linux.intel.com>
      Reviewed-by: Sean Christopherson <seanjc@google.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86/pmu: Fix UBSAN shift-out-of-bounds warning in intel_pmu_refresh() · e61ab2a3
      Like Xu authored
      The vPMU will not work properly when (1) the guest bit_width(s) of the
      [gp|fixed] counters are greater than the host ones, or (2) the guest's
      requested architectural events exceed the range supported by the host.
      In those cases, set up a smaller left shift value and refresh the guest
      cpuid entry, thus fixing the following UBSAN shift-out-of-bounds warning:
      
      shift exponent 197 is too large for 64-bit type 'long long unsigned int'
      
      Call Trace:
       __dump_stack lib/dump_stack.c:79 [inline]
       dump_stack+0x107/0x163 lib/dump_stack.c:120
       ubsan_epilogue+0xb/0x5a lib/ubsan.c:148
       __ubsan_handle_shift_out_of_bounds.cold+0xb1/0x181 lib/ubsan.c:395
       intel_pmu_refresh.cold+0x75/0x99 arch/x86/kvm/vmx/pmu_intel.c:348
       kvm_vcpu_after_set_cpuid+0x65a/0xf80 arch/x86/kvm/cpuid.c:177
       kvm_vcpu_ioctl_set_cpuid2+0x160/0x440 arch/x86/kvm/cpuid.c:308
       kvm_arch_vcpu_ioctl+0x11b6/0x2d70 arch/x86/kvm/x86.c:4709
       kvm_vcpu_ioctl+0x7b9/0xdb0 arch/x86/kvm/../../../virt/kvm/kvm_main.c:3386
       vfs_ioctl fs/ioctl.c:48 [inline]
       __do_sys_ioctl fs/ioctl.c:753 [inline]
       __se_sys_ioctl fs/ioctl.c:739 [inline]
       __x64_sys_ioctl+0x193/0x200 fs/ioctl.c:739
       do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
       entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
      Reported-by: syzbot+ae488dc136a4cc6ba32b@syzkaller.appspotmail.com
      Signed-off-by: Like Xu <like.xu@linux.intel.com>
      Message-Id: <20210118025800.34620-1-like.xu@linux.intel.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
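      In spirit, the fix clamps guest-supplied values before they are used as shift
      counts; an illustrative sketch in which the variable names are placeholders,
      not the real CPUID.0xA fields:

        /* Never shift by more than the host supports. */
        guest_gp_bit_width = min_t(int, guest_gp_bit_width, host_gp_bit_width);
        guest_event_count  = min_t(int, guest_event_count,  host_event_count);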
    • KVM: x86: Add more protection against undefined behavior in rsvd_bits() · eb79cd00
      Sean Christopherson authored
      Add compile-time asserts in rsvd_bits() to guard against KVM passing in
      garbage hardcoded values, and cap the upper bound at '63' for dynamic
      values to prevent generating a mask that would overflow a u64.
      Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20210113204515.3473079-1-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
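      A hedged sketch of the hardened helper (the in-tree version may differ in
      detail): compile-time asserts catch bad constant ranges, and dynamic upper
      bounds are capped at 63 so the mask cannot overflow a u64.

        static inline u64 rsvd_bits(int s, int e)
        {
                BUILD_BUG_ON(__builtin_constant_p(e) &&
                             __builtin_constant_p(s) && e < s);

                if (__builtin_constant_p(e))
                        BUILD_BUG_ON(e > 63);
                else
                        e &= 63;                /* cap dynamic values */

                if (e < s)
                        return 0;

                return ((2ULL << (e - s)) - 1) << s;    /* bits s..e set */
        }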
    • KVM: Documentation: Fix spec for KVM_CAP_ENABLE_CAP_VM · a10f373a
      Quentin Perret authored
      The documentation classifies KVM_ENABLE_CAP with KVM_CAP_ENABLE_CAP_VM
      as a vcpu ioctl, which is incorrect. Fix it by specifying it as a VM
      ioctl.
      
      Fixes: e5d83c74 ("kvm: make KVM_CAP_ENABLE_CAP_VM architecture agnostic")
      Signed-off-by: Quentin Perret <qperret@google.com>
      Message-Id: <20210108165349.747359-1-qperret@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • Merge tag 'kvmarm-fixes-5.11-2' of... · 615099b0
      Paolo Bonzini authored
      Merge tag 'kvmarm-fixes-5.11-2' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD
      
      KVM/arm64 fixes for 5.11, take #2
      
      - Don't allow tagged pointers to point to memslots
      - Filter out ARMv8.1+ PMU events on v8.0 hardware
      - Hide PMU registers from userspace when no PMU is configured
      - More PMU cleanups
      - Don't try to handle broken PSCI firmware
      - More sys_reg() to reg_to_encoding() conversions
    • KVM: arm64: Don't clobber x4 in __do_hyp_init · e500b805
      Andrew Scull authored
      arm_smccc_1_1_hvc() only adds write constraints for x0-3 in the inline
      assembly for the HVC instruction so make sure those are the only
      registers that change when __do_hyp_init is called.
      Tested-by: David Brazdil <dbrazdil@google.com>
      Signed-off-by: Andrew Scull <ascull@google.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20210125145415.122439-3-ascull@google.com
  7. 21 Jan, 2021 3 commits
  8. 14 Jan, 2021 4 commits
  9. 10 Jan, 2021 5 commits
    • Linux 5.11-rc3 · 7c53f6b6
      Linus Torvalds authored
    • Merge tag 'kbuild-fixes-v5.11' of... · 20210a98
      Linus Torvalds authored
      Merge tag 'kbuild-fixes-v5.11' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild
      
      Pull Kbuild fixes from Masahiro Yamada:
      
       - Search for <ncurses.h> in the default header path of HOSTCC
      
       - Tweak the option order to be kind to old BSD awk
      
       - Remove 'kvmconfig' and 'xenconfig' shorthands
      
       - Fix documentation
      
      * tag 'kbuild-fixes-v5.11' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild:
        Documentation: kbuild: Fix section reference
        kconfig: remove 'kvmconfig' and 'xenconfig' shorthands
        lib/raid6: Let $(UNROLL) rules work with macOS userland
        kconfig: Support building mconf with vendor sysroot ncurses
        kconfig: config script: add a little user help
        MAINTAINERS: adjust GCC PLUGINS after gcc-plugin.sh removal
    • Merge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi · 688daed2
      Linus Torvalds authored
      Pull SCSI fixes from James Bottomley:
       "This is two driver fixes (megaraid_sas and hisi_sas).
      
        The megaraid one is a revert of a previous revert of a cpu hotplug fix
        which exposed a bug in the block layer which has been fixed in this
        merge window.
      
        The hisi_sas performance enhancement comes from switching to interrupt
        managed completion queues, which depended on the addition of
        devm_platform_get_irqs_affinity() which is now upstream via the irq
        tree in the last merge window"
      
      * tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi:
        scsi: hisi_sas: Expose HW queues for v2 hw
        Revert "Revert "scsi: megaraid_sas: Added support for shared host tagset for cpuhotplug""
    • Merge tag 'block-5.11-2021-01-10' of git://git.kernel.dk/linux-block · ed41fd07
      Linus Torvalds authored
      Pull block fixes from Jens Axboe:
      
       - Missing CRC32 selections (Arnd)
      
       - Fix for a merge window regression with bdev inode init (Christoph)
      
       - bcache fixes
      
       - rnbd fixes
      
       - NVMe pull request from Christoph:
          - fix a race in the nvme-tcp send code (Sagi Grimberg)
          - fix a list corruption in an nvme-rdma error path (Israel Rukshin)
          - avoid a possible double fetch in nvme-pci (Lalithambika Krishnakumar)
          - add the subsystem NQN quirk for a Samsung drive (Gopal Tiwari)
          - fix two compiler warnings in nvme-fcloop (James Smart)
          - don't call sleeping functions from irq context in nvme-fc (James Smart)
          - remove an unused argument (Max Gurtovoy)
          - remove unused exports (Minwoo Im)
      
       - Use-after-free fix for partition iteration (Ming)
      
       - Missing blk-mq debugfs flag annotation (John)
      
       - Bdev freeze regression fix (Satya)
      
       - blk-iocost NULL pointer deref fix (Tejun)
      
      * tag 'block-5.11-2021-01-10' of git://git.kernel.dk/linux-block: (26 commits)
        bcache: set bcache device into read-only mode for BCH_FEATURE_INCOMPAT_OBSO_LARGE_BUCKET
        bcache: introduce BCH_FEATURE_INCOMPAT_LOG_LARGE_BUCKET_SIZE for large bucket
        bcache: check unsupported feature sets for bcache register
        bcache: fix typo from SUUP to SUPP in features.h
        bcache: set pdev_set_uuid before scond loop iteration
        blk-mq-debugfs: Add decode for BLK_MQ_F_TAG_HCTX_SHARED
        block/rnbd-clt: avoid module unload race with close confirmation
        block/rnbd: Adding name to the Contributors List
        block/rnbd-clt: Fix sg table use after free
        block/rnbd-srv: Fix use after free in rnbd_srv_sess_dev_force_close
        block/rnbd: Select SG_POOL for RNBD_CLIENT
        block: pre-initialize struct block_device in bdev_alloc_inode
        fs: Fix freeze_bdev()/thaw_bdev() accounting of bd_fsfreeze_sb
        nvme: remove the unused status argument from nvme_trace_bio_complete
        nvmet-rdma: Fix list_del corruption on queue establishment failure
        nvme: unexport functions with no external caller
        nvme: avoid possible double fetch in handling CQE
        nvme-tcp: Fix possible race of io_work and direct send
        nvme-pci: mark Samsung PM1725a as IGNORE_DEV_SUBNQN
        nvme-fcloop: Fix sscanf type and list_first_entry_or_null warnings
        ...
    • Merge tag 'io_uring-5.11-2021-01-10' of git://git.kernel.dk/linux-block · d430adfe
      Linus Torvalds authored
      Pull io_uring fixes from Jens Axboe:
       "A bit larger than I had hoped at this point, but it's all changes that
        will be directed towards stable anyway. In detail:
      
         - Fix a merge window regression on error return (Matthew)
      
         - Remove useless variable declaration/assignment (Ye Bin)
      
         - IOPOLL fixes (Pavel)
      
         - Exit and cancelation fixes (Pavel)
      
         - fasync lockdep complaint fix (Pavel)
      
         - Ensure SQPOLL is synchronized with creator life time (Pavel)"
      
      * tag 'io_uring-5.11-2021-01-10' of git://git.kernel.dk/linux-block:
        io_uring: stop SQPOLL submit on creator's death
        io_uring: add warn_once for io_uring_flush()
        io_uring: inline io_uring_attempt_task_drop()
        io_uring: io_rw_reissue lockdep annotations
        io_uring: synchronise ev_posted() with waitqueues
        io_uring: dont kill fasync under completion_lock
        io_uring: trigger eventfd for IOPOLL
        io_uring: Fix return value from alloc_fixed_file_ref_node
        io_uring: Delete useless variable ‘id’ in io_prep_async_work
        io_uring: cancel more aggressively in exit_work
        io_uring: drop file refs after task cancel
        io_uring: patch up IOPOLL overflow_flush sync
        io_uring: synchronise IOPOLL on task_submit fail