  24 Aug, 2022 2 commits
    •
      mm: add NR_SECONDARY_PAGETABLE to count secondary page table uses. · ebc97a52
      Yosry Ahmed authored
      We keep track of several kernel memory stats (total kernel memory, page
      tables, stack, vmalloc, etc) on multiple levels (global, per-node,
      per-memcg, etc). These stats give users insight into how much memory
      is used by the kernel, and for what purposes.
      
      Currently, memory used by the KVM MMU is not accounted for in any of
      those kernel memory stats. This patch series accounts the memory pages
      used by KVM for page tables in those stats, under a new
      NR_SECONDARY_PAGETABLE stat. This stat can later be extended to account
      for other types of secondary page tables (e.g. IOMMU page tables).
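      
      A rough sketch of the mm side of the change (abridged; the actual
      patch also wires the stat into vmstat, meminfo, and memcg output):
      
          /* include/linux/mmzone.h */
          enum node_stat_item {
                  ...
                  NR_PAGETABLE,           /* used for pagetables */
                  NR_SECONDARY_PAGETABLE, /* secondary pagetables,
                                             e.g. KVM pagetables */
                  ...
          };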
      
      KVM has a decent number of large allocations that aren't for page
      tables, but for most of them, the number/size of those allocations
      scales linearly with either the number of vCPUs or the amount of memory
      assigned to the VM. KVM's secondary page table allocations do not scale
      linearly, especially when nested virtualization is in use.
      
      From a KVM perspective, NR_SECONDARY_PAGETABLE will scale with KVM's
      per-VM pages_{4k,2m,1g} stats unless the guest is doing something
      bizarre (e.g. accessing only 4KiB chunks of 2MiB pages, so that KVM is
      forced to allocate a large number of page tables even though the guest
      isn't accessing that much memory). However, to make that connection a
      user would either need to understand how KVM works, or know (or be
      told) to go look at KVM's per-VM stats in order to decipher the
      system-wide numbers.
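      
      To put rough numbers on that worst case (back-of-envelope, assuming
      x86-64 4-level paging where each 4KiB last-level table maps 2MiB): a
      guest that touches one 4KiB page in every 2MiB region of a 1TiB range
      forces 1TiB / 2MiB = 524288 last-level tables, i.e. ~2GiB of page
      table memory to map only ~2GiB of actually-touched guest data. In the
      well-behaved case those same tables would map the full 1TiB, an
      overhead of ~0.2%.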
      
      Furthermore, having NR_PAGETABLE side-by-side with NR_SECONDARY_PAGETABLE
      is informative. For example, when backing a VM with THP vs. HugeTLB,
      NR_SECONDARY_PAGETABLE is roughly the same, but NR_PAGETABLE is an order
      of magnitude higher with THP. So having this stat will at the very least
      prove to be useful for understanding tradeoffs between VM backing types,
      and likely even steer folks towards potential optimizations.
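      
      For example, once this lands the two counters can be read side by
      side from /proc/meminfo (values illustrative; SecPageTables is the
      field name this series introduces there):
      
          $ grep -E '^(Sec)?PageTables' /proc/meminfo
          PageTables:       1052672 kB
          SecPageTables:      16384 kB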
      
      The original discussion with more details about the rationale:
      https://lore.kernel.org/all/87ilqoi77b.wl-maz@kernel.org
      
      This stat will be used by subsequent patches to count KVM MMU
      memory usage.
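      
      The KVM-side accounting added later in the series boils down to a
      helper along these lines (sketch; mod_lruvec_page_state() performs
      the global, per-node, and per-memcg updates in one call):
      
          /* include/linux/kvm_host.h (from the follow-up patches) */
          static inline void kvm_account_pgtable_pages(void *virt, int nr)
          {
                  mod_lruvec_page_state(virt_to_page(virt),
                                        NR_SECONDARY_PAGETABLE, nr);
          }
      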
      Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
      Acked-by: Shakeel Butt <shakeelb@google.com>
      Acked-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20220823004639.2387269-2-yosryahmed@google.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>
    •
      KVM: x86/mmu: fix memory leak in kvm_mmu_vendor_module_init() · d7c9bfb9
      Miaohe Lin authored
      When register_shrinker() fails, KVM doesn't release the percpu counter
      kvm_total_used_mmu_pages, leading to a memory leak. Fix this issue by
      calling percpu_counter_destroy() when register_shrinker() fails.
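      
      The fix is the usual error-unwind pattern, roughly (abridged sketch
      of the relevant tail of kvm_mmu_vendor_module_init()):
      
          ret = register_shrinker(&mmu_shrinker, "x86-mmu");
          if (ret)
                  goto out_shrinker;
      
          return 0;
      
      out_shrinker:
          percpu_counter_destroy(&kvm_total_used_mmu_pages);
      out:
          mmu_destroy_caches();
          return ret;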
      
      Fixes: ab271bd4 ("x86: kvm: propagate register_shrinker return code")
      Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
      Link: https://lore.kernel.org/r/20220823063237.47299-1-linmiaohe@huawei.com
      [sean: tweak shortlog and changelog]
      Signed-off-by: Sean Christopherson <seanjc@google.com>