    kvm: x86: reduce collisions in mmu_page_hash · 114df303
    David Matlack authored
    When using two-dimensional paging, the mmu_page_hash (which provides
    lookups for existing kvm_mmu_page structs) becomes imbalanced, with
    too many collisions in buckets 0 and 512. This has been seen to cause
    mmu_lock to be held for multiple milliseconds in kvm_mmu_get_page on
    VMs with a large amount of RAM mapped with 4K pages.
    
    The current hash function uses the lower 10 bits of gfn to index into
    mmu_page_hash. When doing shadow paging, gfn is the address of the
    guest page table being shadowed. These tables are 4K-aligned, which
    makes the low bits of gfn a good hash. However, with two-dimensional
    paging, no guest page tables are being shadowed, so gfn is the base
    address that is mapped by the table. Thus page tables (level=1) have
    a 2MB aligned gfn, page directories (level=2) have a 1GB aligned gfn,
    etc. This means hashes will only differ in their 10th bit.
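
    For illustration (a standalone snippet, not kernel code): with 4K guest
    pages, a direct level=1 kvm_mmu_page maps 2MB, so its gfn is 512-aligned
    and the old "low 10 bits" index can only ever land in bucket 0 or 512:

        #include <stdio.h>
        #include <stdint.h>

        int main(void)
        {
                uint64_t gfn;

                /* Walk a few 2MB-aligned gfns (512 4K pages apart) and show
                 * which bucket the old hash (low 10 bits of gfn) picks. */
                for (gfn = 0; gfn < 8 * 512; gfn += 512)
                        printf("gfn=%#llx -> bucket %llu\n",
                               (unsigned long long)gfn,
                               (unsigned long long)(gfn & ((1 << 10) - 1)));
                return 0;
        }

    Every such gfn hashes to bucket 0 or bucket 512, alternating with bit 9.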
    
    hash_64() provides a better hash. For example, on a VM with ~200G of RAM
    (99458 direct=1 kvm_mmu_page structs):
    
    hash            max_mmu_page_hash_collisions
    --------------------------------------------
    low 10 bits     49847
    hash_64         105
    perfect         97
    
    While we're changing the hash, increase the table size by 4x to better
    support large VMs (this further reduces the number of collisions in the
    200G VM to 29).
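
    A minimal sketch of the resulting index computation (modeled on the
    kvm_page_table_hashfn()/KVM_MMU_HASH_SHIFT names in kvm_host.h; treat
    the exact names and values as assumptions, not a verbatim diff): the
    hash shift grows from 10 to 12 bits (4x more buckets) and the index
    comes from hash_64() rather than the low bits of gfn:

        #include <linux/hash.h>
        #include <linux/kvm_types.h>    /* gfn_t */

        /* Sketch: shift assumed to grow from 10 to 12 (1024 -> 4096 buckets). */
        #define KVM_MMU_HASH_SHIFT 12
        #define KVM_NUM_MMU_PAGES (1 << KVM_MMU_HASH_SHIFT)

        static unsigned int kvm_page_table_hashfn(gfn_t gfn)
        {
                /* old: return gfn & ((1 << KVM_MMU_HASH_SHIFT) - 1); */
                return hash_64(gfn, KVM_MMU_HASH_SHIFT);
        }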
    
    Note that hash_64() does not provide a good distribution prior to commit
    ef703f49 ("Eliminate bad hash multipliers from hash_32() and
    hash_64()").
    Signed-off-by: David Matlack <dmatlack@google.com>
    Change-Id: I5aa6b13c834722813c6cca46b8b1ed6f53368ade
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>